2016-2017 Survey Results

Dear Neighbor,

As you know, in the fall/winter of last year I used a portion of the remaining funds in my campaign account to finance a Community Satisfaction Survey in both online and paper form. The online survey netted 546 responses, exceeding my goal by several hundred. To ensure that the survey reached the homes of those who might not have access to a computer, I also mailed over 400 surveys to random households across the city, with a goal of receiving 138 responses back. The good news is that I received 143 responses; the bad news is that the respondent demographics are not consistent with the city's demographics. In short, while the paper version yielded feedback from an important subset of the population, it can't be considered a statistically representative sample, though it is still useful data.

I’m providing reports for both versions so that you can compare them yourselves. A “cliff-notes” summary will show that the differences between the online and paper versions are marginal, with very few anomalies. I’m personally inclined to focus largely on the online results for two reasons: 1) the demographic data is more evenly distributed and consistent with US Census data, and 2) 546 responses is a solid sample.

If I/we do this again, I’ve learned a lot about how the questions should be presented to the user…and most importantly, what not to do. Responses are scored 1=Poor; 2=Fair; 3=Good; 4=Excellent; and 5=N/A.

I should not have included an “N/A” (no opinion) choice as option 5. Even though only a few responded in this manner, when the scores are summarized, every “5” response gets averaged into the overall score, skewing the results slightly. To evaluate the impact, my obsessive compulsive tendency took over, and I took the time to manually extract those “no opinion” responses and compare the recalculated scores to the automated report. The good news is that the overwhelming majority of scores are not significantly affected by my error, with the exception of two questions. For the rest, the difference is no greater than +/- 0.3.
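To illustrate the skew, here is a minimal sketch using made-up response data (not the actual survey results): averaging the 1–5 codes directly counts every “N/A” as if it were a top rating, while dropping the N/A answers first gives the true score.

```python
# Minimal sketch (hypothetical data, not the actual survey responses) showing
# how leaving "5 = N/A" answers in the average inflates a question's score.

responses = [1, 2, 1, 2, 3, 5, 5, 5]  # 1-4 are real ratings; 5 means "no opinion"

# Naive average: N/A answers are treated as if they were the highest rating.
naive_score = sum(responses) / len(responses)

# Corrected average: drop the N/A answers before averaging.
rated_only = [r for r in responses if r != 5]
corrected_score = sum(rated_only) / len(rated_only)

print(f"Naive score (N/A counted as 5): {naive_score:.2f}")      # 3.00
print(f"Corrected score (N/A removed):  {corrected_score:.2f}")  # 1.80
```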

The two affected questions (online survey only) are:

Public bus transportation – the actual score is 1.75, not the 3.04 shown in the automated report

Values Public Input – the actual score is 1.87, not the 2.10 shown in the automated report

I hope you find these reports a helpful and insightful window into the opinions and expectations of our shared community. This is just the beginning, and I look forward to hearing your feedback as we begin our work on the update to the Comprehensive Plan this month.

Online Survey Results:

https://tshortjr.typeform.com/report/O9vKmm/EjHZ

Paper Survey Results:

https://tshortjr.typeform.com/report/fkV9Hv/kwD4

I’m happy to answer any questions that you might have.