Part IV · Methods and Research Design

Chapter 23. Survey and Interview Design

How to design surveys and interviews that produce useful, reliable, and ethical community data—from consent protocols to question design, demographic framing, story-eliciting techniques, and the choice between digital and paper instruments.




Chapter Overview

Surveys and interviews are fundamental tools in Community Mapping research. Done well, they capture information that administrative data cannot reach: lived experience, barriers, priorities, stories, and the meaning residents assign to place. Done poorly, they produce misleading data, waste people's time, and reproduce harm. This chapter teaches the craft of asking useful questions with informed consent, designing instruments that respect respondents, and choosing formats that serve both researchers and communities. The focus is practical: what works, what fails, and why.


Learning Outcomes

By the end of this chapter, you will be able to:

  1. Design survey and interview questions that are clear, unbiased, and grounded in the research question
  2. Recognize and avoid leading questions, double-barreled questions, and false dichotomies
  3. Write informed consent language that meets ethical and legal standards
  4. Apply best practices for demographic questions that respect identity complexity
  5. Distinguish between asset-focused, needs-focused, place-based, and story-based question types
  6. Evaluate trade-offs between digital and paper survey instruments
  7. Analyze survey data while respecting qualitative nuance and acknowledging limitations

Key Terms

  • Leading Question: A question that suggests or implies a "correct" answer, biasing responses.
  • Open-Ended Question: A question that allows respondents to answer in their own words, producing qualitative data.
  • Closed-Ended Question: A question with predefined response options (multiple choice, scales, yes/no).
  • Informed Consent: The process of ensuring participants understand what they are agreeing to, including purpose, risks, rights, and data use.
  • Double-Barreled Question: A question that asks about two things at once, making it impossible to know which the respondent is answering.
  • Story-Eliciting Question: A question designed to invite narrative responses rather than simple ratings or counts.

23.1 Asking Useful Questions

The quality of your data depends on the quality of your questions. A poorly designed survey produces unreliable answers, frustrated respondents, and wasted time. A well-designed survey produces usable data, engaged respondents, and actionable insight.

What makes a question useful? Clarity. The respondent must understand what you are asking. Avoid jargon, ambiguous terms, and complex sentence structures. If a question could mean two different things, rewrite it. "Do you feel safe in your neighborhood?" sounds simple but raises immediate problems: Safe from what? Crime? Traffic? Harassment? Natural hazards? At what time of day? In what specific locations? A clearer version might be: "In the past month, have you avoided walking in your neighborhood after dark because of safety concerns?"

Relevance. Every question must serve the research purpose. If you cannot explain why you are asking a question and how the answer will be used, delete it. Survey fatigue is real. Long surveys with irrelevant questions train respondents to stop taking you seriously.

Specificity. Vague questions produce vague answers. "How often do you visit the library?" invites confusion. Does "often" mean weekly? Monthly? Compared to what? A better version: "In the past three months, how many times have you visited the public library?" with response options: 0 times / 1-2 times / 3-5 times / 6-10 times / More than 10 times.

One thing at a time. Each question should ask about one concept, one behavior, one opinion. Double-barreled questions — "Do you find the community center accessible and welcoming?" — create uninterpretable data. A respondent might find it physically accessible but not culturally welcoming. Split it: "Do you find the community center physically accessible?" and "Do you find the community center welcoming?"

Useful questions also respect the respondent's knowledge. Do not ask people to estimate things they cannot reasonably know. "What percentage of your neighbors are immigrants?" is unanswerable for most people. "Can you name a neighbor who is an immigrant?" is concrete and within their lived experience.

Finally, useful questions match the analysis plan. If you plan to compare responses across demographic groups, you need demographic questions. If you plan to map responses spatially, you need location data (address, intersection, neighborhood name, or GPS coordinates). If you plan to quantify, you need closed-ended questions. If you plan to understand narrative and meaning, you need open-ended questions. Design backward from the research question: What do I need to know? What question will get me there?


23.2 Avoiding Leading Questions

A leading question suggests the "correct" answer, biasing responses toward what the researcher expects or desires. Leading questions produce invalid data and undermine trust.

Some leading questions are blatant: "Don't you agree that more parks would improve the neighborhood?" The phrasing pressures agreement. A neutral version: "Do you think more parks would improve the neighborhood, make no difference, or not be a priority?"

Other leading questions are subtle. They hide bias in word choice, framing, or assumed context. Consider: "How much has the new transit line improved your commute?" This assumes improvement. A respondent whose commute worsened or stayed the same has no honest way to answer. A neutral version: "Since the new transit line opened, has your commute improved, stayed the same, or gotten worse?"

Loaded words also create bias. "How satisfied are you with the city's efforts to address homelessness?" carries an assumption that the city is making efforts. A respondent who sees no effort cannot answer honestly. Rephrase: "How would you rate the city's response to homelessness?" with options from "Very effective" to "Not effective" plus "I don't know" and "The city has not responded."

Question order can also lead. If you ask, "Have you experienced crime in your neighborhood?" immediately before "Do you feel safe at night?", the first question primes respondents to think about crime, artificially inflating fear. Reorder the questions or separate them with unrelated items.

Avoiding leading questions requires self-awareness. Researchers bring assumptions, hopes, and hypotheses. The survey must not encode those biases into the instrument. Pilot testing helps. If most respondents give the same answer, and that answer aligns with what you hoped to find, check whether the question led them there.

An important exception: in advocacy surveys, where the purpose is mobilization rather than neutral inquiry, some degree of framing is expected. A community group surveying residents about a proposed highway might ask, "Do you support or oppose the highway project that would displace 200 families?" The framing (displacement) is deliberate and transparent. The goal is not unbiased research; it is organizing. This is ethically acceptable if the survey is clearly presented as advocacy, not neutral research. The line between persuasion and manipulation is thin. Stay on the right side.


23.3 Consent Language

Informed consent is not optional. It is an ethical and often legal requirement. Before someone answers your questions, they must understand what they are agreeing to and have a genuine choice to decline.

Informed consent includes several core elements:

  • Purpose: why you are conducting the survey and what you will do with the data
  • Voluntary participation: the respondent can decline or stop at any time without penalty
  • Risks: any potential harm, including emotional discomfort from sensitive questions or risks from data exposure
  • Confidentiality and data use: who will have access to responses, how data will be stored, and whether responses are anonymous or identifiable
  • Benefits: what the respondent or community might gain
  • Contact information: who to reach with questions or concerns
  • Withdrawal rights: how to withdraw consent after participation

Here is an example of functional consent language for a community asset and needs survey:

Survey Consent Form
Purpose: This survey is part of a community mapping project led by [Organization Name]. We are asking residents about local services, assets, and needs to inform a community action plan.
Participation: Your participation is completely voluntary. You can skip any question or stop at any time.
Confidentiality: Your answers will be kept confidential. We will not share your name or address. Survey results will be reported in aggregate (combined with other responses) and may be shared with community organizations, local government, and the public.
Risks: Some questions ask about challenges you or your household may face. You do not have to answer any question that makes you uncomfortable.
Benefits: This survey helps the community understand what is working and what is needed. Your input may influence future programs, services, and funding.
Questions: If you have concerns or questions, contact [Name] at [Email/Phone].
Consent: By completing this survey, you indicate that you understand the above and agree to participate.

For in-person or phone interviews, consent must be spoken and documented. The interviewer reads the consent statement aloud, asks if the participant has questions, and asks for verbal consent. Many researchers audio-record the consent exchange to document it.

For vulnerable populations — children, people with cognitive disabilities, people experiencing homelessness or incarceration, undocumented immigrants — consent requires extra care. Institutional Review Boards (IRBs) and research ethics boards (REBs) provide guidance. In some cases, consent from a guardian or support person is required. In all cases, consent language must be accessible, translated if needed, and free of coercion.

A common mistake is burying consent in fine print or assuming that starting a survey implies consent. Consent must be explicit, understandable, and front-and-center. If someone does not understand what they are agreeing to, you do not have informed consent.


23.4 Demographic Questions

Demographic data (age, gender, race, income, education, household composition, language, disability status) allows researchers to analyze whether experiences, access, and outcomes differ across groups. But demographic questions are among the most sensitive and poorly designed parts of many surveys.

Gender. Do not force binary categories. Best practice: an open-ended text box ("What is your gender?") or a multiple-checkbox format with options like: Woman / Man / Non-binary / Two-Spirit / Prefer to self-describe (with text box) / Prefer not to answer. Never use "Male/Female" as the only options without an alternative. Some researchers also distinguish between gender identity and sex assigned at birth, depending on the research question.

Race and ethnicity. Recognize that race is socially constructed and categories vary by national context. In Canada, common practice follows Statistics Canada's approach: multiple checkboxes for Indigenous identity (First Nations, Métis, Inuit), visible minority categories (South Asian, Chinese, Black, Filipino, Arab, Latin American, Southeast Asian, West Asian, Korean, Japanese, Other), and "Not applicable" or "Prefer not to answer." Allow multiple selections. Never force monoracial categorization on people with complex identities. In the United States, Census categories differ; follow context-appropriate frameworks.

Income. Do not ask for exact income. Many people do not know or will not share it. Use ranges: Under $20,000 / $20,000-$39,999 / $40,000-$59,999 / $60,000-$79,999 / $80,000-$99,999 / $100,000 and above / Prefer not to answer. Adjust ranges to local context. In a high-income city, you may need more granular high-end brackets. In a low-income rural area, you may need more granular low-end brackets.

Household composition. Ask about the number of people in the household and, if relevant, the number of children, seniors, or people with disabilities. Avoid assumptions about family structure. "How many people live in your household?" is better than "How many people live in your family?"

Language. Ask what languages are spoken at home, not "What is your native language?" Language use is complex. Many people are multilingual. Some speak one language at home and another in public. Frame the question to match the research need: "What languages do you speak fluently?" or "What language do you prefer for written information?"

Disability. Follow inclusive design principles. Rather than asking "Do you have a disability?" (which some people reject as identity), ask about functional needs: "Do you have difficulty seeing, even with glasses?" / "Do you have difficulty hearing, even with hearing aids?" / "Do you have difficulty walking or climbing stairs?" / "Do you have difficulty remembering or concentrating?" This approach, used in the Washington Group Short Set on Functioning, is more respectful and produces better data.

Demographic questions should come at the end of the survey, not the beginning. Starting with demographics can alienate respondents or prime them to answer through a demographic lens. The exception: if demographic eligibility determines survey access (e.g., "This survey is for residents aged 65+"), screen at the start but keep it brief.

Always include "Prefer not to answer" options. Never force disclosure. And always explain why you are asking. A brief note before demographic questions — "These questions help us understand whether different groups have different experiences. You can skip any question." — builds trust.

Canada's Gender-Based Analysis Plus (GBA+) framework, a federal analytical tool, offers practical guidance on inclusive demographic design and is worth consulting when drafting these questions.


23.5 Asset Questions

Asset questions identify strengths, resources, capacities, and positive features of the community. They are essential in Community Mapping because they counterbalance deficit-focused narratives and reveal leverage points for development.

Asset questions can be closed-ended or open-ended. Closed-ended examples:

  • "Which of the following community resources do you use regularly? (Check all that apply)"
    ☐ Public library
    ☐ Community center
    ☐ Parks and playgrounds
    ☐ Farmers market
    ☐ Faith communities
    ☐ Sports leagues or recreation programs
    ☐ Volunteer groups
    ☐ Other: __________

  • "How would you rate the quality of parks in your neighborhood?"
    ○ Excellent ○ Good ○ Fair ○ Poor ○ Don't know / Not applicable

Open-ended asset questions invite narrative:

  • "What is one thing you really like about living in this neighborhood?"
  • "Tell us about a person, place, or organization that makes your community stronger."
  • "What local businesses, services, or groups do you rely on?"

The best asset surveys combine both. Closed-ended questions provide quantifiable data (e.g., "60% of respondents use the public library"). Open-ended questions provide texture, surprise, and insight into why certain assets matter.

Asset questions should not be naively positive. Not every respondent will identify assets. Some people live in communities with deep disinvestment, displacement, or harm. Forcing positivity alienates them. Offer space for complexity: "What do you value about this community, if anything?" or "Is there a place, person, or service you would miss if it were gone?" These phrasings allow for honest ambivalence.

Asset questions also reveal whose assets are visible. If a survey asks only about formal institutions (libraries, clinics, schools), it misses informal assets (a neighbor who watches kids after school, a corner store owner who extends credit, a park where elders gather). Phrase questions broadly enough to capture both.


23.6 Need Questions

Need questions identify gaps, barriers, unmet requirements, and challenges. They are politically sensitive because they can reinforce deficit narratives, but they are also necessary. A community cannot advocate for change without evidence of need.

Need questions should be specific and actionable. "What are the biggest problems in your community?" is too vague. Responses will be all over the map, hard to analyze, and hard to act on. Better:

  • "In the past year, have you or someone in your household needed any of the following services but been unable to access them? (Check all that apply)"
    ☐ Healthcare (doctor, dentist, mental health)
    ☐ Childcare
    ☐ Affordable housing
    ☐ Food (groceries, food bank, meal programs)
    ☐ Transportation
    ☐ Employment support
    ☐ Legal help
    ☐ Other: __________

This format produces clear, quantifiable data. You can report: "35% of respondents could not access affordable childcare in the past year."

Need questions must also identify barriers, not just gaps. Knowing that someone did not access healthcare is useful. Knowing why — cost, distance, wait times, language, discrimination, lack of awareness — is actionable.

  • "If you or someone in your household needed healthcare but did not get it, what were the main reasons? (Check all that apply)"
    ☐ Too expensive
    ☐ No transportation
    ☐ Clinic too far away
    ☐ Wait times too long
    ☐ Clinic not open when needed
    ☐ Language barrier
    ☐ Felt unwelcome or discriminated against
    ☐ Did not know where to go
    ☐ Other: __________

Pair needs questions with open-ended follow-ups:

  • "Is there a service or resource you need that does not exist in this community?"
  • "What would make it easier for you to access [healthcare / childcare / transportation / etc.]?"

Need questions can trigger distress. Asking someone to list all the things they cannot access or afford can feel demoralizing. Keep the tone matter-of-fact, not pitying. Remind respondents that the goal is action, not judgment. And always follow needs questions with asset questions or future-oriented questions to shift the emotional arc: "If you could improve one thing about this community, what would it be?"


23.7 Place-Based Questions

Community Mapping is inherently spatial. Place-based questions link experiences, assets, and needs to specific locations, enabling geographic analysis and visualization.

The simplest place-based question is "Where do you live?" For privacy reasons, do not ask for exact street addresses unless absolutely necessary. Instead, use neighborhood names, postal code prefixes, intersections, or hand-drawn map zones. "Which neighborhood do you live in?" with a list of recognized neighborhood names works in most cities. In rural areas, ask for the nearest town or community name.

Other place-based questions can be point-specific or area-specific:

  • Point-specific: "Where do you usually buy groceries?" (Respondent names a specific store or location.)
  • Area-specific: "Are there any areas in this neighborhood where you feel unsafe walking alone after dark?" (Respondent describes general areas, not exact coordinates.)

For in-person surveys, you can use participatory mapping methods. Provide a large printed map of the area and colored stickers. Respondents place stickers on locations as they answer questions:

  • Green sticker: "Mark a place you feel safe."
  • Red sticker: "Mark a place you avoid."
  • Blue sticker: "Mark a place where you connect with others."

This produces spatial data without requiring literacy or digital access.

For digital surveys, tools like Maptionnaire, Social Pinpoint, or custom web maps allow respondents to click on locations directly. This is powerful but excludes people without internet access or digital literacy.

Place-based questions must be voluntary. Do not force respondents to disclose their home location if they are uncomfortable. Especially for vulnerable populations (people experiencing homelessness, undocumented immigrants, people fleeing violence), exact location data can be dangerous. Always include a "Prefer not to answer" option.

Finally, place-based questions should ask about meaningful places, not just service locations. "Where do you feel you belong in this community?" or "Is there a place that feels like home to you?" produces richer data than "Where is the nearest clinic?"


23.8 Story-Based Questions

Story-based questions are the most important design move in qualitative survey and interview work. Rating scales and checklists produce data. Story-based questions produce understanding.

A story-based question invites a narrative response. It typically starts with "Tell me about a time when..." or "Describe..." or "Can you give an example of...?" Story-based questions do not ask for opinions in the abstract. They ask for lived experience in concrete detail.

Compare:

  • Not story-based: "Do you think the community center is welcoming?" (Yes / No / Somewhat)
  • Story-based: "Tell me about a time you visited the community center. What was that experience like?"

The first produces a rating. The second produces context: Did they go alone or with others? Did staff greet them? Was signage clear? Did they feel comfortable? The story reveals why someone feels welcome or unwelcome — and that's where actionable insight lives.

More examples of strong story-based questions:

  • "Can you describe a time when a neighbor helped you or you helped a neighbor?"
  • "Think of a place in this community where you feel safe. What makes it feel safe?"
  • "Tell me about a time when you needed help but couldn't find it. What happened?"
  • "Describe a moment when you felt proud to live in this community."

Story-based questions work especially well in oral history and photovoice projects. Section 21.4 covered oral history methods in depth; the connection here is methodological. Both story-based survey questions and oral history interviews elicit narrative. Both require the researcher to listen, follow up, and resist the urge to cut people off when their answers are longer or messier than expected.

Story-based questions produce qualitative data that cannot be graphed directly. But they can be coded thematically. If fifty respondents answer "Tell me about a time you avoided a place because you felt unsafe," you can identify patterns: lighting, isolation, past experience of harassment, presence of visible drug use, lack of other people around. These patterns inform action more clearly than a Likert scale.

Story-based questions take longer to answer and analyze, but the payoff is depth. Use them strategically. A survey can include a few story-based questions among closed-ended ones. An interview can be built entirely around story-based prompts. The format should match the research goal.


23.9 Digital vs Paper Surveys

Choosing between digital and paper formats is not a neutral technical decision. It is a decision about who can participate.

Digital surveys (web-based, mobile apps, tablets) are faster to deploy, easier to analyze, and cheaper to scale. They eliminate data entry. They can include skip logic (if someone answers "No" to "Do you have children?" the survey skips childcare questions). They can be translated with a click. They can include images, videos, or interactive maps.
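Skip logic is just conditional branching over the question sequence. The sketch below shows the idea with hypothetical question IDs (none of these names come from a real instrument); a respondent with no children bypasses the childcare follow-ups entirely.

```python
# Minimal sketch of survey skip logic. Question IDs and the skip rule
# are illustrative assumptions, not part of any real survey platform.

ORDER = ["Q5_has_children", "Q6_childcare_use",
         "Q7_childcare_barriers", "Q8_transportation"]

def next_question(current_id, answer):
    """Return the ID of the next question, or None at the end."""
    # Skip rule: no children means the childcare questions are skipped.
    if current_id == "Q5_has_children" and answer == "No":
        return "Q8_transportation"
    i = ORDER.index(current_id)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None

# A respondent without children goes straight from Q5 to Q8.
print(next_question("Q5_has_children", "No"))   # Q8_transportation
print(next_question("Q5_has_children", "Yes"))  # Q6_childcare_use
```

Paper surveys can only approximate this with written instructions ("If No, skip to Question 8"), which respondents often miss; that asymmetry is one of digital's genuine advantages.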

But digital surveys exclude people without internet access, smartphones, or digital literacy. In Canada, as of 2021, 12% of rural households and 6% of urban households lacked home internet (Statistics Canada). Older adults, people living in poverty, and people with certain disabilities are disproportionately excluded by digital-only formats.

Paper surveys are slower and more labor-intensive. Someone must print, distribute, collect, and manually enter responses. Handwriting can be illegible. Skip logic is impossible. But paper is accessible to anyone who can read, requires no technology, and can be completed at the respondent's pace without digital pressure.

Best practice: offer both. A mixed-mode approach maximizes inclusion. Distribute paper surveys at community centers, libraries, clinics, and door-to-door. Simultaneously provide a web link and QR code for those who prefer digital. This dual approach produces higher response rates and more representative samples.

Some contexts lean more heavily toward one format:

  • Urban, younger, higher-income populations: Digital is often acceptable as primary with paper backup.
  • Rural, older, lower-income populations: Paper is often primary with digital backup.
  • Indigenous communities, remote regions, low-bandwidth areas: Paper or in-person interviews are often the only feasible options.
  • Multi-lingual communities: Digital tools with auto-translation (Google Forms, Microsoft Forms) can help, but paper in multiple languages may be more trusted.

In-person interviews (with the interviewer recording responses on paper or tablet) solve many access problems. The interviewer can clarify questions, read questions aloud for people with low literacy, and translate on the fly if they are bilingual. This format is time-intensive but produces high-quality data and builds relationship.

A critical ethical point: do not make digital-only surveys and then claim the results represent "the community." If your method excludes significant groups, your findings are biased. Acknowledge the limitation or change the method.


23.10 Analysis and Interpretation

Survey data is not self-explanatory. Analysis transforms responses into findings. Interpretation transforms findings into meaning.

Quantitative analysis of closed-ended questions produces frequencies, percentages, and cross-tabulations. If 200 people completed a survey and 80 said they cannot access affordable childcare, that is 40%. If you cross-tabulate by household income, you may find that 60% of low-income households and 20% of higher-income households report this barrier. That pattern is a finding.

Basic analysis tools: Excel or Google Sheets for small datasets (under 500 responses). SPSS, R, or Python for larger datasets or complex statistical analysis. Most community-based surveys do not require advanced statistics. Frequencies and cross-tabs are usually sufficient.
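The cross-tabulation arithmetic described above takes only a few lines of standard-library Python. The response data below is invented to mirror the childcare example (200 respondents, 80 reporting the barrier); field names are illustrative.

```python
# Frequencies and a cross-tab by income, using only the standard library.
# The dataset is fabricated to match the worked example in the text.
from collections import Counter

responses = (
    [{"income": "low", "childcare_barrier": True}] * 60
    + [{"income": "low", "childcare_barrier": False}] * 40
    + [{"income": "higher", "childcare_barrier": True}] * 20
    + [{"income": "higher", "childcare_barrier": False}] * 80
)

# Overall frequency: share of respondents reporting the barrier
overall = sum(r["childcare_barrier"] for r in responses) / len(responses)

# Cross-tabulation: barrier rate within each income group
group_sizes = Counter(r["income"] for r in responses)
barrier_counts = Counter(
    r["income"] for r in responses if r["childcare_barrier"]
)
rates = {g: barrier_counts[g] / group_sizes[g] for g in group_sizes}

print(f"Overall: {overall:.0%}")                 # 40%
print(f"Low-income: {rates['low']:.0%}")         # 60%
print(f"Higher-income: {rates['higher']:.0%}")   # 20%
```

The same computation in a spreadsheet is a COUNTIFS divided by a COUNTIF; the point is that community-scale cross-tabs need no specialized software.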

Qualitative analysis of open-ended and story-based questions requires thematic coding. Read through responses, identify recurring themes, label (code) each response with one or more themes, and count how often each theme appears. Example: if you asked "What would make it easier to access healthcare?" and fifty people mentioned transportation, thirty mentioned cost, and twenty mentioned language, those are your top three themes.

Coding can be done manually (highlighting and labeling in a Word document or spreadsheet) or with qualitative analysis software (NVivo, Atlas.ti, Dedoose). For small datasets (under 100 responses), manual coding is often faster and more intuitive.
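Once responses have been manually coded, tallying themes is a simple counting step. A minimal sketch, assuming each response can carry more than one theme code (the coded responses here are invented; the human coding itself cannot be automated this way):

```python
# Tally manually assigned theme codes for an open-ended question.
# The coded responses are fabricated for illustration.
from collections import Counter

coded_responses = [
    ["transportation"],
    ["transportation", "cost"],
    ["cost"],
    ["language"],
    ["transportation", "language"],
]

theme_counts = Counter(
    theme for codes in coded_responses for theme in codes
)

# Themes ranked by frequency across all responses
for theme, count in theme_counts.most_common():
    print(theme, count)   # transportation appears 3 times, the rest 2
```

Counting is the easy part; the analytical work, and the judgment, lies in reading responses closely and assigning codes consistently.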

Interpretation is where you move from "what" to "why" and "so what." A finding might be: "40% of survey respondents cannot access childcare." Interpretation asks: Is this higher or lower than expected? How does it compare to provincial or national averages? Does the barrier disproportionately affect certain demographic groups? What does this mean for workforce participation, child development, and family wellbeing? What are the implications for policy or programs?

Good interpretation requires context. Compare survey findings to census data, administrative data, or prior research. Triangulate with other sources. If a survey shows high satisfaction with parks but observational audits show poor maintenance, investigate the discrepancy. Maybe people have low expectations, or maybe the survey question was not specific enough.

Always report response rate and sample characteristics. If you distributed 500 surveys and received 100 back, that's a 20% response rate. Who responded? Were they older, younger, whiter, wealthier than the general population? If your sample is not representative, acknowledge it as a limitation.
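The response-rate and representativeness check above is simple arithmetic, sketched below. The census share and sample count are hypothetical figures for illustration, and the 1.5x threshold is an arbitrary rule of thumb, not a statistical test.

```python
# Response rate and a rough representativeness check.
# All figures below are invented; in practice the population share
# would come from census data for the survey area.
distributed, returned = 500, 100
response_rate = returned / distributed
print(f"Response rate: {response_rate:.0%}")   # 20%

share_65_plus_sample = 45 / returned   # hypothetical: 45 of 100 respondents
share_65_plus_population = 0.18        # hypothetical census figure

# Crude flag: sample share more than 1.5x the population share
if share_65_plus_sample > share_65_plus_population * 1.5:
    print("Older adults are over-represented; report this as a limitation.")
```

A flag like this does not invalidate the data; it tells you which comparisons to make cautiously and what to disclose when reporting.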

Also report limitations. No survey is perfect. Common limitations: self-selection bias (people who respond may have stronger opinions), small sample size, lack of representation of certain groups, reliance on self-reported data, survey fatigue (if the survey was too long), and ambiguous questions (if respondents interpreted questions differently than intended).

Honest interpretation acknowledges what the data can and cannot tell you. If your survey had no disability questions, you cannot claim the findings apply to people with disabilities. If your survey was digital-only, you cannot claim it represents digitally excluded populations. Integrity requires restraint.


23.11 Synthesis and Implications

Survey and interview design is both craft and ethics. Craft because good questions require clarity, precision, and attention to language. Ethics because every question is a request for someone's time, trust, and disclosure — and because bad design can produce misleading data that drives harmful decisions.

The core principles to carry forward:

  1. Design backward from the research question. Every question on the survey should serve a clear purpose. If you cannot explain why you are asking and how the answer will be used, delete the question.

  2. Avoid bias. Leading questions, loaded language, and question order effects produce invalid data. Pilot-test with real people, not just colleagues.

  3. Obtain informed consent. People must understand what they are agreeing to and have a genuine choice to decline. Consent is not fine print.

  4. Respect identity complexity. Demographic questions must not force people into false categories. Open-ended options, multiple checkboxes, and "Prefer not to answer" are non-negotiable.

  5. Balance assets and needs. Deficit-only surveys stigmatize communities. Asset-only surveys miss real barriers. Both are needed.

  6. Invite stories. "Tell me about a time when..." produces richer, more actionable data than rating scales alone.

  7. Choose format with inclusion in mind. Digital-only surveys exclude significant populations. Mixed-mode (paper + digital + in-person) maximizes reach.

  8. Interpret with humility. Acknowledge limitations. Triangulate with other data. Do not overstate findings. Respect the gap between what people say and what is true.

Surveys and interviews are relational tools. They are conversations, even when structured. The quality of the data depends on the trust you build, the clarity of your questions, and your willingness to listen not just to the answers you hoped for, but to the ones respondents actually give.

The implications ripple outward. Poorly designed surveys waste time, produce bad data, and erode community trust in research. Well-designed surveys produce insight, build relationships, and support evidence-based action. The difference is not luck. It is skill, care, and ethical commitment.

As you move into the final chapter of this Part — Chapter 24 on fieldwork and observation — you will see these principles in motion. Survey design is desk work. Fieldwork is where you test whether your questions actually work, whether people understand them, and whether the format serves the community or just the researcher.


23.12 Survey Drafting Exercise

Purpose: This exercise asks you to draft a working community asset and needs survey, applying the principles covered in this chapter. You will write consent language, asset questions, needs questions, demographic questions, and at least one story-based question. The deliverable is a complete survey instrument ready to pilot-test.

Materials Needed:

  • Word processor or survey design tool (Google Forms, Microsoft Forms, SurveyMonkey, or similar)
  • A real or hypothetical community context (choose one: a neighborhood, a campus, a small town, or a rural area)
  • Chapter 23 as reference

Steps:

  1. Define the research purpose. Write one clear sentence: "This survey will help us understand [what] in [which community] in order to [action or outcome]." Example: "This survey will help us understand childcare access and barriers in the Downtown East neighborhood in order to advocate for new childcare spaces."

  2. Draft informed consent language. Write a consent statement that includes: purpose, voluntary participation, confidentiality, risks, benefits, contact info, and how consent is indicated. Use plain language. Keep it under 200 words.

  3. Write 5-7 asset questions. Include at least one closed-ended question (checkbox or multiple choice) and one open-ended question. Make sure questions are specific, clear, and unbiased.

  4. Write 5-7 needs questions. Include at least one question about barriers (not just gaps). Include at least one story-based question (e.g., "Tell me about a time you needed help but couldn't access it").

  5. Write 4-6 demographic questions. Follow the guidelines in §23.4. Include gender, age, and at least two others (income, household composition, language, race/ethnicity, disability status, or other relevant categories). Ensure inclusive response options.

  6. Add one place-based question. Example: "Which neighborhood do you live in?" or "Where do you usually buy groceries?" Choose a format appropriate to the context (neighborhood name, postal code, intersection, or open-ended).

  7. Review for bias and clarity. Read each question aloud. Could it be misunderstood? Does it lead the respondent? Is it double-barreled? Revise as needed.

  8. Format the survey. Put consent language first. Then asset questions, needs questions, story-based question(s), demographic questions, and place-based question. End with "Thank you for your time. Your input will be used to [brief statement of how findings will be used]."
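Steps 3 through 7 can also be supported with a small amount of tooling. The sketch below, in Python, models a draft survey as structured data and runs a simple pre-pilot lint pass: flagging question text that may be double-barreled and checking that closed demographic questions offer a "Prefer not to answer" option. The question texts, section names, and lint heuristics are illustrative assumptions, not part of any standard; heuristics like these supplement, and never replace, the read-aloud review in Step 7.

```python
# A minimal sketch: a draft survey as structured data, plus a lint
# pass to catch common drafting problems before pilot-testing.
# All question texts and heuristics here are illustrative only.

PREFER_NOT = "Prefer not to answer"

survey = [
    {"section": "assets", "type": "open",
     "text": "What places in your neighborhood do you value most?"},
    {"section": "needs", "type": "open",
     "text": "Tell me about a time you needed help but couldn't access it."},
    {"section": "demographics", "type": "closed",
     "text": "How do you describe your gender?",
     "options": ["(open text)", PREFER_NOT]},
    # Deliberately flawed example: double-barreled wording.
    {"section": "needs", "type": "closed",
     "text": "Are local parks safe and well maintained?",
     "options": ["Yes", "No", PREFER_NOT]},
]

def lint(questions):
    """Flag common drafting problems; a human reviewer makes the final call."""
    warnings = []
    for i, q in enumerate(questions, start=1):
        text = q["text"].lower()
        # "and"/"or" in a question often signals a double-barreled question.
        if " and " in text or " or " in text:
            warnings.append(f"Q{i}: possibly double-barreled: {q['text']!r}")
        # Closed demographic questions need an opt-out option.
        if q["section"] == "demographics" and q["type"] == "closed":
            if PREFER_NOT not in q.get("options", []):
                warnings.append(f"Q{i}: missing {PREFER_NOT!r} option")
    return warnings

for warning in lint(survey):
    print(warning)
```

Running this flags only the fourth question, whose "safe and well maintained" wording asks two things at once. A respondent who finds parks safe but poorly maintained has no honest answer, which is exactly the defect Step 7 asks you to catch.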

Deliverable: A complete survey instrument (2-4 pages if paper, 15-25 questions if digital). Include a 1-page cover sheet with: (1) the research purpose sentence from Step 1, (2) a note on whether this is intended as paper, digital, or mixed-mode, and (3) one paragraph reflecting on a design challenge you encountered and how you resolved it.

Time Estimate: 2-3 hours

Safety and Ethics Notes: If this survey will be used with real people, it must be reviewed by a research ethics board or community advisory group before deployment. This exercise is for learning purposes. Do not distribute it to real respondents without ethical review and revision based on community input.


Key Takeaways

  • Useful survey questions are clear, relevant, specific, unbiased, and ask one thing at a time.
  • Leading questions and double-barreled questions produce invalid data. Pilot-test to catch them.
  • Informed consent is non-negotiable. Respondents must understand purpose, risks, rights, and data use.
  • Demographic questions must respect identity complexity: open-ended gender, multiple-checkbox race/ethnicity, income ranges, and "Prefer not to answer" options.
  • Story-based questions ("Tell me about a time when...") produce deeper, more actionable insight than rating scales alone.
  • Digital-only surveys exclude significant populations. Mixed-mode (paper + digital + in-person) maximizes inclusion.

Recommended Further Reading

Foundational:

  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken, NJ: Wiley. (The standard reference work on survey methodology.)

Academic Research:

  • Suggested: Peer-reviewed research on survey bias, question-wording effects, and mode effects (digital versus paper).

Practical Guides:

  • Plain Language Action and Information Network (PLAIN). Federal Plain Language Guidelines. U.S. General Services Administration. (A U.S. federal resource on writing clearly.)
  • Gender-Based Analysis Plus (GBA+) framework. Government of Canada. (A federal framework for inclusive demographic design.)

Case Studies:

  • Suggested: Case studies of participatory survey design with Indigenous communities, immigrant communities, and low-income neighborhoods.

Plain-Language Summary

Surveys and interviews are tools to ask people about their lives, experiences, and needs. But asking good questions is harder than it looks. A bad question can confuse people, waste their time, or push them toward answers that aren't true. A good question is clear, respectful, and gets you the information you actually need.

This chapter teaches how to design surveys and interviews that work. That means avoiding biased or confusing questions, getting people's real consent before you ask them anything, respecting that identity is complicated (don't force people into boxes), and asking for stories — not just ratings — when you want to understand what life is really like.

It also means thinking about who can answer your survey. If you only do digital surveys, you exclude people without computers or internet. If you only do paper surveys in English, you exclude people who speak other languages. Good design means making it possible for everyone to participate, not just the people who are easiest to reach.

Finally, it means being honest about what your data can and can't tell you. If only happy people answered your survey, you can't say "everyone is happy." If you didn't ask about disability, you can't say your findings apply to people with disabilities. Good research admits its limits.


End of Chapter 23.