Analysing Civic Conversations with Sensemaking Tools
1. Introduction
Provide an overview of the guide's scope. Help readers understand what the guide covers and how it relates to other resources, e.g. the existing Jigsaw guide. The guide provides a way to structure an analysis project.
2. Tools and Capabilities
Familiarise readers with the available Sensemaker tools and how to access them through the Consul Democracy integration or directly using the sensemaker package. Help readers understand the basic differences between each tool.
3. Problem Framing
As the first step in an analysis / sensemaking workflow, help readers define clear analytical questions and identify scope before starting. This is to ensure the analysis addresses the right questions.
4. Data Collection and Preparation
Guide readers through gathering conversation data and preparing it for analysis. Explain how the Consul integration handles data collection automatically, and highlight considerations for privacy, spam filtering, and data quality.
5. Exploratory Data Analysis
Familiarise readers with exploratory techniques to understand their data before deeper analysis. Explain how to use Sensemaker tools for initial topic discovery and exploration, including generating exploratory web reports.
6. Analytical Activities
Walk readers through the core analytical activities: topic discovery, categorisation and refinement, summary generation, and comparative analysis. Emphasise the iterative nature of these activities and when to use each approach.
7. Human Review
Emphasise the critical role of human judgment throughout the analysis process. Guide readers on how to evaluate topics, fix categorisation issues, review summaries responsibly, and maintain a decision log for transparency.
8. Principles for Good Civic Sensemaking
Establish core principles for conducting ethical and trustworthy sensemaking analyses. Cover traceability to source comments, representing minority voices, avoiding oversimplification, checking for bias, language issues, and handling sensitive content appropriately.
9. Reporting and Communication
Help readers convert insights into actionable findings for stakeholders. Guide them on structuring reports, communicating uncertainty and limitations, and using pre-publication checklists to ensure quality outputs.
Appendix: Using Sensemaker with Consul Democracy
Explain how Sensemaker tools apply to different Consul Democracy process types. Clarify what is being analysed (comments vs. proposal texts) and the specific use cases for each target type, including polls, debates, proposals, and participatory budgets.
Analysing Civic Conversations with Sensemaking Tools
1. Introduction
This guide provides practical, step-by-step guidance for conducting sensemaking analyses of civic conversations. It covers the analytical workflow from problem framing through to creating trustworthy outputs for decision-makers.
Note: The workflow described here is iterative rather than strictly linear. You will move back and forth between activities depending on what you discover and the questions you're trying to answer.
2. Tools and Capabilities
Before diving into the analytical workflow, it's helpful to understand what tools are available and what they do. This section provides an overview of the Sensemaker tools available through the Consul Democracy integration.
2.1 Quick Reference
| Tool | Purpose | Output Format | When to Use |
| --- | --- | --- | --- |
| Categorise | Assign comments to topics and subtopics | CSV file with categorisations | When you need to categorise comments into known or discovered topics |
| Summarise | Generate a narrative summary of the conversation | HTML file | When you need a readable narrative summary |
| Analyse | Generate detailed statistical analysis | JSON files and visualizations | When you need raw data for further processing or custom visualizations |
| Report | Interactive HTML report combining analysis and summary | Single HTML file | When you want a comprehensive, readable view (useful for both exploration and final reporting) |
2.2 Consul Integration Interface
The Sensemaker tools are accessible through the Consul Democracy admin interface. As long as the feature is enabled, the "Sensemaker" tools will be available in the admin menu.
Accessing the tools:
Navigate to the admin area
Select "Sensemaker" from the admin menu
Choose "New Run" to start an analysis
Select the target type (poll, debate, proposal, etc.) and search for the specific item you want to analyse
Choose which tool to run and configure any additional context
Running analyses:
The interface shows currently running jobs and past runs
You can view job status, metadata (e.g., number of comments analysed), and download output files
For Analyse runs, visualizations are displayed in the interface
All output files can be downloaded for further analysis
2.3 Available Tools
Categorise
Purpose: Assign statements (comments) to topics and subtopics.
What it does:
Takes a CSV of comments and categorises each comment into one or more topics/subtopics
Can use topics you've already identified, or discover topics from the data
Supports multiple topic assignment (a comment can belong to multiple topics)
Output: CSV file with comments and their topic/subtopic assignments
For manual/CLI usage: Use categorization_runner.ts. See the Sensemaker README for CLI usage details.
Summarise
Purpose: Generate a narrative summary of the conversation.
What it does:
Produces a structured summary with introduction, overview, top subtopics, and detailed sections
Identifies themes, common ground, and differences of opinion (when vote data is available)
Includes citations linking back to source comments
Output: HTML file containing the full summary
For manual/CLI usage: Use runner.ts. See the Sensemaker README for CLI usage details.
Analyse
Purpose: Generate detailed statistical analysis with raw data outputs.
What it does:
Performs topic identification, categorization, and summarization
Produces JSON files with topics (including sizes and subtopics), comments with alignment scores, and summary data
Generates visualizations showing topic structure and distributions
Provides raw data for further processing or custom visualizations
Output:
JSON file with topics and their sizes/subtopics
JSON file with all comments and their alignment scores
JSON file with summary data
Visualizations displayed in the admin interface
For manual/CLI usage: Use advanced_runner.ts. See the Sensemaker README for CLI usage details.
Report
Purpose: Generate an interactive HTML report combining analysis and summary.
What it does:
Automatically runs the Analyse tool first to produce necessary data
Combines analysis and summary into a single interactive HTML report
Includes hoverable citations linking back to source comments and vote counts
Provides navigable structure to explore topics and subtopics
Exports as a single HTML file for easy sharing
Output: Single HTML file with interactive report
Note: The Report tool is useful both for exploration and as a final deliverable. You can generate multiple reports as you iterate.
For manual/CLI usage: Use single-html-build.js, which requires the JSON files produced by advanced_runner.ts (topics, summary, and comments-with-scores). See the Sensemaker README for CLI usage details.
2.4 Additional Context
TODO: Indicate where this is in screenshots.
All tools support providing additional context when running analysis. This context helps the model better understand the domain and produce more relevant results. Examples:
"This is from a conversation on a $15 minimum wage in Seattle"
"These are comments on a proposed park development"
"Focus on identifying concerns about accessibility"
You can use additional context to:
Provide domain-specific information
Guide the model's interpretation
Correct or refine the analysis approach
Add instructions for handling specific themes
3. Problem Framing
Before collecting data or running tools, clearly define what you're trying to learn. Your analytical questions shape everything that follows—which data you collect, how you prepare it, and which analytical activities you emphasise.
3.1 Define your core questions
Common analytical questions include:
"What are people talking about?" — Identifying main themes and topics
"How are perspectives distributed?" — Understanding who holds which views and whether there's consensus or division
"How does conversation evolve over time?" — Tracking how themes shift across phases or time periods (TODO: Consider whether temporal is relevant in our context)
"Are there duplicate or related proposals?" — Finding semantic duplicates that should be merged
"What concerns are emerging that weren't in the original prompt?" — Discovering unexpected themes
Your questions determine which analytical activities you'll emphasise and in what order.
3.2 Identify scope and constraints
Platforms and sources: Which conversations, debates, proposals, or processes are in scope?
Time windows: Are you analysing a specific period, or tracking changes over time? (TODO: Consider whether temporal is relevant in our context)
Privacy considerations: What personal or identifying information must be handled carefully?
Ethical considerations: Are there groups or topics that require special handling?
4. Data Collection and Preparation
4.1 Data collection
Gather conversation data and relevant context that will accompany the analysis.
Under the hood, Sensemaker requires a CSV file with comment_text, comment-id, and vote columns (agrees, disagrees, passes). It also supports group-specific vote breakdowns for comparative analysis in summaries. For complete CSV format requirements and details, see the Sensemaker README or documentation.
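A minimal input file with these columns might look like the sketch below. Column names follow the description above; see the Sensemaker README for the authoritative format, including the optional group-specific vote columns.

```csv
comment-id,comment_text,agrees,disagrees,passes
1,The park is unsafe at night,34,5,2
2,We need more frequent buses on route 12,21,9,4
3,Parking charges are too high,12,30,1
```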
The Consul integration provides an interface for selecting and preparing data for the supported process types (see the Appendix). This handles much of the data collection step automatically.
4.2 Data preprocessing
Clean and normalise the text before analysis. The Consul Democracy integration is designed to handle this step automatically, formatting the data for use with Sensemaker. For details of the input format, refer to the Sensemaker README.
However, there are a few things to consider regarding the input data:
Privacy: You may want to check for any personal or identifying information that should be removed or anonymised before analysis and certainly before sharing quotes.
Spam and noise filtering: You may want to manually review and exclude spam, test comments, admin or facilitator replies, or other noise that isn't automatically filtered
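These checks can be partially automated before analysis. The sketch below flags comments for manual review; the regular expressions and the five-character cut-off are illustrative assumptions, not rules built into Sensemaker.

```python
import csv
import re
from io import StringIO

# Crude pre-analysis check for personal data and obvious noise.
# Column names follow the Sensemaker input format described above;
# the patterns are illustrative, not exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\+?\d[\d\s-]{8,}\d\b")

def flag_comments(rows):
    """Return (comment-id, reason) pairs for comments needing manual review."""
    flagged = []
    for row in rows:
        text = row["comment_text"]
        if EMAIL.search(text):
            flagged.append((row["comment-id"], "possible email address"))
        elif PHONE.search(text):
            flagged.append((row["comment-id"], "possible phone number"))
        elif len(text.strip()) < 5:
            flagged.append((row["comment-id"], "very short / possible test comment"))
    return flagged

sample = """comment-id,comment_text
1,The park needs better lighting
2,Contact me at jane@example.com about this
3,ok
"""
rows = list(csv.DictReader(StringIO(sample)))
for cid, reason in flag_comments(rows):
    print(cid, reason)
```

Flagged comments still need a human decision: a matched pattern is a prompt to review, not grounds for automatic removal.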
5. Exploratory Data Analysis
Exploratory Data Analysis (EDA) is an open-ended process to familiarise yourself with the data, uncover initial insights, and identify potential issues. This may lead you back to further data preprocessing or to revisiting your analytical questions.
5.1 Summarise main characteristics
Note: Sensemaker focuses on topic discovery and summarization, not basic data statistics. For the characteristics below, you'll need to analyse the data yourself (using the exported CSV or other tools) or check your platform's analytics.
Volume: How many comments, from how many participants?
Distribution: Are comments evenly distributed, or do a few participants dominate?
Temporal patterns: When did most comments arrive? Are there peaks or gaps?
Length and quality: What's the typical comment length? Are there many very short or very long comments?
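Several of these characteristics can be computed directly from an exported comments CSV with the standard library. The "author-id" column below is an assumption; use whatever participant identifier your platform exports.

```python
import csv
from collections import Counter
from io import StringIO
from statistics import mean, median

# Illustrative export with an assumed "author-id" participant column.
sample = """comment-id,author-id,comment_text
1,a,Buses are always late on route 12
2,a,Parking near the park is impossible
3,b,More trees please
4,c,The playground is great for young kids
"""
rows = list(csv.DictReader(StringIO(sample)))

volume = len(rows)                                   # how many comments
per_author = Counter(r["author-id"] for r in rows)   # distribution across participants
lengths = [len(r["comment_text"].split()) for r in rows]

print(f"{volume} comments from {len(per_author)} participants")
print("most active participant:", per_author.most_common(1)[0])
print(f"comment length (words): mean={mean(lengths):.1f}, median={median(lengths)}")
```

If your export includes timestamps, the same approach extends to temporal patterns by bucketing comments per day or per phase.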
5.2 Identify patterns, outliers, and issues
Note: These checks require manual review or separate analysis tools. Sensemaker doesn't automatically detect spam, bots, or data quality issues—it will attempt to categorise all content you provide.
Outliers: Are there unusually long comments, or comments that seem completely off-topic?
Potential issues: Do you see evidence of spam, bots, or coordinated campaigns?
Missing data: Are there important fields that are frequently empty?
5.3 Exploring with Sensemaking tools
Sensemaking tools can be part of your EDA:
Run an initial topic discovery to see what themes emerge before you refine
Look for unexpected clusters that might reveal issues with your data or questions
Check for obvious miscategorisation that suggests data quality problems
After running an analysis using the Sensemaker integration in Consul, you will be able to see run metadata (e.g. the number of comments analysed) and, for Analyse runs (see Section 2.3), visualisations of the data. Most importantly, you will be able to download the output files for further analysis.
TODO: Add some more metadata to job show view in admin
These visualizations help you quickly understand the structure of your conversation data and identify which topics are most discussed. The visualizations are built using Sensemaker's visualization library and provide an interactive way to explore the analysis results.
5.3.1 Generating an exploratory web report
For a high-level, readable view of your data during exploration, use the Report tool (see Section 2.3 for details). This tool automatically runs the Analyse tool first to produce the necessary data, then generates an interactive HTML report.
When to use for exploration:
After your first topic discovery run, to get a readable overview of what the model found
When you want to quickly share findings with team members for discussion
As a way to review summaries and check for obvious issues before deeper refinement
To get a comprehensive view that combines topics, summaries, and statistics in one interactive format
The web report is useful both for exploration and as a final deliverable. You can generate multiple reports as you iterate—each one helps you decide whether your topic structure and context are working well before moving to more detailed analysis.
5.4 Other exploratory techniques
Beyond Sensemaking tools, consider:
Basic word frequency analysis: What words or phrases appear most often?
Simple keyword searches: Are there specific terms you expected to see but don't?
Sampling and reading: Manually read a random sample of 20-50 comments to get a feel for the data
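The first two techniques take only a few lines; the stop-word list and example comments below are illustrative, and the random sample is seeded purely for reproducibility.

```python
import random
import re
from collections import Counter

comments = [
    "The buses are never on time",
    "Bus reliability is the main issue",
    "We need safer cycle lanes",
    "Cycle lanes would help buses too",
]
# Illustrative stop-word list; extend for your own data.
STOP = {"the", "are", "is", "on", "we", "need", "would", "too", "main"}

# Word frequency: what appears most often?
words = Counter(
    w for c in comments for w in re.findall(r"[a-z]+", c.lower()) if w not in STOP
)
print("top words:", words.most_common(3))

# Keyword search: terms you expected to see
for term in ("parking", "cycle"):
    hits = [c for c in comments if term in c.lower()]
    print(f"'{term}': {len(hits)} comment(s)")

# Random sample for manual reading (seeded for reproducibility)
random.seed(1)
print("sample to read:", random.sample(comments, 2))
```

An absent expected keyword ("parking" above) can be as informative as a frequent one: it may signal a scoping gap or unfamiliar local vocabulary.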
6. Analytical Activities
Once you understand your data and have clear questions, you'll use Sensemaking tools to answer them. The activities below are not a fixed sequence—you'll move between them iteratively based on what you discover.
6.1 Discover topics
When you first run the tool, it will cluster comments and generate topic summaries. Do not treat these as final.
Three levels of topic hierarchy:
Sensemaker can discover topics at three levels of detail:
Topics (top-level): Main themes in the conversation (e.g., "Transport", "Housing", "Environment")
Subtopics: More specific themes within each topic (e.g., under "Transport": "Bus Reliability", "Parking", "Cycling Infrastructure")
Themes (sub-subtopics): The most granular level, identifying specific discussion points within subtopics
You can configure the tool to discover:
Just top-level topics
Topics and subtopics
All three levels (topics, subtopics, and themes)
TODO: Note that pre-specified topics is not a capability of the integration.
Pre-specified topics:
If you already know the main themes you expect (e.g., from a structured consultation), you can provide your own top-level topics. Sensemaker will then discover only subtopics within those topics, rather than discovering topics from scratch. This is useful when you want to ensure certain themes are covered.
Initial review:
Scan the landscape: Look at the top-level topics to see if they cover the main themes you expect
Check for "Misc" or "Other": Sensemaker may generate a generic "Other" category for comments that don't fit clearly. If this category is too large, it likely contains distinct sub-themes that need to be broken out
Look for unexpected themes: Topics you didn't anticipate may reveal important insights or data quality issues
When to use: Start here for exploratory questions like "What are people talking about?" Use pre-specified topics when you have a structured consultation with known themes.
6.2 Categorise and refine topics
Refine the initial AI categorisation to better fit your context and questions.
Multiple topic assignment:
Statements can belong to multiple topics simultaneously. A comment about "affordable housing near public transport" might be categorised into both "Housing" and "Transport" topics. This reflects the reality that many civic issues are cross-cutting.
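For downstream processing, multiple topic assignment means a comment's topics must be treated as a list, not a single value. The semicolon-separated "topics" column below is an illustrative assumption; check the actual categorisation output format in the Sensemaker README.

```python
import csv
from io import StringIO

# Illustrative categorisation output; the delimited "topics" column
# is an assumption, not the documented Sensemaker format.
sample = """comment-id,comment_text,topics
1,Affordable housing near public transport,Housing;Transport
2,Fix the potholes on Main Street,Transport
3,More social housing in the city centre,Housing
"""
rows = list(csv.DictReader(StringIO(sample)))
for row in rows:
    row["topics"] = row["topics"].split(";")

# Cross-cutting comments belong to more than one topic.
cross_cutting = [r["comment-id"] for r in rows if len(r["topics"]) > 1]
print("cross-cutting comments:", cross_cutting)
```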
Using the Categorise tool:
Use the Categorise tool (see Section 2.3) to assign statements to topics and subtopics. You can use this to categorise comments into topics you've already identified, or to see how the model categorises comments into discovered topics.
Iterative refinement:
You can run topic discovery, refine the structure, then use the refined topics as input for a new run to further refine subtopics. Generate a Report (see Section 2.3) after each iteration to see how your refinements are working.
TODO: Expand on this and how it can be done in Consul (not yet possible, but it will be soon).
6.3 Generate summaries
Once topics are reasonably stable, generate summaries for each cluster. Sensemaker produces rich, structured summaries that go beyond simple text summaries.
Choosing a tool:
Summarise (see Section 2.3): Produces a straightforward summary HTML file
Analyse (see Section 2.3): Provides detailed statistics and JSON outputs for further processing
Report (see Section 2.3): Combines analysis and summary into an interactive HTML report—ideal for comprehensive, readable views (useful for both exploration and final reporting)
Summary structure:
Sensemaker generates summaries with several sections:
Introduction: Overview with counts of statements, votes, topics, and subtopics
Overview section: High-level summary of all themes, with percentages showing how many statements relate to each topic (percentages can exceed 100% since statements can belong to multiple topics)
Top 5 Subtopics: Quick overview of the most discussed subtopics
Detailed topic/subtopic sections: For each subtopic, Sensemaker provides:
Number of statements assigned
Themes: Up to 5 key themes identified within that subtopic
Common Ground: Summary of statements with high agreement (based on vote data)
Differences of Opinion: Summary of statements with clear disagreement (based on vote data)
Relative Agreement Level: Label indicating whether the subtopic shows "high", "moderately high", "moderately low", or "low" agreement compared to other subtopics
Using vote data:
Sensemaker uses vote counts (agrees, disagrees, passes) to identify Common Ground and Differences of Opinion. It:
Selects statements with the clearest signals for agreement or disagreement based on vote statistics
Requires at least 20 total votes for a statement to be included in Common Ground/Differences sections (to avoid misleading impressions from small samples)
Only considers vote information (not text analysis) when selecting statements for these sections
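The selection logic can be illustrated roughly as follows. Only the 20-vote minimum comes from the documentation above; the 0.7 agreement threshold and the split-vote test are illustrative assumptions, not Sensemaker's actual statistics.

```python
# Rough sketch of vote-based selection: statements need at least 20 total
# votes, and only vote counts (not text) are considered. The 0.7 and 0.35
# thresholds are illustrative assumptions.
MIN_VOTES = 20

def classify(agrees, disagrees, passes):
    total = agrees + disagrees + passes
    if total < MIN_VOTES:
        return "excluded (too few votes)"
    agree_rate, disagree_rate = agrees / total, disagrees / total
    if agree_rate >= 0.7:
        return "common ground candidate"
    if min(agree_rate, disagree_rate) >= 0.35:  # opinion clearly split
        return "difference of opinion candidate"
    return "neither"

statements = {
    "more lighting in the park": (30, 4, 2),
    "close the road to cars on sundays": (14, 15, 3),
    "build a new fountain": (5, 6, 1),
}
for text, votes in statements.items():
    print(text, "->", classify(*votes))
```

Note how the third statement is excluded despite a split vote: with fewer than 20 votes, any "division" signal would be unreliable.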
Citations and grounding:
Summaries include citations that link back to the original comments and vote counts. This allows readers to:
Verify the summary against source material
See which specific comments were referenced
Check vote counts for statements mentioned in Common Ground/Differences sections
In web reports generated by the Report tool (see Section 2.3), these citations are interactive—readers can hover or click to see the original comment text and vote counts.
Additional context:
You can provide context strings when running analysis (e.g., "This is from a conversation on a $15 minimum wage in Seattle"). This helps the model better understand the domain and produce more relevant summaries.
When to use: See Section 2.3 for guidance on choosing between Summarise, Analyse, and Report tools.
Ensure you have vote data if you want Common Ground/Differences analysis.
TODO: Explain how this is a chance to influence the analysis performed by the model, e.g. by providing further instruction or corrective action. We can provide examples in other sections.
6.4 Compare segments and time periods
TODO: Do we really need temporal analysis in our context?
Use preserved metadata to answer questions about who is saying what, and when.
Comparing groups:
Using metadata: Use fields like area, age band, role to see how topic prevalence differs across groups
Using group-specific vote data: If you included group-specific vote columns (e.g., young-adults-agree-count, seniors-agree-count), you can compare not just what topics different groups discuss, but also their levels of agreement/disagreement on those topics
Watch for blind spots: If some groups are under-represented in the raw data, be explicit that comparisons between them and other groups may be unreliable
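A group comparison on a single statement can be computed directly from the group-specific vote columns. The "-agree-count" column names follow the examples above; the matching "-disagree-count" columns are an assumption, so check your export for the actual names.

```python
import csv
from io import StringIO

# Illustrative export with group-specific vote columns. The
# "-disagree-count" columns are assumed to mirror the "-agree-count" ones.
sample = """comment-id,comment_text,young-adults-agree-count,young-adults-disagree-count,seniors-agree-count,seniors-disagree-count
1,Pedestrianise the high street,40,10,12,28
2,More benches in the park,15,5,35,5
"""
GROUPS = ["young-adults", "seniors"]

for row in csv.DictReader(StringIO(sample)):
    rates = {}
    for g in GROUPS:
        agree = int(row[f"{g}-agree-count"])
        disagree = int(row[f"{g}-disagree-count"])
        rates[g] = agree / (agree + disagree)  # share of non-pass votes agreeing
    print(row["comment_text"], {g: round(r, 2) for g, r in rates.items()})
```

Before reading anything into such differences, check the group sizes: a large gap in agreement rates means little if one group cast only a handful of votes.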
Comparing time periods:
Compare stages: If the conversation runs in phases (e.g., initial consultation vs. feedback on a draft), compare how themes shift over time
Temporal analysis: Use timestamps to track how topics emerge, peak, or fade over the course of the conversation
When to use: For questions like "How are perspectives distributed?" or "How does conversation evolve over time?"
Note: The Consul Democracy integration provides tools for segmenting data by various dimensions. See the integration documentation for specific guidance on segmentation workflows.
6.5 Other analytical approaches
Depending on your questions, you might also consider:
Stance detection: Pro/con positions (may be part of topic discovery)
Representative comments: Identifying comments that best represent each topic
Network structure analysis: If your data includes reply threads or relationships between participants
Note: Sensemaking tools focus on topic discovery and categorisation. Other analytical needs may require additional tools or techniques.
7. Human Review
AI accelerates reading, but it does not replace judgment. Human review should happen throughout your analysis, not just at the end. Your role is to prevent "hallucination," ensure nuance, and make interpretive decisions.
7.1 How to evaluate topics
A "correct" topic structure depends on what decision-makers need to know and what questions you're answering. Here are a few properties that you can consider when evaluating the topic structure:
Granularity: Whether the topics are too broad or too narrow. Aim for a level where a policymaker can say, "I can assign this issue to a specific department"
Subtopics: Presence of subtopics for nuance (e.g., Parent: Public Transport -> Sub: Bus Reliability)
Representativeness: Whether the topics are representative of the data.
Cross-cutting issues: Decide how to handle themes like "Equity" or "Budget" that cut across everything—either as their own topic or as tags
7.2 Fixing common categorisation issues
Handling miscategorisation: Comments can be sarcastic or complex. If a comment saying "Great job destroying the economy" is classified as "Praise for Economic Policy," determine whether it should be manually recategorised, or whether further instruction should be added to the additional context provided to the model.
Ambiguous statements: If a statement doesn't fit clearly, consider if it belongs to a new "Emerging" category or if it's truly noise
7.3 Reviewing summaries responsibly
Spotting drift: Ensure the summary hasn't "drifted" away from the original text. If the comments say "The park is unsafe at night," but the summary says "Residents want more lighting," that is an inference, not a summary. Verify if the inference is supported.
Checking for hallucination: Sometimes models invent details. If a summary mentions a specific location or statistic not found in the source comments, flag it immediately
Oversight levels: Review 20-30% of comments for high-risk policy decisions; 5-10% for general feedback --- TODO: a reference or a different suggestion to back this up?
7.4 Keep a decision log (audit trail)
Topic naming, merging, and splitting are interpretive decisions. A simple decision log makes your analysis more transparent and defensible.
Record key changes: When you merge or split topics, or significantly rename one, note what changed and why
Capture disagreements: If analysts disagree on where a tricky comment belongs, note the rationale and final decision
Link to outputs: Refer to this log when writing your methodology section, so readers understand how the structure evolved
TODO: Mention how job runs in the admin area are recorded and can be referred to later
Example: Refinement in Action
Input Data: 5 comments on a new park (some praising the playground, others complaining about traffic).
Initial AI Output: "Parks and Recreation" (Summary: People like the park.)
Human Refinement: The analyst notices tension. While people like the park, they hate the access. They split the topic.
Refined Output:
Park Amenities: Residents appreciate the playground.
Traffic & Access Safety: Strong concerns about parking and safety.
Result: Two actionable insights instead of one generic summary.
TODO: Expand on refinement example
8. Principles for Good Civic Sensemaking
8.1 Traceability to source comments
Every insight must have a clear lineage back to source text. A policymaker should be able to click a summary and see the real human comments that generated it.
Sensemaker's summaries include citations that link back to source comments. In web reports, these citations are interactive—readers can hover or click to see the original comment text and vote counts. Always verify that citations are accurate and that the cited comments genuinely support the summary claims.
8.2 Representing minority voices
Don't just report the loudest voices. Highlight small clusters that might contain critical, novel ideas.
8.3 Avoiding oversimplification
Sensemaking ≠ Summarising. Summarising reduces information; sensemaking structures information so it can be understood without losing critical nuance.
8.4 Checking for bias & distortion
Ensure the tool isn't favouring one type of language (e.g., formal vs. colloquial) or hallucinating consensus where there is division.
TODO: Consider putting a note on language earlier on, e.g. in the exploratory or data preparation stage. For example, in a Dundee consultation, references to "circles" in the city refer to roundabouts, not road markings.
TODO: Consider adding a section about using the "additional context" area to provide course correction to the model.
8.5 Iterating as new data arrives
Sensemaking is dynamic. As new comments come in, re-evaluate if your topic structure still holds or if new themes are emerging.
8.6 Ethics, safeguarding, and quoting participants
Working with civic input involves responsibilities beyond technical correctness.
Sensitive content: Be careful when quoting comments that include personal stories, trauma, or identifiable details; paraphrase or mask specifics where needed
Consent and expectations: Respect the terms under which participants contributed (e.g., whether they were told comments might be quoted publicly)
Avoid harm through framing: Do not frame minority positions as "noise" or "fringe" if that could stigmatise already marginalised groups
9. Reporting and Communication
Convert insights into actionable findings tailored to your stakeholders (policy teams, researchers, general public).
9.1 What to include in a findings report
Executive Summary of key themes
Detailed breakdown of each topic with representative quotes
Quantitative distribution (how many people said what)
Common Ground and Differences of Opinion: Where vote data is available, highlight areas of consensus and division
Methodology note (how AI was used and how humans reviewed it, including reference to your decision log)
TODO: Check back on the above as I'm not sure what's fully expected of reports
Web report generation:
Use the Report tool (see Section 2.3) to generate interactive HTML/webpage reports that make it easy to explore findings. These reports include hoverable citations, navigable structure, and can be exported as a single HTML file for easy sharing. Web reports are useful both during exploration and as final deliverables for stakeholders.
9.2 Structuring insights for decision-makers
Signal vs. Noise: Highlight consensus (where everyone agrees) and divides (where opinion is split)
Actionability: Frame topics in a way that aligns with departmental responsibilities (e.g., "Park Maintenance" vs "Traffic Safety")
9.3 Communicating uncertainty & limitations
Communicate uncertainty: If a theme is based on only a few comments, say "A small group suggested..." rather than "The public wants..."
Don't overfit: Fewer, clearer topics are often better than hundreds of fragmented ones
Include ethical and uncertainty considerations: Be transparent about limitations, sampling, and any groups that may be under-represented
9.4 Pre-publication checklist
Before sharing any report, quickly run through a small set of checks:
Drift check: Do a final spot-check of summaries against source comments
Hallucination check: Remove or correct any invented locations, numbers, or policies not grounded in comments
Minority voice check: Confirm that important small clusters are at least acknowledged, even if briefly
Structure check: Ensure topics are not so fragmented that they confuse more than they clarify
Traceability check: For each key finding, confirm there are clear example comments or links back to the data
Appendix: Using Sensemaker with Consul Democracy
Consul Democracy offers various processes for participation. The integration supports analysis of different target types. This appendix explains what is being analysed (comments vs. proposal/investment texts) and how Sensemaking tools can be used for each.
Analysing Comments
For these target types, Sensemaker analyses the comments that users have written, using the target (poll, proposal, debate, etc.) as context to improve understanding.
1. Poll
What's analysed: Comments written by users about the poll.
Context provided: Poll questions, response options, and vote counts are included in the context to help Sensemaker understand the poll structure.
Sensemaking Use:
Understand the themes in comments about poll questions.
Identify concerns or additional perspectives that weren't captured in the poll options.
Goal: Gain deeper insight into public opinion beyond the structured poll responses.
2. Citizen Proposal (Single Proposal)
What's analysed: Comments written by users on a specific citizen proposal.
Context provided: Proposal title, description, summary, and vote counts are included in the context.
Sensemaking Use:
Analyse comments to understand public feedback and concerns about a specific proposal.
Identify areas of support, opposition, or suggested improvements.
Goal: Understand how the public is responding to a specific citizen proposal.
3. Citizen Debate
What's analysed: Comments written by users in response to a debate.
Context provided: Debate title, description, and vote counts (for/against) are included in the context.
Sensemaking Use:
Cluster comments into "For," "Against," and "Nuanced/Alternative" perspectives.
Identify emerging arguments that weren't in the original debate text.
Goal: Produce a "Debate Digest" that summarises the state of the discussion for latecomers.
4. Collaborative Legislation Debate
What's analysed: Comments written by users on a question within a collaborative legislation process.
Context provided: Process title, question text, response options, and answer counts are included in the context.
Sensemaking Use:
Analyse comments to understand public concerns about specific legislation questions.
Identify themes in feedback that may inform the legislative process.
Goal: Help policymakers understand public sentiment and concerns about proposed legislation.
5. Collaborative Legislation Debate (Segmented by Option)
What's analysed: Comments from users who selected a specific response option (e.g., "Strongly Agree")—filtered to only include comments from those users.
Context provided: Process title, question text, the specific option selected, and a note explaining the filtering.
Sensemaking Use:
Understand the reasoning behind a specific choice (e.g., why people selected "Strongly Agree").
Identify common themes in comments from people who made the same choice.
Goal: Gain insight into the values and reasoning that drive specific legislative preferences.
6. Collaborative Legislation Proposal
What's analysed: Comments written by users on a proposal within a collaborative legislation process.
Context provided: Process title, proposal text, and vote counts are included in the context.
Sensemaking Use:
Run analysis per proposal to see specific feedback on draft text.
Group feedback into "Drafting suggestions" (grammar/wording) vs. "Substantive objections" (policy intent).
Goal: Help legal drafters quickly identify which proposals are controversial and why.
Special Cases: Analysing Proposal/Investment Texts
For these target types, Sensemaker analyses the texts/descriptions of proposals or investments themselves (not comments on them), treating each proposal/investment as a "comment" in the conversation.
7. All Citizen Proposals
What's analysed: The title and description text of all citizen proposals submitted to the platform (not comments on proposals).
Context provided: A note explaining that these are citizen proposals submitted to the platform.
Sensemaking Use:
Use semantic similarity to find duplicate or related proposals (e.g., 50 different people suggesting bike lanes).
Thematic Grouping: Cluster proposals to see demand by category (Environment, Housing) before manual categorisation.
Goal: Reduce the "noise" of duplicate proposals and help the analyst gauge the volume of demand for specific themes.
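As a crude stand-in for the semantic similarity described above, a lexical similarity check can surface near-identical wording. Sensemaker uses semantic similarity, so this sketch only catches close rephrasings; treat it as a first-pass illustration, not a replacement.

```python
from difflib import SequenceMatcher

# Illustrative proposal titles; the 0.75 threshold is an assumption.
proposals = [
    "Build protected bike lanes on the high street",
    "Protected bike lanes for the high street",
    "Open a new public library branch",
]

def similar(a, b, threshold=0.75):
    """Lexical similarity only: misses paraphrases that semantic methods catch."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

pairs = [
    (i, j)
    for i in range(len(proposals))
    for j in range(i + 1, len(proposals))
    if similar(proposals[i], proposals[j])
]
print("possible duplicates:", pairs)
```

Flagged pairs are merge candidates for human review, not automatic merges: "bike lanes on Main Street" and "no bike lanes on Main Street" can look lexically similar while meaning the opposite.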
8. Participatory Budget
What's analysed: The title and description text of all investment proposals within a budget process (not comments on investments).
Context provided: Budget name and phase are included in the context. Each investment is converted into a comment-like item, with vote padding to ensure all investments are included.
Sensemaking Use:
Analyse investment descriptions to understand the values driving requests (e.g., are people asking for parks because of health or social connection?).
Identify thematic clusters of similar investment proposals.
Goal: Align budget allocation not just with project counts, but with the underlying community needs expressed in descriptions.
9. Participatory Budgets by Group
What's analysed: The title and description text of investment proposals within a specific budget group (e.g., "Environment", "Transport")—not comments on investments.
Context provided: Group name and parent budget information are included in the context. Each investment is converted into a comment-like item, with vote padding.
Sensemaking Use:
Understand themes within a specific budget category.
Compare priorities across different budget groups.
Goal: Gain deeper insight into community priorities within specific budget categories.