Questions public tech needs answered

Happily, the civic technology movement includes many smart people, building and researching many important and interesting things. But I notice that design research about "cross-cutting issues" happens less frequently. In other words, we often lack practical, data-driven recommendations for designing solutions to problems many governments (and the organizations that serve them) share.

This list is not at all comprehensive, and is updated periodically.

Public-value-focused security

Governments spend lots of time and money building computer security strategies and tools to prevent break-ins and breaches. Although certainly needed, these strategies have inadvertent effects, like making it harder for people to access some systems or eroding their privacy. These trade-offs are inevitable, but could governments make them more deliberately?

  • Models and examples. What are model government security plans, strategies and approaches with goals other than just national security and infrastructure protection?

  • "Mind openers." What are the most effective tactics for encouraging security decision makers to consider outcomes beyond just data loss and prevent?

  • Measurement. How can we measure outcomes of security programs beyond compliance and breach prevention? How can those "key performance indicators" incorporate other outcomes?

  • Tools, simulations, frameworks. What decision-making tools, simulations, or frameworks could government security professionals use to weigh the trade-offs and inadvertent effects of their security decisions?

Impact measurement, especially for "digital services"

The "measurement problem" continues to hound digital service teams inside government, non-profits and others good-doing programs. Gathering evidence that a program is effective, particularly that its _outcomes_ are worth the cost, as as hard now as it was 10 years ago. As a result, it is still difficult to justify investment in these programs, particularly as budgets shrink. How might we demonstrate evidence for our work?

  • Models and examples. Who has measured the outcomes and impacts of government digital services well? Are there any generalizable lessons?

  • Frameworks. Are there particular categories of metrics that most governments should consider? Processes for breaking the "metrics problem" down into pieces?

  • Tools. Are there ways to automate or ease the tedious data gathering that often comes with impact measurement? Can those tools scaffold more thoughtful measurement?

Algorithms, automation and government service design

Whether we like it or not, government departments will increasingly depend on algorithms, automation and other forms of artificial intelligence to deliver services. The hazards of this change are well-documented; we need to equip service designers and builders with the right tools to automate services well.

  • Design methods and patterns. What concrete design methods can designers use to mindfully and ethically include algorithms and AI in their government products? How can these methods build on existing ones? Are they any different?

  • Design patterns for transparency. How can we explain automated decision systems (and their results) to people receiving government services? How do these differ from existing government design patterns?

  • Template decision automation that's ethical by design. Can we provide simple, ready-to-go systems that help civil servants speed up decisions without compromising ethical standards? Is it possible for such a generic, re-usable system to be both useful and safe?

Culture change

At 18F, we often (half) joked that our work was "culture change disguised as service delivery." But often, it feels like our only method for culture change is building things. Are there other approaches? Can we learn from other fields?

  • Change management lessons learned. Can public tech learn lessons or best practices from the broader civic tech literature? What might they be?

  • Team architecture. If your goal is changing the culture of your agency, what's the best way to organize your team? Is there an arrangement that is both realistic and sustainable?

Ecosystem design

Some government services are delivered via many different organizations and actors working together. Many of our design best practices assume a service is controlled by a single organization. But what if there are unavoidable handoffs? How do we design for organizations working together to deliver a service?

  • Design patterns for handoffs. How can we best prepare people using government services to be jolted from one system to another? Is there any way to ease the inevitable confusion?

  • Do goal-based "road maps" really work for users? Does giving someone a set of instructions (or a map) for how to meet a goal involving multiple agencies help them? Or does this assume people move more linearly than they actually do? Are there alternatives?

  • Non-centralized, user-initiated data hand-off between organizations? Is there a way for users to choose to "hand off" bits of their information from one organization to another? (Without storing it all in some central warehouse vulnerable to attacks and snooping.) Are there any services this would actually help?