Jan 14

How Do We “Immunize” Society Against Technology Futures We DON’T Want?

Recently The Guardian published an interesting critique of the TED Talks series by Benjamin Bratton that I’ve been thinking about since I read it. The piece asks what good it does for TED to take extremely complex topics and boil them down into 20-minute presentations, which a certain segment of people views as infotainment, when not much then gets done about the issues being discussed. I think it’s an interesting critique, and as someone who organizes technology conferences, I often worry: if we all just come and do a lot of talking and not much afterwards, what purpose has the conference really served? I’d be interested to hear others’ thoughts.

Beyond the critique of TED Talks, however, there were two lines in particular that really struck me:

Because, if a problem is in fact endemic to a system, then the exponential effects of Moore’s law also serve to amplify what’s broken.

And the concept of not just innovating but also “immunizing” society:

The potential for these technologies are both wonderful and horrifying at the same time, and to make them serve good futures, design as “innovation” just isn’t a strong enough idea by itself. We need to talk more about design as “immunisation,” actively preventing certain potential “innovations” that we do not want from happening.

Regarding the exponential effects of Moore’s Law, I’ve written before that I think our public institutions (government, academia, social structures) aren’t just failing to keep pace with changes in technology, but that the technology itself is amplifying their (our) failures.  Wherever a gap existed before the information age, now it’s becoming a gulf (think income disparity, socio-economic mobility, access to real political power).

Whatever minor systemic failures or bureaucratic quagmires crept in during the industrial age are turning into full-blown catastrophic disasters in the information age. See the US Congress or our public education system for stark examples: both represent not just a failure to adapt to a changing world, but cases where technology is amplifying the ills inherent in those systems, with truly catastrophic results – a Congress that has gone from dysfunctional to not functional at all, and a public school system that is failing the very students it was designed to help – the poor, the underserved, the first-generation students.

We talk and read about “disruptive innovation” every day in the tech and business press, but often it’s in the context of “creative destruction” as some new business model or product displaces an old one, and in general that’s seen as a positive outcome in a “free” market system. But for public systems and institutions – those public goods that have no profit or market incentive – this amplification of the broken is really very scary to me, and I am not at all convinced that privatization of public systems is the answer (which is why I don’t support charter schools or for-profit education businesses, no matter how innovative they promise to be – MOOCx blah blah blah).

The most important things in life can’t be quantified in dollars and we can’t “innovate” a business model or technology solution that changes that basic fact.

So where does that leave us?  I’m not sure, but I’m intrigued by Bratton’s concept of “immunizing” society against the futures we don’t want, and I’m wondering just how we might go about doing that.  Bratton says:

Problems are not “puzzles” to be solved. That metaphor assumes that all the necessary pieces are already on the table, they just need to be rearranged and reprogrammed. It’s not true.  “Innovation” defined as moving the pieces around and adding more processing power is not some Big Idea that will disrupt a broken status quo: that precisely is the broken status quo.

… and I’m inclined to agree. I think those of us who consider ourselves technology evangelists and futurists need to think long and hard about these questions.

As a practical step, perhaps one way to help “immunize” society against the technology futures we don’t want would be to make sure that every talk we give, every presentation, every slide deck (or Prezi or whatever), every workshop has a section about possible NEGATIVE outcomes of the technology we’re discussing, and what we could or should do to avoid them. If we’re going to spread the word about new tech, don’t we have a responsibility to also discuss the possible negative effects? Perhaps as conference organizers and workshop planners, we need to include not just positive visioning, activities, and keynotes, but also sessions that specifically address the possible negative outcomes?

I’m not sure, but it’s something I’m thinking about and want to keep in mind.

Oct 13

This is an awesome paragraph about what ISN’T said (often enough) about workplace culture

Culture is about power dynamics, unspoken priorities and beliefs, mythologies, conflicts, enforcement of social norms, creation of in/out groups and distribution of wealth and control inside companies. Culture is usually ugly. It is as much about the inevitable brokenness and dysfunction of teams as it is about their accomplishments. Culture is exceedingly difficult to talk about honestly.

Read the full post, which is about startup culture specifically, but I thought this paragraph was very insightful and applicable across organization types.