Davi Ottenheimer

Date published: 9/14/2020
Davi Ottenheimer, VP of Trust and Digital Ethics, Inrupt
Follow Davi on Twitter: @DaviOttenheimer
Flying Penguin Blog: https://www.flyingpenguin.com

This is the first podcast where I’ve been able to dig deep into the role that ethics plays in innovation, capitalism and cyber resilience. The guest on this episode delivers a clear and compelling argument that ethics is security, that Facebook should not only be broken up but dissolved, and that keeping markets free means removing things that harm the marketplace. Also in this episode, we take a look at the role that ethics officers play in business, and thoughts on the California Consumer Privacy Act of 2018 and the CPRA. And the most provocative question of all: Is the US still in a civil war?

Today my guest is none other than Davi Ottenheimer. Davi currently serves as VP of Trust and Digital Ethics at Inrupt, where he is focused on cultural disruptions of emerging technology and the ethics of defense. With over 25 years of experience in global security operations and assessments, including digital forensics and incident response, he is currently working on the forefront of adapting and applying security models to preserve human rights and freedoms, helping to ensure a brighter future on a trusted and respected web.

TRANSCRIPT

Words: 8827
Average time to read: 44 minutes

Steve:

Davi, this is the Nonconformist Innovation Podcast, and I think it’s appropriate that we explore some sensitive and urgent topics like the erosion of our privacy and our democracy in front of our very own eyes. I’ve been following your work for many years, and needless to say, it’s a pleasure to have you on the Nonconformist Innovation Podcast.

Davi:

Oh, thanks for having me, Steve. I’m very appreciative to be on your podcast.

Steve:

Help the audience by answering the question: who’s Davi Ottenheimer? A historian, technologist, iconoclast, all of the above? I’m sure, no doubt, you’ve put some thought into answering that question, so-

Davi:

It’s an interesting question because sometimes people ask me my career path, and I tell them I don’t know, and people say, “What would you recommend for somebody who wanted to do what you’ve done?” Again, I’m like, “I’m not sure.” I think what I’ve learned from others who have been very successful is they start with a very firm grounding in general topics. So being a humanist would be amazing. I’m an aspiring humanist. Take science, for example. There’s a crazy statistic that the number of Nobel Prizes awarded to the British went down after they formalized the study of science in schools. Before, when people just learned the classics, Latin, for example, and then played a lot, when science was a game and an exercise of fun, they actually earned more awards.

So hacking has that sort of tradition to it. Why don’t you go and learn music, become a musician, and then try to solve problems in identity management? So I see myself as a humanist in that context, or I’m trying to improve the human condition. That’s also why I was lucky to go to the London School of Economics, which had a very, very strong social-good orientation around study, and I studied history.

Steve:

I think that’s awesome. I spent the first 15 years of my career being a technologist, and it wasn’t until recently that I had an awakening, if you will, to… I’ve been working on this technology, but am I really solving any kind of problems that matter? So I’d love your take on solving human problems, especially in this day and age when human problems really are desperate to be solved. Last time we talked, you mentioned the book you were writing, the realities of security for big data, or securing big data, and that you’d shelved the project. That was an interesting response that I wasn’t expecting. Now you’re thinking about or writing a different book and going in a new direction. So it seems COVID has given you plenty of time for reflection. What’s on your mind lately with regard to that and the direction of your new work relative to current world events?

Davi:

Well, I’d say the turning point came in around 2015 or ’16. I worked inside some of the biggest big data environments in the world, on some of the toughest problems. What I noticed was that the solutions at the security level, the technical level, were misaligned. So for example, as I figured out and gave presentations about, a lack of patches or bad randomness meant keys could be easily discovered or broken, and the whole thing would fall apart. Or the organizational management you might find in an ISO standard of operational security, the sort of thing the CSOs of the day were working on, did not align at all with the uses of the technology.

So it felt very much like we were building the next atomic bombs and nobody was thinking about what happens when you drop them. That’s where I started to say, “Hey, Facebook is like a monarchy. It’s like an oligopoly. It’s like the worst forms of political governance ever in history. It’s not a tech firm. It’s literally trying to create a situation of leverage that hurts people. Since it has that genesis, how does a security department work within it?” One of my 2016 presentations laid it out just like that.

Over history, thousands of years, we’ve moved in a certain direction. These companies take us back to pre-Magna Carta days. So that’s been on my mind for a long time now, and I’ve amassed lots of examples. Just to give you an example of the difference: in 2013, ’14, I was giving talks that showed pandemics are a big, big problem. Big data should be able to solve pandemics. Chemical warfare is a big problem. Voting is a big problem. These all should be solved by the security community by looking at integrity. This is 2013, ’14, ’15. Nobody was working on that. It was devastating to me to find out that people who were taking CSO roles were thinking about availability and encryption for confidentiality only, and leaving integrity on the table. That failure was realized in all of the atrocities that we’ve seen, and as much as I’ve tried to warn people, I think only now are people beginning to see how integrity is so vital to security. They just completely dropped it.

Steve:

You brought up a good point in your recent post about Facebook being a digital slavery plantation, which is what ultimately led me to say, “Okay. I’ve got to get Davi on my podcast.” So everybody knows now that when it’s free, you are the product, right? Your blog post was very fascinating. You say “fail faster” turns out to be just fail without accountability, which turns out to be just a privilege to do known wrongs to people and get rich. It seems like it’s getting worse. How do you see this going if we don’t introduce legislation? I mean, do you think Facebook needs to be broken up? Is legislation going to stop them from continuing to do harm at scale?

Davi:

Yeah. There’s no question in my mind that breaking up Facebook is the right thing to do. Even more right would be just dismantling them altogether, shutting them down. I would basically turn them off. I don’t think that they provide good. I don’t think that the people who work there are working on good. I think that primarily they’re working on getting rich, and they’re harming people to profit from it. I think it’s hard for everyone to see themselves as a villain. We always see ourselves as the hero in our own story. We find this in a lot of criminal studies, that crimes are committed by people who say, “Well, I’m doing this because of some good,” which they superimpose upon their situation. But if you take an externalized view, inherited rights or inherited ethics, as they called it in the 1700s, you stop this sort of perception of ultimate right, this God doctrine, which is what they’re doing. They see themselves as kings of the hill, and they can do what they want.

You see some of this in other firms as well. Microsoft, should they have been broken up? They recovered because of government action against them. Their hubris was removed. I don’t think Facebook is in a position to have that kind of sea change. Microsoft did. Amazon is in some of the same waters, constantly trying to tell people it’s our way or the highway. Again, I see Amazon as potentially being able to make that change. Google has tried repeatedly to make that change, so kudos to them for attempting it, but they’ve failed miserably, many, many times.

So, I think Microsoft is the best example of where you don’t have to break up a company. You can just give them what the future should look like, and they adopt it. I don’t think Facebook is capable of that at all.

Steve:

Yeah. That’s a great point that you bring up. Recently, I was watching an interview with Bill Gates where he remarked that he didn’t have a home or an office in Washington, DC, as if taunting the regulators. Until the antitrust case ran its full course, there was still some arrogance there, and Bill Gates was a younger man than he is now. But last October, there was an article where Bill Gates said that government needs to get involved to regulate big tech companies. You see the ones that you’ve mentioned that collectively, and even individually, have more power than governments do, which is the scary aspect of it, especially when you have companies that so blatantly disregard and abuse data, right?

So there seems to be sufficient evidence of how Facebook was used by the Russians meddling in the 2016 elections. I’m not convinced that it’s not continuing to happen now, in an election year. With the current state of politics being what it is, can we rely on government to step in and make some change, even if, let’s say, Biden comes into the White House in 2021? Are you optimistic about that? If not, what are the alternatives that you see?

Davi:

So am I optimistic about that? I’m optimistic about what I see as people recognizing the harms and wanting to work to fix them. But I’m somewhat in a bubble, in that I’ve inserted myself into communities that are working on fixing these things. I also try to keep a toe in communities that don’t believe there is a problem, and they still seem very committed to perpetuating harms. In fact, I just saw a colleague join Facebook, several colleagues join Facebook, to fight the legal battles. I’m seeing more and more of this, that they’re armoring up, you might say, to continue their injustices by hiring experienced lawyers who can game the system.

Steve:

You had a few words about that, right? [laughs] Friends don’t let friends work at Facebook. What was your advice or feedback about that?

Davi:

Well, I continue to say that. I remember a friend who was working at Apple said they wanted to go to Uber, and it was when Uber was just starting. I laid out to them how unethical and dangerous Uber was, and I said this would be a career change that would be unethical. You’d be working on an unethical project, for harm. Uber turned out to be exactly that. So I was happy about warning them, but they went anyway. They made a lot of money. They have a very decorated resume, being very successful. You might say, like some people say, Rommel was a decorated general of the Nazis, which is not entirely true. Rommel actually wasn’t a great general. That’s a false narrative that I feel a lot of people have perpetuated for harmful reasons, and I’m happy to go toe to toe with them about how bad Rommel was as a general, including killing himself in the end, under orders, so that his family would live, because that was the blackmail he was under.

So this is what I see, people going to work at Uber, terrible decision, and then they went to work for Facebook after. It’s troubling because it’s like seeing people go take an appointment in North Korea to work for a dictator or going to work for a known human rights abuser, like a mercenary. I have friends who go as foreign legionnaires, for example, and they may work for the wrong side of a conflict on human rights violations. You try to put a stop to it, but it’s difficult. It’s definitely difficult to see people make the wrong choices in life.

Steve:

You’re sharing this perspective as, I wouldn’t say taking the moral high road, but maybe just a perspective of concern for society. I think it’s really interesting that there are businesses that… Salesforce, for example, has an ethics officer. Your role as a data ethics officer is a fascinating one. Not all companies have them, even a lot of them that should. So I wanted to know, why is it important that companies and society at large make data ethics a consideration, and what types of companies need to not only consider having a data ethics officer but actually have one, when they might consider it a cost on their balance sheet?

I’ve always seen IT as an exercise in economics

Davi:

Well, I’ve always seen IT as an exercise in economics. So it’s the allocation of resources within the company, and the CFO position is already essentially the economist in the company. So IT is an extension or derivation of economics. Who gets shared time on the mainframe? Who gets access to the cloud? Who gets more memory? Who gets more storage? IT is really functioning as economics. And out of that, I realized that security is ethics because the question isn’t just who should get the memory on a strict economic scale because they’re going to make things, but who should get it from a moral or an ethical stance? So when you see security as ethics, then you realize that the decisions you should be making are really about the use of that technology. This to me is the shift in the industry.

Also, to that point, I found that people in security tend not to be ethical. Sometimes it’s actually quite bad. You’ll see chief security officers, for example, who realized that they could adopt the title with very little or no relevant experience at all, and then they could get a lot of money, and they could direct company resources and make out quite well themselves, while not being ethical. So that to me was the fundamental difference. In security, being unethical is a terrible situation for companies because, again, back to economics, it’s a huge cost. So as an ethicist, you’re really trying to predict, as an innovator even more to the point, where things will go wrong, and trying to protect the company’s bottom line.

ethics is really about security, and it’s a better form of security

To be successful, you would say, for example, that if you don’t patch now, it’ll be much more expensive to patch later, or you’ll be breached, and then it’s extremely expensive with all the regulatory fines, but also the embarrassment and customer loss. Likewise, you would say, if you make this mistake now, you may have a huge ethical crisis on your hands. So maybe the product could be pulled entirely, which we’ve seen over and over again in AI. So ethics is really about security, and it’s a better form of security than what you get from a lot of security purists, and it really makes IT, or information technology, more successful. It makes the company more successful by being better at economics.

Steve:

Davi, that blows my mind. It seems like I’ve not been clued into that kind of insight for more than half of my career, and I have huge regrets because of it. I wasted so many years trying to solve the wrong problem. I really want to spend time on this because I think this is really where things get interesting from a Nonconformist Innovation standpoint. The paradox is that what is good, right, and logical for the corporation is not good, right, or logical for the economy as a whole, and there’s a movement that’s been discussed many times at the World Economic Forum and in a recent book by the CEO of Salesforce. Marc Benioff wrote a book about the distinction between shareholder capitalism and stakeholder capitalism, and there’s a renewed interest in changing the balance because he argues that the current form of capitalism today is broken and not working. What do you think is needed to get organizations to shift from bad to operating in a more ethical and consumer-friendly manner? What more can we do to really help good be successful?


Davi:

There are two things here. One, I’m very passionate about the fact that markets function best when people are well informed and they’re free. So keeping a market free means removing things that harm the market. I think people have to have a willingness to see harm. So cognitive blindness is a real, real problem. For example, I’ve written recently about the African Prince phenomenon, which we researched for a long time, advance-fee fraud from Africa. The people who fall for it tend to be very intelligent. They’re not stupid, but they fall for it because they’re racist, or they have no idea how Africa operates. They believe something to be true because they’re ignorant in a very, very particular, narrow way.

cognitive bias is one of the biggest problems I see

Antifa has the same effect. Very, very intelligent people believe antifa exists and is a threat when it is not. It is basically a non-threat, and they ignore all the real threats because, again, they have ignorance about a particular, very narrow sector. So cognitive bias is one of the biggest problems I see. For example, if you’re raised in a racist household, you have a lot of strange ideas about fairness in the market. You won’t hire a person of a color different from yourself because of how you were raised. And this is global, right, not just the United States with whites not hiring blacks. People have opinions about others because they’re not exposed to reality. So their blindness harms the market.

So the first thing is really to get people to a place where they can accept information in a way that lets them really make the market freer. The second thing is that government exists and needs to get involved in regulation, because it is the lack of regulation that allows these cognitive biases and harms to manifest into oligopolies and huge, huge, dangerous corporations. A lot of people mistake the Boston Tea Party for a rebellion against taxation. But it is not. What the colonists really objected to was this super-powerful middleman who was taxing and regulating their ability to function in the market. They were rebelling against the Facebooks and the Microsofts. That’s such an important understanding of history, because what we’re seeing today, like the railroads, like all through history, is people getting upset that there’s this private actor who’s taking out of their pocket without representation.

Now, I say that because so many times in Silicon Valley, and I’ve been here two decades, I see young, excited white men come and say, “How can I get my hand into everyone’s pocket?” I just think, “Are you so ignorant of history? Do you not understand the Boston Tea Party? Your job here is not to just find a way to tax everybody.” For example, somebody said they wanted to take every parking spot in San Francisco, block it, and then charge people to use it. Government needs to step in to prevent that from being seen as innovation. Innovation is preventing harms. It is preventing costs. It is lowering barriers. Innovation is not creating barriers.

So that inversion is one of the problems, and that’s why ethics is so essential. Having a moral sense or a moral code and a grasp of history is so essential to changing the market toward a better-functioning one.

Steve:

Yeah. In your excellent talk at RSA earlier this year, which I didn’t get to attend but watched on YouTube (it’s on YouTube for anyone who’s interested in seeing it), “Breaking Bad,” you showed we’re seeing frightening conditions today in big tech. By the way, it must cause a complete existential crisis to think that we have a president in power who actually tries to create these conspiracies and who is not doing anything to protect our economic and political freedoms. You see that whole power equation that’s out of balance. In your presentation, Davi, you explored the origins of the use of the term “breaking bad” and a pretty grim definition of character flaw. If you can be blunt, are we headed for an actual civil war, or is a civil war going to be fought based upon ideologies in cyberspace? What are the potential future conflicts that you see, A, that can be avoided, and B, that can arise if we don’t have clear heads that prevail?

Davi:

There are several problems with this topic, and a lot of it has to do with terminology and semantics, right? What is war? Are we already in a state of civil war, for example? Is war simply the continuation of politics by other means, and in particular, information warfare? So I gave an example recently of the 1917 Battle of Beersheba, where a saddlebag was dropped with bad information, which the opposing side picked up and then aligned themselves around, which allowed them to be defeated. Or another way of putting it is that Clausewitz would be very familiar, or Sun Tzu would be very familiar, with the current state of affairs, the information warfare we’re going through.

The politics weren’t working. So you started waging war.

I mean, this has been my whole study of history: understanding the past to understand the present. Sure, not just disinformation, but the whole concept of war being a changing of the landscape of morality to where you can do extraordinarily harmful things because you’re trying to get to a different state, right? The politics weren’t working, so you started waging war. I would say the United States in that sense has been in a civil war, is in a civil war. You see people blatantly lying, blatantly saying they’re going to do something and then doing the opposite, which is camouflage or deception.

You see people stating harms about their opposition which are in fact the things they’re doing themselves. In a sense, it’s like the civil war didn’t end. When you look at the Slaughterhouse Cases right after the civil war ended, the United States let out of prison people who had fought viciously against freedom, who wanted to perpetuate and expand slavery. Once they got out of jail, and the Slaughterhouse Cases are a good example of this, they immediately started trying to undermine the rights of black Americans. So you have, literally right at the end of the civil war, the rise of the KKK, up into the 1870s. President Grant gets elected in a landslide with black voters. So immediately the KKK is going around trying to disenfranchise, to get rid of, black voters.

Woodrow Wilson removed all the blacks from government. He was elected on the principle of representation. The blacks voted for him because he promised them better representation, and then he turned around and took it all away from them, blatantly. He restarted the Klan again. So I think the definition of civil war needs to be expanded, not just to the muskets of the 1860s, but back to the Jacksonian times of the 1830s, when Jackson waged all-out war on America to perpetuate slavery. Andrew Jackson was a terrible, terrible human, and he’s the role model for the current occupant of the White House.

So that should tell you everything, that we’re trying to continue the harms of the 1830s, which as an information security exercise is useful because encryption really took off in the 1840s in America and then was used extensively in the civil war. So we’re just repeating the mistakes of history.

Steve:

The tools on the battlefield have changed. So your background in economics is, I think, not only fascinating but interesting to bring to bear on this conversation. You’re no doubt familiar with the economist Milton Friedman, who argued that monopoly is a concentration of power by a firm that has sufficient control over a particular product or service to determine significantly the terms on which other individuals should have access to it. So today we have talked a lot about oligopolies that actually act like monopolies, right? With that tacit collusion, or in some cases they may act like cartels, and it occurs right in front of our very eyes, out in plain sight. It’s illegal tacit collusion that has been normalized and rationalized today.

Davi:

Yeah. One of the weird inversions, though: the telegraph, for example, would be a natural monopoly in theory, because you have to lay physical lines the way fiber is being laid today. But it isn’t really a monopoly when you look at the way the telegraph worked, right? Barbed-wire fences, few people realize, were made from the same metal as telegraph wire. That’s where barbed wire came from. That’s why it became so inexpensive: telegraph wire, galvanized wire, was becoming so plentiful. So the ranch and farm operators just undid their barbed-wire fences and connected them to the telephone lines, and they could literally become their own providers. They were hacking into the lines.

You had the same sort of freedom in 1968 with the Carterfone. Here’s a rancher. Again, rural communities tend to be the most prolific innovators, out of necessity. He connected wireless to the phone lines, and the monopoly said, “Hey, you can’t do that. We’re supposed to be a monopoly.” The courts, the government, stepped in and said, “This is good for the market.” And thus the internet was born out of the Carterfone, because you could attach things like modems and faxes to the monopoly’s network.

So when you look at the 1880s, and you see the railroads and the telegraph and all this stuff being laid down, I think a better way of looking at it is… I love the way urban studies work, transportation in particular, because they talk about the last mile. What you really want is a flourishing last mile, which is like the ends of the bell curve. Think about a bus or a train that takes you on a standard route. The last mile is completely open to innovation, and that feeds back into the sort of standardized shared network that you’ve put down out of necessity.

In my book, I actually found this breaks down in every form of science I’ve researched or studied. You either have an easy routine, a minimal-judgment path, which everybody benefits from because it’s so ERM, easy routine and minimal judgment, if you will. Or you have one that’s taking in a lot of information, doing a lot of acquisition. It’s storing all sorts of data. It’s constantly evaluating, constantly adapting. I think a lot of people think the big firms are supposed to do the latter. But in fact, you find it’s the small innovators, the small startups, that are doing it. So again, if you think about the Boston Tea Party, if you think about the railroads, if you think about the telegraph, what you find is that the natural monopolies that start to form actually try to hold back individuals through unfair practices.

The artificial nature of power is used, in a cognitive-bias way, to disadvantage people who could be innovative. So what you find is white men running Lyft, for example, who apparently refuse to hire any blacks. Lyft is supposed to be a transportation network that’s innovative, but it copies black communities. I mean, it’s literally doing what black communities have done for generations, including in the South of the United States, and then removes all the blacks from the picture. So that’s not innovation. That’s just appropriation, and then trying to position themselves as a natural monopoly, which is false. That’s where government has to step in. By not stepping in, government allows this. Facebook wouldn’t exist if Harvard had held Zuckerberg accountable when he committed his first crimes.

Steve:

That’s such a great point. What makes it even harder, Davi, is that you have people like Warren Buffett who say, “In business, I look for economic castles protected by unbreachable moats.” That’s his investment strategy.

Davi:

Yeah. Well, this is a pet peeve of mine, that a lot of Silicon Valley talks about investing in things that have moats that keep people inside. That’s not what a moat is. That’s a prison, surrounded by water with alligators. A moat is meant to defend the people inside against outside attack. So I really bristle when I hear people misuse history in that way. They’re describing a prison and trying to make it sound like a castle.

Steve:

Going back to the Facebook blog post that you wrote some time back: now there are Facebook employees who are leaving, taking pictures of their badges, and writing sometimes not-so-friendly letters on their way out, arguing for safer, better practices from their employer, including its founder.

Davi:

An interesting perspective on that would be General Beck, who left as head of the German Army general staff. He left in 1938 and said, “This is not right. What Hitler is doing is wrong.” He is celebrated less, for whatever reason. I mean, there’s a beer named after him, Beck’s. But he’s celebrated less than generals who stayed on and did all the wrong things. I think we need to celebrate more the people who recognize wrongs early, move on, and try to do right. But in his case in particular, what’s probably most insightful for me is that he decided not to commit a coup. He decided not to force Hitler out of power because he felt he didn’t have the institutional support after the Munich Agreement. In other words, by appeasing Hitler, the British, and the international community generally, failed to move in support of overthrowing him.

Facebook will continue to do massive amounts of harm and people inside will just be facilitators.

That’s another whole complicated topic, because I believe Chamberlain was right at the time in what he did, unfortunately, and how he positioned it. But the fact that Beck and Chamberlain didn’t align on removal of Hitler is what we should think about today. Are there people in the Facebook situation who are saying we could make a change, and then leaving instead because it’s hopeless? I believe that to be the case. I believe that Facebook will continue to do massive amounts of harm and people inside will just be facilitators. I do believe we need to hold them accountable, at least the people who were running and organizing it and showing a predilection for profit over human rights. It escapes me why we’re not doing more to stop them.

Steve:

Well, we’ve spent most of the podcast, all of the podcast up until now, talking about some pretty depressing topics: the current worrisome state of politics, shareholder-dominated capitalism, and a bunch of other things. I’d like to shift gears to a little bit happier subject, which is some of the things that your own company, Inrupt, and others are working on in the form of decentralization: really rethinking the internet, rethinking the architecture of the World Wide Web, the protection of data, and so forth.

Last year, I had a podcast with Dr. Ann Cavoukian, who reminded us of a statement your own CTO and Inrupt co-founder, Tim Berners-Lee, made regarding how he’s horrified at what he created in the World Wide Web, which is a centralized model that promotes tracking and surveillance of people’s activities, keeping control in the companies that retain the data. So, regarding what you’re currently working on today: is the future decentralized, and do you think there’s enough desire for change at a grassroots level for consumers to carry or accept the burden of changing and controlling their own data?

Davi:

Oh, absolutely. I lived through the first one myself and was an active participant in it. I mean, my career changed dramatically in 1994, when I basically just started working on TCP/IP decentralized systems. At the time, there was a lot of pushback. I mean, I had to work on mainframes. I had to work on VAX/VMS. As a VAR, even, I sold a lot of high-end systems to do all sorts of interesting things that were centralized. I see the same thing happening, and I do believe history repeats. So I’m a big believer that this is the next innovation cycle. But another take on that is that the cable companies are extremely powerful, and I remember very clearly how we saw Netscape rise out of a completely different trajectory, saying everybody should have a web server, and everybody should have a web browser, and the natural monopoly of the cable companies isn’t going to give us the innovation we need.

So not only do I see people as ready and willing to take on the challenge, there’s pent-up demand. I think so many people are finding that when they get onto the platforms, again, it’s the ERM: easy, routine and minimal judgment. But that means everyone gets a white tube sock, and at the point at which they’re ready to go for a hike and they need wool socks, or they need tall socks, or they need compression socks, and they’re not getting them from their white-tube-sock, big-box, big-technology companies, that innovation is ripe.

So that’s where we’re trying to build again, opportunity in terms of the platform for people to be innovators. It’s going to take people a while, I think, to come to terms with it because they are so used to sometimes doing the easy thing. But doing the hard thing is why we’re here. So we make it easier and easier for them by investing our own sweat equity and time, and that’s what I think the transition will be. But to be fair to Tim, I also want to point out that what he created was that opportunity. He takes a lot of blame for himself, and he takes a lot of responsibility because he’s that type of guy. He’s just an amazing human, and that’s why it’s an honor to be working with him and for him on this.

But what really happened, I think, was there was wealth preservation by massive corporations in the dot-bomb. So when the market crashed, a lot of the money ran towards artificial monopolization as a way of preserving value. So that was a trauma response that shifted the web. It proved the web to be right, as in the standards-based model, but also, due to trauma, reverted us, almost in a warlike scenario, to nationalization of it. We should have reverted back once we proved it was right and the money was safe, but we didn’t.

Instead, we had another financial crisis, the mortgage crisis, and that doubled down again on the over-centralized market, which was wrong. It then used the web to create the application market, which included a lot of lock-in and moats. That’s where you get a lot of people developing apps on proprietary platforms on the back of the web. So you have two traumatic, almost warlike events that have shifted the technology, and the management of technology, into the wrong space, and the government has been docile and stood by while Amazon, Google, Facebook and other platforms developed business models that treat people very unfairly and differently, that exploit the license granted them by the government, and that are really just truly harmful. They’re facilitating genocide. They’re disenfranchising women. They’re disadvantaging people of color.

None of that had to be the case. But because of financial crisis and wealth preservation by elites, we saw the technology shift under that pressure. So it’s not that Tim created a thing that was bad, but it was used for bad. So that’s why we have to try to create something that’s not only innovative again or allows for innovation again at a much more humanist level but also protected this time, try to get government prepared, infuse it with security from the start, infuse it with protections from the start, moral codes, ethics, inherited rights, and then have governments around the world be prepared to protect it, I think, as we understand it better now.

Steve:

Davi, you’re in California. We have round two of CCPA on the ballot in November in your state. Any thoughts or reflections on that, coming back to this question of whether there’s enough momentum to take back control over our data and to have better human-centered values and regulations over how companies can use data? Any thoughts about what’s now called CPRA, on the ballot in November, which includes things like data protection by design? I know there are some, even within the privacy industry, who think that because it puts the onus or the burden on users to exercise those rights, it’s either not doing enough or even causing damage or harm to consumers. From your perspective, economically, historically speaking, technologically or as a humanist, is CPRA going to be an incremental enhancement, or is this potentially groundbreaking for the future of privacy in California and in the US?

Davi:

I am more on the side of groundbreaking. I think California has been right. Europe got ahead of us in regulation, and to be very clear, as someone who has worked inside innovation firms for decades, has released a lot of products and has invested a ton of money and engineering cycles to create new technology: regulation is the engine of innovation. We would not have released client-side end-to-end encryption for databases if we had not had GDPR. The fact that the Europeans got ahead of California is a bit of an embarrassment.

So, I think California’s right to try to rotate the innovation back to this market. It’s really sad that it has to be seen by some as harmful to the market. The people who typically say that tend to be economists or investors who are completely disconnected from the actual work of innovation. What they’re saying is that bad regulation can stifle the market, but bad anything can stifle anything. What we have to see is that a CEO who says, “I want better product,” is regulation. A CTO who says, “I want a new feature,” is regulation. A third party or a government that says, “I want to preserve the privacy of people,” is regulation. They’re all innovation engines.

What’s being protected here is the market, in the sense of the freedom to operate: the way that people will be able to operate depends on their ability to protect or control their data. So I don’t make any bones about the fact that you’re going to get a better, larger, more expansive market for technology. It’s going to be so much better the more regulation we have around people’s privacy. Again, it goes back to the concept of the Tea Party. It goes back to the concepts of history.

So the second point is, look at SB 1386. I was deeply involved in massive technology changes at the time SB 1386, California’s breach notification law, was passed. I was told repeatedly that this was an ambulance-chasing legal nightmare, where people would just be trying to get rich, and it wouldn’t change anything. In fact, it created the breach laws that today drive all of the security investment around preventing breaches. Can you imagine a world where we didn’t measure breaches, or didn’t even pay attention when everyone’s information was lost, versus today, where almost all the money the entire security industry spends, and all the innovation and all the research we do, is around stopping breaches?

So that’s the type of history behind the regulation. You don’t have to go back to the Tea Party. You don’t have to go back to the trains or the Carterfone. You can just go back to SB 1386. California is behind, I think, because of the impact of Facebook in particular, the lobbying effort of Facebook or Uber to try to prevent regulation. It’s shameful. I think it’s harmful. A good example of that was when Uber told San Francisco that they were going to run cars with no license. Amazon did the same thing, and the city said, “You can’t do that. That’s dangerous.” Uber said, “Trust us. We’re the God.” The fight turned into Uber saying, “Fine. We’re going to move to Arizona.” What immediately happened in Arizona is Uber had a crash, and then after that Uber killed somebody, exactly what regulators feared. San Francisco was right to put their foot down and say, “This is not how it works.”

Steve:

I love how you reverse out of a technical framework of asking whether something is working or broken, and talk about how the direction or the directives given by the CEO or the CTO are, in a sense, a form of regulation, or at least parameters around how the organization needs to build products or deliver services. I recently wrote an article on connecting Nonconformist Innovation to how it enables ethical business transformation. So I think many in my network, having seen me spend 15-plus years working for Silicon Valley tech companies, might be surprised to see me thinking about and writing about this.

But really, the premise of the article is that, for better or worse, the technologies that we use today are nothing more than a manifestation of the philosophies, the politics and the economics of their creators. If you focus just on the technology, you’re really focusing on the wrong problem, right? So I think it’s so important that we are able to take a step back, and if we can instrument, conceive of and execute change at the level of the culture, the philosophical operating system of the leaders of organizations, then we have a better chance at making a positive impact at a societal level.

Coming back to this idea about data: the CCPA and CPRA came about because of the rampant abuse of data by big tech, Facebook, right? So we’re looking at this movement now around the decentralization of data. In a minute, I want to get to the data pods and decentralization concepts you’re currently working on. But last month at the Nonconformist Innovation Summit, we had a technical track that dove into the details of self-sovereign and decentralized identity. In a sense, at the end of the day, it does come back to the question of data and how it is being protected and managed. So one question that came up, and I’m curious about your take on it, is: how can we tell where the data has come from, what it’s good for, and how it’s intended to be used and permissioned?

Davi:

Oh, those are very deep, great questions. The answer I have found, and this is another reason I pivoted the book, was that people are focusing too much on confidentiality. In fact, the classic CSO mistake, the chief security officer mistake, was to over-rotate on privacy in the sense of confidentiality instead of integrity. So what you’re really asking, or the way my book eventually moved, is: how do we control our data, or how does anyone control any data, whether it be provenance, origin, ownership, control or custody?

The answer I found from an engineering exercise has two parts. One is the power button. I talk about this all the time now. If you can turn something off, that’s power. So if you have a vacuum cleaner that’s hoovering up all of the data, and you can’t turn it off, you have a problem. The second is the ability to reset. You need a reset button. I put it in these terms because engineers can build these things. If you tell them to build ethical tools, it’s an open box; nobody knows what to do. But if you tell them to build a reset tool, you get some really, really good innovations. So can you look at a piece of data, say that’s wrong, and hit reset?

Now, authorization to do that is a whole topic on its own. But if you have the button, you can work on the authorization models. The reset becomes a way that you can say, this is not right. This is not from the right source. This is not put in the right place. This is not right. So you can, in a sense, regain control by saying, “I would like to reset this data to this level.” This should not be unfamiliar for anyone who’s ever worked in virtual machines, hypervisors, backup, restore, disaster recovery, business continuity.

So I’m applying these old, very well-worn concepts to the current state of problems in the massively distributed big-data space. I mean, that’s just the tip of the iceberg on how we’re going to enter into this and solve it. For years, I’ve struggled with what the solution is. But now, these two things I’ve found so far have really helped solve it.
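The power-button and reset-button idea maps directly onto primitives engineers already know from hypervisors and backup/restore: a write switch and snapshots. A minimal sketch of that mapping, with all class and method names hypothetical:

```python
from copy import deepcopy

class PodStore:
    """Toy data store illustrating the two controls described above:
    a power button (stop all collection) and a reset button
    (roll data back to a known-good snapshot)."""

    def __init__(self):
        self.powered_on = True
        self.data = {}
        self.snapshots = []

    def power_off(self):
        # Power button: once off, the store refuses new writes.
        self.powered_on = False

    def write(self, key, value):
        if not self.powered_on:
            raise PermissionError("store is powered off; no collection allowed")
        self.data[key] = value

    def snapshot(self):
        # Capture a known-good state we can later reset to.
        self.snapshots.append(deepcopy(self.data))
        return len(self.snapshots) - 1

    def reset(self, snapshot_id):
        # Reset button: discard anything written since the snapshot.
        self.data = deepcopy(self.snapshots[snapshot_id])


store = PodStore()
store.write("name", "Alice")
good = store.snapshot()
store.write("tracker", "unwanted profile data")
store.reset(good)      # the unwanted data is gone
store.power_off()      # no further collection is possible
```

As Davi notes, the hard part is not the mechanism but the authorization model for who may press each button; this sketch deliberately leaves that open.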

Steve:

At Inrupt, you and your team have the mission of applying security models that directly preserve human rights and freedoms, helping to ensure a truly trusted and respected web. That’s from your website. So now, the idea behind Solid, I think it’s called, is that your data lives in pods. Interesting concept. I think there’s some more context and color needed around this. So can you explain the thinking: what are pods, and how do they provide protection and scale and achieve the stated objectives of your company?

Davi:

Oh, yeah. So Solid is the standard that the W3C is propagating, with the idea being that we need an open standard the same way HTTPS works today: you can go on the web and use a protocol, a standard, to interoperate. So Solid is that interoperable technology, and everybody would be developing towards Solid. Inrupt is providing a Solid server; Enterprise Solid Server, or ESS, for example, would be the product that you would use like a web server, but for Solid, and then people would have applications. On the web there was a web browser, but now there will be applications that communicate with the Solid pods.

So your data goes into your pod. You have an identity, a web identity, that you use to access your pod. Everyone else has a web identity for their pods. Now you can interoperate. You can have links to other people’s data. It’s very, very similar to what the web was in the beginning, because it’s basically trying to do what the web tried to do in the first place, without the centralization effect. So by decoupling the app developers, the identity providers and the data stores, we allow innovation. There’s an internetworking capability in the middle, which becomes that ERM. You get the telegraph operators. You get the internet operators. But you really, really still get innovation at the ends.

So it’s like saying, how do modems connect to each other? How do fax machines connect to each other? How do telephones connect to each other? You have to have a standard, which is what Solid is, and then you have to make the products, and that’s what Inrupt is doing.
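The decoupling Davi describes, where apps, identities and data stores are separate parties speaking one standard, can be sketched roughly as follows. This is a toy illustration of the idea only, not the actual Solid protocol or Inrupt’s client APIs; in practice a Solid server such as ESS handles storage and a client library handles access, and all names here are hypothetical:

```python
class Pod:
    """Toy stand-in for a Solid pod: a personal data store whose
    resources are gated by the owner's access grants to WebIDs."""

    def __init__(self, owner_webid):
        self.owner = owner_webid
        self.resources = {}   # path -> data
        self.acl = {}         # path -> set of WebIDs allowed to read

    def put(self, webid, path, data, readers=()):
        # Only the pod owner writes in this simplified sketch.
        if webid != self.owner:
            raise PermissionError("only the owner may write")
        self.resources[path] = data
        self.acl[path] = {self.owner, *readers}

    def get(self, webid, path):
        # Any app, from any vendor, can read -- if the owner granted access.
        if webid not in self.acl.get(path, set()):
            raise PermissionError(f"{webid} may not read {path}")
        return self.resources[path]


# Two people, two pods, interoperating through the same interface.
ALICE = "https://alice.example/profile#me"
BOB = "https://bob.example/profile#me"
alice_pod = Pod(ALICE)
bob_pod = Pod(BOB)

# Alice stores her contacts and grants Bob read access.
alice_pod.put(ALICE, "/contacts", [BOB], readers=[BOB])

# Bob's app reads Alice's data only because she granted access;
# the app vendor never becomes the custodian of the data.
contacts = alice_pod.get(BOB, "/contacts")
```

The design point is that the data store, the identity and the application are independent: any conforming app can work against any pod, so switching apps no longer means surrendering or migrating your data.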

Steve:

So as a follow-up to that question, one of the things on my mind, as it relates to any kind of new standard or technology, is one of the big questions with decentralized anything: can it scale? So what is your vision for how Solid can become ubiquitous? Will shifting the burden of authorization and management to consumers and end users scale? Or how do you tackle that new challenge as it arises?

Davi:

I think there are two parts to that question. One is, can it scale because people want it to, and the other is, is it technically capable of scale? Decentralization carries this worry, people saying it’s too diffuse to scale. But what we found, over and over again in looking at the past, is that it’s actually the inverse of what we think. The scale comes from the innovation. It’s the ideas that are so simple, so applicable at a local innovation level, that then take over the world. It’s not usually the large providers, and I’ve worked at some of the largest companies in the world and built some of the technology inside of them. It’s not generally the large providers that are moving the dial. So it’s actually the opposite: it’s not only ready to scale in terms of the technology, it’s the best way to scale.

It’s the best new form of scalable technology. It will be decentralized, but it doesn’t have to be; you could actually run it in a centralized fashion. The point being, again, if you felt like you were being misrepresented, you wouldn’t have to be centralized. You would have the option of being decentralized, and that’s what we’re missing today. So it’s the best scale. But in terms of people’s willingness or readiness for this, I think one of the questions that often comes up is, why would people change from what they know, or why would they shift to a newer, better system?

I’ve been surprised. I think five years ago, I had a hard time making this argument and making this sell. In 2016, I gave a talk about, as I already mentioned, the monarchy or the oligarchy of Facebook, and how it’s like going back to pre-Magna Carta times. Most people said they didn’t know what I was talking about. Today I see article after article after article; people are disenfranchised. They distrust these big oligarchs like Facebook. They see the harms. They see the genocide in Myanmar. They see Facebook saying they realized they were wrong. They interpret that as a commitment to do something, and then nothing happens, and they realize that Facebook never said they would do anything better. They just said they realized they were wrong and moved on. Or they say, “We realize we’re bad,” and then they move on. The same way Mark has always done since he was at Harvard and wasn’t held accountable.

So I think people are quickly coming to realize they might want to have their own website again. I’ve run my own website, my own blogs, since 1995, and it’s a pleasure. It’s easy. The WordPress model, the printing-press model, the modem: these are better models than the false monopoly. So I think people see that, the same way the Tea Party saw that they didn’t want the corporations to unfairly discriminate against them, to downrate their comments. Even on LinkedIn, you get on and people say, “I posted about an important social issue and got six views; I posted about this and got 40,000 views.”

Steve:

An awesome website, by the way, flyingpenguin. I’ll have a URL to your website in the show notes that people can click on, and I encourage them to go read the blog posts, like Permanent Improvisation: Nazi Dictatorship Was Opposite to Law and Order from July 23. That was one day before my birthday. It was a brilliant blog post; I’ll link to it in the show notes. You elaborate further on some of these questions there. So in wrapping up, Davi, I have to admit a tinge of jealousy, because you get to work with folks like Tim Berners-Lee and Bruce Schneier, titans in our industry who have had massive impacts on the internet and technology as we know it today, and on security as well; Bruce is very well respected in this regard. So I have to ask before letting you go: what’s it like working with these two guys on epic shit like data pods and decentralized data?

Davi:

That’s amazing, because my career has been mostly like a firefighter parachuting into dumpsters and trying to put out flames all around me. I have typically gone into places that are just completely wrong and tried to make people safe. Working with them is like 180 degrees, where they’re saying, “This is the right thing to do. This is how to make things better. Let’s get on it.” It’s like moving from being the guy with a backpack full of water, trying to put out flames surrounding you to building the plane that’s going to dump the water, pick it up out of the ocean and put out the fire completely. It just feels like a whole different level of helping people and doing things the right way.

Steve:

Well, I wish you and your team all the success in the world. I think our world depends on smart people like you and this team you’re working with. So it’s been a pleasure having this time to talk with you on the podcast, and I hope you’ll be able to come back again and do it soon.

Davi:

Thanks, Steve. I really appreciate you having me on.