How to rebuild the internet

AUSTIN, Texas. The upheavals of recent years in the world of technology have turned conventional wisdom on its head, transforming also-rans into celebrities, titans into turkeys, and even upending the financial system itself.

So, of course, the ambitious tech crowd gathered at SXSW this year is ready to seize the opportunity. And not necessarily just to make a quick buck, but to ask some bigger questions, like: What if we used this moment as a real catalyst for change?

Some of the more ambitious digital futurists are promoting entirely new ways of thinking about how the internet works, ones that account for the risks and possibilities posed by AI, the burnout and malaise social media users experience, and the constant reminders that our most personal information is woefully insecure.

All of this was the topic of a panel yesterday afternoon titled “Open Innovation: Breaking The Default Together,” in which two leaders of the Mozilla Foundation, together with an industrial designer, outlined their vision for an internet freed from the economic structures and incentives that have driven so much of the digital heartburn of the past few decades.

After the talk, I spoke with one of the panelists: Liv Erickson, lead of Mozilla’s VR-focused Hubs project. We talked about what’s at stake in the way the internet is currently designed, and how we might make it better as technologies like AI and virtual reality reshape it before (sometimes literally) our eyes.

The conversation has been condensed and edited for clarity:

When you talk about fixing the internet, you quickly zero in on digital ownership. Why is it so central to these big questions about internet architecture?

There is a lot of evidence about how people interact with each other online, and it points to a very enthusiastic approach to content creation. People want to share their experiences and talk about what’s important to them. Right now, what we’re seeing is that it’s really hard to build an audience and have ownership over that content. That means if a platform changes its terms of service, or shuts down because it’s no longer profitable for that business, a lot of that content can simply disappear.

Just last week, one of the first social VR platforms, Altspace, was shut down. People are mourning that experience because they’ve lost not just videos and photos, but entire worlds they’ve built, social connections they’ve made, and versions of themselves. When we think about this next generation of the internet and what it may become, data ownership is a key component of it, because of the enormous psychological and emotional attachment we have to our online identities.

What are the future risks for Internet users that concern you the most?

Data collection is an important part of it. But it’s also about what applications are doing in response to that data. This is one reason generative AI is cool, but also scary. When you think about it on a more dystopian, long-term horizon: what might people do with your information to change, in real time, the environment you’re in?

Philip Rosedale spoke on ethics in XR on Sunday, and he made the great point that in the physical world we generally know when we’re being advertised to. But here at SXSW, you’re constantly being advertised to without necessarily knowing it’s an ad, and that’s one of the things I think about with immersive worlds and XR technologies.

What happens when I think I’m interacting with a friend or colleague in VR, and it turns out it was actually just a bot that is developing a relationship with me, and it just happens to be online whenever I’m online, and it starts telling me its political opinions, and I start to wonder if those are supposed to be my political views? There is a lot to address about how information can be manipulated in virtual spaces.

What role should US technology policy play in safeguarding the future of the Internet?

When I look at what we’re talking about in terms of data privacy, many of the words used to describe the types of personal information collected can be [vague]. Like, in many cases they aren’t actually collecting biometric data on a headset. But it is inferred, and that inferred data isn’t covered by some of these data privacy laws.

It is crucial to think about this from a consumer protection perspective. I did a policy fellowship with the Aspen Institute a couple of years ago, and even then I was imagining a world where advertisers could scrape the profile pictures of my Facebook friends, generate a human being who looked like one of my friends, and start using them in their advertising. I would have no way of knowing that’s what they were doing. It’s not technically my personal information, but it’s meant to play on my emotional experiences. I think that’s an area the FTC could look into.

Has the rise of generative AI made your job seem drastically more urgent?

I think what we’re able to do with these tools is amazing. I also want people to go another level deeper and understand the full scope of how they could be used to do harm, and I think that’s usually where the conversation stops, at “I did this awesome thing.” I have friends and colleagues who will come to me and say, look, I’ve made this amazing art where I’ve collaborated with an artist via generative AI, and it’s like, is it a collaboration if the other humans aren’t around?

What do you think about the idea that blockchain could be a solution to digital ownership problems?

I don’t think technology in and of itself is ever really a solution.

I’m a big believer in people owning the value they create, so I think a lot of the principles behind distributed systems are really fundamental and powerful. One exciting thing about Web3 spaces is that more and more people are recognizing that they can be a tool to take back creative control and ownership over what they’re doing. I also think there are places where it gets pushed as a solution to problems it won’t actually solve. Every time you take a technology and say this technology will solve a human problem, that’s when my alarm bells go off.

Is there a simple rule observers or policymakers can apply to tell whether a new technology or platform is designed with users in mind, the way you describe?

How it generates revenue. That forces people to talk about the decisions they’re making, like whether or not they sell data. And whether a technology speaks to a fundamental, underlying need: What does this solve for people? What does it give them in their daily lives?

The most dangerous trap we can set for ourselves is saying we have to do things a certain way because that’s how we’ve always done them. This is a key moment, and I want as many people as possible to question that as we learn about these new technologies. The software that’s creating these virtual worlds gives us the ability to try new things and actually say, you know what, I liked it better before. I liked that version of me better.

OpenAI released the successor to its groundbreaking GPT-3 yesterday, and GPT-4 is already changing the way people think about what large language models can do.

In the blog post introducing its release, OpenAI describes its best-ever (though far from perfect) results on factuality, steerability, and refusal to step outside guardrails. (Stay alert, Waluigi-lovers.) It’s also capable, with remarkable accuracy, of recognizing and describing images, even relatively sophisticated memes that require an understanding of irony that, frankly, escapes many humans.

GPT-4 doesn’t quite represent the conceptual leap its predecessor did when OpenAI made it public. But it’s making some predictions about the technology’s power seem much more feasible, and easier to conceptualize, than they once were.

For example: The Niskanen Center’s Sam Hammond, building on a prediction from a previous blog post, suggested that one-click lawsuits could allow you to create an endogenous version of [Charles Murray’s] proposal for a libertarian legal defense fund, one that would let people mass-defy finicky laws and regulations until the system buckles under large-scale legal civil disobedience.

Perhaps an unlikely winner in last weekend’s bank collapse, which dealt a severe blow to crypto: stablecoins.

POLITICO’s Bjarke Smith-Meyer has the report for Pro subscribers, describing how by Monday, after a crash, Circle’s USDC stablecoin had fully returned to parity, undermining central bankers’ talking point that the asset class posed too much risk to the financial system.

However, it may not be anything about the technology that has kept it, well, stable; rather, USDC’s holdings are subject to the same rules as everyone else’s. Rather than being a vindication of crypto’s strength, USDC may have benefited from the regulated state of the broader system, Bjarke writes. It was only after U.S. regulators stepped in to backstop SVB deposits to avert further panic that USDC’s slump eased.