In our final two newsletters of the year, we’ll highlight their insights. Their answers help us make sense of this past year and spark our imagination for what’s possible in 2024.
Previously, Sarah was a product leader and technologist at Apple and Microsoft, building products centered on artificial intelligence, machine learning, and intelligent devices.
Looking forward: What do you predict will happen in 2024 for the responsible tech movement?
2024 is the “biggest election year in history,” with many of the world’s democracies holding major elections. These contests for power between competing worldviews will inevitably influence the direction of responsible technology and policy. I imagine there will be a rise in AI-generated misinformation campaigns and other political advertising.
I think we’ll see the big technology companies continue to vertically integrate by building their own chips and owning the “full stack” of artificial intelligence. While open-source models have become more capable and accessible, spurring a flurry of experimentation, many developers still rely on the computing infrastructure of big technology companies to operate. I suspect this operational model, and the power imbalance it creates, will come under increasing scrutiny as countries actively develop AI legislation.
//Antonio Zappulla, CEO of Thomson Reuters Foundation
Through news, media development, free legal assistance and convening initiatives, the Foundation combines its unique services to advance media freedom, raise awareness of human rights issues, and foster more inclusive economies. In 2022, Antonio received the Order of Merit of the Italian Republic for his work as a journalist and for spearheading a range of media freedom initiatives at the Foundation.
Looking forward: If we were to fast-forward one year and look back on 2024 and declare it a huge success for the responsible tech movement, what happened?
To deem 2024 a successful year for responsible technology, several milestones would need to take place:
Enhanced Accountability and Transparency: The tech industry would have undergone a pivotal shift towards greater accountability and transparency, with notable changes in how companies operate and engage with users and stakeholders.
Robust Regulatory Frameworks: A major triumph would be the establishment and enforcement of more comprehensive regulatory frameworks that balance fostering innovation with protecting human rights and data privacy.
Widespread Adoption of Ethical AI Standards: Throughout the year, a substantial move towards adopting ethical AI standards across various sectors would be essential. This includes setting guidelines that eliminate biases in AI algorithms and guaranteeing fairness and transparency in AI-driven decisions.
Empowered Media and Civil Society: A key success indicator would be the effective education of media and civil society in understanding AI's complexities and legal frameworks. This would lead to more rigorous scrutiny and accountability of tech companies and policymakers.
Effective Combating of Disinformation: Finally, a crucial achievement would be the successful measures taken against disinformation across digital platforms. This includes developing advanced tools to identify and mitigate false information and stringent content accuracy policies by social media and news organizations.
//Emma Leiken, Responsible Tech Lead at Omidyar Network
Emma Leiken is the Responsible Tech Portfolio Lead at Omidyar Network, where she leads a portfolio focused on youth organizing and responsible technology.
She is also a Technology Policy Fellow with UC Berkeley, a proud board member of Cyber Collective and a co-founder of the Responsible Technology Youth Power Fund. Prior to working at Omidyar Network, Emma conducted research on the risks and opportunities of biometric digital identity at the London School of Economics.
Looking back: What were the most important stories of 2023 in the responsible tech movement? And why?
Europe is further along than the USA in its passage of tech accountability legislation. The EU’s Digital Services Act (DSA) was designed to promote safer online markets and is essentially a transparency machine; platforms will have to submit a report describing their content moderation activities in the EU. One takeaway is that platforms can provide critical datasets to understand online harms, but that they are unlikely to “self-regulate.”
A long-overdue discussion of harms that take place on more niche platforms, including dating apps, finally came to the forefront. Mother Jones, for instance, did thoughtful reporting exposing the broad legal immunity of dating platforms like Grindr and Tinder, and what that immunity means for harms that take place on these sites.
Fifteen funders, including Omidyar Network, came together to distribute $2 million to youth- and intergenerationally-led organizations working on a variety of responsible technology issues, ranging from youth online safety, to trustworthy AI, to digital wellbeing, to online radicalization. The pooled fund they created, the Responsible Technology Youth Power Fund, is the first-of-its-kind philanthropic vehicle seeking to shift power to youth leaders in technology policy, design, and advocacy work.
//Dele Atanda, serial tech entrepreneur
Dele Atanda is a serial tech entrepreneur. He is CEO of metaMe, a self-sovereign information and AI service, and the creator of metaKnyts, a CryptoMedia franchise designed to teach self-sovereignty through storytelling and play.
Dele has been a pioneering voice on the emergence of web3 technologies and an avid advocate of the potential of decentralized technologies to advance humanity.
Looking forward: What do you predict will happen in 2024 for the responsible tech movement?
1) Convergence: The convergence of AI, blockchain, and spatial computing into a mega wave of innovation, and the convergence of Europe’s GDPR, MiCA (Markets in Crypto Assets Regulation), and the AI Act into an inter-related regulatory block that will set the tone of regulation around the world.
2) Divergence: Big Tech and regulators will become increasingly at odds with each other as each tries to usurp and contain the other. There is a golden opportunity for responsible tech to broker a peace between them that fosters innovation on one hand while curbing the inherent vulnerability of innovation to exploitation by those with power on the other.
Project Liberty in the news
// 🤖 Project Liberty’s CEO Martina Larkin was quoted in a story from Sky News on the potential impact of AI on 2024’s elections.
Other notable headlines
// 🇨🇳 An article in CoinDesk reported that China will begin to verify its citizens' identities with a new blockchain-based platform.
// 🇬🇧 Prime Minister Rishi Sunak’s UK government is considering a crackdown on social media access for children under the age of 16, including potential bans, according to an article in Bloomberg.
// 🏛 According to an article in WIRED, Microsoft’s AI chatbot, Copilot, replies to election questions with conspiracies, fake scandals, and lies.
// 🧬 We need to focus more on tech design and less on content moderation. That’s the argument in an article in Tech Policy Press that calls for moving beyond content governance to prosocial tech design governance.
// 🛠 The right-to-repair movement and other efforts to liberate technology from monopolistic corporations are a precondition for winning many vital societal battles, according to an article in Noema Magazine.
// 🤖 An article in The New York Times highlighted how students at a high school in New Jersey want to broaden the conversation around AI literacy and its applications, moving beyond the traditional narratives of tech magic and doomsday panic.
// 🇰🇪 Nathan Nkunzimana, the man leading Kenyan content moderators’ battle against Meta, claims Meta and its contractor fired content moderators for protesting working conditions and demanding the right to unionize, according to an article in Rest of the World.
// ⛪ An article in Fast Company featured Pope Francis’s call for a treaty to regulate artificial intelligence.
Partner news & opportunities
// $500k investment and accelerator for companies building AI agents
AI Camp: Agents is a 13-week accelerator program run by Betaworks, aimed at bringing together the most creative pre-seed and seed-stage companies building agents and/or the infrastructure that enables them. Betaworks is reviewing applications on a rolling basis, with a final application deadline of Friday, 1/12. Learn and apply here.
// New partnership between MIT and DemocracyNext
The MIT Center for Constructive Communication and the nonprofit DemocracyNext have launched a two-year pop-up lab, based at the MIT Media Lab, to harness powerful AI technologies to create constructive, tech-enhanced, and human-led systems shaped by the proven model of Citizens’ Assemblies worldwide. Learn more here.