Greetings, everyone, and welcome to our latest edition of TechCrunch’s AI newsletter.
This past Sunday, President Joe Biden announced that he would not seek another term and gave his “full endorsement” to Vice President Kamala Harris as the Democratic Party’s candidate. Following the endorsement, Harris quickly secured majority support from Democratic delegates.
Given Harris’s vocal stance on technology and AI policies, her potential presidency raises questions about the future of AI regulations in the U.S.
Over the weekend, my colleague Anthony Ha shared insights on this topic. Harris and President Biden have opposed the notion that public safety and innovation advancement cannot coexist. During his term, Biden issued an executive order for the creation of new AI development standards. Harris regarded these voluntary commitments as a starting point towards a safer AI future, emphasizing the need for regulation and oversight to prevent tech companies from prioritizing profit over customer welfare, community safety, and democratic stability.
Conversations with AI policy experts suggest a consensus that under Harris’s leadership, the current AI policies would likely remain intact rather than be dismantled or deregulated, as under the approach favored by Donald Trump’s camp.
AI consultant Lee Tiedrich, working with the Global Partnership on Artificial Intelligence, told TechCrunch that Biden’s support for Harris might bolster the continuity of U.S. AI policies. These are framed within the 2023 AI executive order and demonstrate a commitment to multilateral efforts through global organizations. The executive order encourages increased government oversight of AI, including enforcement enhancements, more comprehensive agency rules and policies, a focus on safety, and mandatory testing and disclosures for significant AI systems.
Cornell government professor Sarah Kreps observed a perception in parts of the tech industry that the Biden administration was overly stringent on regulations. She believes Harris is unlikely to dismantle the AI safety measures introduced under Biden but wonders if Harris might adopt a less centralized approach to regulatory measures to appease critics.
Research fellow Krystal Kauffman at the Distributed AI Research Institute also echoed Kreps and Tiedrich’s sentiments, predicting Harris would continue Biden’s initiatives on AI risk management and transparency. Kauffman, however, hopes Harris will engage a broader range of stakeholders in policy development, particularly those data workers facing poor wages, working conditions, and mental health issues.
“It’s critical for Harris to involve data workers in these crucial policy dialogues moving forward,” Kauffman stated. “Policy shaping shouldn’t be confined to discussions with tech executives behind closed doors, as it misdirects the course of action.”
News
Meta unveils new models: This week, Meta introduced Llama 3.1 405B, a 405-billion-parameter model for text generation and analysis. It’s the company’s largest “open” model to date and is being integrated across Facebook, Instagram, and Messenger as part of the Meta AI experience.
Adobe enhances Firefly: Adobe has recently updated its Firefly suite for Photoshop and Illustrator, providing graphic designers with more advanced options through the company’s proprietary AI models.
Facial recognition controversy at school: A British school faced formal censure from the UK’s data protection authority for employing facial recognition technology without obtaining explicit consent from students for their facial data processing.
Cohere’s substantial funding: Cohere, a generative AI venture co-founded by former Google researchers, has secured $500 million in funding from noteworthy backers including Cisco and AMD. Cohere stands out by tailoring AI models for large corporations, a strategy pivotal to its success.
Exclusive interview with CIA’s AI director: In TechCrunch’s continuous Women in AI series, I had the pleasure to interview Lakshmi Raman, the AI director at the CIA. We discussed her career trajectory, the CIA’s AI application, and the vital equilibrium between technological innovation and responsible deployment.
Research paper of the week
The transformer architecture excels at complex reasoning tasks, driving technologies like OpenAI’s GPT-4o and Anthropic’s Claude. But its shortcomings, chiefly the quadratic cost of attention as sequences grow longer, have led researchers to explore alternatives.
State space models (SSMs) have emerged as a promising contender, combining attributes of earlier AI models, such as recurrent and convolutional neural networks, into a more efficient architecture adept at processing lengthy data sequences. The latest version, Mamba-2, was presented in a recent paper by researchers Tri Dao and Albert Gu. It can process far larger inputs than comparable transformers while remaining competitive on language generation tasks. The steady improvement of SSMs suggests they could run sophisticated generative AI applications on standard hardware.
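To see why SSMs scale so well with sequence length, here is a minimal sketch of the linear recurrence at the heart of a discrete state space layer. The dimensions, parameter values, and function name are all illustrative, not taken from the Mamba-2 paper, and real SSMs like Mamba make the parameters input-dependent and learn them by training:

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Run a discrete state space model over a sequence.

    h_t = A @ h_{t-1} + B @ x_t   (state update)
    y_t = C @ h_t                 (readout)

    The state h has a fixed size, so each step costs the same
    regardless of how long the sequence is -- unlike attention,
    which revisits the entire context at every step.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:
        h = A @ h + B @ x   # fold the new input into the state
        ys.append(C @ h)    # read the output from the state
    return np.stack(ys)

# Toy dimensions: a 4-dim hidden state, 2-dim inputs and outputs.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)              # stable state transition
B = rng.normal(size=(4, 2))
C = rng.normal(size=(2, 4))
xs = rng.normal(size=(1000, 2))  # a long input sequence

ys = ssm_scan(A, B, C, xs)
print(ys.shape)  # (1000, 2)
```

The key design point is that memory use stays constant as the sequence grows; the model compresses everything it has seen into the fixed-size state `h`, which is what lets SSMs handle inputs far longer than a transformer’s context window.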
Model of the week
A recently proposed generative AI architecture claims to surpass both the strongest transformers and Mamba-style models in efficiency.
Known as test-time training models (TTT models), this architecture facilitates reasoning over millions of tokens, with potential scalability to billions, enhancing future generative AI applications’ capabilities due to its efficient data processing and reduced hardware strain.
For more insights into TTT models, explore our in-depth feature.
Grab bag
Stability AI, rescued financially by investors such as Napster’s Sean Parker, has stirred controversy with its new terms for product usage and licensing. The latest stipulations for commercial use of its open AI image model, Stable Diffusion 3, have notably tightened, leading CivitAI to halt the use of models based on or trained with Stable Diffusion 3 images while seeking legal guidance.
Stability AI has since revised Stable Diffusion 3’s licensing terms to loosen restrictions on commercial use, an effort to address the backlash. The firm assures that lawful use of the model, in line with its license and usage policies, won’t trigger demands for image deletion, fees, or restrictions on derived products, even when no money flows to Stability AI.
This ongoing saga underscores the legal complexities surrounding generative AI and the evolving interpretations of “openness” within the industry. The escalating debate over restrictive licenses suggests the AI field is far from reaching a consensus on balancing openness with responsibility.