AI Byte: Q1 2023 AI Update

By: Alberto Tohme, Adine Mitrani, Nicolás Parra, Brandon Cheung

Fenwick's Artificial Intelligence (AI) & Machine Learning attorneys are at the forefront of the latest developments and trends in AI. In their latest AI Update, the team covers research from Stanford University's Polarization and Social Change Lab and the Institute for Human-Centered Artificial Intelligence on the persuasiveness of AI-generated messages; Adobe's beta release of Firefly, a family of generative AI models; Claude, a new AI chatbot backed by Google; and the pros and cons of openness in AI research and development.

The Power of Persuasion: Recent Findings from Stanford University on AI-Generated Messaging

By Alberto Tohme

A recent experiment from researchers at Stanford University’s Polarization and Social Change Lab and the Institute for Human-Centered Artificial Intelligence found that AI-generated messages intended to persuade human readers to reconsider their stance on a variety of hot-button policy issues were as persuasive as messages written by humans. Participants became “significantly more supportive” of certain policies – including a smoking ban, gun control laws and a carbon tax – after reading AI-generated messages. While the AI-generated messaging adopted a more logical and factual approach to persuasive writing than the human-generated messaging, it is unclear whether this holds universally for systems built on the GPT-3 model or whether the specific prompt the researchers used explains the AI’s approach.
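
For readers curious how such messages are produced in practice, the sketch below is a hypothetical illustration – not the Stanford team’s actual code or prompts – of asking a GPT-3-family model for a persuasive message through OpenAI’s completions API. The prompt wording, model choice and parameters are assumptions for illustration only.

```python
# Hypothetical sketch: prompting a GPT-3-family model for a persuasive message.
# This is NOT the Stanford researchers' code or prompt; the prompt text, model
# name and parameters below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Write a short, factual, logically argued message intended to persuade "
    "a skeptical reader to support a carbon tax."
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model available in early 2023
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

Note that the wording of the prompt itself – here, asking for a “factual, logically argued” message – can steer the model toward the logical style the researchers observed, which is why it is hard to say whether that style is inherent to the model or an artifact of the prompt.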

This experiment underscores the ability of AI chatbots trained on large language models to influence and persuade the general public. While well-intentioned actors can use such AI for noble purposes, malicious actors can just as well use them to spread misinformation or disinformation on a wide scale, including through social media and other online channels. The Stanford University researchers urge caution and call for lawmakers to immediately consider placing guardrails on the use of AI in political campaigns and activities.

Adobe Releases AI Tools for Image and Text Effect Generation

By Adine Mitrani

Last month, Adobe announced the beta release of Firefly, a family of AI tools focused initially on image and text effect generation. In the press release, Adobe highlighted that the training corpus for Firefly consisted exclusively of “hundreds of millions of professional-grade, licensed images in Adobe Stock along with openly licensed content and public domain content where the copyright has expired.” By leveraging this type of content, Adobe has dramatically mitigated the risk that output generated by its customers would infringe the copyrights of third-party artists or creators. Other AI models, such as Stable Diffusion, have taken a more retrospective approach – they instead offer artists and creators an opportunity to opt out of having their works used as training data. However, users of Firefly will still have to think through whether the images it generates are even protectable and enforceable. See our article “Breaking Dawn: Understanding the Copyright Office’s Policy on Works Containing AI-Generated Materials” for additional details about the Copyright Office’s official guidance on this point, as well as useful tips for applicants.

Meet Claude, an AI System Backed by Google

By Nicolás Parra

Anthropic, a San Francisco-based startup backed by a significant investment from Google, has released a new AI system named Claude. Designed to mimic the way humans learn and reason, Claude is a general-purpose AI assistant that can be applied to a wide range of tasks, from summarization and search to question answering and coding. The system is built on a large language model and trained with a technique Anthropic calls “constitutional AI,” in which a written set of principles and AI-generated feedback, rather than human feedback alone, guide the model toward responses that are helpful, honest and harmless.

According to the cofounder of Anthropic, the goal of the new AI system is to build more transparent and explainable AI models that can be used in a variety of industries, including healthcare and finance. The company has already raised $124 million in funding, and it plans to use the new system to develop a range of AI-powered products and services.

The recent release of Claude could be a game-changer for startups looking to differentiate themselves in crowded and competitive markets, as those able to leverage the capabilities of this new AI system could gain a significant advantage over their competitors. With its focus on transparency, “explainability” and safety, Claude has the potential to enable startups to build more trustworthy AI-powered products and services. In the tech industry, for example, startups can use Claude to build more effective recommendation engines, chatbots or predictive analytics models to help gain an edge over their competitors. In the life sciences industry, startups that use Claude to help evaluate the efficacy of drugs or therapies could potentially bring life-saving treatments to market more quickly.
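
As a concrete illustration of the kind of integration described above, the following is a minimal sketch of how a startup might call Claude to power a simple support chatbot. It assumes access to Anthropic’s API through its early Python client; the client interface, model name and prompt format shown here are assumptions based on Anthropic’s published examples and may differ in practice.

```python
# Minimal sketch of a Claude-backed support chatbot, assuming API access through
# Anthropic's Python client circa early 2023. The model name, prompt format and
# parameters are illustrative assumptions, not Anthropic's definitive interface.
import anthropic

client = anthropic.Client(api_key="YOUR_API_KEY")  # placeholder key

question = "How do I reset my account password?"

response = client.completion(
    prompt=f"{anthropic.HUMAN_PROMPT} You are a customer-support assistant. {question}{anthropic.AI_PROMPT}",
    model="claude-v1",
    max_tokens_to_sample=300,
    stop_sequences=[anthropic.HUMAN_PROMPT],
)

print(response["completion"])
```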

The release of Claude is likely to increase competition in the AI industry and could lead to more innovation and investment in the space. Claude’s debut is a significant step forward for Anthropic, which is hoping to establish itself as a major player in the AI industry. Early testimonials tout Claude’s ease of use and its more conversational tone compared with competitors. While there are high expectations for Claude, it remains to be seen how widely adopted it will be and whether it will live up to its potential as a game-changing AI system.

LLaMA Drama: The Pros and Cons of Openness in AI Research

By Brandon Cheung

On February 24, Meta released LLaMA, a family of large language models, and offered it to researchers at academic institutions, government agencies and nongovernmental organizations under a noncommercial license. The model’s weights were promptly leaked on the message board 4chan, and users quickly reposted them on other sites. The community then began to tinker with the model, including adapting it to widely available hardware – achieving feats like running the 65-billion-parameter model on a single Nvidia A100, or running the 13-billion-parameter version on a MacBook Pro M2 with 64 gigabytes of RAM. Stanford researchers also created an instruction-tuned LLaMA variant named Alpaca, and hobbyists have since run scaled-down versions of the model on devices as modest as a Raspberry Pi and a Pixel 6 smartphone.
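
To illustrate the kind of adaptation the community performed, the sketch below shows one common approach to fitting a large LLaMA checkpoint onto a single GPU: loading the weights in 8-bit precision with the Hugging Face transformers and bitsandbytes libraries. The local weights path is hypothetical, and this is a generic illustration rather than the specific method behind the feats described above; LLaMA weights themselves are licensed and must be obtained from Meta.

```python
# Sketch: loading a LLaMA checkpoint in 8-bit precision so it fits on a single GPU.
# The local weights path is hypothetical; LLaMA weights are licensed by Meta and
# must be obtained through its research access program.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/llama-13b-hf"  # hypothetical path to locally converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # quantize weights to 8 bits, roughly halving GPU memory use
    device_map="auto",   # let the accelerate library place layers on available devices
    torch_dtype=torch.float16,
)

inputs = tokenizer("Openness in AI research means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```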

The LLaMA leak and the community’s post-leak achievements have renewed debate over the advantages of openness in AI research and the democratization of these new technologies. Proponents argue that transparency helps advance the field through collaboration and provides ethical accountability for technologies that would otherwise remain largely inaccessible to the public because of their hardware, industry expertise and technical knowledge requirements. The downside, as this leak demonstrates, is the theft and misuse of IP, such as algorithms and datasets, which can proliferate rapidly to bad actors given the speed at which the internet spreads information, datasets and software code. The debate continues in light of the impressive feats achieved by the public, as well as the bad actors responsible for proliferating unlicensed content, in the wake of this “LLaMA drama.”
