Artificial intelligence has caught fire in the last few years. Chatbots and AI models have popped up everywhere, from Google search results to the Samsung Galaxy S24 Plus. Every day, a new breakthrough or model makes headlines (see our coverage of Russia's GigaChat). Much of the coverage centers on the dangers of AI, from the issues a deceptive or misinformed AI could pose in disseminating information to the problems posed by AI trained on copyrighted artistic work.
EleutherAI is the result of those sorts of conversations. It started as a grassroots community and grew into a formal organization. So what is it, and what is its purpose?

First, a bit of background information
EleutherAI was founded in July 2020 by Connor Leahy, Sid Black, and Leo Gao. It began as a Discord server primarily hosting discussions of GPT-3, the large language model OpenAI released in 2020.
Since then, EleutherAI has become a non-profit research group which, according to its charter, was historically focused on broadening access to AI models and promoting open science norms in natural language processing. Its focus has since shifted to AI alignment and interpretability.

What are EleutherAI’s goals?
So what does that mean? Interpretability refers to understanding how AI models work and how they reach the conclusions they do. The core idea is that to make the best use of AI models, we need to understand how they learn, and how to guide that learning, so we can refine them to be as useful and accurate as possible. Much of that understanding comes from studying and interpreting their outputs.
Alignment refers to how well an AI model's behavior matches human values and goals. A model that is helpful, harmless, and works as intended is said to be well aligned. One that is misaligned pursues wrong or unintended objectives, or pursues its goals in ways that are potentially harmful or deceptive.
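To make that a little more concrete, here is a minimal sketch of what output-level interpretability work can look like in practice: loading one of EleutherAI's publicly released models and inspecting which tokens it considers most likely to come next. The specific model (EleutherAI/pythia-70m) and the use of the Hugging Face transformers library are assumptions of this example, not details from EleutherAI's charter.

```python
# A minimal sketch of output-level interpretability: checking which tokens a
# model rates as most likely to follow a prompt. Model choice is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m"  # a small, publicly released EleutherAI model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# Turn the final position's logits into probabilities and list the top candidates.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  {p.item():.3f}")
```

Real interpretability research goes far deeper than this, but the principle is the same: understand a model by examining what it actually produces.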

EleutherAI’s stated goals are to better understand how AI functions and evolves and to ensure that AI continues to serve the best interests of humanity. It pursues those goals by conducting research on existing AI models and by training and publicly releasing its own series of large language models. The institute employs two dozen research staff and counts around a dozen volunteers among its members.
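Those releases are openly downloadable. As a rough illustration, assuming the Hugging Face transformers library and one of EleutherAI's published model ids (here, EleutherAI/gpt-neo-125m), generating text takes only a few lines:

```python
# A brief sketch of running one of EleutherAI's publicly released models.
# The model id and the pipeline API are assumptions of this example;
# EleutherAI has published several model families of varying sizes.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")
result = generator("Open AI research matters because", max_new_tokens=25)
print(result[0]["generated_text"])
```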

There’s also a focus on democratizing AI and AI research, according to the group’s website. EleutherAI’s leadership doesn’t believe that decisions about the future and deployment of AI should be made exclusively by tech companies that seek to use it for profit.
What does EleutherAI do?
The organization continues to operate a public Discord server that encourages open conversation between employees, volunteers, and collaborators at other institutions. It asks that novices curious about AI restrict themselves to reading and observing.
Pushing for an open source AI future
EleutherAI is one of several companies and organizations that have sprung up independent of the tech monoliths that have historically invested in (and, by extension, controlled) AI development. It’s a sign that some of the interesting and promising research is being done independently, by scientists who are cognizant of the technology’s potential dangers and pitfalls.
If you want to join the AI vanguard, check out our guide to the best AI apps for your Android device.