Stability AI Unveils Cutting-Edge Japanese StableLM Language Model


Today Stability AI unveiled its inaugural Japanese language model (LM), Japanese StableLM Alpha. The model is not just a first for the company; it also stands as the best-performing openly available LM for Japanese speakers, setting a new standard in the field.

The Japanese StableLM is a formidable tool: a 7-billion-parameter, general-purpose language model. It has earned the distinction of being the top-performing publicly available Japanese LM, a title it secured after rigorous evaluation against other openly available Japanese LMs on four sets of tasks in a comprehensive benchmark suite.

Stability AI has made the Japanese StableLM Base Alpha 7B available under the commercially usable Apache License 2.0. Meanwhile, the Japanese StableLM Instruct Alpha 7B, released exclusively for research use, is set to make waves in the academic world.


The Japanese StableLM Base Alpha 7B is a powerhouse, trained for text generation on large-scale data sourced predominantly from the web. This data is primarily Japanese and English text, with roughly 2 percent consisting of source code. The training data was a collaborative effort, combining datasets created by Stability AI Japan with datasets developed in cooperation with the Japanese team of the EleutherAI Polyglot project and members of Stability AI Japan's community.

The Japanese StableLM Instruct Alpha 7B model is fine-tuned to follow user instructions, with the additional tuning achieved through supervised fine-tuning (SFT). The model's performance was evaluated on a variety of tasks, including sentence classification, sentence pair classification, question answering, and sentence summarization, using EleutherAI's lm-evaluation-harness benchmark, where the Japanese StableLM Instruct Alpha 7B scored an impressive 54.71, placing it far ahead of other Japanese models.

These models are readily available on the Hugging Face Hub, where they can be used for inference and further training. This move is a testament to Stability AI's commitment to open-access generative AI. The company is actively working with partners to deliver next-generation infrastructure globally, with a focus on imaging, language, code, audio, video, 3D content, design, biotechnology, and other scientific research. This release marks a significant step forward in that mission, bringing advanced language modeling capabilities to Japanese speakers.
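As a concrete starting point, trying the models from the Hugging Face Hub might look like the sketch below. Note the hedges: the repository ID and the instruction template are assumptions for illustration (the article gives neither exact model IDs nor a prompt format), so verify both against the models' Hugging Face pages before use.

```python
# A minimal inference sketch using Hugging Face transformers. The repository
# ID below is an assumption based on the model name in this article; check
# the Hugging Face Hub for the exact ID and for any custom-code or tokenizer
# requirements noted on the model card.
BASE_MODEL_ID = "stabilityai/japanese-stablelm-base-alpha-7b"  # assumed repo ID


def build_instruct_prompt(instruction: str) -> str:
    """Illustrative instruction template (hypothetical); the Instruct
    model's actual prompt format should be taken from its model card."""
    return f"### 指示:\n{instruction}\n\n### 応答:\n"


def generate(prompt: str, model_id: str = BASE_MODEL_ID,
             max_new_tokens: int = 64) -> str:
    """Fetch the checkpoint (tens of GB; a GPU is strongly advised),
    tokenize the prompt, and decode a sampled continuation."""
    # Imported lazily so the helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For the Instruct variant, wrap the user request with `build_instruct_prompt` before calling `generate`; for the Base model, a raw Japanese prompt is enough.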

Source: Stability

