- The Qwen team at Alibaba has released a new "reasoning" model, QwQ-32B-Preview.
- The model is positioned as a competitor to OpenAI's o1.
- QwQ-32B-Preview contains 32.5 billion parameters and can handle prompts of up to about 32,000 words.
- It outperforms o1-preview and o1-mini on a number of benchmarks, including AIME and MATH.
- AIME is a benchmark for evaluating model performance; MATH is a collection of word problems.
- That said, QwQ-32B-Preview still has known limitations: it can switch languages unexpectedly and performs poorly on tasks that require "common sense" reasoning.
- QwQ-32B-Preview is available "openly" on the Hugging Face platform under the Apache 2.0 license.