Google is trying to teach its own AI to think logically
Anonymous sources say the engineers want not merely to copy OpenAI's work, but to design the reasoning process in their own, better way. To do this, they are experimenting with an approach in which the model does not answer instantly, but pauses after receiving a question. During this pause, it considers other prompts related to the question and tries to form the best answer based on them.
They call this approach Chain-of-Thought Prompting, and the idea is that in this way the model can be trained to work through hypotheses for the answer along a chain of reasoning. Google has not yet commented on these experiments, so it's too early to draw conclusions. But one thing is clear: after the September release of o1, the company has been spurred into action and won't leave us without AI releases for long.
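To make the idea more concrete, here is a minimal sketch of what Chain-of-Thought Prompting looks like in general, not Google's internal implementation: instead of asking the model for an answer outright, the prompt asks it to write out intermediate reasoning steps before the final answer. The snippet only builds the prompt text; no particular model or API is assumed.

```python
# A minimal illustration of the general Chain-of-Thought Prompting idea
# (assumed example, not Google's or OpenAI's actual implementation).
# It only constructs prompt strings; sending them to a model is up to the reader.

def direct_prompt(question: str) -> str:
    """Plain prompt: the model is expected to answer immediately."""
    return f"Question: {question}\nAnswer:"


def chain_of_thought_prompt(question: str) -> str:
    """CoT prompt: the model is asked to reason step by step before answering."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, considering possible hypotheses,\n"
        "and only then give the final answer.\n"
        "Reasoning:"
    )


if __name__ == "__main__":
    q = "A train travels 120 km in 1.5 hours. What is its average speed?"
    print(direct_prompt(q))
    print()
    print(chain_of_thought_prompt(q))
```

The difference is purely in the prompt: the second version nudges the model to spend "thinking" tokens on intermediate hypotheses before committing to an answer, which is the behaviour the article describes.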