Google is trying to teach its own AI to think logically
Anonymous sources say the engineers want not just to replicate OpenAI's work, but to design the thinking process in their own, better way. To do this, they use an approach in which the model does not answer instantly, but pauses after receiving a question. During this pause, it considers other prompts related to the question and tries to form the best answer based on them.
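Google has not disclosed its internal implementation, but the general idea can be illustrated with public techniques. Below is a minimal sketch of chain-of-thought prompting combined with majority voting over several sampled reasoning chains (the "self-consistency" variant); the `generate` function and `COT_TEMPLATE` string are hypothetical placeholders standing in for a real LLM call:

```python
from collections import Counter

# Hypothetical model call: in practice this would hit an LLM API.
# Stubbed out here so the sketch is self-contained and runnable.
def generate(prompt: str) -> str:
    return "Step 1: 6 * 7 means six sevens.\nStep 2: 7+7+7+7+7+7 = 42.\nAnswer: 42"

# Assumed prompt template: asks the model to reason step by step
# before committing to a final answer.
COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step, then state the final answer "
    "on a line starting with 'Answer:'.\n"
)

def chain_of_thought_answer(question: str, samples: int = 5) -> str:
    """Sample several reasoning chains and return the most common
    final answer (self-consistency over chain-of-thought prompts)."""
    answers = []
    for _ in range(samples):
        completion = generate(COT_TEMPLATE.format(question=question))
        # Keep only the final answer; the preceding steps are the "chain".
        for line in completion.splitlines():
            if line.startswith("Answer:"):
                answers.append(line.removeprefix("Answer:").strip())
                break
    # Majority vote across chains approximates "forming the best answer".
    return Counter(answers).most_common(1)[0][0]

print(chain_of_thought_answer("What is 6 * 7?"))
```

The pause the sources describe maps onto the sampling loop: instead of committing to the first completion, the system spends extra compute exploring several reasoning paths and only then settles on an answer.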
This kind of reasoning is known as chain-of-thought prompting, and the assumption is that a model trained this way can work through candidate hypotheses for the answer along the chain. Google has not yet commented on these experiments, so it is too early to draw conclusions. But one thing is clear: the September release of o1 stirred the company into action, and it will not leave us without AI releases for long.