Worth Reading: AIs Are Often Wrong But Never Uncertain

One way of accounting for the fact that the LLMs could not consider the possibility that they had not solved the problems is that they are not conscious. They are machines programmed to arrive at “solutions,” not necessarily solutions based on symbolic reasoning. And they are not going to suddenly step outside their nature and start using symbolic reasoning because they somehow “know” they should. They stick to their programming, of course.