What most people view AI as

Interesting how similar the concept of generative models is to our own cognition:

- ![[After Socrates Episode 7 - Daimonion Dr. John Vervaeke#^nx5g69]]
- ![[After Socrates Episode 7 - Daimonion Dr. John Vervaeke#^p5jyul]]
	- This is what [[Mental representations|Mental models]] are, but for the real world rather than just the social level
- ![[After Socrates Episode 7 - Daimonion Dr. John Vervaeke#^5d35cc]]
	- Reminds me of [[Metavision]] in Blue Lock, where Isagi finds the best plays by internalizing the ideal plays of everyone else
- This relates to Ilya's praise for LLMs having this capacity: ![[Ilya Sutskever Explains How LLMs Being Able to Predict the Next Word Shows Real Understanding#^cd5qs3]]
	- We shouldn't reduce this to mere predictive word processing: it's more nuanced, since the model has to perform [[Relevance realization]] from a particular perspective, even if it isn't [[Embodied cognition]]
	- This is [[Mutual modelling]]: ![[Ilya Sutskever Explains How LLMs Being Able to Predict the Next Word Shows Real Understanding#^5o8d49]]
	- The same process applies when I ask it to troubleshoot code: it can find the relevant issue far faster than I could, since it has propositional experience with the entirety of Stack Overflow. Its knowing greatly transcends mine in that regard, but what it lacks is the capacity to fit that knowledge to my current situation, since that's something only I can best interpret.
		- Does this mean that greater self-awareness and AI prompting will be the meta-skill? If AI are tools that are good at divergence, they are limited by our capacity for relevance realization and wisdom.
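To make the contrast concrete: here is a toy sketch (my own illustration, not how a transformer LLM actually works) of what "predictive word processing" looks like at its most reductive, a bigram counter estimating P(next word | previous word) from raw counts. The corpus and function names are invented for the example. This degenerate baseline has no perspective and no relevance realization, which is exactly the gap the note argues LLMs go beyond.

```python
from collections import Counter, defaultdict

# Toy corpus for the illustration (made up).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(prev):
    """Conditional distribution P(next | prev) from raw bigram counts."""
    c = counts[prev]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

# "the" is followed by "cat" 2 of 4 times, "mat" and "fish" once each.
print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A pure lookup table like this only memorizes surface statistics; the Sutskever clip's point is that doing next-word prediction *well* over all human text forces something much closer to understanding.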