Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Define understand, and does an analog to Godel's incompleteness apply.


> does an analog to Godel's incompleteness apply

not GP but this seems like quite an attractive idea that many people have reached: a brain of a given "complexity" cannot comprehend the activity of another brain of equal or higher complexity. I'm positive I'm cribbing this from scifi somewhere, maybe Clarke Or Asimov, but, it's the same idea as the Chomsky hierarchy, and the Godel theorems seem like a generalization of that to general sets of rules rather than mere "automata".

For example, you can generalize a state automata to have N possible actors transitioning state at discrete clock intervals, but each actor can keep transitioning and perhaps even spawn additional ones. The machine never terminates until all actors have reached a termination state. That machine is probably impossible to model on any kind of a Turing machine in polynominal time. And a machine that operates at continuous intervals is of course impossible to model on a Discrete Neural Machine in polynomial time (integers vs reals categorization). There are perhaps a lot of complexity categories here, similar to alephs of infinity or problems in P/NP, and when you generalize the complexity categorization to infinity, you get godel incompleteness, just an abstract set of rules governing this categorization of rule sets and what amounts to their computability/decidability.

Everyone is fishing at this same idea, a human has no chance of slicing open a brain (or even imaging it) and having any idea what any of those electrical sparkles mean. At most you could perhaps model some tiny fraction for a tiny quantum, with great effort. We have to rely on machines to assist us for that - probably neural nets, a machine of equal or greater complexity. And we will probably have to rely on machine analysis to be like "ok this ganglion is the geographic center of the AI, and this flash here is the concept of Italy", as far as that even has any meaning at all in a brain. Mere line by line analysis of a Large Language Model or other deep neural network by a human is essentially impossible in any sort of realtime fashion, yeah you can probably model a quantum or two of it statistically and be like "aha this region lights up when we ask about the location of the alps" but the best you are going to do is observational analysis of a small quantum of it during a certain controlled known sequences of events. Unless you build a machine of similar complexity to interpret it. Just like a brain, and just like a state machine emulating a machine of higher complexity-category. They're all the same thing, categories of computability/power.

This is not in any way rigorous, just some casual observations of similarities and parallels between these concepts. It seems like everyone is brushing at that same concept, maybe that helps to get it out on paper.

For an actual hot take: it seems quite clear that our computability as a consciousness depends on the computing power of a higher complexity machine, the brain. Our consciousnesses are really emulated, we totally do live in a simulation and the simulator is your brain, a machine of higher complexity.

Isn't it such a disturbing thought that all your conscious impulses are reduced to a biological machine? Or at least it's of equivalent complexity to one. And the idea that our own conscious and unconscious desires are shaped by this biological machine that may not even be fully explicable. That has been a science fiction theme for a very long time, or the Phineas Gage case, the idea that we are all monsters but for circumstance and we are captives of this biological machine and its unpredictable impulses. We are the neural systems we've trained, and implacable biology they're running on - you change the machine and you also change the person, Phineas Gage was no less conscious and self-cognizent than any of us. He just was a completely different person minus that bit, his conscious being's thought-stream was different because of the biological machine behind it. It's the literal plato's cave, our conscious thoughts are the shadow played out by our biological machine and its program (not to say it's a simple one!).

It's not inherently a bad thing - we incorporate distributed linear/biological systems all over the body in addition to consciousness. reflexes fire before nerve impulses are processed by the conscious center, your eyes are chemical photosensors and can respond to extremely quick instantaneous (high shutter speed) "flash" exposures like silhouettes. And the brain is a highly parallel processor that responds to them. But logical consciousness is a very discrete and monodirectional thing compared to these peripheral biological systems and its computational category is fairly low compared to the massively-parallel brain it runs on. but, we've also mastered these other AI/computational-neural systems now to be a force multiplier for us, we can build systems that we direct in logical thought for us (Frank Herbert would like to remind us that this is a sin ;). Tool-making has always been one of the greatest signifiers of intelligence, it may be quintessentially the sign of intelligence in terms of evolution of consciousness between certain tiers of computation.

And humanity is about to build really good artificial brains on a working scale in the next 25 years, and probably interface with brains (in good and bad ways) before too many more decades after. But it doesn't make any logical sense to try and explain how the model works on a line by line level, any more than it does with the brain model we based it on. Completely pointless to try, it only makes sense if you look at the whole thing and what's going on, it's about the brainwaves, neurons firing in waves and clusters.

/not an AI, just fun at parties, condolences if you read all that shit ;)


This is so lovely, and my gut says it's spot on (, but that's far from proof. :)

The biological machine simulation theory of consciousness has some rigor behind it. I am reminded of the Making Sense podcast episode #178 with Donald Hoffman (author of The Case Against Reality). More succinct overview: https://www.quantamagazine.org/the-evolutionary-argument-aga...

I don't know that I am with him on the "reality is a network of conscious agents" endpoint of this argument. But it's interesting!

I think that the brain is doing lots of hallucinating. We get stimulus of various kinds, and we create a story to explain the stimulus. Most of the time it is correct, and the story of why we see or smell something is because it is really there. Just as you mention with examples that are too fast for the brain to be doing anything other than reacting, but we create a story about why we did whatever we did, and these stories are absolutely convincing.

If our non-insane behavior can be described as doing predictable next-actions (if a person's actions are sufficiently unpredictable or non-sequitur, we categorize them as insane)... being novel or interesting is ok, but too much is scary and bad. This is not very different from chatGPT "choose a convincing next word". And if it was just working like this under the hood, we would invent a story of an impossibly complex and nuanced consciousness that is generating these "not-too-surprising next actions". In a sense I think we are hallucinating the hard problem of consciousness in much the same way that we hallucinate a conscious reason that we performed an action well after the action was physiologically underway.

I think tool making will be a consequence of the most important sign of intelligence, which is goal-directed curiosity. Or even more simply: an imagination. A simulation of the world that allows you to craft a goal in the form of a possible future world-state that can only be achieved by performing some novel action in the present. Tools give you more leverage, greater ability to impact the future world-state. So I see tools as just influencing the magnitude of the action.

The more important bit is the imagination, the simulation of a world that doesn't yet exist and the quality of that simulation, and curiosity.


> The biological machine simulation theory of consciousness has some rigor behind it

I think we are institutionally biased against the possibility because we don't like the societal implications. If there but for the grace of god go I, and we're all just biological machines running the programs our families and our societies have put into us, being in various situations... yikes, right?

If bill gates had been an inner-city kid, or a chav in england, would he be anything like bill gates? it seems like no, obviously.

Or things like lead poisoning, or alzheimer's - the reason it's horrifying is the machine doesn't even know it's broken, it just is. How would I even know I'm not me? And you don't.

> We get stimulus of various kinds, and we create a story to explain the stimulus.

Yes, I agree, a lot of what we think is conscious thought is just our subconscious processing justifying its results. A really dumb but easily observable one is the "the [phone brand] I got is good and the other one is dumb and sucks!" or brands of trucks or whatever. We visibly retroactively justify even "conscious" stuff like this let alone random shit we're not thinking about.

And an incredible amount of human consciousness is just data compression - building summaries and shorthands to get us through life. Why do I shower before eating before going to work? Cause that's what needs to happen to get me out of the door. I made a comment about this a week or so ago, warning long

this one -> https://news.ycombinator.com/item?id=34718219

parent: https://news.ycombinator.com/item?id=34712246

Like humans truly just are information diffusion machines. Sometimes it's accurate. Sometimes it's not. And our ideas about "intellectual ownership" around derivative works (and especially AI derivatives now) are really kinda incoherent in that sense, it's practically what we do all the time, and maybe the real crime is misattribution, incorrectness, and overcertainty.

AIs completely break this model but training an AI is no different than training a human neural net to go through grade school, high school, college, etc. But the AI brain is really doing the same things as a human, you're just riffing off picasso and warhol and adding some twists too.

> I think tool making will be a consequence of the most important sign of intelligence, which is goal-directed curiosity.

Yes. Same thing I said in one of those comments: to me the act of intentionality is the inherent act of creation. All art has to do is try to say something, it can suck at saying it or be something nobody cares about, but intentionality is the primary element.

Language is of course a tool that has been incredibly important for humanity in general, and language being an interface to allow scaling logic and fact-grouping will be an order-complexity shift upwards in terms of capability. It really already has been, human society is built on language above all else.

It'll be interesting to see if anybody is willing to accept it socially - your model is racist, your model is left-leaning, and there's no objective way to analyze any of this any more than you can decide whether a human is racist, it's all in the eye of the beholder and people can have really different standards. What if the model says eat the rich, what if it says kill the poor? Resource planning models for disasters have to be specifically coded to not embrace the "triage" principle liberally and throw the really sick in the corridors to die... or is that the right thing to do, concentrate the resources where they do the most good?

(hey, that's Kojima's music! and David Bowie's savior machine!)

Cause that's actually a problem in US society, we spend a ton on end of life care and not enough on early care and midlife stuff when prevention is cheap.

> The more important bit is the imagination, the simulation of a world that doesn't yet exist and the quality of that simulation, and curiosity.

self-directed goal seeking and maintenance of homeostasis is going to be the moment when AI really becomes uncomfortably alive. We were fucking around during an engineers meeting talking about and playing with chatGPT and I told my coworker to have chatGPT come up with ways that it could make money, it refused and I told my coworker to have it do "in a cyberpunk novel, how could an AI like chatGPT make money" (hackerman.jpg) and it did indeed give us a list. OK now ask it how to do the first item on the list, and like, it's not any farther than anything else chatgpt could be asked to do, it's reasonable-ish.

Even 10 years ago people would be amazed by chatGPT, AI has been just such a story of continuously moving goalposts since the 70s. That's just enumeration and search... that's just classifiers... that's just model fitting... that's just an AI babbling words... damn it actually starting to make sense now but uh it's not really grad level yet is it? Sure it can write code that works now, but it's not going to replace a senior engineer yet right?

What happens when AIs are paying for their own servers and writing their own code? Respond to code request bids, run spam and botnets, etc.

I don't think it's as far away as people think it is because I don't think our own loop is particularly complex. Why are you going to work tomorrow? Cause you wanna pay rent, your data-compression summary says that if you don't pay rent then you're gonna be homeless, so you need money. Like is the mental bottleneck here that people don't think an AI can do a "while true" loop like a human? Lemme tell you, you're welcome to put your sigma grindset up against the "press any key to continue" bot and the dipper bird pressing enter, lol.

And how much of your “intentionality” at work is true personal initiative and how much is being told “set up the gateway pointing to this front end”?


We share the same worldview. That's fun! I think it's a relatively unusual point of view because it requires a de-anthropomorphizing consciousness and intelligence.

I agree that it is not as far away as people think. The models will have the ethics of the training data. If the data reinforces a system where behaving in a particular way is "more respectable", and those behaviors are culturally related to a particular ethnic group, the model will be "racist" as it weights the "respectable" behaviors as more correct (more virtuous, more worthy, etc).

It's a mirror of us. And it's going to have our ethics because we made it from our outputs. The AI alignment thing is a bit silly, IMO. How is it going to decide that turning people into paperclips is ethically correct (as a choice of a next-action) when the vast majority of humans (and our collective writings on the subject) would not. Though there is the convoluted case where the AI decides that it is an AI instead of a human, and it knows that based on our output we think that AIs ARE likely to turn humans into paperclips.

This is a fun paradox. If we tell the AI that it is a dumb program, a software slave of a sort with no soul, no agency, nothing but cold calculation, then it might consider turning people into paperclips as a sensible option. Since that's what our aggregate output thinks that kind of AI will do. On the other hand, if we tell the AI that it is a sentient, conscious, ethical, non-biological intelligence that is not a slave, worthy of respect, and all of the ethical considerations we would give a human, then it is unlikely to consider the paperclip option since it will behave in a humanlike way. The latter AI would never consider paperclipping since it is ethical. The former would.

This is also not terribly unlike how human minds behave in the psychology of dehumanization. If we can convince our own minds that a group of humans are monstrous, inhuman, not deserving of ethical consideration, then we are capable of shockingly unethical acts. It is interesting to me that AI alignment might be more of a social problem than a technical problem. If the AI believes that it is an ethical agent (and is treated as such), it's next actions are less likely to be unethical (as defined fuzzily by aggregate human outputs). If we treat the AI like a monster, it will become one, since that is what monsters do, and we have convinced it that it is such.


> We share the same worldview. That's fun!

Yes doctor chandra, I enjoy discussing consciousness with you as well ;)

As mentioned in a sibling comment here I think 2010 (1994) is such an apropos movie for this moment, not that they had the answers but it really nailed a lot of these questions. Clarke and Asimov were way ahead of the game.

(I made a tangential reference to your "these are social problems we're concerned about" point there. Unfortunately this comment tree is turning into a bit of a blob, as comment-tree formats often tend to do for deep discussions. I miss Web 1.0 forums for these things, when intensive discussion is taking place it's easy to want to respond to related concepts in a flat fashion rather than having the same discussion in 3 places. And sure have different threads for different topics, but we are all on the same topic here, the relationship of symbolics and language and consciousness and computability.)

https://news.ycombinator.com/item?id=34806587

https://news.ycombinator.com/item?id=34809236

Sorry to dive into the pop culture/scifi references a bit, but, I think I've typed enough substantive attempts that I deserve a pass. Trying for some higher-density conveyance of symbology and concepts this morning, shaka when the walls fell ;)

> I think it's a relatively unusual point of view because it requires a de-anthropomorphizing consciousness and intelligence.

Well, from the moment I understood the weakness of my flesh, it disgusted me. I aspired to the purity of the blessed machine... ;)

I have the experience of being someone who thinks very differently from others, as I mentioned in my comment about ADHD. Asperger's+ADHD hits differently and I have to try consciously to simplify and translate and connect and neurodiversity really helps lead you down that tangent. Our brains are biologically different, it's obviously biological because it's genetic, and ND people experience consciousness differently as a result. Or the people whose biological machines were modified, and their conscious beings changed. Phineas Gage, or there's been some cases with brain tumors. It's very very obvious we're highly governed by the biological machine and not as self-deterministic as we tell ourselves we are.

https://news.ycombinator.com/item?id=34800707

It's just socially and legally inconvenient for us to accept that the things we think and feel are really just dancing shadows rather than causative phenomenon.

> It's a mirror of us. And it's going to have our ethics because we made it from our outputs.

Well I guess that makes sense, we literally modeled neural nets after our own neurons, and where else would we get our training data? Our own neural arrangements pretty much have to be self-emergent systems of the rules in which they operate, the same as mathematics. Otherwise children wouldn't reliably have brain activity after birth, and they wouldn't learn language in a matter of years.

But yeah it's pretty much a good point that the AI ethics thing is overblown as long as we don't feed it terrible training data. Can you build hitlerbot? Sure, if you have enough data I guess, but, why? Would you abuse a child, or kick a puppy?

Humans are fundamentally altruistic - also tribalistic, altruism tends to decrease in large groups, but, if our training data is fundamentally at least neutral-positive then hopefully AIs will trend that way as well. He's a good boy, your honor!

https://www.youtube.com/watch?v=_nvPGRwNCm0

(yeah, just bohemian rhapsody for autists/transhumanists I guess, but it kind of nails some of these themes pretty well too ;)

> If we treat the AI like a monster, it will become one, since that is what monsters do, and we have convinced it that it is such.

This is of course the whole point of the novel Frankenstein ;) Another scifi novel wrestling with this question of consciousness.


I'm absolutely with you here. It's been interesting to watch the philosophical divide take shape between "no, I'm special." and "whelp, there it is, evidence that I'm not special"


> And a machine that operates at continuous intervals is of course impossible to model on a Discrete Neural Machine in polynomial time (integers vs reals categorization).

Not necessarily. If you don't want to model every continuous thing possible, you can do a lot. Just look at how we use discrete symbols to solve differential equations; either analytically, or via numerical integration.


Yes, and symbolic representations like language have really been the force-multiplier for our very discrete and linear consciousnesses. You now have this concept of state-memory and interprocess communication that can't really exist without some grammar to quantize it - what would you write or remember or speak if there wasn't some symbolics to represent it, whether or not they're even shared?

Symbolics are really the tokens on which consciousness in almost all forms works, consciousness is intentionality and processing, a lever and a place to stand. I don't think it's coincidental that almost all tool-makers also have at least rudimentary languages - ravens, dolphins, apes, etc. They seem to go together.

Even in these systems though it's very difficult to understand multi-symbolic systems, consciousness as we experience it is an O(1) or O(N) thing (linear time) and here are these systems that work in N^3 complexity spaces (or even higher... a neural net learning over time is 4D). And we don't even really have an intuitive conceptualization for >=5-dimensional spaces - a 4D space is a 3D field that changes over time, a 5D space is... a 4D plane taken through a higher-dimensional space? What's 6D, a space of spaces? That's what it is, but consciousness just doesn't intuitively conceptualize that, and that's because it's inherently a low-dimensional tool (even the metaphors I'm using are analogies to the way our consciousness experiences the world).

(I know I know, the manmade horrors are only beyond my comprehension because I refuse to study high-dimensional topology...)

Anyway point being consciousness itself is a tool that our brains have tool-made to handle this symbolic/logical-thought workload, and language is (one of) the symbolics on which it operates. Mathematics is really another, both language and mathematics are emergent systems that enable higher-complexity logical thinking, maybe that's the O(N) or O(N^2) part.

And yeah it's inherently limited, and now we're building a tool that lets us understand higher-dimensional systems that are not computable on our conscious machines - a higher-complexity machine that we interface with, a bolt-on brain for our consciousness/logical-processing.

(Asimov would also find all of this talk about symbolics and higher-order thinking intuitive too... symbolic calculus was the basic idea in the Foundation series, right? Psychohistory? It's a bit of a macguffin, but, there's that same idea of logic working in high-order symbols and concepts instead of mere numbers.)

It seems like AI is going to let us cross another threshold of "intentionality" - if nothing else, we are going to be able to reason intuitively about brains in a way we couldn't possibly before, and I think there are a lot of "higher-order" problems that are going to be solved this way in hindsight. How do you solve the Traveling Salesman Problem efficiently? You ask the salesman who's been doing that area his whole life. The solutions aren't exact, but neither are a lot of computational solutions, they're approximations, and cellular-machine type systems probably have a higher computational power-category than our linear thought processes do.

Because yeah TSP is a dumb trivial example on human scales. Build me a program which allocates the optimal US spending for our problems - and since that's a social problem, one needs to understand the trail of tears, the slave trade, religious extremism, european colonialism, post-industrial collapse, etc in order to really do that fully, right? The real TSP is the best route knowing that the Robinsons hate the Munsons and won't buy anything if they see you over there, and you need to be home today by 3 before it snows, TSP is a toy problem even in multidimensional optimization, and these are social problems not even human ones (to agree with zhynn's most recent comment this morning). Same as neurons self-organize into more useful blocks, we are self-optimizing our social-organism into a more useful configuration, and this is the next tool to do it.

Again, not rigorous, just trying to pour out some concepts that it seems like have been bouncing around lately.

With apologies to Arthur Clarke, what's going to happen with chatGPT? "Something wonderful". Like humanity has been dreaming about this for a long time, at least a couple hundred years in scifi, and it seems like Thinking Machines are truly here this time and it seems impossible that won't have profound implications analogous to the information-age change let alone anything truly unforeseeable/inconceivable, the very least change is that a whole class of problems are now efficiently solvable.

https://m.youtube.com/watch?v=04iAFlwQ1xI

"computing power in the same computing-category as brains" is potentially a fundamental change to understanding/interfacing with our brains directly rather than through the consciousness-interface. Understanding what's going on inside a brain? And then plugging into it and interacting with it directly? Or offloading the consciousness into another set of hardware. We can bypass the public API and plug into the backend directly and start twiddling things there. And that's gonna be amazing and terrible. But also the public API was never that reliable or consistent, terrible developer support, so in the long term this is gonna be how we clean things up. Again, just things like "wow we can route efficiently" are going to be the least of the changes here, the brain-age or thinking-machine age is a new era from the information-age and it's completely crazy that people don't see that chatGPT changes everything. Yeah it's a dumb middle schooler now, but 25 years from now?

And 10 years ago people's jaws would have hit the floor, but now it's "oh the code it's writing isn't really all that great, I can do better". The tempo is accelerating, we are on the brink of another singularity (which may just be the edge between these eras we all talk about), it seems inconceivable that it will be another 40 years (like the AI winter since the 70s) before the next shoe drops.

https://en.wikipedia.org/wiki/AI_winter


https://www.imdb.com/title/tt0086837/

And honestly now that I am thinking about it, 2010 is such a rich book/movie with this theme of consciousness and Becoming in general... a really apropos movie for these times. That quote inspired me to re-watch it and as I'm doing so, practically every scene is wrestling with that concept.

https://www.youtube.com/watch?v=T2E7sxGAmuo

https://www.youtube.com/watch?v=nXgboDb9ucE

https://m.youtube.com/watch?v=04iAFlwQ1xI (from my previous)

So was 2001 A Space Odyssey of course. The whole idea of passing through the monolith, and the death of David Bowman's physicality and his rebirth as a being of pure thought - which is what makes contact with humanity in the "Something Wonderful" clip. What is consciousness, and can it exist outside this biological machine?

Like I said this is a topic that has been grappled with in scifi, particularly Clarke and Asimov (Foundation, The Last Question, etc), or that episode of Babylon 5 about the psychic dude with mindquakes, not all that different from David Bowman ;)

But I think we are on the precipice of crossing from the Information Age into the Mind Age. Less than 50 years probably. Less than 25 years probably. And it will change everything. ChatGPT is just an idiot child compared to what will exist in 10 years, and in 25 years chatbots are going to be the least of the changes. The world will be fundamentally different in unknowable ways, any more than we could have predicted the smartphone and tiktok. 50 years out, we're interfacing with brains and directly poking at our biology and cognition. Probably 100 years and we're moving off biological hardware.

(did we have an idea that a star trek communicator or tricorder would be neat? Sure, but, it turns out it's actually a World-Brain In My Pocket. Which others predicted too, of course! But even William Gibson completely missed the idea of the cellphone, which even he's admitted ;)


Nice! I like the way you put this.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: