@gaugevectormoron this was just too good to leave in the tags. Also yay it turns out I’m not alone in this specific oddity of mine :D
I have a question: has anyone else tried using their boobs’ nipples to scroll on a touchscreen or is this an insane person activity I have just attempted?
I tried to google it but it’s a pretty unsearchable query.
And now for something completely different.
This is the ADHD Teapot. I made it in a ceramics class a few years ago. I use it to explain executive dysfunction to people who haven’t come across the term before (and those who think of ADHD mostly as Hyperactive Eight-Year-Old Boy Syndrome).
So, most people’s brains are like a regular shaped teapot with a single spout. Let’s say that your time, energy, focus etc. are the liquid you have in the teapot. Your executive function is the spout that directs the tea into the specific cup you want to fill, aka the task that you’re meant to be doing. Spills happen occasionally, but generally most of the tea goes in the right cup.
If you have executive dysfunction, you have multiple spouts going in different directions. You can try pointing one of them at your chosen cup and you will probably get some liquid in there, perhaps you will even fill it right up (finish the task). But meanwhile, tea is also pouring out of several other places and not going where you want it. If you have another container nearby, perhaps some of it will end up in there. But quite a lot of it is going to end up on the floor and accomplish nothing.
And at the end of the day you’ll have filled one or two cups (or sometimes not even one) compared to the five or six that somebody with the same sized teapot (but only one spout) has filled, and everyone wonders why you’re so bad at getting tea poured, and why you make such a mess in the process.
One day I’d like to spend more time learning pottery and create a really technically good fucked up little adhd teapot. But that’s a long way off since I currently live in the outback and the nearest pottery workshop is some 400km away. But I figure that for now, it might be a useful or interesting metaphor to somebody even in its rough draft form.
This post is the cup I filled instead of cleaning my house btw.
the flag maker is finally in a presentable state :) not the embarrassing pile of default css it was on the first day...
i even added a little OpenGraph image preview!!
What I have to say today is this:
I have noticed that I was so addicted to reddit that even now that I haven't used reddit in months, my fingers will still open a new tab and type reddit.com whenever I need dopamine.
It's only thanks to a site blocker extension that I catch myself and go, Woah there buddy, what are you doing, you're not supposed to do that
This addiction is ingrained so much in me that it's muscle memory now
My friend’s little brother (non-verbal) used to hide people’s shoes if he liked the person, because it meant they had to stay longer. The more difficult it was to find your shoes, the more he liked you.
One day my cousin came over, and she was a bitch. When it was time to leave, my friend’s brother handed her shoes directly to her and she went on and on about how he must have a crush on her because he only “helped” her.
All fancy schmancy generative AI models know how to do is parrot what they’ve been exposed to.
A parrot can shout words that kind of make sense given context but a parrot doesn’t really understand the gravity of what it’s saying. All the parrot knows is that when it says something in response to certain phrases it usually gets rewarded with attention/food.
What a parrot says is sometimes kinda sorta correct/sometimes fits the conversation of humans around it eerily well but the parrot doesn’t always perfectly read the room and might curse around a child for instance if it usually curses around its adult owners without facing any punishment. Since the parrot doesn’t understand the complexities of how we don’t curse around young people due to societal norms, the parrot might mess that up/handle the situation of being around a child incorrectly.
Similarly, AI lacks understanding of what it’s saying/creating. All it knows is that when it arranges pixels or words in a certain way after being given some input it usually gets rewarded/gets to survive, and so it continues to get the sequence of words/pixels following a prompt correct enough to imitate people convincingly (or that poorly performing version of itself gets replaced with another version of itself which is more convincing).
I argue that a key aspect of consciousness is understanding the gravity and context of what you are saying — having a reason that you’re saying or doing what you are doing more than “I get rewarded when I say/do this.” Yes AI can parrot an explanation of its thought process (eli5 prompting etc) but it’s just mimicking how people explain their thought process. It’s surface level remixing of human expression without understanding the deeper context of what it’s doing.
I do have some untested ideas as to why its understanding is only surface level but this is pure hypothesis on my part. In essence I believe humans are really good at extrapolating across scales of knowledge. We can understand some topics in great depth while understanding others similarly on a surface level and go anywhere in between those extremes. I hypothesize we are good at that because our brains have fractal structure to them that allows us to have different levels of understanding and look at some stuff at a very microscopic level while still considering the bigger picture and while fitting that microscopic knowledge into our larger zoomed out understanding.
I know that neural networks aren’t fractal (self-similar across various scales) and can’t be by design of how they learn/how data is passed through them. I hypothesize that makes them only understand the scale at which they were trained. For LLMs/GANs of today that usually means a high level overview of a lot of various fields without really knowing the finer grain intricacies all that well (see how LLMs make up believable sounding but completely fabricated quotes for long writing, or how GANs mess up hands and text once you zoom in a little bit).
There is definitely more research I want to do into understanding AI and more generally how networks which approximate fractals relate to intelligence/other stuff like quantum physics, sociology, astrophysics, psychology, neuroscience, how math breaks sometimes etc.
That fractal stuff aside, this mental model of generative AI being glorified parrots has helped me understand how AI can seem correct at first glance/zoomed out yet completely fumble on the details. My hope is that this can help others understand AI’s limits better and therefore avoid putting so much trust in it that AI starts to have the opportunity to mess up serious stuff.
Think of the parrot cursing around children without understanding what it’s doing or why it’s wrong to say those words around that particular audience.
In conclusion, I want us to awkwardly and endearingly laugh at the AIs which mimic the squawks of humans rather than take what they say as gospel or as truth.
This reminds me of my silly little web projects where I’d just play around with distance functions or GPGPU or whatever
Remember it is a competition and you are here to win, you WILL be the faggiest person on the train
everyone who reblogs femboys is bisexual. i don't make the rules.
20, They/Them. Yes I have the socks and yes I often program in Rust while wearing them. My main website: https://zephiris.me