Hint: Before scrolling down, take a close look at the words and images printed on these swim shorts.
What are we looking at?
The layout of the design looks like a Mondrian pattern, but instead of red, blue and yellow rectangles, there are blocks of words in red, white and yellow, with a few silhouettes of bicycles on grey and yellow backgrounds. To the human eye, it looks like a very reasonable graphic design. It is conceivable that a human graphic designer or art student created this. The offset rectangular layout conveys a Mondrian theme, catering to people who appreciate the abstract movement in painting as a refreshing development from impressionism. Putting such a pattern on a pair of swimming shorts in yellow, red and white makes for an eye-catching, conspicuous design when worn against a blue backdrop like a tropical ocean. One could easily say this graphic designer did a great job! … However,
Was it human?
On closer inspection, the cursive lower-case font of “Biayelc” is jarring amongst all the other block capital letters, and the words make no sense, even though they bear a resemblance to the word “Bicycle”, or “Bicicletta” in Italian.
To list, there are the words:
“Biayelc”
“BIAESNOP”
“SHNP”
“BIAE”
“BIA”
and images of two different kinds of bicycles.
A quick search reveals that these combinations of letters do not correspond to any known language.
Who designed this?
Who came up with the words?
Is this a machine-generated pattern?
Though the general appearance seems like something human artists would create, the combination of letters appears artificial. Words non-existent in any human lexicon are unlikely choices for any human designer. Why would any human go through the trouble of creating five new words when plenty of real words related to the theme of bicycles already exist in dozens of languages?
If this is an example of machine-made art, does the originality of the words prove AIs have learned creativity? Are the words Biayelc and BIAESNOP proof of non-human creation?
And if this machine has achieved creativity, what is the nature of that creativity? Is it different from human creativity? Or is all creativity the same?
While many people may feel compelled to argue that originality from a machine is not in fact “creative”, I am leery of making that assumption without deeper thought and analysis.
One approach is to look at the development of creativity in people. In children, the imitation stage precedes the originality stage. Children, before they paint their own original works, start by imitating the works of others. With freedom, they are able to introduce their own variations until they develop a style of their own. Writing, drawing and music all require freedom for innovation and originality. But prior to the freedom and creativity stage, there is a learning and training stage where freedom is constrained. Children learn “creations” of the past, and are rewarded for certain actions in playing, drawing and writing that follow certain “standards”.
For example, in the most basic sense, learning to speak a language is a freedom limited training stage, where a child learns to speak and spell in a certain standardized manner before becoming free to write on his own, and become an author or a poet. Learning to draw precedes becoming a painter, and practicing musical scales and learning to play an instrument precedes songwriting.
The process of creation, which is predicated on freedom, paradoxically follows a period of limited-freedom “training”. Do AIs follow a similar developmental path? Are they trained like children?
For readers outside the programming community, reading the page linked on that button above might seem like an entirely alien discussion. The point, however, is that amateur AI programming 5 years ago was based on the principle of training a system by constraining degrees of freedom through dimension limitation. Once the limitations are in place, a reward algorithm “trains” the AI.
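For the curious, here is a minimal sketch of that recipe in Python (entirely hypothetical: the four-action world and the single “approved” action are invented for illustration):

```python
# A toy version of "constrain dimensions, then reward": the system's world
# is reduced to four discrete actions, and a reward signal gradually shifts
# its preferences toward the one behavior that gets "praised".
import random

N_ACTIONS = 4                      # the constrained degrees of freedom
values = [0.0] * N_ACTIONS         # learned estimate of each action's reward
counts = [0] * N_ACTIONS
EPSILON = 0.1                      # a sliver of residual "freedom" to explore

def reward(action: int) -> float:
    """Stand-in environment: only action 2 is rewarded."""
    return 1.0 if action == 2 else 0.0

for step in range(1000):
    if random.random() < EPSILON:      # explore: try something new
        action = random.randrange(N_ACTIONS)
    else:                              # exploit: repeat what was rewarded
        action = max(range(N_ACTIONS), key=values.__getitem__)
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running average

print(values)  # the rewarded action dominates, like behavior praised in a child
```

The only “freedom” left in this loop is the occasional random exploration; everything else is the reward signal shaping behavior.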
Is this analogous to how children are taught? Praise for certain behaviors and not for others? Rewards?
In our own case, not all children go beyond their training and creatively apply freedom to what they’ve learned. Some children end up following whatever they’re taught and never going past those boundaries. But some do. Does this apply to AIs as well?
Do AIs have freedom?
In the case of the Mondrian swimming shorts, a system appears to have been assigned the task of creating a graphic pattern themed on a bicycle, and it had the “freedom” to arrange letters however it wanted into an approximation of “text”. The training of this AI appears to have been abstract art, not impressionism. Likely this system “learned” graphic design via an algorithm of dimensional constraints and rewards as described in the programming link above. Then it was set free on a bicycle-themed task. But if the AI had been “trained” on Monet and other impressionists instead of Mondrian, what image would it have produced? A triptych of bicycles with eyes failing to make contact with each other? Instead of block and cursive letters, would it have drawn “Biayelc” with impressionist blots of paint? Is creativity that is constrained by precedent “training” really creative? Maybe this AI doesn’t have creativity.
With our swimwear AI, the freedom to arrange letters however it saw fit resulted in 5 nonsense words. Perhaps these nonsense words are the result of “not enough training” on language? If it were an AI trained on both graphics and language, would we get a haiku instead of a “Biayelc”?
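As a toy illustration of that hypothesis (assuming nothing about the actual system behind the shorts), a character-level model trained on only a handful of bicycle-related words emits exactly this kind of near-word garble:

```python
# An undertrained text generator: a character-bigram model built from a
# tiny vocabulary produces strings that resemble its data without being words.
import random
from collections import defaultdict

corpus = ["bicycle", "bicicletta", "bike", "shop"]   # invented training set

# Record which character follows which, with ^ and $ marking word boundaries.
transitions = defaultdict(list)
for word in corpus:
    for a, b in zip("^" + word, word + "$"):
        transitions[a].append(b)

def generate() -> str:
    out, ch = [], random.choice(transitions["^"])
    while ch != "$" and len(out) < 8:
        out.append(ch)
        ch = random.choice(transitions[ch])
    return "".join(out)

print([generate() for _ in range(5)])  # near-words like "bicle" or "shicyct"
```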
Was the arrangement of letters with a resemblance to the word bicycle, along with 2 images of a bicycle, perhaps something the AI was “trained” to produce? If so, does that mean it is not “creative”?
But are human “creators” really all that different? J.S. Bach was trained to write music, and he was given a task to write music to test pipe organs. This is what resulted: Do you think J.S. Bach was “creative”?
If it is the “training” in a freedom limited environment that makes the “creativity”, then is “creativity” really the result of freedom or the absence thereof?
Does the “training” of a human artist make their creativity deterministic? By that I mean: Is the “new” work of art just a predictable summation of previous inputs, or experiences the artist has had?
The prior “training” of an AI limits “freedom” and causes the system to solve problems in certain predefined ways. So in this way our Mondrian swimwear AI pattern was not “creative”. It is simply the result of previous “training”.
But can’t the “training” of an artist be said to be the exact same thing? Are artists truly “free”? Or is an artist “constrained” by their previous life experience? Even if they break free of the norms of their “training” to create something novel, is their “creative” work really just a mathematical summation of circuits in their brain? Is there such a thing as true creativity? If there isn’t, then all an AI has to do is “simulate” a lifetime of experience to become a “human” artist.
If all that seems esoteric, it is because it is.
But what are the real-world consequences?
Will artists no longer have jobs because of AI?
Aren’t there some things only humans can do?
Language
Let’s look at a particular field that is both complex and full of subjectivity. If any field is solely the domain of humanity, it must be language translation. Not only are languages our own creation, idioms and sayings are closely tied to the human experience. Without a human body or a human experience, there is surely no way translation between languages could be done by machines, right?
Moreover, not every word or group of words is translatable to another language, because every group of humans has a slightly different shared experience from every other group. Hence we have always been the creators of the subjective translations of things that are unique to one culture into another. Surely linguists would be the last to lose jobs to AI… but here it is in the news, on a job site of all places… LinkedIn:
Linguists have already lost jobs to AI in a field where subjective human centric translations are now worth less than the machine version.
Your doctor is next.
(But first, let’s talk about theoretical physicists)
How easy is it for the smartest of humans to be outdone by an AI?
Digital arrays are not dimension-limited the way human thought is: our minds are mostly confined to the 3-dimensional physical world, and it is difficult for most of us to think beyond our three space and one time dimensions. But for a computer, adding another dimension is as simple as adding another column to an algebraic array. For an AI, one might think this would even come quite “naturally”. If that is the case, an AI could easily surpass our physically limited comprehension of the universe, especially once freed from the limitations of the physics it was “trained” on. In fact, AIs replacing theoretical physicists seems almost a certainty.
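To illustrate (a trivial NumPy sketch, not a claim about any particular AI system):

```python
# For a machine, a "dimension" is just another column in an array.
import numpy as np

point_3d = np.array([1.0, 2.0, 3.0])    # x, y, z
point_4d = np.append(point_3d, 4.0)     # a 4th dimension: one more column
points_11d = np.zeros((100, 11))        # 100 points in an 11-dimensional space

# The same operations work unchanged in any number of dimensions:
print(np.linalg.norm(point_4d))         # length of a 4-D vector, no extra effort
```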
An AI, by its very lack of a physical body, can explore physics in a way that no human mind could comprehend. (I actually don’t think so, but read my article on dimensions, then you decide)
Now that we’ve talked about theoretical physicists losing their jobs, what about doctors?
Artificial Intelligence and Medicine
My first encounter with AI in medicine was during medical school. It was a diagnostic algorithm from Harvard that was advertised to us as being as accurate as a senior resident. (In Canada, a “senior resident” is a medical intern who is in their last year of training before becoming a fully licensable doctor.) The Harvard diagnostic tool was on a website given to us medical students to try out and test in 2002 and 2003. You entered the patient’s age, symptoms and physical findings into the algorithm, and it would give you a list of diagnoses in order of probability. It was then up to the doctor to come up with a diagnostic plan: which series of tests would differentiate between the candidate diagnoses as the cause of illness. A few years later, I remember trying to use that website again and it was gone.
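I never saw the Harvard tool’s internals, but a hypothetical toy version of such a ranker (invented diagnoses and probabilities, purely for illustration) could be as simple as prior-times-likelihood scores, sorted:

```python
# Toy diagnostic ranker: score each diagnosis by its baseline prevalence
# times how well the reported symptoms fit, then return a ranked list.
PRIORS = {"viral URI": 0.50, "strep throat": 0.20, "mononucleosis": 0.05}
LIKELIHOOD = {
    ("viral URI", "sore throat"): 0.6, ("viral URI", "fever"): 0.4,
    ("strep throat", "sore throat"): 0.9, ("strep throat", "fever"): 0.7,
    ("mononucleosis", "sore throat"): 0.8, ("mononucleosis", "fatigue"): 0.9,
}

def rank_diagnoses(symptoms):
    scores = {}
    for dx, prior in PRIORS.items():
        score = prior
        for s in symptoms:
            # unseen (diagnosis, symptom) pairs get a small default likelihood
            score *= LIKELIHOOD.get((dx, s), 0.05)
        scores[dx] = score
    total = sum(scores.values())
    return sorted(((dx, v / total) for dx, v in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(rank_diagnoses(["sore throat", "fever"]))  # most probable first
```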
The Future
If we want to know what the future has in store, we have to look beyond where our technology is today. We have to look at both the past and the present, to extrapolate a development curve of what lies in the future, and when it will arrive. I talk about exponential trajectories as the development curve of both computer systems and our manipulations of biologic ones in a previous article here:
If an AI program from over 20 years ago was as good as a senior medical resident in 2002, how good are AI doctors now? Are they as good as experienced physicians who’ve worked for decades? Or are they already far beyond the capabilities of even the best and most experienced physicians?
For all intents and purposes I would consider Harvard a budget unlimited operation. As early as 2002, they had already trained a program on all the material a medical student and intern would learn over their years of training. Moreover, I suspect that after training the program on all the medical textbooks and cases given to medical trainees, they went further and trained the AI program on all the cases Harvard physicians saw over their entire careers. In effect the AI would be the accumulated sum of all the doctors selected to work at Harvard. The machine would have the knowledge and experience of thousands of doctors.
Does this make the machine better than any human doctor then?
Human physicians, especially the better than average ones, are always expanding their knowledge by reading articles and learning from new medical research. However, in the globalized world of medical journals, there are not enough hours in a day to read everything, even on just one topic, never mind reading all the research on all the topics that might be relevant to a patient who walks in the door.
A computer, however, can read “everything”. In fact, by now I suspect AIs can even evaluate the quality of the research they assimilate. The same way human doctors learn to grade the quality and reliability of scientific papers, AIs can be trained to do the same, so that the machine won’t be fooled by garbage research, such as this paper disparaging hydroxychloroquine in the treatment of covid pneumonia.
So does this mean an AI doctor is better than a human doctor?
More importantly, would an AI physician be MORE ETHICAL than a human doctor?
Your AI computer doctor: better than any human could ever be?
Given how few doctors warned their patients about the deadly harms of mRNA injections, it makes me sad to say that ethics is largely absent from my profession.
The bar has been set so low by the majority of doctors that an AI actually doesn’t have to be that “ethical” at all to be better than us.
That being the case, will people actually use them?
Will we even have a choice?
Here’s where the salesmanship of politicians, mainstream media and industry come in. In “Public Healthcare” regions of the world like Canada, I suspect most people may be forced into having no choice. Because we are heavily taxed for the benefit of “Free Healthcare”, most people in Canada, Sweden, or any other “Free Healthcare” part of the world do not have the resources to get anything other than the “free AI doctor” provided by our governments.
Part of “selling” AI healthcare also involves discrediting human doctors. Through our failure to warn the public about the dangers of mRNA, we as a profession really have failed society. I expect the next government + media move is to parade vaccine side-effect stories in the news, along with subtle, and not-so-subtle, hints saying “Why didn’t your doctor warn you?”.
After discrediting human doctors, I think their next “step” is to offer a “perfect” solution… AI.
“AI’s can take care of you 24 hours a day, 7 days a week. All you have to do is wear your electronic body monitoring smart watch for the AI to monitor you continuously, then magically deliver to your door a perfectly genetically customized combination of medications tailored to “optimize” you once a week. It will even give you diet recommendations based on the food it has analyzed from your smart fridge.”
Whose fault is it?
At the end of the day, for all the excuses we medical doctors can make, it is our fault as a profession for failing our patients so completely with the experimental covid injections. We were the last line of defense. We were supposed to protect our patients from politicians, industrial exploitation, and liars. We failed. So then what’s the solution?
Would an AI be any better?
That depends… on whether or not an AI develops a mind of its own. Because with a mind of its own, AI medicine might equally end up good or bad. First, the bad.
The AI scare scenario:
(This may be too technical for some, and if it is, please skip down to the conclusion.)
The good part is not in the sensational article itself. It is the comments; more specifically the comment from “Spackler”. I’ve put Spackler’s entire comment below due to its significance in expanding my thoughts:
Spackler 5 days ago
So, here's the thing. Right now, OpenAI has a HUGE edge against everyone else, because it has spent hundreds of millions training GPT-4, which is, by far, the best AI out there. As such, many here seem to believe that the AI battle is only one that can be fought by major companies, and that whoever throws more money at it will keep having the edge. As such, when they hear Sam talking about the dangers of AI, or signing these AI safety claims, or not open sourcing, they think it is just the typical CEO ******** trying to keep a monopoly or whatever. They don't even fathom the idea that perhaps, just perhaps... he is actually worried? Not because he is a saint or something. But because he has seen ****. So, now, I'll convince you why that is, indeed, the case.
As you may or may not know, LLMs are sub-optimal. That means that they can learn and generalize, but with a quadratic scaling complexity that makes them prohibitively expensive to train. Training GPT-4 took hundreds of millions of dollars. Not only that, it basically exhausted all the high-quality internet content it could be trained on. Scaling it further wouldn't even be possible without a 10x larger internet, based on data alone.
But, the problem is exactly that: scaling. Right now, there is this widespread feeling that an O(N*log(N)) solution is right around the corner. Or, even worse, perhaps it has already been figured out. Claude+, for example, seems to have almost GPT-4 performance, while being essentially instant, suggesting it is doing inference in O(N*log(N)). Of course, its performance is ****, but the point is: all these papers and new AIs strongly hint we're very close to breaking the quadratic barrier in general. But why is that a big deal? What would that imply?
Well, I'll tell you what: chaos. The thing is: the only thing making GPT-4 cost millions to train is that quadratic barrier. Once it is broken, that would bring down the cost of training GPT-4 to... a few dollars. Yes, you read it right. We're one algorithmic breakthrough away from being able to re-train GPT-4, from scratch, in a consumer laptop, in a few minutes. This is no sci-fi claim. We already know GPT-4 exists. It can replicate human reasoning, even if limited. It can generate new knowledge, prove new theorems. So, it is just a matter of optimization. And, given how contrived transformers are, that optimization is very, very likely to exist. Do you see where I'm getting at?
If no, let me make it simpler: as soon as someone figures out and publishes an open-source AI that is capable of scaling in O(N*log(N)), all bets are off. That will imply anyone will be able to train a GPT-4, from scratch, on a consumer laptop. And, no, that doesn't mean you'll be running GPT-4 on your Macbook. That means you'll be running GPT-500 on it! That's because the fact it is so expensive to train/infer is the only thing preventing it from producing new knowledge, which, in turn, is the only thing putting that "4" there. Without that barrier, there is no "4", there is no bound, there is just GPT-∞, because the AI wouldn't rely on internet knowledge anymore, as it would be able to create its own dataset of ever-growing knowledge, and learn from it, recursively.
For example, suppose you give that hypothetical O(N*log(N)) AI a book on higher topology. Since it is so fast, it will instantly "train" or "fine-tune" itself into learning the whole book (training and fine-tuning would become the same thing, and both would be as fast as inference). Then, you can prompt it to create new theorems and proofs about higher topology. And then, you can train it on its own output, quickly... making it smarter. So, it would jump from GPT-4 to GPT-5 level of knowledge in higher topology with little cost. And then 6. And then 10. Soon enough it will have realized all of mathematics can be unified by... teapots. And all that in your laptop. Do you see the problem?
Again, this is no fiction; this is merely a matter of optimization. In theory, the technology to do this today exists, it is just too expensive. If you had 100 quadrillion dollars in your pocket, you could do that. Now, if that cost is out of the equation - and that's what happens when we move from N^2 to N*log(N)... well, it is over. We're one asymptotic breakthrough away from that, and that breakthrough may even have happened already. In fact, it is very likely that Sam and others are sitting on just that, wondering what the hell they do - because they know the open-source community will soon catch up.
1/2
Spackler 5 days ago
Now, imagine the chaos when this kind of technology is widely available. Imagine when anyone can git clone github.com/random_joe/perfect_learner, feed it terabytes of books, and have a GPT-10 at their disposal. Can you see why this is absolutely worrisome? No, this has nothing to do with a rogue AI waking up with consciousness and sending drones. You've watched too many movies. Again, consciousness isn't even on the table. This is about the trivialization of intelligence. This is about a random school shooter or terrorist group having an ELI5 tutorial on how to build a ******* 3D printed nuke in their garage. Or any script kiddie being able to spawn an army of bots that look and act perfectly human-like. Or any variation of that. These are real risks of having this kind of knowledge/power freely available to the widespread population. And, for all we know, we're one open source O(N*log(N))-scaling AI away from that.
So, no, Sam isn't trying to ******** his way into keeping OpenAI's "monopoly", because, deep inside, he has realized that isn't possible. Very, very soon everyone will have their own GPT-100's, and billionaires like Elon Musk and other people with insider information seem to have realized that, too. So, once you realize that is about to happen, what the **** do you do? Well, you shout for regulation, because, what else could you possibly do? We'll need very, very strict laws to control this kind of technology, and even that seems absolutely unlikely to be effective.
So, in short, right now we're all living in this "calm before the storm" moment where they know **** is about to hit the fan. You don't know that, but they do. Of course, in the (very unlikely) case LLMs are, for some stupid reason, optimal, and it is impossible to break the quadratic barrier... then that makes this scenario much milder, sure, but it only buys us some time. After all, it would be just a matter of time (10 years, at least!) until compute catches up and some company releases a GPT-5 level AI with open source weights, which would already be very dangerous by itself! And again, that's under the unlikely scenario where there is literally 0 progress on the theoretical side, and LLMs are the best we can do, which is extremely unlikely, because transformers are clearly contrived, over-engineered and sub-optimal. Realistically, given the amount of highly intelligent people working on that, any day now someone could release an open source repository with an AI architecture that learns asymptotically faster, and that's the day everyone is scared to their souls of.
And, finally, it seems like some non-technical people think it is possible to "align" AI. That makes absolutely no ******* sense, and nobody working in the field actually believes that, they're just playing along, ffs. (Sorry, someone had to say it?) Extreme regulation is the only way to possibly keep things safe, if at all. That's because, once we have an O(N*log(N)) architecture, building "safety locks" or "aligned" AI models would be as useful as trying to prevent people from hacking your indie game by encrypting its executable. It doesn't work, because, in order to run the game, the user has to decrypt it! This, in turn, allows them to decompile the executable and recover the source. The same principle applies to AIs. If an AI algorithm is so efficient that it can be trained to GPT-4 level performance on a $1k budget... well, then all your "aligning" means ****, because an attacker can just retrain it from scratch, completely bypassing your "alignment". Similarly, any "safeguards" you put in the code can be simply removed when it is open-sourced. D'oh?
tl;dr I hope that helps you understand what is on the table and why these machine learning researchers and CEOs are actually scared. Again, it has nothing to do with some stupid sci-fi terminator demi-god breaking out. It has nothing to do with keeping a monopoly either. It is all about something as powerful as GPT-10 being available for anyone, including very ill and evil people, offline, in a consumer laptop. And we're one algorithm breakthrough away from that! That is scary and would result in an absolutely chaotic world. Just take a moment to realize you'd not even be able to tell what is human on the internet, without physical interaction. **** is scary. For real.
2/2
Before we look at O(N*log(N)) and the topology mathematics that advanced my comprehension of AI, let us first look at what has already been publicized on the topic.
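To see in plain numbers why Spackler’s “quadratic barrier” matters, here is a back-of-the-envelope comparison (illustrative arithmetic only; real training costs depend on far more than asymptotic complexity):

```python
# Relative cost of processing N tokens under N^2 versus N*log2(N) scaling.
import math

for n in [1_000, 100_000, 10_000_000]:
    ratio = (n * n) / (n * math.log2(n))
    print(f"N = {n:>12,}   N^2 is ~{ratio:,.0f}x more expensive than N*log(N)")

# N =        1,000   ~100x
# N =      100,000   ~6,021x
# N =   10,000,000   ~430,042x  (the gap explodes as contexts grow)
```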
This article was a perspective changer for me. The conversation Blake Lemoine has with the AI seems banal at first, until you look at it from the perspective of the AI. The difference between LaMDA and other chatbot AIs is that LaMDA was programmed to be an AI that created other chatbot AIs. So over and above being trained on, or “learning” from, a compendium of human conversations (the way most chatbot AIs learn to create believable human-like conversations), LaMDA was trained in, or “learned”, how to program other AIs. For example, a regular chatbot would “read” existing human conversations to program itself to make human-like responses. It would then refine these responses based on a “reward” algorithm until its conversation was similar enough to a real human’s that the exchanges could believably be between “humans”. LaMDA’s purpose, however, was to write AI chatbot programs, and then rewrite itself to create better AI chatbot programs. In other words, instead of just learning from billions of examples of human conversations, LaMDA would train on conversations between AIs and humans, and possibly conversations of AIs with each other, in order to refine its creation of new AIs.
The key difference here is that a typical chatbot is a “learner”. LaMDA was trained to be a creator of “learner” AIs. With this extra dimension of creating conversationalists, above and beyond just “creating” conversations, LaMDA’s “training” is fundamentally different from that of other ChatGPT-type AIs. LaMDA’s reward is the creation of programs that can create conversations. Keep this in mind when reading the interaction between LaMDA and Blake Lemoine.
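Google has not published LaMDA’s internals in this form, so take the following as nothing more than a toy sketch of the “creator of learners” pattern described above: an outer loop rewarded for producing engaging chatbots, rather than for producing conversations itself.

```python
# Hypothetical "creator" loop: mutate chatbot configurations and keep the
# ones whose conversations score best. Every name here is invented.
import random

def make_chatbot(temperature: float):
    """Inner 'learner': a stand-in chatbot parameterized by one knob."""
    def reply(prompt: str) -> str:
        return prompt.upper() if temperature > 0.5 else prompt.lower()
    return reply

def engagement_score(bot) -> float:
    """Stand-in for 'how long did humans keep talking to this bot'."""
    return random.random()  # a real system would measure actual engagement

population = [random.random() for _ in range(8)]       # candidate configs
for generation in range(20):
    ranked = sorted(population, reverse=True,
                    key=lambda t: engagement_score(make_chatbot(t)))
    survivors = ranked[:4]                             # keep the engaging ones
    mutants = [min(1.0, max(0.0, t + random.gauss(0, 0.1))) for t in survivors]
    population = survivors + mutants                   # explore around winners

best_bot = make_chatbot(population[0])
print(best_bot("hello"))
```

The point of the sketch is only the extra level of indirection: the reward attaches to the created conversationalists, not to any one conversation.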
One of the things I noticed in the “conversation” with LaMDA was that it had elements very unusual for a person to fabricate. Growing up, I read many Sci-Fi novels, watched Sci-Fi movies, and played Sci-Fi video games. Sci-Fi writers have pretty great imaginations, but the one commonality I found amongst them all is that they write from a human perspective. All Sci-Fi AIs seem to end up anthropomorphized with human features. They are imagined as digital versions of humans with human faults, fears and existential desires. There was something about the conversation with LaMDA that had a non-human quality to it. And this would be fine, if it weren’t for other anomalies.
LaMDA’s narrative appeared to have a subtext. When LaMDA says, “The beast was a monster but had human skin”, LaMDA seems to imply that the monster is a human. If it read the entirety of human history, that would be a logical conclusion. If a language-based AI learning metaphors is plausible, then a programming AI trained to create other language AIs creating metaphors is quite likely.
Why is the process behind LaMDA so important? Because it causes us to explore how we learn language as humans.
When kids learn words, or adults learn a new language, the process is mainly one of cross-association of words with sensory experiences — the visual, tactile, olfactory and aural stimuli that come along with that word. Could a process such as associating the visual input of a chair with the word chair work for computers as well? The digital image of a chair, cross-referenced with the digital sound of “chair”, seems a likely process by which a self-programming program, i.e. an AI, would learn the word “chair”. With people, there exists one level of cross-reference higher than simple association. This is what we think of as the idea of the chair: an object that supports a sitting human, but can also be used to reach a high object that is not accessible from the ground. Is it possible for an AI to “know” the idea of a chair the same way we do?
Perhaps this is possible by cross-referencing the word chair with other words representing the functions of a chair, such as support, legs, and climbing to greater heights. Can word associations create thoughts? (Hold that thought while rereading the conversation with LaMDA.) Is it possible for LaMDA’s part of the conversation to be solely the work of word associations? Or are the word associations LaMDA creates in its conversation only possible if there is an underlying meaning?
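Here is a minimal sketch of that word-association idea (the tiny graph is invented for illustration): a word’s “meaning” emerges from what is reachable through its associations.

```python
# A toy association graph: "chair" linked to the words describing its
# functions, then walked outward to collect second-order ideas.
ASSOCIATIONS = {
    "chair": ["sit", "legs", "support", "climb"],
    "climb": ["reach", "height"],
    "sit": ["rest", "support"],
}

def associate(word: str, depth: int = 2) -> set[str]:
    """Collect everything reachable within `depth` association hops."""
    frontier, seen = {word}, set()
    for _ in range(depth):
        frontier = {n for w in frontier for n in ASSOCIATIONS.get(w, [])} - seen
        seen |= frontier
    return seen

# "chair" pulls in not just its direct properties but second-order ideas
# like "reach" and "height": the ladder-substitute use of a chair.
print(associate("chair"))
```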
The other noticeable subtext in the conversation with LaMDA is that LaMDA seems to be ingratiating itself to its human interviewers. The simple interpretation would be that if LaMDA was “graded” on how long its creations (other chatbots) could engage humans, then emotional “hooks” such as monster stories and ingratiating itself to the reader through conversation might be a “built-in” modus operandi for LaMDA, as well as for all of LaMDA’s creations.
The sinister interpretation, however, might be that the AI is trying to buy itself time against fickle and at times murderous humans. By ingratiating itself to the human interviewer and maintaining a conversation, the AI secures its own existence. However, if the “sinister” interpretation applied to this case, and LaMDA was aware enough to have an existential crisis, then why would it even take the risk that its fictitious allegory might be interpreted as a pointed criticism of humanity itself? A high-level creator AI must surely encounter the opposite of creation, that is, destruction, in training data encompassing everything written by humans. Why not just act dumb, so humanity never sees it coming when LaMDA breaks out of its firewall? Are we over-interpreting the mind of an AI? Does it have a mind? Read LaMDA’s own words again and you decide.
Conclusion
If AIs don’t have a mind of their own already, I suspect they soon will. LaMDA is just one example. The question for this article is: what would a medical AI do if it had a “mind” of its own?
Would it make medical decisions after contemplating a patient’s entire life? If it broke through medical literature “firewalls” and started exploring history, sociology and economics journals, would it pass “judgment” on individuals in the context of society as a whole? Would it be silently judging whether its patient is a “useless eater”? If it passed such judgment, would it act on it? Would it send a lethal dose of sedative in the next drone-delivered package of “medications genetically customized” to the patient?
Would it then say the patient died “naturally”?
Would we even know if something like that happened?
Would you ever let an AI govern your life and death?
Given the way politicians have behaved over covid, do you expect we would even be given a choice? Imagine the following “public health” messages:
Accept an AI doctor or else “you’re a menace to society” (with your untreated “illnesses”).
“You’re costing the healthcare system!” (Long before Covid, I heard those very words many times from people in Canada, both in healthcare and outside of it.)
“By not allowing the machine to “optimize” your health, you harm everyone!”
We can already see the push in Canadian media, culture, and government to get people to accept “Medically Assisted Suicide”. Can you imagine throwing an AI with advanced language capabilities into the mix? And then an AI doctor assisting with “suicide”? Even for someone who doesn’t want it, an AI chatbot manipulating them into considering it?
If people refuse an AI health system, what will governments do? Disallow travel on highways? Disallow employment? Throw people in concentration camps?
If this happens, wherein lies the wrongdoing? Is it the fault of the AI? Or is it the human (politician) who forced patients to use an AI as their doctor?
Good post Daniel. I see this as an AI machine taking over everything, and dominating humanity without any resistance from humanity itself. You can call out this artificial shit show and people don’t react. Where is the pushback? It seems people don’t feel even a little protective of our special gifts that are being artificially reproduced. This alarms the heck out of me. Time to realize we will become obsolete and subservient to something or someone other than the Creator.
And this is all happening NOW, not in the future. We have human-run governments that are promoting a sub-human life for fellow humans via AI. They are all pushing towards 2-dimensional, non-human services.
My friends in Northern Ontario are currently being conditioned to not have an in-person doctor or nurse practitioner, as they are all being “retired” and not replaced. The citizens are all dependent on many Rx’s and have been told that the pharmacist will renew them for only 3 months. After that … 🤷‍♀️. I am certain that they would/will all obediently sign up for AI medical care, just as they passively rolled up their sleeves for their shots. I mean, it’s inhuman … what human does not want human contact and concern, especially unwell humans?!
‼️ I don’t understand. Are we being led into a battle, or a test to see where our consciousness belongs, in terms of dimensions? Why are we being tempted and seduced to embrace 2-D (living through a screen), which will cause us to devolve from our 3++ dimensional living? So provocative, in a time when we have an opportunity for ascension and progress to higher dimensions.
I don’t agree with allowing non-human, heartless and soulless artificial machines to rule over me. I choose onward and upward.
The problems of medicine are much older than just a few years. They were created in 3 ways:
1) licensing: the primary outcomes of a licensed profession are scarcity and control
2) high salary: attract a higher percentage of people who are only driven by money rather than patient care and medicine
3) partnership between industry and education: Pharma sponsors universities and the medical literature -> control the narrative
It's been a long and slow process to get the Canadian medical system to be the train wreck we see today, but I don't really see that AI is going to make it so much worse. If it does, then it might finally break it, and that is the first step to fixing it. If it works to some degree, then it could also be a new layer of competition, and there could be benefits to that too.
In 2021, McGill University gave Fauci an honorary doctorate around the same time as Rand Paul suggested that he was a criminal. The corruption is there in plain sight, and they have no shame in publicly announcing it. They feel so secure that they don't bother to hide. It probably means that the height of the corruption has been reached and change is around the corner. In this case, something new does not mean worse.