About YMN

Welcome to The Conscious Machines! I’m Yaman, a curious mind constantly exploring the intersection of technology, society, and human potential. This blog is my creative outlet and intellectual playground where I dive deep into topics that fascinate me—from artificial intelligence and digital transformation to the future of work, cognitive science, and the evolving relationship between humans and machines.


I launched The Conscious Machines because I’ve always believed that technology, especially AI, isn’t just about making our lives more efficient—it’s about how we consciously shape the tools that, in turn, shape us. My goal here is to unravel the deeper questions behind the innovations of today and tomorrow. How will AI redefine creativity and intelligence? What does the digital age mean for our collective future, and what can we do to steer it in the right direction?


My research and writing focus on the convergence of technology with broader social, ethical, and philosophical issues. I explore how the digital revolution is transforming everything—from how we work and communicate to how we think and live. You'll also find me diving into emerging fields like cognitive enhancement, neuroscience, and the future of human-machine collaboration.


But it’s not just about the tech; it’s about understanding its human implications. My posts often reflect on how we can harness technology to elevate human experiences rather than diminish them. Whether I’m dissecting the latest advancements in AI, pondering philosophical questions, or sharing insights into how we can build more conscious, thoughtful systems, I aim to spark conversations that matter.


Join me as I explore the conscious machines of the present and the possibilities for a more mindful, intelligent future. Let’s rethink what it means to live in an age where technology is not just a tool, but an extension of who we are.

About TCM

Coming Soon

Introduction to The Conscious Machines


Welcome to the first installment of The Conscious Machines (TCM) series. My goal is for this blog to become a hub for exploring and discussing the difficult and challenging questions surrounding the rapid development of Artificial Intelligence, which has now become a ubiquitous part of our everyday lives.

So, strap in tightly, because it's going to be a bumpy ride on a mind-bending road.


As artificial intelligence continues to evolve at breakneck speed, the year 2024 marks a special milestone in the technological development of AI and its ever-increasing capabilities, pulling us closer and closer towards the so-called 'Singularity' — often associated with the point at which mankind accomplishes the creation of Artificial General Intelligence (AGI). AGI is defined as an AI system with the ability to understand, learn, and apply intelligence across a broad range of tasks at a level equal to or surpassing human capability.


If you're still unclear on the exact meanings of the Singularity and AGI, worry not—our journey will be to deeply explore those topics and the world that may potentially be awaiting us, one that could be a utopian reality of human-AI cooperation for the benefit of all mankind, or a dystopian reality fraught with fear, fighting, and fragmentation.


So, has there ever been a more critical time than now to be informed?

The Purpose of The Conscious Machines


The Conscious Machines is a space dedicated to navigating the ethical, societal, and governance issues that accompany the development of AI, AGI, and ultimately Artificial Superintelligence (ASI). Even before we began taking larger and larger strides towards these groundbreaking advancements, it was already important to understand not just the technical progress but also the broader implications for humanity.


In this space, we will explore:


    • The Current State of AI and Its Progress Toward AGI: Examine the current advancements in AI technology and highlight the breakthroughs moving us closer to AGI and ASI. This section will provide readers with an overview of where we stand today in the evolution of intelligent systems.


    • Positive and Negative Impacts of AI: Analyze the benefits AI has brought to society, from healthcare advancements to enhanced productivity, while also examining the harmful consequences, such as job displacement and fraud. We will dig into both the pros and cons of the rapid race toward advanced AI.


    • The Impact of AI, AGI, and ASI on Different Aspects of Human Life: Explore the diverse areas of human life that have already been affected by AI, including the economy, employment, education, healthcare, wealth distribution, politics, culture, and more. This section will also offer scientifically grounded predictions about the impacts that AGI and ASI could have on various aspects of human life.


    • The Current State of Ethical Frameworks and Governance Structures: Present an overview of existing ethical frameworks and governance structures within the context of AI, AGI, and ASI. We will explore the current initiatives, discuss the challenges of governing generative AI, and consider what these challenges may reveal about future governance needs for more advanced AI systems.


    • The Monumental Burden on Corporations, Policy-Makers, and Governments to Act: Discuss the crucial role of corporations, policymakers, and governments in shaping the future of AI. We will delve into the complex ethical questions, scenarios, and issues surrounding AGI and ASI, and the urgent need for governance frameworks that can effectively manage these powerful technologies and keep them aligned with human values. This topic will also explore creative and innovative initiatives that groups can undertake at scale to ensure every level of governance for these technologies serves the benefit and wellbeing of humans.


    • Democratizing the Design of Ethical Frameworks and Governance Structures: This final topic will focus on collective and collaborative programs we can undertake to democratize the design of ethical frameworks and governance structures for AI, AGI, and ASI. We will explore and brainstorm creative and innovative initiatives and projects through which individuals, organizations, and governments can contribute to governance mechanisms that not only benefit humanity at the organizational and group level but also address broader needs at a global scale.

To Shape Our Future Together


AI technology is not just a tool; it is an entity that is beginning to reshape the very fabric of our society. It holds the potential to either elevate humanity to new heights of achievement and prosperity or introduce unprecedented risks and ethical dilemmas.


Through TCM, the aim is to bring together thinkers, enthusiasts, and anyone intrigued by the profound shifts AI is bringing to society, striving not only to describe and predict but also to mitigate challenges, embrace opportunities, and thoughtfully adapt to the new world of the AGI and ASI era — a world that could be shaped by our collective efforts to ensure a balanced and ethical future.

The Conscious Machines: The Countdown to 0:00 O'Clock — The Glaring Absence of Ethical Frameworks & Governance Structures and The Awakening We're Unprepared For

The Fact of the Matter Is...


“If you know the enemy and know yourself, you need not fear the result of a hundred battles.” — a word of wisdom that is arguably Sun Tzu's main claim to fame, and a main reason why his book, The Art of War, is known across the world.


The phrase, however, does not bode well for us. Given the conditional nature of Sun Tzu's statement, there is very good reason to fear the battles that lie ahead, because the fact of the matter is that no one, not the scientists at the frontier of cutting-edge AI research nor the tech-savvy users of the latest gadgets, knows what or who the enemy inside this ongoing AI revolution is, or will be.


According to many leading scientists in the field, most notably Geoffrey Hinton, a pioneering figure in AI research known as the Godfather of AI, even the best experts in generative AI cannot reliably explain or predict how the most advanced large language models (LLMs) generate their output. That means we have already reached a point where the decision-making and internal processes of this technology are beyond our full understanding.

Information Inequality


This oceanic gap in our knowledge only multiplies in magnitude when you consider the information divide that exists in developing and underdeveloped countries. So, the fact of the matter is that it is incredibly difficult for anybody to grasp the full extent of the danger that will inevitably unfold on the near horizon.


That is why many scientists, in stark contrast to most capitalists and entrepreneurs, are now sounding the alarm on the pace at which AI is advancing—not because they are less ambitious than their industrialist counterparts, but because of the near-total lack of progress in advancing the ethical frameworks and systems of governance by which we could design and control this technology to safeguard humanity's welfare.

The Autobahns of Evolution


Whether it is the ChatGPT app on your phone or the Neuralink brain chip that Elon Musk wants to implant inside your skull, even those amazing advancements are the inventions of yesterday. We are now on the brink of a plethora of potential applications and use cases for technologies that were previously the exclusive domain of sci-fi films, which feels incredibly exciting. Two things, however, should make you uneasy: (1) the speed at which these technologies are evolving, and (2) where it is that they are evolving.


(1) The Element of Speed


There is a law from 1965 that, depending on your field of expertise, may or may not sound familiar: Moore's Law, which I doubt even Mr. Moore himself expected to hold until today. It observes that the number of transistors on a chip (and, roughly, the computing power that comes with them) doubles approximately every two years. A similar exponential trend has so far held true in the development of AI technology.


This exponential growth explains how all the computing power on the entire Apollo 11 spacecraft, which landed humans on the moon in 1969, amounts to a vanishingly small fraction of the computing power in our smartphones today, smartphones that instead landed me on a bizarre TikTok video in which a former United States president running for office accuses a certain demographic of eating their neighbors' cats and dogs. Fascinating, yes?
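
For readers who like to see the arithmetic, here is a minimal Python sketch (my own illustration, not from the original post) of what a clean two-year doubling implies between Apollo 11 in 1969 and 2024. The helper name moores_law_factor is hypothetical, and real-world hardware has not followed the curve this neatly; the point is only the scale of compounding.

    def moores_law_factor(start_year: int, end_year: int,
                          doubling_period_years: float = 2.0) -> float:
        """Growth factor implied by doubling every `doubling_period_years` years."""
        doublings = (end_year - start_year) / doubling_period_years
        return 2.0 ** doublings

    # Roughly how much growth does a two-year doubling imply from 1969 to 2024?
    factor = moores_law_factor(1969, 2024)
    print(f"{(2024 - 1969) / 2:.1f} doublings -> roughly {factor:,.0f}x growth")
    # About 27.5 doublings, a factor in the hundreds of millions, which is why the
    # Apollo Guidance Computer is dwarfed by even a modest modern smartphone.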


But imagine if, 10 minutes before that scheduled rocket launch to the moon, the rocket scientists had told the astronauts that while they had developed the technology to fire up the spacecraft's engines and propel them to the moon, they hadn't yet figured out how to build a vessel strong enough not to collapse. Such is the case with AI development — the engines symbolize the technology and its capabilities, while the vessel represents the ethical frameworks and governance needed to use that technology safely.


If AI continues to evolve without enforceable speed limits, like a car on the German autobahn, then its technological capabilities will continue to vastly outpace the evolution of ethical guidelines and governance systems. And without that crucial structure, a "blowing up in space" type of situation is not just a probable scenario; it is a certainty.


As for (2) The Element of Where, tune in next week for Part 2, where our journey will take us through the delicate intricacies of why 'where' matters.


The Race Between Us & Technology — The Urgency for Ethical Guidelines & Governing Systems in AI, and Why You Should Pay Attention - Part II

Part I Recap


So, during Part I last Thursday, we discussed the urgent need for ethical guidelines and governing systems in the development of AI and highlighted two main elements of concern: (1) the speed at which AI is evolving, and (2) where it is evolving. We used the autobahns of Germany to describe the breakneck speed of current AI development and specific parts of a rocket ship to represent the key components of a safe journey with AI, and emphasized that the technology's power and pace of development are far outpacing the development of any vessel or structure for a safe and optimally beneficial journey with that technology.


This article discusses the second element: where AI is evolving.

AI's Connection to NASA & Germany


There is a reason why society thought it more sensible—or at least felt more at ease—that rocket ships and space exploration, until just the past decade or two, remain exclusively in the domain of governmental agencies like NASA. These agencies were composed of large teams of scientists, subject matter experts, and, last but not least, national security advisors and government safety officials. At the time, we deemed such institutions best suited to bear the significant risks and responsibilities of the tasks at hand.


Similarly, but perhaps less formally, there is also a reason why the autobahns—the German freeways famous for having no general speed limit—exist only in Germany. If there is one thing the Germans are known for, it is their strict adherence to design codes, which is why the term German Engineering is internationally recognized as synonymous with the highest quality and durability. So much so that when it came to defining the rules governing their roads, the German public appears to have felt confident and safe enough to stop short of instituting a general speed limit or any mechanism to enforce one. Without straying further from political correctness: although virtually no other authority or municipality in the world today adopts this standard, it still just intuitively makes sense to most people that it would exist in Germany.


The point of these examples is that we as a global society do not place massive, transformative ventures in just anyone's hands. For the same reasons that space exploration was conducted within the confines of government agencies staffed by highly skilled scientists and experts, and that it is for the best to keep autobahns in Germany, the development and creation of AI, and consequently AGI and then ASI, should be carried out somewhere that is on some level democratic, transparent, and open to public discourse, consent, and control.

The Rise or The Fall...?


We have mobilized in the past to alleviate fears and distrust when they were shared on a global level. This, however, required that the implications and what was at stake be clear to the people of the world at large, which, in the case of the most widely cited examples of biological and nuclear weapons, was quite easy and straightforward to understand (e.g. big bomb go boom, we all die).


But the consequences of AGI are significantly more complicated and difficult to grasp, even for those well-informed about its inner workings. Combine that with the fact that AI's development is occurring behind the closed doors of black-box, profit-seeking enterprises, whether influential corporations with larger operating budgets than entire countries or little-known, homebrewed labs and startups, and I would dare say that even today the vast majority of the human population on this planet is completely oblivious to what is at stake, which only increases the threat of dramatic upheavals.


Add to all of that the fundamental and drastic degree to which this innovation is going to reshape our reality, and it is frankly quite shocking that we have yet to witness even a symbolic fraction of the urgency that basic sensibility would demand from key stakeholders to rectify the glaring absence of any form of oversight, guidelines, frameworks, regulation, or legislation by which we could actually rise, rather than fall, to the occasion of creating AGI.

There Is Still Brightness on the Horizon

The analogy is clear: we are building rocket engines without a vessel that can survive the journey. Having said all that, I would be lying if I told you that I, a random Palestinian kid from the 90s, don't badly want to be alive when we do achieve AGI, and to witness what would probably seem like literal magic.



So, for everything that needs to be done for that wondrous thing to happen, tune in to the next (and fourth) episode of TCM later this week, where we’ll brainstorm and begin to discuss the potential systems and programs that must be established in order to mitigate risks and forge a safe and prosperous path forward for all of us.


About NXTLVL_Labs

Coming Soon