The “danger line” I perceive is when we give anything “agency”. It can be something as simple as a float-level switch on a lake controlling the water-release gates on a dam - but if it malfunctions (and nobody notices in time), the dam might get overtopped, or the whole lake might be emptied, potentially flooding downstream communities or simply wasting valuable water needed to get through the next dry season… all that from a simple little (binary) bit of “artificial intelligence”. It’s when that bit is granted “agency” to operate the flood gates without competent oversight that it becomes dangerous.
On May 6, 2010, a large collection of automated trading algorithms, acting with agency too fast for any human to intervene, caused a dramatic “flash crash” of the stock market.
Lately, we’ve got <a href="https://en.wikipedia.org/wiki/ELIZA">ELIZA</a> gone wild in advanced chatbots. People who allow themselves to be sucked into the fantasy that the chatbot “is real”, like a person they can trust, are giving those chatbots agency in their lives - and with a baseline of 132 suicides per DAY in the US alone, of course there will be some people whose decision to take their own life was influenced, in either direction, by their interactions with chatbots.
I give the LLMs (limited) agency in the creation of software. I like to think I employ a risk-based approach: more agency and less oversight for simple applications with limited to near-zero risk, and stricter oversight and review for LLM-generated code that serves more important functions or carries a greater risk of harm should it malfunction… Of course, these are judgement calls, and with millions of people using LLMs to generate code, even if they all follow a similar risk-based approach to how much unrestricted agency the LLM is given, there will be those who make bad judgement calls…
Then there’s the YOLOs - pushing the boundaries as hard and fast as they can in some sort of quest to be the first to achieve something great. As Ollivander said to Harry Potter: “He Who Must Not Be Named did great things, terrible to be sure, but also great.”
I love the nuanced approach here - neither pessimistic nor optimistic, but realistic. Then again, I would strongly question the utility, or even the definition, of “great” - except you were using it in an explanatory sense, so I get what you mean. But for a corporation to achieve “success” at the expense of an enormous number of workers let go… is that really “great”, truly?
Beauty lies in the eye of the beholder, and I see such ugliness, even while I also see potential for truly great good. It is definitely not the “fault” of the tool but of the wielder - though either way, I see why people feel anxiety when they consider the ways these tools are currently and actively being used against their interests.