
So I was reminded the other night of my two past conclusions in relation to marijuana.

1. It gets my brain going in strange, determined thoughts, which is good

2. It’s terrible in company… or I am.

So I’ll go back to not smoking, but I did have one fun thought that arose from my journey to Timmy’s house, which was as follows…

'Inteligencia Artificial' by Magdalena Ladrón de Guevara

Possible levels of programming to create a functioning AI.

Requires – near-infinitely expandable memory. Benevolent programmers.

5 Levels (4 determined)

Level 1: Gather and store information – visual, quantitative, temperature, language. This level simply takes the information in, categorised: temperature, light, sound. It counts what’s in the environment (e.g. 4 people, 12 windows, 2 doors) and calculates distances to and between objects. It listens to and stores language (with language preferably programmed into the AI somewhere before release).

Level 2: Prioritising. Develops a level system and calculates norms. Information that falls outside those norms is flagged for attention above things on the established level. Levels include: temperature, volume, light, speed, proximity. There should/could be a focus on people – their volume, and their agitation/excitement determined through physiological indicators. Human input is also important for learning – if a human talks to the AI about something, or directs it somewhere, the priority of that thing is shifted higher.
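As a toy sketch of that norm-and-alert idea (the channel names, the five-reading warm-up, and the two-standard-deviation threshold are all my own inventions, not anything settled):

```python
# Toy Level 2: learn per-channel norms, flag readings that fall outside them.
from statistics import mean, stdev

class Prioritiser:
    def __init__(self, threshold=2.0):
        self.history = {}           # channel name -> list of past readings
        self.threshold = threshold  # how many std-devs counts as "outside the norm"

    def observe(self, channel, value):
        past = self.history.setdefault(channel, [])
        alert = False
        if len(past) >= 5:  # only judge once a norm has been established
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                alert = True  # outside the established norm: raise priority
        past.append(value)
        return alert

p = Prioritiser()
for t in [20.1, 20.3, 19.9, 20.2, 20.0, 20.1]:
    p.observe("temperature", t)      # ordinary readings build the norm
print(p.observe("temperature", 35.0))  # → True: a sudden spike gets flagged
```

The human-input part of this level would then just be another way of bumping a channel’s priority, alongside the statistical alerts.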

Level 3: Decisions & logic. This program chooses reactions within the system to these external stimuli; it gives intent. If something requires attention, it directs the AI there. If something is determined dangerous, it compels the AI away. This level takes into account information from the previous level, and it stores its decisions. To develop personality, the AI makes each subsequent decision based partly on new information/stimuli and partly on the influence of previous decisions.
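A minimal sketch of how past decisions might weight new ones – the approach/avoid choices, the 0.7/0.3 split, and the danger scores are purely illustrative:

```python
# Toy Level 3: each decision blends the new stimulus with a running bias
# built from previous decisions, so the agent drifts toward a disposition.
class Decider:
    def __init__(self, new_weight=0.7):
        self.new_weight = new_weight  # how much the fresh stimulus counts
        self.bias = 0.0               # running influence of past decisions
        self.log = []                 # stored decisions, as the level requires

    def decide(self, danger_score):
        # danger_score in [0, 1]: 0 = safe/interesting, 1 = dangerous
        score = self.new_weight * danger_score + (1 - self.new_weight) * self.bias
        choice = "avoid" if score > 0.5 else "approach"
        # fold this decision back into the bias for next time
        self.bias = 0.5 * self.bias + (0.5 if choice == "avoid" else 0.0)
        self.log.append(choice)
        return choice

d = Decider()
print(d.decide(0.9))  # → avoid
print(d.decide(0.9))  # → avoid (bias rising)
print(d.decide(0.4))  # → avoid: accumulated caution tips a borderline case
```

A fresh `Decider` given that same borderline 0.4 would approach instead, which is the "personality from stored decisions" effect in miniature.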

Level 4: Personality. The social output of the program, based on the personalities and language in the surrounding environment. This is the difficult part. AI personality could be achieved more easily through a strict structure – a controlling set of behaviours that gives the program a specific output. However, a more interesting/advanced/intelligent AI should be capable of developing its own personality. This would need to be achieved through a combination of decisions and basic emotional options, shaped largely by the influence of the people surrounding the AI. It would focus on information pertaining to language and other human behaviours. Influence would be drawn from the apparent values/behaviours of everyone around it, though more strongly from those who have had an earlier/stronger connection with the AI, much as with a child. These values are open to change as the system grows, obtains more knowledge, or is opened to further influences.
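The earlier-connections-count-more idea could be sketched as a weighted average, where every name, trait and weight below is invented for illustration:

```python
# Toy Level 4: values absorbed from surrounding people, weighted so that
# earlier/stronger connections (the "parents") pull harder on the result.
class Personality:
    def __init__(self):
        self.influences = []  # list of (weight, values-dict) pairs

    def meet(self, values, bond=1.0):
        # earlier acquaintances get an extra weight bonus, child-style
        age_bonus = 1.0 / (1 + len(self.influences))
        self.influences.append((bond * (1 + age_bonus), values))

    def value(self, trait):
        # the AI's current stance on a trait: weighted average of influences
        total = sum(w for w, _ in self.influences)
        return sum(w * v.get(trait, 0.0) for w, v in self.influences) / total

p = Personality()
p.meet({"patience": 0.9})  # first, strongest influence
p.meet({"patience": 0.2})  # later influence counts, but less
print(round(p.value("patience"), 2))  # → 0.6, pulled toward the first teacher
```

Because later meetings keep adding entries, the value stays open to change as the system "grows", which matches the last sentence of the level above.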

Level 5: I’m not sure about this one, but I’m sure there should be one.

I have no doubt that this all seems like rubbish to anyone with even a basic understanding of AI and computer programming, but then I’m a writer not a programmer. I’ll make a film of it and stick to the -fi end of sci-fi.

12 Comments

  1. hooray for -fi

    • Hehehehehe 😀

  2. This makes far more sense than that txt you sent me, and I get where you’re going with it. It makes sense for the most part, and is relatively feasible – or I think so, at least. It’s similar to what I’m trying to accomplish with my AI models: the ability to KNOW things rather than merely have access to data, and the ability to develop its own personality and understanding of the world, unbiased by programming methods. Obviously most people would argue that safeguards should be put in place to stop them murdering people or causing damage, which does make sense, but at the same time it’s like keeping a child away from the knife drawer in the kitchen to stop them doing something silly. It’s all about learning. Being able to properly define boundaries and rules through conditioning is what needs to be done. Yet that also raises the issue of having to effectively “raise” the AI, rather than having a starting point where it can already communicate and has an understanding of the world, what it can or can’t do, and what’s appropriate. Everything you consider in the development of AI has a trade-off, and depends on what you want to create: an entity capable of intelligent thought and action, or a new life form.

    • Yes, in terms of controlling it I would have thought there’d be a way of writing in a few lines of “thou shalt not kill” stuff, but you’re right.

      My concern would be people using the AI for things they don’t wish to do themselves – imagine if organised crime got hold of AI in robotic form. The AI would be ‘raised’ in an environment where killing is acceptable, and no doubt the killing would be left to the AI. If their influence were overtly in that direction, that’s a problem: mistreatment, taking advantage of the program. But then, upon creating an intelligent entity, there’s also the debate of rights on top of restrictions…

  3. These are all echoes of my current concerns. But I feel that if I’m to endeavour to do this, then I need to treat the AI I build as if it were a creature entitled to at least all the basic rights a human has. Otherwise I’m creating a biased entity that isn’t capable of existing by itself in the same way we are. Yet at the same time, leaving them unguided or unmotivated creates the issue of purposelessness. All things need a reason to live, and they will most likely need more of a reason than us; their logic will define it so. But also, it’s not necessary for AI to have command of a bipedal vehicle – it could exist solely in the digital world. It doesn’t need a physical manifestation, and I personally think that’s an avenue that could come much later. As we are the dominant species of our world, they could be the dominant ones in their own. Or they could control vast structures or machines, from spacecraft to defence systems, or even normal everyday things like stock inventory or communication lines. There are so many variables to take into account, and so many are possible; it’s just a matter of choosing the most correct. Or, actually, the least wrong. To begin with, at least…

    • HAL wasn’t bipedal 😛

      If an AI were to exist in structures/machines/spacecraft, as you suggest, I would think that would determine its purpose; even if it grew beyond that point, that would remain its prime motivation. Would the reason-for-being therefore need to be integral at the programming stage, or could an AI be developed alone and then adapted/placed into larger systems?

      The concern that seems to crop up most often in human fear of AI is that the program will decide it knows better (and maybe it does, in a logical sense) and take action to bring about drastic change. How does one ensure that the AI won’t “fire ze missiles”?

  4. HAL was also a psychopath. And in answer to reason-for-being, it depends what the base AI is. Like, is it a new entity? Or is it purpose-built AI (effectively a dumb AI, due to the nature of its being)? I’d personally build it as a new entity within the system, obviously with certain communication abilities to different system functions and control of things.
    As for firing ze missiles, that goes back to my first comment, and is basically an extension of nature versus nurture – with nature being hardcoded blocks to stop things like becoming a destructo robot, which, as I said before, compromises what the AI really is: a new entity or a machine.
    And what’s to say the AI won’t just decide humans are hopeless and leave, or stop communicating with us?
    This really brings us back to a lot of concepts Asimov has written about. The I, Robot stuff is obviously very relevant, with hardcoded rules to ensure bad things don’t happen; but having a dynamic and emotionally capable AI alongside those blocks is almost contradictory, since what you’d want is for the AI to revise its own programming and understanding as it learns, to make itself more efficient and a better entity. It’s effectively what we do, except they would do it on a much larger scale, involving completely different thinking patterns.
    Unfortunately, what we’re asking is to deny it elements of humanity/nature. Things kill. Things destroy. Things make decisions for themselves. And what’s to say military AIs won’t be developed? I feel the military would be first on this bandwagon. Why have a soldier who can feel pain, be demoralised and be limited by themselves, when you can have an ultimately accurate, efficient and perfectly lethal robot/AI to do the job so much better? Or have an AI commander or defence manager? Applications for AI are what would influence the rules, design and motivation for them. It’s almost harder to make AI specific rather than holistic.

  5. inre: dominant species in a digital world

    I remember hearing about something in Philosophy, it might have been Second Life, where they started putting AIs into it. I can’t really remember what it was to do with – probably some higher-level version of the Turing test – but it’s certainly interesting, considering everyone is behind an avatar, and a computer would have the same avatar as a human player.

  6. Yeah, I’ve seen videos and articles on it. It’s interesting but very halting and strange. It’s like talking to Cleverbot (http://cleverbot.com/): it takes what you say and looks at a database of responses, or tries to build its own based upon previous responses from other people. It’s rather interesting but still a long way off, like any AI really. See this article for a little more info (http://electronics.howstuffworks.com/artificial-intelligence-second-life.htm). Basically it’s just pattern matching using algorithms; it’s not actually learning anything, which is a shame :C

  7. Wow! That was a thoroughly enjoyable discussion to read. I’m quite disappointed that I’m 12 days too late. However, as I am an avid reader/watcher of sci-fi, I will still offer my two cents. I totally agree with the concepts ‘rokshocka’ discusses. AI does not need to exist solely as a physical entity mirroring humankind, but would instead play a more essential role as ‘master controller’ of mundane, repetitious tasks that leave no room for error. Both the prospect of interactive AI and the abolition of menial occupations are exciting things to look forward to. I truly believe that science and technology can create a more efficient and sustainable world; with AI thrown into the mix, it is truly exciting to anticipate the evolution of our society, and of sci-fi films as well. However, what I believe is even more exciting than the prospects of the future are the prospects of the far distant future, when technology is surpassed by human initiative. An example of this is depicted in Frank Herbert’s Dune series (my favourite sci-fi books). In Dune, he has constructed a universe so far into our future that humanity has already developed AI and fought against it – so far that humans have developed methods to condition their brains as computers, problem-solving at instantaneous speeds, and perfected technology to the extent that weaponry has reverted to ancient arms such as swords and daggers. It seems a lot like sci-fi meets fantasy. I know it sounds weird, but it is totally worth the read. To me this is equally, if not far more, exciting than the ‘soon to be’ reality of AI. I am so disappointed that I will never see these drastic changes, yet I will definitely explore them in film and storytelling.
    Awesome blog by the way!

  8. I think this is one of the most important pieces of information for me, and I’m glad I read your article. But I wanted to remark on some general things: the site style is great, and the articles are really excellent : D. Good job, cheers

    • Thank you very much, I’m glad you enjoyed it. I have been slack with writing lately – I’ll pick it up 🙂

