Archive | Analysis part 2

Thoughts on game design.

24 May

I’ve recently had my first prance through the meadow of serious game design when I made my full Flash game Spaced Out. I created the game so that it would have three fully playable levels with boss fights, a highscore system and everything. But I must say, getting the basics right was easy. Making it winnable by everyone, yet still gently challenging for all of them, was a different matter entirely.

Game design is a tricky business; you need just the right balance of everything to succeed. You have to be quite careful, yet, as in my case, you also need everything you would expect from a game, and then a bit more.

In games, you have the basic system of events:

  • You have a character.
  • You control it in some aspect, usually motion (in my case, just firing the laser eyes).
  • Enemies appear.
  • You kill them.
  • You get points.

It’s a basic series of events that the majority of action-type games have. All games work on a reward basis: you do something right, you get rewarded; you get it wrong, you get punished. Basic stuff, man.

So when it came to my game, I employed these age-old techniques for game design. For the majority of it, it wasn’t something I had to think about either; having played a number of games before, this was the kind of stuff that came naturally to me. When you think of a game, you think of those basic aspects, however subconsciously. But the score became prominent in my mind, perhaps because I was thinking about it as an arcade game.

As I pointed out in a previous post, arcade games ARE their highscore system. That’s what generates the motivation to replay the game: chasing that highscore.

So when it came time for me to think more about my game mechanics, I thought about how I could make the game better to play, more enjoyable, and, importantly, accessible to a whole bunch of people.

At first, I didn’t really have any difficulty system in place. I never really planned one. I had planned levels and I expected the game to get more difficult as you went up through them, but apart from that I didn’t plan much.

But that’s usually how games and game developers tackle the issue of difficulty:

“I know, I’ll just stick a whole bunch of levels in there and people can play through them all! The levels will get increasingly harder! Brilliant!”

Not for me. I’m not a hardcore gamer; when I play games, I don’t usually play them for very long, just about 10 minutes before going back to whatever it was I was doing.

I don’t set out time to play games; I usually play them while waiting for something or if I’m bored. So really long games don’t interest me, since I will never see the end.

So that’s a problem for me right there: why should I create a game that’s really long? That just won’t work in an exhibition space; people won’t want to play a really long game, they’ll get bored and wander off. Having a short game was the answer.

I also faced another dilemma: if I had a set difficulty rating, the highscore would only ever be so much, and the scoring would look very blocky and rounded. People would achieve the same score over and over again, and where’s the fun in that?

So, while discussing this with Gary, he suggested I make a scoring system based on your health at the end of a level: you would get a bonus or a multiplier if you had loads of health left.

Good in theory; it didn’t work well in practice. I’m not sure why, but the multiplier was never really noticeable when you got it, and having lots of health left was really difficult for me sometimes.

So, that’s where my new theory and thoughts came in.

My thought was that I could combine the difficulty, letting people complete the game (and so see the whole story; my game has a story, unlike most short, casual games), with a flexible scoring system, and have it all running silently in the background.

A bonus was that I was using brainwave reading. That lets me gauge how people are thinking about my game at any given time. Very handy.

So my thought was to take an average reading of your concentration levels during an in-game level. By taking an average, you can see whether someone spent most of the time concentrating hard, or hardly at all. I then set up a little comparison table which looked something like this:

if (attn > 90) { level = veryHard; }

if (attn < 20) { level = veryEasy; }

I also set up levels in between. I had 5 levels in total so that’s all good.

By having the game compare your thought levels against this basic table, I could set the level at which you need to concentrate to kill enemies higher or lower depending on the difficulty. I also set multipliers on the score depending on this level, so now I had a nice fluctuation in the scores: you could get anywhere from 4 up to 40 points per kill. Fantastic!
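
To make that concrete, here’s a rough AS3 sketch of the idea. It’s illustrative rather than lifted from the actual game source: the thresholds, the five bands and the multiplier values are stand-ins (as is the choice that harder play earns the bigger multiplier), but the shape of the logic is what I described above.

// Rough sketch of the adaptive difficulty and scoring logic (illustrative values).
var attentionSamples:Array = [];
var killThreshold:Number = 50;   // how hard you must concentrate to kill an enemy
var scoreMultiplier:Number = 4;  // a base kill is worth 4, so a kill scores 4-40 points

// Called every tick while a level is running, with the latest attention reading.
function recordAttention(attn:Number):void {
    attentionSamples.push(attn);
}

// Called at the end of a level: average the samples and pick one of five bands.
function setDifficultyFromAverage():void {
    var total:Number = 0;
    for (var i:int = 0; i < attentionSamples.length; i++) {
        total += attentionSamples[i];
    }
    var attn:Number = attentionSamples.length > 0 ? total / attentionSamples.length : 50;

    if (attn > 90)      { killThreshold = 85; scoreMultiplier = 10; } // veryHard
    else if (attn > 70) { killThreshold = 75; scoreMultiplier = 7;  }
    else if (attn > 45) { killThreshold = 60; scoreMultiplier = 4;  }
    else if (attn > 20) { killThreshold = 45; scoreMultiplier = 2;  }
    else                { killThreshold = 30; scoreMultiplier = 1;  } // veryEasy

    attentionSamples = []; // start fresh for the next level
}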

So by having this system in place, you can make a game whose difficulty is never set in stone, but instead changes depending on how well you are playing at that particular time. This is great for casual games, or games targeting a wide audience like games for kids and teenagers.

But these thoughts can be implemented in a wide range of games; you don’t need a brainwave-reading headset on for this theory to work. You could base it on health percentage, as long as your game is geared more towards its health system (mine wasn’t; health was just kind of there so the game could be failed, otherwise you could only win!).

You could base it on time: lap speed, or how fast you solve a puzzle. You could really put this in all sorts of games and it would work; you just need to think about how to implement the averaging.
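
For instance, a hypothetical racing game could do the same thing with lap times instead of brainwaves. A minimal sketch, with all names and numbers made up for illustration:

// Hypothetical: adaptive difficulty driven by average lap time instead of attention.
var lapTimes:Array = [];
var difficulty:Number = 0.5; // 0 = very easy, 1 = very hard

function onLapFinished(lapSeconds:Number):void {
    lapTimes.push(lapSeconds);

    var total:Number = 0;
    for (var i:int = 0; i < lapTimes.length; i++) {
        total += lapTimes[i];
    }
    var averageLap:Number = total / lapTimes.length;

    // Quick laps on average mean the player is coping, so nudge the difficulty up;
    // slow laps mean they're struggling, so ease off. 60 seconds is an arbitrary target.
    if (averageLap < 60) {
        difficulty = Math.min(1, difficulty + 0.1);
    } else {
        difficulty = Math.max(0, difficulty - 0.1);
    }
}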

To paraphrase the comedian Dara Ó Briain:

Game difficulty denies you content. I’ve never been reading a book which slams shut half way through because I wasn’t able to recall all the prose within the last chapter.

It’s true: games are the only entertainment medium which denies you content because you are crap at using the medium. You don’t get that with film, books, music, anything! So why have we been putting up with it in games? I know I won’t be any more.

A technical perspective on developing Spaced Out

23 May

So, I’ve talked a lot on here about the whole process of Spaced Out; all the technical stuff is covered, but it’s dotted around in various places.

For those of you who are new to this blog/project and wondering what the hell I’m talking about, Spaced Out is a mind controlled arcade game that I made as part of my final project at university. It’s a game all about a giraffe in space who gains mind powers and has to battle his way home through various enemies including David Bowie.

The game is created using Adobe Flash, and for input I use a brainwave-reading headset and a Kinect. And this is how I did it.

On reflection, I developed four separate interface systems to control my game. One of course being mind control; that one was set up within the first few weeks, and I had it running reliably, which is something I’m always keen on.

But the others? Well, I only chose one of those in the end: the Kinect, of course. But before that I had other motion controls, both of them a bit ropey.

Let’s discuss.

Brainwave reading was an obvious first choice for me. It was something that would allow me to control Flash games through the power of thought, and that’s just crazy, even now, 9 months after I started this project. (9 months, holy crap. It’s just flown by.) I can show this project off to people and they look at me like I’m actually using magic.

And that’s why I chose mind control. Who wouldn’t want to make a game where people go “Are you kidding me? What?!” when you tell them that you just have to THINK to control the game?

That was the simplest choice I had to make during the project.

It’s pretty straightforward in terms of its technology and implementation too, actually.

I use a Star Wars Force Trainer, which I took apart and found some pins inside. I read up online and found out that you could hook a microcontroller up to two of those pins and read serial data out from them. A little more digging and I found that someone had written code for the Arduino so that it could read the serial data in through that microcontroller. This was perfect for me; I like Arduinos, and I know how to implement them with Flash and control things in real time.

To get the data from the Arduino into Flash, I use Tinkerproxy, the Mac version of serialProxy, which is just a little program that takes the data from the Arduino and sends it out on a local socket that you can then tap into with a bunch of programs, such as Flash.

I then use some pretty standard code for getting the data into flash.

private var arduinoSocket:Socket = new Socket("localhost", 5331);

That’s how I set up a basic socket in AS3. I then split the incoming string to separate out all the data, which I store in an array so I can have the concentration data, the relaxation data and the signal data all streaming from the headset.

var arduinoOutput:String = arduinoSocket.readUTFBytes(arduinoSocket.bytesAvailable);

output = arduinoOutput.split("&", 4);

This code isn’t anything special, it’s just some basic AS3 socket code that can read incoming data from an Arduino. It’s the barebones type stuff that gets things up and running and working quickly.

NOTE: I have three items of incoming data, but I split my string up into 4 chunks. Why? To stop an annoying bug that was rendering my array useless a lot of the time.

When streaming data over sockets from the Arduino to Flash, you tend to get an invisible character turning up every now and again, usually a carriage return or something like that. It’s just a little thing that can throw my whole game out of balance.

In my Arduino code, I just add on an extra character, and then in Flash I split the string up into four; this protects my first three data chunks from getting malformed by this bug. Annoying when it happens, but a relatively simple fix.
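
Put together, the Flash side of the headset connection looks roughly like this. It’s a minimal sketch: the port and the "&" delimiter are the ones mentioned above, but the variable names and the ordering of the three values are illustrative.

import flash.net.Socket;
import flash.events.ProgressEvent;

// Connect to Tinkerproxy, which forwards the Arduino's serial stream on a local port.
var arduinoSocket:Socket = new Socket("localhost", 5331);
arduinoSocket.addEventListener(ProgressEvent.SOCKET_DATA, onArduinoData);

var attention:Number = 0;
var meditation:Number = 0;
var signal:Number = 0;

function onArduinoData(e:ProgressEvent):void {
    var arduinoOutput:String = arduinoSocket.readUTFBytes(arduinoSocket.bytesAvailable);

    // Split into 4 chunks even though only 3 values matter: the spare chunk
    // soaks up the stray carriage return that sometimes sneaks onto the end.
    var output:Array = arduinoOutput.split("&", 4);
    if (output.length >= 3) {
        attention  = Number(output[0]); // the ordering here is an assumption
        meditation = Number(output[1]);
        signal     = Number(output[2]);
    }
}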

Now, that’s pretty much it for my brainwave-reading dabbling, so I shall move on to an interface method which took up half of my total dev time: head tracking.

This, on reflection, was a silly mistake really; I’m not sure why I spent so much time trying to get it to work instead of finding other methods.

What I did was mount IR LEDs on top of the brainwave-reading headset. Then, by using a Wiimote, I could track the player’s head position. They could then move their head side to side and rotate it to control a cursor on screen. This is old hat for us IMers; we’ve done motion controls with Wiimotes before. When it came out and was found to be hackable, of course we used it! Full-on cheap motion controls for our installations. Brilliant.

Well, this method kind of worked, if you sat in exactly the right place under the mounted Wiimote(s). But even then, shift a bit out of place and the whole control mechanism goes out the window. That’s not cool.

But I did learn how to make a Wiimote-to-Flash connection package that worked a hell of a lot better than wiiFlashServer. On Mac, wiiFlashServer can spend ages not connecting your Wiimotes to your machine; sometimes it just won’t work at all. It’s really weird.

I found out about a neat little program called OSCulator. Now, this is Mac only, but if you are a Flash developer with a Mac playing around with Wiimotes, I’ll let you in on how I made instant-connecting, Wiimote-powered Flash games. It’s bloody marvellous.

What I did was create an OSCulator file that routes all the necessary data to the right place: it sends the Wiimote data out over OSC on localhost port 9000. This was a little tricky to set up, so if you want to use my files, you can. I’ve added all my files for connecting Wiimotes to Flash, including my AS3 code, to GitHub.

https://github.com/JonathanReid/Osculator-to-Flash—Wiimote

That’s the link right there; feel free to download it and maybe improve it. It’s a nice little standalone class that you can plug into your games and access instantly.

Right now, that code just deals with the IR sensor input from the wiimotes, but it can be easily adapted to use button clicks instead if you prefer.

The code uses FLOSC to get the data from OSC into Flash and parse it correctly; my code then takes the OSC stream and makes it more human-friendly.
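
For context, the Flash end of that chain is essentially an XMLSocket connection to the FLOSC gateway, which relays each OSC packet as a chunk of XML. A stripped-down sketch (the port number is an assumption; use whatever your gateway is configured to serve Flash on):

import flash.net.XMLSocket;
import flash.events.DataEvent;
import flash.events.Event;

// FLOSC relays incoming OSC packets to Flash as XML over a plain XMLSocket.
var oscSocket:XMLSocket = new XMLSocket();
oscSocket.addEventListener(Event.CONNECT, onOscConnect);
oscSocket.addEventListener(DataEvent.DATA, onOscData);
oscSocket.connect("localhost", 3000); // assumed gateway port

function onOscConnect(e:Event):void {
    trace("Connected to the FLOSC gateway");
}

function onOscData(e:DataEvent):void {
    // Each packet arrives as an XML string describing the OSC messages;
    // the Wiimote IR values get parsed out of this and tidied up for the game.
    trace(e.data);
}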

Anyway, that’s one of my more exciting moments in coding to date, developing that little system.

Back to Wiimote head tracking. It became a little problematic, as I found that people didn’t all sit down the same way. Also, I would have had to build a huge cabinet to hold everything in. Not fun for me; lots of hard work there.

So, after some time I decided not to bother with head tracking. I thought it was time to move on to something that I personally found quite exciting: eye tracking.

Now, I had seen projects where people had created really cheap ways of eye tracking, and I thought that with a little jiggery-pokery, I could get that info into Flash and control things with my eyes.

Well, I did.

I built myself a little eye-tracking system based on the design from the guys over at the Graffiti Research Lab and used their code as a base to start from.

They had built their eye-tracking code in openFrameworks, and this was my first foray into the OF world. It’s confusing at first, since if you don’t link up your files correctly (there are lots of files, and I don’t know where any of them are), you can’t compile.

So after a long while trying to figure all that out, I finally got it to work, and from there it was only a matter of time before I had it sending out the data in a Flash-compatible way. It didn’t take too long, actually; someone had written an OF-to-Flash plugin, and all I had to do was send over the data that I needed: the X and Y coordinates of your eye.

And I did, and I had a game that you controlled by looking at the screen. Cool, eh?

Apparently not to anyone apart from me; others found it cumbersome and not very interesting or exciting. There was no magic, everyone could see exactly how it was done, so it didn’t excite anyone.

Now the Kinect, that brought up all kinds of excitement.

I had the eye tracking running for about 2 weeks before I decided to ditch it. That’s just the kind of world I live in. So the kinect it is.

This was to be my final choice for input. This is the one that I would be exhibiting with. Luckily I had it up and running with my game in less than a day.

Now, to do so I used a program called TUIOKinect, a little program by the makers of the Reactable that detects hand-sized blobs with the Kinect’s camera and then broadcasts them over your computer in a TUIO-friendly manner.

Luckily for me, there is some native AS3 TUIO code that can harness the power of blobs. After implementing that, I had control over my game by waving my hands around.

This proved to be a much more intuitive interface than the previous two, which is good, because at this point I had run out of ideas to rely on.

People are also still wowed by the Kinect: they can just put out their arm, and there it is, they are controlling stuff on screen. That’s pretty cool, right?

I would talk more about the TUIO AS3 client, but I simply don’t know much about it; I got it working by sheer luck, really. It slots into my code and stores all instances of blobs on screen as an array, which I can access to create cursors for everything in that array, which is what I did. I’m not sure what’s going on past that. I’m not much of a TUIO man; I’ve dabbled in it before, trying to see how it would work, and I had never got it up and running until now. It’s not exactly the easiest of things to get running, to be honest. So, if you want to learn more, have a look at their site, tuio.org.
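
Stripped of the actual TUIO client API (which, as I say, I won’t pretend to know well), the gist of what my code does with that blob array is just this, with purely illustrative names:

import flash.display.Sprite;
import flash.geom.Point;

// 'blobs' stands in for whatever the TUIO client reports each frame:
// an array of screen-space points, one per detected hand.
var cursors:Array = [];

function updateCursors(blobs:Array):void {
    // Add cursor sprites until there's one per blob.
    while (cursors.length < blobs.length) {
        var c:Sprite = new Sprite();
        c.graphics.beginFill(0xFFFFFF);
        c.graphics.drawCircle(0, 0, 10);
        c.graphics.endFill();
        addChild(c);
        cursors.push(c);
    }
    // Remove extras when blobs disappear.
    while (cursors.length > blobs.length) {
        removeChild(cursors.pop());
    }
    // Move each cursor to its blob's position.
    for (var i:int = 0; i < blobs.length; i++) {
        var p:Point = blobs[i];
        cursors[i].x = p.x;
        cursors[i].y = p.y;
    }
}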

And that’s how I went about creating four different interfaces for my game.

The game itself is quite a simple one; it’s my first step into the Flash gaming world, and now I know that I’ve made plenty of obvious coding mistakes while creating it. But hey, now I know what to do. I can code cleaner now, and I’ve learned a lot about keeping code manageable. All through my own proud mistakes.

The game is essentially a little tower-defence-type game: you have a bunch of enemies held in an array that get animated towards the left-hand side of the screen. The enemies are generated based on your meditation levels; if you are relaxing over a certain amount, an enemy gets generated. Simple stuff. But, as I found out while testing, this won’t work for everyone, some people just don’t relax, so as a failsafe I added an enemy that gets generated every 5 seconds or so to keep the game going.

Then I use a simple hit test between the cursor (driven by the Kinect-sensed hands) and each enemy. When that hit test is triggered, I also check to see if you are concentrating over a certain level; if you are, the enemy explodes!

So the in-game process goes: Point > Think > Explode.
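
In skeleton form, that loop looks something like this. Again, it’s a sketch with illustrative names, thresholds and visuals rather than the shipped code; the headset and difficulty values are the ones sketched earlier in this post.

import flash.display.Sprite;
import flash.events.Event;
import flash.events.TimerEvent;
import flash.utils.Timer;

// These come from the headset and difficulty sketches earlier in the post.
var attention:Number = 0;
var meditation:Number = 0;
var killThreshold:Number = 50;
var scoreMultiplier:Number = 4;

var enemies:Array = [];
var score:int = 0;
var spawnCooldown:int = 0;

// The cursor is driven by the Kinect-sensed hand position.
var cursor:Sprite = new Sprite();
cursor.graphics.beginFill(0x00CCFF);
cursor.graphics.drawCircle(0, 0, 10);
cursor.graphics.endFill();
addChild(cursor);

// Failsafe spawner: some people just don't relax, so guarantee an enemy
// roughly every 5 seconds regardless of meditation levels.
var failsafeTimer:Timer = new Timer(5000);
failsafeTimer.addEventListener(TimerEvent.TIMER, onFailsafe);
failsafeTimer.start();

function onFailsafe(e:TimerEvent):void {
    spawnEnemy();
}

addEventListener(Event.ENTER_FRAME, onTick);

function onTick(e:Event):void {
    // Relaxing past a threshold spawns an enemy, with a short cooldown
    // so a very chilled player doesn't flood the screen.
    if (spawnCooldown > 0) spawnCooldown--;
    if (meditation > 60 && spawnCooldown == 0) {
        spawnEnemy();
        spawnCooldown = 90; // roughly 1.5 seconds at 60fps
    }

    // Walk the array backwards so removals don't skip anyone.
    for (var i:int = enemies.length - 1; i >= 0; i--) {
        var enemy:Sprite = enemies[i];
        enemy.x -= 2; // drift towards the left-hand side of the screen

        // Point > Think > Explode: cursor over the enemy while concentrating
        // harder than the current difficulty's kill threshold.
        if (enemy.hitTestObject(cursor) && attention > killThreshold) {
            removeChild(enemy); // the real game plays an explosion animation here
            enemies.splice(i, 1);
            score += 4 * scoreMultiplier;
        }
    }
}

function spawnEnemy():void {
    var enemy:Sprite = new Sprite();
    enemy.graphics.beginFill(0xFF6600);
    enemy.graphics.drawCircle(0, 0, 20);
    enemy.graphics.endFill();
    enemy.x = stage.stageWidth;
    enemy.y = Math.random() * stage.stageHeight;
    addChild(enemy);
    enemies.push(enemy);
}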

Simple stuff in terms of gameplay, but it’s something people enjoy due to the physical and mental aspects of the game rather than exciting gameplay mechanics.

Hey, I taught myself how to make flash games while making all the rest of that stuff, what do you want from me?

Being truly indie, I also drew all my own graphics, which is fun. I love drawing, so it’s nice when I get down to drawing and art again; I have a very distinct style of photos + vector outlines. It does mean that it takes me some time to do just one character, since I draw them by hand first, but hey, to me it’s worth it to get that visual style. It fits in well with the whole surreal aspect of my game too. Handy, that.

So yeah, learn to embrace the ports and sockets, because you can make some pretty fun stuff when you start streaming data all over the place. Get a wiimote and try it out yourself.


Thinking into the future.

18 May

Where can I take this project in the future?

What could I improve?

These questions and more are what I am about to set out and answer.

This project has a lot of potential for more. In some cases, different aspects of the game have far more potential than the thing as a whole.

At one point during my exhibition, Mark Jackson and a woman I don’t know were talking about how my project would be great for exhibitions and would be a nice, physical game to show off. I’m not sure what exhibition they had in mind, but it’s one that will happen in the next year, they told me. So, there’s that. My game would indeed be great for exhibitions and bigger spaces; that’s what it’s built and designed for. It would be nice to submit my project to exhibitions and show it off a bit more.

For me personally, I will definitely be taking some of the game logic and understanding I learned from this game with me. Especially the self-adjusting game idea, so that anyone can win at your game. I think that’s a great idea that you just don’t see in games at all. And, as my game proved, it’s a solid idea that does indeed work in practice. It’s just a matter of figuring out how to include it in games that don’t read your mind.

Mind control is something that I will be playing with more in the future. I would definitely like to control physical items with thought; even if I only do so as a hobby, I would still like to see if I can make everything mind controlled. Why not? If I can develop a simple system that’s effectively plug and play with household items, it’d be foolish not to.

I’d also like to do more mind control + Kinect games; it’s a nice little interaction system. It’d be nice to develop more with the Kinect too, as you can create some pretty nice systems with it. I’d like to get the skeleton tracking working and create full-body Flash games. Something that makes you do silly little poses for everything, that’d be fun I think.

Clearly it’s just left up to my imagination really. I could do so many different things, I just have to think of them first.

IMPROVEMENTS

First and foremost, a better brainwave-reading headset. Considering I can get one for a decent price (£100 including postage), it reads far more information AND I don’t have to hack into it (giving me a spare Arduino for doing cool shit), it’s a no-brainer really. I bought one today from NeuroSky, and I’ll be awaiting its arrival.

The next improvement would be to rewrite the game. I’ve learned a lot about game programming, and I realise now that I’ve done a lot of it wrong, or in a cack-handed way. I could rewrite the game so it would be a lot smoother, less confusing to read and more efficient. It would take a good long week to sort that out though, so for me it’s not really worth doing just yet for the upcoming grad show. The game works, but I have a feeling I have a memory leak somewhere, which means I need to restart the game every 4 hours or so. I may look into that if I have time, but rewriting it is the option I’ll take if I suddenly have a lot of free time on my hands. (Right now I’d prefer to learn new things, like making cool games.)

If I was to do it all again though, I don’t think I’d change much, if anything. I would most likely do more research into how the user initially approaches the game and try to make it more open and self-initiating, so that I didn’t need to constantly be around making sure people put the hat on correctly.

Other than that, I can’t think of any other improvements. The headset one is annoying since the new headset came out about a month or so before the grad show happened. I didn’t have time then (or money) to really think about that as a viable choice. I had my old version working for about 9 months, it’d do for a little while longer.

It’s been a good experience.

Some analytical thoughts after the exhibition.

18 May

The exhibition went really well I thought.

Everything worked nearly as expected with a few problems that you could only find after playing the game on and off for 5 hours or so.

But the execution of, and interaction with, my project went exactly as I’d hoped.

There were a few incidents with the headset and people trying to use it by themselves. I more or less had to always be around just to guide people through the first hurdle of my game: making sure that the headset was on correctly.

This is something that would give people trouble even if they were playing with the original toy that I hacked apart. The headset has to be in contact with skin at all times, hair cannot get in the way, and it sometimes takes up to half a minute for the hardware to detect that it’s on correctly.

For some people, this was all just too much.

One person, when I told them they just had to wait a bit to see if the headset was on correctly, took the headset off straight away and said “it’s not working for me, oh well”. I was very disappointed with that reaction.

With stuff like this, I would have thought people would be prepared to wait, as it is relatively new tech, and most people did wait, but there will always be a few people who expect everything to be instant. This was more so among the younger audience. The over-20s seemed to get that you had to wait; whether that’s an inherent thing due to us growing up alongside tech rather than having it always around is something to be discussed. I think cartridge-loading games and dial-up internet are my friends when it comes to things like this, though.

An interesting point that came up several times: people didn’t believe me when I said it’s mind controlled.

They would look at me funny and go “you’re not serious, are you? How can it do that? What?” And then they would ask to have a go and they would get it: it actually reads your thoughts.

That’s another point right there, asking to have a go. Interesting. It’s an arcade style game, yet people seemed slightly afraid that they would break it if they picked anything up.

In the future, I’d have to address these problems. The hat connection problem can most likely be solved by getting a proper brainwave-reading headset, which I plan to do. NeuroSky, the company who made the chips that go in the toy headsets, now make an “education” version of their full-on headset, which allows for quick and cheap brainwave reading. €89 is the price tag, which is pretty reasonable.

Making a furry hat seem more robust? I’m not sure I could do that with this tech. If it was just Kinect controlled, then go for it! But wearable tech is something that will always be hard to make robust and self-explanatory.

Overall, the game had the desired effect: people who played it were wowed by the fact that they could just blow things up with their mind. It was a nice feeling to see people react like that.

And after the initial tutorial led by me, “put your arm out and concentrate on blowing things up”, everyone was up and running. Everyone. That’s a fantastic success rate. The game failed no one. No one was unable to think enough, or unable to get a grip on how to control the cursor; it was fantastic. The game just worked.

The fact that I had a self-correcting difficulty setting in my game just made it better: most people got to the end. Only a few died before that, but a huge majority got to the end and saw the whole game. That’s brilliant! My game overcomes what other games fail to do: let the player see the story. They get to see what happens throughout the game; the bosses weren’t stopping people from completing it. Unlike most console games, where you have to beat very hard bosses to see what happens, my game auto-corrects itself and figures out what is hard for you, so everyone could see the story I worked hard on creating.

That is the biggest success for me, and it’s something that nobody notices.

And because the game figured out what was hard for you, you could see that reflected in the scores: if your thoughts were in line with what the game had decided was hard, you would get a better score. This just led to a competitive streak between players. They liked seeing that highscore, they liked seeing who was at the top, especially if it was them.

And yet for every single highscore, which ranged from just over 500 to nearly 2000, that player got to the end of the game. Now I think that’s some clever programming right there.

Not just basic mechanics, but something that makes the game enjoyable to play. It’s a nice little touch.

On another note, no one really paid attention to my mind-controlled lights; they were right there in front of them, being very bright, yet no one commented on them changing colour or anything. Interesting. It makes me think that it wasn’t obvious they were mind controlled, and that they came across as just a nice little lighting show for the game. Oh well, if they didn’t question their existence, then that’s probably a good sign. They felt that the lights fit in.

But overall, I had nothing but good, supportive comments on my game. So, I think I’ve succeeded in creating a game that appeals to people and is easy to play. It’s good stuff.

I think I have achieved quite a lot. I have made this quite visually impressive game that works through mind control; that’s pretty good, right? I’ve had a lot of positive press about this project, which just reinforces this belief, and I’ve learned quite a lot about game mechanics and game interaction through this process. I’ve thoroughly enjoyed working on this project, experiencing everything, and learning about the way humans interact with and think about games as they play them.

Overall, it’s been a fantastically positive experience that I learned a lot from.

If anything, it made me realise that I want to make games full time. Maybe not these huge brainwave-reading games, but the theory behind game interaction is truly fascinating to me. Especially when it comes to applying real-world theory to games and seeing how that works. (It doesn’t; game worlds have their own physics and behaviours. I like that.)

My mum says I make good games.

2 May

I got my mum to play my game and I watched from the sidelines to see how she played it and how it worked for her.

I admit, I had to prod her along occasionally, but in my defence, she had to take her hearing aid out to put on the hat and thus couldn’t hear any of the in-game instructions.

Also, she never, ever plays games. So she didn’t really understand certain aspects of the game, since she had no prior knowledge or frame of reference for them.

NOTES!

I took notes as she played and this is what I got from it:

  • Mum gets it after a little bit of confusion with the hat.
  • Mainly with putting it on, but that was always an issue, hence the new “about the hat” board
  • Slowly got better at the game, although I don’t think she quite knows what’s going on. (hearing issue, unable to comprehend story)
  • Problem with audio on videos again, sometimes it drops out for no reason at all.
  • Couldn’t get to grips with enter name screen, although she couldn’t really see the cursor much on my tiny laptop screen from a distance. Perhaps linked.
  • After two deaths, she finished the game and placed 4th on my highscore list.

It’s one of those things where, once you know how to do it, you just want to keep doing it.

She got the hang of it in the end, which was good. She seemed to enjoy the whole game experience, and the Kinect + hat interface was really intuitive for her; she picked that up within seconds.

The only thing was she didn’t seem to quite get the concentration bit for a minute or so, but on the second try she was a dab hand at the game and understood how to blow things up, so that’s positive I guess.

Considering my mum gets scared when my dad moves icons around on her computer’s desktop, I think I’ve done well here.

MIT making fanboys’ dreams come true.

14 Apr

And everyone’s arms really sore in the process.

One of my new favourite toys is the Kinect. Its ability to track the human form and individual parts of the skeleton, like hands and even fingers in some cases, is utterly mind-blowing.

It’s something that my tutor Adam Martin and I have been wanting to do for years now, but we’ve had to make do with the pathetic-in-comparison attempts that a humble webcam can offer in return.

Many years ago, Steven Spielberg had a dream: that dream was to give every geek in the world huge biceps and an entirely gesture-driven computer interface.

Now, in 2011, that dream has finally been realised.

There are so many problems with the basic idea behind a gesture-driven interface, mainly the fact that the average person’s arms get tired of holding themselves up after a few minutes.

But, the ability to track the human hands and form isn’t silly at all.

Having a way to interface between the human body and how it is reacting to something that’s in front of it is very handy indeed, especially for interactive artists.

Now we can create very passive interfaces that just sit in the background and help out in daily tasks. As in my previous post about mind control, my stance is that these interfaces should be very passive and should never be the main input. These things just aren’t built to be the main input, since they put a lot of stress on parts of our body that we simply don’t use enough in that way.

Keyboard, mouse and touch are tried and tested. That’s fine; we as humans need that physical sensation of touching things. We need to know that we have done something right, and the feedback we get through button clicks and screen taps is that little psychological buzz that rewards us as we do our daily computing activities.

Waving our arms around like fools just doesn’t provide us with the same level of physical feedback, and so it cannot be used as a reliable, everyday input.

It can be used in exhibitions, large spaces and performances. A stage performance where you move your hand and lights focus where you point is like magic. Magic and wonder keep our imagination alive.

Anyway, MIT have come up with a way to use the Kinect to track finger gestures to control a computer interface, just like in the film Minority Report.

This is a horrid use of the Kinect and is just something that we need to get out of our geeky systems now, so that we can all move on and actually do something half decent with this little black-magic camera box that Microsoft has brought us.

Come on guys, we can come up with better ideas than a nine-year-old sci-fi film, can’t we? I mean, we’ve moved on so much since then.

Charlie Brooker’s unbridled cynicism towards mind control.

14 Apr

My brother, just like me, enjoys reading the rantings of Charlie Brooker. He pointed me towards an article that Brooker wrote in the Guardian recently about mind control and computers.

There are a few things that I would like to take and savour from the article. First of all is this incredibly humbling point:

All computers are mind-controlled already. My hand may steer the mouse and my fingers may punch the keys, but none of this takes place without my mental say-so. My brain runs things round here.

Isn’t that brilliant? A great observation on Brooker’s part. Of course everything is mind controlled; it just has to be.

Nothing we say or do in this world isn’t mind controlled; it’s just that until recently we hadn’t become so bewilderingly lazy as to cut out the middleman.

Yes, being able to control things with just your thoughts is amazing. It’s the future waving hello at us while we blink our newly formed eyes in its bright, shining wake.

In a few years we will be controlling all sorts of things with thought alone. And that’s terrifying.

The problem is that the body is the final, crucial buffer between the skittish human mind and the slavish machine servant. Think of how many furious email responses you’ve composed in haste, only to halt and reflect at the final moment as your finger hovers over the “send” button. The simple fact that a small physical action is required to actually deliver the damn thing is often enough to give pause for thought.

I’m scared for the future that we are developing just because we can. A future where the only control over our actions is what happens inside our heads. The terrifyingly bizarre thoughts that we all thought were kept locked away inside our heads will have complete control over the world around us.

I don’t like that at all.

I’m all for using new technology to passively interface between man and machine. Have computers recognise mood to play exactly the right song for what you are doing, or change your workspace to match your ability to concentrate at that time. A computer that can adapt and work around your mental capacity in the moment would benefit the work and probably speed up workflow.

Being able to send an email by thinking about it will destroy the world as we know it.

It makes me think, though: my work, as limited as it is, is probably dancing around on the happier side of this horrible line. The side of entertainment and gentler practical uses, instead of focusing on streamlining tasks or making complicated actions simple.

To be honest, mind control is an odd thing to demonstrate to others: how can you really prove that you are doing what you say you are doing? The only person who knows is you, the one thinking about the thing you’re doing.

It’s something that will never be used for big, crowd gathering displays. It’s all about the personal usage.

Imagine: sitting in your home, you feel relaxed and warm but you don’t want to get up to turn the lights down. You don’t need to, because your computer knows that you are relaxed and sorts that out for you. You’ve fallen asleep but left the lights and the TV on? Not a problem, the built-in mind-reading headset knows that you’ve fallen asleep and turns everything off for you, saving you money in the process.

That’s the kind of future world I want to live in: one where things around me adjust to the way that I think and feel. And frankly, that’s something I know I can be a part of. Making games that react to how you feel about the game is something I can make, and want to make.

The future is all ours for the taking; we just have to build it the way we want it and be careful not to do anything and everything “just because we can”.

Article link: http://www.guardian.co.uk/commentisfree/2011/apr/11/beware-mind-controlled-computers