
Are algorithms hacking our thoughts?

As Facebook shapes our access to information, Twitter dictates public opinion, and Tinder influences our dating decisions, the algorithms we’ve developed to help us navigate choice are now actively driving every aspect of our lives.

But as we increasingly rely on them for everything from how we seek out news to how we relate to the people around us, have we automated the way we behave? Is human thinking beginning to mimic algorithmic processes? And is the Cambridge Analytica debacle a warning sign of what's to come, and of what happens when algorithms hack into our collective thoughts?

It wasn’t supposed to go this way. Overwhelmed by choice–in products, people, and the sheer abundance of information coming at us at all times–we’ve programmed a better, faster, easier way to navigate the world around us. Using clear parameters and a set of simple rules, algorithms help us make sense of complex issues. They’re our digital companions, solving real-world problems we encounter at every step, and optimizing the way we make decisions. What’s the best restaurant in my neighborhood? Google knows it. How do I get to my destination? Apple Maps to the rescue. What’s the latest Trump scandal making the headlines? Facebook may or may not tell you.

Wouldn’t it be nice if code and algorithms knew us so well — our likes, our dislikes, our preferences — that they could anticipate our every need and desire? That way, we wouldn’t have to waste any time thinking about it: We could just read the one article that’s best suited to reinforce our opinions, date whoever meets our personalized criteria, and revel in the thrill of familiar surprise. Imagine all the time we’d free up, so we could focus on what truly matters: carefully curating our digital personas and projecting our identities on Instagram.

It was Karl Marx who first said our thoughts are determined by our machinery, an idea that Ellen Ullman references in her 1997 book, Close to the Machine, which anticipated many of the challenges we're grappling with today. Ever since the invention of the Internet, the algorithms we've built to make our lives easier have ended up programming the way we behave.

Photo courtesy of Shutterstock/Lightspring

Here are three algorithmic processes and the ways in which they’ve hacked their way into human thinking, hijacking our behavior.

1. Product Comparison: From Online Shopping to Dating

Amazon's algorithm allows us to browse and compare products, save them for later, and eventually make our purchase. But what started as a tool designed to improve our e-commerce experience now extends far beyond it. We've internalized this algorithm and are applying it to other areas of our lives, like relationships.

Dating today is much like online shopping. Enabled by social platforms and apps, we browse endless options, compare their features, and select the one that taps into our desires and fits our personal preferences exactly. Or we just keep saving them for later, endlessly, as we navigate the illusion of choice that permeates both the world of e-commerce and the digital dating universe.

Online, the world becomes an infinite supply of products, and now, people. “The web opens access to an unprecedented range of goods and services from which you can select the one thing that will please you the most,” Ullman explains in Life in Code. “[There is the idea] that from that choice comes happiness. A sea of empty, illusory, misery-inducing choice.”

We all like to think that our needs are completely unique–and there’s a certain sense of seduction and pleasure that we derive from the promise of finding the one thing that will perfectly match our desires.

Whether it’s shopping or dating, we’ve been programmed to constantly search, evaluate and compare. Driven by algorithms, and in a larger sense, by web design and code, we’re always browsing for more options. In Ullman’s words, the web reinforces the idea that “you are special, your needs are unique, and [the algorithm] will help you find the one thing that perfectly meets your unique need and desire.”

In short, the way we go about our lives mimics the way we engage with the Internet. Algorithms are an easy way out, because they allow us to take the messiness of human life, the tangled web of relationships and potential matches, and do one of two things: Apply a clear, algorithmic framework to deal with it, or just let the actual algorithm make the choice for us. We’re forced to adapt to and work around algorithms, rather than use technology on our terms.

Which leads us to another real-life phenomenon that started with a simple digital act: rating products and experiences.

2. Quantifying People: Ratings & Reviews

As with all other well-meaning algorithms, this one is designed with you and only you in mind. Using your feedback, companies can better serve your needs, provide targeted recommendations just for you, and serve you more of what you've historically shown you like, so you can carry on mindlessly consuming it.

From your Uber ride to your Postmates delivery to your Handy cleaning appointment, nearly every real-life interaction is rated on a scale of 1 to 5 and reduced to a digital score.

As a society we’ve never been more concerned with how we’re perceived, how we perform, and how we compare to others’ expectations. We’re suddenly able to quantify something as subjective as our Airbnb host’s design taste or cleanliness. And the sense of urgency with which we do it is incredible — you’re barely out of your Uber car when you neurotically tap all five stars, tipping with wild abandon in a quest to improve your passenger rating. And the rush of being reviewed in return! It just fills you with utmost joy.

Yes, you might be thinking of that dystopian Black Mirror scenario, or that oddly relatable Portlandia sketch, but we’re not too far off from a world where our digital score simultaneously replaces and drives all meaning in our lives.

We’ve automated the way we interact with people, where we’re constantly measuring and optimizing those interactions in an endless cycle of self-improvement. It started with an algorithm, but it’s now second nature.

As Jaron Lanier wrote in his introduction to Close to the Machine, "We create programs using ideas we can feed into them, but then [as] we live through the program. . .we accept the ideas embedded in it as facts of nature."

That's because technology makes abstract, often elusive, desirable qualities quantifiable. Through algorithms, trust translates into ratings and reviews, popularity equals likes, and social status means followers. Algorithms create a sort of Baudrillardian simulation, where each rating has completely replaced the reality it refers to, and where the digital review feels more real, and certainly more meaningful, than the actual, real-life experience.

When we face the complexity and chaos of real life, algorithms help us simplify it: they take the awkwardness out of social interaction and the insecurity that comes with opinions and real-life feedback, and make it all fit neatly into a ratings box.

But as we adopt programming language, code, and algorithms as part of our own thinking, are human nature and artificial intelligence merging into one? We're used to thinking of AI as an external force, something we have little control over. What if the most immediate threat of AI is less about robots taking over the world, and more about technology becoming embedded in our consciousness and subjectivity?

In the same way that smartphones became extensions of our senses and our bodies, as Marshall McLuhan might say, algorithms are essentially becoming extensions of our thoughts. But what do we do when they replace the very qualities that make us human?

And, as Lanier asks, "As computers mediate human language more and more over time, will language itself start to change?"

Image: antoniokhr/iStock

3. Automating Language: Keywords and Buzzwords

Google indexes search results based on keywords. SEO pushes websites to the top of those results through specific tactics: we work around the algorithm, figure out what makes it tick, and sprinkle websites with keywords that make them more likely to stand out in Google's eyes.

But much like Google’s algorithm, our mind prioritizes information based on keywords, repetition, and quick cues.

It started as a strategy we built around technology, but it now seeps into everything we do, from the way we write headlines to how we generate "engagement" with our tweets to how we express ourselves in business and everyday life.

Take the buzzword mania that dominates both the media landscape and the startup scene. A quick look at some of the top startups out there will show that the best way to capture people’s attention–and investors’ money–is to add “AI,” “crypto” or “blockchain” into your company manifesto.

Companies are being valued based on what they signify to the world through keywords. The buzzier the keywords in the pitch deck, the higher the chances a distracted investor will throw some money at it. Similarly, a headline that contains buzzwords is far more likely to be clicked on, so the buzzwords start to outweigh the actual content. Clickbait is one symptom of that.

Where do we go from here?

Technology gives us clear patterns; online shopping offers simple ways to navigate an abundance of choice. So there's no need to think: we just operate under the assumption that algorithms know best. We don't exactly understand how they work, because the code is hidden: we can't see it; the algorithm just magically presents results and solutions. As Ullman warns in Life in Code, "When we allow complexity to be hidden and handled for us, we should at least notice what we are giving up. We risk becoming users of components. . .[as we] work with mechanisms that we do not understand in crucial ways. This not-knowing is fine while everything works as expected. But when something breaks or goes wrong or needs fundamental change, what will we do except stand helpless in the face of our own creations?"

Cue fake news, misinformation, and social media targeting in the age of Trump.

Image courtesy of Intellectual Take Out.

So how do we encourage critical thinking, how do we spark more interest in programming, how do we bring back good old-fashioned debate and disagreement? What can we do to foster difference of opinion, let it thrive, and allow it to challenge our views?

When we operate within the bubble of distraction that technology creates around us, and when our social media feeds consist of people who think just like us, how can we expect social change? What ends up happening is we operate exactly as the algorithm intended us to. The alternative is questioning the status quo, analyzing the facts and arriving at our own conclusions. But no one has time for that. So we become cogs in the Facebook machine, more susceptible to propaganda, blissfully unaware of the algorithm at work–and of all the ways in which it has inserted itself into our thought processes.

As users of algorithms rather than programmers or architects of our own decisions, our own intelligence becomes artificial. It's "program or be programmed," as Douglas Rushkoff would say. If we've learned anything from Cambridge Analytica and the 2016 U.S. elections, it's that it is surprisingly easy to reverse-engineer public opinion, to influence outcomes, and to create a world where data, targeting, and bots lead to a false sense of consensus.

What's even more disturbing is that the algorithms we trust so much, the ones deeply embedded in the fabric of our lives and driving our most personal choices, continue to hack into our thought processes in ever bigger and more significant ways. And they will ultimately prevail in shaping the future of our society, unless we reclaim our role as programmers, rather than users, of algorithms.


Facial recognition software is not ready for use by law enforcement

Recent news of Amazon's engagement with law enforcement to provide facial recognition surveillance (branded "Rekognition"), along with the almost unbelievable news of China's use of the technology, means that the technology industry needs to address the darker, more offensive side of some of its more spectacular advancements.

Facial recognition technologies, used in the identification of suspects, negatively affect people of color; to deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens, and a slippery slope to losing control of our identities altogether.

There’s really no ‘nice’ way to acknowledge these things.

I’ve been pretty clear about the potential dangers associated with current racial biases in face recognition, and open in my opposition to the use of the technology in law enforcement.

As the Black chief executive of a software company developing facial recognition services, I have a personal connection to the technology both culturally, and socially.

Having the privilege of a comprehensive understanding of how the software works gives me a unique perspective which has shaped my positions about its uses. As a result, I (and my company) have come to believe that the use of commercial facial recognition in law enforcement or in government surveillance of any kind is wrong — and that it opens the door for gross misconduct by the morally corrupt.

To be truly effective, the algorithms powering facial recognition software require a massive amount of information. The more images of people of color they see, the more likely they are to properly identify them. The problem is that existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.

And misidentification could lead to wrongful conviction, or far worse.

Let's say the wrong person is held in a murder investigation. Let's say you're taking someone's liberty and freedom away based on what the system thinks, and the system isn't fairly viewing different races and different genders. That's a real problem, and it needs to be answered for.

There is no place in America for facial recognition that supports false arrests and murder.

In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, deploying software that is clearly not ready for civil use in law enforcement activities does not serve citizens and will only lead to further unrest.

Whether or not you believe government surveillance is OK, using commercial facial recognition in law enforcement is irresponsible and dangerous.

 

PETER PARKS/AFP/Getty Images

While the rest of the world speculates about the reasons we are being monitored, the Chinese government has been transparent about why it is watching all 1.4 billion of its citizens, and it's not for their safety.

China's use of facial recognition software for surveillance is an outstanding example of why we have never engaged, and will never engage, with government agencies, and why it's an ethical nightmare to even consider doing so.

China is currently setting up a vast public surveillance network that uses facial recognition to construct "social credit" systems, which rank citizens based on their behavior and queue up rewards and punishments depending on their scores. The arrest of one man spotted by the CCTV network in a crowd of 60,000 people has already proven exactly how far this can go.

The exact protocol is closely guarded, but examples of "punishment-worthy" infractions include jaywalking, smoking in non-smoking areas, and even buying too many video games. Punishments for poor scores include travel restrictions, among many others.

Yes. Citizens will be denied access to flights, trains, transportation of any kind, all based on the "social behavior" equivalent of a credit score. If all of this constant surveillance sounds insane, consider this: right now the system is piecemeal, in effect only in select Chinese provinces and cities.

China News Service via WSJ

Imagine if America decided to start classifying its citizens based on a social score.

Imagine if American police, with their already terrifying record of racial disparity in the use of force, had the added power and justification of someone being "socially incorrect."

Recently, we read about Amazon's Rekognition being used by law enforcement in Oregon. Amazon claimed it won't be a situation where there's a "camera on every corner," as if to say that facial recognition software requires constant, synchronized surveillance footage.

In truth, Rekognition and similar software simply require you to point them at whatever footage you have: social media, CCTV footage, or even police bodycams. And that software is only as smart as the information it's fed; if that's predominantly images of, for example, African Americans labeled as "suspect," it could quickly learn to classify black men as categorical threats.

Facial recognition is a dynamic tool that helps humanize our interactions with machines. Yet in China, where the systems are desperate for ever more data, we're seeing a preview of how facial recognition, when used for government surveillance, truly dehumanizes entire populations.

It's the case of an amazing technology, capable of personalizing experiences, improving interactions, and creating positive feelings, being used for the purpose of controlling citizens. And that, for me, is absolutely unacceptable. It's not simply an issue for people of color, either: eventually, scanning software of any kind could measure the gait (the way you walk), the gestures, and the emotions of anyone the government considers "different."

It is said that any tool, in the wrong hands, can be dangerous.

In the hands of government surveillance programs and law enforcement agencies, there's simply no way that facial recognition software will not be used to harm citizens. My company and I believe this to our core, to the point that we have passed on very, very lucrative government contracts. I'd rather be able to sleep at night knowing that I'm not helping make drone strikes more "effective."

We deserve a world where we’re not empowering governments to categorize, track and control citizens. Any company in this space that willingly hands this software over to a government, be it America or another nation’s, is willfully endangering people’s lives. And letters to Jeff Bezos aren’t enough. We need movement from the top of every single company in this space to put a stop to these kinds of sales.



Inside Atari’s rise and fall

By the first few months of 1982, it had become more common to see electronics stores, toy stores, and discount variety stores selling 2600 games. This was before Electronics Boutique, Software Etc., and later, GameStop. Mostly you bought games at stores that sold other electronic products, like Sears or Consumer Distributors. Toys 'R' Us was a big seller of 2600 games. To buy one, you had to get a piece of paper from the Atari aisle, bring it to the cashier, pay for it, and then wait at a pickup window behind the cash register lanes.

Everyone had a favorite store in their childhood; here’s a story about one of mine. A popular “destination” in south Brooklyn is Kings Plaza, a giant (for Brooklyn) two-story indoor mall with about 100 stores. My mother and grandmother were avid shoppers there. To get to the mall from our house, it was about a 10-minute car service ride. So once a week or thereabouts, we’d all go. The best part for me was when we went inside via its Avenue U entrance instead of on the Flatbush Avenue side. Don’t ask me what went into this decision each time; I assume it depended on the stores my mother wanted to go to. All I knew was the Avenue U side had this circular kiosk maybe 50 feet from the entrance. The name has faded from memory. I remember it was a kind of catch-all for things like magazines, camera film, and other random stuff.

But the most important things were the Atari cartridges. There used to be dozens of colorful Atari game boxes across the wall behind the counter. When we walked up to the cashier’s window, there was often a row of new Atari games across the top as well. Sometimes we left without a new cartridge, and sometimes I received one. But we always stopped and looked, and it was the highlight of my trip to the mall each time.

For whatever reason, I remember the guy behind the counter gave me a hard time one day. I bought one of Atari’s own cartridges—I no longer remember which, but I’m almost sure it was either Defender or Berzerk—that came with an issue of Atari Force, the DC comic book. I said I was excited to get it. The guy shot me a dirty look and said, “You’re buying a new Atari cartridge just for a comic book?” I was way too shy to argue with him, even though he was wrong and I wanted the cartridge. I don’t remember what my mother said, or if she even heard him. Being too shy to protest, I sheepishly took my game and we both walked away.

Mattel Stumbles, While Atari Face-Plants

Mattel began to run into trouble with its Intellivision once the company tried to branch out from sports games. Because Mattel couldn't license properties from Atari, Nintendo, or Sega, it instead made its own translations of popular arcade games. Many looked better than what you'd find on the 2600, but ultimately played more slowly thanks to the Intellivision's sluggish CPU. Perhaps the most successful was Astrosmash, a kind of hybrid of Asteroids and Space Invaders, where asteroids, space ships, and other objects fell from the sky and became progressively more difficult. Somewhat less successful were games like Space Armada (a Space Invaders knock-off).

Mattel also added voice synthesis—something that was all the rage at the time—to the Intellivision courtesy of an add-on expansion module called Intellivoice. But only a few key games delivered voice capability: Space Spartans, Bomb Squad, B-17 Bomber (all three were launch titles), and later, Tron: Solar Sailer. The Intellivoice’s high cost, lack of a truly irresistible game, and overall poor sound quality meant this was one thing Atari didn’t have to find a way to answer with the 2600.

These events made it easier for Atari to further pull away from Mattel in the marketplace, and it did so—but not without a tremendous self-inflicted wound. A slew of new 2600 games arrived in the first part of 1982. Many important releases came in this period and those that followed, and we’ll get to those shortly. But there was one in particular that the entire story arc of the platform balanced on, and then fractured. It was more than a turning point; its repercussions reverberated throughout the then-new game industry, and to this day it sticks out as one of the key events that ultimately did in Atari.

Pac-Man (Atari, March 1982)

The single biggest image-shattering event for the 2600—and Atari itself—was the home release of its Pac-Man cartridge. I can still feel the crushing disappointment even now. So many of my friends and I looked forward to this release. We had talked about it all the time in elementary school. Pac-Man was simply the hottest thing around in the arcades, and we dreamed of playing it at home as much as we wanted. The two-year wait for Atari to release the 2600 cartridge seemed like forever. Retailers bought into the hype as well. Toy stores battled for inventory, JC Penney and Kmart bought in big along with Sears and advertised on TV, and even local drug stores started stocking the game. And yet, what we got…wasn’t right.

Just about everyone knows how Pac-Man is supposed to work, but just in case: You gobble up dots to gain points while avoiding four ghosts. Eat a power pellet, and you can turn the tables on the ghosts, chase them down, and eat them. Each time you do so, the “eyes” of the ghost fly back to the center of the screen and the ghost regenerates. Eat all the dots and power pellets on the screen, and you progress to the next one, which gets harder. Periodically, a piece of fruit appears at the center of the screen. You can eat it for bonus points, and the kind of fruit denotes the level you are on (cherry, strawberry, orange, and so on).

But that’s not the game Atari 2600 owners saw. After securing the rights to the game from Namco, Atari gave programmer Tod Frye just five weeks to complete the conversion. The company had learned from its earlier mistakes and promised Frye a royalty on every cartridge manufactured (not sold), which was an improvement. But this was another mistake. The royalty plus the rushed schedule meant Frye made money even if the game wasn’t up to snuff, and thus Frye had incentive to complete it regardless. Atari also required the game to fit into just 4KB like older 2600 cartridges, rather than the newer 8KB size that was becoming much more common by this point. That profit-driven limitation heavily influenced the way Frye approached the design of the game. To top it all off, Atari set itself up for a colossal failure by producing some 12 million cartridges, even though there were only 10 million 2600 consoles in circulation at the time. The company was confident that not only would every single existing 2600 owner buy the game, but that 2 million new customers would buy the console itself just for this cartridge.

We all know how it turned out. The instruction manual sets the tone for the differences from the arcade early on. The game is now set in “Mazeland.” You eat video wafers instead of dots. Every time you complete a board, you get an extra life. The manual says you also earn points from eating power pills, ghosts, and “vitamins.” Something is definitely amiss.

Pac-Man himself always looks to the right or left, even if he is going up or down. The video wafers are long and rectangular instead of small, square dots. Fruits don’t appear periodically at the center of the screen. Instead, you get the aforementioned vitamin, a clear placeholder for what would have been actual fruit had there been more time to get it right. The vitamin always looks the same and is always worth 100 points, instead of increasing as you clear levels. The rest of the scoring is much lower than it is in the arcade. Gobbling up all four ghosts totals just 300 points, and each video wafer is worth just 1 point.

The ghosts have tremendous amounts of flicker, and they all look and behave identically, instead of having different colors, distinct personalities, and eyes that pointed in the right direction. The flicker was there for a reason. Frye used it to draw the four ghosts in successive frames with a single sprite graphic register, and drew Pac-Man every frame using the other sprite graphic register. The 2600’s TIA chip synchronizes with an NTSC television picture 60 times per second, so you end up seeing a solid Pac-Man, maze, and video wafers (I can still barely type “video wafers” with a straight face), but the ghosts are each lit only one quarter of the time. A picture tube’s phosphorescent glow takes a little bit to fade, and your eye takes a little while to let go of a retained image as well, but the net result is that the flicker is still quite visible.
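To make the arithmetic behind that flicker concrete, here is a minimal Python sketch of the multiplexing scheme; it's an illustration of the technique as described above, not Frye's actual 6502 code, and the object names are invented. One sprite register is shared round-robin by the four ghosts, so each ghost is lit on only every fourth frame:

```python
# Hypothetical model of the 2600 sprite multiplexing described above
# (an illustration, not Frye's 6502 code). Pac-Man owns one sprite
# register and appears every frame; the four ghosts share the other
# register round-robin, one ghost per frame.

FRAMES_PER_SECOND = 60            # TIA syncs to an NTSC picture 60 times/sec
GHOSTS = ["ghost0", "ghost1", "ghost2", "ghost3"]

def objects_drawn_on(frame: int) -> list:
    """Return the objects lit during one video frame."""
    drawn = ["pacman"]                         # dedicated register: solid
    drawn.append(GHOSTS[frame % len(GHOSTS)])  # shared register: rotated
    return drawn

# Count how often each ghost is actually lit over one second.
visible = {name: 0 for name in GHOSTS}
for frame in range(FRAMES_PER_SECOND):
    for obj in objects_drawn_on(frame):
        if obj in visible:
            visible[obj] += 1

# Each ghost refreshes only 15 times per second (a 25% duty cycle),
# which is why the ghosts visibly flicker while Pac-Man stays solid.
print(visible)  # {'ghost0': 15, 'ghost1': 15, 'ghost2': 15, 'ghost3': 15}
```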

It gets worse. The janky, gritty sound effects are bizarre, and the theme song is reduced to four dissonant chords. (Oddly, these sounds resurfaced in some movies over the next 20 years and were a default “go-to” for sound designers working in post-production.) The horizontally stretched maze is nothing like the arcade, either, and the escape routes are at the top and bottom instead of the sides. The maze walls aren’t even blue; they’re orange, with a blue background, because it’s been reported Atari had a policy that only space games could have black backgrounds (!). At this point, don’t even ask about the lack of intermissions.

One of Frye’s own mistakes is that he made Pac-Man a two-player game. “Tod used a great deal of memory just tracking where each player had left off with eaten dots, power pellets, and score,” wrote Goldberg and Vendel in Atari Inc.: Business is Fun. Years later, when Frye looked at the code for the much more arcade-faithful 2600 Ms. Pac-Man, he saw the programmers were “able to use much more memory for graphics because it’s only a one player game.”

Interestingly, the game itself is still playable. Once you get past the initial huge letdown and just play it on its own merits, Pac-Man puts up a decent experience. It’s still “Pac-Man,” sort of, even if it delivers a rough approximation of the real thing as if it were seen and played through a straw. It’s worth playing today for nostalgia—after all, many of us played this cartridge to death anyway, because it was the one we had—and certainly as a historical curiosity for those who weren’t around for the golden age of arcades.

Many an Atari 2600 fan turned on the platform—and Atari in general—after the release of Pac-Man. Although the company still had plenty of excellent games and some of the best were yet to come, the betrayal was immediate and real and forever colored what much of the gaming public thought of Atari. The release of the Pac-Man cartridge didn’t curtail the 2600’s influence on the game industry by any means; we’ll visit many more innovations and developments as we go from here on out. But the 2600 conversion of Pac-Man gave the fledgling game industry its first template for how to botch a major title. It was the biggest release the Atari 2600 had and would ever see, and the company flubbed it about as hard as it could. It was New Coke before there was New Coke.

Grand Prix (Activision, March 1982)

The next few games we'll discuss further illustrate the quality improvements upstart third-party developers delivered, in comparison with Atari, which had clearly become too comfortable in its lead position. First up is Activision's Grand Prix, which in hindsight was a bit of an odd way to design a racer. It's a side-scroller on rails that runs from left to right, and is what racing enthusiasts call a time trial. Although other computer-controlled cars are on the track, you're racing against the clock, not them, and you don't earn any points or increase your position on track for passing them.

Gameplay oddities aside, the oversized Formula One cars are wonderfully detailed, with brilliant use of color and animated spinning tires. The shaded color objects were the centerpiece of the design, as programmer David Crane said in a 1984 interview. “When I developed the capability for doing a large multicolored object on the [2600’s] screen, the capability fitted the pattern of the top view of a Grand Prix race car, so I made a racing game out of it.” Getting the opposing cars to appear and disappear properly as they entered and exited the screen also presented a problem, as the 2600’s lack of a frame buffer came into play again. The way TIA works, the 2600 would normally just make the car sprite begin to reappear on the opposite side of the screen as it disappeared from one side. To solve this issue, Crane ended up storing small “slices” of the car in ROM, and in real time the game drew whatever portions of the car were required to reach the edge of the screen. The effect is smooth and impossible to detect while playing.
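The slicing idea is simple to model. Here is a hedged Python sketch (the names and sizes are invented for illustration; the real routine operated on TIA sprite data in 6502 assembly): the car is stored as narrow vertical slices, and at the screen edge the game draws only the slices currently visible, rather than letting the sprite wrap to the opposite side.

```python
# Hypothetical sketch of the "slices" clipping trick described above.
# The car bitmap is pre-cut into narrow vertical strips (stored in ROM
# on real hardware); near the screen edge, only the on-screen strips
# are drawn, so the sprite never wraps around to the opposite side.

CAR_SLICES = ["s0", "s1", "s2", "s3", "s4", "s5"]  # left-to-right strips
SCREEN_WIDTH = 40                                   # illustrative width

def visible_slices(car_x: int) -> list:
    """Return the strips of the car that fall inside the screen bounds."""
    shown = []
    for offset, strip in enumerate(CAR_SLICES):
        if 0 <= car_x + offset < SCREEN_WIDTH:
            shown.append(strip)
    return shown

print(visible_slices(-3))  # ['s3', 's4', 's5']  half off the left edge
print(visible_slices(10))  # all six strips      fully on screen
print(visible_slices(37))  # ['s0', 's1', 's2']  exiting on the right
```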

The car accelerates over a fairly long period of time, and steps through simulated gears. Eventually it reaches a maximum speed and engine note, and you just travel along at that until you brake, crash into another car, or reach the finish line. As the manual points out, you don’t have to worry about cars coming back and passing you again, even if you crash. Once you pass them, they’re gone from the race.

The four game variations in Grand Prix are named after famous courses that resonate with racing fans (Watkins Glen, Brands Hatch, Le Mans, and Monaco). The courses bear no resemblance to the real ones; each game variation is simply longer and harder than the last. The tree-lined courses are just patterns of vehicles that appear on screen. Whenever you play a particular game variation, you see the same cars at the same times (unless you crash, which disrupts the pattern momentarily). The higher three variations include bridges, which you have to quickly steer onto or risk crashing. During gameplay, you get a warning in the form of a series of oil slicks that a bridge is coming up soon.

Although Atari’s Indy 500 set the bar early for home racing games on the 2600, Grand Prix demonstrated you could do one with a scrolling course and much better graphics. This game set the stage for more ambitious offerings the following year. And several decades later, people play games like this on their phones. We just call titles like Super Mario Run (a side-scroller) and Temple Run (3D-perspective) “endless runners,” as they have running characters instead of cars.

Activision soon became the template for other competing third-party 2600 developers. In 1981, Atari’s marketing vice president and a group of developers, including the programmers for Asteroids and Space Invaders on the console, started a company called Imagic. The company had a total of nine employees at the outset. Its name was derived from the words “imagination” and “magic”—two key components of every cartridge the company planned to release. Imagic games were known for their high quality, distinctive chrome boxes and labels, and trapezoidal cartridge edges. As with Activision, most Imagic games were solid efforts with an incredible amount of polish and were well worth purchasing.

Although Imagic technically became the second third-party developer for the 2600, the company’s first game didn’t arrive until March 1982. Another company, Games by Apollo, beat it to the punch by starting up in October 1981 and delivering its first (mediocre) game, Skeet Shoot, before the end of the year.

But when that first Imagic game did arrive, everyone noticed.

Demon Attack

At first glance, the visually striking Demon Attack looks kind of like a copy of the arcade game Phoenix, at least without the mothership screen (something it does gain in the Intellivision port). But the game comes into its own the more you play it. You’re stuck on the planet Krybor. Birdlike demons dart around and shoot clusters of lasers down toward you at the bottom of the screen. Your goal is to shoot the demons all out of the sky, wave after wave.

The playfield is mostly black, with a graded blue surface of the planet along the bottom of the screen. A pulsing, beating sound plays in the background. It increases in pitch the further you get into each level, only to pause and then start over with the next wave. The demons themselves are drawn beautifully, with finely detailed, colorful designs that are well animated and change from wave to wave. Every time you complete a wave, you get an extra life, to a maximum of six.

On later waves, the demons divide in two when shot, and are worth double the points. You can shoot the smaller demons, or just wait—eventually each one swoops down toward your laser cannon, back and forth until it reaches the bottom of the screen, at which point it disappears from the playfield. Shoot it while it’s diving and you get quadruple points. In the later stages, demons also shoot longer, faster clusters of lasers at your cannon.

The game is for one or two players, though there’s a cooperative mode that lets you take turns against the same waves of demons. There are also variations of the game that let you shoot faster lasers, as well as tracer shots that you can steer into the demons. After 84 waves, the game ends with a blank screen, though reportedly a later run of this cartridge eliminates that and lets you play indefinitely. If I were still nine years old, I could probably take a couple of days out of summer and see if this is true. I am no longer nine years old.

Demon Attack was one of Imagic’s first three games, along with Trick Shot and Star Voyager. Rob Fulop, originally of Atari fame and one of Imagic’s four founders, programmed Demon Attack. In November 1982, Atari sued Imagic because of Demon Attack’s similarity to Phoenix, the home rights of which Atari had purchased from Centuri. The case was eventually settled. Billboard magazine listed Demon Attack as one of the 10 best-selling games of 1982. It was also Imagic’s best-selling title, and Electronic Games magazine awarded it Game of the Year.

“The trick to the Demon Attack graphics was it was the first game to use my Scotch-taped/rubber-banded dedicated 2600 sprite animation authoring tool that ran on the Atari 800,” Fulop said in 1993. “The first time Michael Becker made a little test animation and we ran Bob Smith’s utility that successfully squirted his saved sprite data straight into the Demon Attack assembly code and it looked the same on the [2600] as it did on the 800 was HUGE! Before that day, all 2600 graphics ever seen were made using a #2 pencil, a sheet of graph paper, a lot of erasing, and a list of hex codes that were then retyped into the source assembly code, typically introducing a minimum of two pixel errors per eight-by-eight graphic stamp.”

Although you can draw a line from Space Invaders to just about any game like this, Demon Attack combines that with elements of Galaga and Phoenix, with a beautiful look and superb gameplay all its own.

Pitfall! (Activision, April 1982)

A watershed moment in video game history, David Crane’s Pitfall! was one of the best games released for the 2600. As Pitfall Harry, your goal is to race through the jungle and collect 32 treasures—money bags, silver bars, gold bars, and diamond rings, worth from 2,000 to 5,000 points each. Jump and grab vines, and you soar over lakes, quicksand, and alligators, complete with a Tarzan-style “yell.” You can stumble on a rolling log or fall into a hole, both of which just dock you some points. Each time you fall into quicksand or a tar pit, drown in a lake, burn in a fire, or get eaten by an alligator or scorpion, you lose a life. When that happens, you start the next one by dropping from the trees on the left side of the screen to keep playing.

Pushing the joystick left or right makes Pitfall Harry run. He picks up treasure automatically. Holding the stick in either direction while pressing the button makes him jump, either over an obstacle or onto a swinging vine (running into the vine without jumping also works). Push down while swinging to let go of the vine. You also can push up or down to climb ladders.

In an incredible feat of programming, the game contains 255 screens, with the 32 treasures scattered throughout them. The world loops around once you reach the last screen. Although Adventure pioneered the multiroom map on the 2600, Pitfall! was a considerably larger design. Crane fit the game into the same 4KB ROM as Adventure. But rather than storing all 255 screens as part of the ROM—which wouldn’t have fit—Crane’s solution was not to store the world in ROM at all. Instead, the world is generated by code, the same way each time. This is similar to games like Rogue, but even in that case, the game generates the world and then stores it during play. Pitfall! generates each screen via an algorithm, using a counter that increments in a pseudorandom sequence that is nonetheless consistent and can be run forwards or backwards. The 8 bits of each number in the counter sequence define the way the board looks. Bits 0 through 2 are object patterns, bits 3 through 5 are ground patterns, bits 6 and 7 cover the trees, and bit 7 also affects the underground pattern. This way, the world is generated the same way each and every single time. When you leave one screen, you always end up on the same next screen.
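Below is a hedged Python sketch of that scheme. The feedback taps here are illustrative, not necessarily the polynomial Crane actually used, but the bit-field decoding follows the description above, and the generator is reversible, so stepping left and then right always returns the identical screen:

```python
# Hypothetical sketch of Pitfall!-style screen generation (illustrative
# LFSR taps; Crane's actual polynomial may differ). Each screen is decoded
# from an 8-bit counter; stepping the counter forward or backward yields
# the neighboring screen, so none of the 255 screens is ever stored.

TAPS = (7, 5, 4, 3)  # assumed feedback bits; tapping bit 7 is what lets
                     # the sequence run backwards as well as forwards

def step_forward(v: int) -> int:
    """Shift left, feeding the XOR of the tapped bits into bit 0."""
    new_bit = 0
    for t in TAPS:
        new_bit ^= (v >> t) & 1
    return ((v << 1) & 0xFF) | new_bit

def step_backward(v: int) -> int:
    """Invert step_forward by reconstructing the bit shifted out."""
    prev_low = v >> 1                      # previous bits 0-6
    prev7 = ((v & 1)                       # undo: bit0 = b7 ^ b5 ^ b4 ^ b3
             ^ ((prev_low >> 5) & 1)
             ^ ((prev_low >> 4) & 1)
             ^ ((prev_low >> 3) & 1))
    return prev_low | (prev7 << 7)

def decode_screen(v: int) -> dict:
    """Split the counter into the fields described in the text."""
    return {
        "objects": v & 0b111,          # bits 0-2: object pattern
        "ground": (v >> 3) & 0b111,    # bits 3-5: ground pattern
        "trees": (v >> 6) & 0b11,      # bits 6-7: trees
        "underground": (v >> 7) & 1,   # bit 7 also picks the tunnel layout
    }

screen = 0xC4                           # arbitrary starting screen
right = step_forward(screen)            # walk one screen to the right...
assert step_backward(right) == screen   # ...and back to the very same screen
print(decode_screen(screen))
```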

“The game was a jewel, a perfect world incised in a mere [4KB] of code,” Nick Montfort wrote in 2001 in Supercade: A Visual History of the Videogame Age, 1971-1984.

You get a total of three lives, and Crane points out in the manual that you need to use some of the underground passages (which skip three screens ahead instead of one) to complete the game on time. The inclusion of two on-screen levels—above ground and below ground, with ladders connecting them—makes the game an official platformer. And the game even gives you some say in where to go and what path you take to get there. Pitfall Harry is smoothly animated, and the vines deliver a genuine sensation of swinging even though the game is in 2D.

The game’s 20-minute timer, which approximates the 22-minute length of a standard half-hour television show, marked a milestone for console play. It was much longer than most arcade games and even cartridges like Adventure, which you could complete in a few minutes. The extra length allows for more in-depth play.

“Games in the early ’80s primarily used inanimate objects as main characters,” Crane said in a 2011 interview. “Rarely there would be a person, but even those weren’t fully articulated. I wanted to make a game character that could run, jump, climb, and otherwise interact with an on-screen world.” Crane spent the next couple of years tinkering with the idea before finally coming up with Pitfall!. “[After] only about 10 minutes I had a sketch of a man running on a path through the jungle collecting treasures. Then, after ‘only’ 1,000 hours of pixel drawing and programming, Pitfall Harry came to life.”

Crane said he had already gone beyond that 4KB ROM limit and back within it many times over hundreds of hours. Right before release, he was asked to add additional lives. “Now I had to add a display to show your number of lives remaining, and I had to bring in a new character when a new life was used.” The latter was easy, Crane said, because Pitfall Harry already knew how to fall and stop when he hit the ground. Crane just dropped him from behind the tree cover. “For the ‘Lives’ indicator I added vertical tally marks to the timer display. That probably only cost 24 bytes, and with another 20 hours of ‘scrunching’ the code I could fit that in.”

Pitfall! couldn’t have been timed more perfectly, as Raiders of the Lost Ark was the prior year’s biggest movie. The cartridge delivered the goods; it became the best-selling home video game of 1982 and it’s often credited as the game that kickstarted the platformer genre. Pitfall! held the top spot on Billboard’s chart for 64 consecutive weeks. “The fine graphic sense of the Activision design team greatly enriches the Pitfall! experience,” Electronic Games magazine wrote in January 1983, on bestowing the cartridge Best Adventure Videogame. “This is as richly complex a video game as you’ll find anywhere…Watching Harry swing across a quicksand pit on a slender vine while crocodiles snap their jaws frantically in a futile effort to tear off a little leg-of-hero snack is what video game adventures are all about.” Pitfall!’s influence is impossible to overstate. From Super Mario Bros. to Prince of Persia to Tomb Raider, it was the start of something huge.



The best home Wi-Fi and networking gear

Editor’s note: This post was done in partnership with Wirecutter. When readers choose to buy Wirecutter’s independently chosen editorial picks, Wirecutter and TechCrunch earn affiliate commissions.

It's safe to say that, for many, a world without internet is hard to imagine. And when you need a solid connection for work, studying, or catching up on your favorite shows, a poor connection can feel almost as bad as none at all.

While fast and reliable internet starts with having a good provider, owning the right gear helps to support it. From network storage to the best router, we’ve compiled picks for helping you set up and secure a dependable home Wi-Fi network.

Wi-Fi router: Netgear R7000P Nighthawk

More than anything, when it comes to setting up a home Wi-Fi network, it's important to have a good Wi-Fi router. If you don't rent one from your internet service provider, you'll want one that's easy to use, has a decent connection range, and can handle a crowded network.

Our top pick, the Netgear R7000P Nighthawk, is a dual-band, three-stream 802.11ac router that offers solid speed and throughput performance at moderate and long ranges. Its load-balancing band steering automatically kicks in when the network is busy, which means you won't sit around clicking refresh and resending requests. We like that its toggles and features are easy to find. This router is ideal for larger spaces, including homes that experience coverage issues.

Photo: Kyle Fitzgerald

Cable modem: Netgear CM500

If you have cable internet, a cable modem is one of the pieces of equipment supplied by your service provider. As with a router, you may be paying an additional fee on top of the cost of internet service to have one. While a router helps your wireless devices communicate and share an internet connection, a modem is the device that connects your home network to the wider internet.

If you plan to opt out of renting a modem, or recently have, you'll like that our top recommendation, the Netgear CM500, pays for itself in about six months. It's compatible with most cable Internet service providers, it'll last for years, and you can rely on it to support Internet plan speeds of up to 300 Mbps.

Photo: Michael Hession

Wi-Fi mesh-networking kit: Netgear Orbi RBK50

In some homes, it isn't uncommon to move the router around in search of the best signal. However, in spaces larger than 2,000 square feet, or in spaces of any size with brick, concrete, or lath-and-plaster interior walls, relocating the router may not do the trick. Instead of relying on a single router, a Wi-Fi mesh-networking kit uses multiple access points to improve overall Wi-Fi performance and range.

Our top pick, the Netgear Orbi RBK50, comes with a base router and satellite, each unit a tri-band device. We think these two units are enough for supporting a solid home Wi-Fi network in most spaces, but you can add another unit to this kit if necessary. It’s equipped with more than enough Ethernet ports, and it’ll work without an internet connection during setup or an internet outage.

Photo: Michael Hession

Wi-Fi extender: TP-Link RE200

For an even simpler, budget-friendly way to bolster a home Wi-Fi signal, we recommend the dual-band TP-Link RE200, our top pick among Wi-Fi extenders. It doesn't extend the range of a network per se, but it increases throughput and decreases latency for a better Wi-Fi experience. It should be paired with a good router and is best for improving the speed of one device at a time on a network that isn't too busy.

It's compact, plugs into a power outlet, and has an Ethernet port for easily connecting nearby devices. It's a helpful middle-ground option for strengthening a Wi-Fi connection when you don't need a mesh-networking kit and already have a decent router that doesn't need to be replaced.

Photo: Rozette Rago

VPN service: IVPN

A virtual private network, or VPN, helps ensure that a connection is secure. It's an extra layer of protection that encrypts your online activity, and it should be used in addition to password managers, privacy-supporting browser plug-ins, and encrypted hardware. While network security is necessary, a service that's transparent and trustworthy, with helpful support, matters even more.

Of the 12 VPN services we tested, we think IVPN is the best provider. IVPN doesn't log or monitor activity, and we like that it's stable, fast, and works across platforms. In situations where your network isn't protected, or when you join an unsecured network, its "firewall" and OpenVPN protocol features will keep you covered.

Photo: Kyle Fitzgerald

NAS for most home users: Synology DS218+

For homes and spaces where multiple computers are used, a network-attached storage (NAS) device backs up data and files from the devices on your local network, or to a cloud service. It's a small computer equipped with one or two hard drive bays, and it stays on at all times while using less power than a repurposed computer. Our top recommendation, the Synology DS218+, is easy to manage, and it has three USB ports and the fastest write speeds of any NAS we tested.

For those with a massive library of files, a NAS device is a better option than an external drive. With the DS218+, you can play back media and conveniently access your data, since it supports the FTP protocol, VPN server capabilities, SSDs, and more. The biggest plus of owning a NAS device is the option to use it as a storage device, a website host, a media streamer, or anything else that a Linux computer can function as.

This guide may have been updated by Wirecutter.

Note from Wirecutter: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.
