There is no mistaking it. Computer Generated Imagery, or CGI, is now mainstream in movie making. In recent years it has become cheaper, faster, more advanced, and, most importantly, widely successful. Visual effects were used even before computers could generate images. As in the reading, movies as old as Citizen Kane used visual effects to produce unique scenes and shots. In many older movies that didn't have the luxury of CGI, a great deal of work went into creating scenes that were unique to those films. Effects were produced through camera angles, costuming, and, in many cases, extreme uses of mise-en-scène. These days such effects are seen as cheap and uninspired because the modern audience is so used to seeing more realistic images in modern movies. This is particularly true for modern-day action and science fiction movies. Does our dependence on CGI spoil us to the true effort that went into legacy films?
An example of how I feel CGI has hurt movies is my favorite film, The Thing (John Carpenter, 1982), which later got a prequel, The Thing (Matthijs van Heijningen Jr., 2011). To the common viewer, both movies share the same premise and plot: in the isolated, frozen Antarctic, a team of scientists discovers a horrific alien with the ability to absorb them. When the alien, disguised as a team member, is discovered, it "reveals itself," usually in a gruesome and gory manner. In 1982 these scenes were done with plastic, puppeteering, latex, synthetic slime, and more to create truly gross scenes. The 2011 movie did nearly all of it with CGI and, while still a typical horror gross-out, was not received as well by fans of the first. Of course, staying strictly true to the techniques of the first movie might have looked corny in theaters, but without the legendary, "natural" creature effects of Rob Bottin, there is a major difference in the creature design that made the first movie so successful. There are a few images from both movies below. Can you tell the CGI from the non-CGI? Which one do you prefer?
There is, however, an example of how the use of CGI can bring something new to a film series. The example I use for this is another classic pair: The Terminator (James Cameron, 1984) and Terminator 2: Judgment Day (James Cameron, 1991). In the first movie, unique visual effects weren't used until near the end, where the robot is seen chasing Sarah Connor. In most movies where a "hunter" is coming for the "hunted," it's usually just a guy in a costume. But in this movie, it had already been shown that the "guy" had been stripped away and all that was left was the robotic endoskeleton. To achieve this, James Cameron employed a puppet and stop-motion animation: the crew took a series of pictures of the machine, slightly changing the position of its hands and legs between each, to create the illusion that it was walking, climbing, and so on. Despite how strange and impractical the effects looked, they succeeded in giving off the eerie image of a murderous robot hunting its victim. In the second movie, in 1991, when CGI was seeing rapid growth, James Cameron had a different idea of how to upgrade his killer robot. The T-1000, the liquid-metal robot, was shown transforming from person to person, or forming swords and crowbars from its arms, all using CGI. This use of CGI made the villain of the second movie as iconic as the first and accomplished its goal of introducing the new antagonist.
As movies become more expensive to make and require higher revenue to be considered successful, how far can CGI go? Marvel movies rarely shoot on location outside the United States, unlike other iconic movies such as Lord of the Rings, which was filmed in places like New Zealand. Will we get to the point where entire "live-action" movies are made with CGI? As digital facial structures and movement become more refined, will we even need actors in the future?