Saturday, August 31, 2019

High School Stereotypes Essay

When you first enter high school, you can face many different difficulties (exams, new friends, drama, etc.), but one of the most stressful difficulties of all is stereotypes. 'What is a stereotype? Where do I fit in? Do I have to fit in?' are just some of the things you may be worrying about. There are three main stereotypes that I have come to recognize in almost every single school: the preps, the nerds, and the emos. I will help you classify which is which and understand the differences so that you can determine whether you want to fit in with these groups or go your own way.

The first main stereotype is the preps. They are usually well talked about, and you will hear their names often. They dress nicely in expensive clothing, are usually wealthy, and are often involved in sports like football or cheerleading. This stereotype is usually filled with good-looking people, and these types of kids are known to be meaner than average. There are advantages and disadvantages to being one of the preps. Some good things about being one are that you are well known, and people envy you and want to be your friend. You usually have fun because you are invited to parties and everyone wants to be around you. A few disadvantages are that you are expected to dress nicely at all times, and you are also supposed to act accordingly, even if that means being rude to someone else.

The second stereotype is the nerds. These kids are the exceptionally smart ones. These are the people in your class who get straight A's. They enjoy doing homework and extra credit and are usually known to be socially awkward. They often wear glasses or braces, breathe heavily, and often get bullied. Just like every other stereotype I am talking about today, being a nerd also has its own ups and downs. Some advantages are that you get excellent grades, which impresses teachers and makes parents proud. Also, since you are academically stronger, you are more likely to get into a good college and get an excellent education, which will probably lead to a high-paying job in the future. Some downsides are that you won't be invited anywhere fun, probably won't have many friends, and might get picked on a lot by classmates or bullies.

The third stereotype is the emos. This is probably the easiest stereotype to spot. They dress in all black, wear very dark makeup, dye their hair black, are usually depressed, and don't speak to anyone outside their circle. They frown upon the 'normal' members of society and think they are 'different.' They are usually sad, and their conversations revolve around how they want to die and how terrible life is. These kids despise the preppy kids and hate anything happy or cheery. They usually spend a lot of time skipping class together and chatting online with each other. The upside to joining this stereotype is that they have an odd sense of community: they all stick together no matter what and listen to each other's problems. Some of the downsides are that you are looked down upon by most people in society, and most people do not want to approach you or get to know you. It will be hard for you to get a decent job, and people will make bad assumptions about you.

Well, now you know the three main high school stereotypes: the preps, the nerds, and the emos. You can be mean and popular with the preps, be geeky but have good grades with the nerds, or be depressed but have a sense of community with the emos. You can weigh the positives and negatives of each and decide if one of these groups is right for you, or you can decide to just be yourself and go your own way.

Friday, August 30, 2019

Baseline Magazine

Baselinemag.com is a site that has been put in place to serve as a guide on various aspects of technology. That is, it manages and disseminates critical information on technology, usually in the form of news stories, research studies, financial tools, and company dossiers. As such, the site is structured in a formal format that allows users to gather all the information they need, perform analysis, and make decisions on how to use the information to advance their own IT companies. Considering that the site is targeted at IT executives, its subtopics cover a range of IT subjects such as IT management, TechDirect, projects, and white papers (Baseline, 2010). Furthermore, the subtopics that fall under these categories can be observed throughout the site, making it easy for users to access the exact information they are looking for as soon as the site loads. Basically, the site is structured in such a way that one spends limited time trying to access data, a feature that appeals to the targeted audience.

Overall Look and Feel
Baselinemag.com targets technology and business leaders who are constantly on the lookout for cutting-edge information on technology and on effective ways of managing their companies. As such, the site displays IT systems that have been implemented by other companies, how they have been implemented, and the results emanating from them. This sort of information enlightens users, who measure their own successes or failures against the expected results. By knowing how the most successful companies utilize IT solutions, companies are able to restructure their systems and alter their management strategies to fit those of a winning company. All this information is spread out over a white background, with topics and subtopics highlighted in orange, brown, and blue. These colors create a serene environment, depicting calmness, which allows the user to concentrate fully on the information without interruption. The number of images on the site is limited, and this gives the information greater significance. However, the site may not appeal to casual readers, as there are no catchy images.

Useful Features
The feature I found most useful was the stories posted on how to get promoted and on finding jobs online. These stories contain no jargon and are written in simple language to appeal to ordinary individuals seeking to advance in their careers. Since the majority of users are employees looking to get ahead in their careers, this feature can offer crucial information to help them achieve this.

Interesting Features
Links visited included www.diskeeper.com, www.insight.com, and www.smarttechnology.com. The link sponsored by Smart Technology was rather interesting, as it carried an article on "the techie's guide to fitness." It offered a convenient and easy way of monitoring one's fitness goals even while working. The gadget can indeed send significant details about one's fitness directly to a phone or computer.

Bottom-line Evaluation and Relation to Learning Objectives
Basically, this site is of great value to business leaders and IT executives. However, its value declines for ordinary people due to the irrelevance of the topics and subtopics found on the site. As pointed out earlier, only one or two features are relevant to an ordinary individual. According to Lagace (2000), value is perceived by a customer when service delivery is effective. As such, the satisfaction of business leaders and executives is what can ideally rate this site, while those stumbling upon it can only speculate or even find it insignificant. Hoffman (1996) illustrates that past experiences can be advantageous when one is venturing into one's own business; as such, baselinemag.com offers even better options for evaluating other companies' experiences and learning from them. Bayan (2004) contends that there are tools which are quite useful when setting up an effective help desk, and in the life of a company executive or business leader, baselinemag.com is one such tool.

References
Baselinemag.com. (2010). Ziff Davis, Inc. Retrieved on 23rd July 2010, from: www.baselinemag.com
Bayan, R. (2004). Try these efficiency strategies when setting up a successful help desk. TechRepublic. Retrieved 23rd July 2010, from http://articles.techrepublic.com.com/5100-10878_11-5112468.html
Hoffman, R. (1996). Help is Only a Phone Call Away! How to get good service. The Real World. Retrieved 23rd July 2010, from http://www.animatedsoftware.com/misc/stories/jobs/dbmscntr.htm
Lagace, M. (2000). Calling all Managers: How to Build a Better Call Center. Harvard Business School: Working Knowledge. Retrieved on 23rd July 2010, from: http://hbswk.hbs.edu/item/1238.html

Thursday, August 29, 2019

Criminal Minds Essay

The subject of this essay is "Criminal Minds," a police procedural about a team of profilers in the FBI's Behavioral Analysis Unit (Criminal Minds, 2014). The team's job is to establish a profile of the suspect, who is always a criminal who has committed an unusual crime. This program was chosen because it clearly displays social deviance. The suspects in this police program are not ordinary criminals. They usually suffer from a mental or personality disorder that makes them incapable of remorse. The criminals include serial killers, child rapists, cult murderers, and cold-blooded murderers. The FBI team meets and studies the evidence from the crime scene. The evidence, which may include the manner of killing, the motive, the weapons used, and the strategy to conceal the crime, helps the FBI establish a personality profile of the suspect or criminal.

The criminals in these programs are social deviants, and they are labeled by the police organization as deviant based on the crimes they committed. The FBI behavioral analysis unit, which works hand in hand with the police, labels the suspects as not the everyday criminal. The criminals are extremely dangerous and usually suffer from behavioral disorders or mental illness. They are not normal criminals who commit their crimes for survival or to earn money; these criminals usually commit their crimes out of passion, for fun, or because of some superstitious belief. The FBI unit labels these criminals as the most deviant of all criminals: they are extremely dangerous and will continue to commit their crimes until they are caught. It is therefore necessary for the FBI unit to build a personality profile of the criminal in order to know who the next victim will be and where the next crime will occur. The FBI team is the one doing the labeling for the deviance.

The crimes committed by the criminals, and the criminals themselves, could be considered primary deviance (Siegel, 2008). The crimes are murder, homicide, robbery, and rape, and all are primary deviance. The criminal is pursued, arrested, killed, or sent to jail by the FBI team. This is the penalty for the criminal. They are treated and penalized like any ordinary criminal, but the FBI team labels them as extremely dangerous criminals. This is secondary labeling. Because they are labeled as extremely dangerous, they become the top priority of the organization. The FBI may sometimes employ questionable practices just to capture these extremely dangerous criminals; they may, for example, hack the suspect's email just to know what is on his mind. Another instance of secondary labeling is when cult members commit a crime and then, when another crime is committed, they are blamed for it simply because they are cult members.

Some individuals in this program are not actually criminals and do not actually commit a crime, such as in the episode where Satanists are considered suspects for a crime they did not commit. However, the FBI has encountered murders and homicides committed by cult members. Hence, when a murder was committed and the style of the murder was similar to that of a cult, the FBI agents quickly concluded that the Satanists were involved. There are no cults in the community, but there are Satanists who meet together in discos and private gatherings. The FBI team felt that the Satanists were responsible for the crime simply because they are Satanists. This is secondary deviance. According to sociologists, secondary deviation is what causes individuals to become hardened criminals.

Stigma can also be found in this episode. The Satanists in the film were considered deviants even though their leader claimed that they were only misunderstood. He had a valid justification, but because society considers Satan the king of evil, his believers were considered evil and deviant.

Emile Durkheim's four functions of deviance are also portrayed in the series. In season three, episode 12, entitled "3rd Life," a teenager was found murdered and her friend went missing, believed to have been abducted. The task of the FBI agents was to create a profile of the killer and find him before he killed the other teen. According to Durkheim, deviance serves four functions. The first is affirming cultural values and norms (Thompson, 2012). The murder and abduction that take place in the episode go against the cultural values and norms of American society. The murder is also against the moral standards of US society: it is wrong to murder, and this is the moral standard. Durkheim's third function of deviance is promoting social unity. The crime allowed all community members to participate in the hunt for the killer and kidnapper. Everybody was willing to give information regarding the events related to the murder. People do not approve of crime; when the crime rate is high, they group together and pressure the government to do something about it. Some participate in solving crimes and cooperate with the government by standing as witnesses or providing information to help solve the crime. Uniting the community is the third function of deviance. A community may also group together to stigmatize people who are considered not to be following the norms. The people in the TV series agreed that Satanists are bad; both the policemen and the FBI agents stigmatized the Satanists. The fourth function of deviance, according to Durkheim, is that it encourages social change. The social change brought about by the crimes in the TV series is implied: FBI procedures are revised whenever the team encounters a criminal who is very difficult to find.

References:
Criminal Minds (2014). Perf. Mandy Patinkin, Thomas Gibson & Lola Glaudini. USA: ABC Studios.
Siegel, L. (2008). Criminology: Theories, Practice and Typologies. NJ: Prentice Hall.
Thompson, W. (2012). Society in Focus: An Introduction to Sociology. NY: Allyn and Bacon.

Controversial Issues in Entertainment Paper Essay

Since discussions of controversial issues are open to the public, the questions regarding their ethicality are disputable. According to Kuypers (2002), "Controversial issues are news, and for news we look to the press" (p. 1). This essay aims to discuss the various issues and debates surrounding the coverage of controversial issues and news by the media. For the purpose of this study, the recent, highly dramatic photo of a subway train bearing down on a man who had been pushed off the platform, published on the front page of the New York Post, is used. The highly graphic depiction sparked widespread debate and criticism condemning the unethical publication of the image, the misuse of the freedom of the press, and the insult to the dignity of humanity.

Nature of the controversy: On December 4, 2012, The New York Post published a photo on its front page, sensationally titled "Pushed on the subway track, this man is about to die. DOOMED" (see Appendix Fig. 1). Immediately after the sensational publication of the disturbing image by the newspaper, wide-ranging debates and criticisms surrounding ethics in photojournalism and the dignity of humanity began pouring in from all quarters of society. The criticisms mainly questioned the photographer's choice of action in the face of adversity. The intent of the photographer, R. Umar Abbasi, was questioned, raising serious concerns over the incident, particularly since he chose to capture and then sell the image to the New York Post rather than do the obvious, i.e., intervene and try to help the man. The inaction of the photographer highlighted his apparent apathy towards the victim, inviting severe criticism of his lack of judgment and gross violation of human rights. The publication of the photo on the front page of the New York Post also prompted debates and criticisms surrounding the blatant use of the liberty of the press by the publication. News journalists and photographers are bound by a strict code of ethics and good conduct which entails that they act prudently towards an individual or group and adhere to the standards of morality and principles normally attributed to and deserved by humanity. According to the National Press Photographers Association's Code of Ethics: "Photographic and video and images can reveal great truths, expose wrongdoing and neglect, inspire hope and understanding and connect people around the globe through the language of visual understanding. Photographs can also cause great harm if they are callously intrusive or are manipulated" (NPPA, 2012). Although the publication of the photo did not in any way violate the code of ethics, it did cross the line of human dignity, especially considering that the graphic image was published on the front page. The photo was used because it was 'newsworthy,' and such shocking images and news items are quick to catch the audience's attention, as is apparent from the widespread reactions garnered overnight from all parts of society, journalists and citizens alike. Soon after the publication of the photo by the New York Post, other newspapers and tabloids as well as the television media covered the incident with varied reactions on the issue. Most of the print and visual media condemned the publication of the graphic image but continued to cover the story as more and more videos and images captured by

Wednesday, August 28, 2019

Starbucks CEO Howard Schultz Case Study

Starbucks has embraced a differentiation strategy at the business level (Geereddy, 2012). It provides products that cater to the needs of a specific targeted group of customers. The company offers tailor-made varieties of goods and services at premium prices. The company has more ways of differentiating its products than most of its competitors, since a customer gets an experience when he shops for coffee; it can thus charge a premium price. Starbucks focuses on innovation by continually introducing new products and coffees, such as its "instant coffee" Via. The instant coffee earned the company sales growth of over 200 million. The new products are the force behind Starbucks' evolution into a company that provides a unique customer experience.

The company tries to understand its target customers. Hence, it has grown globally as the number one choice for many clients. It provides superior coffee to its clients and values branding its image and product through word of mouth; as a result, it ensures that its clients get the maximum experience in order to spread the word. It seeks to understand the particular needs of the individual customer and serve him appropriately. For instance, customers can pay online or via phone if that is convenient for them (Starbucks Corporation, 2015).

Starbucks applies demographic segmentation (categorizing markets by gender, age, ethnicity, income, and family life cycle). The company's main target is men and women aged 25 to 44 years. This market accounts for about half of its total business. The company targets this group by offering special drinks that appeal to them. It further positions its stores as the third place to go between work and home by establishing a unique and relaxing atmosphere. The next large group that Starbucks targets consists of young adults aged 18 to 24 years old. They account for about 40% of the company's sales. Starbucks entices the young adults through the

Tuesday, August 27, 2019

Nigeria Festival Research Paper

A wide range of festivities accompanies the eating of the new yam (Falola 143). This festival is organized after the people have obtained enough yields from the cultivation season, so it is basically how they express their feelings of joy and happiness: by organizing a big festival that gladdens the entire land of Igbo. The New Yam Festival is celebrated at the completion of the serious farm work done by men to produce yam. It is a ritual of dramatic performance and is seen as a dynamic and integral part of the culture of the Igbo people. This eight-day festival accommodates people from different villages as well, and thanksgiving to the gods and ancestors is undertaken. Both young and old citizens, not only adults, participate in this festive treat and perform various rituals (Kalu 184). Eating and drinking, along with dancing and singing music, are the main features of this big annual ritual festival. At different points in the festival, the male head of the family manifests his joy and happiness by ringing the bell of his buffalo several times. Wine is served in the horns of oxen and sometimes of buffaloes. People who are poor and less privileged are not allowed to drink this way; discrimination is one of the inner features of this celebration. These people use gourd cups, along with the women and children. This festival is an official declaration of the eating of the new yam, which is the result of intense hard work throughout the year (Okoroike 135).

This festival holds a significant role in the lives of the people who belong to Igbo land. This is demonstrated even before the ceremony commences, with the ritual washing of the children who are about to partake in the festival. After a ritual space is made with the help of leaves, the children are made to stand in it, and then, with some religious recitations, they are asked to pass certain materials through the throat and then spit them out on the ground. Some of

Monday, August 26, 2019

Outback Steakhouse Essay

This paper seeks to assess how employee selection methods at Outback Steakhouse help it to gain a competitive advantage. It also seeks to establish the importance of fit to the organization. Moreover, it seeks to evaluate the organization's employee selection process and whether or not its selection methods are valid.

The employee selection methods at Outback Steakhouse have, no doubt, given it a competitive advantage. It has made good use of some of the most effective intangible human resource factors, and these, together with wide organizational knowledge, have helped it gain advantages over its competitors. The intangible factors also make it hard for Outback's competitors to imitate its operations. Outback aims to maintain its competitive advantage, especially through the use of its already established intangible factors, which are mostly employee-centered. It uses a precise process of employee selection for both its hourly and management employees, and this process is one of the things that has helped it to hire and retain the best employees. During the hiring process, Outback always assesses the qualifications of all applicants before selecting the most appropriate for the vacancies in question. This selection process keeps Outback's objectives in mind, and as such, the successful candidates are always those who have proven beyond doubt that they are ready to work within its vision and values and adhere to its organizational culture. These employees must be competent enough to work in teams and be responsible and accountable (Jackson and Randall, 2008). To perform efficiently in a competitive business world, every player needs to understand that structural flexibility is crucial for success (Vesey, 1991). This is what Outback seems to have understood so well.

Outback Steakhouse employs a personality test when recruiting new employees. This is mainly aimed at identifying those candidates who are "adaptable, highly social and meticulous." Thereafter, it keeps the records of all the successful candidates, which are later used to set the cutoff scores for hiring other employees in the future. Only those who attain the set cutoff scores get hired, and as such Outback Steakhouse uses this opportunity to recruit only those applicants with characteristics similar to those of existing successful employees, a factor which has helped it maintain its competitive advantage. Moreover, its selection process has helped it to keep employee turnover as low as possible. The low employee turnover is equally important, as Outback Steakhouse retains almost all the skills and talents it nurtures. The process is also very cost-effective compared to other recruitment processes, since basing recruitment on the traits of established employees helps it to get simply the best employees (Jackson and Randall, 2008).

Fit has proven to be very important for Outback Steakhouse. By matching the personal qualities of all potential employees to Outback Steakhouse's culture and values, fit provides it with the most effective employees. It helps it to get the most appropriate hourly and management staff; that is, it helps in hiring the right staff. This fit is also important in meeting the organization's objectives, as it ensures that the best employees are retained as they develop their careers within the organization.
This not only leads to better performance but it also increases the loyalty of the employees, who are likely to be more committed to achieving the organizational goals (Jackson and Randall, 2008). Most restaurants in the US are reportedly doing badly partly

Sunday, August 25, 2019

Freud and Psychoanalysis Research Paper

No matter what level of education an individual might have, it is likely he or she is familiar with the name of Sigmund Freud. Freud's name is now synonymous with the theory of psychoanalysis, a science he essentially invented near the end of the 19th century. Sigmund Freud began his career as a medical doctor. He worked in Vienna with mental patients, and it is because of this work that he is today considered one of the founding fathers of modern psychology. It was while working with these patients that Freud developed psychoanalytic theory. Clearly, Freud's original training was not in the field that he founded. He originally worked as a neurologist, but the common method of treating patients was to use hypnosis, something Freud wasn't very good at but which forced him to take a more imaginative approach to treating the mind, one that would revolutionize the way people thought about thought (Robbins, 1999). Although Freud was obviously not the first individual to study how the mind worked, and he would obviously not be the last, Freud is given credit for making the first major breakthrough in treating the mind as an entity existing at least partially separate from the body. He identified that there were at least three different levels of thought and realized that these levels interacted and intertwined. Putting these pieces together led to his development of psychotherapy. Freud classified three major components as comprising an individual's psyche: the id, the ego, and the superego. This intuitive leap from biological function to mental processes did not occur by accident. To fully understand Freud's theories, his theoretical model of the human mind, and how these ideas translate into his theories regarding human culture, it is necessary to understand how these ideas developed as well as how Freud expressed

Saturday, August 24, 2019

Financing a Small Business Research Paper

Luckily, there are still many ways which a smart and proactive person can use to make his living, something that may even be better than his previous job, in fact. This is to start his own business. One might ask why anyone would want to start his own business and take on the huge risk of investing so much, only to find out in the end that it was all a bad idea and lose all his money. Well, the answer is already given above. But apart from not having any other choice, many people are wrong to consider a business a "risky investment," because a business is as secure as you want it to be. If you start out with a foolish idea which only you believe is going to sell, then you will lose your money. On the other hand, if you value your investment a lot and do not want to take big risks, you can start something simple like a general or medical store. This type of business will give you less profit, but it will be stable and will involve routine supply chain techniques, thereby lowering the risk to a minimum.

Now comes the most crucial part, the one which actually makes a person decide on his destiny: how to finance his business. The first thing a person needs to answer is how much investment he is going to need. If his business plan is well defined and thoroughly covers the standard five-year plan technique, he will know exactly how much he will need in order to get started. He will also know in how much time he will be able to reach break-even on his investment, thereby answering his second query, the repayment of his installments. There are several ways a person can finance his business; for the purpose of simplicity, we will discuss three of the most common ways, which are used by small businessmen and even by big multinational giants (James E. Burk, Richard P. Lehman, 2006).

Personal Credit
This technique is primarily designed for small businessmen deciding to open a low-risk, stable-earning platform. This is because, as the name suggests, the businessman will borrow money from his personal contacts, either by formal or informal agreements. These personal contacts are usually family members who come within the businessman's circle of trust, such as his parents and siblings. This is the safest way of acquiring money for investment because there is a lot of flexibility involved in the process. Also, the businessman will be more careful about investing it wisely, as his share is the highest in the whole investment and he cannot gamble with it too much. When selected people were interviewed on whether they would lend money to a close relative, the answers received were highly unexpected. Eighty percent of the people interviewed said they were not comfortable lending their hard-earned money, as they were not sure whether they would be able to get it back once they lent it. On the other hand, people also said that they would not like to ask their family members for help, as they do not want to parade the fact that they are starting a business and might become prey to extreme jealousy.

Bank Loan
This technique is the most commonly used in the business world today. Banks even offer special packages to customers seeking money to invest in their own small businesses, which are good because they have lower markups and they are granted more easily thanks to governmental

Friday, August 23, 2019

Process Costing Essay

Companies need to know the amount of money spent on their products before they can set appropriate selling prices. Firms that fail to accurately determine process costs can find themselves setting prices that are too low, leading to losses, or too high, scaring away customers. Accurate process costing helps to set the right prices, or to adjust the process if costs cannot allow reasonable pricing.

Importance of the Study
Although many researchers have contributed to this topic, little can be seen in terms of standardization of the approaches to determining process costs. This paper is a literature review aimed at determining the major points of divergence that have prevented standardization of process costing methods. Four scholarly articles are analysed to determine gaps, discrepancies, and common ground in relation to process costing. The paper will make suggestions on the best way forward as well as lay the ground for future research.

Literature Review
One of the articles reviewed is a research study done by Cooper and Slagmulder aimed at determining the process costing methods used by different Japanese firms (2002). However, the failure to give all firms an equal chance of being included in the sample introduced some bias into the study (Dillman, Eltinge, Groves & Little, 2002). The interviewers made notes while taping interviewees' responses. The findings indicated that most Japanese firms preferred target costing as opposed to process costing. In this regard, target costing is a projection of the costs of future products with the aim of determining whether the assigned expenses can allow reasonable pricing to generate profits or not. The difference between process costing and target costing is that the latter is applied to future products while the former is applied to complete or semi-complete products (Cooper & Slagmulder, 2002). All interviewed firms indicated that they relied on target costing and only applied process costing when they found discrepancies at the end of the production process.

In another study, Everaert, Germain and Werner sought to investigate the process costing methods used by different companies in Belgium (2002). The researchers based their study on four companies that were selected randomly (2002). After getting approval, they issued questionnaires to process costing officers in the selected firms and collected them after two weeks (Everaert, Germain and Werner, 2002). The results indicated that three of the four sampled companies used the weighted-average method of process analysis. The remaining firm used the first-in, first-out (FIFO) method. The FIFO method treats costs from different periods separately and has four steps: analysis of the physical flow of units, calculation of equivalent units, computation of unit costs, and analysis of total costs (Everaert, Germain and Werner, 2002). The company that used FIFO said the method is reliable because separate calculation
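To make the four FIFO steps listed above more concrete, here is a minimal, hypothetical sketch in Python. The figures, the percentage-of-completion assumptions, and the simplification that all costs behave uniformly are illustrative only and are not taken from the reviewed studies.

```python
# Hypothetical FIFO process-costing sketch (illustrative figures only).

# Step 1: physical flow of units for the period.
beginning_wip_units = 1_000      # 40% complete at the start of the period
started_units       = 9_000
completed_units     = 8_500
ending_wip_units    = 1_500      # 60% complete at the end of the period

beginning_pct_done = 0.40
ending_pct_done    = 0.60

# Step 2: equivalent units of work done THIS period (FIFO excludes
# the work already embedded in beginning work in process).
equiv_units = (
    beginning_wip_units * (1 - beginning_pct_done)   # finish beginning WIP
    + (completed_units - beginning_wip_units)        # units started and completed
    + ending_wip_units * ending_pct_done              # partially done ending WIP
)

# Step 3: cost per equivalent unit uses only the current period's costs.
current_period_costs = 47_600.0
cost_per_equiv_unit = current_period_costs / equiv_units

# Step 4: assign total costs to goods completed and to ending WIP.
beginning_wip_costs = 4_000.0     # carried over from the prior period
cost_to_finish_beginning = beginning_wip_units * (1 - beginning_pct_done) * cost_per_equiv_unit
cost_started_and_completed = (completed_units - beginning_wip_units) * cost_per_equiv_unit
cost_of_completed = beginning_wip_costs + cost_to_finish_beginning + cost_started_and_completed
cost_of_ending_wip = ending_wip_units * ending_pct_done * cost_per_equiv_unit

print(f"Equivalent units (FIFO): {equiv_units:,.0f}")
print(f"Cost per equivalent unit: ${cost_per_equiv_unit:,.2f}")
print(f"Cost of goods completed:  ${cost_of_completed:,.2f}")
print(f"Cost of ending WIP:       ${cost_of_ending_wip:,.2f}")
```

Under the weighted-average method preferred by three of the four Belgian firms, the beginning-WIP costs would instead be pooled with the current period's costs and divided by completed units plus the equivalent units in ending inventory, which is why the two methods can yield different unit costs when costs change between periods.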

Thursday, August 22, 2019

Mathematics & everyone everyday Essay Example for Free

Mathematics is used in the lives of everyone every day. Whether it is used by engineers designing a machine or by a clothes shopper determining how much they will save, all use math and mathematical concepts. Mathematics has also been used for at least seven millennia by many of the early great civilizations, and many of those civilizations became very dependent on math to create their great empires. The importance of math has not diminished since its infant years. Because of its importance, how it is presented to students has become equally important. Only 30 years ago, most students did not have calculators to aid them in learning. Today, high-tech calculators can carry out extremely complex mathematical equations in a fraction of a second. This has created a debate on whether the use of calculators benefits or hurts students; more specifically, the debate is about how much the calculator should be used in the classroom. There is much debate over the research as to whether overusing or underusing calculators truly helps or hurts students. Much of the debate is based on subjective approaches to the research data, as each side tries to claim as much ground as it can. Both sides wish the best for the students and want to see them excel. However, they cannot agree on whether calculators advance students' mathematical education.

History of Math
Mathematics is a concept that has been around since the earliest records of written language. The oldest archeological mathematics discovery was made in Swaziland, Africa, where a 170,000-year-old Lebombo bone with notches chipped into it was found (Williams, 2005). These marks seem to indicate the days in a month, with a notch added for every day that passes. Unfortunately, much of the history of how math developed is left to the interpretation of archeological finds. Some speculate that the designs of monuments and buildings show a geometric understanding, but that is entirely up to interpretation. As civilizations began to emerge, two in particular developed complex mathematical systems: the Babylonians and the Egyptians. Around 1850 BC, the Babylonians developed a base-60 system of mathematics. This system seems to be modeled after how the Babylonians viewed time: they divided the day into 24 hours, with each hour having 60 minutes and each minute having 60 seconds, the same system that is used to describe time today. The base-60 model seems more complicated compared to the modern base-10 system, but the Babylonians only had to learn two characters as opposed to the ten characters of the base-10 system (St Andrews, 2000). This mathematical system helped sustain one of the most powerful civilizations of the ancient world. The Egyptians also developed their own form of mathematics, a base-10 system, around 2700 BC. Part of the Egyptians' desire to learn mathematics had to do with understanding time. The annual flooding of the Nile River was a very important event in Egypt; the Nile was the lifeblood of the entire civilization, and its annual flood ensured that the ground would be fertile enough to grow crops. Because of this important event, it became necessary for the Egyptians to find a way to calculate when the annual flooding would occur. Once their mathematical system was born, it was applied to other areas of their civilization, particularly building. The result can still be seen in the Pyramids.
The Greeks, along with the Chinese and the Indians, took mathematics to a high level. The biggest contribution the Greeks made to math was removing unknown concepts and applying logic to it. Math and logic have definite similarities in that both deal with problems that have absolute answers. The logic-minded Greeks applied these concepts to mathematical principles. However, the Greeks' logic limited their use of irrational numbers. This made their form of algebra somewhat inadequate, and some speculate that it set back mathematical progress several centuries (UL, 2008). Both the Chinese and the Indians were able to calculate approximations of pi. However, much of early Chinese mathematics was destroyed during the book burning before 202 BC, and much of what was written about it is speculation based on works produced after the burning. The significance of Chinese mathematics is how well it thrived while its western counterparts fell into a dark period. The Indians developed the concepts of trigonometry and would later develop an early form of calculus (Dutta, 2002).

With the exception of the Muslim world, mathematics then entered a dark period. Throughout Europe, math was neglected along with many other sciences. However, in the 12th century, many European scholars sought out the scientific texts the Arabs had translated. This created a rebirth of European interest in mathematics. With the Arab texts, the Hindu-Arabic numerals were introduced and eventually became the norm of mathematical script. By the time the Renaissance began, interest in mathematics exploded. Navigation brought an interest in detailed maps, which spawned a need for trigonometry. From this point forward, mathematical advances continued to expand. In the 17th century, Isaac Newton discovered both the laws of physics and modern calculus, and John Napier developed the concept of decimals, which helped overcome the limitations of fractions. Since the 17th century, many more developments in the field of mathematics have been made. Math is now applied to most fields of science; scientists have found that math has proven particularly accurate in the fields of chemistry, astronomy, and physics. Along with reading, math has become the foundation of all learning.

History of Calculators
The earliest form of calculator was the abacus. These simple devices helped with arithmetic calculations. They were often made with a wooden frame with beads strung across it. Each string would represent a different base unit: one string would represent individual units, another 10 units, another 100, and so forth. The Roman and Chinese abacuses were very similar in this respect, so much so that some speculate whether the two were developed together through trade. No evidence has been found to support this other than the similarities (Messina, 2008). These early calculators can still be found today in some places where technology is not thriving, such as rural towns in the Far East. The first mechanical calculator was invented in 1623 by William Schickard. He invented a machine called the Calculating Clock that could do simple adding and subtracting up to 999,999; beyond that, a bell would indicate a numeric overflow error. Although this machine could only add and subtract, John Napier, in 1617, had discovered logarithms, which allowed multiplication and division to be carried out through adding and subtracting (Smart Computing, 2008). In 1822, Charles Babbage came up with the idea for a machine called a difference engine. This mechanical device could store up to seven numbers of 31 digits each.
He later developed another model called the analytical engine. This device was steam-driven and around 100 feet by 30 feet in size. The machine could hold 1,000 numbers of 50 digits each, and all four arithmetic operations plus square roots could be calculated by it. Unfortunately, eight years after Babbage died in 1871, the British Association for the Advancement of Science recommended against the machine, and thus no government funding would be provided to complete it (Stanford, 2008). Other calculator inventions came during the 19th century, but it was in the 20th century that the calculator came into its own. Mechanical calculators became more prevalent among major suppliers, and two world wars helped advance calculators through the use of computers and microelectronics. In 1955, IBM introduced the first transistorized calculator (IBM, 2008). Three years later, Casio introduced the first compact calculator. However, it was Texas Instruments in 1967 that introduced the predecessor of the modern handheld calculator. By the 1970s, pocket calculators began to enter the marketplace. Appearing first in Japan, these calculators could perform simple computations. The only problems were that the calculators were very expensive, used a tape display, and had a limited power supply. These problems were fixed through several innovative solutions. First was the liquid crystal display, or LCD. The LCD allowed the calculator to display results on a screen that could change, rather than using thermal paper, which was both cumbersome and needed constant replacing. Another novel concept was the use of replaceable batteries, which meant the calculator could be truly portable and not limited to an electrical outlet. Over time the power consumption of calculators was reduced, and solar panels became able to power pocket calculators, further freeing users to use them wherever they were needed. Calculators have now become commonplace in homes, businesses, and schools. They have become powerful enough that pocket calculators can calculate complex algebra and calculus equations in a fraction of a second, and computers and the Internet allow for even more levels of complexity.
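Returning to the point about Napier above, here is a small, illustrative sketch (not drawn from the essay's cited sources) of the principle his logarithms exploit: because log(a·b) = log(a) + log(b), a multiplication can be replaced by an addition followed by a table lookup, with exponentiation standing in for the lookup here.

```python
import math

def multiply_via_logs(a: float, b: float) -> float:
    """Multiply two positive numbers using only an addition of logarithms,
    the principle behind Napier's log tables and, later, the slide rule."""
    return math.exp(math.log(a) + math.log(b))

# Example: 37 * 52 computed by adding logs instead of multiplying.
print(multiply_via_logs(37, 52))   # ~1924.0, up to floating-point rounding
```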

Importance of Literature Essay

There are a lot of things that come to mind when a person hears the word literature. Usually, one thinks of a book, a story, a classical work, or some variation of the aforementioned, but asking someone to define the term literature proves to be a much trickier task. Many are stumped when asked this question. People often answer by giving examples of literary works but fail to actually give an encompassing definition of the term. Welcome to the world of Webster (the dictionary, that is). Webster's Dictionary defines the term literature as the body of written works of a language, period, or culture. This definition seems broad, but continuing on in the definition, there is one part that really stands out as a more adequate representation of what literature is about. The definition reads as follows: "...the body of written work produced by scholars or researchers in a given field." This particular definition seems to hit at the core of literature: scholarship and specialization.

Some feel that all written works are forms of literature. I contend that a true work of literature must be written by a person who has studied a given field (specialization) and produced a work that allows the reader to gain some form of a lesson or viewpoint (scholarship), whether agreeing with the writer or taking an opposing stance to the information set forth in the work. This is not to say that these types of works do not have any entertainment value, but they have a type of entertainment that wraps the lesson, argument, or viewpoint in an engaging style rather than force-feeding the reader the author's stance. A true work of literature should evoke some form of discussion of the work's main point. A work of literature needs all of the above, not only to be called a true work of literature but also to cement its place in history, which is the last piece of the puzzle when defining the term. Literature stands the test of time and can be discussed by great-grandchildren as well as great-grandmothers and all those in between.

This all goes to say that the importance of literature is its ability to connect a community known as the human race. Classical works of literature have been translated so that all human beings can share in a great piece, even being translated to audio so that those who cannot read will not be stopped from participating in the act of community that literature provides. It is amazing that even though some might not share the same culture as others, they can start a discussion around a piece of literature. The literature in this instance acts as a unifier rather than just a book or story, thus proving the power of literature and the overall power of the written word.

A true work of literature must have scholarship, specialization, a unifying human element, and an entertaining delivery in order to stand the test of time. The points mentioned above give literature criteria on which to be compared. The importance of literature goes far beyond just having a good read. Literature possesses an intangible quality that can have a great impact on people's lives. Great works of our time have brought nations together, stopped feuds, healed wounds, and have even taught humans how to be human again. Literature is extremely important and necessary to the growth of this world and to the creation of a global society.

Wednesday, August 21, 2019

The Big Band Swing Era

The Big Band Swing Era It is said by historians, that they believe the Big Band Swing era dates back as early as the 1920s in early routes of jazz. It wasnt until the 1930s ranging into the 1950s, when the Big Band era became more known. Although it is called Big Band, its name can often be misleading. A Big Band consists of an orchestra with anywhere from six to as many as twenty-five musicians. Each band varies with the amount of musicians. The Big Band Swing era got its name based off of the smooth jazz beat and dance that is incorporated with it. The terms jazz band, jazz ensemble, stage band, jazz orchestra, society band and dance band can be used to describe a specific type of a big band. (Wikipedia: Big Band ) In 1932, a dance orientated band, Duke Ellington composed and recorded a song called It Dont mean a thing if it aint got that swing (The World Book Encyclopedia: J,K). The name of this song says a lot about how popular and important the Swing Era was too many people. This type of music was especially important during the time of World War II and leading America out of the Great Depression, by lifting morale. Today, The Big Band Swing Era is known for its unique components and style, which still holds a special place in hearts of millions of Americans. There are two famous Big Bands that stand out among several other Big Bands. They are the bands of Glenn Miller and Tommy Dorsey, who also played with his brother Jimmy. Glenn Millers band, also known as orchestra, was distinctive in a way that he combined the sounds of the clarinet with four saxophones. Glenn Miller said, A band ought to have a sound all of its own. It ought to have a personality. (The World Famous Glenn Miller Orchestra) Glenn miller was an excellent trombonist and believed very strongly in the music he performed. During the time of 1926-1938, he played the trombone in several bands including the band of Tommy and Jimmy Dorsey. In 1935, he recorded for the first time under his own name. Some of his well known hits were Sunrise Serenade, Moonlight Serenade and Wishing (Will Make It So). On April 13, 1940, he played at Sunnybrook Ballroom located here in Pottstown, Pennsylvania. It wasnt until the year of 1942, when he entered the U.S Army. While in the Army, he disa ppeared during an air journey leaving his music behind. The band of Tommy Dorsey was also known as the Dorsey brothers. The Dorsey brothers played in bands together as early as in the 1920s. Tommy Dorsey played both the trumpet and trombone. Throughout the Swing era, Tommy Dorsey was ranked among the top two or three big bands. His orchestra had about fifteen top ten hits in the late 1930s. The Dorsey Brothers broke the charts with their recording of Coquette. Also, with their recording with Bing Crosby called Lets Do It (lets fall in love), broke into the top ten. By this time, they had one of the hottest bands in the country. In 1935, Jimmy left Tommy to go on and play music on his own. The Dorsey brothers both passed away in the late 50s. As well as Glenn Miller, the Dorsey Brothers also played at the Sunnybrook Ballroom. Instruments found in big bands were trumpets, saxophones, trombones, drums, pianos, acoustic bass, and guitars. Instruments varied depending on the bands instrumentation of choice. Composers, arrangers and band leaders would switch things up and use more or fewer players in each section. The sections consisted of brass, string, percussion and vocal. 
Music for big bands was written in strophic form, with the same phrase and chord structure repeated several times. In big band music we also see a chorus that follows the twelve-bar blues form or a thirty-two-bar AABA song form. Solos were also a part of most big bands. Swing dancing itself was one of the main components of the swing era. The music was reflected in the dance, as the male dance partners would twirl their partners around. Swing dancing is a lively dance that takes a lot of energy. During this time period, swing dancing released the tensions of the Depression and of a world at war. The dance is intended for all ages. Some types of dances incorporated with the swing era are the foxtrot, the jitterbug, and the Charleston.

Big Band Swing still exists today; in fact, it is becoming popular again, much as it was from the '30s into the '50s. Swing Kat Entertainment, located at the Ballroom on High in Pottstown, offers lessons on dances that thrived during the swing era. On occasion, they have live big bands that perform, mostly bands that replay such music, like that of the Dorsey Brothers and Glenn Miller, along with several other composers and songs. The Sunnybrook Ballroom, also located in Pottstown, has events that bring the community together to make today's generation aware of, and a part of, the swing era. An organization called the Philadelphia Swing Dance Society holds swing dancing events at the Commodore Barry Club in Philadelphia. Sometimes big band music can be heard playing on the radio on WXPN, 88.5. Many people enjoy this type of music and the swing dancing that goes with it. The younger population also participates in events and enjoys getting their swing on. Many people enjoy reliving the swing era and look forward to community gatherings, much like during the swing era itself, when many people would come from all over the nation to attend dances and live performances. On April 15th at the Sunnybrook Ballroom there was a swing dance event that marked the 66th anniversary of the end of World War II. During this event the eighteen-piece Swing Fever Dance Band performed its annual USO Canteen Show, featuring the music of Duke Ellington, Glenn Miller, and the Dorsey Brothers (Staff).

On Saturday, April 16, 2011, I attended a live performance of a big band called the Slicked Up Nines, along with swing dancing, at Swing Kat Entertainment in Pottstown. This band consisted of nine musicians, all men, each playing a different instrument. I thought this was an awesome experience, hearing the instrumentation and witnessing people swing dancing for the first time. I myself was sucked into the rhythm of the music and felt the dance floor come alive. The instruments that the Slicked Up Nines played were the following: drums, saxophone, bass, trumpet, baritone, guitar, finger tambourines, and a wood block. The musicians were very involved with the music as well as the crowd. Two musicians playing the saxophone would swing their instruments from side to side. The two instruments that I feel bring the big band alive are the saxophone and the bass: the saxophone gives the music its jazz roots, and the bass sets the pace and rhythm for swing dancing. I was familiar with all the instruments I saw other than the finger tambourines and the wood block; these instruments were not included in the research I performed. Along with hearing music from the swing era, I was also fortunate to experience a few songs based on West Coast swing.

The only new instrument I saw with this type of swing was the electric guitar, which gave the music a nice touch. Some songs that the Slicked Up Nines performed were slower songs that used less percussion and more bass. One song that stuck out to me the most had catchy lyrics, with the words "chew tobacco" as a chorus. Not all of the songs that were played had vocals in them. This was a friendly environment with all kinds of age groups present. Many people dressed up in clothing that was worn during the swing era, with girls and ladies wearing dresses and men wearing dress shirts and khakis. The dance floor came alive as the band and the swing dancers got fully involved in the music. The dance floor was built so that it would bounce as people danced on it. I found this to be a really cool feature and felt like I was back in the time period of the Big Band Swing era. At times the music would get really fast as the men swirled their partners around. Here, everyone danced with everyone, and it didn't matter who you were or where you came from. At the end of each song, the gentlemen would dip their partners. I thought that this was an interesting feature of swing dancing. Another component that I found interesting was that the songs would end very suddenly, which is much different from most of the songs we find today. The types of dances that I saw were the foxtrot, the jitterbug, and the Charleston. Of these three dances, I found that the foxtrot seemed a little more relaxed than swinging around the dance floor. Overall this was a fun, clean event for everyone, and I especially enjoyed the live band performance.

During intermission, while the band took a break, another event took place, called the birthday circle. The person running the event called anyone with a birthday in April to the center of the dance floor. After this, the people remaining in the room gathered around in a circle. Those in the center would pick a partner and then switch dance partners, so everyone who wanted to dance with those who shared a birthday in April had a chance to. This was really cool and fun to watch. Here I saw for myself that Big Band Swing still exists, to a small extent, because many age groups find interest in it. I would recommend it to anyone looking for a different taste in music. For me this was a fun experience, and I feel that the more people become familiar with the swing era, the more popular it will continue to be today. The swing era is something that will remain in people's hearts for many more years to come.

Tuesday, August 20, 2019

Diseases :: essays papers

Diseases Diseases are any harmful change that interferes with the normal appearance, structure, or function of the body or any of its parts. Since time immemorial, disease has played a role in the history of societies. It has affected and has been affected by economic conditions, wars, and natural disasters. An epidemic of influenza that swept the globe in 1918 killed between 20 million and 40 million people. Within a few months, more than 500,000 Americans died^more than were killed during World War I (1914-1918), World War II (1939-1945), the Korean War (1950-1953), and the Vietnam War (1959-1975) combined. Diseases have diverse causes, which can be classified into two broad groups: communicable and noncommunicable. Communicable diseases can spread from one person to another and are caused by microscopic organisms that invade the body. Noncommunicable diseases are not communicated from person to person and do not have, or are not known to involve, infectious agents. Some diseases, such as the common cold, and come on suddenly and last for no more than a few weeks. Other diseases, such as arthritis, are chronic, consistent for months or years, or reoccur frequently. Every disease has certain characteristic effects on the body. Some of these effects, include fever, inflammation, pain, fatigue, dizziness, nausea, and rashes, are evident to the patient. These symptoms offer important clues that help doctors and other health care professionals make a diagnosis. Many times, the symptoms point to several possible disorders. In those cases, doctors rely on medical tests, such as blood examinations and X rays, to confirm the diagnosis. Communicable diseases are caused by microscopic organisms. Physicians refer to these disease-causing organisms as pathogens. Pathogens that infect humans include a wide variety of bacteria, viruses, fungi, protozoans, and parasitic worms. Also, it has been theorized that some proteins called prions may cause infectious diseases. Bacteria are microscopic single-celled organisms at least 1 micron long. Some bacteria species are harmless to humans, many are beneficial. But some are pathogens, including those that cause cholera, diphtheria, leprosy, plague, pneumonia, strep throat, tetanus, tuberculosis, and typhoid fever. The bacteria that are harmless and live in or on you are called resident bateria. Viruses are tens or hundreds of times smaller than bacteria. They are not cellular, but consist of a core of genetic material surrounded by a protective coat of protein. Viruses are able to survive and reproduce only in the living cells of a host. Once a virus invades a living cell, it directs the cell to make new virus particles. These new viruses are released into the surrounding tissues, and seek out new cells to infect.

Monday, August 19, 2019

What Is Credit Card Fraud? Essay -- Crime

Credit card fraud is highly publicized in this era and you should know what it looks like. This is the most common and simplest form of identity theft. All it would take is having someone else’s name, address, date of birth, and social security number. With all the pertinent information on a person one can get loans, and open new bank accounts. Also, someone could use the person’s existing bank account. Just about anything can be accomplished with someone else’s information. Simple and thought to be unimportant information could turn out to be very valuable information to a person doing credit card fraud or identity theft. A simple definition of credit card fraud is illegally obtaining goods, funds, or services deceptively. The amount of time for this type of fraud to be discovered can vary. Credit card fraud, better known to others as identity theft, can cause great turmoil in a person or family’s life by taking away their sense of security. Credit card fraud does not mean, necessarily, that a person has to have the card; the only thing a person really has to have is the card number, expiration date, and the three-digit card verification number on the back with the card holder’s name. In most cases of credit card fraud the person committing the fraud actually knows the person quite well and the address to where the bill is sent. With all of this being said, when it comes to you or your family’s finances be careful. Someone you think you know and trust can very easily steal your credit card information, or even personal information to use for his or her own personal gain. Credit card fraud is just as simple as ordering things over the Internet or the phone. A person can make a copy of your card and use it with few, or no, ... ...redit card fraud the amount of time it takes for the crime to be discovered can vary. When the card itself is stolen, the theft may be determined quickly, but in cases where a person’s personal information has been stolen, it can take far longer. If only a person’s credit card information is appropriated, then the consumer is likely to notice the illegal charges on their next billing statement, but if the theft extends to an individual’s identity, then a culprit who uses a separate address may be able to fully prevent discovery of the crime for an indeterminate amount of time. (1995-2012 HG.org- HGExperts.com) Works Cited http://theworldeconomy.info/article/avoid-credit-card-fraud/ http://www.combat-identity-theft.com/what-is-credit-card-fraud.html http://people.exeter.ac.uk/whatupman/undergrad/owsylves/page3.html http://www.hg.org/credit-card-fraud.html

Sunday, August 18, 2019

Cerebellar Lesions and The Neurosurgeon :: Medicine Treatments Papers

Cerebellar Lesions and The Neurosurgeon Modern Surgical Approaches The incorporation of computed topography into stereo tactic techniques coincided with a general interest in stereo tactic approaches to intracranial tumors. Several authors including Moser and Backlund in 1982 and Apuzzo in 1984 reported safe CT based stereo tactic tumor biopsies ofpineal region tumors. Most series of stereo tactic tumor biopsies contain a number of pineal region lesions. The reported mortality and morbidity of imaging based stereo tactic biopsy is very low. It is now clear that stereo tactic biopsy is one option in a management of a pineal region tumor. However, the question of sampling is frequency raised. In addition, many pineal region tumors are not cured with radiation and chemotherapy and need to be resected. Open Approaches The evolution of modern microsurgical techniques have resulted in a precipitous reduction in the morbidity andmortality in the open approaches for the excision of pineal tumors. Packer (1984) reported nomortality in the partial resection of 24 pineal region tumors most of which were operated using a transcallosal approach, although an infratentorial approach was use d in some of these. Larger subsequentseries were reported by Lapras and Patet (1987) with 100 patients and no mortality and Sano (1987) with 125 patients. Edwards (1988) and Hoffman (1991) each reported pediatric series of 30 and 33 patients respectively.No mortality was experienced in any of these surgical series. Most employed infratentorial or transtentorial approaches. The open approach has two major advantages over stereo tactic biopsy: it provides more tissue and adequate histological sampling and it allows excision of tumors, which can potentially cure by resection. Stereotactic Biopsy The surgical methods are reported elsewhere and described briefly here.Stereo tactic biopsies were performed utilizing the COMPASS stereo tactic system (COMPASS international, Inc. Rochester, Minnesota). The procedures comprise three steps: data base acquisition, surgical planning and thesurgical procedure. Data base acquisition: Under sedation and local anesthesia a CT/MRI compatible stereo tactic head frame is placed.

Saturday, August 17, 2019

Electronic Literature as an Information System Essay

ABSTRACT Electronic literature is a term that encompasses artistic texts produced for printed media which are consumed in electronic format, as well as text produced for electronic media that could not be printed without losing essential qualities. Some have argued that the essence of electronic literature is the use of multimedia, fragmentation, and/or non-linearity. Others focus on the role of computation and complex processing. â€Å"Cybertext† does not sufficiently describe these systems. In this paper we propose that works of electronic literature, understood as text (with possible inclusion of multimedia elements) designed to be consumed in bi- or multi-directional electronic media, are best understood as 3-tier (or n-tier) information systems. These tiers include data (the textual content), process (computational interactions) and presentation (on-screen rendering of the narrative). The interaction between these layers produces what is known as the work of electronic literature. This paradigm for electronic literature moves beyond the initial approaches which either treated electronic literature as computerized versions of print literature or focused solely on one aspect of the system. In this paper, we build two basic arguments. On the one hand, we propose that the conception of electronic literature as an  information system gets at the essence of electronic media, and we predict that this paradigm will become dominant in this field within the next few years. On the other hand, we propose that building information systems may also lead in a shift of emphasis from one-time artistic novelties to reusable systems. Demonstrating this approach, we read works from the _Electronic Literature Collection Volume 1_ (Jason Nelson and Emily Short) as well as newer works by Mez and the team gathered by Kate Pullinger and Chris Joseph. Glancing toward the future, we discuss the n-tier analysis of the Global Poetic System and the La Flood Project. INTRODUCTION The fundamental attributes of digital narrative have been, so far, mostly faithful to the origin of electronic text: a set of linked episodes that contain hypermedia elements. Whether or not some features could be reproduced in printed media has been subject of debate by opponents and proponents of digital narratives. However, as the electronic media evolves, some features truly unique to digital narrative have appeared. For instance, significant effort has been invested in creating hypertexts responsive to the reader’s actions by making links dynamic; additionally, there have been efforts to create systems capable of producing fiction, with varying degrees of success. Both approaches have in common that they grant greater autonomy to the computer, thus making of it an active part of the literary exchange. The increasing complexity of these systems has directed critical attention to the novelty of the processes that produce the texts. As critics produce a flood of neologisms to classify these works, the field is suffering from a lack of a shared language for these works, as opposed to drawing from the available computer science and well-articulated terminology of information systems. The set {Reader, Computer, Author} forms a system in which there is flow and manipulation of information, i.e. an _information system_. The interaction between the elements of an information system can be isolated in functional tiers. For instance: one or many data tiers, processing tiers, and presentation tiers. 
In general we will talk about n-tier information  systems. We will expand this definition in the next section. In this system, a portion of information produced (output) is taken, totally or partially, as input, i.e. there is a feedback loop and therefore the process can be characterized as a cybernetic process. Of course, the field has already embraced the notion of the cybertext. The term cybertext was brought to the literary world’s attention by Espen Aarseth (1997). His concept focuses on the organization of the text in order to analyze the influence of media as an integral part of literary dynamics. According to Aarseth, cybertext is not a genre in itself. In order to classify traditions, literary genres and aesthetic value, Aarseth argues, we should inspect texts at a much more local level. The concept of cybertext offers a way to expand the reach of literary studies to include phenomena that are perceived today as foreign or marginal. In Aarseth’s work, cybertext denotes the general set of text machines which, operated by readers, yield different texts for reading. Aarseth (1997, p. 19), refuses to narrow this definition of cybertext to â€Å"such vague and unfocused terms such as digital text or electronic literature.† For the course of this paper, we will use the phrase â€Å"electronic literature,† as we are interested in those works that are markedly literary in that they resonate (at least on one level) through evocative linguistic content and engage with an existing literary corpus. While we find â€Å"cybertext† to be a useful concept, the taxonomies and schematics that attend this approach interfere with interdisciplinary discussions of electronic literature. Instead of using Aarseth’s neologisms such as textons, scriptons and traversal functions, we will use widely-accepted terminology in the field of computer science. This shift is important because the concepts introduced by Aarseth, which are relevant to the current discussion, can be perfectly mapped to concepts developed years earlier in computer science. While the neologisms introduced by Aarseth remain arcane, the terms used in computer science are pervasive. Although the term cybertext adds a sense of increasingly complex interactivity, its focus is primarily on the interaction between a user and  a single art object. Such a framework, however, insufficiently describes the constitution of such an object. Within his treatise, Aarseth is compelled to create tables of attributes and taxonomies to map and classify each of these objects. What is needed is a framework for discussing how these systems operate and how that operation contributes to an overall literary experience. We want to make a clear distinction between this notion of cybertext as a reading process and more thorough description of a work’s infrastructure. Clearly, there are many ways in which the interaction between a reader and a piece of electronic literature can happen; for instance, a piece of electronic literature could be written in HTML or in Flash, yet presenting the same interaction with the reader. In this paper, we adapt the notion of n-tier information systems to provide a scaffolding for reading and interpreting works of electronic literature. The fact that the field of electronic literature is largely comprised of cybertexts (in the sense described above) that require some sort of processing by the computer, has made of this processing a defining characteristic. 
Critics and public approach new works of electronic literature with the expectation of finding creativity and innovation not only at the narrative level but also at the processing level; in many cases the newness of the latter has dominated other considerations. NEW, NEWER, NEWEST MEDIA Until now, electronic literature, or elit, has been focused on the new, leading to a constant drive to reinvent the wheel, the word, the image, the delivery system, and consequently reading itself. However, such an emphasis raises a number of questions. To what extent does the â€Å"novel† requirement of electronic literature (as the field is currently defined) de-emphasize a textual investment in exploring the (post)human condition (â€Å"the literary†)? How does this emphasis on the â€Å"new† constrain the development of New Media both for authors and for prospective authors? Or how does such an emphasis put elit authors into an artistic arms race taking on the aethetics of the militiary-industrial complex that produces their tools? Literary essays that treat electronic literature focus on Flash movies, blogs, HTML pages, dynamically generated pages, conversation agents, computer games, and other software applications. A recent edition of Leonardo Almanac (AA.VV. 2006) offers several examples. Its critics/poets analyze the â€Å"information landscapes† of David Small, the text art experiments of Suguru Ishizaki (2003), Brian Kim Stefans’ 11-minute Flash performance, and Philippe Bootz’s matrix poetry program. Though not all the objects are new, what they share most of all is the novelty of their surface or process or text. These works bear little resemblance to one another, a definitive characteristic of electronic literature (dissimilarity); however, their inclusion under one rubric reflects the field’s fetishization of the new. This addiction, mimicking that of the hard sciences it so admires, must constantly replace old forms and old systems with the latest system. Arguably, therefore, any piece of electronic literature may only be as interesting as its form or its novel use of the form. Moreover, such an emphasis shifts the critical attention from the content (what we will call data) to its rendering (or presentation plus processes) primarily. Marie-Laure Ryan (2005) raised charges against such an aesthetic in her _dichtung-digital_ article. In this piece, she rails against a certain style of new media, net.art, elit art object that follows WYSINWYG (What you see is _NOT_ what you get), where the surface presents a text that is considered interesting only because of a more interesting process beneath the surface. This approach, according to Ryan, focuses on â€Å"the meta-property of algorithmic operation.† For this aesthetic, â€Å"the art resides in the productive formula, and in the sophistication of the programming, rather than in the output itself† (Ryan). This means that literary, or artistic value, does not reside in what appears on the screen, but in the virtuoso programming performance that underlies the text. While Ryan goes too far in her dismissal of experimentation, her critique holds, in as much as electronic literary criticism that puts process uber alis risks not only minimizing the textual to insignificance but also losing what should be one of elit’s biggest goals: developing new forms for other authors to use and  explore. Such an emphasis reveals a bias that has thus far dominated new media scholarship. 
This same bias leads new media scholars away from literary venues for their discourse communities and instead to Boing Boing and Siggraph, sites where curiosity or commercial technological development dominate the discussions. It is also what spells instant obsolescence to many authorware forms. The person who uses authorware as it was intended is not the new media artist. It is the person who uses it in a new way or who reconfigures the software to do something unintended. This trend means that electronic literary artists will constantly be compelled to drive their works towards the new, even while it means a perpetual pruning of all prior authorware, cutting them off from the†literary† tree. (We see this same logic in commerical software production where the 4.0 release reconfigures the interface and removes some of the functionality we had grown to love.) A disproportionate emphasis on the new overlooks the tremendous areas of growth in authorship on the stabilizing, if rudimentary, authoring systems. The tide of productivity (in terms of textual output of all levels of quality) is not from an endless stream of innovations but from people who are writing text in established author information formats, from traditional print to blogs. It is through the use of stabilized and reusable information systems that the greater public is being attracted to consume and produce content through digital media. Blogging is the clearest example. This is not equivalent to saying that all blogging is literary, just as not all writing is; however, blogging has created a social practice of reading and writing in digital media, thus increasing the frequency at which literary pieces appear through that venue. This increased community activity would have been impossible if each blogger had to develop their own authoring systems. To help redistribute the scholarly priorities, we propose a reconsideration of electronic literature as an n-tier information system. The consequence of this shift will be twofold: First of all, it will allow us to treat content and processing independently, thus creating a clear distinction between works of literary merit and works of technological craftsmanship. While this  distinction is at best problematic, considering the information system as a whole will move the analysis away from over-priveleging processes. Secondly, we claim that this approach provides a unified framework with which all pieces of electronic literature can be studied. This paper is organized as follows: in Section 1 (Introduction) we describe what is the problem we intend to explore, and what are the type of systems that will be described in this paper. Section 2 (Information Systems) explores the components of an information system and compares the approaches of different researchers in the field. Section 3 (Examples) demonstrates that the n-tier information system approach can be used to describe a multifarious array of pieces of electronic literature. Section 4 (Discussion) explores the conclusions drawn from this study and set future directions. INFORMATION SYSTEMS Since electronic literature is mediated by a computer, it is clear that there must exist methods to enter information into the system, to process it, and to render an output for readers; that is to say, a piece of electronic literature can be considered as an _information system_. The term â€Å"information system† has different meanings. 
For instance, in mathematics an â€Å"information system† is a basic knowledge-representation matrix comprised of attributes (columns) and objects (rows). In sociology, â€Å"information systems† are systems whose behavior is determined by goals of individual as well as technology. In our context, â€Å"information system† will refer to a set of persons and machines organized to collect, store, transform, and represent data, a definition which coincides with the one widely accepted in computer science. The domain-specific twist comes when we specify that the data contains, but is not limited to, literary information. Information systems, due to their complexity, are usually built in layers. The earliest antecedent to a multi-layer approach to software architectures goes back to Trygve Reenskaug who proposed in 1979, while visiting the Smalltalk group at Xerox PARC, a pattern known as Model-View-Controller  (MVC) that intended to isolate the process layer from the presentation layer. This paradigm evolved during the next decade to give rise to multi-tier architectures, in which presentation, data and processes were isolated. In principle, it is possible to have multiple data tiers, multiple process tiers, and multiple presentation tiers. One of the most prominent paradigms to approach information systems in the field of computer science, and the one we deem more appropriate for electronic literature, is the 3-tier architecture (Eckerson, 1995). This paradigm indicates that processes of different categories should be encapsulated in three different layers: 1. Presentation Layer: The physical rendering of the narrative piece, for example, a sequence of physical pages or the on-screen presentation of the text. 2. Process Layer: The rules necessary to read a text. A reader of Latin alphabet in printed narrative, for example, must cross the text from left to right, from top to bottom and pass the page after the last word of the last line. In digital narrative, this layer could contain the rules programmed in a computer to build a text output. 3. Data Layer: Here lays the text itself. It is the set of words, images, video, etc., which form the narrative space. In the proposed 3-tier model, feedback is not only possible, but also a _sine qua non_ condition for the literary exchange. It is the continuation of McLluhan’s mantra: â€Å"the media is the message†. In digital narrative, the media acts on the message. The cycle of feedback in digital narrative is: (i) Readers receive a piece of information, and based on it they execute a new interaction with the system. (ii) The computer then takes that input and applies logic rules that have been programmed into it by the author. (iii) The computer takes content from the data layer and renders it to the reader in the presentation layer. (iv) step -i – is repeated again. Steps i through v describe a complete cycle of feedback, thus the maximum realization of a cybertext. N-tier information systems have had, surprisingly, a relatively short penetration in the field of electronic literature. Aarseth (1997, p.62) introduced a typology for his textonomy that maps perfectly a 3-tier system: Scriptons (â€Å"strings as they appear to readers†) correspond to the presentation layer, textons (â€Å"strings as they exist in the text†) correspond to the data layer, and traversal function (â€Å"the mechanism by which scriptons are revealed or generated from textons and presented to the user†) corresponds to the process layer. 
These neologisms, while necessary if we study all forms of textuality, are unnecessary if we focus on electronic literature. The methods developed in computer science permeate constantly, and at an accelerating rate, the field of electronic literature, specially as artists create pieces of increasing complexity. Practitioners in the field of electronic literature will be better equipped to benefit from the advances in information technology if the knowledge acquired in both fields can be bridged; without a common terminology attempts to generate dialog are thwarted. The first reference that used computer science terminology applied to electronic literature appeared in an article by Gutierrez (2002), in which the three layers (data, logic and presentation) were clearly defined and proposed as a paradigm for electronic literature. Gutierrez (2004, 2006) explored in detail the logic (middle) layer, proposing algorithms to manage the processes needed to deliver literary content through electronic media. His proposal follows the paradigm proposed by Eckerson (1995) and Jacobson et al (1999): the system is divided into (a) topological stationary components, (b) users, (c) and transient components (processes). The processes in the system are analyzed and represented using sequence diagrams to depict how the actions of the users cause movement and transformation of information across different topological components. The next reference belongs to Wardrip-Fruin (2006); he proposes not three, but seven components: (i) author, (ii) data, (iii) process, (iv) surface, (v) interaction, (vi) outside processes, and (vii) audiences. This vision corresponds to an extensive research in diverse fields, and the interpretation is given from a literary perspective. Even though  Wardrip-Fruin does not use the terminology already established in computer science, nor he makes a clear distinction between topology, actors and processes, his proposal is essentially equivalent, and independent, from Gutierrez’s model. In Wardrip-Fruin’s model, author -i- and audience -vii- correspond to actors in the Unified Process (UP); process -iii- and interaction -v- correspond to the process layer in the 3-tier architecture (how the actors move information across layers and how it is modified); data -ii- maps directly the data layer in the 3-tier model; finally, surface -iv- corresponds to the presentation layer. The emergence of these information systems approaches marks the awareness that these new literary forms arise from the world of software and hence benefit from traditional computer science approaches to software. In the Language of New Media, Lev Manovich called for such analysis under the rubric of Software Studies. Applying the schematics of computer science to electronic literature allows critics to consider the complexities of that literature without falling prey to the tendency to colonize electronic literature with literary theory, as Espen Aarseth warned in Cybertext. Such a framework provides a terminology rather than the imposition of yet another taxonomy or set of metaphors that will always prove to be both helpful and glaringly insufficient. That is not to say that n-tier approaches fit works without conflict. In fact, some of the most fruitful readings come from the pieces that complicate the n-tier distinctions. 
EXAMPLES DREAMAPHAGE 1 & 2: REVISING OUR SYSTEMS Jason Nelson’s Dreamaphage (2003, 2004) demonstrates the ways in which the n-tier model can open up the complexities and ironies of works of electronic literature. Nelson is an auteur of interfaces, and in the first version of this piece he transforms the two-dimensional screen into a three-dimensional navigable space full of various planes. The interactor travels through these planes, encountering texts on them, documentation of the disease. It is as if we are traveling through the data structure of the story itself, as if  the data has been brought to the surface. Though in strict terms, the data is where it always was supposed to be. Each plane is an object, rendered in Flash on the fly by the processing of the navigation input and the production of vector graphics to fill the screen. However, Nelsons’ work distances us, alienates us from the visual metaphors that we have taken for the physical structures of data in the computer. Designers of operating systems work hard to naturalize our relationship to our information. Opening windows, shuffling folders, becomes not a visual manifestation but the transparent glimpse of the structures themselves. Neal Stephenson has written very persuasively on the effect of replacing the command line interface with these illusions. The story (or data) behind the piece is the tale of a virus epidemic, whose primary symptom is the constant repetition of a dream. Nelson writes of the virus’ â€Å"drifting eyes.† Ultimately the disease proves fatal, as patients go insane then comatose. Here the piece is evocative of the repetitive lexias of classical electronic literature, information systems that lead the reader into the same texts as a natural component of traversing the narrative. Of course, the disease also describes the interface of the planes that the user travels through, one after the other, semi-transparent planes, dreamlike visions. This version of Dreamaphage was not the only one Nelson published. In 2004, Nelson published a second interface. Nelson writes of the piece, â€Å"Unfortunately the first version of Dreamaphage suffered from usability problems. The main interface was unwieldy (but pretty) and the books hard to find (plus the occasional computer crash)† (â€Å"Dreamaphage, _ELC I_) He reconceived of the piece in two dimensions to create a more stable interface. The second version is two-dimensional and Nelson has also â€Å"added a few more extra bits and readjusted the medical reports.† In the terms of n-tier, his changes primarily affected the interface and the data layers. Here is the artist of the interface facing the uncanny return of their own artistic creation in a world where information systems do not lie in the stable binding in a book but in a contingent state that is always dependent  on the environments (operating systems) and frames (browser) in which they circulate. As the user tries to find a grounding in the spaces and lost moments of the disease, Nelson himself attempts to build stability into that which is always shifting. However, do to a particular difference in the way that Firefox 2.0 renders Flash at the processing layer, interactors will discover that the†opening† page of the second version is squeezed into a fraction of their window, rather than expanding to fill the entire window. At this point, we are reminded of the work’s epigram, â€Å"All other methods are errors. The words of these books, their dreams, contain the cure. But where is the pattern? 
In sleeping the same dream came again. How long before I become another lost?† (â€Å"opening†). As we compare these two versions of the same information system, we see the same dream coming again. The first version haunts the second as we ask when will it, too, become one of the lost. Though Nelson himself seems to have an insatiable appetite for novel interfaces, his own artistic practices resonate well with the ethos of this article. At speaking engagements, he has made it a practice to bring his interfaces, his .fla (Flash source) files, for the attendees to take and use as they please. Nelson presents his information systems with a humble declaration that the audience may no doubt be able to find even more powerful uses for these interfaces. GALATEA: NOVELTY RETURNS Emily Short’s ground-breaking work of interactive fiction offers another work that, like its namesake in the piece, opens up to this discussion when approached carefully. Galatea’s presentation layer appears to be straight forward IF fare. The interactor is a critic, encountering Galatea, which appears to be a statue of a woman but then begins to move and talk. In this novel work of interactive fiction, the interactor will not find the traditional spacial navigation verbs (go, open, throw) to be productive, as the action focuses on one room. Likewise will other verbs prove themselves unhelpful as the user is encouraged in the help instructions to â€Å"talk† or  Ã¢â‚¬Å"ask† about topics. In Short’s piece, the navigational system of IF, as it was originally instantiated in Adventure, begins to mimic a conversational system driven by keywords, ala Joseph Weizenbaum’s ELIZA. Spelunking through a cave is replaced with conversing through an array of conversational replies. Galatea does not always answer the same way. She has moods, or rather, your relationship with Galatea has levels of emotion. The logic layer proves to be more complex than the few verbs portend. The hunt is to figure out the combination that leads to more data. Galatea uses a novel process to put the user in the position of a safe cracker, trying to unlock the treasure of answers. Notice how novelty has re-emerged as a key attribute here. Could there be a second Galatea? Could someone write another story using Galatea’s procesess. Technically no, since the work was released in a No-Derivs Creative Commons license. However, in many ways, Galatea is a second, coming in the experimental wave of artistic revisions of interactive fiction that followed the demise of the commercially produced text adventures from Infocom and others. Written in Z-Machine format, Galatea is already reimagining an information system. It is a new work written in the context of Infocom’s interactive fiction system. Short’s work is admittedly novel in its processes, but the literary value of this work is not defined by its novely. The data, the replies, the context they describe, the relationship they create are rich and full of literary allusions. Short has gone on to help others make their own Galatea, not only in her work to help develop the natural language IF authoring system Inform 7 but also in the conversation libraries she has authored. In doing so, she moved into the work of other developers of authoring systems, such as the makers of chatbot systems. Richard S. Wallace developed one of the most popular of these (A.I.M.L..bot), and his work demonstrates the power of creating and sharing authorware, even in the context of the tyranny of the novel. A.L.I.C.E. 
is the base-line conversational system, which can be downloaded and customized. Downloading the basic, functioning A.L.I.C.E. chatbot as a foundation allows users to concentrate on editing recognizeable inputs and systematic responses. Rather than worrying about how the system will respond to input, authors, or botmasters, can focus on creating what they system will say. To gain respect as a botmaster/author, one cannot merely modify an out-of-the-box ALICE. The user should further customize or build from the ground up using AIML, artificial intelligence markup language, the site-specific language created for Wallace’s system. They must change the way the system operates–largely, because the critical attention around chatbots follows more the model of scientific innovation more than literary depth. However, according to Wallace, despite the critics’ emphasis on innovations, the users have been flocking to ALICE, as tens of thousands of users have created chatbots using the system (Be Your Own Botmaster). AIML becomes an important test case because while users may access some elements of the system, because they are not changing fundamentals, they can only make limited forays into the scientific/innovation chatbot discussions. Thus while our n-tier model stresses the importance of creating authorware and understanding information systems, novelty still holds an important role in the development of electronic literature. Nonetheless, interactors can at least use their pre-existing literacies when they encounter an AIML bot or a work of interactive fiction written on a familiar platform. LITERATRONICA Literatronic is yet another example of an n-tier system. Its design was based entirely in the concept of division between presentation, process and data layers. Every interaction of the readers is stored in a centralized database, and influences the subsequent response of the system to each reader’s interactions. The presentation layer employs web pages on which the reader can access multiple books by multiple authors in multiple languages.  The process layer is rather complex, since it uses a specialized artificial intelligence engine to adapt the book to each reader, based upon his/her interaction, i.e. and adaptive system. The data layer is a relational database that stores not only the narrative, but also reader’s interaction. Since there is a clear distinction between presentation, data and process, Literatronica is a 3-tier system that allows authors of multiple language to focus on the business of literary creation. MEZ’S CODE: THE SYSTEMS THAT DO NOT USE A COMPUTER[1] As with many systematic critical approaches, the place where n-tier is most fruitful is the where it produces or reveals contradictions. While some works of electronic literature lend themselves to clear divisions between parts of the information system, many works in electronic literature complicate that very distinction as articulated in such essays as Rita Raley’s code.surface||code.depth, in which she traces out codeworks that challenge distinctions between presentation and processing layers. In the works of Mez (Maryanne Breeze), she creates works written in what N. Katherine Hayles has called a creole of computer and human languages. Mez, and other codework authors, display the data layer on the presentation layer. One critical response is to point out that as an information system, the presentation layer are the lines of code and the rest of the system is whatever medium is displaying her poem. 
However, such an approach missed the very complexity of Mez’s work. Indeed, Mez’s work is often traditional static text that puts users in the role of the processor. The n-tier model illuminates her sleight of hand. trEm[d]o[lls]r_ [by Mez] doll_tre[ru]mor[s] = var=’msg’ val=’YourPleading’/> † TREMOR Consider her short codework â€Å"trEm[d]o[lls]r_† published on her site and on the Critical Code Studies blog. It is a program that seems to describe (or self-define) the birth pangs of a new world. The work, written in what appears to be XML, cannot function by itself. It appears to assign a value to a variable named â€Å"doll_tre[ru]mor[s]†, a Mez-ian (Mezozoic?) portmenteau of doll_tremors and rumors. This particular rumor beign defined is called, the fifth world, which calls up images of the Native American belief in a the perfected world coming to replace our current fourth world. This belief appears most readily in the Hopi tribe of North America. A child of this fifth world are â€Å"fractures,† or put another way, the tremor of the coming world brings with it fractures. The first, post 2 inscription, contains polymers: a user set to â€Å"YourDollUserName,† a â€Å"3rdperson† set to â€Å"Your3rdPerson,† a location set to â€Å"YourSoddenSelf†, and a â€Å"spikey† set to â€Å"YourSpiKeySelf.† The user then becomes a molecule name within the fracture, a component of the fracture. These references to dolls and 3rd person seem to evoke the world of avatars. In virtual worlds, users have dolls. If the first fracture is located in the avatar of the person, in their avatar, the second centers on communication from this person or user. Here the user is defined with â€Å"YourPolyannaUserName,† and we are in the world of overreaching optimism, in the face of a â€Å"msg† or message of â€Å"YourPleading† and a â€Å"lastword.† Combining these two fractures we have a sodden and spikey self pleading and uttering a last word presumably before the coming rupture into the fifth world. As with many codeworks, the presentation layer appears to be the data and logic layer. However, there is clearly another logic layer that makes these words appear on whatever inerface the reader is using. Thus, the presentation layer is a deception, a challenge to the very division of layers, a revelation that hides. At the same time, we are compelled to execute the presneted code by tracing out its logic. We must take the place of the compiler with the understanding that the coding structures are also  meant to launch or allusive subroutines, that part of our brain that is constantly listening for echoes and whispers To produce that reading, we have had to execute that poem, at least step through it, acting as the processor. In the process of writing poetic works as data, she has swapped our traditional position vis-a-vis n-tier systems. Where traditional poetry establishes idenitity through I’s, Mez has us identify with a system ready to process the user who is not ready for the fifth world, whatever that may bring. At the same time, universal or even mythical realities have been systematized or simulated. There is another layer of data that is missing, supplied by the user presumably. The poem leaves its tremors in a state of potential, waiting to operate in the context of a larger system and waiting for a user to supply the names, pleading, and lastwords. The codework means nothing to the computer. 
This is not to make some sort of Searlean intervention about the inability of computers to comprehend but to point out that Mez’s code is not valid XML. Of course, Mez is not writing for computer validation but instead relies on the less systematic processing of humans who rely on a far less rigorously specified language structure. Tremors fracture even the process of assigning some signified to these doll_tre[ru]mor[s]. Mez’s poem plays upon the layers of n-tier, exposing them and inverting them. Through the close-reading tools of Critical Code Studies, we can get to her inference and innuendo. However, we should not miss the central irony of the work, the data that is hidden, the notable lack of processing performed by this piece. Mez has hailed us into the system, and our compliance, begins the tremors that brings about this fifth world even as it lies in potential. N-tier is not the fifth world of interpretation. However, it is a tremor of recognition that literacy in information systems offers a critical awareness crucial in these emerging forms of literature. FUTURE PROJECTS Two new projects give the sense of the electronic literature to come. The authors of this paper have been collaborating to create systems that answer Hayles’ call at â€Å"The Future of Electronic Literature† in Maryland to create works that move beyond the desktop. The â€Å"Global Poetic System† and â€Å"The LA Flood Project† combine GPS, literary texts, and civic spaces to create art objects that rely on a complex relationship between various pieces of software and hardware, from mobile phones to PBX telephony to satellite technology. To fully discuss such works with the same approaches we apply to video games or Flash-based literary works is to miss this intricate interaction. However, n-tier provides a scalable framework for discussing the complex networking of systems to produce an artistic experience through software and hardware. These projects explore four types of interfaces (mobile phones, PDAs, desktop clients, and web applications) and three ways of reading (literary adaptative texts, literary classic texts, texts constructed from the interaction of the community). The central piece that glues together literary information is geolocation. When the interactor in the world is one of the input systems, critics need a framework that can handle complexity. Because of the heterogeneity of platforms in which these systems run, there are multiple presentation layers (e.g. phone, laptop, etc.), multiple parallel processing layers, and multiple sources of information (e.g. weather, traffic, literary content, user routes, etc.), thus requiring a n-tier approach for analysis and implementation. It is clear that as electronic literature becomes more complex, knowledge of the n-tier dilineations will be crucial not only to the reception but also the production of such works. Since the interaction of heterogenous systems is the state of our world, an n-tier approach will up critics to open up these works in ways that help identify patterns and systems in our lives. DISCUSSION Let us bring down the great walls of neologisms. Let us pause for reflection  in the race for newer new media. Let us collaborate on the n-tiers of information systems to create robust writing forms and the possibility of a extending the audiences that are literate in these systems. 
In this paper, we have described an analytical framework that is useful to divide works of electronic literature into their forming elements, in such a way that is coherent with advances in computer science and information technology, and at the same time using a language that could be easily adopted by the electronic literature community. This framework places creators, technicians, and critics on common ground. This field does not have a unified method to analyze creative works; this void is a result, perhaps, in the conviction that works of electronic literature require an element of newness and a reinvention of paradigms with every new piece. Critics are always looking for innovation. However, the unrestrained celebration of the new or novel has lead New Media to the aesthetic equivalent of an arms race. In this article we found common elements to all these pieces, bridging the gap between computer science and electronic literature with the hopes of encouraging the production of sustainable new forms, be they â€Å"stand alone† or composed of a conglomeration of media forms, software, and users. As works of electronic literature continue to become more complex, bringing together more heterogeneous digital forms, the n-tier model will prove scalable and nuanced to help describe each layer of the work without forcing it into a pre-set positions for the sake of theory. We have to ask at this point: how does this framework handle exceptions and increasing complexity? It is interesting to consider how the proposed n-tier model might be adapted to cope with dynamic data, which seems to be the most complex case. Current literary works tend to process a fixed set of data, generated by the author; it is the mode of traversing what changes. Several software solutions may be used to solve the issue of how this traversal is left in the hands of the user or mediated yet in some way by the author through the presentation system. The n-tier model provides a way of identifying three basic ingredients: the data to be traversed, the logic for deciding how to  traverse them, and the presentation which conveys to the user the selected portions at the selected moments. In this way, such systems give the impression that the reader is shaping the literary work by his/her actions. Yet this, in the simple configuration, is just an illusion. In following the labyrinth set out by the author, readers may feel that their journey through it is always being built anew. But the labyrinth itself is already fixed. Consider what would happen when these systems leave computer screens and move into the world of mobile devices and ubiquitous art as Hayles predicted they would at the 2007 ELO conference. How could the system cope with changing data, with a labyrinth that rebuilds itself differently each time based on the path of the user? In this endeavor, we would be shifting an increasing responsibility into the machine which is running the work. The data need not be modified by the system itself. A simple initial approach might be to allow a subset of the data to be drawn from the real environment outside the literary work. This would introduce a measure of uncertainty into the set of possible situations that the user and the system will be faced with. And it would force the author to consider a much wider range of alternative situations and/or means of solving them. 
Interesting initiatives along these lines might be found in the various systems that combine literary material with real-world information by using, for example, mobile hand-held devices, provided with means of geolocation and networking. With respect to the n-tier model, the changes introduced in the data layer would force additional changes in the other layers. The process layer would grow in complexity to acquire the ability to react to the different possible changes in the data layer. It could be possible for the process layer to absorb all the required changes, while retaining a version of the presentation layer similar to the one used when dealing with static data. However, this may put a heavy load on the process layer, which may result in a slightly clumsy presentation. The clumsiness would be perceived by the reader as a slight imbalance between the dynamic content being presented and the static means used for presenting it. The breaking point would be reached when readers become aware that the material they are receiving is being presented inadequately, and it is apparent that there might have been better  ways of presenting it. In these cases, a more complex presentation layer is also required. In all cases, to enable the computer to deal with the new type of situations would require the programmer to encode some means of appreciating the material that is being handled, and some means of automatically converting it into a adequate format for communicating it to the user. In these task, current research into knowledge representation, natural language understanding, and natural language generation may provide very interesting tools. But, again, these tools would exist in processing layers, and would be dependent on data layers, so the n-tier model would still apply. The n-tier information system approach remains valid even in the most marginal cases. It promises to provide a unified framework of analysis for the field of electronic literature. Looking at electronic literature as an information system may signal another shift in disciplinary emphasis, one from a kind of high-theory humanities criticism towards something more like Human Computer Interface scholarship, which is, by its nature, highly pragmatic. Perhaps a better way would be to try bring these two approaches closer together and to encourage dialogue between usability scientists and the agents of interpretation and meaning. Until this shift happens, the future of â€Å"new† media may be a developmental 404 error page. REFERENCES AA.VV. â€Å"New Media Poetry and Poetics Special† _Leonardo Almanac_, 14:5, September 2006. URL:  «http://www.leoalmanac.org/journal/vol_14/lea_v14_n05-06/index.asp » First accessed on 12/2006. AARSETH , Espen J. _Cybertext: Perspectives on Ergodic Literature_. Johns Hopkins University Press, Baltimore, MD, 1997. CALVI, Licia.†Ã¢â‚¬ËœLector in rebus’: The role of the reader and the characteristics of hyperreading†. In _Proceedings of the Tenth ACM Conference on Hypertext and Hypermedia_, pp 101-109. ACM Press, 1999. COOVER, Robert.†Literary Hypertext: The Passing of the Golden Age of Hypertext.† _Feed Magazine_.  «http://www.feedmag.com/document/do291lofi.html » First accessed 4 August 2006. ECKERSON, Wayne W.†Three Tier Client/Server Architecture: Achieving Scalability, Performance, and Efficiency in Client Server Applications.† _Open Information Systems_ 10, 1. January 1995: 3(20). GENETTE, Gerard. _Paratexts: Thresholds of Interpretations_. 
Cambridge University Press, New York, NY, 1997. GUTIERREZ, Juan B. â€Å"Literatrà ³nica – sobre cà ³mo y porquà © crear ficcià ³n para medios digitales.† In _Proceedings of the 1er Congreso ONLINE del Observatorio para la CiberSociedad_, Barcelona,  «http://cibersociedad.rediris.es/congreso/comms/g04gutierrez.htm » First accessed on 01/2003. GUTIERREZ, Juan B. â€Å"Literatrà ³nica: Hipertexto Literario Adaptativo.† in _Proceedings of the 2o Congreso del Observatorio para la Cibersociedad_. Barcelona, Spain. URL:  «http://www.cibersociedad.net/congres2004/index_f.html » First accessed on 11/2004. GUTIERREZ, Juan B. â€Å"Literatronic: Use of Hamiltonian cycles to produce adaptivity in literary hypertext†. In _Proceedings of The Bridges Conference: Mathematical Connections in Art, Music, and Science_, pages 215-222. Institute of Education, University of London, August 2006. HAYLES, N. Katherine. â€Å"Deeper into the Machine: The Future of Electronic Literature.† _Culture Machine_. Vol 5. 2003.  «http://svr91.edns1.com/~culturem/index.php/cm/article/viewArticle/245/241 » First accessed 09/2004. — â€Å"Storytelling in the Digital Age: Narrative and Data.† Digital Narratives conference. UCLA. 7 April 2005. HILLNER, Matthias.†Ã¢â‚¬ËœVirtual Typography’: Time Perception in Relation to Digital Communication.† New Media Poetry and Poetics Special Issue, _Leonardo Electronic Almanac_ Vol 14, No. 5 – 6 (2006).  «http://leoalmanac.org/journal/vol_14/lea_v14_n05-06/mengberg.asp » First accessed 25 Sep. 2006 JACOBSON I, BOOCH G, RUMBAUGH J. _The unified software development process_. Addison-Wesley Longman Publishing Co., Inc. Boston, MA, USA, 1999. LANDOW George P. _Hypertext 2.0_. Johns Hopkins University Press, Baltimore, MD, 1997. MANOVICH, Lev. _The Language of New Media_. MIT, Cambridge, MA, 2002. MARINO, Mark. â€Å"Critical Code Studies.† _Electronic Book Review_, December 2006.  «http://www.electronicbookreview.com/thread/electropoetics/codology » First Accessed 12/2006. MEZ.†trEm[d]o[lls]r_† _Critical Code Studies_. April 2008.  «http://criticalcodestudies.com/wordpress/2008/04/28/_tremdollsr_/ » First accessed 04/2008. MONTFORT, Nick.†Cybertext â€Å". _Electronic Book Review_, January 2001. URL:  «http://www.altx.com/EBR/ebr11/11mon » First accessed on 06/2006. NEA. _Reading At Risk: A Survey of Literary Reading in America_. National Endowment for the Arts, 1100 Pennsylvania Avenue, NW. Washington, DC 20506-0001, 2004. PAJARES TOSCA, Susana and Jill Walker.†Selected Bibliography of Hypertext Critcism.† _JoDI_.  «http://jodi.tamu.edu/Articles/v03/i03/bibliography.html » First accessed October 24, 2006. Raley, Rita. â€Å"Code.surface||Code.depth.† _Dichtung Digital_. 2006.  «http://www.dichtung-digital.org/2006/1-Raley.htm » First accessed 08/2006. RODRà GUEZ, Jaime Alejandro. â€Å"Teorà ­a, Prà ¡ctica y Enseà ±anza del Hipertexto de Ficcià ³n: El Relato Digital.† Pontificia Universidad Javeriana, Bogotà ¡, Colombia, 2003.  «http://www.javeriana.edu.co/relatodigital » First accessed on 09/2003. RYAN, Marie-Laure. â€Å"Narrative and the Split Condition of Digital Textuality.† 1. 2005. URL:  «http://www.brown.edu/Research/dichtung-digital/2005/1/Ryan/ » First accessed 4 October 2006 VERSHBOW, Ben.†Flight Paths a Networked Novel.† _IF: Future of the Book_. December 2007  «http://www.futureofthebook.org/blog/archives/2007/12/flight_paths_a_networked_novel.html » First Accessed 01/2008. WALLACE, Richard S. â€Å"Be Your Own Botmaster.† Alice AI Foundation Inc. 2nd ed. 2004. 
WARDRIP-FRUIN, Noah. _Expressive Processing: On Process-Intensive Literature and Digital Media_. Brown University. Providence, Rhode Island. May 2006. WARDRIP-FRUIN,Noah. Christopher Strachey: the first digital artist? _Grand Text Auto_. 1 August 2005.  «http://grandtextauto.gatech.edu/2005/08/01/christopher-strachey-first-digital-artist/ » First accessed 3 September 2006. ZWASS, Vladimir. _Foundations of Information Systems_. Mcgraw-Hill College, NY 1997.