Friday, November 29, 2019

The Good and Bad Side of Telecommuting Essay Example

The Good and Bad Side of Telecommuting Essay Introduction Organizations are increasingly using telecommuting as a way to increase productivity and decrease costs. Employees also see positive results from telecommuting. Research shows, however, that there are negative sides as well. Governmental intervention beginning in the early 1990s almost put an end to telecommuting, but after debate, telecommuting has proven stronger than expected. Telecommuting: The Good, The Bad, and The Government Parents today face increased burdens as the cost of living continues to rise. Many single-parent homes are troubled with the problem of caring for their children and working at the same time. Many rely on babysitters and family members to help, but others look to the government for assistance. In any case, meeting the bills is hard enough without the cost of a babysitter. However, today there is another choice: telecommuting has become a new way for business organizations to employ people to work out of their homes, one better suited to today's fast-paced society than earlier modes of communication. There are issues to be overcome with telecommuting as well, but those issues are usually not as costly to those involved. ITAC (International Telework Association Council) defines telecommuting as a work arrangement in which employees work at any time or place that allows them to accomplish their work in an effective and efficient manner (On-Line). Most reports on telecommuting suggest that this alternative has been positively received by both employees and managers (McNerney, 1995). However, by definition, telecommuting holds positive and negative factors for both the employer and employee. The organization and the employee must review these factors to determine if this organizational workforce design is right for them.
The Good and Bad Side of Telecommuting Essay Body Paragraphs According to McQuarrie, for the employee, positive factors include: reduced commuting time, reduced personal costs (travel, clothing, food), flexible working hours, greater autonomy, and ease of caring for dependents (p. 82). The reduction of commuting time allows for positions in companies at such a distance that a position would not be possible without relocation. A lack of commuting is also favorable when the area surrounding the organization is susceptible to a high number of traffic problems such as congestion and multiple accidents. In areas like Los Angeles that have problems with exhaust, telecommuting offers cleaner air. According to the United States Department of Transportation and the United States General Services Administration (2000), "Investments in telecommunications infrastructure that facilitate telecommuting should not only lead to transportation benefits, but may also have a synergistic effect on other transportation strategies required to cope with growing traffic congestion, urban air pollution, and national petroleum dependence" (On-Line). The reduction of personal costs is favorable to employees, who see the reduction as money for other necessities. Flexible working hours offer a way to work around complicated schedules that otherwise would not be possible to work with. The freedom of telecommuting opens the employee up to new options that can be more beneficial such as mid-day exercise programs, choice of what task to perform first, community projects, volunteerism, and other civic activities. There is also an ease of caring for dependents that is not available through the conventional workplace. These dependents can range from children to elderly parents, but also, the employee may be disabled or terminally ill. In this case, telecommuting opens doors that otherwise would remain shut. The negative factors for employees include workaholism and isolation (McQuarrie p. 82).
People have a need to interact frequently with others in a stable environment. Failure to maintain interactions will lead to a number of negative consequences such as anxiety, depression, and even physical ailments (Gainey, Kelley & Hill p. 4). The organization experiences positive factors in the forms of higher productivity, reduced physical plant costs, a selling point for new employees, and the ability to accommodate disabled or chronically ill employees (McQuarrie p. 82). The company saves the cost of office space and equipment by having an employee work at home rather than at a central office site. According to Fiona McQuarrie (1994), there is rarely any mention in the telecommuting literature of the possibility of the employer compensating the employee for home-based work by paying a portion of rent, mortgage, or utility costs (p. 82). Lowered company costs enable a larger workforce that enjoys the benefits of autonomy. This in turn increases productivity both for the employer, through a larger workforce, and for the employee, due to increased "want to". Another attracting factor for the increased workforce comes from the selling point for new employees. The level of autonomy and other positive employee factors entice new employees. The company can also reduce costs by letting the employee supply their own special needs such as wheelchair ramps, handicapped toilets, and so forth. The employee will already possess these necessities, but the company may or may not have them installed. Negative employer factors include loss of direct control and lack of a coordinated workweek. The lack of direct control is experienced through the lack of face-to-face training communication, low social contact, and lack of trust between management and employees. Only two of the various mediums of communication can be transferred electronically. It is currently technologically impossible to remotely express one's self through body language, eye contact, and subtle meanings.
Many telecommuters have expressed a desire to return to their old arrangement of closer interactions with other employees. The trust level between management and telecommuters is low due to the two factions not necessarily knowing the other's thoughts, views, and opinions. The lack of a coordinated workweek affects multiple employees because one employee's work may depend on the completion of work by another employee. Steps have been taken by many organizations to combat the negatives for both the employer and the employee. The problems of isolation and loss of direct control have been addressed by requiring the employee to commute to a central office or an organizational hub, usually two days a week. This gives managers and employees direct contact and keeps the employee more in touch with the company. The problems of workaholism and lack of coordination have been met by job assignments that outline the nature of the work, the time frame of the work, and the need for completion, which can be delivered during one of the weekly commutes. These assignments serve a dual purpose of giving limits and guidelines to the employees, but also of showing the employer's dependency on the employee. The reformation of OSHA (Occupational Safety and Health Administration) that began in 1995 brought about new questions and problems for the possibilities of telecommuting. "In a letter to a Texas-based company concerning the liability for a telecommuter's home office, it was deemed that the organization be liable for the safety of its telecommuters' home work sites" (Kerrigan p. 63). The letter, posted on OSHA's website, caused an eruption of contention leading to the removal of the letter from the website. "An analysis by Mark Wilson, a Heritage Foundation research fellow, shows the recent policy blunder left employers in the worst of all possible worlds — legal uncertainty" (Kerrigan p. 63).
After debates between opposing sides of the issue, another issue concerning liability arose, questioning the safety of telecommuters' children in terms of hazards from the workplace. Another issue arising from OSHA's letter is the liability of company resources. Most firms are covered when they add the computers, fax machines, and other equipment to their general policy (Hoke p. 35), but this policy does not cover home offices. After much dissension, the U.S. Department of Labor ruled, "Employers aren't responsible for the health and safety of white-collar telecommuters after all" (Rosencrance p. 1). After the statement by the Department of Labor, OSHA rewrote its archaic definition of ergonomics and released a new ruling for telecommuter liability. "The Occupational Safety and Health Administration will not inspect home offices and doesn't expect employers to inspect them either" (Hoover p. 17). The new directive also gave relief to all employers for liability of the employees' home offices. It continued to state, however, that OSHA would inspect home manufacturing operations when it receives complaints about serious health or safety violations or when a work-related fatality occurs (Rosencrance p. 93). The governmental "flip-flop" has left many employers leery of telecommuting, but the growth rate of telecommuters does not reflect a problem. Many new companies are taking advantage of their employees' homes to relieve costs of physical assets. Some companies have reversed the role of the managers to a field position, allowing managers to have more face-to-face communication with employees as they travel from office to "office". Some companies even legally accept liability of telecommuters through internal contracts and insurance. Today, the increasing rate of telecommuters is calling for the advancement of technology. This technology will lead to better and faster communication; however, it will bring its own set of problems.
What tomorrow holds for telecommuters is unclear; all we can do today is examine and adjust the good, the bad, and the government. Bibliography Gainey, T., Kelley, D., Hill, J. (1999). Telecommuting's

Monday, November 25, 2019

Sources Of Energy Essays - Energy Development, Energy Economics

Sources Of Energy Have you ever thought about how we get the energy to run the things we take for granted every single day? There are many sources of energy that are used for transportation, heat, light, and the manufacturing of goods of all kinds. The development of science and civilization is closely linked to the availability of energy in useful forms. The seven main energy sources are fossil fuels, hydroelectric, solar power, wind power, geothermal, nuclear power, and biomass energy. By harnessing the sun, wind, falling water, plant matter, and heat from the earth, energy planners expect to decrease the environmental impact of energy use. Most of the nonhydro renewable power comes through some form of combustion, such as the burning of biomass, landfill gas, or municipal solid waste. Little electricity comes from solar, wind, and geothermal sources. Factors that are increasing interest in renewable energy include cost advantages in niche markets, regulatory pressures, customer service requirements, fuel flexibility, and security. One of the biggest sources of energy is fossil fuels. Fossil fuels have served as a reliable source of heat for cooking and warmth since the beginning of history. The common fossil fuels are coal, peat, lignite, petroleum, and natural gas. Coal gas, coke, water gas, and producer gas can be made by using coal as the principal ingredient. Such artificial gases can be used for fuel, illumination, and as a source material for the manufacture of synthetic ammonia. Gasoline, kerosene, and fuel oil are made from petroleum. They are mainly used for transportation when the fuel is in liquid form. Natural gas is a natural mixture of gaseous hydrocarbons found in the ground or obtained from specially driven wells. The composition of natural gas varies in different localities. It is used extensively as an illuminant and a fuel.
Some geologists theorize that natural gas is a by-product of decaying vegetable matter in underground strata. Others think it may be primordial gases that rise up from the mantle. Natural gas was known to the ancients but was considered by them to be a supernatural phenomenon because it appeared as a mysterious fire bursting from the ground. Gas is also a fossil fuel. It is a gaseous substance that burns in the air and releases enough heat to be useful as a fuel. It is advantageous if a fuel gas is readily transportable through pipes and is easily liquefied. Oil gas is a type of gas made by applying heat to various petroleum distillates. Its principal use is as a supplement to natural gas during periods of heavy demand. Coal gas may be any of a variety of gases produced by heating coal in the absence of air and driving off the volatile constituents. It is not as high in fuel value as other gases and often contains tars, light oils, ammonia, and hydrogen sulfide. These common fuels used in industry, transportation, and the home are burned in the air. Scientists research and develop alternatives to gasoline every single day. One possible alternative is methanol, which can be produced from wood, coal, or natural gas. Another possibility is ethanol. Ethanol is an alcohol produced from grain and currently used in some types of US gasoline; an example of this is gasohol. A third alternative is compressed natural gas, which is much less polluting than gasoline and is currently used by a half-million vehicles around the world. Petroleum is a fossil fuel thought to have been formed over millions of years from incompletely decayed plant and animal remains buried under thick layers of rock. The widespread burning of petroleum products as fuels has resulted in serious problems of air pollution. Oil spilled from tankers and offshore wells has damaged ocean and coastline environments.
The environmentally disruptive effects of oil wells have sometimes led to strong opposition to new drilling, as in wilderness areas of Northern Alaska. Most of the energy consumed is ultimately generated by the combustion of fossil fuels, such as coal, petroleum, and natural gas. The world has only a finite supply of these fuels, which are in danger of being used up. Also, the combustion of these fuels releases various pollutants, such as carbon monoxide and sulfur

Thursday, November 21, 2019

Kecak Dance Essay Example | Topics and Well Written Essays - 2000 words

Kecak Dance - Essay Example This is apparent from the male performers' checkered costumes worn from the waist, as well as the performers' array as they do the dance. From its inception to date, the dance has earned itself incomparable fame globally owing to the magnificent aspects that comprise the entire performance, which this essay seeks to elaborate. Kecak Dance comprises an amalgamation of various Indian cultural exorcism movements and themes whose purpose was to narrate the Ramayana account (Ubud). The dance represents 1930s' work done by both Wayan Limbak and Walter Spies, who, due to their immense and varied know-how, emerged with a dance in a category of its own (Cormier). These composers sourced ideas from the Indian culture and merged them with the know-how they possessed, with the intention of narrating the Ramayana account through acting. This is manifested in the various aspects depicting Walter Spies' artistic touch, for illustration, checkered pants, varied costumes having dragon images, and the performers' fascinating array while dancing. The dance's creators intended to present their composition to Western tourists, prompting Wayan Limbak to popularize it globally. Therefore, Wayan Limbak ended up forming troupes meant to organize numerous functions globally with the intention of reaching many people (Cormier). Kecak dance is one of the numerous Indian expressions of the Ramayana account. Principally, this is a Hindu epic which artists from its inception have devised numerous ways to represent, for illustration, carving and even using canvas (Bakan 88). The account starts with the arrival of Rama accompanied by his wife Sita and immediate brother Laksmana in the jungle. Owing to Rama's grandmother's trickery, the trio found themselves exiled to the Dandaka forest, where they thought they would have the privacy they needed. Conversely, all their actions and missions in the forest were under the observation of the demon Rahwana.
The demon started lusting after Sita and sought ways of separating the trio to abduct her from the two men. The accomplishment of this mission was via his prime minister, who transformed himself into a golden deer to lure Rama away from his wife. Then the demon approached Sita in the form of a hungry priest desperately in need of assistance (Bakan 86). Finally, the trick succeeded: Rahwana abducted the wife and proceeded with her to his palace. Rama and his brother, on realizing what happened to Sita, embarked on a search mission intended to rescue her from the demonic kingdom with the aid of Sugriwa's monkey army. This is evident from certain of the dance's movements that are similar to those of monkeys, as they try to put off fire while engaging Meganada until they defeated him. Hence, the story acts as the theme of the dance, where performers narrate it using actions besides chanting (Nettl et al 89). Initially, in the 1930s, the Kecak dance's performers were only men, though as years progressed the organizers included women, since some of its scenes' roles entailed female performers, for instance, the position of Sita, who was Rama's partner. The most intriguing aspect of the Kecak dance is its unique mode of staging, since an audience without proper knowledge of its thematic account can perceive it as illogical and lose interest. Therefore, the organizers mainly ensure there is adequate literature

Wednesday, November 20, 2019

International Relations look at the instruction Essay

International Relations look at the instruction - Essay Example In such a situation, the existence of US forces serves as a balance of power in the region. This paper peeps into these aspects and also highlights the importance of multilateral arrangements for the promotion of security in the region. There has been an ever-increasing international concern, particularly on the part of the United States, regarding the state of security in the Asia Pacific region. The United States had vested interests in military deployment during the Cold War owing to the influence of Russia in the region. After the Cold War, the American military existence and its continuity in the East Asian countries happens to be a debatable issue, owing to the perceived future friction among the states. Ball (1994, p87) states that, "one of the unfortunate consequences of the end of the Cold War is the likely increase in regional conflict". The existence and influence of US military forces in the Asia Pacific region acts as a buffer to protect the region from any possible contravention arising among powerful states such as China and Japan. After the end of the Cold War, uncertainty concerning the state of regional security happens to be a constant factor. Many countries in the region perceive the military power and influence of other countries as threatening to their national interests, hence creating a lack of cooperation among the states for regional peace. The most important element in this case happens to be the unprecedented growth of China as the regional power and the rising concerns of East Asian countries regarding their national security. Several Asia Pacific countries have remained in alliance with the United States so as to curtail the political and military threats posed by China.
Pablo-Baviera (2003, p343) elaborates that, "for Japan, Australia, the Philippines, Thailand, and even Singapore, alliances are seen as part of a hedging strategy in the event that the trajectory of China's development results in it becoming aggressive towards neighbors". This indicates a heightening sense of insecurity in the region concerning the possible consequences of aggression on the part of China towards these countries. In turn, these countries regard their alliance with the US forces as significant for the regional balance of power. Pablo-Baviera (2003, p343) also exposits that, "South Korea appears to be an exception in terms of perceptions of a China threat. The main role of its alliance with the United States is perceived as preventing aggression by North Korea against itself". The major challenge seen by South Korea encompasses the possibility of any action on the part of North Korea to despoil its national sovereignty. North Korea has, in essence, remained detached from the wave of cooperation in the region. Furthermore, the country's passion for the expansion of its nuclear program causes profound terror to its neighboring countries. Cossa and Khanna (1997, p232) say that, "the isolation of North Korea and its hostility towards the South is one of the uncertain questions facing regional relations". This contributes significantly to regional instability by disengaging countries from collaborating with each other. With respect to North Korean nuclear progression, the United States shares the same concerns on security issues, as this goes against the country's own national interests as well. This element

Monday, November 18, 2019

Term paper Essay Example | Topics and Well Written Essays - 1250 words - 3

Term paper - Essay Example Apparently, taxation policies as well as government spending have a considerable effect on the economy and future prospects of the government as far as international relations are concerned. Taxation policy must take into account the fundamental rights of all workers in an economy (Mishkin 34). The government should particularly take into account the total population of its workforce during a given financial year even as it plans to bring to the fore the budgetary estimates. Since the budget of most countries is largely dependent on local taxes as the main source of funding, the wage rate per worker will be a key factor. Taxation policy should not compromise the workers' ability to meet their day-to-day needs to keep life moving (Mishkin 34). Hence, fiscal policy makers must take into account the wage rate, currency strength locally and internationally, and the cost of living. The government must therefore consider the current situation of its labor market before making any critical additions regarding purchasing goods and services, distributing transfer payments, and collecting taxes. If the current trend were unfavorable to the economy and labor market, the government would then have to revisit its fiscal policy to save the situation (Mishkin 34). An increase in the amount of taxes that employees pay to the government will adversely affect their disposable income. In most cases, a taxation policy that triggers increases in taxes paid to the government tends to lower the purchasing power of most households. Thus, a considerable number of people working in manufacturing and service industries, among other industries, will have to relinquish certain commodities that were previously a necessity to them (Agell 25). The main area of concern for fiscal policy is looking into ways in which changes in the government budget affect the overall economy.
The changes may not only compromise the capacity of the government to meet its policy needs but also its provision of essential services to the people. Heads of a country's finance or treasury department are tasked with drafting fiscal policy that is realistic and achievable considering the strength and sustainability of the current economic state of the country (Agell 25). The flagship annual document of the finance ministry essentially reviews the growth and developments of the economy. Of critical value is the capacity of the economy to withstand the constantly changing economic, social, and political prospects. Fiscal policy further affects the quality of labor in the market. If government spending surpasses total revenues, one of the major options the government employs to save the situation is raising taxation rates. The increase takes a toll on struggling employees, who in most cases hardly meet all their necessities. Hence, policy prospects should be workable and sustainable in the short term and long term despite the impending challenges to the economy during implementation of its programs. Under such circumstances, employees and business organizations will essentially react by initiating strategies to ensure the government policy does not compromise their day-to-day lifestyle (Agell 25). In the wake of growing concerns about bad fiscal policy, most employees as well as prospective workers have resorted to looking for employment opportunities in

Saturday, November 16, 2019

Automatic Encoding Detection And Unicode Conversion Engine Computer Science Essay

Automatic Encoding Detection And Unicode Conversion Engine Computer Science Essay In computers, characters are represented using numbers. Initially the encoding schemes were designed to support the English alphabet, which has a limited number of symbols. Later the requirement for a worldwide character encoding scheme to support multilingual computing was identified. The solution was to come up with a 16-bit encoding scheme to represent a character so that it could support a large character set. The current Unicode version contains 107,000 characters covering 90 scripts. In the current context, operating systems such as Windows 7 and UNIX-based systems, applications such as word processors, and data exchange technologies support this standard, enabling internationalization in the IT industry. Even though this standard has been the de facto standard, certain applications can still be seen using proprietary encoding schemes to represent their data. As an example, famous Sinhala news sites still do not adopt Unicode standard based fonts to represent their content. This causes issues such as the requirement of downloading proprietary fonts and browser dependencies, making the efforts of the Unicode standard in vain. In addition to the web site content itself, there are collections of information included in documents such as PDFs in non-Unicode fonts, making them difficult to search through search engines unless the search term is entered in that particular font encoding. This has given rise to the requirement of automatically detecting the encoding and transforming it into the Unicode encoding of the corresponding language, so that these problems are avoided. In the case of web sites, a browser plug-in implementation to support automatic non-Unicode to Unicode conversion would eliminate the requirement of downloading legacy fonts, which use proprietary character encodings.
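The statistical detection mechanism mentioned above can be sketched as a simple byte-frequency classifier. The encoding names ("LegacyFontA", "LegacyFontB") and the frequency profiles below are hypothetical placeholders invented for illustration; a real detector would train its profiles on corpora known to be in each legacy encoding.

```python
import math
from collections import Counter

# Hypothetical byte-frequency profiles for two legacy encodings.
# Real profiles would be trained on large samples of text known to be
# in each encoding; these names and counts are placeholders.
PROFILES = {
    "LegacyFontA": Counter({0xA0: 120, 0xA1: 80, 0xB5: 60, 0x20: 200}),
    "LegacyFontB": Counter({0xC2: 110, 0xC7: 90, 0xD1: 70, 0x20: 200}),
}

def score(data: bytes, profile: Counter) -> float:
    """Sum of log-probabilities of each byte under the profile,
    with add-one smoothing so unseen bytes are not fatal."""
    total = sum(profile.values()) + 256
    return sum(math.log((profile[b] + 1) / total) for b in data)

def detect(data: bytes) -> str:
    """Return the encoding name whose profile best explains the bytes."""
    return max(PROFILES, key=lambda name: score(data, PROFILES[name]))

sample = bytes([0xA0, 0xA1, 0x20, 0xB5])
print(detect(sample))  # → LegacyFontA
```

In practice byte bigrams or trigrams, rather than single bytes, tend to separate encodings of the same script more reliably, but the scoring structure stays the same.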
Although some web sites provide the source font information, there are certain web applications which do not give this information, making the auto-detection process more difficult. Hence it is required to detect the encoding first, before the content is fed to the transformation process. This has given rise to a research area of auto-detecting the language encoding for a given text based on language characteristics. This problem will be addressed based on a statistical language encoding detection mechanism. The technique will be demonstrated with support for all the Sinhala non-Unicode encodings. The implementation for the demonstration will make sure that it is an extensible solution, making it capable of supporting any given language based on a future requirement. Since the beginning of the computer age, many encoding schemes have been created to represent various writing scripts/characters for computerized data. With the advent of globalization and the development of the Internet, information exchanges crossing both language and regional boundaries are becoming ever more important. However, the existence of multiple coding schemes presents a significant barrier. Unicode has provided a universal coding scheme, but it has not so far replaced existing regional coding schemes for a variety of reasons. Thus, today's global software applications are required to handle multiple encodings in addition to supporting Unicode. In computers, characters are encoded as numbers. A typeface is the scheme of letterforms, and the font is the computer file or program which physically embodies the typeface. Legacy fonts use different encoding systems for assigning numbers to characters. This can lead to two legacy font encodings defining different numbers for the same character, which may cause conflicts in how characters are encoded across different systems and will require maintaining multiple encoding fonts.
The requirement of having a standard for unique character identification was satisfied with the introduction of Unicode. Unicode enables a single software product or a single website to be targeted across multiple platforms, languages and countries without re-engineering. Unicode Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems. The latest Unicode version has more than 107,000 characters covering 90 scripts, and consists of a set of code charts. The Unicode Consortium co-ordinates Unicode's development, and the goal is to eventually replace existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes. This standard is supported in many recent technologies, including programming languages and modern operating systems. All W3C recommendations have used Unicode as their document character set since HTML 4.0. Web browsers have supported Unicode, especially UTF-8, for many years [4], [5]. Sinhala Legacy Font Conversion Requirement for Web Content Sinhala language usage in computer technology has been present since the 1980s, but the lack of standards in character representation resulted in proprietary fonts. Sinhala was added to Unicode in 1998 with the intention of overcoming the limitations of proprietary character encodings. Dinamina, DinaminaUniWeb, Iskoola Pota, KandyUnicode, KaputaUnicode, Malithi Web, and Potha are some Sinhala Unicode fonts which were developed so that the numbers assigned to the characters are the same. Still, some major news sites which display Sinhala character content have not adopted the Unicode standards. Legacy font encoding schemes are used instead, causing conflicts in content representation. In order to minimize the problems, font families were created where only the shapes of characters differ but the encoding remains the same.
FM Font Family and DL Font Family are some examples where a font family concept is used as a grouping of Sinhala fonts with similar encodings [1], [2]. Adoption of non-Unicode encodings causes a lot of compatibility issues when content is viewed in different browsers and operating systems. Operating systems such as Windows Vista and Windows 7 come with Sinhala Unicode support and do not require external fonts to be installed to read Sinhalese script. Variations of GNU/Linux distributions such as Debian or Ubuntu also provide Sinhala Unicode support. Enabling non-Unicode applications, especially web content, with support for Unicode fonts will allow users to view content without installing the legacy fonts. Non Unicode PDF Documents In addition to the content on the web, there exists a whole lot of government documents which are in PDF format but whose contents are encoded with legacy fonts. Those documents would not be searchable through search engines by entering the search terms in Unicode. In order to overcome the problem, it is important to convert such documents into a Unicode font so that they are searchable and their data can be used by other applications consistently, irrespective of the font. As another part of the project, this problem will be addressed through a converter tool which creates a Unicode version of an existing PDF document currently in a legacy font. The Problem Sections 1.3 and 1.4 describe two domains in which non-Unicode to Unicode conversion is required. The conversion involves identifying non-Unicode content and replacing it with the corresponding Unicode content. The content replacement requires a mapping engine, which performs the proper segmentation of the input text and maps it to the corresponding Unicode code points. The mapping engine can perform the mapping task only if it knows the source text encoding. In general, the encoding is specified along with the content so that the mapping engine can use it directly.
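As a sketch of what such a mapping engine might look like for a single-octet legacy font, the table below maps assumed legacy byte values to Sinhala Unicode code points. The legacy values are invented for illustration and do not correspond to the code assignments of any real Sinhala font.

```python
# Illustrative legacy-byte -> Unicode mapping. These legacy codes are
# hypothetical; a real table would be built from the actual font's charts.
LEGACY_TO_UNICODE = {
    0x85: "\u0D85",  # assumed legacy code for Sinhala letter AYANNA
    0x9A: "\u0D9A",  # assumed legacy code for Sinhala letter KAYANNA
    0xBD: "\u0DBD",  # assumed legacy code for Sinhala letter DANTAJA LAYANNA
}

def to_unicode(legacy_bytes: bytes) -> str:
    """Map each legacy byte to its Unicode character, passing ASCII through."""
    out = []
    for b in legacy_bytes:
        if b < 0x80:
            out.append(chr(b))  # ASCII range is typically shared
        else:
            # U+FFFD (replacement character) marks unmapped legacy codes
            out.append(LEGACY_TO_UNICODE.get(b, "\uFFFD"))
    return "".join(out)

print(to_unicode(bytes([0x9A, 0x85])))  # two mapped Sinhala characters
```

Real Sinhala legacy fonts also reorder some vowel signs relative to Unicode's logical order, so a production mapping engine needs a segmentation and reordering pass on top of this per-code lookup.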
However, in certain cases the encoding is not specified along with the content. Hence detecting the encoding through an encoding detection engine provides a research area, especially for non-Unicode content. In addition, incorporating the detection engine into a conversion engine forms another part of the problem, to serve the application areas in 1.3 and 1.4.

Project Scope

The system will initially target Sinhala fonts used by local sites. Later the same mechanism will be extended to support other languages and scripts (Tamil, Devanagari).

Deliverables and Outcomes

1. A web service/plug-in for local-language web site font conversion, which automatically converts website contents from legacy fonts to Unicode.
2. A PDF document conversion tool to convert legacy fonts to Unicode.

In both implementations, language encoding detection will use the proposed encoding detection mechanism. It can be considered the core of the implementations, in addition to the translation engine which performs the non-Unicode to Unicode mapping.

Literature Review

Character Encodings

Character Encoding Schemes

Encoding refers to the process of representing information in some form. Human language is an encoding system by which information is represented as sequences of lexical units, and those in turn as sound or gesture sequences. Written language is a derivative encoding system by which those sequences of lexical units, sounds or gestures are represented in terms of the graphical symbols that make up some writing system. A character encoding is an algorithm for presenting characters in digital form as sequences of octets. There are hundreds of encodings, and many of them have several names. There is a standardized procedure for registering an encoding: a primary name is assigned to the encoding, and possibly some alias names. For example, ASCII, US-ASCII, ANSI_X3.4-1986, and ISO646-US are different names for the same encoding.
There are also many unregistered encodings and names that are used widely. Character encoding names are not case sensitive, so ASCII and Ascii are equivalent [25].

Figure 2.1: Character encoding example

Single-Octet Encodings

When a character repertoire contains at most 256 characters, the simplest and most obvious approach is to assign a number in the range 0–255 to each character and use an octet with that value to represent the character. Such encodings, called single-octet or 8-bit encodings, are widely used and will remain important [22].

Multi-Octet Encodings

In multi-octet encodings, more than one octet is used to represent a single character. A simple two-octet encoding is sufficient for a character repertoire that contains at most 65,536 characters. Two-octet schemes are uneconomical if the text mostly consists of characters that could be represented in a single-octet encoding. On the other hand, the objective of supporting a universal character set is not achievable with just 65,536 unique codes. Thus, encodings that use a variable number of octets per character are more common. The most widely used among such encodings is UTF-8 (UTF stands for Unicode Transformation Format), which uses one to four octets per character.

Principles of the Unicode Standard

Unicode is used as the universal encoding standard to encode characters in all living languages. To that end, it follows a set of fundamental principles. The Unicode standard is simple and consistent: it does not depend on states or modes for encoding special characters. The Unicode standard incorporates the character sets of many existing standards; for example, it includes the Latin-1 character set as its first 256 characters. It includes repertoires of characters from numerous other corporate, national and international standards as well. Modern businesses need to handle characters from a wide variety of languages at the same time.
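The variable-length property of UTF-8 can be demonstrated directly: ASCII characters take one octet, characters in the Latin-1 range two, most Indic characters (including Sinhala) three, and supplementary-plane characters four.

```python
# UTF-8 octet counts for characters of increasing code point value.
samples = {
    "A": 1,    # U+0041, ASCII, one octet
    "é": 2,    # U+00E9, Latin-1 range, two octets
    "ක": 3,    # U+0D9A, Sinhala letter ka, three octets
    "𝄞": 4,    # U+1D11E, musical symbol G clef, four octets
}
for ch, expected in samples.items():
    encoded = ch.encode("utf-8")
    assert len(encoded) == expected
    print(f"U+{ord(ch):04X} -> {len(encoded)} octet(s): {encoded.hex()}")
```

This is why UTF-8 is economical for mostly-ASCII text while still covering the full universal character set.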
With Unicode, a single internationalization process can produce code that handles the requirements of all the world's markets at the same time. Data corruption problems do not occur, since Unicode has a single definition for each character. Because it handles the characters for all the world's markets in a uniform way, it avoids the complexities of different character code architectures. All modern operating systems, from PCs to mainframes, support Unicode now or are actively developing support for it. The same is true of databases as well. There are 10 design principles associated with Unicode.

Universality

Unicode is designed to be universal. The repertoire must be large enough to encompass all characters that are likely to be used in general text interchange. Unicode needs to encompass a variety of essentially different collections of characters and writing systems. For example, it cannot postulate that all text is written left to right, or that all letters have uppercase and lowercase forms, or that text can be divided into words separated by spaces or other whitespace.

Efficiency

Software does not have to maintain state or look for special escape sequences, and character synchronization from any point in a character stream is quick and unambiguous. A fixed character code allows for efficient sorting, searching, display, and editing of text. But Unicode's efficiency comes with certain tradeoffs, especially the storage requirement of up to four octets per character. Certain representation forms, such as UTF-8, require linear processing of the data stream in order to identify characters. Unicode also contains a large number of characters and features that have been included only for compatibility with other standards. This may require preprocessing that deals with compatibility characters and with different Unicode representations of the same character (e.g., the letter é as a single character or as two characters).
Characters, Not Glyphs

Unicode assigns code points to characters as abstractions, not to visual appearances. A character in Unicode represents an abstract concept rather than its manifestation as a particular form or glyph. As shown in Figure 2.2, the glyphs of many fonts that render the Latin character "a" all correspond to the same abstract character.

Figure 2.2: Abstract Latin letter "a" and style variants

Another example is the Arabic presentation forms. An Arabic character may be written in up to four different shapes. Figure 2.3 shows an Arabic character written in its isolated form, and at the beginning, in the middle, and at the end of a word. According to the design principle of encoding abstract characters, these presentation variants are all represented by one Unicode character.

Figure 2.3: Arabic character with four representations

The relationship between characters and glyphs is rather simple for languages like English: mostly, each character is presented by one glyph, taken from a font that has been chosen. For other languages, the relationship can be much more complex, routinely combining several characters into one glyph.

Semantics

Characters have well-defined meanings. When the Unicode standard refers to semantics, it often means the properties of characters, such as spacing, combinability, and directionality, rather than what the character really means.

Plain Text

Unicode deals with plain text, i.e., strings of characters without formatting or structuring information (except for things like line breaks).

Logical Order

The default representation of Unicode data uses the logical order of the data, as opposed to approaches that handle writing direction by changing the order of characters.

Unification

The principle of uniqueness was also applied in deciding that certain characters should not be encoded separately. Unicode encodes duplicates of a character as a single code point if they belong to the same script but different languages.
For example, the letter ü denoting a particular vowel in German is treated as the same as the letter ü in Spanish. The Unicode standard uses Han unification to consolidate Chinese, Korean, and Japanese ideographs. Han unification is the process of assigning the same code point to characters historically perceived as being the same character but represented as unique in more than one East Asian ideographic character standard. This results in a group of ideographs shared by several cultures and significantly reduces the number of code points needed to encode them. The Unicode Consortium chose to represent shared ideographs only once because the goal of the Unicode standard was to encode characters independent of the languages that use them. Unicode makes no distinctions based on pronunciation or meaning; higher-level operating systems and applications must take that responsibility. Through Han unification, Unicode assigned about 21,000 code points to ideographic characters instead of the 120,000 that would be required if the Asian languages were treated separately. It is true that the same character might look slightly different in Chinese than in Japanese, but that difference in appearance is a font issue, not a uniqueness issue.

Figure 2.4: Han unification example

The Unicode standard allows for character composition in creating marked characters. It encodes each character and diacritic or vowel mark separately, and allows the characters to be combined to create a marked character. It provides single codes for marked characters when necessary to comply with preexisting character standards.

Dynamic Composition

Characters with diacritic marks can be composed dynamically, using characters designated as combining marks.

Equivalent Sequences

Unicode has a large number of characters that are precomposed forms, such as é. They have decompositions that are declared equivalent to the precomposed form.
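The equivalence between a precomposed character and its decomposition can be made concrete with Unicode normalization, here using Python's standard `unicodedata` module:

```python
import unicodedata

# Precomposed é (U+00E9) vs. decomposed e + combining acute accent (U+0301).
precomposed = "\u00E9"
decomposed = "e\u0301"

# As raw code point sequences the two strings are distinct...
assert precomposed != decomposed

# ...but normalization realizes the declared equivalence in both directions:
assert unicodedata.normalize("NFC", decomposed) == precomposed   # compose
assert unicodedata.normalize("NFD", precomposed) == decomposed   # decompose
print("canonically equivalent:", precomposed, decomposed)
```

Text-processing code that compares or searches Unicode strings typically normalizes both sides to one form (NFC or NFD) first, for exactly this reason.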
An application may still treat the precomposed form and the decomposition differently, since as strings of encoded characters they are distinct.

Convertibility

Character data can be accurately converted between Unicode and other character standards and specifications.

South Asian Scripts

The scripts of South Asia share so many common features that a side-by-side comparison of a few will often reveal structural similarities even in the modern letterforms. With minor historical exceptions, they are written from left to right. They are all abugidas, in which most symbols stand for a consonant plus an inherent vowel (usually the sound /a/). Word-initial vowels in many of these scripts have distinct symbols, and word-internal vowels are usually written by juxtaposing a vowel sign in the vicinity of the affected consonant. Absence of the inherent vowel, when that occurs, is frequently marked with a special sign [17]. Another designation is preferred in some languages; for example in Hindi, the word hal refers to the character itself, and halant refers to the consonant that has its inherent vowel suppressed. The virama sign nominally serves to suppress the inherent vowel of the consonant to which it is applied; it is a combining character, with its shape varying from script to script. Most of the scripts of South Asia, from north of the Himalayas to Sri Lanka in the south, and from Pakistan in the west to the easternmost islands of Indonesia, are derived from the ancient Brahmi script. The oldest lengthy inscriptions of India, the edicts of Ashoka from the third century BCE, were written in two scripts, Kharoshthi and Brahmi. These are both ultimately of Semitic origin, probably deriving from Aramaic, which was an important administrative language of the Middle East at that time. Kharoshthi, written from right to left, was supplanted by Brahmi and its derivatives. The descendants of Brahmi spread with myriad changes throughout the subcontinent and outlying islands.
There are said to be some 200 different scripts deriving from it. By the eleventh century, the modern script known as Devanagari was in ascendancy in India proper as the major script of Sanskrit literature. The North Indian branch of scripts was, like Brahmi itself, chiefly used to write Indo-European languages such as Pali and Sanskrit, and eventually the Hindi, Bengali, and Gujarati languages, though it was also the source of scripts for non-Indo-European languages such as Tibetan, Mongolian, and Lepcha. The South Indian scripts are also derived from Brahmi and therefore share many structural characteristics. These scripts were first used to write Pali and Sanskrit but were later adapted for writing non-Indo-European languages, including the Dravidian family of southern India and Sri Lanka.

Sinhala Language

Characteristics of Sinhala

The Sinhala script, also known as Sinhalese, is used to write the Sinhala language, the majority language of Sri Lanka. It is also used to write the Pali and Sanskrit languages. The script is a descendant of Brahmi and resembles the scripts of South India in form and structure. Sinhala differs from other languages of the region in that it has a series of prenasalized stops that are distinguished from the combination of a nasal followed by a stop. In other words, both forms occur and are written differently [23].

Figure 2.5: Example of a prenasalized stop in Sinhala

In addition, Sinhala has separate distinct signs for both a short and a long low front vowel sounding similar to the initial vowel of the English word "apple", usually represented in IPA as U+00E6 æ LATIN SMALL LETTER AE (ash). The independent forms of these vowels are encoded at U+0D87 and U+0D88. Because of these extra letters, the encoding for Sinhala does not precisely follow the pattern established for the other Indic scripts (for example, Devanagari).
It does use the same general structure, making use of phonetic order, matra reordering, and the virama (U+0DCA SINHALA SIGN AL-LAKUNA) to indicate conjunct consonant clusters. Sinhala does not use half-forms in the Devanagari manner, but does use many ligatures.

Sinhala Writing System

The Sinhala writing system can be called an abugida, as each consonant has an inherent vowel (/a/), which can be changed with the different vowel signs. Thus, for example, the basic form of the letter k is ක ka. For ki, a small arch is placed over the ක: කි. This replaces the inherent /a/ by /i/. It is also possible to have no vowel following a consonant. In order to produce such a pure consonant, a special marker, the hal kirīma, has to be added: ක්. This marker suppresses the inherent vowel.

Figure 2.6: Character associative symbols in Sinhala

Historical Symbols

Neither U+0DF4 SINHALA PUNCTUATION KUNDDALIYA nor the Sinhala numerals are in general use today, having been replaced by Western-style punctuation and Western digits. The kunddaliya was formerly used as a full stop or period. It is included for scholarly use. The Sinhala numerals are not presently encoded.

Sinhala and Unicode

In 1997, Sri Lanka submitted a proposal for the Sinhala character code at the Unicode working group meeting in Crete, Greece. This proposal competed with proposals from the UK, Ireland and the USA. The Sri Lankan draft was finally accepted with slight modifications. This was ratified at the 1998 meeting of the working group held in Seattle, USA, and the Sinhala code chart was included in Unicode Version 3.0 [2]. It has been suggested by the Unicode Consortium that ZWJ and ZWNJ should be introduced in orthographic languages like Sinhala to achieve the following:

1. ZWJ joins two or more consonants to form a single unit (conjunct consonants).
2. ZWJ can also alter the shape of preceding consonants (cursiveness of the consonant).
3.
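The abugida behaviour described above maps directly onto code point sequences from the Unicode Sinhala block: a bare consonant carries the inherent /a/, a vowel sign replaces it, and the al-lakuna suppresses it.

```python
# Code points from the Unicode Sinhala block (U+0D80–U+0DFF).
KA = "\u0D9A"          # ක  SINHALA LETTER ALPAPRAANA KAYANNA (inherent /a/)
VOWEL_I = "\u0DD2"     # ි  SINHALA VOWEL SIGN KETTI IS-PILLA
AL_LAKUNA = "\u0DCA"   # ්  SINHALA SIGN AL-LAKUNA (the virama)

ka = KA                # /ka/: the consonant alone keeps its inherent vowel
ki = KA + VOWEL_I      # /ki/: the vowel sign replaces the inherent /a/
k = KA + AL_LAKUNA     # /k/:  the al-lakuna (hal kirīma) suppresses the vowel

for syllable in (ka, ki, k):
    print(syllable, [f"U+{ord(c):04X}" for c in syllable])
```

Note that /ki/ and the pure consonant /k/ are each two code points but render as a single glyph; this is the characters-versus-glyphs distinction from the design principles above.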
ZWNJ can be used to disjoin a single ligature into two or more units.

Encoding Auto-Detection

Browsers and Auto-Detection

In designing algorithms to auto-detect the encodings of web pages, the following assumptions about the input data are relied upon [24]. The input text is composed of words/sentences readable to readers of a particular language. The input text comes from typical web pages on the Internet and is not written in an ancient dead language. The input text may contain extraneous noise which has no relation to its encoding, e.g. HTML tags, non-native words (e.g. English words in Chinese documents), and space and other format/control characters.

Methods of Auto-Detection

The paper [24] discusses three different methods for detecting the encoding of text data.

Coding Scheme Method

In any of the multi-byte coding schemes, not all possible code points are used. If an illegal byte or byte sequence (i.e. an unused code point) is encountered when verifying a certain encoding, it is possible to immediately conclude that this is not the right guess. An efficient algorithm for detecting character sets with the coding scheme method, using a parallel state machine, is discussed in the paper [24]. For each coding scheme, a state machine is implemented to verify a byte sequence for that particular encoding. For each byte the detector receives, it feeds that byte to every active state machine, one byte at a time. A state machine changes its state based on its previous state and the byte it receives. In a typical example, one state machine will eventually provide a positive answer and all others will provide negative answers.

Character Distribution Method

In any given language, some characters are used more often than others. This fact can be used to devise a data model for each language script. This is particularly useful for languages with a large number of characters, such as Chinese, Japanese and Korean.
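The parallel state machine idea can be sketched with a single, deliberately simplified UTF-8 verifier (it checks lead/continuation byte ranges only and ignores overlong and surrogate sequences); a real detector would run one such machine per candidate encoding and eliminate any machine that hits an illegal byte.

```python
# Sketch of the coding scheme method: feed every byte to each active
# verifier; an illegal byte sequence eliminates that encoding candidate.
def make_utf8_verifier():
    pending = 0  # continuation bytes still expected
    def feed(b: int) -> bool:
        nonlocal pending
        if pending:
            if 0x80 <= b <= 0xBF:          # valid continuation byte
                pending -= 1
                return True
            return False                    # broken sequence: not UTF-8
        if b < 0x80:
            return True                     # ASCII
        if 0xC2 <= b <= 0xDF: pending = 1; return True   # 2-byte lead
        if 0xE0 <= b <= 0xEF: pending = 2; return True   # 3-byte lead
        if 0xF0 <= b <= 0xF4: pending = 3; return True   # 4-byte lead
        return False                        # illegal lead byte (e.g. 0xFF)
    return feed

def detect(data: bytes) -> set:
    machines = {"UTF-8": make_utf8_verifier()}  # one machine per candidate
    alive = set(machines)
    for b in data:
        alive = {name for name in alive if machines[name](b)}
    return alive

print(detect("ක".encode("utf-8")))  # UTF-8 survives: {'UTF-8'}
print(detect(b"\xff\xfe"))          # 0xFF is never legal UTF-8: set()
```

In a full implementation, verifiers for encodings such as GB2312 or EUC-KR would run alongside this one, and the survivors after the whole stream is consumed are the plausible encodings.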
The tests were carried out with data for simplified Chinese encoded in GB2312, traditional Chinese encoded in Big5, Japanese and Korean. It was observed that a rather small set of code points covers a significant percentage of the characters used. A parameter called the Distribution Ratio was defined and used for separating the encodings:

Distribution Ratio = the number of occurrences of the 512 most frequently used characters divided by the number of occurrences of the rest of the characters.

Two-Char Sequence Distribution Method

In languages that use only a small number of characters, we need to go further than counting the occurrences of each single character; combinations of characters reveal more language-characteristic information. A 2-char sequence is defined as two characters appearing immediately one after another in the input text, and the order is significant in this case. Just as not all characters are used equally frequently in a language, the 2-char sequence distribution also turns out to be extremely language/encoding dependent.

Current Approaches to Solving Encoding Problems

Siyabas Script

SiyabasScript is an attempt to develop a browser plugin which solves the problem of legacy fonts on Sinhala news sites [6]. It is an extension for the Mozilla Firefox and Google Chrome web browsers. The solution was specifically designed for a limited number of target web sites which used specific fonts, and it had the limitation of requiring the plug-in to be re-engineered whenever a new version of the browser was released. The solution was also not general, since it did not have the ability to support a new site using a different Sinhala legacy font. To overcome that, the proposed solution will identify fonts and encodings based on the content rather than on the site. SiyabasScript might also stop working if a site decides to adopt another legacy font, as it cannot detect encoding scheme changes.
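The Distribution Ratio formula above can be sketched directly; the paper uses the 512 most frequent characters, while the toy example here uses a small k on tiny strings just to show the contrast between frequency-concentrated and uniform text.

```python
from collections import Counter

# Distribution Ratio = occurrences of the k most frequent characters
# divided by occurrences of all the rest (k = 512 in the paper [24]).
def distribution_ratio(text: str, k: int = 512) -> float:
    counts = Counter(text)
    top = sum(n for _, n in counts.most_common(k))
    rest = len(text) - top
    return top / rest if rest else float("inf")

concentrated = "aaaaabbbbbc"   # two characters dominate the text
uniform = "abcdefghijk"        # every character appears exactly once

print(distribution_ratio(concentrated, k=2))  # 10 / 1 = 10.0
print(distribution_ratio(uniform, k=2))       # 2 / 9 ≈ 0.22
```

A language model built this way assigns each candidate encoding an expected ratio, and the observed ratio of the input stream is compared against those expectations.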
There is also a significant delay in the conversion process: the user notices the content displayed in legacy-font characters before it gets converted to Unicode. This performance delay can be identified as another area for improvement. Furthermore, the conversion process does not produce an exact conversion, especially when characters need to be combined in Unicode. [Several Sinhala example words, not recoverable from this copy's encoding] can be mentioned as examples of
such conversion issues. The plug-in supports Sinhala Unicode conversion for the sites www.lankadeepa.lk, www.lankaenews.com and www.lankascreen.com, but the other websites mentioned in the paper do not get properly converted to Sinhala with Firefox version 3.5.17.

Aksharamukha Asian Script Converter

Aksharamukha is a South and South-East Asian script converter tool. It supports transliteration between Brahmi-derived Asian scripts. It also has the functionality to transliterate web pages from Indic scripts to other scripts. The converter scrapes the HTML page, transliterates the Indic scripts and displays the HTML. There are certain issues in the tool when it comes to alignment with the original web page; misalignments, missing images and unconverted hyperlinks are some of them.

Figure 2.7: Aksharamukha Asian Script Converter

Corpus-based Sinhala Lexicon

The lexicon of a language is its vocabulary, including higher-order constructs such as words and expressions. It can be used as a supporting tool for detecting the encoding of a given text. The corpus-based Sinhala lexicon has nearly 35,000 entries based on a corpus consisting of 10 million words from diverse genres such as technical writing, creative writing and news reportage [7], [9]. The text distribution across genres is given in Table 2.1.

Table 2.1: Distribution of Words across Genres [7]

Genre               Number of words   Percentage of words
Creative Writing    2,340,999         23%
Technical Writing   4,357,680         43%
News Reportage      3,433,772         34%

N-gram-based Language, Script, and Encoding Scheme Detection

An N-gram is a sequence of N characters, and N-gram analysis is a well-established technique for classifying the language of text documents. The method detects the language, script, and encoding scheme of a target text document by checking how many byte sequences of the target match the byte sequences that can appear in texts belonging to a given language, script, and encoding scheme.
N-grams are extracted from a string, or a document, by a sliding window that shifts one character at a time.

Sinhala-Enabled Mobile Browser for J2ME Phones

Mobile phone usage is rapidly increasing throughout the world as well as in Sri Lanka, and the mobile phone has become the most ubiquitous communication device. Accessing the Internet through the mobile phone has become a common activity, especially for messaging and news. On J2ME-enabled phones, Sinhala Unicode support is yet to be developed: these phones do not allow installation of external fonts, so they cannot display Unicode content, especially on the web, until Unicode is supported by the platform. Integrating Unicode viewing support would provide a good opportunity to carry the technology to remote areas, since content could be presented in the native language. If this is facilitated, in addition to the urban crowd, people from rural areas will be able to subscribe to a daily newspaper on their mobiles. One major advantage of such an application is that it provides a phone-model-independent solution which supports any Java-enabled phone. Cillion is a mini-browser software which shows Unicode content on J2ME phones. This software is an application developed with the fonts integrated wh
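The sliding-window N-gram extraction described above is compact enough to state in full; the same function works on byte strings, which is what a byte-sequence-based encoding detector would feed it.

```python
# N-gram extraction by a window that slides one position at a time.
def ngrams(text, n):
    """Return all contiguous length-n slices of text (str or bytes)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(ngrams("sinhala", 2))  # ['si', 'in', 'nh', 'ha', 'al', 'la']
print(ngrams("sinhala", 3))  # ['sin', 'inh', 'nha', 'hal', 'ala']
```

A detector then counts how many of the target document's N-grams also occur in reference profiles built for each (language, script, encoding) combination, and picks the profile with the highest overlap.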

Wednesday, November 13, 2019

John Milton's Paradise Lost as Christian Epic Essay example -- Milton

Paradise Lost as Christian Epic

John Milton's great epic poem, Paradise Lost, was written between the 1640s and 1665 in England, at a time of rapid change in the western world. Milton, a Puritan, clung to traditional Christian beliefs throughout his epic, but he also combined signs of the changing modern era with ancient epic style to craft a masterpiece. He chose as the subject of his great work the fall of man, from Genesis, which was a very popular story to discuss and retell at the time. His whole life had led up to the completion of this greatest work; he put over twenty years of time and almost as many years of study and travel into building a timeless classic. The success of his poem lies in the fact that he skillfully combined classic epic tradition with strongly held Puritan Christian beliefs. In Paradise Lost, Milton uses many conventions of the classic epic, including an invocation of the Muse, love, war, a solitary voyage, heroism, the supernatural and mythical allusion. Milton writes, "Sing, Heavenly Muse, that on the secret top of Oreb, or of Sinai, didst inspire that shepherd who first taught the chosen seed in the beginning how the heavens and earth rose out of Chaos." Here he invokes the traditional muse of the epic, yet in the same sentence he identifies the muse as a Christian being and asks him to sing of Christian tales. A central theme of Paradise Lost is the deep and true love between Adam and Eve. This follows both traditional Christianity and conventional epic style. Adam and Eve are created and placed on earth as "our first two parents, yet the only two of mankind, in the happy garden placed, reaping immortal fruits of joy and love, uninterrupted joy, unrivaled love, in blissful solitude."(... ...le in one sentence. Thus, he successfully completes the tapestry which he has created, weaving the Bible and the genre of the epic closely together to create a work of art.
Throughout Paradise Lost, Milton uses various tools of the epic to convey a traditional and very popular Biblical story. He adds his own touches to make it more of an epic and to set forth new insights into God's ways and the temptations we all face. Through his uses of love, war, heroism, and allusion, Milton crafted an epic; through his references to the Bible and his selection of Christ as the hero, he set forth a beautifully religious Renaissance work. He masterfully combined these two techniques to create a beautiful story capable of withstanding the test of time and touching its readers for centuries. Works Cited Scott Elledge, ed., Paradise Lost, second edn. (NY: Norton, 1993).