Geography Overview of the Maldives

Abstract
This paper reviews the political, economic, topographical, and historical geography of the Maldives and its people. After a brief overview of basic facts, the paper shifts focus to more specific areas. First, it examines the topographical geography of the nation, reviewing the formation and size of the atolls as well as climate, flora, and fauna. Sections on historical, political, and economic geography follow, and the paper concludes with the current state of the Maldives and possible future outcomes for the nation based on political and climate changes.
Maldives
The Republic of Maldives is a South Asian country composed of atolls located in the Indian Ocean. It is an isolated archipelago and one of the smallest and poorest countries in the world. The United Nations estimated the population of Maldives to be approximately 294,000 people (Metz, 1995). The Maldivian capital of Male’ holds about a quarter of the total population.
Officially, Divehi is the language of Maldives. Divehi is related to the old language of Ceylon, and Arabic and Urdu have influenced it; Maldivians write in the Thaana script. Most government officials speak English, but only a small percentage of Maldivians speak anything other than Divehi. Ethnic groups consist of a combination of Sinhalese, Arabic, Dravidian, Australasian, and African assemblages (Metz, 1995).
Topographical Geography
The Republic of Maldives is the smallest country in Asia. An archipelago located in the Indian Ocean, Maldives consists of nearly 1,200 coral islands assembled in a dual chain of 27 atolls. These atolls sit upon a ridge jutting up from the Indian Ocean in a north-to-south expanse of 596.5 miles (Brown, Turner, Hameed, & Bateman, 1997). Many atolls are made of circular coral reefs which support small islands within. Each island spans about a mile, and none rises more than a few feet above sea level. Maldives is the world's lowest-lying country, with an average ground-level elevation of only 4 feet 11 inches above sea level; its highest point, at 7 feet 10 inches, is the lowest national high point in the world (Metz, 1995). No single island is more than about five miles long. Each atoll has about five to ten islands that are populated and twenty to sixty that are unpopulated. Many atolls also consist of a main remote island enclosed by a steep coral beach (Metz, 1995).
The Maldives archipelago is situated upon the Chagos-Maldives-Laccadive Ridge, an immense underwater mountain range. This particular geographical setup forms a unique terrestrial ecoregion, but it leaves the nation susceptible to rising sea levels and natural disasters. For example, the tsunami of 2004 killed more than 100 Maldivians and displaced more than 12,000 (BBC News, 2014). Other environmental issues plague the Maldives as well, including a diminishing supply of freshwater and poor sewage treatment (Brown et al., 1997).
Approximately 200 of these islands are inhabited by local Maldivians, and 87 have been converted into lavish resorts for travelers and tourists. The lush groves of breadfruit and coconut trees, the sandy beaches, and the beautiful corals visible through crystal-clear waters combined to attract nearly 1.2 million tourists to Maldives in 2015 (Naish, 2016).
Historical and Political Geography
Early Maldivian history is shrouded in mystery. No archeological remains of early settlers have been found. The earliest identified settlers were probably from southern India, followed by migrants from Sri Lanka. Arab sailors came from east Africa and other countries, and today's ethnicity reflects a blend of these cultures (Metz, 1995). Many researchers believe the earliest settlers were of Aryan descent, coming from India and Sri Lanka in the 5th century BC. Maldivians are believed to have practiced Hinduism, then Buddhism, until 1153 AD, when the sitting king of Maldives was converted to Islam (Metz, 1995). Maldivian history reflects the Islamic concept that before Islam, ignorance reigned, although Maldivian culture retains many of the customs and mannerisms from when Buddhism was prominent in the area. Since that initial Islamic conversion, the recording of history in Maldives has been much more consistent (MaldiveIsle, 2010). After the conversion, the Maldivian government was a monarchy ruled by sovereign sultans, and intermittently by sultanas, or queens (MaldiveIsle, 2010).
Trade wars with the Portuguese during the 16th century led to the Portuguese seizure of Male’ in 1558. In 1573, resistance leader Muhammad Thakurufanu defeated the Portuguese invaders, and his dynasty ruled Maldives until 1752. At that time, Malabari pirates overthrew Sultan Ali VI and stationed troops in the capital, until Maldivian leader Muleege Hassan Maniku regained control of the throne (MaldiveIsle, 2010). Political instability led Maldives to enter into a protectorate with the British in 1887, wherein Maldives gained protection from foreign antagonism in exchange for agreeing not to join forces with any other foreign authority (MaldiveIsle, 2010).
Although researchers disagree about whether Maldives was ever fully independent of British power, for the most part Maldivians enjoyed independence from foreign rulers. The Maldivian constitution was formed in 1932, with overtones of Islamic Sharia law, and the sultanate became an elected position (MaldiveIsle, 2010). However, the public disagreed, physically tearing the constitution to pieces and dethroning the Sultan in 1934 for overstepping his bounds. A new constitution was written in 1937, and nine years later the British agreement was renewed. The Maldives changed from a monarchy to a republic in 1953, and the position of sultan was eliminated. Mohammed Amin Didi was the first elected President of Maldives, but his tenure was cut short when he was overthrown amid food scarcities, worsened by wartime famine that lingered into the 1950s, and anger over his tobacco ban. The sultanate once again ruled Maldives until 1968, when the republic was reinstated and Ibrahim Nasir became President; Mohammed Fareedh was the last Sultan of the Maldives (BBC News, 2016).
Nasir retired in 1978 and was succeeded by Abd al-Gayoom. Maldives joined the Commonwealth in 1982, after the tourist industry led to expanded economic growth (BBC News, 2016). Gayoom was re-elected repeatedly until 2008, when opposition leader Mohamed Nasheed became President. Nasheed resigned in 2012 after demonstrations and a mutiny by the police force, and Vice President Mohamed Waheed rose to the presidency. Political unrest in Maldives continued after the 2013 election of Gayoom's half-brother, Abdulla Yameen. Opposition leader and former President Nasheed was arrested on terrorism charges in 2015, prompting concern from international governments about political unrest in Maldives (BBC News, 2016). Nasheed was sentenced to 13 years in prison on the terrorism charges but was granted leave in January 2016 to travel to Britain for back surgery. In April, the Maldivian government ordered Nasheed to return; however, he was granted refugee status in Britain, where he remains to this day (BBC News, 2016). Abdulla Yameen remains the Maldivian President, and in October 2016 the Maldives announced its departure from the Commonwealth (BBC News, 2016).
The current political atmosphere in Maldives appears relatively stable. The political structure remains a republic with an executive President and a legislature known as the People's Majlis. Both are chosen in elections held every five years. Like the United States, Maldives limits its presidents to two terms in office (BBC News, 2016).
Economic Geography
Once known as “The Money Isles”, Maldives was the main producer of cowry shells, which were used in monetary transactions over most of Asia and much of East Africa; the cowry remains the symbol of the Maldives Monetary Authority. Historically, shipping and fishing have been the staple industries of the nation, which is not surprising given that its territory consists entirely of islands (MaldiveIsle, 2010).
Poor soil quality and scarce cultivatable land limit the practice of agriculture. Native fruits and vegetables mainly feed the local population, and most other living essentials are imported. Handicrafts and boatbuilding fuel local commerce, while more modern manufacturing and assembly are limited to a fish cannery, a few garment factories, and assorted consumer products. Many Maldivians work in the fishing industry, which employs almost half of the labor force (Brown et al., 1997).
With fishing being the main source of employment for Maldivians, a variety of fish is caught and exported for profit; the main types are skipjack tuna, yellowfin tuna, little tuna, and frigate mackerel. Fishing was once done by hand with a line and pole, but modern fishing vessels have enabled Maldivian fishermen to nearly triple their catch, while refrigeration allows longer storage times, enabling fishermen to travel farther out to sea (MaldiveIsle, 2010).
Although the Maldives appears short on natural resources, tourism has grown impressively over the last twenty years. The beauty and tranquility of the water, as well as the native flora and fauna, attract nearly 1.2 million tourists per year. Because of this uptick in tourism, skilled laborers such as construction workers, tile workers, and other craftsmen are experiencing an increase in work (Naish, 2016).
Overall, Maldives is a beautiful, lively nation with a vibrant history and an interesting culture. From early Dravidian culture to modern-day Islam, Maldives has remained steadfast in its resolve to preserve the atolls that nearly 300,000 people call home. However, despite local government efforts, the increasingly damaging effects of climate change and global warming threaten to eliminate this isolated gem from the world map entirely. Only time will tell whether efforts to reduce the impact of climate change can save this wonderful nation.
References
Brown, K., Turner, R., Hameed, H., & Bateman, I. (1997). Environmental carrying capacity and tourism development in the Maldives and Nepal. Environmental Conservation, 24(4), 316-325. Retrieved from https://www.cambridge.org/core/journals/environmental-conservation/article/div-classtitleenvironmental-carrying-capacity-and-tourism-development-in-the-maldives-and-nepaldiv/DC50C550C6E6403C034B77F3292FAB9F
History of Maldives. (2010). In Maldive Isle. Retrieved from http://www.maldiveisle.com/history.htm
Maldives Profile- Timeline. (2016). In BBC News. Retrieved from http://www.bbc.com/news/world-south-asia-12653969
Metz, H. C. & Library Of Congress. Federal Research Division. (1995) Indian Ocean: five island countries. [Washington, D.C.: Federal Research Division, Library of Congress: For sale by the Supt. of Docs., U.S. G.P.O] [Online Text] Retrieved from the Library of Congress, https://www.loc.gov/item/95016570/.
Naish, A. (2016). Tourist arrivals reach 1.2m in 2015. In Maldives Independent. Retrieved from http://maldivesindependent.com/business/tourist-arrivals-reach-1-2m-in-2015-121424

Sepsis An Overview Health And Social Care Essay

Sepsis is a life-threatening condition arising from the body's response to an infection. It tends to progress quickly and often is difficult to recognize. One of the nurse's roles is that of patient advocate; nurses are closest to the patient, placing them in a key position to identify any subtle changes at their earliest onset and prevent the spread of severe infection. Knowledge of the signs and symptoms of SIRS, sepsis, and septic shock is key to early recognition. Early recognition allows appropriate treatment to begin sooner, decreasing the likelihood of septic shock and life-threatening organ failure. Once sepsis is diagnosed, early and aggressive treatment can begin, which greatly reduces the mortality rates associated with sepsis.
sep•sis (ˈsep-səs) n. Sometimes called blood poisoning, sepsis is the body’s often deadly response to infection or injury (Merriam-Webster, 2011).
Sepsis is a potentially life-threatening condition caused by the immune system’s reaction to an
infection; it is the leading cause of death in intensive care units (Mayo Clinic Staff, Mayo Clinic
2010). It is defined by the presence of 2 or more SIRS (systemic inflammatory response
syndrome) criteria in the setting of a documented or presumed infection (Rivers, McIntyre,
Morro, Rivers, 2005 pg 1054). Chemicals that are released into the blood to fight infection
trigger widespread inflammation which explains why injury can occur to body tissues far from
the original infection. The body may develop the inflammatory response to microbes in the
blood, urine, lungs, skin, and other tissues. Manifestations of the systemic inflammatory response syndrome (SIRS) include abnormalities in temperature, heart rate, respiratory rate, and leukocyte count. Unlike sepsis itself, SIRS can also arise from a noninfectious cause. The condition may progress to severe sepsis or septic shock.
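To make the SIRS screen concrete, here is a minimal sketch in Python (not part of the original paper); the four thresholds are the commonly cited adult criteria, and all function and variable names are illustrative only.

```python
# Illustrative sketch only, not from the paper: counts the four classic
# SIRS criteria; thresholds are the commonly cited adult values.

def sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_thousands):
    """Return how many of the four SIRS criteria are met (0-4)."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                # abnormal temperature
        heart_rate > 90,                               # tachycardia
        resp_rate > 20,                                # tachypnea
        wbc_thousands > 12.0 or wbc_thousands < 4.0,   # abnormal leukocyte count
    ]
    return sum(criteria)

# Two or more criteria in the setting of a documented or presumed infection
# suggest sepsis and warrant further evaluation.
if sirs_criteria_met(temp_c=38.9, heart_rate=112, resp_rate=24, wbc_thousands=14.2) >= 2:
    print("SIRS threshold met: evaluate for sepsis")
```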
Severe sepsis is characterized by organ dysfunction, while septic shock results when blood
pressure decreases and the patient becomes extremely hypotensive, even with the administration
of fluid resuscitation (Lewis, Heitkemper, Dirksen, O’Brien, & Bucher, 2007, pg 1778). The
initial presentation of severe sepsis and septic shock is usually nonspecific. Patients admitted with a relatively benign infection can progress in a few hours to a more devastating form of the
disease. The transition usually occurs during the first 24 hours of hospitalization (Lewis, et al
2007, pg 1779). Severe sepsis is associated with acute organ dysfunction as inflammation may
result in organ damage (Mayo Clinic Staff, Mayo Clinic 2010). As severe sepsis progresses,
it begins to affect organ function and eventually can lead to septic shock, a sometimes fatal drop in blood pressure.
People who are most at risk of developing sepsis include the very young and the very old,
individuals with compromised immune systems, very sick people in the hospital and those who
have invasive devices, such as urinary catheters or breathing tubes (Mayo Clinic Staff, Mayo
Clinic, 2010). Black people are more likely than white people to get sepsis, and black men
face the highest risk (Mayo Clinic Staff, Mayo Clinic 2010).
Severe sepsis is diagnosed if at least one of the following signs and symptoms, which indicate organ dysfunction, is noted: areas of mottled skin, significantly decreased urine output, abrupt change in mental status, decreased platelet count, difficulty breathing, and abnormal heart function (Lewis et al, 2007 pg 1779). To be diagnosed with septic shock, a patient must have the
signs and symptoms of severe sepsis plus extremely low blood pressure (Mayo Clinic Staff,
Mayo Clinic 2010).
Sepsis is usually treated in the ICU with antibiotic therapy and intravenous fluids. These
patients require preventative measures for deep vein thrombosis, stress ulcer and pressure ulcers.
Hunter (2006) explains that the reason sepsis is rarely given public attention is that it is not a disease in itself, but a reaction of the body to infection in the setting of a lowered immunological response.
Sepsis is the leading cause of death in non-coronary intensive care units (ICUs) and the 10th
leading cause of death in the United States overall (Slade, Tamber and Vincent, 2010, pg 2).  The
incidence of severe sepsis in the United States is between 650,000 and 750,000 cases per year. Over 10 million cases of sepsis have been reported in the United States, based on a study of discharge data from 750 million hospitalizations over a 22-year period. Annually, approximately 750,000 people
develop sepsis and more than 200,000 cases are fatal (Slade, et al 2010, pg 1). More than 70% of
these patients have underlying co-morbidities and more than 60% of these cases occur in those
aged 65 years and older (Slade, et al 2010, pg 1). When patients with human immunodeficiency
virus are excluded, the incidence of sepsis in men and women is similar. A greater number of
sepsis cases are caused by infection with gram-positive organisms than gram-negative
organisms, and fungal infections now account for 6% of cases (Slade, et al 2010, pg 1). After
adjusting for population size, the annualized incidence of sepsis is increasing by 8% per year. The
incidence of severe sepsis is increasing greatest in older adults and the nonwhite population. The
rise in the number of cases is believed to be caused by the increased use of invasive procedures
and immunosuppressive drugs, chemotherapy, transplantation, and prosthetic implants and
devices, as well as the increasing problem of antimicrobial resistance (Slade, et al 2010, pg 1).
Despite advances in critical care management, sepsis has a mortality rate of 30 to 50 percent and
is among the primary causes of death in intensive care units (Brunn and Platt, 2006).
It is believed that the increasing incidence of severe sepsis is due to the growing elderly population, a result of increasing longevity among people with chronic diseases, and to the
high prevalence of sepsis developing among patients with acquired immune deficiency syndrome
(Slade, et al 2010, pg 1).
During an infection, the body’s defense system is activated to fight the attacking pathogens.
These invading pathogens, especially bacteria, possess receptive lipopolysaccharide (LPS)
coverings or release exotoxins and endotoxins that activate the T-cells and macrophages and
trigger the Toll-like receptors (TLR’s) to respond by releasing antibodies, eicosanoids and
cytokines such as tumor necrosis factor (TNF) and interleukins. The antigens may also result in
the production of lysozymes and proteases, cationic proteins and lactoferrin that can recognize
and kill invading pathogens. Different microbes also induce various profiles of TNF and
interleukin to be released. These molecules result in a heightened inflammatory response of the body and vascular dilation. The TLR's also affect a different cascade that involves coagulation pathways, which helps prevent bleeding at the area of infection. With so many molecular responses and signals, recognition of the molecules sometimes fails, and the response attacks even the body's own endothelial cells. These compounded immune and inflammatory actions result in the development of the symptoms of sepsis (Hunter, 2006 pg 668; Van Amersfoort, 2001 pg 400). Brunn and Platt (2006) believe that events leading to breakdown of tissue, such as injury or infection, which naturally activate the immune system, are a major cause of sepsis. During host infection, the release of tumor necrosis factor and interleukin-1 signals the dilation of the arteries and inflammation. These released cytokines also
activate the coagulation pathway to prevent fibrinolysis but an increase in the concentration of
these molecules may result in abnormalities in the host’s defense system (Gropper, 2004 pg 568).
The common belief that sepsis is caused by endotoxins released by pathogens is well established, but genomic advancements are shedding light on current insights that sepsis can also occur without endotoxin triggers, that is, even without microbial infection (Gropper, 2004 pg 568).
Diagnosing sepsis can be difficult because its signs and symptoms can be caused by other
disorders. Doctors often order a battery of tests to try to pinpoint the underlying infection. Blood tests are ordered, along with additional laboratory tests on fluids such as urine and cerebrospinal fluid to check for bacteria and infection, and on wound secretions if an open wound appears infected. In addition, imaging tests such as X-ray, computerized tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are ordered to locate the source of an infection. Early recognition boosts a patient's chances of surviving sepsis.
Sepsis should be treated as a medical emergency. In other words, sepsis should be treated as
quickly and efficiently as possible as soon as it has been identified. This means rapid
administration of antibiotics and fluids. A 2006 study showed that the risk of death from sepsis
increases by 7.6% with every hour that passes before treatment begins (Mayo Clinic Staff, Mayo
Clinic 2010). Early, aggressive treatment boosts the chances of surviving sepsis. People with
severe sepsis require close monitoring and treatment in a hospital intensive care unit. Lifesaving
measures may be needed to stabilize breathing and heart function (Mayo Clinic Staff, Mayo
Clinic 2010). People with sepsis usually need to be in an intensive care unit (ICU). As soon as
sepsis is suspected, broad spectrum intravenous antibiotic therapy is begun. The number of
antibiotics may be decreased when blood tests reveal which particular bacteria are causing the
infection. The source of the infection should be discovered, if possible. This could mean more
testing. Infected intravenous lines or surgical drains should be removed, and any abscesses
should be surgically drained. Oxygen, intravenous fluids, and medications that increase blood
pressure may be needed. Dialysis may be necessary if there is kidney failure, and a breathing
machine (mechanical ventilation) if there is respiratory failure (Mayo Clinic Staff, Mayo Clinic,
2010).
While severe sepsis requires treatment in a critical care area, its recognition is often made
outside of the Intensive Care Unit (ICU). Because nurses are at the side of a patient from admission to discharge, they are ideally positioned to be the first to recognize sepsis.
Thorough assessments are crucial and being able to recognize even the most minimal changes in
a patient could be the difference between life and death.
Once severe sepsis is confirmed, key aspects of nursing care are related to providing
comprehensive treatment. Pain relief and sedation are important in promoting patients’ comfort.
Meeting the needs of patients’ families is also an essential component of care. Research on the
needs of patients’ families during critical illness supports provision of information as an
important aspect of family care (Gropper et al, 2004 pg. 569). Teaching patients and their
families is also essential to ensure that they understand various treatments and interventions
provided in severe sepsis.
Ultimately, prevention of sepsis may be the single most important measure for control
(Mayo Clinic Staff, Mayo Clinic, 2010). Hand washing remains the most effective way to
reduce the incidence of infection, especially the transmission of nosocomial infections in
hospitalized patients (Mayo Clinic Staff, Mayo Clinic, 2010). Good hand hygiene can be
achieved by using either a waterless, alcohol-based product or antibacterial soap and water with
adequate rinsing. Using universal precautions, adhering to infection control practices, and
instituting measures to prevent nosocomial infections can also help prevent sepsis (Lewis, et al
2007, pg 248). Nursing measures such as oral care, proper positioning, turning, and care of
invasive catheters are important in decreasing the risk for infection in critically ill patients
(Fourrier, Cau-Pottier, Boutigny, Roussel-Delvallez, Jourdain, Chopin, 2005 pg 1730). Newly
released guidelines on the prevention of catheter-related infections stress the use of surveillance,
cutaneous antisepsis during care of catheter sites, and catheter-site dressing regimens to
minimize the risk of infection (Fourrier, 2005 pg. 1731). Other aspects of nursing care such as
sending specimens for culture because of suspicious drainage or elevations in temperature,
monitoring the characteristics of wounds and drainage material, and using astute clinical
assessment to recognize patients at risk for sepsis can contribute to the early detection and
treatment of infection to minimize the risk for sepsis.
Critical care nurses are the healthcare providers most closely involved in the daily care of
critically ill patients and so have the opportunity to identify patients at risk for and to look for
signs and symptoms of severe sepsis (Kleinpell, Goyette, 2003 pg 120). In addition, critical care
nurses are also the ones who continually monitor patients with severe sepsis to assess the effects
of treatment and to detect adverse reactions to various therapeutic interventions. Use of an
intensivist-led multidisciplinary team is designated as the best-practice model for the intensive
care unit, and the value of team-led care has been shown (Kleinpell, et al 2003, pg 121). As key
members of intensivist-led multidisciplinary teams, critical care nurses play an important role in
the detection, monitoring, and treatment of sepsis and can affect outcomes in patients with severe
sepsis (Kleinpell, et al 2003, pg 121).
Five Priority Nursing Diagnoses
Diagnosis #1: Deficient fluid volume related to vasodilation of peripheral vessels and leaking of capillaries.
Intervention #1: Watch for early signs of hypovolemia, including restlessness, weakness, muscle cramps, headaches, inability to concentrate, and postural hypotension.
Rationale #1: Late signs include oliguria, abdominal or chest pain, cyanosis, cold clammy skin, and confusion (Kasper et al, 2005).
Intervention #2: Monitor for the existence of factors causing deficient fluid volume (e.g., vomiting, diarrhea, difficulty maintaining oral intake, fever, uncontrolled type 2 diabetes, diuretic therapy).
Rationale #2: Early identification of risk factors and early intervention can decrease the occurrence and severity of complications from deficient fluid volume. The gastrointestinal system is a common site of abnormal fluid loss (Metheny, 2000).
Intervention #3: Monitor daily weight for sudden decreases, especially in the presence of decreasing urine output or active fluid loss. Weigh the client on the same scale with the same type of clothing at same time of day, preferably before breakfast.
Rationale #3: Body weight changes reflect changes in body fluid volume (Kasper et al, 2005). Weight loss of 2.2 pounds is equal to fluid loss of 1 liter (Linton & Maebius, 2003).
Diagnosis #2: Imbalanced nutrition: less than body requirements, related to anorexia and generalized weakness.
Intervention #1: Monitor for signs of malnutrition, including brittle hair that is easily plucked, bruising, dry skin, pale skin and conjunctiva, muscle wasting, smooth red tongue, cheilosis, “flaky paint” rash over the lower extremities, and disorientation (Kasper, 2005).
Rationale #1: Untreated malnutrition can result in death (Kasper, 2005).
Intervention #2: Recognize that severe protein calorie malnutrition can result in septicemia from impairment of the immune system or organ failure including heart failure, liver failure, respiratory dysfunction, especially in the critically ill client.
Rationale #2: Untreated malnutrition can result in death (Kasper, 2005)
Intervention #3: Note laboratory test results as available: serum albumin, prealbumin, serum total protein, serum ferritin, transferrin, hemoglobin, hematocrit, and electrolytes.
Rationale #3: A serum albumin level of less than 3.5 g/100 milliliters is considered an indicator of risk of poor nutritional status (DiMaria-Ghalli & Amella, 2005). Prealbumin level is reliable in evaluating the existence of malnutrition (Devoto et al, 2006).
Diagnosis #3: Ineffective tissue perfusion related to decreased systemic vascular resistance.
Intervention #1: If the client has a period of syncope or other signs of a possible transient ischemic attack, assist the client to a resting position, perform a neurological assessment and report to the physician.
Rationale #1: Syncope may be caused by dysrhythmias, hypotension caused by decreased tone or volume, cerebrovascular disease, or anxiety. Unexplained recurrent syncope, especially if associated with structural heart disease, is associated with a high risk of death (Kasper et al, 2005).
Intervention #2: If the client experiences dizziness because of postural hypotension when getting up, teach methods to decrease dizziness, such as remaining seated for several minutes before standing, flexing the feet upward several times while seated, rising slowly, sitting down immediately if feeling dizzy, and trying to have someone present when standing.
Rationale #2: Postural hypotension can be detected in up to 30% of elderly clients. These methods can help prevent falls (Tinetti, 2003).
Intervention #3: If symptoms of a new cerebrovascular accident occur (e.g., slurred speech, change in vision, hemiparesis, hemiplegia, or dysphasia), notify a physician immediately.
Rationale #3: New onset of these neurological symptoms can signify a stroke. If the stroke is caused by a thrombus and the client receives thrombolytic treatment within 3 hours, effects can often be reversed and function improved, although there is an increased risk of intracranial hemorrhage (Wardlaw, et al, 2003)
Diagnosis #4: Ineffective thermoregulation related to infectious process, septic shock.
Intervention #1: Monitor temperature every 1 to 4 hours or use continuous temperature monitoring as appropriate.
Rationale #1: Normal adult temperature is usually identified as 98.6 degrees F (37 degrees C), but in actuality the normal temperature fluctuates throughout the day. In the early morning it may be as low as 96.4 degrees F (35.8 degrees C) and in the late afternoon or evening as high as 99.1 degrees F (37.3 degrees C) (Bickley & Szilagyi, 2007). Disease, injury, and pharmacological agents may impair regulation of body temperature (Kasper et al, 2005).
Intervention #2: Measure the temperature orally or rectally. Avoid using the axillary or tympanic site.
Rationale #2: Oral temperature measurement provides a more accurate temperature than tympanic measurement (Fisk & Arcona, 2001; Giuliano et al, 2000). Axillary temperatures are often inaccurate. The oral temperature is usually accurate even in intubated clients (Fallis, 2000). The SolaTherm and DataTherm devices correlated strongly with core body temperatures obtained from a pulmonary artery catheter (Smith, 2004). A study performed in Turkey found that axillary and tympanic temperatures were less accurate than oral temperatures (Devrim, 2007).
Intervention #3: Take vital signs every 1 to 4 hours, noting changes associated with hypothermia: first, increased blood pressure, pulse, and respirations; then decreased values as hypothermia progresses.
Rationale #3: Mild hypothermia activates the sympathetic nervous system, which can increase the levels of vital signs; as hypothermia progresses, the heart becomes suppressed, with decreased cardiac output and lowering of vital sign readings (Ruffolo, 2002; Kasper et al, 2005).
Diagnosis #5: Risk for impaired skin integrity related to desquamation caused by disseminated intravascular coagulation.
Intervention #1: Monitor skin condition at least once a day for color or texture changes, dermatological conditions, or lesions. Determine whether the client is experiencing loss of sensation or pain.
Rationale #1: Systematic inspection can identify impending problems early (Ayello & Braden, 2002; Krasner, Rodeheaver & Sibbald, 2001).
Intervention #2: Identify clients at risk for impaired skin integrity as a result of immobility, chronological age, malnutrition, incontinence, compromised perfusion, immunocompromised status or chronic medical conditions such as diabetes mellitus, spinal cord injury or renal failure.
Rationale #2: These client populations are known to be at high risk for impaired skin integrity (Maklebust & Sieggreen, 2001; Stotts & Wipke-Tevis, 2001). Targeting variables (such as age and Braden Scale risk category) can focus assessment on particular risk factors (e.g., pressure) and help guide the plan of prevention and care (Young et al, 2002).
Intervention #3: Monitor the client’s skin care practices, noting type of soap or other cleansing agents used, temperature of water and frequency of skin cleansing.
Rationale #3: Individualize plan according to the client’s skin condition, needs, and preference (Baranoski, 2000).
As a nursing student with a strong interest in working with trauma patients, I am intrigued as to why some trauma patients are more susceptible to contracting sepsis than others. Therefore, my suggestion for future research would be to determine whether there is an underlying factor that we, as healthcare professionals, are overlooking. Apparently, I am not alone in my thinking; in performing additional reading on sepsis, I was pleasantly surprised to
learn that an investigation into this matter is underway. Hinley (2010), a staff writer for Medical
News Today, reports how an emergency room nurse’s curiosity about why some trauma patients
develop sepsis while others don’t has led to an expanded career as a researcher studying the
same, burning question.
Dr. Beth NeSmith, assistant professor of physiological and technological nursing in the
Medical College of Georgia School of Nursing received a three-year, $281,000 National
Institutes of Health grant in September, 2010 to examine risk factors for sepsis and organ failure
following trauma. Based on her own research, Dr. NeSmith concluded that trauma kills more
than 13 million Americans annually and sepsis is the leading cause of in-hospital trauma deaths,
yet little data existed to explain differences in population vulnerability to these deadly outcomes.
NeSmith believes lifetime chronic stress may be the culprit and a simple test on hair may identify
those at risk. Her theory is that "a person who grows up with chronic stress, such as socio-
economic stress or abuse, will have a different response to trauma in terms of their inflammation
profile,” NeSmith said. “Inflammation is a normal body response to trauma, but if it gets out of
hand it’s dangerous. The only care for it is supportive until – if – the body gets better.” (Hinley,
P., Medical News Today, 2010)
As the trauma clinical nurse specialist at MCG Health System from 1997-2003, NeSmith was
intrigued by the limited treatment options available for sepsis. Her grant will allow her to test the
theory that people with existing chronic stress respond differently physiologically to trauma than
non-stressed individuals. NeSmith spends three days a week in the lab working with basic
science research techniques.
Nurses play a critical role in improving outcomes for patients with sepsis. To save the lives of
those with sepsis, all nurses, no matter where they work, must develop their skills for
recognizing sepsis early and initiating appropriate therapy. With nurses dedicated to
understanding and stopping this deadly disorder, the goal of reducing mortality will be realized.  
 

The Digital Dice Game Project: An Overview

A traditional die is a small polyhedral object, usually cubic in shape, that generates a random number in the range of one to six. There are also non-cubical dice with different numbers of faces, such as tetrahedrons (four faces), octahedrons (eight faces), or dodecahedrons (twelve faces). A digital dice is an electronic alternative that replaces the traditional device with a numeric display and is controlled by a switch. At the push of a button, a number from one to six is displayed at random on the 7-segment display.
1.1 Rules of the Game:
The Digital Dice game consists of two players, Player A and Player B.
Both the players, Player A and Player B, are given a switch each to control the dice.
In this game, only one player is allowed to play at a time, and only that player's input is counted. An LED indicates whose turn it is.
The result of each player's throw is added to their previous total, giving their running score.
The maximum count is taken as 30. When either player reaches this maximum count, the game ends and that player (Player A or Player B) has won.
A beeper, along with a light, indicates the winner's victory, as simulated in the software sketch below.
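The following Python sketch (illustrative only; the real project implements this in hardware, and all names here are hypothetical) simulates the rules above in software:

```python
import random

TARGET = 30  # game ends when a player's score reaches/crosses 30

def play_digital_dice():
    scores = {"Player A": 0, "Player B": 0}
    turn = "Player A"                      # an LED would indicate whose turn it is
    while True:
        throw = random.randint(1, 6)       # the random number generator circuit
        scores[turn] += throw              # the adder circuit accumulates the score
        print(f"{turn} throws {throw}, score = {scores[turn]}")
        if scores[turn] >= TARGET:         # the 'Game-Over' disabling circuit
            print(f"{turn} wins!")         # beeper and light indicate victory
            return
        turn = "Player B" if turn == "Player A" else "Player A"  # flip-flop toggles

play_digital_dice()
```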
Chapter 2: Circuit Description
This chapter gives a detailed description of the block diagram for the Digital Dice game project, discussing each of its main parts in detail.
2.1 Block Diagram
The main parts of the block diagram as shown in figure 1 are:

2 – Clock pulses
Random Number Generator
Digital Dice Display
2 Adder Circuits (including the seven – segment FND display)
‘Game-Over’ disabling circuit
Reset switch

2.2 Clock Pulse
A clock pulse is a signal used to synchronize the operations of an electronic system; clock pulses are continuous and precisely spaced changes in voltage. The purpose of this part of the circuit is to supply appropriate clock pulses to the following circuits so that the game can progress.
For this purpose, two clocks have been employed, one for each player. A special circuit is needed so that a player who has already played cannot play again until the opponent has had a chance. This is done using the toggling feature of the J-K flip-flop (IC 7476). Each of the two clock pulses is ANDed with one of the two outputs of the J-K flip-flop, Q and Q'. At any point in time, only one of Q and Q' is HIGH, so only one player is able to play at a time, as per the rules of the game; the clock of the other player, being ANDed with zero, is ineffective. The appropriate clock then passes through the OR gate and into the clock input of the J-K flip-flop, toggling it and giving the other player a chance to play. The output of the OR gate is supplied to the rest of the circuit as a ‘common clock’.
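The turn-control logic just described can be modelled behaviourally as follows; this is a software sketch of the gating, not the actual wiring, and all names are illustrative:

```python
# Behavioural model of the turn-control logic: each player's push-button
# clock is ANDed with Q or Q', so only one clock can pass at a time; the
# passing pulse (via the OR gate) toggles the J-K flip-flop for the next turn.

q = True  # Q output of the J-K flip-flop; Q' is simply (not q)

def common_clock(clock_a, clock_b):
    """Return the gated 'common clock', toggling the flip-flop when it fires."""
    global q
    gated_a = clock_a and q          # Player A's clock ANDed with Q
    gated_b = clock_b and (not q)    # Player B's clock ANDed with Q'
    pulse = gated_a or gated_b       # OR gate output: the common clock
    if pulse:
        q = not q                    # toggle: the other player gets the next turn
    return pulse

print(common_clock(clock_a=False, clock_b=True))   # False: not Player B's turn
print(common_clock(clock_a=True, clock_b=False))   # True: pulse passes, turn flips
```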
2.3 Random Number Generator
The main aim of this part is to generate any number between 1 and 6 (inclusive), i.e., a 3-bit binary number, just as each face of a cubic die represents a number. The number generated by this circuit does not follow any predictable sequence; it is effectively random, like the throw of an actual die.
This is facilitated by the use of IC NE-555, which generates a series of output clock pulses. The resistors and capacitors surrounding it set a particular RC time constant, and the IC continues to generate output clock pulses until the end of this time period. When the appropriate clock pulse is obtained from the clock-pulse circuit discussed above, the pulses generated by IC NE-555 are fed to the next integrated circuit, a Binary Ripple Counter (IC – 7493); a Decade Counter (IC – 7490) could also be used. The Binary Ripple Counter counts from 0 to 5, i.e., 3-bit numbers, provided the MSB (most significant bit) of the counter is not considered. After the count reaches five, the counter resets to zero. Because many clock pulses arrive within a single time-constant period, the counter cycles through 0-5 many times and stops on an effectively arbitrary value. This is the random number generation.
However, the numbers obtained from the above procedure are between 0 and 5 (inclusive) and the desired numbers are from 1 – 6. This is taken into account by including another Integrated circuit, Binary Parallel Adder (IC – 7483) which increments the above generated number by 1 as it is between 0 and 5. The output of the Binary Parallel Adder is the final desired random number which is then fed into the Digital Dice-Display circuit as shown in the figure 1.
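A behavioural sketch of this randomizer follows (Python, illustrative names only; the NE-555 burst is modelled as a random number of pulses rather than an RC time constant):

```python
import random

def mod6_counter(pulses):
    """IC-7493 wired as a mod-6 counter: it cycles 0-5, resetting after 5."""
    count = 0
    for _ in range(pulses):
        count = (count + 1) % 6
    return count

def roll():
    # The NE-555 free-runs while the button is held, so the number of pulses
    # delivered in one press is effectively unpredictable.
    pulses = random.randint(0, 5999)
    raw = mod6_counter(pulses)   # a value between 0 and 5
    return raw + 1               # the IC-7483 adds binary 0001, giving 1-6

print([roll() for _ in range(10)])
```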
2.4 Digital Dice Display Circuit
The only purpose of this part is to show the face of the Dice corresponding to the number generated by the randomizer circuit.
This is done with the help of a BCD-to-7-segment decoder, which is used to drive a common-anode 7-segment display. The output of the circuit discussed above forms the BCD input of the 7-segment decoder. The random number generated by the random number generator circuit is displayed on the 7-segment display when a player pushes the button. The number displayed is any number between 1 and 6, in a completely random sequence.
2.5 Adder Circuits
This is the core part of the game. The numbers generated must be accumulated independently for each player in the form of a score. As discussed earlier, each player's score is incremented on alternate throws of the dice. The Adder circuit performs this function.
The Adder circuit is made up of a group of 3 AND gates. One input of each AND gate is a bit of the random number, and the other input is one of Q and Q' (the outputs of the J-K flip-flop discussed above in the ‘Clock Pulse’ section). Hence, at any time, only the appropriate player's score gets incremented by the number on the dice, while the score of the other player remains the same (i.e., has 0 added to it).
The outputs of the 3 AND gates enter the Binary Parallel Adder (IC – 7483) as the A inputs; the most significant bit of A is kept grounded. The B inputs come from the output of a 4-bit register (IC – 74194), which stores the least significant (units) digit of the final score. There are two Binary Parallel Adders, and the output of this 1st adder (IC – 7483) is connected to the 2nd adder (IC – 7483), which converts the added binary number into its decimal equivalent and stores the output in the above-mentioned 4-bit register (IC – 74194). This conversion is produced with the help of different logic gates (AND and OR gates). When the binary sum is greater than 9, 6 (0110) is added to it; otherwise 0 (0000) is added, generating the equivalent decimal units digit. Therefore, the units digit remains less than or equal to 9, representing the score in decimal form.
The same technique is applied to the most significant (tens) digit of the score. Here, 1 is added to the tens digit via the Binary Parallel Adder (IC – 7483) if the binary sum generated above is greater than 9. The other input for this 3rd IC-7483 comes from another 4-bit register (IC – 74194). Therefore, the tens digit can also show decimal numbers from 0-9.
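The add-6 step described above is the standard BCD decimal-adjust. The sketch below (illustrative only; names are hypothetical) accumulates the score as two BCD digits the way the adder/register chain does:

```python
def bcd_add_throw(tens, units, throw):
    """Add a throw (1-6) to a score held as two BCD digits, as the adders do."""
    s = units + throw                    # 1st IC-7483: units digit plus throw
    if s > 9:
        s += 6                           # decimal adjust: add 6 (0110) when sum > 9
        carry, units = 1, s & 0b1111     # low 4 bits are the corrected units digit
    else:
        carry, units = 0, s
    tens += carry                        # the carry propagates into the tens digit
    return tens, units

tens, units = 0, 0
for throw in (6, 5, 4):                  # three throws: 6 + 5 + 4 = 15
    tens, units = bcd_add_throw(tens, units, throw)
print(tens, units)                       # 1 5, displayed as '15'
```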
The same Most Significant Bit and Least Significant Bit numbers from the Adders are given as input to Integrated circuit, BCD (IC – 7447), which is the driver IC to the Seven-Segment LED display. The outputs of this Integrated circuit are fed into the LT-543, to show the corresponding numbers.
An important point to be noted here is that the same ‘common clock’ is given to the above mentioned IC-74194 registers so that they can output the stored numbers each time.
2.6 ‘Game – Over’ Disabling Circuit
This part of the block diagram indicates the end of the game, i.e., ‘Game-Over’. The game is considered over once the score of either player (Player A or Player B) reaches or crosses 30.
The 2nd most significant bit of the tens digit of each player's decimal score forms one input of the NOR gate. When either score reaches or crosses 30, that bit becomes HIGH, and the NOR output becomes LOW (in a NOR gate, when any one of the inputs is HIGH, the output is LOW). This LOW output is ANDed with the clock pulse supplied to the J-K flip-flop, so the flip-flop no longer receives a clock and its toggling feature stops. The random number generation therefore stops, the dice display remains unchanged, and the scores remain fixed: the game has come to an end.
The winning player (Player A or Player B) is identified by the tone of the buzzer/alarm along with an LED that provides an indicating light. This indicator has one end on the 2nd most significant bit mentioned above and the other end grounded.
2.7 Reset Switch
This is also a very important part of the game. The function of this switch is to bring the game back to start from any point of time.
This is performed with the help of a Combinational Circuit and a ‘Push-to-OFF’ switch. This is a kind of switch which has its 2 ends always connected, except when pressed/pushed. Thus, one end of the switch is grounded. Therefore, by default this makes the clear inputs of all registers HIGH. Here, the registers employ Active Low Clear inputs.
When the switch is not pushed, HIGH clear is fed to the registers via a NOT gate. Therefore, normal functioning of all the registers is obtained. Also, the output drawn from the OR gate then depends on the output from the AND gate (the 2 inputs of the AND gate come from the 2nd Most Significant Bit and 3rd Most Significant Bit of the output of the Binary Ripple Counter, IC – 7493).
When the switch is pressed, the connection between its two ends is broken, making the clear input to all registers LOW via the NOT gate (i.e., all registers are cleared). Therefore, one set of inputs to the IC – 7483 adders becomes 0000. The input of the OR gate also becomes HIGH, overriding its 2nd input and providing a HIGH output to the R0(1) clear input of the binary ripple counter, IC – 7493. The counter is then reset as the 2nd clear input R0(2) also becomes HIGH, producing a 0000 output. This forms the other input of the Binary Parallel Adder, IC – 7483, so the adder circuits display 00 on the 7-segment display. The 0000 output is also carried via the Binary Parallel Adder, IC – 7483 (here the input carry is also 0), to the dice-display circuit, which displays 00.
Chapter 3: Random Number Generation Circuit
This chapter explains the circuit diagram required for the random number generation and the digital dice display, and describes its operation.
3.1 Circuit Diagram
Figure 2 below shows the circuit diagram used for the random number generation of a digital dice.
3.2 Operation
Figure 2 shows the circuit diagram used to generate any random number between 1 and 6 and display it on the 7-segment display. In operation, a clock frequency of 50 Hz is generated by the pulse generator and ANDed with the push button. When the push button is pressed, a series of clock pulses passes through; this combination of the clock and the push button forms the counter clock for the Binary Ripple Counter (IC – 7493). The counter behaves as a mod-6 counter, counting from 0 to 5; when the count reaches six (0110), QB (with value 2) and QC (with value 4), connected to R0(1) and R0(2) respectively, force the counter back to zero.
The output of this counter is connected to input A of the Binary Parallel Adder (IC – 7483), i.e., QA, QB, QC, QD to A1, A2, A3, A4 respectively. The function of the adder is to add the number 1 (binary 0001) to the output of the binary ripple counter. This is done by connecting pin B1 to the supply to get a value of 1 and grounding pins B2, B3, and B4.
The output of the Adder is connected to the BCD – 7 segment display, i.e. the pins 9, 6, 12, 15 are connected to pins 7, 1, 2, 6 respectively. Therefore, any number between 1 and 6 is displayed in a totally random manner in the form of its decimal equivalent on the 7 – segment display.
This completes the random number generation and the Digital – dice display parts of the block diagram.
3.3 Components Assembled
The following components have been assembled on a Bread Board in order to create a random number display between 1 and 6.
3.3.1 Counter
A counter is a device which stores the number of times a particular event or process has occurred, usually in connection with a clock signal. Every counter requires a ‘square wave’ clock signal to make it count. A square-wave clock signal (as shown in figure 3) is a digital waveform with sharp transitions between low (0V) and high (+Vs) voltage, such as the output from a 555 astable timer; here it comes from the pulse generator.
Examples include digital clocks, watches, and the timers found in a range of appliances from microwave ovens to VCRs; counters are also found in everything from automobiles to test equipment.
There are mainly two types of counters:
Ripple Counters
In a ripple counter, there is a chain of flip-flops, with the output of each flip-flop forming the input of the next. Every time the input of a flip-flop changes from high to low (on the falling edge), the state of that flip-flop's output changes.
Ripple counters mostly count on the falling edge, the high-to-low transition of the clock signal. This edge is used because it makes linking counters easier: the most significant bit (MSB) of one counter can drive the clock input of the next. This works because the next bit must change state when the previous bit changes from high to low, the point at which a carry must occur to the next bit.
The disadvantages of this counter are:
There is a slight delay (known as a Ripple Delay) as the effect of the clock ‘ripples’ through the chain of flip-flops. But in many circuits, this is not a problem as it is far too short to be seen on a display.
In a logic system, the connection to the ripple counter outputs will cause false counts which may produce ‘glitches’ in the logic system and thereby disrupt its operation. For example, a ripple counter changing from 0111 (7) to 1000 (8) will briefly show 0110 (6), 0100 (4) and 0000 (0) before 1000.
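These brief false counts can be reproduced with a small software model; the sketch below (illustrative only, not part of the project hardware) toggles one flip-flop at a time, as the ripple does, and records the transient states between 0111 and 1000:

```python
def ripple_increment(bits):
    """Advance a ripple counter one clock edge, returning every transient state.

    bits[0] is QA (the LSB).  Each stage toggles on the falling edge (1 -> 0)
    of the stage before it, so the carry 'ripples' one flip-flop at a time.
    """
    states = []
    i = 0
    while i < len(bits):
        falling = bits[i] == 1      # this stage is about to fall and clock the next
        bits[i] ^= 1
        states.append(''.join(str(b) for b in reversed(bits)))
        if not falling:
            break                   # no falling edge, so the ripple stops here
        i += 1
    return states

print(ripple_increment([1, 1, 1, 0]))   # counting 0111 (7) -> 1000 (8)
# ['0110', '0100', '0000', '1000']: the brief false counts 6, 4 and 0
```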
Synchronous Counter
A synchronous counter has a more complex internal structure as compared to a ripple counter. The advantage of this counter over the ripple counter is that it ensures that all its outputs change precisely together on each clock pulse, thereby avoiding the brief false counts which occur with ripple counters.
Most synchronous counters count on the rising-edge (refer figure 5) which is the low to high transition of the clock signal. They usually have carry out and carry in pins for linking counters without introducing any ripple delays.
These counters have a synchronous reset which occurs on the next clock pulse, rather than immediately as in a ripple counter. Since the reset must be performed at the maximum count required, it is a very important function.
3.3.1.1 Binary Ripple Counter (IC – 7493)
This is the counter used in the circuit. Figure 3 shows a clock signal driving a 4-bit (0-15) counter. It is connected with LEDs (Light Emitting Diodes) to show the state of the clock and counter outputs QA – QD. And Q indicates the output.
A counter can be used to reduce the frequency of an input signal and thus behaves as a frequency-division counter (as shown in figure 7), i.e., counters can be used to reduce the frequency of an input (clock) signal. Each stage of a counter halves the frequency, so here the LED on the first output QA flashes at half the frequency of the clock LED; QA is 1/2, QB flashes at 1/4, QC at 1/8, and QD at 1/16 of the clock frequency. The outputs are usually labeled Q1, Q2, and so on, where Qn is the nth stage of the counter, representing a division of the clock frequency by 2^n.
Division by numbers that are not powers of 2 is possible by resetting counters. Counters can be reset to zero before their maximum count by connecting one (or more) of their outputs to their reset input. The counter is in two sections: Clock A for QA and Clock B for QB, QC and QD.
If the reset input is ‘active-low’, a NOT or NAND gate will be required to produce a low output at the desired count. ‘Active-low’ is indicated by a line drawn above the reset label (say ‘reset-bar’). In a ripple counter, the reset takes effect immediately at the desired count.
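Both behaviours described here, halving the frequency at each stage and resetting early to divide by a number that is not a power of 2, can be modelled in software; the sketch below is illustrative only:

```python
def counter_outputs(n_clocks, reset_at=16):
    """Model a 4-bit counter, optionally reset before its maximum count.

    With reset_at=16 the counter divides the clock frequency by 16; with
    reset_at=6 it becomes the mod-6 counter used in the dice circuit (QB and
    QC, values 2 and 4, fed back to the reset pins).
    """
    count, seq = 0, []
    for _ in range(n_clocks):
        count = (count + 1) % reset_at
        seq.append(count)
    return seq

seq = counter_outputs(12, reset_at=6)
print(seq)                        # [1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0]
print([c & 1 for c in seq])       # QA alone toggles at half the clock frequency
```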
3.3.1.2 Decade Counter (IC – 7490):
A decade counter (refer figure 8) is a binary counter designed to count ten states, from 0 (0000) to 9 (1001 in binary); i.e., it counts the number of pulses arriving at its input. The pulses are counted up to 9, and the count appears in binary form on four pins of the IC. When the tenth pulse arrives at the input, the binary output is reset to zero (0000) and a single pulse appears at another output pin.
This function is performed because the NAND output goes low and resets the counter to zero. Output D going low can serve as a CARRY OUT signal, indicating that there has been a count of ten. So, for ten pulses at the input, there is one pulse at the output; the 7490 decade counter therefore divides the frequency of the input by ten. If this pulse is applied to the input of a second 7490 decade counter, the second IC will count the pulses from the first, i.e., for 100 input pulses there will be one output pulse.
3.3.2 Binary Parallel Adder (IC – 7483)
The parallel adder follows the binary counter: once the counter produces its count from 0-5, the value enters the adder, where binary 0001 is added to it.
The adder is the central computational element in any arithmetic circuit. The function of the parallel adder is to add two n-bit numbers together. For this purpose, n full-adders are cascaded, with each full-adder representing a column in the long addition. The carry signals ‘ripple’ through the adder from right to left.
Figure 9 indicates the working of a logic full adder/subtractor. The adder circuit has a mode-control signal M which determines whether the circuit operates as an adder or a subtractor. Each XOR gate receives one input from M and the other from a bit of B, i.e., Bi. If both inputs of an XOR gate are the same, its output is 0; if the inputs differ, its output is 1.
When M = 0, the output of the XOR gate is Bi ⊕ 0 = Bi, so the addition function takes place, i.e., the circuit performs A plus B (A + B). When M = 1, the output of the XOR gate is Bi ⊕ 1 = Bi'. Since this is the complement of B, and M also supplies an input carry of 1, the subtraction function takes place: A plus the 1's complement of B plus 1, which is the same as A minus B (A – B).
Every digit position consists of two operands and a carry. The operation of an adder is to add the two operands and the carry-in together. If the result is less than the base, this sum is output with a carry-out of 0; otherwise the base is subtracted from the total of the two operands and the carry-in, and this sum is output with a carry-out of 1.
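A bit-level software sketch of this adder/subtractor follows; it assumes, as is standard for this circuit though only implied above, that M also supplies the input carry. All names are illustrative:

```python
def add_sub_4bit(a, b, m):
    """4-bit adder/subtractor: m=0 computes a+b, m=1 computes a-b.

    Each B input passes through an XOR gate with M, so M=1 feeds the 1's
    complement of B into the adder; M is also the input carry, completing
    the 2's complement so that a + ~b + 1 = a - b.
    """
    carry = m                                    # M doubles as the carry-in
    result = 0
    for i in range(4):                           # one full-adder per bit position
        ai = (a >> i) & 1
        bi = ((b >> i) & 1) ^ m                  # XOR gate: Bi when M=0, Bi' when M=1
        s = ai ^ bi ^ carry                      # full-adder sum bit
        carry = (ai & bi) | (carry & (ai ^ bi))  # full-adder carry-out
        result |= s << i
    return result                                # low 4 bits of the answer

print(add_sub_4bit(0b0110, 0b0011, m=0))         # 6 + 3 = 9
print(add_sub_4bit(0b0110, 0b0011, m=1))         # 6 - 3 = 3
```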
3.3.3 BCD – 7 segment display decoder
Here, the output of the Binary parallel adder forms the input for this BCD – 7 segment decoder to display the random number from 1 – 6.
The inputs A-D of the BCD (Binary Coded Decimal) display driver are connected to the outputs of the parallel adder. The display driver consists of a network of logic gates that drive its outputs a-g high or low, lighting the required segments a-g of a 7-segment display as shown in the figure. Usually, a resistor is required in series with each segment to protect the LEDs; 330 Ω or 270 Ω is a suitable value for many displays with a 4.5V to 6V supply. For this project, however, only one 270 Ω resistor is used, connected between pins 3 (lamp test) and 8 (ground) of the integrated circuit.
There are two types of 7-segment displays:
Common Cathode (CC or SC): This display has all the LED cathodes connected together. It needs a display driver with outputs which go high to light each segment, i.e., the segments are illuminated with high voltages; for example, the IC-4511. Here the common cathode is connected to 0V. The IC-4511 is designed to drive a common-cathode display and thus would not work with a common-anode display.
Common Anode (CA or SA): This display has all the LED anodes connected together. It needs a display driver with outputs which go low to light each segment, i.e., the segments are illuminated by connecting them to low voltages; for example, the IC-7447 (BCD to 7-segment decoder), which is the IC used for this project. Here a resistor is connected in series between the common anode and +Vs.
The 7447 chip is used to drive the 7-segment display. The input to the 7447 is a binary number DCBA, where D is the 8s (1000), C the 4s (0100), B the 2s (0010), and A the 1s (0001). The IC-7447 is intended for BCD (binary coded decimal), which has input values from DCBA = 0000 (0) to DCBA = 1001 (9). Inputs from 10 to 15 (1010 to 1111 in binary) will light odd-looking display segments.
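The decoder's gate network amounts to a lookup from the BCD input to the segment lines a-g. A software analogue is sketched below; the segment patterns are the conventional ones, and since the real 7447 has active-low outputs for its common-anode display, its pins would show the inverse of these bits:

```python
# Conventional segment patterns for digits 0-9; a 1 means the segment is lit.
# The real IC-7447 has active-low outputs, so its pins would show the inverse.
SEGMENTS = {                # segment order: a b c d e f g
    0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001", 4: "0110011",
    5: "1011011", 6: "1011111", 7: "1110000", 8: "1111111", 9: "1111011",
}

def decode(d, c, b, a):
    """Decode BCD inputs DCBA (D = 8s ... A = 1s) to segment lines a-g."""
    value = 8 * d + 4 * c + 2 * b + a
    return SEGMENTS.get(value, "invalid (inputs 10-15 light odd segments)")

print(decode(0, 1, 1, 0))   # DCBA = 0110 -> digit 6 -> '1011111'
```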
The following functions can be performed on the IC – 7447:
This IC has open-collector outputs a – g, which can sink up to 40 mA.
A lamp test can be performed on the IC to check that all the segments are in working condition. This is done by holding the lamp test (LT) input of the IC low; all the display segments should then light (showing the number 8).
There is another function, the Blanking Input (BI): if it is held low, the display is blanked. The related Ripple Blanking Input (RBI) blanks the display only when the count input is zero (0000); otherwise the number is displayed. This is used to remove leading zeroes when several display digits are driven by a chain of counters, for example displaying 89 instead of 089. If more than one display is used, the Ripple Blanking Output (RBO) of the most significant 7447 is connected to the RBI of the next 7447 down the chain (i.e. the next most significant digit). If the RBI of the least significant 7447 is also tied low, the display will turn off when the number is 0.
This circuit can also be controlled by a PLC (Programmable Logic Controller), if the inputs to the BCD (Binary Coded Decimal) decoder come from the 4 output bits of the PLC output card.
Chapter 4: Summary
This chapter lists the achievements and developments of the project.
The following has been achieved in this project:
Successful design and simulation of the random number generation circuit along with the dice display: the block diagram of the digital dice game and the circuit diagram for the display of random numbers from 1 – 6 on the 7-segment display.
Successful assembly of the wiring, binary ripple counter (IC 7493), binary parallel adder (IC 7483) and BCD – 7-segment display decoder (IC 7447).
The development of this project is as follows.
The digital dice game is currently being assembled; post assembly, it will be used as a game to be played between two players.
More detailed circuit diagrams for the remaining parts of the block diagram will be designed.
 

Overview Of Health Inequality Health And Social Care Essay

Introduction
Overview of health inequality
Health inequality is the comprehensive term used to describe gaps, variations and disparities in the health achievements of individuals and of racial, ethnic, sexual orientation and socioeconomic groups in a society (Kawachi et al. 2002). It is generally described in terms of socioeconomic class, and its causes are mainly lifestyle factors (smoking, nutrition, alcohol consumption, exercise, weight, stress) and socioeconomic factors (income, socioeconomic group, housing, employment and educational status) (House of Commons Health Committee 2004).


People in the lower social class tend to have lower incomes, poorer education and poorer housing in deprived areas, and are also more likely to suffer worse health (Marmot 2010; The Black Report 1980; Acheson Report 1998). Life expectancy has increased and illness from disease, infection, hunger and dirty water has decreased over the years in the UK; the problem now, however, is disease of lifestyle, resulting in increasing levels of respiratory disease, obesity and its complications, heart disease, and cancer due to lack of exercise, smoking, drinking and bad diet (DoH 2010).
The Marmot report (2010) and the National Obesity Observatory (2010) both show the rise in obesity in both males and females over a period of 10 years, as shown in Figure 1 below.
Figure 1: Prevalence of adult morbid obesity (BMI 40 kg/m² or more), 3-year rolling averages, Health Survey for England.
Source: Information Centre for Health and Social Care, 2009.
Obesity in the lower social class has been identified as a growing health inequality in today's society by the Marmot report (2010). With 61% of the population overweight or obese, 21% still smoking, less than 40% of the nation meeting physical activity needs and 2.4 million people drinking over the recommended weekly amount, the disease of lifestyle remains a major concern and cost burden to the government (DoH 2010), with obesity and the related health issues costing the NHS £4.2 billion in 2007 (NOO 2010).
This essay is going to look at the increased risk of obesity in the lower social class, by exploring the sociological and psychological influences, thus identifying why those in the lower social class are more likely to suffer worse health outcomes (Marmot, 2010). Sociological factors will include type of environment and income, whereas the psychological aspects will look more at issues influencing behaviours that increase the risk of obesity, such as overeating and lack of activity (Foresight 2007). This essay will then go on to look at what is being done to limit the effect of obesity in the lower social class, and the role of the nurse and the multi-disciplinary team in the prevention and treatment of obesity. Inequalities in health have been identified throughout the last 30 years, as shown in the Black report (1980), Acheson report (1998) and Marmot report (2010).
Obesity in the lower social class
The Marmot report (2010) identifies obesity in the lower social class as a prevalent and contemporary inequality in health in today's society. Obesity is universally measured by Body Mass Index (Naidoo and Wills 2008). The World Health Organisation (2006) describes obesity as an increased level of fat stored in the body that impairs health. Obesity can result from a number of factors including genetic, metabolic, environmental and psycho-social influences (Aylott et al. 2008).
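For reference, BMI relates weight to height; the formula and worked example below are illustrative, and the cut-offs quoted are the standard WHO thresholds rather than figures taken from the reports cited here:

\[ \mathrm{BMI} = \frac{\text{mass in kilograms}}{(\text{height in metres})^{2}} \]

For example, a person weighing 95 kg at a height of 1.70 m has a BMI of 95 / 1.70² ≈ 32.9, above the conventional obesity threshold of 30 kg/m²; morbid obesity, as plotted in Figure 1, begins at 40 kg/m².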
Disorders closely associated with obesity are heart disease and stroke (main causes of death), whereas diabetes, obstructive sleep apnoea, musculoskeletal pain and osteoarthritis, obstetric complications, polycystic ovarian syndrome, infertility, incontinence and mental health problems reduce quality of life as well as life expectancy (NOO 2010). The Marmot review (2010) identifies that people living in the poorest areas of England will generally die 7 years earlier than those living in the more affluent areas, and also reveals a difference in disability-free life of 17 years between the poorest and richest areas. These gradient differences in life expectancy and disability-free life are also prevalent in level of education, type of housing, and occupation (Marmot 2010).
The Health Survey for England (2007) revealed that the prevalence of obesity was higher in unskilled workers (social class V) than in professionals (social class I), with the gap between the two being significant and having widened since 1997 in both sexes. The Black report (1980) showed that unskilled workers' death rate was twice that of professional workers in 1971. It has been estimated that 12% of women in the professional/managerial group (higher social class) were obese, compared to 29% in the routine and manual group (lower class) (HSE 2007).
Figure 2: Obesity and the contributing determinants.
Figure 2 above shows some of the contributing determinants of obesity. Understanding the societal influences, such as the influence of the media, peer pressure and income, may help identify and account for the variations in diet between the socioeconomic groups. However, a clear picture of obesity in the lower social class cannot be formed without also looking at influences such as psychological drives for particular foods, eating patterns and patterns of physical activity in this group (NOO 2010).
Sociological determinants – Structure
The Acheson report (1998) identifies income as a major contributor to social influence on diet and obesity. Daykin and Jones (2008) also identify the link between income and diet-related health. Moreover, the Acheson report (1998) and Daykin and Jones (2008) suggest that the types of food eaten by people in each social class tend to vary greatly. Those in lower-income homes buy foods that are processed, high in salt, sugar and fat, and most importantly cheaper (House of Commons 2004). These energy-dense foods are high-risk contributors to obesity and the related health conditions (Daykin and Jones 2008). Although there are many healthy versions of the same products now available, they tend to be more expensive than their counterparts, thus deterring those from a lower social class with a lower income from buying these products (House of Commons 2004).
This may also relate to low-income areas having a decreasing number of retail grocers and an increasing number of fast food outlets opening in the area (Cummins and Macintyre 2002). In deprived areas, most local retailers selling fresh vegetables and fruit have had to change their products because of their inability to compete with large out-of-town supermarkets selling fresh fruit and vegetables at a cheaper rate, leaving these retailers to stock food that is processed and high in sugar, fat and salt (Caraher and Conveney 2004). This is further exacerbated by those from the lower social class being less likely to have their own transport, causing further problems for those trying to get to supermarkets with poor transport links (Daykin and Jones, 2008). The Wanless report (1998) also identifies the change in society's eating habits and patterns: there are higher levels of sugar intake due to carbonated drinks, and society tends to snack more, with higher levels of eating out and food being available at all times of day and night (House of Commons 2004). The increase in fast food venues opening and selling cheap, filling, processed food in economically deprived areas is a contributing factor to the lower social class's increased prevalence of obesity and linked health problems (Daykin and Jones, 2008).
Psychological determinants – Direct
Research has shown that those in the lower social class are affected more by different types of stress (Brinkerhoff et al. 2008). For example, those in the lower social class are affected more by job insecurity, job loss and physical disabilities, which in turn can be causes of mental disorders and emotional distress (Stansfeld et al. 1997). This distress over poverty and economic insecurity is extremely important in understanding the causes of major depression (Brinkerhoff et al. 2008). Depression can lead to changes in a person's activity (hypersomnia or insomnia) and appetite (increased or decreased) (Quitkin 2002). Hypersomnia and increased appetite are important risk factors in the development of obesity (Cumella et al. 2008). Obesity can then lead to a person having low self-esteem (Ogden and Flanagan 2008). A person's self-esteem can then determine whether they have the confidence and motivation to seek better work or simply to get a job. Without this confidence to find a better-paid job, the person is more likely to stay in the same lower social class position, with the same stress and depression, thus influencing their psychological behaviours (Stansfeld et al. 1997). Aylott et al. (2008) reveal that those who are obese and have low self-esteem feel stigmatised and are therefore less likely to participate in schemes to lose weight.
Indirect determinants – Action
Looking at learnt and inherited behaviours also helps explain the unhealthy lifestyle choices influencing obesity in the lower social and economic class (Shelton 2005). There is evidence suggesting that lower levels of physical activity and higher levels of television watching tend to be more prevalent in the lower social class (Aylott et al. 2008). Those in the lower social class are less likely to take part in physical activity: men are 38% and women are 68% more likely to be active if from a higher social class (Wanless Report 2003). This disparity in activity is highlighted as a major factor in the increased risk of developing obesity (Wanless Report 2003).
The role of the media has more impact on the eating habits and behaviours of the lower social class, who spend more time watching TV (House of Commons 2004). Most advertising, as identified by the House of Commons report (2004), was for foods high in fat, sugar or salt. The limited advertising for fresh fruit and vegetables is drowned out by the sheer number of snack food adverts, and this impacts the choices of foods people buy (House of Commons 2004). There is also evidence that if one parent is obese then the children are more likely to be obese, again reflecting social determinants and behaviours established early in life (Aylott et al. 2008).
Biological determinants
Increasing research over the past few years has shown a definite link between obesity and genetics (Boutin and Froquel 2001). Genetic defects have been identified that could predetermine that a person is going to be overweight or obese (Boutin and Froquel 2001). These biological factors have recently been identified as affecting not our metabolic rate or nutrition partitioning, but as constituting a neuro-behavioural disease (O'Rahilly and Farooqi 2006). Therefore, increased exposure to unhealthy behaviours combined with a predisposed biological abnormality would increase the chances of a person developing obesity (O'Rahilly and Farooqi 2006).
Problem solving
The Government and the primary care sector recognise obesity as a problem that needs addressing (NICE clinical guidelines 2006). Obesity results from a complex interaction between various factors such as societal influences, individual psychology, physical activity and food consumption, and therefore a wholly holistic approach is needed when tackling the obesity epidemic (Aylott et al. 2008). Looking at government policies and strategies as well as professional actions will help identify what is being done to limit the effect of obesity in the lower social class, as well as direct approaches to influencing changes in behaviour.
Health promotion and policies
As identified, approaching obesity requires collaboration between government bodies, as well as the media, food industries and communities, to achieve the common goal of better public health (Fisher 2005). There have been many policies already targeting healthy behaviours across a variety of intervention schemes (NICE clinical guidelines 2006). The government has identified key areas that need to be addressed in policy, such as 'starting well' policies encouraging healthy habits from birth, prevention and treatment of mental health problems, and living and working well policies (DoH 2007). The role of the nurse and other members of the multi-disciplinary team is essential in the delivery of these policies and guidelines, to offer advice and support to facilitate behaviour changes (Foresight 2007). The key areas of government strategy have been to:
Increase levels of walking and cycling in our current environment,
Target those at risk with health interventions,
Control demand and supply of obesogenic foods and drinks,
Increase the responsibility of different organisations for the health and wellbeing of their employees,
Implement early-life interventions, influencing behaviours and attitudes.
(Aylott et al. 2008)
The role of the nurse and multi-disciplinary team in implementation
It has been identified that the prevention and management of obesity in the lower social class should be a priority for all (NICE guidelines 2006). When working with people to prevent or manage obesity, a person-centred approach should be used (NICE guidelines 2006). Department of Health (2007) strategies indicate primary care workers' duty to treat patients as individuals and adjust their care requirements accordingly. However, some research has shown that nurses are likely to perceive obese people as lazy and unattractive, and caring for them as stressful, hard work, repulsive and disgusting (Poon and Tarrant 2009). This negative view is likely to be picked up by the patient and cause further problems in the nurse-patient relationship (Poon and Tarrant 2009). It will also impact any advice and support offered by the nurse in health promotion, as the patient will not feel the advice is coming from someone who actually cares about their problems, and it will further reinforce the bad body image they have (Poon and Tarrant, 2009). Areas of change and the policies governing them have been identified, but without a non-judgemental, holistic approach by the primary care sector, much of the advice and information around prevention and treatment of obesity could be ignored (Ogden and Flanagan 2007). With a holistic assessment of possible causes, however, individually tailored treatment can be implemented, with appropriate support (NICE clinical guidelines 2006).
Direct behavioural change pathways
The Department of Health (2010) has identified that halting the obesity epidemic is about individual behaviour and responsibility: how much we eat and what we eat, as well as how much physical activity we take part in. However, the DoH (2010) also identifies the voluntary and private sectors as being accountable for behaviours in health and promotion. The DoH (2010) has implemented guidelines, tools and information on the changes needed to tackle obesity. One such campaign is Change4Life, which is aimed at reaching all parts of the community, including those from the lower social class (DoH 2010). The aim of the campaign is to make people aware of healthy food choices. The Department of Health (2010) has spent £75 million on an evidence-based marketing programme that empowers and supports the public to make changes to their lifestyle (DoH 2010). The Government has used extensive advertising campaigns to make people aware of healthy choices and behaviours (DoH, 2010); the campaign offers advice in several areas of change, such as portion swap, snack swap, 5 A Day, up and about, and drink swap (https://www.wales.nhs.uk/healthtopics/lifestyle/obesity, 2011).
The Department of Health (2010) cannot emphasise enough the importance of healthcare professionals in the implementation and success of the Change4Life campaign. It recognises that many healthcare professionals are the first point of contact for those at increased risk of developing obesity and are in a position to influence the targeted group in society (Department of Health 2010). Understanding the causes of obesity will allow the primary care team to take a more active role in prevention, whether through counselling for depression, support groups for self-esteem, weight management consultations, referrals to a dietician, or surgery and medication as potential solutions to the behaviours of obesity (Ogden and Flanagan 2007). Ogden and Flanagan (2007) identified in their research that there is a focus on primary care, as many of its clinics now take weight management as part of their approach to diabetes clinics, COPD clinics and health screening.
Indirect government action
The Government has identified that the food industries need to be dedicated to the delivery and promotion of healthy choices (DoH 2010). The Foresight report (2007) has identified the need for an effective regulatory response to foods and drinks, with local government involved in building partnerships with local food retailers; an example is the partnership between the Department of Health and the Association of Convenience Stores to increase the availability of fresh fruit and vegetables in deprived areas that would otherwise have limited access to these products (DoH 2008).
Other areas of effective strategy set out by the 'Healthy Weight, Healthy Lives' (2008) policy offer practical ideas around walking, cycling and the surrounding environment. The Department of Health (2008) has encouraged local areas to invest in cycle paths and in schemes encouraging people to walk or ride their bike to school or work, as well as using markers and promotional pedometers, allowing individuals to actively see how much they walk and what that can do for their health (Aylott et al. 2008).
Conclusion
The obesity epidemic has become a huge problem across the whole of the United Kingdom, with the lower social class more likely to be affected by obesity and its complications (Marmot 2010). Obesity is a complex disease with many determinants that affect its prevalence in the lower social class (Foresight 2007). One major determinant is income, which has a profound effect on psychological behaviours (depression, self-esteem) as well as the sociological environment (transport, location, availability of products) (Foresight 2007). A small percentage of people are biologically predisposed to obesity (O'Rahilly and Farooqi 2006). The combination of all of these factors has influenced the prevalence of obesity in the lower social class (Foresight 2007). The prevention of obesity needs a holistic approach in order to reverse the rising tide of obesity (Cross-Government Obesity Unit et al. 2008).
Government policies and campaigns are aimed at promoting healthy eating and lifestyles. They rely mainly on primary care teams to implement them in their local areas and with their patients; however, the individual's enthusiasm to change and adhere to policies and advice is vital in closing the gap in this inequality (Foresight 2007). It is not only the role of the Government to envisage a change: food marketing and the food agencies should also take a role in promoting and delivering healthier food to consumers, whatever class they are from (Wanless 2003).
 

Communication Models: Overview and Analysis

Digging deeper into communication models, the research done by theorists and communication experts alike has helped later generations to have at least a brief idea of what a communication model is. In this era, in which the world is getting smaller, famously coined the 'global village' by Marshall McLuhan, communication has risen to new heights due to its importance. Understanding communication further enhances one's ability to communicate with efficiency and efficacy. The rise of the Internet has also changed how communication works: the formerly common type was direct, face-to-face communication, and it has shifted towards communication based more on writing, as more and more people are hooked to the Internet.


A communication model helps with discovering the differences in relationships between the different types of communication, and how manipulating variables changes efficiency, reach, and so on. A communication model also clarifies the complex workings of communication into a more disciplined, ordered and simple form. Though models of communication have their advantages, their simple nature can lead to oversimplification, as in the oft-quoted charge: "There is no denying that much of the work in designing communication models illustrates the oft-repeated charge that anything in human affairs which can be modeled is by definition too superficial to be given serious consideration." (Duhem 1954). A model may attempt to mirror real-life communication, but in reality it is impossible to truly model it.
Communication models are generally divided into two types, linear and non-linear, the former neglecting the feedback of receivers, external factors and the like. The linear model adopted a mathematical approach to communication based on codes that are encoded and decoded. The model dictates that communication is possible only if two people share the same code, which in this case is the same language. Language is the bridge that converts human thoughts into strings of sounds, syllables and words, and like any other 'computer', translating it requires the same code system. This means, for example, that if a person wants to transfer his or her thoughts, they are encoded into language, and the message is then delivered through a channel and later decoded by the receiver/recipient.
Thus came the encode-decode model of communication: the linear model, revolutionary in its time, introduced by Lasswell and later enhanced by the Shannon-Weaver model (Shannon, 1948), which added noise to the picture. The main defect of these models is that they are linear and 'robotic', dictating that communication has a beginning and an ending, while in reality communication has no such limits. Linear models also do not generally take into account how humans interpret meanings: the encode-decode model "assumes that meaning is objective and can be captured in a fixed correspondence between aspects of the world and some system of representation" (Lund and Waterworth). Thus, Schramm introduced the circular model, which acknowledges that communication is not linear and emphasizes the circular nature of human communication, in which the roles of source/encoder and receiver/decoder interchange. For further analysis, the sample used is the Circular/Interactive Model of Osgood and Schramm.
Wilbur Schramm and Charles Osgood, who introduced the Circular Model in 1954, were among the first to alter the mathematical model of Shannon and Weaver. The reason behind Schramm coming up with the circular model was to overcome the limitations of linear models: "In fact, it is misleading to think of the communication process as starting somewhere and ending somewhere. It is really endless. We are little switchboard centers handling and rerouting the great endless current of information…." (Schramm, W. (1954), quoted in McQuail & Windahl (1981)).
All communication processes start with a person who has a thought or information to pass on to other people. In the interactive model, that thought first goes through an element called the encoder, which changes the thought into codes. Encoding is the act of translating specific thoughts into codes (a message) that is then transferred to another person, who decodes the codes (message) and interprets the meaning. The second part of the communication is the feedback/response of that person to the particular code (message), which goes through the process of encoding and is then delivered back to the original sender.
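As a toy illustration of this loop, the sketch below simulates two participants who each encode, decode, and feed their interpretation back; all names are invented for the example and are not part of the Osgood-Schramm model itself:

```python
# A toy sketch of the Osgood-Schramm loop: each participant both encodes and
# decodes, and the receiver's decoded interpretation becomes feedback.
# The encoding scheme here is deliberately trivial.

def encode(thought):
    return f"message<{thought}>"          # thought -> code (message)

def decode(message):
    return message[len("message<"):-1]    # code -> interpreted thought

def exchange(sender_thought, turns=2):
    thought = sender_thought
    for turn in range(turns):
        message = encode(thought)                 # encoder
        interpreted = decode(message)             # decoder on the other side
        thought = f"reply to '{interpreted}'"     # feedback: the roles swap
        print(f"turn {turn + 1}: sent {message!r}, decoded {interpreted!r}")

exchange("lunch at noon?")
```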
The difference here is that the circular model acknowledges the circular, endless nature of human communication. It is an enhancement of previous linear models in that the sender and receiver change roles as the communication proceeds. This improves the understanding of communication between two people compared with a one-way linear model that does not represent the nature of communication in real life.
The Circular Model is a dynamic model that shows how a situation can change and that communication is not generally one-sided. The Circular Model also raises the importance of redundancy as an essential part of communication, due to the fact that communication moves in a circular manner. Another advantage of the model is that it does not separate sender and receiver: the same person plays both roles. It is a more active communication model than the linear model, which assumes passive receivers. The Circular Model places the feedback feature at the centre of the communication model, where earlier models failed to incorporate it.
The Circular Model is not free from defects; the most highly criticized is that it does not incorporate the noise feature included in the Shannon-Weaver model (Shannon, 1948). Noise is anything that influences effective communication and the interpretation of the code (message). Noise may have profound effects on the interpretation of communication but is usually overlooked.
Noise can be divided into three categories, which are Semantic Noise, Psychological Noise (internal noise) and Physical Noise (external noise). Understanding noise is essential in improving further the communication models.
External noise is anything outside the person that may reduce the efficiency of communication, such as sights, sounds, smells, and surroundings such as a crowded environment. Internal noise, meanwhile, is anything that influences thoughts and feelings during communication, such as hunger, headaches and fatigue. The final category is semantic noise: encoding errors by the sender that are not understood by the receiver, such as writing articles using jargon or unnecessary technical language.
An application of the circular model would most probably mirror communication limited to only two people. The Circular Model is limited to that specific use, since it fails to incorporate context, the surrounding environment, and the growth and development of the individual.
Room for improvement of the Circular Model (1954) has been found through the Helical Model (1967), which attempts to show that the growth of communication is forever evolving and limitless. The extent of its growth depends on the development of the individual throughout his life, including how individual factors such as environment, economics and relationships change over time. As communication moves forward, so does its form; it is therefore necessary to take account of different forms, such as the epidemic growth of social media, which changes communication by placing more reliance on the things said rather than on the non-verbal messages sent in the past with direct communication.
Based on extended reading, the following is an improvement of the Circular Model that takes present-day communication settings into account.
[Figure: proposed enhanced circular communication model]
The communication model above clearly looks similar to the Circular Model. The difference is the treatment of noise in the middle section of the model: 'message' is renamed 'distortion', to convey that noise plays a part in every message sent. What this model represents is a more suitable approach to real-life communication, as it incorporates the underlying factors of intention, perception, relationship and the context of communication, while acknowledging all three types of noise.
First of all, the intention of the communication has significance: if the intention is transactional, it is specifically goal-oriented and would shape the interaction needed to achieve such goals. On the other hand, if the intention is socializing and demonstrating social intimacy with the receiver/counter-initiator, the distortion (message) would be interpreted differently; compare, for example, responses to sellers with responses to friends and family. The attention span/level is also based on such intentions. Intention can also be recognized through inference, which in this sense means that 'humans communicate far more meaning than they ever encode linguistically'. A perfect example is the phrase 'It's gone': the ambiguity of language fails to convey the complete intended meaning, and the interpretation is inferred by the receiver/counter-initiator.
Next, the relationship between initiator and receiver also governs the communication, such as the symmetric power relationship between two friends, both with equal rights to speak, compared to the asymmetrical power relationship between an employee and employer, where the distortion (message) takes into account the authority of the employer. Furthermore, the context is a combination of both the intention and the relationship, mixed with the location, time and noise during the communication, and thus becomes a major influence on how the distortion (message) is received and the response given. An example of such context is the comparison of the initiator's distortion (message) at work with a colleague versus at home with a family member, while in both cases explaining an accident that took place on the way to work. The context changes so much that the variables are limitless: at the office, the explanation of the event may be more dramatized and exaggerated, since it just happened and the receiver/counter-initiator is a colleague, while at home the explanation of the same event would be shorter due to redundancy, fear of the family member's response and so forth.
The nuclear sign was used to signify the limitless boundaries of such context and the different combinations that may occur, combined with the different types of noise that play a subtle role in influencing communication. The model incorporates all three types of noise: semantic, external and internal. What differs from other models is that it also features other major factors, such as perception outwards toward each other and perception inwards of oneself. For example, if we perceive that what we are listening to on the radio is false, then whether or not it is true we tend to ignore the distortion (message); this is called selective perception.
Schramm's model, though dated and since improved upon by numerous other models, remains the cornerstone of communication models, centred on the theory of feedback. Schramm's model is of use in today's social media lifestyle, since social media relies mainly on the two-way circular nature of communication.
 

Overview of Crawlers and Search Optimization Methods

With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to utilize automated tools to find the desired information resources, and to track and analyze their usage patterns.
Clustering is done in several ways and by researchers in several disciplines; for example, clustering may be done on the basis of queries submitted to a search engine. This paper provides an outline of algorithms that are useful in search engine optimization, including a personalized concept-based clustering algorithm. Modern organizations are geographically distributed.
Typically, every site locally stores its ever-increasing quantity of everyday data. Using centralized data mining to discover useful patterns in such organizations is not feasible, because merging data sets from different sites into a centralized site incurs huge network communication costs. The data of these organizations are not only distributed over various locations but also vertically fragmented, making it difficult, if not impossible, to combine them in a central location.
Distributed data mining has therefore emerged as an active subarea of data mining research. Here, a method is proposed to find the rank of each individual page within the local semantic search engine environment. A keyword analysis tool is also used.
Keywords – Distributed Data, Data Management System, PageRank, Search Engine Result Page, Crawler

INTRODUCTION

A search engine is software that is designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as Search Engine Result Pages (SERPs). The information may be a mix of web pages, images, data and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. A search engine is a web-based tool that enables users to locate information on the World Wide Web; popular examples are Google, Yahoo, and MSN Search. Search engines utilize automated software applications that traverse the web, following links from page to page and site to site.
Every search engine uses different, complex mathematical formulas to generate search results. The results for a specific query are then displayed on the SERP. Search engine algorithms take the key elements of a web page into account, including the page title, content and keyword usage. If a page gets a high ranking on Yahoo, it does not necessarily get the same rank on Google's result page.
To make things more complicated, the algorithms used by search engines are not only closely guarded secrets, they are also constantly undergoing modification and revision. This means that the criteria by which to best optimize a website must be surmised through observation, as well as trial and error, and not only once. The search engine process is divided roughly into three parts: crawling, indexing, and searching.

WORKING OF A SEARCH ENGINE

Crawling

The most well-known crawler is called 'Googlebot.' Crawlers look at web pages and follow links on those pages, much like a person browsing content on the web. They go from link to link and bring data about those pages back to Google's servers. A web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of web indexing. A web crawler may also be called a web spider or an automatic indexer.

Indexing

Search engine indexing is the process by which a search engine collects, parses and stores data for its own use. The actual search engine index is the place where all the data the search engine has collected is stored. It is the search engine index that provides the results for search queries, and pages that are stored within the search engine index are what appear on the search engine results page.
Without a search engine index, the search engine would take considerable time and effort every time a query was initiated, because it would have to search not only every web page or piece of data related to the particular keyword used in the search query, but every other piece of information it has access to, to ensure it is not missing something relevant. Search engine spiders, also called crawlers, are how the index gets its information, as well as how it is kept up to date and free of spam.

Crawl Sites

The crawler module retrieves pages from the web for later analysis by the indexing module. To retrieve pages for the user query, the crawler starts with a seed URL U0, which comes first according to the prioritization. The crawler retrieves that first, most important page, and puts the next important URLs, U1 and so on, into the queue. This process is repeated until the crawler decides to stop. Given the large size and the rate of change of the web, several problems arise, including the following.
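As a rough illustration of this prioritized crawling loop, here is a minimal Python sketch; the fetch, extract_links and score functions are assumed to be supplied by the caller, and none of these names come from the paper:

```python
# A minimal sketch of a prioritized crawl frontier.
import heapq

def crawl(seed_url, fetch, extract_links, score, max_pages=100):
    """fetch(url) -> page text; extract_links(page) -> iterable of URLs;
    score(url) -> importance (higher = crawled sooner)."""
    frontier = [(-score(seed_url), seed_url)]   # max-priority via negation
    seen = {seed_url}
    pages = {}
    while frontier and len(pages) < max_pages:
        _, url = heapq.heappop(frontier)        # most important URL first
        pages[url] = fetch(url)                 # retrieve U0, then U1, ...
        for link in extract_links(pages[url]):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score(link), link))
    return pages
```

The priority queue is what makes the "important pages first" policy concrete: whatever scoring function is plugged in decides which fraction of the web gets visited under a fixed page budget.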

Challenges of crawl

1) What pages should the crawler download?
In most cases, the crawler cannot download all pages on the web [6]. Even the most comprehensive search engine currently indexes only a small fraction of the entire web. Given this fact, it is important for the crawler to carefully select the pages and to visit 'important' pages first, by prioritizing the URLs in the queue properly [fig. 1.1], so that the fraction of the web that is visited is more meaningful. The crawler may then start revisiting the downloaded pages in order to detect changes and refresh the collection.
2) How should the crawler refresh pages?
After downloading pages from the web, the crawler starts revisiting them. It has to carefully decide which page to revisit and which page to skip, because this decision may significantly impact the 'freshness' of the downloaded collection. For instance, if a particular page seldom changes, the crawler may want to revisit it less often, in order to visit more frequently changing pages instead.
3) How is the load on the visited websites reduced?
When the crawler collects pages from the web, it consumes resources belonging to other organizations. For instance, when the crawler downloads page p on site S, the site has to retrieve p from its file system, consuming disk and CPU resources. The page then has to be transferred through the network, which is another resource shared by multiple organizations.
III. RELATED WORK
Given a taxonomy of words, a simple method can be used to calculate the similarity between two words. If a word is ambiguous, then multiple paths may exist between the two words. In such cases, only the shortest path between any two senses of the words is considered for calculating similarity. A problem that is often acknowledged with this approach is that it relies on the notion that all links in the taxonomy represent a uniform distance.
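A toy illustration of path-based similarity follows, assuming a small hand-made taxonomy and a simple 1/(1 + path length) score; both the graph and the scoring are invented for demonstration, not a specific published measure:

```python
# Toy shortest-path word similarity over a tiny hand-made taxonomy.
from collections import deque

TAXONOMY = {
    "entity": ["fruit", "device"],
    "fruit": ["apple", "banana"],
    "device": ["computer"],
    "computer": ["apple"],   # 'apple' is ambiguous: fruit and computer brand
}

def adjacency(graph):
    """Treat taxonomy edges as undirected for path finding."""
    adj = {}
    for parent, children in graph.items():
        for child in children:
            adj.setdefault(parent, set()).add(child)
            adj.setdefault(child, set()).add(parent)
    return adj

def shortest_path_length(graph, a, b):
    adj = adjacency(graph)
    queue, dist = deque([a]), {a: 0}
    while queue:
        node = queue.popleft()
        if node == b:
            return dist[node]
        for nxt in adj.get(node, ()):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None

def similarity(a, b):
    d = shortest_path_length(TAXONOMY, a, b)
    return None if d is None else 1 / (1 + d)

print(similarity("apple", "banana"))   # 0.33..: shortest path apple-fruit-banana
```

Note that 'apple' is reachable through both its senses; taking only the shortest path is exactly the simplification, and the uniform-distance assumption, that the paragraph above criticizes.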

Page Count

The Page Count property returns a long value that indicates the number of pages of data in a Recordset object. Use the Page Count property to determine how many pages of data are in the Recordset object. Pages are groups of records whose size equals the Page Size property setting. Even if the last page is incomplete because there are fewer records than the Page Size value, it counts as an additional page in the Page Count value. If the Recordset object does not support this property, the value is -1 to indicate that the Page Count is indeterminable. Some SEO tools are used for page counting, for example website link count checkers, Count My Page, and web word counters.

Text Snippets

Text snippets are often used to clarify the meaning of an otherwise 'cluttered' function, or to minimize the use of repeated code that is common to several functions. Snippet management is a feature of some text editors, source code editors, IDEs, and related software.
Data mining, also referred to as Knowledge Discovery in Databases (KDD) [9], is the process of automatically searching large volumes of data for patterns using tools such as classification, association rule mining, clustering, etc. Data mining can also work as an information retrieval, machine learning and pattern recognition system.
Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently has generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation, to prospective and proactive information delivery. Data mining is ready for application in the community because it is supported by three technologies that are now sufficiently mature:

Massive data collection
Powerful multiprocessor computers
Data mining algorithms.

With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to utilize automated tools to find the required information resources, and to track and analyze their usage patterns. These factors give rise to the need for creating server-side and client-side intelligent systems that can effectively mine for knowledge. Web mining [6] can be broadly defined as the discovery and analysis of useful information from the World Wide Web. This covers the automatic search of information resources available online, i.e. web content mining, and the discovery of user access patterns from web servers, i.e. web usage mining.

Web Mining

Web mining is the extraction of interesting and potentially useful patterns and implicit information from artifacts or activity related to the World Wide Web. There are roughly three knowledge discovery domains that pertain to web mining: web content mining, web structure mining, and web usage mining. Extracting knowledge from document content is called web content mining; web document text mining and resource discovery based on concept indexing or agent-based technology may also fall into this category. Web structure mining is the process of inferring knowledge from the organization of the World Wide Web and the links between references and referents on the web. Finally, web usage mining, also called web log mining, is the process of extracting interesting patterns from web access logs.

Web Content Mining

Web content mining [3] is an automated process that works on keywords for extraction. Since the content of a text document presents no machine-readable semantics, some approaches have suggested restructuring the document content into a representation that can be exploited by machines.

Web Structure Mining

The World Wide Web can reveal more information than just the information contained in documents. For example, links pointing to a document indicate the popularity of the document, while links coming out of a document indicate the richness, or perhaps the range, of topics covered in the document. This can be compared to bibliographic citations: when a paper is cited often, it ought to be important. The PageRank method takes advantage of the information conveyed by links to find pertinent sites.
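A minimal sketch of the PageRank idea may help: rank flows along links, so pages cited by many pages, or by important pages, accumulate rank. The damping factor of 0.85 is the conventional choice and the tiny graph is invented for illustration:

```python
# A minimal PageRank power-iteration sketch over a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: share evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))   # C, cited by both A and B, collects the most rank
```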
Data mining, the extraction of hidden predictive information from large databases, is a powerful technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by decision support systems. Data mining tools can answer business questions that traditionally were too time-consuming to resolve.
LIMITATION
During data retrieval, one of the main issues is the retrieval of a set of documents that are not relevant to the user query. For instance, 'apple' is often related to computers on the web, yet this sense of 'apple' is not listed in most general-purpose thesauri or dictionaries.
IV. PURPOSE OF THE ANALYSIS
Knowledge Management (KM) refers to a range of practices used by organizations to identify, create, represent, and distribute knowledge for reuse, awareness and learning across the organization. Knowledge Management programs are generally tied to organizational objectives and are intended to lead to the achievement of specific outcomes such as shared intelligence, improved performance, competitive advantage, or higher levels of innovation. Here we are looking at developing a web intranet knowledge management system that is of importance to either a company or an academic institute.
V. DESCRIPTION OF THE PROBLEM
After the arrival of the computer, information became hugely available, and making use of such raw collections of data to create knowledge is the process of data mining. Likewise, a great many web documents reside online. The web is a repository of all kinds of information: technology, science, history, geography, sports, politics and others. If anyone wants to know about a specific topic, they use a search engine to search for their requirements, and it satisfies the user by giving all the related information about the subject.
 

Overview of Cryptography and Encryption Techniques

What is cryptography
Cryptology is the discipline of cryptography and cryptanalysis and of their interaction. The word "cryptography" is derived from the Greek words "kryptos", meaning concealed, and "graphein", meaning to write. Cryptography is the science of keeping secrets secret. One objective of cryptography is protecting a secret from adversaries. Professional cryptography protects not only the plain text, but also the key, and more generally tries to protect the whole cryptosystem. Cryptographic primitives can be classified into two classes, keyed primitives and non-keyed primitives, as in the figure. The fundamental and classical task of cryptography is to provide confidentiality by encryption methods. Encryption (also called enciphering) is the process of scrambling the contents of a message or file to make it unintelligible to anyone not in possession of the "key" required to unscramble it. Providing confidentiality is not the only objective of cryptography; it is also used to provide solutions for other problems: data integrity, authentication, and non-repudiation.


Encryption methods can be divided into two categories: substitution ciphers and transposition ciphers. In a substitution cipher the letters of the plaintext are replaced by other letters or by symbols or numbers. When plaintext is viewed as a sequence of bits, substitution involves replacing plaintext bit patterns with ciphertext bit patterns. Substitution ciphers preserve the order of plaintext symbols but disguise them. Transposition ciphers do not disguise the letters; instead they reorder them. This is achieved by performing some sort of permutation on the plaintext letters. There are two types of encryption: symmetric (private/secret) key encryption and asymmetric (public) key encryption.
Conventional encryption model
A conventional encryption model can be illustrated by assigning Xp to represent the plaintext message to be transmitted by the originator. The parties involved select an encryption algorithm represented by E and agree upon a secret key represented by K. The secret key is distributed in a secure manner, represented by SC. Conventional encryption's effectiveness rests on keeping the key secret, and keeping the key secret rests largely on key distribution methods. When E processes Xp and K, Xc is derived; Xc represents the ciphertext output, which will be decrypted by the recipient. Upon receipt of Xc, the recipient uses a decryption algorithm represented by D to process Xc and K back into Xp. This is represented in the figure. In conventional encryption, secrecy of the encryption and decryption algorithms is not needed; in fact, the use of an established, well-known and tested algorithm is desirable over an obscure implementation. This brings us to the topic of key distribution.
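A toy sketch of this model in Python may make the roles concrete: E and D share the secret key K, and D(E(Xp, K), K) recovers Xp. The repeating-key XOR used here is only to keep the example short; it is not a secure cipher:

```python
# Toy illustration of the conventional model: E and D share the key K.
from itertools import cycle

def E(Xp: bytes, K: bytes) -> bytes:
    """Encrypt plaintext Xp under key K, producing ciphertext Xc."""
    return bytes(p ^ k for p, k in zip(Xp, cycle(K)))

def D(Xc: bytes, K: bytes) -> bytes:
    """Decrypt: XOR is its own inverse, so D is the same operation."""
    return bytes(c ^ k for c, k in zip(Xc, cycle(K)))

K = b"shared-secret"          # distributed over the secure channel SC
Xp = b"attack at dawn"
Xc = E(Xp, K)
assert D(Xc, K) == Xp         # the recipient recovers the plaintext
```

Note how nothing about E or D is secret here; only K is, which is exactly the point the paragraph above makes about well-known algorithms and key distribution.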
Cryptanalysis
Code making involves the creation of encryption products that provide protection of confidentiality. Code breaking involves defeating this protection by some means other than the standard decryption process used by an intended recipient. There are five scenarios in which code breaking is used: selling cracking products and services, spying on opponents, ensuring accessibility, pursuing the intellectual aspects of code breaking, and testing whether one's own codes are strong enough. Cryptanalysis is the process of attempting to identify either the plaintext Xp or the key K. Discovery of the key is the most desirable outcome, as with it all subsequent messages can be deciphered. Therefore, the length of the encryption key determines the volume of computational work necessary to break it, i.e. its resistance to breakage. The protection gets stronger as the key size increases, because a brute-force attack must then try more keys. Neither conventional encryption nor public key encryption is inherently more resistant to cryptanalysis than the other.
Cryptographic goals
However, there are other natural cryptographic problems to be solved, and they can be equally if not more important, depending on who is attacking you and what you are trying to secure against attackers. Privacy, authentication, integrity and non-repudiation are the cryptographic goals covered in this text.
Confidentiality, integrity and availability form what is often referred to as the CIA triad. These three notions represent the basic security objectives for both data and for information and computing services. FIPS PUB 199 provides a useful characterization of these objectives in terms of requirements and the definition of a loss of security in each category:

Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.
Integrity: Guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity. A loss of integrity is the unauthorized modification of information.
Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to an information system.

Although the use of the CIA triad to define security objectives is well established, some in the security field feel that additional concepts are needed to present a complete picture. Two of the most commonly mentioned are:

Authenticity: The property of being genuine and being able to be verified and trusted; confidence in the validity of a transmission, a message, or message originator.
Accountability: The security goal that generates the requirement for actions of an entity to be traced uniquely to that entity.

Generally there are two types of key:
1. Symmetric key
2. Asymmetric key
Symmetric key encryption
The universal technique for providing confidentiality for transmitted data is symmetric encryption. Symmetric encryption, also known as conventional encryption or single-key encryption, was the only type of encryption in use prior to the introduction of public-key encryption. Countless individuals and groups, from Julius Caesar to the German U-boat force to present-day diplomatic, military and commercial users, have used symmetric encryption for secret communication. It remains by far the more widely used of the two types of encryption. A symmetric encryption scheme has five ingredients, as follows:

Plaintext: This is the original data or message that is fed into the algorithm as input.
Encryption algorithm: The encryption algorithm performs various transformations and substitutions on the plaintext.
Secret key: The secret key is input to the encryption algorithm. The exact transformations and substitutions performed by the algorithm depend on the key.
Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the secret key. For a given message, two different keys will produce two different ciphertexts.
Decryption algorithm: This is the reverse process of the encryption algorithm. It takes the ciphertext and secret key and produces the original plaintext.

Symmetric key encryption is shown in fig.
There are two requirements for secure use of symmetric encryption:

We need a strong encryption algorithm.
Sender and receiver must have obtained copies of the secret key in a secure fashion, and must keep the key secure.

Stream Ciphers
Stream ciphers encrypt data by generating a key stream from the key and performing the encryption operation on the key stream with the plaintext data. The key stream can be any size that matches the size of the plaintext stream to be encrypted. The i-th key stream digit depends only on the secret key and on the (i-1) previous plaintext digits. The i-th ciphertext digit is then obtained by combining the i-th plaintext digit with the i-th key stream digit. One desirable property of a stream cipher is that the ciphertext be of the same length as the plaintext: if 8-bit characters are being transmitted, encrypting each character should produce a ciphertext output of 8 bits, since transmission capacity is wasted if more than 8 bits are produced. However, stream ciphers are vulnerable to attack if the same key is used twice or more.
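A sketch of the keystream idea follows, assuming a keystream derived from the secret key and a block counter; deriving it from SHA-256 of the key plus counter is an illustrative construction, not one named in the text:

```python
# Toy stream cipher: keystream blocks derived from the key and a counter,
# XORed with the plaintext. Illustration only; never reuse a (key, counter).
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_encrypt(plaintext: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(plaintext))   # exactly as long as the plaintext
    return bytes(p ^ k for p, k in zip(plaintext, ks))

ct = stream_encrypt(b"hello", b"key")
assert stream_encrypt(ct, b"key") == b"hello"   # XOR: the same op decrypts
```

The ciphertext here is exactly the length of the plaintext, which is the efficiency property noted above; the warning about key reuse also shows directly, since two messages under the same keystream XOR to the XOR of the plaintexts.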
Block Ciphers
A block cipher fragments the message into blocks of a predetermined size and performs the encryption function on each block with the key stream generated by the cipher algorithm. The size of each block is fixed, and leftover message fragments are padded to the appropriate block size. Block ciphers differ from stream ciphers in that they encrypt and decrypt information in fixed-size blocks, rather than encrypting and decrypting each letter or word individually. A block cipher passes a block of data or plaintext through its algorithm to generate a block of ciphertext.
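A short sketch of the block-handling step: the message is split into fixed-size blocks and the final fragment is padded. PKCS#7-style padding is assumed here purely for illustration; the text does not name a scheme:

```python
# Splitting a message into fixed-size blocks with PKCS#7-style padding.
def pad(data: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n          # pad value = number of pad bytes

def unpad(data: bytes) -> bytes:
    return data[:-data[-1]]

def blocks(data: bytes, block_size: int = 16):
    padded = pad(data, block_size)
    return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]

for b in blocks(b"leftover message fragments are padded"):
    print(b)   # each block is exactly 16 bytes; the cipher is applied per block
```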
Asymmetric Key Cryptosystems
In asymmetric key cryptosystems two different keys are used: a secret key and a public key. The secret key is kept undisclosed by its owner, while the public key is openly known. The system is called "asymmetric" since different keys are used for encryption and decryption: the public key and the private key.
If data is encrypted with a public key, it can be decrypted only by using the corresponding private key. Public Key Encryption shown in fig.
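A minimal RSA sketch with the cryptography package illustrates this asymmetry; the key size and sample message are example assumptions.

# Asymmetric-encryption sketch: RSA with OAEP (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # safe to publish

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"meet at noon", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)        # only the owner decrypts

assert plaintext == b"meet at noon"

Because only the private key can undo what the public key did, the public key can be distributed freely without compromising confidentiality.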
Classical encryption techniques
These techniques illustrate the basic approaches to conventional encryption used today. The two basic building blocks of classical ciphers are substitution and transposition; other systems combine both.
Substitution techniques
In this technique, the letters of the plaintext message are replaced by other letters, numbers, or symbols. If the plaintext is viewed as a sequence of bits, substitution replaces plaintext bit patterns with ciphertext bit patterns.
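A Caesar shift is the classic example of a substitution cipher: each letter is replaced by the letter a fixed number of positions later in the alphabet. The sketch below is a self-contained illustration.

def caesar(text: str, shift: int) -> str:
    # Substitute each letter with the one `shift` places later (mod 26);
    # non-letters pass through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ct = caesar("attack at dawn", 3)            # 'dwwdfn dw gdzq'
assert caesar(ct, -3) == "attack at dawn"   # shifting back decrypts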
Transposition techniques
Transposition rearranges the positions of the characters within the plaintext but does not alter the characters themselves. If the resulting ciphertext is put through further transpositions, the end result has increasing security.
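A simple columnar transposition illustrates this: the plaintext is written row by row into a grid and read off column by column, so every original character survives but in a new position. In this sketch the column count plays the role of the key.

def transpose_encrypt(text: str, cols: int) -> str:
    # Write the text row by row into `cols` columns...
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    # ...then read it back column by column.
    return "".join(
        "".join(row[c] for row in rows if c < len(row))
        for c in range(cols)
    )

ct = transpose_encrypt("attackatdawn", 4)     # 'acdtkatawatn'
assert sorted(ct) == sorted("attackatdawn")   # same characters, new order

Running the output through a second transposition with a different column count scrambles positions further, which is the layered security described above.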
 

Overview of Ancient Roman Houses

The Romans were one of the most important civilizations ever; they created the basis for the modern apartment, developed concrete, and influenced the contemporary world in many ways. Roman culture was founded in 753 BC, and Rome, located about 23 km east of the Mediterranean coast, is today the capital of Italy. Visitors can still find ruins such as the Colosseum and the Arch of Constantine, both examples of the famous Roman arch. Rome was a bustling city in ancient times and remains one today, with people in the streets at all hours, day and night.

The ancient Romans lived in all sorts of houses, ranging from enormous mansions to tiny shacks. People lived in certain types of dwellings based on their social class, their economic status, and their preference for the city or the outskirts.

There were two social classes: the patricians and the plebeians. The patricians were very wealthy, descendants of wealthy and noble families. They were often landowners, held power in the government, and lived in enormous houses. They also did business and interacted only with people of their own class.

The plebeians were impoverished people who usually worked as artisans or served the patricians as servants or slaves. They had no power in the government and no rights. Sometimes a plebeian could become a client of a patrician and was then offered protection in exchange for work.


There were generally three types of houses: insulae, domus, and villas. The poor lived in tiny multi-story buildings shared by up to 50 people, called insulae. The rich lived in luxurious mansions decorated with marvelous objects such as sculptures and mosaics, called villas. The domus housed the middle class and was often also used for religious ceremonies and business meetings. All three have influenced modern houses today.

The insula was where poor people generally lived. It typically consisted of six to eight three-story apartments, with multiple families living in each room. The ground floor was used for shops, and the upper floors housed families. Insulae were made of mud and brick and often succumbed to fire or collapsed from too much weight on the upper stories. They were never well constructed and were always the cheapest housing because of their poor structure. The top level was the most affordable, because it was the first to collapse and usually had only tiny stairways down to the bottom, which would be unsafe in case of a fire. Insulae were noisy, dirty, and cramped.

Someone who owned an insula would generally be very wealthy and could charge higher rent if the building was newer. One example of a wealthy landlord was Marcus Licinius Crassus, who owned many insulae and made his residents pay far more than usual. Roman insulae generally did not have toilets, so people had to use the public bathrooms.

Domus’s were huge building where middle-class people lived. It had marble pillars supporting the roof and had statues on the inside. The floors were made of mosaic tiles and were usually decorated with fantastic artwork. They sometimes took up a whole block, or they could also just be a tiny house. They would be used for religious ceremonies and business meetings. Some of them were separated with walls, and others were detached. Domus generally did not face the street due to safety. They were ancient Roman apartments and sometimes would be rented out to people.

Domus’s had multiple rooms including a dining room, garden, and courtyard. There were generally two sections that came into one center section that was outdoors that was called and Atrium. The Atrium had a backyard and a pool for drinking water, and sometimes they would have gardens where they would grow plant and herbs. The Atrium was a symbol of wealth, and they were usually decorated with amazing treasures.

Villas were country homes used by the wealthy; they were usually enormous and offered Romans a way to escape the big city. There were generally two types: the "villa urbana," closer to a country home, and the "villa rustica," where servants took charge of the estate when the owners were away and which was often occupied only seasonally.

Villas had three parts: the main building with its atrium, quarters where slaves lived, and a place where farm produce was stored. The atrium let natural light into the house. There was a hole in the roof through which rainwater was collected, falling into a pool in the middle. Villas are similar to the domus and were usually decorated with frescoes, mosaics, and sculptures.

The most expensive villas were the imperial villas, usually reserved for royalty. Examples include the villas of Diocletian, Domitian, Nero, and Hadrian. Imperial villas were generally very closed off and quite chaotic, even though they were organized. Very wealthy Romans would escape to them during the summer. They are similar in idea to modern vacation homes.

Several distinct sections of a villa or domus were the vestibulum, the tablinum, the triclinium, and the culina. The vestibulum was the main entrance to the house, usually facing the street. The tablinum was the office or living room, traditionally reserved for the man of the house. The triclinium was the dining room, usually very well decorated because people wanted to impress guests invited over for dinner. The culina was the kitchen.

Ancient Roman houses had central heating, meaning heat came up from the floor. This system was called a hypocaust, and slaves helped keep it clean and working. It was also used to keep the Romans' bathwater warm when naturally heated water was unavailable, by channeling hot air through the floor and the walls in a series of pipes. Most Roman houses were also centered around a courtyard, whether the occupants were rich or poor and whatever type of home they lived in. The courtyard opened inward, away from the street.

Today we have a clear idea of which houses the rich lived in and which the poor lived in. Ancient Roman houses were made mostly of sun-dried brick and built on foundations, like modern dwellings. The walls were supported by beams of mud or wood. In early Rome, the family lived in one big room where meals were cooked and all the indoor work was done. Early houses had no chimney, and smoke escaped through small holes in the roof. The ancient Romans have had a great impact on the modern world and laid the basis for many great inventions. They will always be remembered as a great and powerful civilization.


 

Overview of Photovoltaics in Japan

 


 

Grid Connected Solar Electric System

HISTORY OF ENERGY SUPPLIES IN JAPAN:

Japan, an island country in East Asia, lies in the Pacific Ocean off the eastern coast of the Asian mainland, stretching from the Sea of Okhotsk in the north to the East China Sea and China in the southwest. It has a population of 127 million, the tenth largest in the world. [1]

Japan's dependency on energy resources has been shifting since 1950. In 1950, around 50% of its energy came from coal, with hydro contributing 33% and oil 17%, and no production from natural gas or nuclear. By 1973, oil had become the primary energy source, at 75.5%. After the 1973 oil shock, Japan reduced its oil dependency by promoting alternative resources that had not previously been considered, such as nuclear power and natural gas. The country continued to diversify, and oil fell from 75.5% in FY 1973 to 40.3% in FY 2010. In March 2011, Japan faced a nuclear power crisis after the earthquake and the incident at the Fukushima Daiichi nuclear power plant, which created a shortfall of nuclear power; to keep the energy supply running smoothly, the country fell back on fossil fuels (oil in particular). In late 2015, two reactors were restarted and dependency on renewable resources increased; in the latest figures, for FY 2016, oil dropped to 39.7% while renewables expanded from 4.3% to 7%. [2]

Figure 1: Comprehensive Energy Statistics [3]

LNG (liquefied natural gas) is regarded as a comparatively clean fossil fuel in terms of greenhouse gas emissions. Until recently it was almost absent from Japan's energy supply, but when the nuclear power plants shut down and a stable electric power supply was urgently needed, LNG thermal power generation filled the gap. LNG demand is predicted to increase in the future. [3]

Figure 2: Japan's Energy 2017 [3]

ELECTRIC POWER COST IN THE COUNTRY:

After the earthquake and the nuclear power plant incident, electricity rates rose because of the average cost of imported fuel: from the FY 2010 levels of 20.4 Yen/kWh to 25.5 Yen/kWh for homes, and from 13.7 Yen/kWh to 18.8 Yen/kWh for industries. However, costs have fallen significantly since FY 2014 because of the decline in crude oil prices. From the graph, electric rates increased about 25% for homes (25.5/20.4 is roughly 1.25) and about 38% for industrial use (18.8/13.7 is roughly 1.37), whereas from 2014 rates decreased about 10% for homes and about 14% for industrial use. [3]

Figure 3: Electricity Rate Trend [3]

In Japan, ten companies share the electricity market. [4]

A report published on March 23, 2017 stated that 14 major electricity firms were likely to raise their rates again in May because of rising imported fuel prices. It was estimated that a household using 260 kilowatt-hours of electricity would see its bill rise by 190 yen, to roughly 6,605 yen ($59.57), for Tepco Energy Partner's customers; similarly, Kansai Electric Power customers would pay 190 yen more, about 6,825 yen. [5]

JAPAN POWER MARKET SNAPSHOT:

The nuclear disaster was the turning point at which renewable resources earned a place among Japan's energy sources; before it, renewables contributed only 2% of the total energy supply. Renewables then came to be seen as an effective way of generating electricity while lowering electricity prices, import dependency, and greenhouse gas emissions. In 2011, after the disaster, the annual installed PV system capacity surpassed 1 GW for the first time, reaching about 1.3 GW. As Figure 4 shows, the annual installed capacity in 2010 was 991 MW, so 2011 represented an increase of roughly 31%. Figure 5 depicts the market segments of the installed PV systems. [6]

In July 2012, the government introduced a series of incentives, such as the FiT (Feed-in Tariff), that helped renewable PV gain strength. Thanks to this promotional scheme, within two years the total annual installation increased from 1.3 GW to 10.5 GW, placing Japan among the fastest-growing PV markets worldwide in 2013 and 2014.

In 2016, renewable energy, including hydroelectric power, accounted for 21.4% of the electricity Japan produced. Figure 6 shows that Japan needs to expand, as it is lagging behind other major countries.

Before the FiT, the PV industry in Japan focused on households, but over time, as households came out of the FiT, another use for solar power was found. Japan introduced the concept of a "virtual power plant": when PV production exceeds demand, heat pump operation is shifted from night to midday to absorb the surplus. Thanks to this initiative, there are now 5 million heat pumps in Japan. Japan is said to have installed around 149 MWh of such capacity, yielding approximately 1.2 GWh of energy. [7]

By the end of 2017, total capacity had reached 50 GW, giving Japan the second-largest installed solar PV capacity in the world (after China). [8]

Figure 6: Comparison of the Renewable Energy Ratio in the Generated Electric Power Amount

Nine PV companies operate in Japan: Sharp, Sanyo-Panasonic, Kyocera, Mitsubishi, Mitsui, Toshiba, Honda, Solar Frontier, and Tokuyama.

LARGEST PV INSTALLATIONS:

1)     Kagoshima Nanatsujima Mega-Solar Power Plant:

It was Japan's largest solar project at its completion in November 2013, in the city of Kagoshima. It produces 70 MW of power using 290,000 photovoltaic panels on a site roughly the area of 27 Tokyo Domes, generating enough electricity for about 22,000 houses. The Kagoshima Mega Solar Power Corporation was funded by 7 companies that together built the Kagoshima Nanatsujima Mega-Solar Power Plant. The total investment was 27 billion yen; almost 78,000 people from 208 construction companies helped build the plant, and construction took about one year and two months. [9]

Figure 7: Kagoshima Nanatsujima Mega-Solar Power Plant (provided by Kyocera Corporation) [9]

2)     13.7MW floating solar PV plant:

By power output, this 13.7 MW floating solar PV plant is the largest floating solar plant in Japan. It is located in Ichihara on the Yamakura Dam reservoir, covering a water surface area of approximately 180,000 m2 (about 44 acres), and comprises approximately 50,904 solar modules.

Tokyo Century Corporation, together with Kyocera Corporation, constructed the plant, which can generate around 16,170 MWh per year, enough to supply about 4,970 houses. The project was completed and inaugurated in March 2018. [10]

Figure 8: The 13.7 MW floating solar PV plant on the Yamakura Dam reservoir [10]

3)     Kyocera’s 21.1MW floating PV plant:

This plant is another large-scale PV installation, carried out in Hagi City, in Yamaguchi Prefecture. Kyocera Corporation and Tokyo Century Corporation were the sole shareholders of the project. The plant was installed on a site with an area of about 1 km2, with 78,144 of Kyocera's modules; the site was originally intended for an industrial waste disposal facility.

The plant is expected to generate around 23,000 MWh of electricity annually, which will be used by Chugoku Electric Power Co., a local utility. It is the second-largest solar power plant in Japan. Since 2012 the company has installed 58 plants, with a combined capacity of up to 166.9 MW. [11]

Figure 9: Kyocera's 21.1 MW solar PV plant in Japan's Hagi City [11]

Moreover, Japan is progressing in its use of solar energy by systematically planning upcoming construction. Some of the projects currently under construction or planned are shown below:

Figure 10: Mega solar power plants under construction or being planned (over 100 MW) [9]

Figure 11: Site map for mega solar projects greater than 100 MW that are under construction or being planned [9]

TOP THREE MODULES AND INVERTERS IN JAPAN:

1)     MODULE:

NSP (350W/355W/360W)

Neo Solar Power (NSP) is a solar module offered by Mitsubishi Electric, a Japanese company. It is a 72-cell monocrystalline module for commercial and industrial purposes with an efficiency of around 18.6%. The module is ammonia-resistant, has excellent low-light performance (96.5%), and supports a maximum system voltage of 1,000 V. The company offers a 10-year materials and workmanship warranty and a 25-year linear power output warranty. [13]

Figure 12: Monocrystalline NSP [13]

The second module is manufactured by Kyocera, which has been developing and manufacturing solar products for over 40 years. It contains polycrystalline cells measuring 156 mm x 156 mm and delivers 265 to 270 Wp. The module comes pre-assembled with bypass diodes and a connector for smoother module assembly. Kyocera offers a 10-year product warranty, a 25-year performance guarantee (90% at 10 years, 80% at 25 years), and a 25-year linear performance warranty. [14]

The third, a Sharp 250-watt polycrystalline solar module, is designed for commercial purposes and manufactured in the USA, with an open-circuit voltage of 38.3 V and a short-circuit current of 8.90 A. The module measures 64.60 x 39.10 x 1.80 inches and has an efficiency of 15.5%. The manufacturer offers a 25-year warranty on power output. [15]

2)     INVERTER

Some of the PV inverters that Japan uses are described below:

The first PV inverter is used only in Japan. It is a 100 kW power conditioner with a power conversion efficiency of about 95%, available in two types, grid-connected and off-grid, and accepts a wide range of input voltages: DC input from 240 V up to 600 V. It outputs around 202 V three-phase at an output power of 100 kW. [16]

The second is manufactured by Sengoku Solar Co., Ltd., based in Japan, with a power rating of 36 kW. It accepts a maximum DC voltage of 1,000 V and a current of 18 A, and gives an AC output of 480 V. The inverter measures 700 x 530 x 338 mm and carries a 20-year product warranty. [17]

The third inverter is manufactured by the Japanese company DIAsine. It takes a DC input of 12 V and provides an AC output of 100 to 120 V. It can deliver up to 0.36 kW of AC power at 50 to 60 Hz, with distortion below 3% and an efficiency of 90%. The inverter measures 44 x 146.5 x 234 mm and carries a 1-year product warranty. [18]

GOVERNMENT INCENTIVE PROGRAM:

NATIONAL CASH SUBSIDY PROGRAM

Systems with an installed price below $4.10/W receive $0.20/W; systems priced between $4.10/W and $5.00/W receive a lower subsidy of $0.15/W; and systems priced above $5.00/W are not eligible for any subsidy. [19] A small code sketch of this tier logic follows.
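Here is a quick Python illustration of the tiers; the function name and the example system are made up for the illustration.

def subsidy_per_watt(installed_price_per_watt: float) -> float:
    # National cash subsidy tiers (USD per watt), as described above.
    if installed_price_per_watt < 4.10:
        return 0.20
    if installed_price_per_watt <= 5.00:
        return 0.15
    return 0.0   # above $5.00/W: not eligible

# e.g. a hypothetical 4 kW system installed at $3.90/W:
# 4,000 W x $0.20/W = $800 subsidy
assert subsidy_per_watt(3.90) == 0.20
assert subsidy_per_watt(5.50) == 0.0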

NATIONAL FEED-IN-TARIFF (FIT)

– A FIT of $0.40/kWh is provided for surplus power generation. [19]

– System owners are projected to earn an annual return of 3.2% after a payback period of 10 years. [19]

– The Japanese government encourages people to set up PV plants by providing 10-year Japanese government bonds. [19]

CHALLENGES FACED BY THE PV INDUSTRY:

Currently, the PV sector faces three major challenges: 1) a rapidly increasing cost burden, 2) grid constraints, and 3) unreasonable land-use regulations.

The rapidly increasing cost burden stems from unnecessary regulation and makes people question whether to install a PV system at all, since rising investment costs leave hardly any profit.

Figure 13: Average unit cost of electricity in electric utilities [12]

The second main obstacle is land regulation, a time-consuming process requiring several steps before a PV system can be set up on the desired land. For example, converting agricultural land into a PV generation site requires a great deal of paperwork. As one source states: "Generally, it takes a large amount of time to achieve deregulation because it requires coordination among many entities to confirm compliance with existing laws and ordinances, to coordinate stakeholders, to conduct safety checks, to work with many different ministers and agencies, and more. To effectively deregulate, persistent efforts must be made to clarify issues under current regulations, and continued appeals must be directed to concerned parties. It is therefore important for the Japanese PV industry to work hard to identify irrational regulations by means of industrial associations." [6]

The third issue is grid constraints: there are no proper rules for using interconnectors and no well-defined purchase arrangements between retailers and TSOs.

Figure 14: Grid constraints [12]

 

FUTURE SCOPE AND PREDICTIONS OF THE PV MARKET:

The FIT program has brought about a significant change in the expansion of PV system installations in Japan. Advances by PV system integrators, PV-utilizing industries, and users are enabling downstream sectors of the PV industry to flourish, broadening the scope of the PV business. The national targets are to increase PV power generation to ten times its 2008 level, to 14 GW, by FY 2020, and to 40 times the 2008 level, an estimated 53 GW, by 2030 (both figures imply a 2008 base of roughly 1.3 to 1.4 GW). In parallel with PV deployment at home, the Japanese PV industry will also be able to contribute to the deployment of PV in foreign countries. Japan's PV industry must enhance its global competitiveness by continuing to shift its business structure, based on the technologies, engineering, and services that support the life cycle of PV systems. [6]

Considering these projections, I think Japan has a growing PV market, and there is hope that it will become a leader in PV generation.

Figure 15: Target PV capacity in Japan [6]

 

 

WORKS CITED:

[1] https://en.wikipedia.org/wiki/Japan

[2] https://www.globallegalinsights.com/practice-areas/energy-laws-and-regulations/japan

[3] http://www.enecho.meti.go.jp/en/category/brochures/pdf/japan_energy_2017.pdf

[4] https://en.wikipedia.org/wiki/Energy_in_Japan

[5] https://asia.nikkei.com/Business/Japan-s-gas-electricity-prices-to-rise-again-in-May

[6] http://web.mit.edu/mission/www/m2018/pdfs/japan/solar.pdf

[7] https://www.nrel.gov/docs/fy18osti/71493.pdf

[8] https://en.wikipedia.org/wiki/Solar_power_in_Japan

[9] https://www.asiabiomass.jp/english/topics/1402_06.html#fig1

[10] https://www.pv-magazine.com/2018/03/27/kyocera-jv-inaugurates-13-7-mw-floating-pv-plant-in-japan/

[11] https://www.pv-magazine.com/2018/01/30/kyocera-completes-large-scale-and-floating-pv-projects-in-japan/

[12] https://www.unescap.org/sites/default/files/Session%201-5.%20Keiji%20Kimura_REI.pdf

[13] https://www.mitsubishielectricsolar.com/products/commercial/solar-modules

[14] http://www.kyocerasolar.eu/index/products.html

[15] https://www.solarelectricsupply.com/solar-panels/sharp/sharp-nd-250qcs-solar-panels

[16] https://www.sanyodenki.com/contents/product_information/list_01.html

[17] https://www.enfsolar.com/pv/inverter-datasheet/9547

[18] https://www.enfsolar.com/pv/inverter-datasheet/10727

[19] https://www.nrel.gov/docs/fy14osti/60419.pdf

TABLE OF FIGURES:

Figure 1: Comprehensive Energy Statistics [3]

Figure 2: Japan's Energy 2017 [3]

Figure 3: Electricity Rate Trend [3]

Figure 4: Market segments of installed PV systems in 2011 [6]

Figure 5: Installed capacity of PV systems in Japan [6]

Figure 6: Comparison of the Renewable Energy Ratio in the Generated Electric Power Amount

Figure 7: Kagoshima Nanatsujima Mega-Solar Power Plant (provided by Kyocera Corporation) [9]

Figure 8: The 13.7 MW floating solar PV plant on the Yamakura Dam reservoir [10]

Figure 9: Kyocera's 21.1 MW solar PV plant in Japan's Hagi City [11]

Figure 10: Mega solar power plants under construction or being planned (over 100 MW) [9]

Figure 11: Site map for mega solar projects greater than 100 MW that are under construction or being planned [9]

Figure 12: Monocrystalline NSP [13]

Figure 13: Average unit cost of electricity in electric utilities [12]

Figure 14: Grid constraints [12]

Figure 15: Target PV capacity in Japan [6]

Mary Cassatt Art Style: An Overview

Cassatt is perhaps best-known for her paintings of mothers and children, works which also reflect a surprisingly modern sensibility. Traditional assumptions concerning childhood, child-rearing, and the place of children in society were facing challenges during the last part of the 19th century and women too were reconsidering and redefining their place in modern culture. Cassatt was sensitive to a more progressive attitude toward women and children and displayed it in her art as well as in her private comments. She recognized the moral strength that women and children derived from their essential and elemental bond, a unity Cassatt would never tire of representing.


The many paintings, pastels, and prints in which Cassatt depicted children being bathed, dressed, read to, held, or nursed reflect the most advanced 19th-century ideas about raising children. After 1870, French scientists and physicians encouraged mothers (instead of wet-nurses and nannies) to care for their children and suggested modern approaches to health and personal hygiene, including regular bathing. In the face of several cholera epidemics in the mid-1880s, bathing was encouraged not only as a remedy for body odors but as a preventative measure against disease.
Shortly after her triumphs with the Impressionists, Cassatt’s style evolved, and she moved away from impressionism to a simpler, more straightforward approach. By 1886, she no longer identified herself with any art movement and experimented with a variety of techniques. A series of rigorously drawn, tenderly observed, yet largely unsentimental paintings on the mother and child theme form the basis of her popular work. In 1891, she exhibited a series of highly original colored lithograph prints, including Woman Bathing and The Coiffure, inspired by the Japanese masters shown in Paris the year before.
Her decision to become a professional artist must have seemed beyond the pale, given that serious painting was largely the domain of men in the 19th century. Despite the concerns of her parents, Cassatt chose career over marriage (Janson's History of Art, Seventh Edition, pp. 879-880).
This text gives us a little insight into the life of Mary Cassatt (1844-1926). She was an American born into a wealthy family and raised in Pittsburgh; influenced by Renaissance art, she approached Impressionism from a woman's perspective, mainly as a figure painter. As a woman, she was often restricted from going unattended to places where men could go freely, and her subject matter reflected these restrictions. Many of her themes included women reading, visiting, taking tea, and bathing an infant. The Child's Bath is not only a picture about health, but about intense emotional and physical involvement.
Paul's Case:
Cather’s understanding of the tacit limits governing the representation of sexuality, and the way they were linked to genre, explains why she chose the mode of indirection in writing her 1905 story of a homosexual teenager, “Paul’s Case.” Recent developments in sexology enabled Cather to characterize Paul as a homosexual without naming his condition. Through background information and physical description, Cather’s narrator discreetly invokes degeneracy theory to explain her protagonist, aligning him with the subjects of recent case studies. After experimenting with the persona of the “fairy,” Paul uses stolen money to transform himself into a cultured, sophisticated “queer,” but neither persona proves permanently satisfactory. Through its references to Paul’s sexuality, the story analyzes one particular product of late-nineteenth-century consumer capitalism: the middle-class, urban gay man.
How to write it?
Write your climax first; it will aid you to gauge properly the view-point of your story. The climax is the plot in brief: here is a hint as to plot finding. Take a situation: it may be humorous, pathetic, full of mystery, or dramatic; but it must be striking. Life abounds in many such, and he who goes about with his eyes open can not fail to set aside an ample store.
The conclusion should follow closely on the heels of the climax. Its office is to ring down effectively the curtain on the scene. Often it dovetails into the climax so that we cannot tell where one begins and the other ends.
When you conceived your climax, doubtless some one thing stood out in bolder relief than all the rest. It may have been humor, it may have been pathos, it may have been grim tragedy. Whatever it was, it is the point of the tale, the centre of gravity of your story. You wisely gave it a setting in keeping, and in the conclusion let it dwell like a lingering note to be a haunting memory for many a day. It is the essence of your conception, and in the introduction you held it up before your reader’s eyes as the game to be pursued. This we will call the theme of the composition.
The subtle power of the French school lies in the art of innuendo. It is what is left unsaid rather than what is said that causes the greatest thrill. But the inference must be plain: the reader’s imagination should not be left to construct the tale which you set out to tell. Often a story will be saved from boredom to fascination by the power of suggestion alone. This is particularly true of love scenes, deaths, and the like, such as only a master’s hand at description can hope to handle effectively.
Rosebud:
One of the key cruxes of the film is the question of what exactly Rosebud means. We ask this question even though we know that Welles & Co. were in part trying to show that you cannot reduce a man's mysteries to one thing. On the other hand, there is a solution to the "problem." It is actually found in Welles's next film, The Magnificent Ambersons. Throughout Welles's radio career, his most moving shows, such as his adaptation of "The Apple Tree," were about loss: loss of a bucolic past, of a domestic happiness, of a quiet life. This theme doesn't seem to have anything to do with Welles's real life. It's just something he liked, though perhaps based on the loss of his mother at an early age. The Magnificent Ambersons is his most poignant realization of this theme in his work. Rosebud leads up to that film. Rosebud is The Magnificent Ambersons. The small-town values and mother's love that the snow globe evokes (it reminds Kane of his childhood home, and of the sled called Rosebud) are all explored in much more detail, and presented with an additional dollop of aching loss, in Welles's second film.
Rosebud is not a gimmick. As a narrative device, it is the holy grail of the film, the engine that drives the reporter Thompson to solve the mystery of Kane, and along the way we learn as much about Kane as the characters (and the undermining overvoice of the film itself) can tell us. But when we learn, from our privileged position as viewers of the film, what Rosebud actually is, even as it is being destroyed, we also learn that it is not a hoax, nor is it hokey. As Bernard Herrmann’s beautiful music rises in the background, we feel both the unsealing of the envelope and the closing of a life. It’s a beautiful moment, one of the most expressive in all cinema. And you know what? In a way, a man’s life can be reduced to one thing, if that thing is the rich cluster of images and ideas that Rosebud contains.
The gay subtext in Citizen Kane
Who wrote Kane? The answer is in the aspect of the film that everyone is afraid to mention, the gay subtext that appears in Kane and in many of Welles’s other films. I’m not talking about his private life, in which, according to Simon Callow, Welles had a knack for attracting the support of older gay men such as Houseman, who were smitten with the youth’s vivacity. Welles, a heavy drinker, was married three times and, like Marlon Brando and Warren Beatty after him, had ostentatious affairs with many women, among them Dolores Del Rio. None of this seemed to find its way into his films.
Women don't figure that heavily in most of Welles's films, and rarely does sex truly enter. Love and passion are there, but often presented discreetly. Kane offers up something of a Madonna/whore contrast, while his next film shows a dedicated woman in a soap-operatic olio of unrequited, often even unexpressed, love. Although the aborted It's All True celebrated the passionate life of Latin America, Welles was really interested in the politics of the time. Subsequent films dealt with "great men" and their political lives. Welles played Othello as if he were really married to Iago. There is the suggested rape of a newlywed in Touch of Evil, and a nymphomaniac in The Trial. It's a shock to see footage from the unfinished The Other Side of the Wind in which actual lust is realized in the back seat of a car. But the combination of sex and women is not what we carry away from many of these films.
Male friendship and its betrayals interested Welles, from one film to another, starting with Kane and lasting all the way to The Big Brass Ring, a screenplay credited to Welles but finally filmed by someone else. As in many films with a gay subtext, parts of Kane don’t make sense unless you view them from a gay perspective. Why, exactly does Jed Leland feel so betrayed by Kane? It can’t just be because Kane’s political folly “put back the cause of reform 20 years.” When Leland, the stooge friend, first learns of the political disgrace, he walks into a bar to drown feelings of… what? Leland, who elsewhere says he took ballet lessons with Kane’s first wife and was “very graceful,” has no female companions in the film, and his reaction to Kane’s political “betrayal” far exceeds its actual weight. There’s a love here that dare not speak its name.
This gay subtext provides another indication of Welles’s hand in the Kane screenplay. Welles’s other great movie, Touch of Evil, has a similar relationship between a powerful man and a stooge, in which the powerful man is the love of the stooge’s life: Welles’s Quinlan and Joseph Calleia’s Pete Menzies; only here, both men betray each other. And the totality of The Trial only makes sense if the film is viewed as really about the persecution of a gay man in a straight society. The gay subtext of Kane only adds to its mysteries and makes it a richer film.
Understanding themes: D1
Personal identity is shaped by one’s culture, by groups, and by institutional influences. Examination of various forms of human behavior enhances understanding of the relationship between social norms and emerging personal identities, the relationships between social processes that influence identity formation, and the ethical principles underlying individual action.