Automatic Metadata Harvesting From Digital Content

MR. RUSHABH D. DOSHI, MR. GIRISH H MULCHANDANI

Abstract: Metadata extraction is one of the predominant research fields in information retrieval. Metadata is used to reference information resources. Most metadata extraction systems are still human-intensive, since they require expert decisions to recognize relevant metadata, and this is time consuming. Automatic metadata extraction techniques have been developed, but most work only with structured formats. We propose a new approach to harvesting metadata from documents using NLP. NLP stands for Natural Language Processing and works on the natural language that humans use in day-to-day life.
Keywords: Metadata, Extraction, NLP, Grammars
I. Introduction
Metadata is data that describes other data. Metadata describes an information resource, or helps provide access to an information resource. A collection of such metadata elements may describe one or many information resources. For example, a library catalogue record is a collection of metadata elements, linked to the book or other item in the library collection through the call number. Information stored in the “META” field of an HTML Web page is metadata, associated with the information resource by being embedded within it.
The key purpose of metadata is to facilitate and improve the retrieval of information. In a library or college, metadata can be used to achieve this by identifying the different characteristics of the information resource: the author, subject, title, publisher and so on. Various metadata harvesting techniques have been developed to extract such data from digital libraries.
NLP is a field of computer science, artificial intelligence and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human-computer interaction. Recent research has increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms are able to learn from data that has not been hand-annotated with the desired answers, or from a combination of annotated and non-annotated data. The goal of NLP evaluation is to measure one or more qualities of an algorithm or a system, in order to determine whether (or to what extent) the system answers the goals of its designers, or meets the needs of its users.
II. Method
In this paper we propose an automatic metadata harvesting algorithm that works on natural language (i.e. the language humans use in day-to-day work). Our technique is rule based, so it does not require any training dataset.
We harvest metadata based on English grammar terms: we identify the possible set of metadata candidates, calculate their frequency, and then apply a weight to each term based on its position or the formatting applied to it.
The rest of the paper is organized as follows. The next section reviews related work on metadata harvesting from digital content. The following sections give a detailed description of the proposed idea. Finally, the paper is concluded with a summary.
III. Related Work
Existing metadata harvesting techniques are either machine learning methods or rule-based methods. In machine learning methods, a set of predefined templates containing a dataset is given to the machine to train it; the machine is then used to harvest metadata from documents based on that dataset. In rule-based methods, most techniques define a set of rules that are used to harvest metadata from documents.
In the machine learning approach, keywords extracted from training documents are given to the machine to learn specific models; those models are then applied to new documents to extract keywords from them. Many techniques use this approach, such as automatic document metadata extraction using support vector machines.
In rule-based techniques, some predefined rules are given to the machine, and based on them the machine harvests metadata from documents. The position of a word in the document, or specific keywords used as the category of a document, are examples of rules set in various metadata harvesting techniques. In some cases metadata classification is based on document types (e.g. purchase order, sales report) and data context (e.g. customer name, order date) [1].


Other statistical methods include word frequency [2], TF*IDF [3] and word co-occurrences [4]. Later, some techniques were developed to harvest key phrases based on TF*PDF [5]. Other techniques use TDT (Topic Detection and Tracking) with aging theory to harvest metadata from news websites [6]. Some techniques use a DDC/RDF editor to define and harvest metadata from documents and have it validated by third parties [7]. Several models have been developed to harvest metadata from a corpus, and nowadays most techniques rely on such corpus-dependent models.

IV. Proposed Theory
Our approach focuses on harvesting metadata from a document based on English grammar. English grammar has many categories that classify the words in a statement, such as NOUN, VERB, ADJECTIVE, ADVERB, NOUN PHRASE and VERB PHRASE, and each grammar category has a priority in a statement. Our approach therefore extracts metadata based on its priority in the grammar. The priority of the grammar components is as follows: noun, verb, adjective, adverb, noun phrase.
V. Proposed Idea
Figure-1 Proposed System Architecture
In Figure-1 we give the proposed system architecture. In this architecture the steps are not fixed to any particular order.
Article Pre-processing:
Article pre-processing removes irrelevant content (i.e. tags, header-footer details etc.) from the documents.
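As an illustration, a minimal pre-processing step could look like the following Python sketch (the regular expressions, and the assumption that articles arrive as raw HTML, are ours, not the authors'):

    import re

    def preprocess(raw_html):
        # drop script/style blocks, then any remaining tags, then collapse whitespace
        text = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", raw_html,
                      flags=re.DOTALL | re.IGNORECASE)
        text = re.sub(r"<[^>]+>", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    print(preprocess("<html><body><h1>News</h1><p>Prices rose sharply.</p></body></html>"))
    # -> "News Prices rose sharply."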
POS Taggers:
A Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns a part of speech to each word (and other tokens), such as noun, verb, adjective, etc.
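For example, a tagging step can be sketched with the NLTK library (one possible tagger; the paper does not name a specific implementation):

    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("The finance minister announced a new budget on Monday.")
    print(nltk.pos_tag(tokens))
    # e.g. [('The', 'DT'), ('finance', 'NN'), ('minister', 'NN'), ('announced', 'VBD'), ...]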
Stemming:
In most cases, morphological variants of words have similar semantic interpretations and can be considered equivalent for the purpose of IR applications. For this reason, a number of so-called stemming algorithms, or stemmers, have been developed, which attempt to reduce a word to its stem or root form.
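A minimal sketch using NLTK's Porter stemmer (one common choice of stemmer, not necessarily the one used by the authors):

    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()
    for word in ["announced", "announcement", "announcing"]:
        print(word, "->", stemmer.stem(word))
    # the morphological variants reduce to a common stem such as "announc"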
Calculate frequency:
Here the frequency of each term is calculated, i.e. the number of occurrences of each term in the document.
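For instance, term frequencies over the stemmed tokens can be counted as follows (illustrative only):

    from collections import Counter

    stems = ["budget", "announc", "budget", "minist", "budget", "announc"]
    frequencies = Counter(stems)
    print(frequencies.most_common(2))
    # [('budget', 3), ('announc', 2)]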
Identify Suitable Metadata:
Now metadata is extracted from the word set based on term frequency, grammar category and position.
VI. Experiments & Results
In this study we take a corpus of 100 documents. The documents contain news articles from various categories. We first extract the metadata manually from every document, then apply our approach to the corpus. We measure our results with the following parameters.
Precision = (number of terms identified correctly by the system) / (top N terms out of the total terms generated by the system)
Recall = (number of key terms identified correctly by the system) / (number of key terms identified by the authors)
F-measure = 2 x (Precision x Recall) / (Precision + Recall)
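These measures can be computed directly, for example:

    def precision_recall_f(correct, returned, relevant):
        # correct: terms identified correctly; returned: top-N terms produced;
        # relevant: key terms identified by the authors
        precision = correct / returned
        recall = correct / relevant
        f_measure = 2 * precision * recall / (precision + recall)
        return precision, recall, f_measure

    print(precision_recall_f(correct=4, returned=10, relevant=11))  # example values only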
Table 1: Evaluation Results

    Terms    Precision    Recall    F-measure
    10       0.43         0.36      0.40
    20       0.42         0.63      0.51
    30       0.32         0.72      0.49
VII. Conclusion & Future Works
This method is based on grammar components. Our aim is to use this algorithm to identify metadata in bigrams, trigrams and tetragrams. This metadata helps us to generate summaries of documents.
References:
[1] Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze. An Introduction to Information Retrieval. Book.
[2] H. P. Luhn. A Statistical Approach to Mechanized Encoding and Searching of Literary Information. IBM Journal of Research and Development, 1957, 1(4):309-317.
[3] G. Salton, C. S. Yang, C. T. Yu. A Theory of Term Importance in Automatic Text Analysis. Journal of the American Society for Information Science, 1975, 26(1):33-44.
[4] Y. Matsuo, M. Ishizuka. Keyword Extraction from a Single Document Using Word Co-occurrence Statistical Information. International Journal on Artificial Intelligence Tools, 2004, 13(1):157-169.
[5] Yan Gao, Jin Liu, Peixun Ma. The HOT Keyphrase Extraction based on TF*PDF. IEEE conference, 2011.
[6] Canhui Wang, Min Zhang, Liyun Ru, Shaoping Ma. An Automatic Online News Topic Keyphrase Extraction System. IEEE conference, 2006.
[7] Nor Adnan Yahaya, Rosiza Buang. Automated Metadata Extraction from Web Sources. IEEE conference, 2006.
[8] Somchai Chatvienchai. Automatic Metadata Extraction and Classification of Spreadsheet Documents based on Layout Similarities. IEEE conference, 2005.
[9] Dr. Jyoti Pareek, Sonal Jain. KeyPhrase Extraction Tool (KET) for Semantic Metadata Annotation of Learning Materials. IEEE conference, 2009.
[10] Wan Malini Wan Isa, Jamaliah Abdul Hamid, Hamidah Ibrahim, Rusli Abdullah, Mohd. Hasan Selamat, Muhamad Taufik Abdullah, Nurul Amelina Nasharuddin. Metadata Extraction with Cue Model.
[11] Zhixin Guo, Hai Jin. A Rule-based Framework of Metadata Extraction from Scientific Papers. IEEE conference.
[12] Ernesto Giralt Hernández, Joan Marc Piulachs. Application of the Dublin Core Format for Automatic Metadata Generation and Extraction. DC-2005: Proc. International Conference on Dublin Core and Metadata Applications.
[13] Canhui Wang, Min Zhang, Liyun Ru, Shaoping Ma. An Automatic Online News Topic Keyphrase Extraction System. IEEE conference.
[14] Srinivas Vadrevu, Saravanakumar Nagarajan, Fatih Gelgi, Hasan Davulcu. Automated Metadata and Instance Extraction from News Web Sites. IEEE conference.
 

Taro Leaves Drying Kinetics and Monolayer Moisture Content

A study on the drying kinetics and monolayer moisture content of taro leaves
This research aimed to develop dehydrated products based on taro leaves and to find out the effect of drying parameters, such as loading density and temperature, that control the drying kinetics. To determine the end point of drying, studies on the sorption isotherm were conducted. From the moisture sorption isotherm data, the monolayer moisture content was estimated by the Brunauer-Emmett-Teller (BET) equation using data up to a water activity (aw) of 0.52, and was found to be 8.92 g water per 100 g solid for taro leaves. Using another important model, the GAB (Guggenheim-Anderson-de Boer) model, with data up to aw = 0.9, the monolayer moisture content of taro leaves was found to be 19.78 g water per 100 g solid.
INTRODUCTION
It is estimated that by 2020 the population of Bangladesh will be as high as 200 million, which means that more food must be produced from limited land resources. In this context, there is a need to explore alternative food crops that could supply food in situations of food insecurity. Taro can be such an alternative to other vegetables for developing and under-developed countries. Apart from acting as a cheap energy and dietary source, this crop provides micronutrients, vitamins and dietary fibre as well.
In Bangladesh, taro is used as vegetable throughout the country. Corms and cormels are used as starchy vegetables whereas leaves and leaf stalks are used as ‘shak’. During famine, a large number of people reportedly survive simply on food materials made by boiling the corms, cormels, stolons, leaf stalks and leaves of different varieties of Taro.
The subfamily Colocasioideae of the family Araceae consists of three edible tuber crops, namely taro (Colocasia esculenta Schott), tannia (Xanthosoma spp.) and giant taro (Alocasia spp.). Among these crops, taro and tannia are cultivated to a larger extent, while giant taro is not as common a commercial crop as the other two. In general, these are crops of third world countries, particularly grown in Africa and Asia. About 88% of the total world acreage is in Africa, which produces about 80% of total production (Onwueme, 1978). Among the three crops, taro is the most common in South-East Asia. It is one of the ancient crops, with an interesting history blended with the evolution of agricultural systems (Gopalan et al., 1974).


A large number of horticultural varieties of taro are widely cultivated in Bangladesh and still larger varieties grow wild. During the rainy season when other vegetables are in scarcity in Bangladesh this taro goes a long way to meet the demand for vegetables. The leaves, petioles, stolons, corms and cormels, and indeed all the parts of some taro are taken as food in large quantities by the rural population in our country. Hence the use of Taro as vegetables, both leaves and roots, in the diets of the people of our country assumes special and added importance.
The taro has also medicinal value. Processed “Bishknchu” is used in Ayurvedic medicine for the treatment of rheumatism. Juice from petioles and whole leaves are used as antiseptic to check bleeding from minor injury in the rural areas of Bangladesh (Chowdhury, 1975).
The possibility of wider use of taro leaves as vegetable crops in our country may be ascribed to their unusual environmental adaptability and ease of cultivation. The lowland types grow in standing water, which is rarely possible for other crops. Taro can be produced with minimum capital investment. Growing this crop does not require any special technological skill, and its keeping quality is in most cases excellent.
The best way of preserving leafy vegetables is drying or dehydration. This process costs less than other preserving methods and requires simple equipment. The type and conditions of the blanching treatment prior to drying affect the retention of ascorbic acid, carotene, and ash in the dried vegetables. Sun-dried vegetables have inferior colour, texture and acceptability compared to vegetables dried in a cabinet dryer.
In the mechanical dryer, desired temperature and airflow could be maintained. Compared to sun/solar drying, higher airflow and temperature can be used in mechanical drying. This leads to high production rates and improved quality products due to shorter drying time and reduction of the risk of insect infestation and microbial spoilage as well as minimum nutrient loss. Since mechanical drying is not dependent on sunlight so it can be done as and when necessary.
Based on the above information, the present experiment was broadly aimed to study on development of shelf stable taro (Colocassia esculenta) leaves product. The specific objectives of this study are as follows:

To determine the composition of fresh and processed Taro leaves
To develop the isotherm
To study the drying characteristics of taro leaves during mechanical and vacuum drying
To study the storage stability of processed taro leaves

Materials and methods
3.2.3 Sorption isotherm studies
The moisture sorption properties of dried Taro leaves were determined at room temperature under conditions of various relative humidity (11-93% RH) in the vacuum desiccators. The various RH conditions were achieved in vacuum desiccators using saturated salt solutions.
The following salt solutions (Table. 3.1) of known water activity were used for the study (Islam, 1980).
Table 3.1: Water activity of saturated salt solutions

    Salt             Water activity (aw)
    LiCl             0.11
    KC2H3O2          0.20
    MgCl2·6H2O       0.33
    K2CO3            0.44
    Mg(NO3)2·6H2O    0.52
    CaCl2            0.68
    NaCl             0.75
    KCl              0.85
    KNO3             0.93
Petri dishes were used for preparing the saturated salt solutions. Each salt was put in a Petri dish and water was added to give a saturated condition. The method involved putting a small, accurately weighed sample of about 1 g in a previously weighed Petri dish into a desiccator containing the saturated salt solution. The sample and the solution were separated by a perforated plate to avoid mixing. The desiccators were evacuated to less than 50 Torr. At various intervals, the vacuum was broken with air, the sample weighed and replaced in the desiccator, which was then re-evacuated. The sample was weighed daily in the initial period and less often as the sample started to reach equilibrium. Weighing was continued until the sample weights were constant for two days in a row.
In the mid-1970s, water activity came to the forefront as a major factor in understanding and controlling the deterioration of reduced-moisture foods, drugs and biological systems (Labuza, 1975). It was found that the general modes of deterioration, namely physical and physicochemical modifications, microbiological growth, and both aqueous- and lipid-phase chemical reactions, were all influenced by the thermodynamic availability of water (water activity) as well as the total moisture content of the system.
Control of initial moisture content and moisture migration is critical to the quality and safety of foods. Ideally, food manufacturers develop products with defined moisture contents to produce a safe product with optimum shelf-life. Quality and safety factors that the manufacturer must consider are microbial stability, physical properties, sensory properties, and the rate of chemical changes leading to loss of shelf-life. Water activity, or the equilibrium relative humidity of a system, is defined as:

aw = p / p0

Where
p = vapor pressure of water in equilibrium with the dry system
p0 = saturation vapor pressure of pure water at the same temperature.
Sorption properties of foods (equilibrium moisture content and monolayer moisture) are essential for the design and optimization of many processes such as drying, packaging and storage (Muhtaseb et al., 2002). The moisture sorption isotherm shows the equilibrium amount of water sorbed onto a solid as a function of steady state vapor pressure at a constant temperature (Bell and Labuza, 2000).
There are many empirical equations that describe this behavior, but the water sorption properties at various RHs should be experimentally determined for each material. The general shape of the isotherm, specific surface area of the sample, reversibility of moisture uptake, presence and shape of a hysteresis loop provide information on the manner of interaction of the solid with water (Swaminathan and Kildsig, 2001).
Sorption properties are important in predicting the physical state of materials at various conditions, because most structural transformations and phase transitions are significantly affected by water (Roos, 1995).
Langmuir (1917) developed an equation based on the theory that the molecules of gas are adsorbed on the active sites of the solid to form a layer one molecule thick (monolayer).
The Brunauer-Emmett-Teller (BET) sorption model (Brunauer et al.1938) is often used in modeling water sorption particularly to obtain the monolayer value (Eq. 2.10) which gives the amount of water that is sufficient to form a layer of water molecules of the thickness of one molecule on the adsorbing surface (Bell and Labuza 2000, Roos, 1995).
The BET monolayer value has been said to be the optimal water content for stability of low-moisture materials (Labuza, 1975 and Roos, 1995). The BET equation was developed based on the fact that sorption occurs in two distinct thermodynamic states: a tightly bound portion and a multilayer having the properties of bulk free water (Zografi and Kontny, 1986). The BET equation is:

m = m0·c·aw / [(1 − aw)·(1 + (c − 1)·aw)]          (2.10)

Where,
m = the measured moisture content at water activity aw;
m0 = the monolayer moisture content (the optimal moisture content for maximum storage stability of a dry food);
c = the isotherm temperature dependence coefficient (energy constant).
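As an aside, the monolayer value can be obtained from measured data using the usual linearised form of Eq. 2.10, aw / [(1 − aw)·m] = 1/(m0·c) + [(c − 1)/(m0·c)]·aw. A minimal sketch, with made-up data points rather than the study's actual measurements:

    import numpy as np

    aw = np.array([0.11, 0.20, 0.33, 0.44, 0.52])   # water activities
    m = np.array([4.1, 6.0, 8.3, 10.9, 13.2])       # hypothetical moisture contents (g/100 g solid)

    y = aw / ((1.0 - aw) * m)                       # linearised BET ordinate
    slope, intercept = np.polyfit(aw, y, 1)         # straight-line fit

    c = slope / intercept + 1.0                     # energy constant
    m0 = 1.0 / (slope + intercept)                  # monolayer moisture content
    print(f"BET monolayer m0 = {m0:.2f} g/100 g solid, c = {c:.2f}")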
Vanchy (2002) determined the moisture sorption isotherm of whole milk powder (WMP). The WMPs were stored at 20 and 35°C under 11%, 22% and 33% relative humidity. The monolayer moisture content was 4.8% (solids-not-fat basis) at aw 0.11 using the BET equation and 5.1% at aw 0.23 according to the GAB equation.
Nikolay et al. (2005) determined the moisture equilibrium data (adsorption and desorption) of semi-defatted (fat 10.6% wet basis) pumpkin seed flour using the static gravimetric method of saturated salt solutions at three temperatures (10°C, 25°C and 40°C), and found that the equilibrium moisture content decreased with increasing storage temperature at any given water activity. They fitted the experimental data to five mathematical models (modified Oswin, modified Halsey, modified Chung-Pfost, modified Henderson and GAB).
The GAB model was found to be the most suitable for describing the sorption data. The monolayer moisture content was estimated using the Brunauer-Emmett-Teller (BET) equation. The BET model (Brunauer et al. 1938) gives the best fit to the data at aw of up to 0.5 (Bell and Labuza 2000, Roos 1995).
Guggenheim-Anderson-de Boer (GAB) sorption model (Anderson 1946, Boer 1953, Guggenheim 1966) introduces a third state of sorbed species intermediate to the tightly bound and free states. The GAB equation has a similar form to BET, but has an extra constant, K (equation 2.11). BET is actually a special case of GAB.
The GAB equation is:

m = m0·c·k·aw / [(1 − k·aw)·(1 − k·aw + c·k·aw)]          (2.11)

Where
m = the measured moisture content at water activity aw;
m0 = the monolayer moisture content (the optimal moisture content for maximum storage stability of a dry food);
k = the GAB multi-layer constant;
c = the isotherm temperature dependence coefficient (energy constant).

The GAB model can be used up to a maximum water activity of 0.9. The following procedure, suggested by Bizot (1983), is used to fit water activity and equilibrium moisture content data. Equation (2.11) can be transformed as follows:

aw/m = α·aw² + β·aw + γ          (2.12)

Where
α = (k/(m0·c))·(1 − c)
β = (1/(m0·c))·(c − 2)
γ = 1/(m0·c·k)

Equation (2.12) indicates that the GAB equation is a three-parameter model. The water activity and equilibrium moisture content data are regressed using equation (2.12) and the values of the three coefficients α, β and γ are obtained. From these coefficients, the values of k, m0 and c can be calculated.
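The regression in Eq. 2.12 can be sketched as follows; the data here are synthetic, generated from assumed GAB parameters purely to show that k, c and m0 can be recovered from the three quadratic coefficients:

    import numpy as np

    # synthetic equilibrium data generated from assumed GAB parameters
    m0_true, c_true, k_true = 19.8, 5.0, 0.9
    aw = np.linspace(0.1, 0.9, 9)
    m = (m0_true * c_true * k_true * aw) / ((1 - k_true * aw) * (1 - k_true * aw + c_true * k_true * aw))

    # quadratic regression of aw/m against aw (Eq. 2.12)
    alpha, beta, gamma = np.polyfit(aw, aw / m, 2)

    # recover the parameters: gamma*k^2 + beta*k + alpha = 0
    k = (-beta + np.sqrt(beta**2 - 4 * gamma * alpha)) / (2 * gamma)
    if not 0 < k <= 1:                               # keep the physically meaningful root
        k = (-beta - np.sqrt(beta**2 - 4 * gamma * alpha)) / (2 * gamma)
    c = 2 + beta / (gamma * k)
    m0 = 1 / (gamma * k * c)
    print(f"k = {k:.2f}, c = {c:.2f}, m0 = {m0:.2f} g/100 g solid")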
To overcome this weakness of the GAB equation, modifications of the equation have been proposed (Schuchmann et al. 1990; Timmermann and Chirife 1991). Timmermann and Chirife (1991) used one additional parameter in the GAB model and studied the so-called third stage of sorption using experimental data of starch with satisfactory results.
Isotherm equations are useful for predicting the water sorption properties of a material, but no equation gives results accurate throughout the entire range of water activities. According to Timmermann (2003), the GAB monolayer value is always higher than the BET monolayer value. Prediction of water sorption is needed to establish water activity and water content relationship for materials (Roos, 1995).
Results and discussion:
The sorption isotherm is an extremely valuable tool for the food scientist because it can be used to predict potential changes in food stability, for selection of packaging, for selection of ingredients and for predicting drying time. A sorption isotherm for dehydrated taro leaves obtained by vacuum oven drying (VOD) was established to determine how the taro product will behave in a confined environment. To obtain the moisture sorption isotherm, moisture content (dry basis) versus water activity was plotted on linear graph paper (Figure 4.1).
The results shown in Figure 4.1 (tabulated data given in Appendix-II, Table 2.1) indicate that the samples absorb little water, particularly at lower aw.
Figure 4.1 Graphical presentation of sorption isotherm of Taro
The water sorption isotherm of taro follows the shape of the sigmoid type isotherm. The resultant curve is caused by the combination of colligative effects (physical properties of solution), capillary effects, and surface-water interactions (Bell and Labuza, 2000). A distinct “knee” usually indicates a formation of a well-defined monolayer.
The monolayer moisture content was estimated using the Brunauer-Emmett-Teller (BET) equation. The BET equation is an extension of the Langmuir relationship that accounts for multilayer coverage.
The BET equation (Eq. 2.10) was used to calculate the monolayer moisture content (m0) and the energy constant (C). m0 represents the optimal moisture for maximum storage stability in the dry state. Results obtained from the BET equation are shown in Table 4.2.
Table 4.2: Data for BET and GAB methods

    Method    Energy constant (cal/g-mole)    Monolayer moisture content (g/100 g solid)
    BET       37.33                           8.92
    GAB       –                               19.78
From the slope and intercept of the BET equation (Appendix II, Figure 2.1), the monolayer moisture content and energy constant of taro leaves were calculated for the VOD samples. The monolayer moisture content of taro leaves was found to be 8.92 g water per 100 g solid (Table 4.2). The calculated monolayer moisture content is greater than those found by Islam (1980), who reported 5.5 for potato slices and 6 for potato powder, and by Kamruzzaman (2005), who reported 7.52 for aroids.
Another important model of sorption isotherm behaviour, the GAB (Guggenheim-Anderson-de Boer) model stated in equations 2.11 and 2.12, was used to determine the monolayer moisture content of the product. This is very important for defining a safe storage level for food.
Dry foods are usually considered to be most stable to chemical reactions if their moisture content is at or near the BET monolayer (Labuza et al., 1970). Usually air dried products are dried to moisture content corresponding to aw 0.6 (Nickerson and Sinskey, 1977).
From this study it is seen that VOD taro leaves give 25% (Figure 4.1) moisture content at aw 0.6. From this standpoint, freeze-dried products are considered best for sorption studies (Islam, 1980). It may be mentioned here that the current study was concerned with the adsorption isotherm so as to avoid risk due to the hysteresis effect. At the same moisture content the adsorption path gives higher water activity than the desorption path; thus a product dried to a safe aw level according to the adsorption isotherm will be even safer when it follows the desorption path.
After fitting the data (Appendix II), the following figure was developed, and from the fitted equation the monolayer moisture content of taro leaves was found for the GAB model.

Fig. 4.2 Graphical presentation of GAB model of sorption isotherm
From the developed Figure (4.2) and equation (4.1), the three regression coefficients were found to be -0.121, 0.114 and -0.003 respectively (Table 4.2). Taking k = 0.9, the monolayer moisture content was found to be 19.78 g water per 100 g solid. It is shown that the standard GAB equation is adequate to describe the experimental data for water activity values up to 0.90, but fails to adequately describe the experimental data when data in the range aw 0.9-1.0 are included in the calculation.
 

Content Based Image Retrieval System Project

An Efficient Content-based Image Retrieval System Integrating Wavelet-based Image Sub-blocks with Dominant Colors and Texture Analysis
ABSTRACT
Multimedia information retrieval is a part of computer science and is used for extracting semantic information from multimedia data sources such as images, audio, video and text. Automatic image annotation, also called automatic image tagging or automatic linguistic indexing, is the process in which a computer system automatically assigns metadata in the form of keywords or captions to a digital image. This application is widely used in image retrieval systems to locate and organize images from a database. In this paper we propose an efficient content based image retrieval (CBIR) system, motivated by the availability of large image databases. The image retrieval system retrieves images based on color and texture features. Firstly, the image is partitioned into equal-sized non-overlapping tiles. Methods such as the gray level co-occurrence matrix (GLCM), HSV color features, the dominant color descriptor (DCD), cumulative color histograms and the discrete wavelet transform are applied to the partitioned images. An integrated matching scheme can be used to compare the query images and database images based on the Most Similar Highest Priority (MSHP) principle. Using the sub-blocks of the query image and the images in the database, the adjacency matrix of a bipartite graph is formed.


INTRODUCTION:
Automatic image annotation is known as automatic image tagging or automatic linguistic indexing. It is the process in which a computer system automatically assigns metadata in the form of keywords or captions to a digital image. This application is widely used in image retrieval systems to locate and organize images from a database. The method can be considered as multi-class image classification with a very large number of classes. The advantage of automatic image annotation is that queries can be specified more naturally by the user. Content based image retrieval lets users search by image content such as color and texture, and also supports query-by-example. Traditional methods of image retrieval retrieve manually annotated images from large image databases, which is expensive, laborious and time consuming.
An image retrieval system is a computer system for searching, browsing and retrieving images from a large collection of digital images. Most common and traditional methods of image retrieval add metadata such as captions, descriptions and keywords to the images so that retrieval can be performed over the annotation words. Image search is used to find images from a database: a user provides query terms such as an image file/link or keywords, or clicks on some image, and the system returns images similar to that query. The similarity matching is done using the meta tags, the color distribution in images and region/shape attributes.

Image Meta Search: searching for images based on associated metadata such as text and keywords.
Content-Based Image Retrieval (CBIR): the main application of computer vision for retrieving images from an image database. The aim of CBIR is to retrieve images based on similarities in their contents, such as color, texture and shape, instead of textual descriptions, by comparing user-specified image features or a user-supplied query image.
CBIR Engine List: used to search images based on visual content such as color, texture, and shape/object.
Image Collection Exploration: used to find images using novel exploration paradigms.

Content Based Image Retrieval:
Content based image retrieval is also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), and it is the application of computer vision techniques to retrieve images from a digital image database. It addresses the problem of searching for images in a large image database. Content-based image retrieval provides more accuracy compared to traditional concept-based approaches.
Content-based search analyzes the contents of the image instead of metadata such as keywords, tags, or descriptions associated with the image. The term “content” in this context means textures, shapes, colors or any other information that can be derived from the image itself. CBIR is attractive because searches that rely purely on metadata depend entirely on annotation quality and completeness. Annotating images manually by entering metadata or keywords into a large database is time consuming and may not capture the keywords a user would prefer for describing the images. The CBIR method overcomes these drawbacks of concept-based (textual) image annotation, and it does so automatically.
Content Based Image Retrieval Using Image Distance Measures:-
In this approach an image distance measure is used to compare two images, such as a query image and an image from the database. An image distance measure compares two images in various dimensions such as color, shape, texture and others. Finally these matching results are sorted based on their distance to the query image.
Color
This is used to compute image distance measures based on color similarity. This is achieved by computing a color histogram for each image, which identifies the proportion of pixels within an image holding specific values. The images are then compared based on their colors; this is one of the most widely used techniques and can be completed without regard to image size or orientation. Color can also be segmented by spatial relationship and by region among several color regions.
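A minimal sketch of such a color histogram comparison in HSV space, assuming OpenCV (cv2) is available and using purely illustrative file names:

    import cv2

    def hue_histogram(path, bins=16):
        # normalised hue histogram of an image in HSV space
        img = cv2.imread(path)                                    # BGR image
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])   # hue channel only
        return cv2.normalize(hist, hist).flatten()

    h_query = hue_histogram("query.jpg")              # hypothetical file names
    h_db = hue_histogram("database_image.jpg")
    print(cv2.compareHist(h_query, h_db, cv2.HISTCMP_CORREL))     # 1.0 means identical hue distributions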
Texture
Textures are represented as texels, which are grouped into sets according to the textures detected in the image. These sets define the texture and also where in the image the texture is located. Texture measures are used to describe visual patterns in images. Using texture, such as two-dimensional gray level variation, specific textures in an image can be identified. From the relative intensity of pairs of pixels, properties such as contrast, regularity, coarseness and directionality are estimated. Identifying co-pixel variation patterns allows them to be grouped into particular classes of texture, such as silky or rough.
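For instance, gray level co-occurrence texture properties can be sketched with scikit-image (recent versions expose graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops). The random block below simply stands in for one image sub-block:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    block = (np.random.rand(64, 64) * 256).astype(np.uint8)   # synthetic 8-bit gray sub-block

    glcm = graycomatrix(block, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop)[0, 0]
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)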

Different methods of classifying textures are:-

Co-occurrence matrix.
Laws texture energy.
Wavelet transforms.

LITERATURE SURVEY:
In this paper a multiscale context-dependent classification algorithm is developed for segmenting images into four classes: background, photograph, text, and graph. Features used for categorization are based on the distribution patterns of wavelet coefficients in high frequency bands. The important attribute of this algorithm is its multiscale nature: it classifies an image at different resolutions adaptively, enabling accurate classification at class boundaries. The collected context information is used to improve classification accuracy. Two features are defined for distinguishing local image types according to the distribution patterns of wavelet coefficients, rather than using the moments of wavelet coefficients as classification features. The first feature measures the match between the empirical distribution of wavelet coefficients in high frequency bands and the Laplacian distribution. The second feature measures the wavelet coefficients in high frequency bands at a few discrete values. The algorithm was designed to calculate these features efficiently. The multiscale structure collects context information from low resolutions to high resolutions. Classification is done on large blocks at the starting resolution to avoid over-localization. Only blocks with extreme feature values are classified, so that blocks of mixed classes are left to be classified at higher resolutions, and the unclassified blocks are divided into smaller blocks at the higher resolution. These smaller blocks are classified based on the context information obtained at the lower resolution. Simulations show that the classification accuracy is significantly improved by the context information. The multiscale algorithm also provides both lower classification error rates and better visual results [1].
This paper proposed a content based image retrieval technique that can be applied in a number of different domains, such as medical imaging, data mining, weather forecasting, education, remote sensing and management of earth resources. The technique annotates images automatically based on features like color and texture and is known as WBCHIR (Wavelet Based Color Histogram Image Retrieval). Color and texture features are extracted using the color histogram and the wavelet transformation, and the combination of these two features is robust to scaling and translation of objects in an image. The proposed CBIR system was demonstrated on the WANG image database containing 1000 general-purpose color images with a fast retrieval method. The computational steps are effectively reduced by the wavelet transformation. Retrieval speed increases with the CBIR technique; the time taken to retrieve images from the 1000-image database is only 5-6 minutes [2].
This paper presents a content based image retrieval scheme for medical images: an efficient method of retrieving medical images based on the similarity of their visual contents. The CBIR-MD system is intended to help doctors retrieve related medical images from the image database to diagnose diseases efficiently. A CBIR system is proposed in which a query image is divided into identically sized sub-blocks and feature extraction for each sub-block is carried out using the Haar wavelet and the Fourier descriptor. Image matching is then performed using the Most Similar Highest Priority (MSHP) principle, and an adjacency matrix of a bipartite graph partitioning (BGP) is created using the sub-blocks of the query and target images [3].
In this paper a content based image retrieval (CBIR) system is proposed using the local and global color, texture, and shape features of selected image sub-blocks. These image sub-blocks are roughly identified by segmenting the image into a small number of partitions of different patterns, then finding the edge density and corner density in each image partition using edge thresholding and morphological dilation. The texture and color features of the identified regions are calculated using histograms of the quantized HSV color space and the gray level co-occurrence matrix (GLCM), and a combined color and texture feature vector is evaluated for each region. The shape features are computed using the Edge Histogram Descriptor (EHD). The distance between the features of the query image and a target image is computed using the Euclidean distance measure. The experimental results show that this proposed method provides better retrieval results than some of the existing methods [4].
An efficient content based image retrieval system plays an important role given the availability of large image databases. The Color-Texture and Dominant Color Based Image Retrieval System (CTDCIRS) retrieves images based on three features: Dynamic Dominant Color (DDC), Motif Co-Occurrence Matrix (MCM) and Difference Between Pixels of Scan Pattern (DBPSP). Using a fast color quantization algorithm, the image is divided into eight partitions, from which eight dominant colors are obtained. The texture of the image is obtained using the MCM and DBPSP methods. MCM is derived from the motif-transformed image. It is related to the color co-occurrence matrix (CCM), the conventional pattern co-occurrence matrix used to calculate the probability of occurrence of the same pixel color between each pixel and its neighbours in an image, which is an attribute of the image. The drawback of MCM is that it captures the arrangement of textures but not their complexity. To overcome this, DBPSP is used as an additional texture feature. The combination of dominant color, MCM and DBPSP features is used in the image retrieval system. This approach is efficient in retrieving the images the user is interested in [5].
In this paper a content based image retrieval approach is used that considers both high level and low level features, including the color, texture and shape present in each image. By extracting these features, images can be retrieved from the image database. To obtain better results, the RGB space is converted into HSV space, and the YCbCr space is used for low level features. Which low level features are used depends on the application: the color feature yields better results for natural images, and the co-occurrence matrix for textured images [6].
OBJECTIVE:

To retrieve images more efficiently or accurately.
To improve the efficiency and accuracy by using the multi features for image retrieval (discrete wavelet transform).
Image classification and accuracy analysis.
Time saving.
Robustness.

METHODOLOGY:

Discrete Wavelet Transform.
Conversion to HSV Color Space.
Color Histogram Generation.
Dominant Color Descriptor.
Gray-level Co-occurrence Matrix (GLCM).

ARCHITECTURE:
 
This architecture consists of two phases:

Training phase
Testing phase

These two phases of the proposed system consists of many blocks like image database, image partitioning, wavelet transform of image sub-blocks, RGB to HSV, non uniform quantization, histogram generation, dominant color description, textual analysis, query feature, similarity matching, feature database, returned images.
In the training phase, an input image is taken from the image database and partitioned into equal-sized sub-blocks. A wavelet transform is then applied to each sub-block of the partitioned image. Next, conversion from RGB to HSV takes place, followed by non-uniform quantization, and the result is input to the histogram generation block, where a color histogram is generated for each sub-block. The dominant color descriptors are then extracted and texture analysis of each sub-block is done. Finally the image features from the feature database and the input image features are compared for similarity matching using the MSHP principle, and the matched images are returned.
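A minimal sketch of the per-sub-block wavelet step, assuming the PyWavelets (pywt) package and a Haar wavelet (the wavelet family is our assumption; the paper does not specify one):

    import numpy as np
    import pywt

    sub_block = np.random.rand(64, 64)        # stands in for one tile of the partitioned image

    # single-level 2-D discrete wavelet transform
    cA, (cH, cV, cD) = pywt.dwt2(sub_block, "haar")

    # simple energy-style features from the approximation and detail bands
    features = [float(np.mean(np.abs(band))) for band in (cA, cH, cV, cD)]
    print(features)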
In the testing phase, the processing steps are the same as in the training phase, except that the input image is given as a query image by the user rather than taken from the image database.
OUTCOMES:

It provides accurate image retrieving.
Comparative analysis and graph.
Provides better efficiency.

CONCLUSION:
To retrieve images from an image database, we can use the discrete wavelet transform method together with color and texture features. The color features of the pixels in an image can be described using the HSV space, color histogram and DCD methods; similarly, the texture distribution can be described using the GLCM method. By using these methods we can achieve accurate retrieval of images.
REFERENCES:
[1] Jia Li, Member, IEEE, and Robert M. Gray, Fellow, IEEE, “Context-Based Multiscale Classification of Document Images Using Wavelet Coefficient Distributions”, IEEE Transactions on Image Processing, Vol. 9, No. 9, September 2000.
[2] Manimala Singha and K.Hemachandran, “Content Based Image Retrieval using Color and Texture”, Signal & Image Processing: An International Journal (SIPIJ) Vol.3, No.1, February 2012.
[3] Ashish Oberoi Deepak Sharma Manpreet Singh, “CBIR-MD/BGP: CBIR-MD System based on Bipartite Graph Partitioning”, International Journal of Computer Applications (0975 – 8887) Volume 52– No.15, August 2012.
[4] E. R. Vimina and K. Poulose Jacob, “CBIR Using Local and Global Properties of Image Sub-blocks”, International Journal of Advanced Science and Technology Vol. 48, November, 2012.
[5] M.Babu Rao Dr. B.Prabhakara Rao Dr. A.Govardhan, “CTDCIRS: Content based Image Retrieval System based on Dominant Color and Texture Features”, International Journal of Computer Applications (0975 – 8887) Volume 18– No.6, March 2011.
[6] Gauri Deshpande, Megha Borse, “Image Retrieval with the use of Color and Texture Feature”, Gauri Deshpande et al, / (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 2 (3) , 2011, 1018-1021.
[7] Sherin M. Youssef, Saleh Mesbah, Yasmine M. Mahmoud, “An Efficient Content-based Image Retrieval System Integrating Wavelet-based Image Sub-blocks with Dominant Colors and Texture Analysis”, Information Science and Digital Content Technology (ICIDT), 2012 8th International Conference on Volume:3 .
 

Determination of Sugar Content: Biochemical Analysis

Determination of sugars (total sugar, reducing sugar and non-reducing sugar) was carried out through the Lane and Eynon method as described by James (1995).
Total sugar and reducing sugar: 5 g of sample was taken into a beaker and 100 ml of warm water was added. The solution was stirred until all the soluble matter had dissolved and was filtered through Whatman paper into a 250 ml volumetric flask. 100 ml of the prepared solution was pipetted into a conical flask, 10 ml of diluted HCl was added and the mixture was boiled for 5 min. On cooling, the solution was neutralized to phenolphthalein with 10% NaOH and made up to volume in a 250 ml volumetric flask. This solution was used for titration against Fehling's solution, and the reading was calculated as follows.
% Total sugar = [Factor (4.95) x dilution (250) x 2.5] / [Titre x weight of sample x 10]
% Reducing sugar = [Factor (49.5) x dilution (250)] / [Titre x weight of sample x 10]
Non-reducing sugar was estimated as the difference between the total sugar content and the reducing sugar content.
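Translated directly into code, the calculations above look like this (variable names and the example titre values are illustrative only):

    def total_sugar_pct(titre_ml, sample_wt_g, factor=4.95, dilution=250):
        return (factor * dilution * 2.5) / (titre_ml * sample_wt_g * 10)

    def reducing_sugar_pct(titre_ml, sample_wt_g, factor=49.5, dilution=250):
        return (factor * dilution) / (titre_ml * sample_wt_g * 10)

    total = total_sugar_pct(titre_ml=5, sample_wt_g=5)        # example titre values only
    reducing = reducing_sugar_pct(titre_ml=30, sample_wt_g=5)
    print(total, reducing, total - reducing)                  # non-reducing sugar = total - reducing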
Determination of dietary fiber
Total dietary fibre content is measured according to the AOAC enzymatic-gravimetric method. The basis of this method is the isolation of dietary fibre by enzymatic digestion of the rest of the constituents of the material, and the residue is measured gravimetrically. Starch is digested by combining amylase (pH = 6.0, T = 100°C, t = 30 min) and amyloglucosidase (pH = 7.5, T = 60°C, t = 30 min); protein is digested with protease (pH = 4.5, T = 60°C, t = 30 min).
Analyses are performed using a total dietary fibre assay kit (Sigma product no. TDF 100 kit). Ethanol is added to precipitate the soluble fibre. Filtration is carried out in crucibles with 0.5 g of wet and homogeneously distributed celite. The residue is then filtered off and washed with 78% and 95% ethanol and acetone. The crucibles with the residue are dried to measure the weight of the residue. Protein, ash and starch are measured in each residue in order to correct the values for dietary fibre. The dietary fibre content was calculated by the following formula.
Crude fiber % = [(c − b) − (d − b)] x 100 / a   [6]
Determination of moisture content
Moisture content is one of the most important factors influencing seed quality and storability; therefore, its estimation during seed quality determination is important. Seed moisture content can be expressed either on a wet weight basis or on a dry weight basis; in seed testing it is always expressed on a wet weight basis, and the method for its calculation is given later. Seed moisture content can be determined either with an air oven or with a moisture meter. However, if the prescribed standard for moisture content is less than 8%, the air oven method shall be used.
(1) Air oven method: In this method, seed moisture is removed by drying at a specified temperature for a specified duration. The moisture content is expressed as a percentage of the original weight (wet weight basis). It is the most common and standard method for seed moisture determination.
(2) Moisture meters: A variety of moisture meters are available in the market.
These meters estimate seed moisture quickly but the estimation is not as precise as by the air-oven method. The meters should be calibrated and standardized against the air-oven method.
The moisture content of the sample is calculated using the following equation:
%W = (A − B) x 100 / B
Where:
%W = Percentage of moisture in the sample,
A = Weight of wet sample (grams), and
B = Weight of dry sample (grams)
Determination of ash content
For determination of ash content, the method of AOAC (2000) was followed. According to the method, 10 g of each sample was weighed into a silica crucible. The crucible was heated in a muffle furnace for about 3-5 h at 600°C. It was cooled in a desiccator and weighed after completion of ashing. To ensure completion of ashing, it was heated again in the furnace for half an hour more, cooled and weighed. This was repeated until the weight became constant (the ash became white or greyish white). The ash content was calculated from the weight of ash by the following formula.
Ash % = (Weight of ashed sample x 100) / Weight of sample taken
Determination of protein
Protein was determined using the micro Kjeldahl method as described in AOAC (2000).
2 g of sample material was taken in a Kjeldahl flask and 30 ml of concentrated sulfuric acid (H2SO4) was added, followed by the addition of 10 g potassium sulphate and 1 g copper sulphate. The mixture was heated, first gently and then strongly once the frothing had ceased. When the solution became colourless or clear, it was heated for another hour, allowed to cool, diluted with distilled water (washing the digestion flask) and transferred to an 800 ml Kjeldahl flask. Three or four pieces of granulated zinc and 100 ml of 40% caustic soda were added and the flask was connected to the splash heads of the distillation apparatus. Next, 25 ml of 0.1 N sulphuric acid was taken in the receiving flask and the mixture was distilled. When two-thirds of the liquid had been distilled, it was tested for completion of reaction. The flask was removed and titrated against 0.1 N caustic soda solution using methyl red indicator for determination of Kjeldahl nitrogen, which in turn gave the protein content. The nitrogen percentage was calculated by the following formula.
N% = [1.4 x (V2 − V1) x normality of HCl x 250 (dilution)] / Weight of sample
The protein content was then estimated by converting the nitrogen percentage to protein (James, 1995).
Protein % = N% x Conversion factor (6.25)
Where conversion factor = 100/N (N% in fruit products)
Determination of fat
Fat was determined by the Mojonnier method (James, 1995). The fat content was determined gravimetrically after extraction with diethyl ether (ethoxyethane) and petroleum ether from an ammonia-alcoholic solution of the sample. About 10 g of sample was taken into a Mojonnier tube. 1 ml of 0.880 ammonia and 10 ml of ethanol were added, mixed well and cooled. 25 ml of diethyl ether was added, the tube was stoppered and shaken vigorously, then 25 ml of petroleum ether was added and the tube was left to stand for 1 h. The extraction was repeated three times using a mixture of 5 ml petroleum ether, and each extract was added to the distillation flask. The solvents were distilled off, the flask was dried for 1 h at 100°C and reweighed. The percentage fat content of the sample was calculated by the following formula; the difference between the weight of the original flask and the flask plus extracted fat represents the weight of fat present in the original sample.
% Fat content of sample = (W2 − W1) x 100 / W3
Where:
W1 = Weight of empty flask (g)
W2 = Weight of flask + fat (g) and
W3 = Weight of sample taken (g).
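The moisture, ash, protein and fat formulas above are all simple ratios; a small sketch collecting them (all names and example values are illustrative, not measurements from this work):

    def moisture_pct(wet_g, dry_g):
        return (wet_g - dry_g) * 100 / dry_g                # %W = (A - B) x 100 / B

    def ash_pct(ash_g, sample_g):
        return ash_g * 100 / sample_g

    def protein_pct(v2_ml, v1_ml, normality, sample_g, factor=6.25):
        n_pct = 1.4 * (v2_ml - v1_ml) * normality * 250 / sample_g
        return n_pct * factor                               # protein % = N% x 6.25

    def fat_pct(flask_plus_fat_g, flask_g, sample_g):
        return (flask_plus_fat_g - flask_g) * 100 / sample_g

    print(moisture_pct(10.0, 9.1), ash_pct(0.8, 10.0),
          protein_pct(2.1, 2.0, 0.1, 2.0), fat_pct(52.4, 52.1, 10.0))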
 

Determination of Vitamin C Content in Fruit Juices

Research Project

 

Determination of Vitamin C Content in Fruit Juices

Determination of Vitamin C content in fresh and commercial fruit juices via redox titration.

 

Literature Review

 

Vitamin C is an essential nutrient that is required for humans to keep their overall health in the desired state. Humans (and a few other species) do not have the ability to synthesise Vitamin C, unlike most other species, making it necessary for them to obtain it through their diet. For many centuries, the condition scurvy was infamously known to baffle humankind. Particularly for seafaring men and explorers, this mysterious ailment inflicted aching pain and suffering, making each journey a gamble with death. It was not until the 1930s that scientists were able to determine the substance that cures scurvy, and thus refer to it as “Vitamin C” (American Chemical Society, 2019).


Benefits of Vitamin C include protection against immune system deficiencies, prenatal health problems, eye disease, skin wrinkling and also cardiovascular disease. Looking at the studies, one can conclude that Vitamin C does indeed provide numerous benefits to the body. For example, a contemporary study published in Seminars in Preventive and Alternative Medicine analysed over 100 studies spanning 10 years and concluded that Vitamin C does indeed offer many benefits, such as the ones mentioned above.

Vitamin C is also required for the biosynthesis of collagen, certain neurotransmitters and L-carnitine. In 2018, the University of Maryland conducted a study and concluded that Vitamin C intake can reduce the pressure that’s built up on the bones in a condition known as osteoarthritis. This could potentially prevent a person from being diagnosed with arthritis. (University of Maryland Medical Center, 2018).

It can also assist in protein metabolism. (Y, 2019) Vitamin C also provides the essential nutrients that help maintain the connective tissue and bones in our bodies. It ensures the optimal functionality of several enzymes, by activating certain liver-detoxifying systems. Vitamin C also acts as an antioxidant, as it reacts directly with free radicals in the aqueous state. This is important as it protects cellular function and as a result, this function can aid in fighting bacterial infections and increase the rate of regeneration of burns or wounds. (Mason, 2007)

The chemical name for Vitamin C is ascorbic acid, and it primarily exists in two forms: L-ascorbic acid and D-ascorbic acid. The L variety is found naturally, i.e. in fruits and vegetables, but can also be produced synthetically (as in supplements), both versions being interchangeable in their benefits. The D variety carries indistinguishable antioxidant properties but not the vitamin C activity of L-ascorbic acid. In addition, the D form is not used in supplements. Although they are both chemically Vitamin C, their nutrient properties differ, affecting their bioavailability. (SmartyPants Vitamins, 2019)

A deficiency in Vitamin C can ultimately be the leading factor to scurvy. Subclinical deficiencies can lead to signs of inadequate wound healing and ulceration.  Early signs of deficiencies are not too fatal, and may range from general weakness, shortness of breath, lethargy, and possibly aching of the limbs. As time progresses, other prominent conditions may become evident, such as petechiae after the application of a sphygmomanometer (blood pressure monitor), perifollicular haemorrhages, bleeding and swollen gums, pallor/anaemia (unhealthy pale appearance after a result of prolonged bleeding). Unfortunately, groups that are at high risk with these conditions include smokers, patients with diabetes and the elderly. (Mason, 2007)

Some people automatically assume that if a drink is advertised as 'natural' it will contain as many nutrients and vitamins as the source it has been derived from. However, one must keep in mind that companies favour the satisfaction of their target market, which means that commercial fruit juices (and sometimes natural fruit juices) will contain flavourings and chemicals that enhance the flavour and preserve the liquid for longer. Commercial fruit juices also react with oxygen, which suggests that many of the nutrients are lost to oxidation. The method that will be used is a titration, in which the Vitamin C in the fruit juices is titrated against aqueous sodium dichlorophenolindophenol with starch as an indicator.

Aim:

The aim of the experiment is to analyse the different concentrations of Vitamin C in fresh and commercial fruit juices. Comparison of the results will give one the idea of the differences in the amounts of Vitamin C.

– Fruits used: Orange, lime and grapefruit.

Hypothesis:

I hypothesise that fresh fruit juices will have a higher concentration of Vitamin C than the commercial fruit juices. I also predict that, among the fresh fruits, orange will have the highest Vitamin C content, as it is well known for having a high amount of Vitamin C.

Apparatus:

Fresh fruit juices (orange, lime and grapefruit)

Commercial fruit juice (orange, lime and grapefruit)

10ml of ascorbic acid

DCPIP solution (2,6-dichlorophenolindophenol)

Deionised/ Distilled Water

25ml of 0.5% Oxalic acid

250ml Beaker

250ml Conical Flask

Knife

25ml measuring cylinder

Titration kit (Boss, clamp + stand, funnel and tile)

Filter paper and Buchner Funnel

Pipetman 1000 with pipette tips

 

Method:

Initially, a solution of ascorbic acid was produced. This was done by weighing 0.2 g of ascorbic acid into 1 litre of deionised/distilled water. The concentration of ascorbic acid could then be calculated using the following formulas.

 

 Concentration = Mole ÷ Volume ∴  Conc = (Mass/Mr) ÷  Volume

Shortly after, a solution of DCPIP was made by weighing 0.24 g of it into 1 litre of deionised water. The same formulas were used to work out the concentration of DCPIP.
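As an illustration, the two concentrations can be computed as follows (the molar masses are assumed values, roughly 176.12 g/mol for ascorbic acid and about 290 g/mol for the DCPIP sodium salt, and are not taken from the method itself):

    def molar_concentration(mass_g, molar_mass_g_per_mol, volume_l):
        return (mass_g / molar_mass_g_per_mol) / volume_l   # conc = (mass / Mr) / volume

    c_ascorbic = molar_concentration(0.2, 176.12, 1.0)      # about 1.1e-3 mol/L
    c_dcpip = molar_concentration(0.24, 290.08, 1.0)        # about 8.3e-4 mol/L (assumed Mr)
    print(c_ascorbic, c_dcpip)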

25ml of 0.5% oxalic acid was thoroughly measured and transferred into a 250ml conical flask. Using a pipetman, the 10ml of ascorbic acid was added into the conical flask.

Using the titration apparatus, several trial titres were carried out by accurately titrating the DCPIP solution against the ascorbic acid solution. At the endpoint the pink colour of the DCPIP persisted, showing that it had reacted fully with the ascorbic acid present, and the volume of DCPIP used was recorded.

The juices of the fresh fruits were extracted by cutting them in half with the knife and squeezing them; filtering through a Buchner funnel and filter paper ensured that the seeds and skin remained separate from the juice, so the juice was obtained free of solid impurities.

A 10 ml portion of one of the fruit juices was pipetted into the conical flask containing the oxalic acid prepared in the earlier step, and a further 10 ml of deionised water was added.

Each fruit juice was titrated three times, and the average titre for each fruit was calculated from the three results.

Calculations needed towards the latter stages:

Mole of vitamin C = mole of DCPIP reacted = concentration of DCPIP × volume of DCPIP used (the reaction between ascorbic acid and DCPIP is 1:1)
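As an illustration of how these calculations might be chained together, the sketch below converts a recorded DCPIP titre into an approximate vitamin C content of a 10 ml juice sample. The titre volume and DCPIP concentration shown are invented placeholder numbers, and the 1:1 stoichiometry is the usual assumption for this titration rather than a value stated in the method.

# Sketch: vitamin C from a DCPIP titre, assuming a 1:1 reaction.
ASCORBIC_ACID_MR = 176.12  # g/mol (assumed value)

def vitamin_c_mg(dcpip_conc_mol_l, titre_ml, sample_ml=10.0):
    """Return mg of vitamin C in the sample and its concentration in mg/ml."""
    moles_dcpip = dcpip_conc_mol_l * (titre_ml / 1000.0)  # mol DCPIP used
    moles_vit_c = moles_dcpip                             # 1:1 stoichiometry
    mass_mg = moles_vit_c * ASCORBIC_ACID_MR * 1000.0     # mg in the sample
    return mass_mg, mass_mg / sample_ml

# Placeholder example values, not measured results:
mass, conc = vitamin_c_mg(dcpip_conc_mol_l=0.000827, titre_ml=8.5)
print(f"Vitamin C in sample: {mass:.2f} mg ({conc:.3f} mg/ml)")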
SmartyPants Vitamins, (2019). Vitamin C: Natural Vs Synthetic. [online] Available at: https://www.smartypantsvitamins.com/blogs/articles/vitamin-c-natural-vs-synthetic [Accessed 9 Jan. 2019].

University of Maryland Medical Center. (2018). Vitamin C (Ascorbic acid). [online] Available at: https://www.umm.edu/Health/Medical-Reference-Guide/Complementary-and-Alternative-Medicine-Guide/Supplement/Vitamin-C-Ascorbic-acid [Accessed 4 Feb. 2018].

 

What a Digital Forensics Investigator should know about Steganalysis of Digital Content

Table of Contents

Introduction

Background

Aim, Objectives and Research questions

Literature Review

Digital Steganography

What a digital forensics Investigator should know about steganalysis of digital content?

Steganalysis

Digital Forensics

Digital Security Issues

Legal Issues and Challenges

Discussion

Conclusion

References

Background

The Internet is used extensively to transfer data and information from one place to another. Data can be transferred legally between families, friends, corporations and other groups; however, it can also be transferred for illegal purposes, which poses a threat to society and to corporations. Steganography deals with secrecy and with converting communication into a hidden form (Lin 2018). The word steganography is derived from the Greek words ‘steganos’ (covered) and ‘graphein’ (writing), meaning covered writing. Steganography includes various techniques that are not directly linked to computers. In computer science, steganography refers to hiding data within non-secret data. It relies on the fact that files and data can be altered without losing their apparent originality, so that human senses cannot detect the changes.

Figure 1: Simple presentation of the principle of steganography

(Source: Cogranne, Sedighi and Fridrich, 2017)

The figure above illustrates the principle of steganography. A carrier image is chosen and a secret message is embedded into the carrier using a steganographic algorithm in such a way that the original image is not noticeably changed. The resulting stego-image is not visibly different from the original. This differs from cryptography: a stego-image still looks like an ordinary image to the user, whereas cryptography produces encrypted data protected by a key that cannot be accessed by other users (Cogranne, Sedighi and Fridrich, 2017). There are various classifications of steganography; the figure below describes them.

Figure 2: Classification of steganography

(Source: Manimegalai et al. 2014)

Technical steganography focuses on scientific approaches for hiding a message including the use of undetectable ink or microdots.

Forensic science is the use of technology to uncover scientific evidence in a variety of fields. Digital forensics refers to the investigation of crimes where digital evidence may be involved. It is the use of scientifically derived and proven methods for the preservation, identification, collection and interpretation of data from digital sources in order to restrict criminal activities and planned illegal acts (Manimegalai et al. 2014). Computer crime has increased in recent years, creating big challenges. Digital forensics therefore investigates crimes committed against or within an organization, and it is used to examine and expose the weaknesses in steganography. Because criminals increasingly use steganography to transfer hidden messages, digital forensics investigators need to understand steganalysis in detail.

 

 

 

Aim, Objectives and Research questions

The aim of the research is to establish what a digital forensics investigator should know about steganalysis of digital content.

The objectives of the research have been mentioned below:

To investigate issues in steganography

To implement steganalysis for defeating steganography

To identify what a digital forensics investigator should know about steganalysis of digital content

The research questions are mentioned below:

What are the issues in steganography?

How to defeat steganography using steganalysis?

What should a digital forensics investigator know about steganalysis of digital content?

Digital Steganography

Technology makes it easy and efficient to hide messages in the modern computer age. Computerized tools help to encode messages and hide them within other files. According to Srivastava et al. (2018), steganography is the art of concealing the existence of information within a carrier. The goal is that anyone intercepting the carrier file should not even suspect that a hidden message exists. Steganography hides the existence of a message, whereas cryptography makes the message impossible for outsiders to understand.

Three types of steganography techniques are discussed below:

Injection techniques: Data is concealed in parts of the original file that ordinary applications ignore. For example, IS conferences and journals routinely instruct authors to remove identifying data from submissions so as not to compromise the blind review process, which illustrates how much information can sit unseen inside a file (Chaumont 2018). Similarly, the normal view of a web page does not display a comment, but the source view reveals the hidden tag. Injection therefore uses space in the carrier file without altering its visible content, although the space available is limited.

Substitution techniques: A limited volume of data in the carrier file is replaced with a coded representation of the hidden message. In this technique the Least Significant Bit (LSB) of the binary representation of each picture element (pixel) in a graphic file is used. Consider the following nine octets:

10010101 00001101 11001001

10010110 00001111 11001010

10011111 00010000 11001011

The LSB algorithm can hide the nine bits 101101101 by changing the last bit of each octet as needed, resulting in:

10010101 00001100 11001001

10010111 00001110 11001011

10011111 00010000 11001011

This example shows how nine bits of information are hidden: the algorithm needed to change only four of the nine least significant bits in these nine bytes. Changing the last bit of a byte produces only a tiny change in the colour of a pixel, so the change in the image is not perceptible to the human eye (Yu, Cheng and Zhang 2016).
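To make the substitution step concrete, the following Python sketch embeds a bit string into the least significant bits of a byte sequence and reads it back. It is an illustrative toy working on raw bytes rather than a tool used in the cited sources; the carrier bytes are the nine octets from the example above.

def embed_lsb(carrier: bytes, bits: str) -> bytes:
    """Replace the least significant bit of each carrier byte with one message bit."""
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0b11111110) | int(bit)  # clear the LSB, then set it
    return bytes(out)

def extract_lsb(stego: bytes, n_bits: int) -> str:
    """Read back the first n_bits least significant bits."""
    return "".join(str(b & 1) for b in stego[:n_bits])

carrier = bytes([0b10010101, 0b00001101, 0b11001001,
                 0b10010110, 0b00001111, 0b11001010,
                 0b10011111, 0b00010000, 0b11001011])
stego = embed_lsb(carrier, "101101101")
print([f"{b:08b}" for b in stego])  # matches the modified octets shown above
print(extract_lsb(stego, 9))        # -> 101101101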

Figure 3: Block diagram of Steganography

(Source: Boroumand and Fridrich, 2017)

File creation: In the final stage, the stego message is used to generate a completely new file. Using Spam Mimic, for example, a short message can be hidden in text that appears to be spam: a casual reader would dismiss the message as spam and ignore it, while the intended receiver can decode it. The technique is inefficient, however, as evidenced by the three-word message “steganography is interesting” expanding into text with a word count of 574.

Figure 4: Steganographic Procedure

(Source: Boroumand and Fridrich, 2017)

What a digital forensics Investigator should know about steganalysis of digital content?

Steganalysis

Steganalysis is the process of detecting the small changes in the patterns of a file that indicate the presence of hidden messages (Boroumand and Fridrich, 2017). There are various types of steganalysis, as listed below:

Stego only attack- only the stego object is available for analysis;

Known cover attack- both the stego object and the original cover are available;

Chosen stego attack- the stego object and the embedding algorithm are available for analysis;

Chosen message attack- a chosen message is converted into a stego message for further analysis;

Known stego attack- the algorithm, the stego message and the cover message are all available for analysis.

Steganalysis is becoming more efficient, and the complexity of the process has gradually been reduced. Detection of steganography is often based on comparing the stego file with a reference (detection) file, which is typically larger than the stego file (Li, Huang and Shi, 2012). In practice, however, original files are rarely available from public sources. Many steganography techniques increase the size of the digital carrier file, and the structure of the stego message superimposed on the carrier data can be revealed by careful analysis of the file's properties.
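Where an original cover file is available (the known cover attack above), even a very simple comparison can hint at embedding. The sketch below counts byte positions whose least significant bit differs between a suspected stego file and its presumed cover; the file paths are placeholders and the approach is an illustrative simplification rather than a method taken from the cited papers.

def lsb_differences(cover_path: str, stego_path: str):
    """Count byte positions that differ, and those that differ only in their LSB."""
    with open(cover_path, "rb") as f:
        cover = f.read()
    with open(stego_path, "rb") as f:
        stego = f.read()
    n = min(len(cover), len(stego))
    lsb_only = sum(1 for a, b in zip(cover[:n], stego[:n]) if a ^ b == 1)
    any_diff = sum(1 for a, b in zip(cover[:n], stego[:n]) if a != b)
    return lsb_only, any_diff

# Placeholder paths; a large share of LSB-only differences suggests LSB embedding.
lsb_only, total = lsb_differences("cover.bmp", "suspect.bmp")
print(f"{lsb_only} of {total} differing bytes differ only in their LSB")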

Digital Forensics

Digital forensics focuses on the preservation of digital evidence. As Song et al. (2017) note, it involves the identification, validation, keyword searching and documentation of digital evidence drawn from digital sources. The growth of computing has greatly expanded the amount of digital media in the world, and digital forensic examination is used to expose and resolve the weaknesses in steganography (Sushith and Keerthana 2018). The expansion of digital communication over the Internet brings with it a variety of threats to data in transit, and digital forensics deals with investigating those threats and risks. Digital forensic experts therefore need to understand the different types of steganalysis that help to detect hidden messages.

There are various approaches to applying digital forensics to steganalysis. Some of them are discussed below:

Detection of software:

In many cases the presence of steganography software on a computer under investigation is itself the first clue. The Steganography Application Fingerprint Database (SAFDB) contains identification information on 625 applications associated with steganography, and the National Institute of Standards and Technology (NIST) maintains the National Software Reference Library, a list of digital signatures in which steganography software is included (AL-Salhi and Lu 2016). Traces of such software may remain in the Windows registry even after it has been removed. The presence of steganography software suggests, but does not by itself prove, harmful intent.
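One practical way to act on such signature databases is to hash files found on a seized drive and look the hashes up in a set of known steganography-tool hashes. The sketch below does this with SHA-256; the hash set and the mount path are invented placeholders, since SAFDB and the NSRL distribute their data in their own formats.

import hashlib
import os

# Placeholder hashes standing in for entries exported from a signature database.
KNOWN_STEGO_TOOL_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b8e3f8e5b2e7e9e2a6f0c9c2a",
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_for_known_tools(root: str):
    """Yield paths whose SHA-256 matches a known steganography-tool hash."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) in KNOWN_STEGO_TOOL_HASHES:
                    yield path
            except OSError:
                continue  # unreadable file; skip

for hit in scan_for_known_tools("/evidence/image_mount"):  # placeholder mount point
    print("Known steganography tool found:", hit)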

Detecting pairs of carrier files and stego files:

If both the original carrier file and its stego counterpart can be found on the same system, comparing the pair exposes exactly what was modified. Even when files have been deleted, they can often be recovered from the Recycle Bin or with file recovery software, allowing such pairs to be reconstructed for analysis.

Using Keywords:

Another detection method is to search file names and program file content for steganography-related keywords. The keyword list needs to be specific to steganography; for example, the search term “steg” can be used to flag candidate files (Shih 2017). The effectiveness of this approach depends on the keyword dictionary, and it produces both false positives and false negatives. In the past, most steganographic tools targeted specific applications.
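A minimal sketch of this kind of keyword sweep is shown below, scanning file names under a directory tree for steganography-related terms; the keyword list and the path are assumptions made for illustration.

import os

KEYWORDS = ("steg", "stego", "outguess", "jphide")  # illustrative keyword list

def keyword_hits(root: str):
    """Yield file paths whose names contain any steganography-related keyword."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            lowered = name.lower()
            if any(k in lowered for k in KEYWORDS):
                yield os.path.join(dirpath, name)

for path in keyword_hits("/evidence/image_mount"):  # placeholder mount point
    print("Keyword match:", path)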

Physical crime scene investigation:

Finally, physical crime scene investigation can also yield useful information: passwords for steganography tools are sometimes printed on notes and stuck under objects in the environment, which can provide clues for potential passwords.

Digital Security Issues

Steganography tools can be used legitimately to protect corporate information during transfer. However, they are becoming widely available and easier to use, and their illicit use alongside legitimate use presents a new challenge. There have been many cases in which employees installed unapproved applications, such as instant messaging clients, screen savers and other peer-to-peer software (Song et al. 2015). Acceptable use policies often fail to cover steganography software, because it also has legitimate uses and is therefore not listed among banned software.

Intrusion detection software can help by flagging abnormal transfers of graphics files: most business processes do not involve heavy use of graphic files, so a marked increase in such traffic over the Internet is suspicious. Stego files detected on networked computers can then be isolated, for example within emails, before they leave the organization.

Steganography is therefore a security threat to an organization. A message hidden in content cannot be read by the naked eye, so illegal information may be sent into or out of the organization without permission (Dutta 2016). Proper monitoring and checking of messages entering and leaving the organization is needed, and both active and passive tools are required to monitor steganographic activity.

Legal Issues and Challenges

Various laws touch on the technology behind steganography, but they are difficult to enforce in the Internet age. Steganographic Internet communications cross international borders where no single jurisdiction applies. In 1952 the US enacted Section 1343 of the Federal Criminal Code, the wire fraud provision, which has since been applied to the Internet (Wu, Zhong and Liu 2016). Court-ordered interception was designed around telephone conversations and was later applied to mobile communications, but, as Alattar, Memon and Heitzenrater (2015) argue, criminals easily bypass such orders by using disposable phones. Technologies such as Voice over Internet Protocol (VoIP) create further challenges: Internet telephony breaks phone conversations into data packets and sends them over the Internet, so the traffic passes through many different routes and destinations. Monitoring it would require capturing the voice streams at various central locations on the way to their intended destinations, so in practice monitoring is only effective close to the starting point, after which the packets disperse towards their destinations.

There is a delicate balance between the loss of personal privacy and the interests of wider society. Groups such as the American Civil Liberties Union (ACLU) have opposed law enforcement monitoring of communications (Xia et al. 2014); the ACLU's position on technology and privacy is that the US risks becoming a surveillance society. There are also problems with newer legislation (Denemark, Boroumand and Fridrich 2016): one law had to be amended in 2004 because, in its original form, it would have prohibited various technologies including steganography, and the government has since tried to set out different approaches to encryption technology.

With the development of computers and their growing use in everyday life and work, the problem of data security has become increasingly important. One of the areas examined in data security is the exchange of information through cover media, and different strategies such as cryptography, steganography and coding have been used for this purpose (Dang-Nguyen et al. 2015). Most steganography has been applied to pictures, video clips, texts, music and sounds. Today, by combining steganography with these other techniques, data security has improved significantly.

Most approaches to steganography have one thing in common: they hide the secret message inside a physical object that is then sent. As illustrated in Figure 4, the cover image is passed into the embedding function together with the message to be encoded, producing a steganographic image that contains the hidden message. A key, usually a passphrase, is often used to protect the hidden message; it is used to encrypt and decrypt the message during embedding and extraction. Secrets can be hidden inside a wide range of cover data: text, pictures, audio, video and more, and tools are available for storing secrets within each kind of cover source (Watson and Dehghantanha 2016).

It can be concluded that steganography enables secret, hidden messages to be sent with the help of a carrier file, without being detected by the naked eye. Steganalysis helps to detect the weaknesses in steganography and to uncover such messages. The techniques used in steganalysis, and the knowledge of steganalysis that digital forensic experts require, have been discussed in this report.

Alattar, A.M., Memon, N.D. and Heitzenrater, C.D., (2015). Media Watermarking, Security, and Forensics 2015. Proc. of SPIE-IS&T Vol, 9409, [online] pp.940901-1. Available at: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9409/940901/Front-Matter-Volume-9409/10.1117/12.2192149.full?SSO=1. [Accessed 8 Apr 2018]

AL-Salhi, Y.E. and Lu, S., 2016. Quantum image steganography and steganalysis based on LSQu-blocks image information concealing algorithm. International Journal of Theoretical Physics, 55(8), [online] pp.3722-3736. Available at: https://link.springer.com/article/10.1007/s10773-016-3001-3. [Accessed 8 Apr 2018]

Boroumand, M. and Fridrich, J., (2017), June. Nonlinear feature normalization in steganalysis. In Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security [online] (pp. 45-54). ACM. Available at: http://ws2.binghamton.edu/fridrich/Research/nonlinear-feature-normalization.pdf. [Accessed 8 Apr 2018]

Bossler, A., Holt, T.J. and Seigfried-Spellar, K.C., (2017). Cybercrime and digital forensics: An introduction. Routledge. Available at: https://www.taylorfrancis.com/books/9781315296975. [Accessed 8 Apr 2018]

Chaumont, M., (2018), January. The emergence of Deep Learning in steganography and steganalysis. [online] In Journée” Stéganalyse: Enjeux et Méthodes”, labelisée par le GDR ISIS et le pré-GDR sécurité. Available at: https://hal-lirmm.ccsd.cnrs.fr/lirmm-01777391/document. [Accessed 8 Apr 2018]

Cogranne, R., Sedighi, V. and Fridrich, J., (2017), March. Practical strategies for content-adaptive batch steganography and pooled steganalysis. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on [online](pp. 2122-2126). IEEE. Available at: https://hal.archives-ouvertes.fr/hal-01915645/document. [Accessed 8 Apr 2018]

Dang-Nguyen, D.T., Pasquini, C., Conotter, V. and Boato, G., (2015), March. RAISE: a raw images dataset for digital image forensics. In Proceedings of the 6th ACM Multimedia Systems Conference [online] (pp. 219-224). ACM. Available at: https://dl.acm.org/citation.cfm?id=2713194. [Accessed 19 Oct 2018]

Denemark, T.D., Boroumand, M. and Fridrich, J., (2016). Steganalysis features for content-adaptive JPEG steganography. IEEE Transactions on Information Forensics and Security, 11(8), [online] pp.1736-1746. Available at: http://millenniumsoftsol.com/courses/IEEETitles/Dotnet/Steganalysis-features-for-content.pdf. [Accessed 12 Oct 2018]

Dutta, S., (2016). Exploring Different Techniques in Steganography and Steganalysis. ASIAN JOURNAL FOR CONVERGENCE IN TECHNOLOGY (AJCT)-UGC LISTED, [online] 2. Available at: http://asianssr.org/index.php/ajct/article/view/487. [Accessed 9 Nov 2018]

Farid, H., (2018). Digital forensics in a post-truth age. Forensic science international, 289, [online] pp.268-269. Available at: https://www.cs.dartmouth.edu/farid/downloads/publications/fsi18.pdf. [Accessed 11 Nov 2018]

Li, B., He, J., Huang, J. and Shi, Y.Q., (2011). A survey on image steganography and steganalysis. Journal of Information Hiding and Multimedia Signal Processing, 2(2), [online] pp.142-172. Available at: http://bit.kuas.edu.tw/~jihmsp/2011/vol2/JIH-MSP-2011-03-005.pdf. [Accessed 8 Apr 2018]

Song, X., Liu, F., Chen, L., Yang, C. and Luo, X., (2017). Optimal Gabor Filters for Steganalysis of Content-Adaptive JPEG Steganography. KSII Transactions on Internet and Information Systems (TIIS), 11(1), [online] pp.552-569. Available at: http://www.dbpia.co.kr/Journal/ArticleDetail/NODE07102372. [Accessed 8 Apr 2018]

Li, E. and Yu, J., (2017). A Forensic Mobile Application Designed for both Steganalysis and Steganography in Digital Images. Electronic Imaging, 2017(6), [online] pp.84-89. Available at: https://www.ingentaconnect.com/contentone/ist/ei/2017/00002017/00000006/art00012?crawler=true&mimetype=application/pdf. [Accessed 10 Nov 2018]

Lin, X., (2018). Steganography and Steganalysis. In Introductory Computer Forensics [online] (pp. 557-577). Springer, Cham. Available at: https://link.springer.com/chapter/10.1007/978-3-030-00581-8_21. [Accessed 20 Oct 2018]

Manimegalai, P., Gomathi, K.S., Ponniselvi, D. and Santha, M., (2014). The Image Steganography And Steganalysis Based On Peak-Shaped Technique For Mp3 Audio And Video. International Journal of Computer Science and Mobile Computing, 3, [online] pp.300-308. Available at: https://s3.amazonaws.com/academia.edu.documents/32817700/V3I1201453.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1544511320&Signature=rCAQkBesg3cTJr2aS0%2FTvWDLvok%3D&response-content-disposition=inline%3B%20filename%3DTHE_IMAGE_STEGANOGRAPHY_AND_STEGANALYSIS.pdf. [Accessed 12 Oct 2018]

Shih, F.Y., (2017). Digital watermarking and steganography: fundamentals and techniques. [online] CRC press. Available at: https://www.taylorfrancis.com/books/9781498738774. [Accessed 26 Oct 2018]

Song, X., Liu, F., Yang, C., Luo, X. and Zhang, Y., (2015), June. Steganalysis of adaptive JPEG steganography using 2D Gabor filters. In Proceedings of the 3rd ACM workshop on information hiding and multimedia security [online] (pp. 15-23). ACM. Available at: http://or.nsfc.gov.cn/bitstream/00001903-5/422527/1/1000014340409.pdf. [Accessed 18 Apr 2018]

Srivastava, S., Thakral, P., Bansal, V. and Shandil, V., (2018). A Novel Image Steganography and Steganalysis Technique Based on Pattern Searching. In Optical and Wireless Technologies [online] (pp. 531-537). Springer, Singapore. Available at: https://link.springer.com/chapter/10.1007/978-981-10-7395-3_59. [Accessed 28 Apr 2018]

Sushith, M. and Keerthana, A., (2018). Improved Steganography and Steganalysis using Image Processing. International Journal of Engineering Science, [online] 17418. Available at: http://ijesc.org/upload/acb548667f42dac023d83bbce1a470c5.Improved%20Steganography%20and%20Steganalysis%20using%20Image%20Processing%20Technique.pdf. [Accessed 8 Nov 2018]

Watson, S. and Dehghantanha, A., (2016). Digital forensics: the missing piece of the internet of things promise. Computer Fraud & Security, 2016(6), [online] pp.5-8. Available at: http://usir.salford.ac.uk/39539/7/IoT%20Forensics%20in%20CFS%20format.pdf. [Accessed 17 Nov 2018]

Wu, S., Zhong, S.H. and Liu, Y., (2016), December. Steganalysis via deep residual network. In Parallel and Distributed Systems (ICPADS), 2016 IEEE 22nd International Conference on [online] (pp. 1233-1236). IEEE. Available at: http://futuremedia.szu.edu.cn/assets/files/ICPADS.pdf. [Accessed 12 Apr 2018]

Xia, Z., Wang, X., Sun, X. and Wang, B., (2014). Steganalysis of least significant bit matching using multi‐order differences. Security and Communication Networks, [online] 7(8), pp.1283-1291. Available at: https://onlinelibrary.wiley.com/doi/pdf/10.1002/sec.864. [Accessed 3 Sep 2018]

Yu, J., Li, F., Cheng, H. and Zhang, X., (2016). Spatial steganalysis using contrast of residuals. IEEE Signal Processing Letters, [online] 23(7), pp.989-992. Available at: https://ieeexplore.ieee.org/abstract/document/7482790/. [Accessed 9 Aug 2018]

 

Content management system

Abstract
This assignment is intended to introduce students to how a project is managed and developed. The project concerns planning the project management of the move of a large corporate website from a static HTML version to a data-driven system based on a Web Content Management System. To plan the project, three options have to be evaluated: Joomla, Drupal and SharePoint. As the project plan covers a 9-month period, the time schedule has to fit within this period. A Gantt chart and resource utilisation plan also have to be produced in Microsoft Project and Microsoft Excel, together with a lessons learned report.
Project Management is the application of knowledge, skills, tools and techniques to project activities in order to meet or exceed stakeholders’ needs and expectations from a project.
Project management is the discipline of organizing and managing resources in such a way that the project is completed within defined scope, quality, time and cost constraints.
Executive Summary
Introduction:
This document serves as a course requirement of the ITPQM assignment given by Greenwich University. It covers the move from the previous static HTML version to a data-driven system based on a Web Content Management System (WCMS). Key parts of this report are the choice of the content management system and the evaluation of MS Excel and MS Project. Business criteria have to be selected for the chosen WCMS, which will be selected after evaluating the options in MS Excel. This assignment helps us to understand whether MS Excel and MS Project have features and functions that would support project management.

Research On Web Content Management System (CMS)
CMS stands for Content Management System, a software application used for the creation, storage, and management of web content in many formats. A Web Content Management System (WCM, WCMS or Web CMS) is content management system (CMS) software, implemented as a Web application, for creating and managing HTML content. It is used to manage and control a large, dynamic collection of Web material (HTML documents and their associated images). A WCMS facilitates content creation, content control, editing, and essential Web maintenance functions. The software provides authoring (and other) tools designed to allow users with little knowledge of programming languages or markup languages to create and manage content with relative ease.
Most systems use a database to store content, metadata, or artifacts that might be needed by the system. Content is frequently, but not universally, stored as XML to facilitate reuse and enable flexible presentation options.
Administration is done through browser-based interfaces, but some systems require the use of a fat client.
A presentation layer displays the content to Web-site visitors based on a set of templates. The templates are sometimes XSLT files.
Most systems use server-side caching to boost performance. This works best when the WCMS content is not changed often but visits happen on a regular basis.
Unlike Web-site builders, a WCMS allows non-technical users to make changes to a website with little training. A WCMS typically requires an experienced coder to set up and add features, but is primarily a Web-site maintenance tool for non-technical administrators.
This means users will not need to hire a web design company every time they want to update the site or add content.
Benefits of WCMS:
Upon completion of this project plan WCMS derives following benefits:

Customizable pages and portal elements (banners, colors, etc.) that can be tailored globally or targeted individually
Targeted announcements based on Banner criteria
Web-based tools to manage user and group profiles, announcements, content and layout, and performance and usage
A portal interface to control channel and content delivery
An integration suite to share data between third-party applications, and databases
Increased capacity for growth in any organization.

Project Scope
Scope:
The objectives of WCMS scope:

Procure and install the selected web content management system
Plan, test and deploy initial information architecture framework and update, document or leverage from existing
Templates
Workflows for known sites
Roles and responsibilities
Content guidelines
Support and training materials
Services to be provided
System schematic – logical and physical design
Plan, test and execute

Scope Elements:
Several elements lack sufficient clarity without further analysis to determine whether they are in or out of scope:

Number and scope of site migration: the number of Humanities departments that can be accommodated within the project is unknown. The scope of the University Relations migration is not fully defined.
Use of authoritative course information is currently available using the “template system” and some academic departments expect this functionality. Whether it is in scope for Phase II is dependent upon analysis of complexities involved.
Fully redundant off-site disaster recovery of editing and publishing functionality may prove too complex and costly.

Out of Scope:
Other deliverables that are out of scope for the WCMS Project include:

Creation of strategic and implementation plans for corporate response to web security and policy/regulatory compliance beyond Design Review Board process.
Web standards work for development and integration (with the exception of standards and release policy for code passed via system to web layer.)
Full or extensive evaluation and mitigation for compliance and accessibility issues
Extensive service definition of the new web services to be deployed
Retirement/repurpose of existing web content delivery infrastructures
Design/revision of new campus template “Look and Feel”
Resolution of funding source for hiring of operational staff.

Project Dependencies:
The dependencies below introduce risk that must be mitigated and, therefore, are included in the Risk Management Plan.
Other Web Program Components

Web Function and Design Project: template design and information architecture deliverables have many functional and schedule-related interdependencies.
Web Service Definition Project will derive information from
WCMS as a result of practical migration experience and the WCMS project will require the Service Definition project to provide direction.

Web Governance:
The WCMS project will rely upon Web Governance to develop policy where needed for implementation or operations.
Description of Joomla, Drupal & SharePoint:
Drupal:
Drupal is a free software package that allows an individual or a community of users to easily publish, manage and organize a wide variety of content on a website. Tens of thousands of people and organizations are using Drupal to power scores of different web sites, including

Community web portals
Discussion sites
Corporate web sites
Intranet applications
Personal web sites or blogs
Aficionado sites
E-commerce applications
Resource directories
Social Networking sites

The built-in functionality, combined with dozens of freely available add-on modules, will enable features such as:

Electronic commerce
Blogs
Collaborative authoring environments
Forums
Peer-to-peer networking
Newsletters
Podcasting
Picture galleries
File uploads and downloads

General features

Collaborative Book- collaborative book feature lets one setup a “book” and then authorize other individuals to contribute content.
Friendly URLs- Drupal uses Apache’s mod_rewrite to enable customizable URLs that are both user and search engine friendly.
Modules- The Drupal community has contributed many modules which provide functionality that extends Drupal core.
Online help- Drupal has a robust online help system built into the core help text.
Open source- The source code of Drupal is freely available under the terms of the GNU General Public License 2 (GPL). Unlike proprietary blogging or content management systems, Drupal’s feature set is fully available to extend or customize as needed.
Personalization- A robust personalization environment is at the core of Drupal. Both the content and the presentation can be individualized based on user-defined preferences.
Role based permission system- Drupal administrators don’t have to tediously setup permissions for each user. Instead, they assign permissions to roles and then group like users into a role group.
Searching- All content in Drupal is fully indexed and searchable at all times if one takes advantage of the built-in search module.

Content management

Polls- Drupal comes with a poll module which enables admins and/or users to create polls and show them on various pages.
Templating- Drupal’s theme system separates content from presentation allowing you to control the look and feel of your Drupal site. Templates are created from standard HTML and PHP coding meaning that you don’t have to learn a proprietary templating language.
Threaded comments- Drupal provides a powerful threaded comment model for enabling discussion on published content. Comments are hierarchical as in a newsgroup or forum.
Version control- Drupal’s version control system tracks the details of content updates including who changed it, what was changed, the date and time of changes made to your content and more. Version control features provide an option to keep a comment log and enables you to roll-back content to an earlier version.

Joomla:
Joomla is an award-winning content management system (CMS), which enables you to build Web sites and powerful online applications. Many aspects, including its ease-of-use and extensibility, have made Joomla the most popular Web site software available. Best of all, Joomla is an open source solution that is freely available to everyone.
Joomla is used all over the world to power Web sites of all shapes and sizes. For example:

Corporate Web sites or portals
Corporate intranets and extranets
Online magazines, newspapers, and publications
E-commerce and online reservations
Government applications
Small business Web sites
Non-profit and organizational Web sites
Community-based portals
School and church Web sites
Personal or family homepages

Joomla is designed to be easy to install and set up. Many Web hosting services offer a single-click install, getting your new site up and running in just a few minutes. Since Joomla is so easy to use, as Web designers or developers we can quickly build sites for our clients. Then, with a minimal amount of instruction, we can empower our clients to easily manage their sites themselves.
SharePoint:
SharePoint is a collection of products and software elements that includes, among a growing selection of components, web browser based collaboration functions, process management modules and search modules. SharePoint can be used to host web sites that access shared workspaces, information stores and documents, as well as to host defined applications such as wikis and blogs. All users can manipulate proprietary controls called “web parts” or interact with pieces of content such as lists and document libraries.
Some Features of SharePoint:
Team Collaboration, Review Workflows, Premium Web, Slide Library (splits a PPT presentation into individually viewable slides on the site without breaking the PPT file open), Premium Web Application, Premium Root Site, Management Library, Global Web Parts, Enhanced Search, Base Web Application, Spell Checking, Signatures Workflow, Reporting, Premium Site, Publishing Web, Base Web, Base Site, Basic Search, Translation Workflow (a workflow for sending a document through rounds of translation into multiple languages), Expiration Workflow, Excel Server, Search Web Parts, Publishing Site, Issue Tracking Workflow.
EVALUATION ON DRUPAL, JOOMLA & SHAREPOINT
Joomla:

Joomla is designed in a way that it can work perfectly in a shared hosting environment.
It is the least expensive package and the most familiar to users.
Its installation is simple, just like any other desktop software.
It supports several extensions, add-ons and plug-ins.
Joomla is written in PHP, a general-purpose scripting language well suited to web development.
Joomla integrates with CiviCRM and other common packages such as GetActive or DemocracyInAction.

Drupal

Drupal can work just like Joomla in shared hosting environments.
It has powerful content editing tools that let common users and web developers create websites without worrying about code.
Drupal's installation procedure is somewhat more difficult than Joomla's.
Drupal is also developed in PHP and offers the common functionality of Joomla, and in places more sophisticated features, which makes it harder for non-technical users to master than Joomla.
It contains non-profit-centric add-ons such as event registration, online donation and email newsletters.
Even though Drupal has plugins, they are less powerful than Joomla's.

SharePoint:

SharePoint uses IIS/.NET as its application server, whereas Joomla and Drupal use CGI and Apache.
The application cost is $4000, compared to the other two, which are free to use.
Security is more of a strong point in SharePoint than in Joomla and Drupal.
Ease of use, performance and management are more manageable compared to both Drupal and Joomla.
However, SharePoint supports the ASP.NET programming language; if the existing site was built using PHP, it will be difficult to use SharePoint to recreate the same site online.

KEY FUNCTIONALITIES OF A WCMS
Content management systems manage content creation, review and approval processes for web site content. A content management system provides content version control, collaboration utilities, and user- or document-level security.
Some of the functions of CMS are:

Content authoring: the ability to create content through a content editor, import content, and deploy, present and aggregate items.
Content acquisition: the ability to gather content through import or metadata.
Content aggregation: the process of gathering information from different sources into one overall structure.
Output and content presentation: presenting content in different formats such as HTML or XML.
Workflow management: the process of creating and managing flows of sequential and parallel tasks that must be accomplished.
Version control and management: lets multiple users make simultaneous changes to content and keeps track of them.
Security management: access to content is controlled through authentication, role and directory management, access control settings and passwords.
Product technology and support: defines the technical architecture of the product and the technological environment in which the product can successfully run, such as product and application architecture, software usability and administration, platform and database support, application standards support, communications and protocol support, and integration capabilities.

Project Goals and Objectives:
The objectives of WCMS with the original scope:

Procure and install the selected web content management system
Plan, test and deploy initial information architecture framework and update, document or leverage from existing
Templates
Workflows for known sites
Roles and responsibilities
Content guidelines
Support and training materials
Services to be provided
System schematic – logical and physical design
Plan, test and execute

WSM criteria of WCMS:
Criteria to which alternative to choose for Web Content Management System (WCMS):
Core functionality
When most people think of content management, they are thinking of the creation, deletion, editing and organizing of pages. They assume all content management systems do this and so take the functionality for granted. However that is not necessarily the case. There is also no guarantee that it is done in an intuitive fashion.
Not all blogging platforms for example allow the owner to manage and organize pages into a tree hierarchy. Instead the individual ‘posts’ are automatically organized by criteria such as date or category. In some situations this is perfectly adequate. In fact this limitation in functionality keeps the interface simple and easy to understand. However, in other circumstances the absence of this functionality can be frustrating.
The editor
The majority of content management systems have a WYSIWYG editor. Strangely this editor is often ill considered, despite the fact that it is the most used feature within the system.
The editor is the interface through which content is added and amended. Traditionally, it has also allowed the content provider to apply basic formatting such as the selection of fonts and color. However more recently there has been a move away from this type of editor to something that reflects the principles of best practice.
The danger of traditional WYSIWYG editors is twofold. First, they give the content provider too much design control. They are able to customize the appearance of a page to such an extent that it could undermine the consistency of design and branding. Second, in order to achieve this level of design control the CMS mixes design and content.
The new generation of editors takes a different approach. The content provider uses the editor to markup headings, lists, links and other elements without dictating how they should appear. Ensure your list of requirements includes an editor that uses this approach and does not give content providers control over appearance. At the very least look for content management systems that allow the editor to be replaced with a more appropriate solution.
The editor should also be able to handle external assets including images and downloads. That brings us on to the management of these assets.
Managing assets
The management of images and files is handled badly by some CMS packages. Issues of accessibility and ease of use can cause frustration with badly designed systems. Images in particular can cause problems. Ensure that the content management system you select forces content providers to add alt attributes to imagery. You may also want a CMS that provides basic image editing tools such as crop, resize and rotate. However, finding such a CMS can be a challenge.
Also consider how the content management system deals with uploading and attaching PDFs, Word documents and other similar files. How are they then displayed to users? What descriptions can be attached to the files, and is the search capable of indexing them?
Search is an important aspect of any site. Approximately half of users will start with search when looking for content. However, often the search functionality available in content management systems is inadequate.
User interaction
If you intend to gather user feedback, your CMS must provide that functionality or allow third party plug-in to do so. Equally, if you want a community on your site then you will require functionality such as chat, forums, comments and ratings.
As a minimum you will require the ability to post forms and collect the responses. How easy does the CMS make this process? Can you customize the fields or does that require technical expertise? What about the results? Can you specify who they are emailed to? Can they be written to a database or outputted as an excel document? Consider the type of functionality that you will require and look for a CMS that supports that.
Roles and permissions
As the number of content providers increases, you will want more control over who can edit what. For example, personnel should be able to post job advertisements but not add content to the homepage. This requires a content management system that supports permissions. Although implementation can vary, permissions normally allow you to specify whether users can edit specific pages or even entire sections of the site. As the number of contributors grows still further you may require one individual to review the content being posted to ensure accuracy and a consistent tone. Alternatively, content might be inputted by a junior member of staff who requires the approval of somebody more senior before making that content live.
In both cases this requires a CMS that supports multiple roles. This can be as simple as editor and approver, or as complex as customized roles with different permissions.
Finally, enterprise-level content management systems support entire workflows where a page update has to go through a series of checkpoints before being allowed to go live. These complex scenarios require the ability to roll back pages to a previous version, as in the sketch that follows.
Being able to revert to a previous version of a page allows you to quickly recover if something is posted by accident.
Some content management systems have complex versioning that allows you to roll back to a specific date. However, in most cases this is overkill. The most common use of versioning is simply to return to the last saved state.
Although this sounds like an indispensable feature, in my experience it is rarely used except in complex workflow situations. That said, although versioning was once an enterprise-level tool, it is increasingly becoming available in most content management systems. This is also true of multi-site support.
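To make the roles-and-workflow idea concrete, here is a minimal sketch of how editor and approver roles might gate a page going live, with a simple rollback to the last saved state. The role names, states and classes are invented for illustration and do not correspond to any particular CMS.

from dataclasses import dataclass, field

ROLE_PERMISSIONS = {
    "editor":   {"edit"},              # illustrative roles and permissions
    "approver": {"edit", "approve"},
}

@dataclass
class Page:
    title: str
    state: str = "draft"               # draft -> pending_approval -> live
    history: list = field(default_factory=list)

    def submit(self, user_role: str):
        if "edit" not in ROLE_PERMISSIONS.get(user_role, set()):
            raise PermissionError(f"{user_role} cannot edit pages")
        self.history.append(self.state)
        self.state = "pending_approval"

    def approve(self, user_role: str):
        if "approve" not in ROLE_PERMISSIONS.get(user_role, set()):
            raise PermissionError(f"{user_role} cannot approve pages")
        self.history.append(self.state)
        self.state = "live"

    def rollback(self):
        """Return to the last saved state, the most common use of versioning."""
        if self.history:
            self.state = self.history.pop()

page = Page("Job advertisement")
page.submit("editor")      # editor drafts and submits
page.approve("approver")   # approver makes it live
print(page.state)          # -> live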
Multiple site support
With more content management systems allowing you to run multiple websites from the same installation, I would recommend that this is a must-have feature.
Although you may not currently need to manage more than a single site, that could change. You may decide to launch a new site targeting a different audience.
Alternatively with the growth of the mobile web, you may create a separate site designed for mobile devices. Whatever the reason, having the flexibility to run multiple websites is important.
Multilingual support
It is easy to dismiss the need to support multiple languages. Your site may be targeted specifically at the domestic market or you may sell a language specific product. However think twice before dismissing this requirement.
Even if your product is language specific, that could change. It is important that your cms can grow with your business and changing requirements.
Also just because you are targeting the domestic market does not mean you can ignore language. We live in a multicultural society where numerous languages are spoken. Being able to accommodate these differences provides a significant edge on your competition.
That said, do think through the ramifications of this requirement. Just because you have the ability to add multiple languages doesn’t mean you have the content. Too many of my clients have insisted on multilingual support and yet have never used it. They have failed to consider where they are going to get the content translated and how they intend to pay for it.
Success Criteria:

A central WCMS is implemented and accepted by primary stakeholders, including academic and academic support web sites
Clear roles and responsibilities are established for content creation, maintenance, and the support of the technology
In-scope web sites are migrated to the content management system
Stakeholders are kept informed of developments and are provided with opportunities to comment and participate

After evaluating the WSM criteria, I have created a weighted scoring model (WSM) which can help me to choose the best CMS for the WCMS project.
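As an illustration of how such a weighted scoring model might be computed outside Excel, the sketch below scores the three candidate systems against example criteria. The criteria, weights and scores are invented placeholders, not the values used in the actual evaluation.

# Hypothetical weights (summing to 1.0) and 1-10 scores for each criterion.
WEIGHTS = {"core functionality": 0.25, "ease of use": 0.20, "cost": 0.20,
           "security": 0.15, "multi-site support": 0.10, "multilingual": 0.10}

SCORES = {
    "Joomla":     {"core functionality": 7, "ease of use": 8, "cost": 9,
                   "security": 6, "multi-site support": 6, "multilingual": 7},
    "Drupal":     {"core functionality": 8, "ease of use": 6, "cost": 9,
                   "security": 7, "multi-site support": 8, "multilingual": 8},
    "SharePoint": {"core functionality": 8, "ease of use": 7, "cost": 4,
                   "security": 9, "multi-site support": 7, "multilingual": 6},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion score multiplied by criterion weight."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for system, scores in sorted(SCORES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{system:<11} {weighted_score(scores):.2f}")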
Work Breakdown Structure:
Project Name: Web Content Management System

Project planning/feasibility study (requirements stage)

Study on the project issues
Develop the project plan

System analysis

Analysis of its requirements pre analysis
selecting a supplier
How many servers will be required, procuring hardware etc.
Procuring hardware and software

System design

Develop system design
Context diagram/ system boundary
DFD
ERD
Final database
The final solution map

Develop content management activities.

Develop content management activities
CMS objects
CMS emails
Find relevant contents
Moving contents from old website to the new one.
Archiving mechanisms
Operating environment made ready

implementation/coding

Creating basic pages with different logged areas
Implement menu structure
Implement site authentication
Implement site modules
web editors trained for use of CMS

integration and testing

Developed module for testing
Test modules
Test full site
Test in the working environment

Acceptance

Check developed module and suggests changes
Client testing
Acceptance by the sponsors for the launch of new system

installation

Move site from developers server to live server
Changes made
System installed in the real environment.

Deployment (training)

Train IT support staff
Construct training schedule
Give training for use of the CMS system
Verify user readiness
Give editor course after 6 months of deployment.

Implementation Plan:
The high-level timeline follows for implementation. [A key weakness in estimating dates is the current unknown availability date of the vendor. Here we assume availability to develop SOW as soon as the contract is finalized.]
Assumptions for the following timeline include:

Contract negotiations are successful
Actual award is not delayed after successful negotiations
Vendor can engage as soon as contract negotiations are complete
Two weeks off over winter break and one week over Thanksgiving break
Availability of other team resources as specified below
Twenty percent reduction in capacity due to furloughs and staff loss

Implementation Strategy:
The strategy to implement the new centrally supported WCMS site process includes the following work elements.

Procurement – negotiate a contract with the vendor and complete the purchase of the application.
System Design and Installation – design and installation of hardware, software, and process components supporting the application environment
Requirements, Configuration, and Development – requirements elicitation, configuration and development of the application to meet user requirements
Deployment Management – create a deployment plan for release of infrastructure, configurations, development projects, and assure release readiness.
Documentation and Communication – collect and organize documentation and project communication.

Implementation Work Package Description:
Procurement:

Procurement and Business Contracts will procure software, consultation time, and three-year support contract from the selected vendor.

System design and installation:

The technical aspects of the system implementation will be conducted by a core technical team including two ITS team leads, PM, and rotating technical experts depending upon work products (programming, security, server admin, network, architects, IDM manager, etc.) Disaster recovery is a deliverable of this workgroup. Requirements will be gathered, options reviewed, and feasible option implemented. Because no precedent at UCSC for off-site disaster recovery is available, the options will be researched and analyzed for feasibility. The lack of precedent will be verified. Specific deliverables are listed in section 4.2.

Functional Requirements, Configurations & Development:

Logical configuration of the application to meet business needs will include developing knowledge of the application function as well as the partner business requirements. Also key will be engagement with the vendor to understand best practices.
Unlike technical configuration, functional configuration includes definition of business requirements related elements such as users/roles/groups, workflows, and user interface configurations. Hannon Hill Cascade Server has components that combine templates, configuration settings, and user groups together. A logical analysis of the best configuration is critical to maintaining scalability and functionality.
This configuration team will engage and include technical team members and migration team members in developing requirements and specifications for configuration and development. To the extent required to meet project deliverables, the team will gather business requirements, create specifications, and develop scripts, API interfaces, and external application integration. Specific deliverables are listed in section 4.2.

Deployment Management:
A core team including team leads will be responsible for deployment planning, will develop a checklist of activities and tests that must be performed prior to deployment, and will be accountable for their successful completion prior to deployment.
Documentation and Communication:
This team will be responsible for assuring that documentation to be handed off to the service team is created by the appropriate sub-teams and is stored/organized in the appropriate place prior to project close. This includes:

Determining the Protein Content of Drosophila

Abstract

The BCA protein assay was used to determine the protein concentration in flies: the more Cu+ ions produced, the more intense the purple colour of the solution, which allows the concentration to be determined by spectrophotometry. Standards were prepared in six Eppendorf tubes labelled 1600, 800, 400, 200, 100 and 0 µg/ml. 50 µl of the 1600 µg/ml protein solution was added to the 1600 tube and another 50 µl to the 800 µg/ml tube; distilled water was added to the remaining tubes and 50 µl of protein solution was carried through them as a serial dilution. The flies from the square-labelled Eppendorf tube were transferred to a pre-weighed Eppendorf tube, crushed with a blue pestle, and homogenizing buffer was added. The results show that there is no statistical difference between the two homogenates; ‘Table 4’ shows their absorbances, and the protein content of the Drosophila is shown in ‘Figure 1’.

Introduction

The practical aims to determine the protein content of male Drosophila flies. To do this we use the BCA (bicinchoninic acid) assay, which allows the protein concentration in the flies to be measured. The assay produces a purple-coloured product that absorbs light at a wavelength of 562 nm (G-Biosciences, 2018). The BCA assay works by reducing Cu2+ to Cu+, which produces the purple-coloured solution, and it helps reduce the variability caused by compositional differences between proteins (Fanglian, 2018), which is one of its main advantages. The more protein there is, the more intense the purple colour of the solution. Spectrophotometry follows the Beer-Lambert law, in which the absorbance of a solution at a specific wavelength (A) is directly proportional to the concentration of the absorbing molecule (C). The more protein present, the more Cu+ complexes with the BCA and the greater the absorption. Protein in the flies reacts with the BCA reagent and gives an absorbance that can be read off the standard curve.
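The Beer-Lambert relationship above can be made concrete with a short calculation. The sketch below is illustrative only: it assumes a linear standard curve through the origin with a slope taken from the Results section of this report (0.0027 absorbance units per µg/ml), not a universal constant.

# Illustrative sketch: estimating protein concentration from absorbance
# using a linear BCA standard curve (Beer-Lambert behaviour assumed).
# The slope (0.0027 absorbance units per ug/ml) is taken from this report's
# standard curve and is an assumption, not a universal constant.

def concentration_from_absorbance(absorbance: float, slope: float = 0.0027) -> float:
    """Return protein concentration (ug/ml) assuming A = slope * C."""
    return absorbance / slope

if __name__ == "__main__":
    # Example: an absorbance of 0.283 corresponds to roughly 104.8 ug/ml.
    print(round(concentration_from_absorbance(0.283), 1))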

Methods

Standards

Label the Eppendorf tubes with the dilution concentrations they will contain, including a zero-concentration tube: 1600, 800, 400, 200, 100 and 0 µg/ml. Pipette 50 µl of the 1600 µg/ml protein solution, which is bovine serum albumin (BSA) (Sigma-Aldrich, St. Louis, MO, USA), into the tube labelled 1600 µg/ml. Pipette another 50 µl into the 800 µg/ml tube. When pipetting into an Eppendorf tube, take it out of the tube rack so that you can see the tip clearly. Discard the pipette tip to avoid contamination. Using a fresh pipette tip, add 50 µl of water into the other Eppendorf tubes (0, 100, 200, 400) and use the same tip to add 50 µl into the 800 µg/ml tube. Still using the same tip, mix carefully inside the 800 µg/ml tube and transfer 50 µl of the solution to the 400 µg/ml tube, then repeat down the series. This is a serial 1-in-2 dilution. Finally, remove 50 µl from the 100 µg/ml tube, since the other tubes contain 50 µl but it contains 100 µl.
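As a quick check on the arithmetic of the serial 1-in-2 dilution described above, the following short sketch generates the expected standard concentrations; it simply halves the concentration at each transfer and is illustrative rather than part of the protocol.

# Illustrative check of the serial 1-in-2 dilution used for the BSA standards.
# Starting from the 1600 ug/ml stock, each 50 ul transfer into 50 ul of water
# halves the concentration, giving the labelled standards.

stock = 1600.0            # ug/ml
standards = [stock]
for _ in range(4):        # 800, 400, 200, 100 ug/ml
    standards.append(standards[-1] / 2)
standards.append(0.0)     # blank

print(standards)          # [1600.0, 800.0, 400.0, 200.0, 100.0, 0.0]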

Fly samples

Using the pre-weighed tube and the tube marked with a square, transfer the flies from the square Eppendorf tube into the weighed and labelled Eppendorf tube. Add 100 µl of homogenizing buffer and crush the flies with a plastic pestle. After a minute of crushing, add another 300 µl of buffer and crush until no large pieces are visible. After 2 minutes add another 200 µl of buffer, close the lid and vortex for 30 seconds. After 5 minutes, centrifuge the homogenate at 3000 g for 3 minutes. Remove it from the centrifuge and transfer 200 µl of the supernatant to a fresh Eppendorf tube. Transfer 50 µl into each of three separate Eppendorf tubes and label them appropriately.

BCA Assay

Pipette 1000 µl of BCA reagent (Thermo Scientific, Rockford, IL, USA) into each sample and place the tubes in a 60 °C water bath for 30 minutes to incubate. After the 30 minutes, immediately read the absorbances of all the solutions using the spectrophotometer. Use the zero tube to calibrate the spectrophotometer and set the wavelength to 590 nm. Take a plastic cuvette, pour the contents of each Eppendorf tube into it, and read and record the absorbance of each, including the unknowns. As the unknowns were above 1, they were diluted by adding 800 µl of distilled water to 200 µl of each replicate (a 1-in-5 dilution).

Results

Figure 1: Standard curve for protein concentration, used to determine the protein concentration in the unknown sample. The protein concentration was calculated by dividing the average of the three absorbance replicates by the slope of the standard curve (0.0027): 0.283 / 0.0027 = 104.81 µg/ml.

Figure 2: This shows whether there is a statistical difference between the mass of the two different fly homogenates.

Figure 3: This shows whether there is a statistical difference between the protein percentage of the two different fly homogenates.

Cross                  Weight (mg)    Protein Percentage (%)
Mean                   3.2            11.7
Standard Deviation     0.6            4.5

Table 1: Shows the mean and standard deviation of weight and protein percentage in cross Eppendorf tubes.

Square                 Weight (mg)    Protein Percentage (%)
Mean                   2.9            10.6
Standard Deviation     0.6            4.8

Table 2: Shows the mean and standard deviation of weight and protein percentage in the square Eppendorf tubes.

Concentration (µg/ml)    1600        800      400      200      100      0
Absorbance               Too high    1.351    0.958    0.758    0.420    0

Table 3: Absorbance readings for the BSA standard dilutions used for the protein concentration standard curve.

Replicate        Absorbance    ×5
Replicate 1      0.057         0.285
Replicate 2      0.056         0.281
Replicate 3      0.057         0.285

Table 4: Fly homogenate absorbance readings. The replicates are multiplied by 5 because they are a 1-in-5 dilution; the average of the ×5 values is 0.283.
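To make the chain of calculations behind the raw-data columns explicit, the sketch below works through one example. It assumes the values reported in this practical: a 1-in-5 dilution of the homogenate, a standard-curve slope of 0.0027, and a total homogenate volume of 0.6 ml (100 + 300 + 200 µl of buffer); these assumptions come from the methods and raw data above, not from an external protocol, and the fly mass used is only an illustrative figure.

# Illustrative reconstruction of the raw-data columns for one fly sample.
# Assumptions (taken from this report): 1-in-5 dilution of the homogenate,
# standard-curve slope 0.0027 absorbance units per ug/ml, and a total
# homogenate volume of 0.6 ml (100 + 300 + 200 ul of buffer).

replicates = [0.057, 0.056, 0.057]    # measured absorbances (diluted sample)
dilution_factor = 5
slope = 0.0027                        # absorbance units per ug/ml
homogenate_volume_ml = 0.6
fly_mass_mg = 3.0                     # illustrative mass of flies in the tube

corrected = [a * dilution_factor for a in replicates]
mean_absorbance = sum(corrected) / len(corrected)     # about 0.283
concentration_ug_per_ml = mean_absorbance / slope     # about 104.9 ug/ml
total_protein_ug = concentration_ug_per_ml * homogenate_volume_ml
total_protein_mg = total_protein_ug / 1000
protein_percent = total_protein_mg / fly_mass_mg * 100

print(round(mean_absorbance, 3), round(concentration_ug_per_ml, 1),
      round(total_protein_ug, 1), round(total_protein_mg, 2),
      round(protein_percent, 1))
# e.g. 0.283 104.9 63.0 0.06 2.1 for this example sample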

Discussion

‘Figure 2’ and ‘Figure 3’ show that there is no statistical difference between the two homogenates, because the error bars in the bar graphs overlap. The BCA protein assay has some disadvantages: compared with other assays such as the Bradford assay, it is susceptible to interference by chemicals present in the protein sample (Fanglian, 2018). The Bradford assay could have been used instead; it is cheap and is quicker and easier to perform than the BCA protein assay. It relies on direct binding to protein samples and is compatible with a wide range of components (G-Biosciences, 2018), so on this basis the Bradford assay would have been the better choice for this practical. ‘Table 1’ and ‘Table 2’ show the standard deviation, which indicates how much the values differ from the mean. A t-test was not used for these results because there were too many samples; a t-test is usually used when the sample size is small.

References

Fanglian, H. (2018). BCA (Bicinchoninic Acid) Protein Assay —BIO-PROTOCOL. [online] Bio-protocol.org. Available at: https://bio-protocol.org/bio101/e44 [Accessed 12 Nov. 2018].

G-Biosciences. (2018). Bicinchoninic Acid (BCA) Protein Assay. [online] Available at: https://www.gbiosciences.com/Protein-Research/Bicinchoninic-Acid-BCA-Protein-Assay [Accessed 12 Nov. 2018].

G-Biosciences. (2018). Is Your BCA Protein Assay Really the Best Choice?. [online] Available at: https://info.gbiosciences.com/blog/alternative-bca-protein-assay [Accessed 12 Nov. 2018].

BOVINE SERUM ALBUMIN: Sigma-Aldrich, St. Louis, MO, USA.

BCA protein assay reagents A and B: Thermo Scientific, Rockford, IL, USA.


Raw Data

Fly symbol (square or cross)    Mass of flies (mg)    Protein conc (µg/ml)    Total protein (µg)    Total protein (mg)    Protein (%)
CR    3.0    694.2     416.5    0.4    13.9
CR    3.4    324.4     194.6    0.2    5.7
CR    4.5    562.0     337.2    0.3    7.5
CR    4.1    457.1     274.3    0.3    6.7
CR    2.5    840.9     504.5    0.5    20.2
CR    2.6    642.2     385.3    0.4    14.8
CR    2.9    995.7     597.4    0.6    20.6
CR    2.1    527.2     316.3    0.3    15.1
CR    2.7    734.2     440.5    0.4    16.3
CR    2.1    688.3     413.0    0.4    19.7
CR    3.0    481.1     288.7    0.3    9.6
CR    2.8    327.3     196.4    0.2    7.0
CR    3.1    474.4     284.6    0.3    9.2
CR    2.5    537.8     322.7    0.3    12.9
CR    3.6    768.3     461.0    0.5    12.8
CR    3.2    515.2     309.1    0.3    9.7
CR    3.3    646.0     387.6    0.4    11.7
CR    3.4    593.3     356.0    0.4    10.5
CR    2.3    642.1     385.2    0.4    16.7
CR    2.7    284.2     170.5    0.2    6.3
CR    3.3    1019.5    611.7    0.6    18.5
CR    3.0    569.1     341.4    0.3    11.4
CR    2.2    607.8     364.7    0.4    16.6
CR    4.1    467.5     280.5    0.3    6.8
CR    3.0    508.7     305.2    0.3    10.2
CR    3.2    363.7     218.2    0.2    6.8
CR    3.4    318.3     191.0    0.2    5.6
CR    3.1    1118.7    671.2    0.7    21.7
CR    5.1    761.0     456.6    0.5    9.0
CR    4.3    984.2     590.5    0.6    13.7
CR    3.0    510.2     306.1    0.3    10.2
CR    3.9    838.3     503.0    0.5    12.9
CR    3.2    602.8     361.7    0.4    11.3
CR    3.9    1069.4    641.7    0.6    16.5
CR    3.7    621.3     372.8    0.4    10.1
CR    3.3    445.0     267.0    0.3    8.1
CR    2.0    795.2     477.1    0.5    23.9
CR    2.9    405.6     243.3    0.2    8.4
CR    3.5    470.8     282.5    0.3    8.1
CR    3.1    784.0     470.4    0.5    15.2
CR    3.4    494.8     296.9    0.3    8.7
CR    3.4    802.9     481.7    0.5    14.2
CR    3.6    397.7     238.6    0.2    6.6
CR    2.5    613.7     368.2    0.4    14.7
CR    2.9    644.9     386.9    0.4    13.3
CR    3.2    716.0     429.6    0.4    13.4
CR    3.6    792.3     475.4    0.5    13.2
CR    3.4    594.6     356.7    0.4    10.5
CR    2.6    264.3     158.6    0.2    6.1
CR    4.4    786.8     472.1    0.5    10.7
CR    3.1    613.3     368.0    0.4    11.9
CR    4.0    235.6     141.3    0.1    3.5
CR    3.9    156.9     94.1     0.1    2.4
CR    3.0    556.3     333.8    0.3    11.1
CR    3.4    327.9     196.7    0.2    5.8
CR    3.5    722.7     433.6    0.4    12.4
CR    2.7    861.5     516.9    0.5    19.1
CR    3.3    805.7     483.4    0.5    14.6
CR    3.0    540.0     324.0    0.3    10.8
CR    3.7    500.0     300.0    0.3    8.1
CR    2.7    433.1     259.9    0.3    9.6
CR    3.4    512.6     307.6    0.3    9.0
CR    2.8    472.7     283.6    0.3    10.1
CR    2.7    575.0     345.0    0.3    12.8
CR    3.0    817.7     490.6    0.5    16.4

Cross (CR) summary    Weight (mg)    Protein Percentage (%)
Mean                  3.2            11.7
SD                    0.6            4.5

SQ    2.7    544.4     326.7    0.3    12.1
SQ    2.7    687.6     412.6    0.4    15.3
SQ    2.8    524.4     314.6    0.3    11.2
SQ    2.4    569.0     341.4    0.3    14.2
SQ    2.0    589.2     353.5    0.4    17.7
SQ    3.5    645.0     387.0    0.4    11.1
SQ    3.0    597.9     358.8    0.4    11.9
SQ    2.9    578.6     347.1    0.3    12.0
SQ    2.2    367.1     220.2    0.2    10.0
SQ    2.8    252.2     151.3    0.2    5.4
SQ    3.9    1051.3    630.8    0.6    16.2
SQ    2.7    573.2     343.9    0.3    12.7
SQ    3.7    1113.8    668.3    0.7    18.1
SQ    2.7    1013.5    608.1    0.6    22.5
SQ    3.1    532.8     319.7    0.3    10.3
SQ    3.4    536.7     322.0    0.3    9.5
SQ    2.3    306.0     183.6    0.2    8.0
SQ    2.4    861.8     517.1    0.5    21.5
SQ    3.1    620.0     372.0    0.4    12.0
SQ    2.2    496.5     297.9    0.3    13.5
SQ    2.3    636.9     382.1    0.4    16.6
SQ    3.0    351.5     210.9    0.2    7.0
SQ    3.7    439.4     263.6    0.3    7.1
SQ    3.0    300.0     180.0    0.2    6.0
SQ    3.2    463.2     277.9    0.3    8.7
SQ    3.4    702.9     421.7    0.4    12.4
SQ    2.4    48.0      28.8     0.0    1.2
SQ    4.1    707.4     424.5    0.4    10.4
SQ    2.6    300.0     180.0    0.2    6.9
SQ    2.4    493.5     296.1    0.3    12.3
SQ    2.5    292.5     175.5    0.2    7.0
SQ    2.7    352.6     211.6    0.2    7.8
SQ    2.8    427.8     256.7    0.3    9.2
SQ    4.0    281.3     168.8    0.2    4.2
SQ    2.1    101.3     60.8     0.1    2.9
SQ    2.2    784.3     470.6    0.5    21.4
SQ    2.1    101.3     60.8     0.1    2.9
SQ    2.7    341.6     205.0    0.2    7.6
SQ    3.7    650.6     390.3    0.4    10.5
SQ    2.7    330.6     198.3    0.2    7.3
SQ    2.3    655.2     393.1    0.4    17.1
SQ    2.8    513.7     308.2    0.3    11.0
SQ    3.5    353.3     212.0    0.2    6.1
SQ    4.5    643.3     386.0    0.4    8.6
SQ    2.9    353.3     212.0    0.2    7.3
SQ    4.3    545.24    327.1    0.3    7.6
SQ    2.8    490.9     294.5    0.3    10.5
SQ    2.9    393.1     235.9    0.2    8.1
SQ    2.1    548.9     329.4    0.3    15.7
SQ    2.7    516.8     310.1    0.3    11.5
SQ    2.0    77.1      46.3     0.0    2.3

Square (SQ) summary    Weight (mg)    Protein Percentage (%)
Mean                   2.9            10.6
SD                     0.6            4.8

T-Test (SQ/CR)    0.2

Client Server Computing and Content Management Systems

Table of Contents

Client-Server Computing

Uses of Client-Server Computing

Client-Server Computing evolved

Advantages and Disadvantages

Advantages

Disadvantages

Content Management System

Types of Content Management System

CMS Comparison: Drupal and Joomla

CMS Installation

CMS 1 Installation (Drupal):

The process of acquiring and installing a theme for Drupal:

The method to add plugins/modules to provide functionality for Drupal

To produce a simple site for Drupal:

What back end technologies do they make use of?

CMS 2 Installation (Joomla):

The process of acquiring and installing a theme for Joomla:

The method to add plugins/modules to provide functionality

Steps to produce a simple site for Joomla

What back end technologies do they make use of?

Recommendation and Justification

References

 

The client-server model is a model in which computers communicate with each other over a computer network. It is designed so that one or more users can share data resources on a client-server network. Many kinds of data resources can be shared, such as music albums, MP3 and MP4 files, or other content from the service provider, called a server. A server is a service provider that stores files and data; it may serve multiple clients at the same time and share its resources with them. The clients are requesters that send requests to the server. For example, one well-known gaming device, the PlayStation, follows the client-server model to serve the end user: by logging in, the end user contacts the PlayStation server and retrieves data such as game updates, video collections, the equipment store, game downloads and more.


In client-server computing, the server takes requests from clients and shares its data with all the clients that ask for it. Imagine a client waiting to be served: the client connects, communicates and makes a request to the server over the network. The server first checks that the data access is legitimate; once it confirms this, it handles the client's request. The client therefore does not provide data to the server, it only makes requests. A server can only accept a limited number of requests from clients at once, so it uses priority rules to schedule them. Although the client and server communicate from separate devices, they use the same system over the network. When there are no further requests from the current client, the server closes the connection and moves on to serve another client.
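To illustrate the request/response pattern described above in the simplest possible terms, here is a minimal, self-contained sketch of a server that answers one client over a TCP socket. It is only a toy illustration of the model, not part of any of the systems discussed here; the port number and the message contents are arbitrary.

# Minimal toy illustration of the client-server request/response model.
# A server listens on a TCP port, receives one request and sends a reply;
# the client connects, sends a request and prints the reply.

import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # arbitrary local address for the demo

def handle_one_client(srv: socket.socket) -> None:
    conn, _addr = srv.accept()               # wait for one client request
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"server reply to: {request}".encode())

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        worker = threading.Thread(target=handle_one_client, args=(srv,))
        worker.start()
        # The client: connects, sends a request, prints the server's reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"please send the file list")
            print(cli.recv(1024).decode())
        worker.join()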

The client-server approach emerged as the number of computers grew enormously and they were increasingly used individually. Organizations began to demand more computing control and value, and the idea arose to split work between personal computers and a mainframe, with the computers communicating with each other over a network.

Advantages: Security, data access, data resources and data integrity are controlled by the dedicated server, because the data is centralized there. An unauthorized client cannot access the system without permission from the support staff. For example, any new, unknown client joining the network is required to enter the network's username and password, and any unrecognized client passes through a centralized security system before gaining access. This makes it hard for unauthorized clients to attack the system, and even if an issue does occur, the support staff can detect it easily and protect the data.

Support staff can also set up data backups on the servers, so the server can recover data automatically if it is lost from client devices. This matters because no one can guarantee that a hard drive in a client device will always work. When a hard drive fails, a backup is essential, and one drive is not enough: the server can have two or more internal hard drives for extra backups, so that if the first fails the client can retrieve data from the second.

The support team can also modify, update or upgrade resources as needed. This is straightforward because all clients request data from a single server on the network, so the team can focus on that server. It also improves teamwork and reduces human error when three or more people work on the server together.

Users can connect and access the system at any time from any client device, because the server is always powered on. This is important because it lets clients keep communicating even when other client devices are offline or broken down.

Support staff can strengthen, upgrade or add new nodes to the client-server network, which gives it great scalability. Nodes can be added freely because every node performs the same task: requesting data from the server. Clients and support staff also do not have to be physically near the server; they can connect using remote access, and data can be processed efficiently even when they are not in the server room. This saves a great deal of time for anyone who lives far away or has difficulty getting there.

Disadvantages: Client-server computing is prone to network overload. The problem occurs when more and more client requests reach a single server at the same time, which can overload the server and bring it down.

As the server is the hub for all clients, when it goes down the whole network goes down with it. Downloads may stop and have to be restarted from zero, and clients must stop their work until the server is restored.

A server failure can happen at any time without warning. When it does, maintenance might take only a few minutes if you are lucky, but fixing a system-wide server failure can sometimes take days.

The setup cost is also somewhat high compared with other models such as a peer-to-peer network.

Moreover, experienced IT staff are required to manage the servers. Security, databases, errors and other problem-solving issues are not something a regular person can handle; professionals are needed to reach a solution quickly. The bigger the company, the faster it wants server problems solved, because more clients depend on it.

A content management system (CMS) is an application used to manage and modify content. It supports multiple users adding, organizing and modifying the content they publish. You can edit a page in the administrative interface and then preview it on the user-facing site once you have finished editing. You can also add plugins and themes to change how the site looks and make the content distinctive; most of these themes and plugins are created by the CMS community, and many can be used for free with one-click installation. A CMS can also help you manage menus, create contact or online forms, create a blog, build an e-commerce platform, manage a security framework, manage the database, and more. It lets you create and manage web pages easily by separating the creation of your content from the mechanics required to present it on the web. A traditional CMS architecture still tends to use the self-hosted open-source model, combining the code with the CMS so that developers retain control over it without sharing it with a vendor. There is also another architecture that turns the typical web CMS into a modern one, called the headless CMS: a headless CMS uses an open API to manage content and deliver it to multiple client devices.
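The headless-CMS idea mentioned above, content served over an open API to many client devices, can be sketched as follows. The endpoint URL and JSON field names are hypothetical placeholders, not the API of any specific CMS discussed here.

# Illustrative sketch of a "headless CMS" client: content is fetched over an
# HTTP API as JSON and rendered by whatever front end the device uses.
# The URL and field names below are hypothetical placeholders, not a real API.

import json
from urllib.request import urlopen

API_URL = "https://cms.example.com/api/articles"  # hypothetical endpoint

def fetch_articles(url: str = API_URL):
    """Download the article list as JSON and return it as Python objects."""
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    for article in fetch_articles():
        # Each client (web page, mobile app, etc.) decides how to display this.
        print(article.get("title"), "-", article.get("published"))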

Types of Content Management System

There are five types of content management system. First, the component content management system, which manages and stores content components in a repository; costs are reduced by reusing content, and it can publish to different formats such as PDF, mobile and so on. Second, the document management system, which manages documents on a cloud-based platform. Third, the enterprise content management system, which manages documents, makes sure information reaches the correct recipient every day, and then deletes unimportant files. Fourth, the web content management system, which manages web pages and their content; users can work with it with no knowledge of coding. Fifth, the digital asset management system, which stores content securely and lets clients access their important data safely; this system is also cloud-based.

Drupal is a content management system written in PHP. It is a flexible CMS based on the LAMP stack (Linux, Apache, MySQL and PHP), with a modular design that allows features to be added by installing new modules and allows the entire look and feel of a website to be changed by installing a theme. Drupal is not only a CMS platform but a varied and complete web development framework; with it, we are able to build many websites without programming knowledge or coding experience. Like Joomla, Drupal is free and open source. As for its history: in 2000, two university students, for whom a permanent internet connection was still unusual, set up a small site with a web board and created a small content framework. They meant to register the domain dorp.org ("dorp" means village in Dutch) but mistyped it as "drop"; when the site went live it attracted attention, and it was later renamed Drupal, after the Dutch word for "drop". Drupal has several notable features. It has a theme engine called Twig, which is PHP-based but written in its own syntax, making it more secure, fast and flexible. Another great feature is quick edits, which lets a user click an edit button on the front page that links through to the back end for quick fixes from the front end. It also has a guided tour to help people understand it better. It supports PHP 4 and 7, and provides HTML5, JSON, XML and Hypertext Application Language as data sources and output formats. The cost of developing a website project in Drupal is high, with estimates between $15,000 and $100,000. Drupal has faced several business problems, including issues with business themes and with the business rules module.

Joomla is a CMS built around the Model-View-Controller design pattern. It is an award-winning content management system that allows users to build websites with no coding experience. As for its history: an Australian company named Miro developed a CMS called Mambo, released under the GNU General Public License. A copyright dispute in 2005 caused the Mambo team to resign; the project was then renamed Joomla and launched on 16 September 2005, with Joomla 1.5 following on 21 January 2008. Joomla has been well known to the public since July 2009. It has several notable features: it is open source (free), offers good content management that allows viewers to register, keeps its modules updated, and makes core updates easy, keeping up with users' needs through a one-click update process. It supports PHP 4 and 7 and over 70 display languages. Major types of websites that can be created with it include blogging sites, business-to-business online sales websites, event websites for local events, and others. It is also strong in user management, menu management, contact management, banner management and cache management, and its search capabilities cover the most popular plugins, themes, admin tools and more. The cost of developing a website project is lower than with Drupal, with estimates between $99 and $100,000. Joomla has faced several business problems, including JString issues, a JoomlaXTC extension issue and framework issues.

An easy way to install a content management system is as follows:

First, I use a hosting account with the cPanel control panel. Before we start, we need a domain name, such as 27035574.2019.labnet.nz, to access it. To open cPanel, type [domain name]/cpanel into the URL bar and press Enter.

You will reach the login page. Log in to enter your cPanel hosting account.

CMS 1 Installation(Drupal):

In the CPanel hosting environment, you can set up any kind of Content Management System quickly. Scroll down and go to the “SOFTACULOUS APPS INSTALLER” tab.

At the categories, Click on the Portals/CMS logo.

Find Drupal and press it.

Go to the install menu button, click on the arrow for the drop-down menu and choose Custom install

It will show this page.

Then, you will see an “in Directory”. This is for your root directory to install at, for example, Http://[domain]/[drupal8]/. Leave it empty if you don’t want to change the directory.

You can change the admin username and password. For a real website it is recommended to set a strong password to prevent security problems.

After that, click on the install button and it will bring you to this page.

 

Press on the Administrative URL link to log in as an administrator.

You will be required to enter your admin username and password to access.

After login, you will see something has changed such as the extension bar and Tools menu.

 

The process of acquiring and installing a theme for Drupal:

First, you have to enter the link below: https://www.drupal.org/project/project_theme

Then, on the download and extend page, set the core compatibility to the version that suits your Drupal installation; I chose 8.x. Type "zircon" into the search themes bar and press the Search button.

After pressing the search button, scroll down and choose zircon as shown below. Then press on Zircon link.

Scroll down the page and you will see two types of download file; you can choose either the .gz or the .zip file. I chose the .gz file: right-click the link and copy the link address.

Go back to your Drupal admin site, hover over the Appearance tab, and choose the "Install new theme" option.

When the page loaded, paste the copy link address into the ”Install from a URL” bar and press the install button.

After the installation successful, press on the Appearance tab.

Zircon theme will be located on the Uninstall theme column. Go there and press on the install link.

Then you will get a confirmation message below.


Go to the Zircon theme and set the theme as default.

The method to add plugins/modules to provide functionality for Drupal

First, you have to enter the link below: https://www.drupal.org/project/project_module

Then, on the download and extend page, set the core compatibility to the version that suits your Drupal installation; I chose 8.x. Type "admin toolbar" into the search bar and press the Search button.

After pressing the search button, scroll down and choose Admin Toolbar module as shown below. Then press on Admin Toolbar link.

Scroll down the page and you will see the download file types; you can choose either the .gz or the .zip file. I chose the .gz file: right-click the link and copy the link address.

Go back to your Drupal admin site, hover over the Extend tab, and choose the "Install new module" option.

When the page loaded, paste the copy link address into the ”Install from a URL” bar and press the install button.

You will receive a confirmation message.

After that, press on the Extend tab, scroll down and find the Admin Toolbar, tick all 3 boxes as shown below.

A confirmation message will be shown at the top of the page as below.

Then, you can now update the module by using the Update option under the Extend tab.

To produce a simple site for Drupal:

First, go to the configuration tab and click on the Basic site settings.

Then, fill in the site name, email address and the file path. It will change the contents on the home page.

After that, go to content -> Add content

Fill in all the fields.

The menu settings are where you create menus and configure them for the content. Setting the weight from lower to higher displays the menu items in ascending order.

URL aliases make user access easier. When finished, save and close the settings.

To change the logo image or favicon: go to Appearance -> find your theme -> site settings -> change the logo image and favicon settings there. The same screen is used to change the background colour.

What back end technologies do they make use of?

On the back end, Drupal uses PHP code and other languages to write modules, web services, automated tests and so on.

https://befused.com/drupal/developer

CMS 2 Installation (Joomla):

In the CPanel hosting environment, Scroll down and go to the “SOFTACULOUS APPS INSTALLER” tab. You can see there is a Joomla script, press on that.

Go to the install menu button, click on the arrow drop-down menu and choose Custom install.

It will show this page.

You can change the directory if you want. Otherwise, leave it empty.

Change the Admin Username and password if you like.

After that, click on the install button and it will bring you to this page.

Press on the Administrative URL link to log in as an administrator.

You will be required to enter your admin username and password to access.

It will bring you to the Joomla’s control panel page.

The process of acquiring and installing a theme for Joomla:

First, there are multiple websites from which you can download a theme for Joomla. I will use the link: https://www.astemplates.com/free-joomla-templates.

I will choose LT TECH SHOP as my template in Joomla; you can choose another template if you like. Press the More button.

Then press on Download Free Version button.

Tick the "accept terms and conditions" checkbox and download it. First-time users will be asked to sign in to an account.

After the registration completed, you will get two zip folders.

Open the Joomla admin site and go to the Extensions tab -> Manage. Then browse to both zip files that you downloaded and upload them.

After that, go to the Extensions tab -> Templates. On the page that appears, make LT Techshop the default by clicking on the star icon.

And you will see the change of homepage.

The method to add plugins/modules to provide functionality

First, you should enter the link below: https://extensions.joomla.org/extension/

Search for the "AllVideos" plugin and download it; it comes as a zip file.

Go back to the admin site. Then, go to the extension tab -> Manage

Drag and drop the plugin zip file into the upload area. It will install automatically and show you a confirmation message.

Go to extension tab -> Manage

Enable the plugin that has been installed by clicking the disabled folder icon; it will turn green.

Then, go to Extension tab -> Plugins.

Press on the link to edit the “AllVideos” Plugin

Then, click on the “show full description” link.

Go down and select the “Documentation for AllVideos”.

The following page will load.

Scroll down and copy the Youtube AllVideo tag that can be used in Joomla (if you want to use Youtube channel service).

Go back to the Joomla admin site and open the article in which you want to embed a video with HTML code. Paste the "AllVideos" tag there to show the YouTube channel, then save and publish it.

Steps to produce a simple site for Joomla

First, create an article for the website, go to content -> Articles.

Press the new button

Create a title for your article, save and close after finished.

Press the menu tab, go to Main Menu and choose Add New Menu Item.

Create a menu title, this is to link with the article, change the menu item type to a single article, and select About Article.

Check that the menu item has been added in the menu item manager; you should now have an About Us page.

You can add some content inside an article. Press the content and select the article option.  Press the About Us link.

Add the content that you want, then save and close. You can then view it on your user site.

What back end technologies do they make use of?

We could create a Model-view-Controller triptych and edit the admin entry point.

I evaluated two software packages for the social media (Twitter-style) micro-messaging site. The first is Social Engine, a mobile social community platform with strong support for creating a custom-branded social media site. The other is phpFox, which provides a solution for people who want to create social media sites; it offers tools for group users such as group owners, admins and community managers. Both cost about the same and have similar product features. If you want 24/7 support from an active community, phpFox offers that service but Social Engine does not. Both support PHP versions from 4.0 up to PHP 7.

My recommendation is phpFox, because it is a more powerful server platform for building a social media site than Social Engine. If you build the site in phpFox, you will appreciate the scripts built into it compared with Social Engine, especially for hosting on a virtual private server.

I evaluated two software packages for the content streaming website. The first is Uscreen and the second is Contus Vplay. Uscreen can turn a user's channel into a mobile application, so interested viewers can download it from the Play Store on Android or iTunes on Apple and view the content on their phones. Contus Vplay is an online video streaming product that lets users manage their video content. The two products have broadly similar features. Contus Vplay uses a one-time licence, with the price varying by vendor, while Uscreen users pay 149 per month for the basic plan and 299 for the Plus plan. Uscreen has strong security on its billing system, while Contus Vplay secures the whole system against access by hackers or unauthorized users.

My recommendation is Contus Vplay. Live streaming is common nowadays, and Contus Vplay can broadcast my videos live to other mobile phones, which is a strong point. In addition, it fully protects my video streaming content from being stolen by other users.

I evaluated two software packages for the blogging site. The first is Gator and the second is Blogger. Gator is a blogging platform created by HostGator; HostGator is best known as a hosting service provider, but it also offers this blog creation and management product. Blogger, on the other hand, is a blogging platform that runs on Google's hosting platform. Users can use Blogger for free by accepting the advertisements it displays, and pay based on quota if they want to create more; Gator, by contrast, is paid monthly, with plans starting as low as 4 dollars. Both offer 24/7 technical support. Gator's security is protected by HostGator, while Blogger's is protected by Google.

My recommendation is Blogger. It is my favourite blogging site builder because of the trust that comes from its link with Google, and I believe the platform is more secure than Gator. In addition, it is easy, simple and understandable, and I can manage or build my own content without using any technical or programming skills.

I evaluated two software packages for the business-to-business online sales website. The first is PrestaShop and the second is OpenCart. Both of these CMSs are designed specifically for e-commerce and have appeared in top-10 lists. PrestaShop lets users work in 25 languages, while OpenCart is weaker in client management. However, OpenCart has its own community, which gives users a certain amount of commercial support to solve their issues. It is easier to build a website in PrestaShop than in OpenCart.

My recommendation is PrestaShop. When working with business clients and charge accounts, PrestaShop is a good choice for me because it provides stronger features for product management, SEO and customer care than OpenCart. This is important because it helps me deal with client issues and improves my website's ranking in search engines.

I evaluated two software packages for the micro-jobbing site. The first is WordPress and the second is Drupal. Drupal is a website builder that produces robust, general-purpose websites, while WordPress is more of a community platform. Drupal can do more than WordPress, but it is also somewhat more complicated for building a website: Drupal is not for a beginner or a developer with no skills, whereas WordPress can be started at a very basic level. Both are open-source content management systems. There are more than 5,000 free themes and 53,000 plugins available for WordPress, compared with roughly 2,500 themes and 39,000 modules for Drupal.

My recommendation is WordPress, because themes are already available for creating a micro-jobbing site like Fiverr, and it is important to get started with pre-built themes rather than using Drupal. Besides, WordPress's ease of use also saves me time.

I evaluated two software packages for an event website for a local event. The first is Wix and the second is EventCreate. EventCreate is designed specifically for creating event websites, while Wix is used to create a wide variety of websites with different features. EventCreate's starter plan is free, and Wix also has a free version if I accept the advertisements Wix displays.

My recommendation is EventCreate, because Wix offers only a limited choice of backgrounds, whereas EventCreate has great design options for choosing and creating a quality background for the event website. EventCreate also provides good tools and good technical support to help create the robust product I need.

Buytaert, D. (2001, May 18). Our history. Retrieved from DRUPAL: https://www.drupal.org/about/history

DRKCT. (2008, March 17). The Evolution of Client/Server Computing. Retrieved from A Blogger CMS: http://client-server-technology.blogspot.com/2008/03/evolution-of-clientserver-computing.html

Horne, K. (2019, March 15). Gator Website Builder: Our First Look at HostGator’s Site Builder. Retrieved from A whoishostingthis Blog: https://www.whoishostingthis.com/blog/2019/01/21/gator-website-builder

Inc, Q. (2019, December 30). Client/Server Computing. Retrieved from Webopedia: https://www.webopedia.com/Computer_Science/Client_Server_Computing

Jackson, B. (2019, July 26). WordPress vs Drupal – Which One is Better? (Pros and Cons). Retrieved from A Kinsta Blog : https://kinsta.com/blog/wordpress-vs-drupal/

JONES, S. (2018, November 26). 5 Types of Content Management Systems (CMS). Retrieved from An ixiasoft Blog: https://www.ixiasoft.com/types-of-content-management-systems/

Kenneth Crowder, R. S. (2019, January 30). A Brief History of Joomla. Retrieved from Oreilly Safari: https://www.oreilly.com/library/view/using-joomla/9781449377434/ch01s02.html

Lee, M. (2017, June 7). 9 Best Content Management System (CMS) for Blogging. Retrieved from A Blogdada Blog: https://www.blogdada.com/9-best-content-management-system-cms-for-blogging/

Levi, J. (2014, July 1). 16 Drupal 8 Features You Should Know. Retrieved from axelerant: https://www.axelerant.com/resources/articles/drupal-8-features-need-know

members, T. F. (2015, September 15). 10 Core Features of Joomla. Retrieved from Tech Fry: https://www.techfry.com/joomla/10-core-features-of-joomla

Mortti. (2018, February 9). The Joomla! Forum. Retrieved from JoomlaXTC: https://forum.joomla.org/viewtopic.php?t=959075

Open Source Matters, Joomla community. (2005, August 17). Joomla! Benefits & Core Features. Retrieved from JOOMLA!: https://www.joomla.org/core-features.html

Patar5036. (2014, January 18). Socialengine vs. PhpFox. Retrieved from A WebHostingTalk web and cloud hosting community: http://www.webhostingtalk.com/showthread.php?t=1341054

perpignan. (2018, August 1). Problem with Business Rules + Masquerade. Retrieved from Drupal download and extension: https://www.drupal.org/project/business_rules/issues/2989868

Raymond, M. (2018, December 10). Creating a headless CMS technology stack. Retrieved from A SURFCODE blog Web Site: https://www.surfcode.io/blog/headless-cms-tech-stack/

Rode, J. (2016, June 9). What is the top 10 features of Joomla? Retrieved from Quora: https://www.quora.com/What-is-the-top-10-features-of-Joomla

Sameh, F. (2019, February 17). PRESTASHOP VS. OPENCART: A COMPARISON OF ECOMMERCE PLATFORMS. Retrieved from A Perzonalization Blog website: https://www.perzonalization.com/blog/prestashop-vs-opencart-comparison/

Sanati, J. (2011, May 2). Top 10 Reasons to Setup a Client-Server Network. Retrieved from Intel IT Peer Network Web Site: https://itpeernetwork.intel.com/top-10-reasons-to-setup-a-client-server-network/

Sparrow, P. (2011, May 1). Client Server Network : Advantages and Disadvantages. Retrieved from Ianswer4u.com: https://www.ianswer4u.com/2011/05/client-server-network-advantages-and.html

Zen Ventures, L. (2018, April 10). PrestaShop VS OpenCart. Retrieved from A prestashop CMS website: https://www.prestashop.com/en/prestashop-vs-opencart

Zuckerman., A. (2019, July 10). C. Retrieved from CompareCamp : http://comparecamp.com/webstarts-review-pricing-pros-cons-features/

 

Taiwan Napier Nutritional Content

Pennisetum purpureum, particularly Taiwan Napier or elephant grass, is a perennial forage crop with a high growth rate, high productivity and good nutritive value, mostly used in cut-and-carry systems throughout the tropical and sub-tropical areas of the world (Cook et al., 2005; Wadi et al., 2004). It has been used widely as a fodder grass; these are the grasses that have been shown to be most adaptable and productive under Malaysian conditions (Wong et al., 1982). It has high forage quality, with a low dry matter content and high contents of crude protein, neutral detergent fiber, acid detergent fiber and acid detergent lignin. Fertilizer application is one of the cultivation methods used to realize its potential dry matter production, and high rates of nitrogen application, such as urea fertilizer, have a significant effect on the Napier grass's flexible response in dry matter production (Ambo et al., 1999). Table 1 shows the nutritional evaluation of Napier grass at different cutting frequencies under a fertilizer input of 200 kg N/ha. The dry matter content of elephant grass is around 13.2%-17.7%. Crude protein (CP) concentration decreases from 15.5% to 6.8%, while NDF and ADF concentrations increase with advancing maturity (Moran, 2005). Napier grass cut at a 30 cm height was superior to grass cut at 0 cm height (Wadi et al., 2004). However, several other studies have shown that the crude protein content of elephant grass commonly ranges from 3.4-12.9% (Gonçalves et al., 1991; Santos, 1994). The nutritive value is maintained up to harvest intervals of six weeks, after which the energy and protein value deteriorate rapidly (Moran, 2005).
Taiwan Napier (Sabah) Unknown Variety
This is one variety of Napier that has been shown to grow productively in Sabah, but its identity has not yet been determined. It needs to be compared with the Taiwan (local) variety in Peninsular Malaysia, which has similar physical characteristics.
Nitrogen Fertilizer (Urea)
Urea or carbamide is an organic compound with the chemical formula (NH2)2CO; the molecule has two amine (-NH2) residues joined by a carbonyl (-CO-) functional group. Urea is a manufactured organic compound containing 46% N that is widely used in solid and liquid fertilizers. It has relatively desirable handling and storage characteristics, making it the most important solid nitrogen-fertilizer material worldwide. It may contain small concentrations of a toxic decomposition product, but urea manufactured using good quality-control practice rarely contains enough to be of agronomic significance. When applied to the soil, urea is converted to ammonium carbonate by an enzyme called urease. Ammonium carbonate is an unstable molecule that can break down into ammonia and carbon dioxide; if the ammonia is not trapped by soil water, it can escape to the atmosphere. This ammonia volatilization can cause significant losses of N from urea when the fertilizer is applied to the surface of warm, moist soils, particularly those covered with plant residue or those drying rapidly. A relatively high surface pH also aggravates N volatilization from urea. More than 90% of world urea production is destined for use as nitrogen-release fertilizer. For fertilizer use, granules are preferred over prills because their narrower particle-size distribution is an advantage for mechanical application. Urea, when properly applied, results in crop yield increases equal to other forms of N (www.rainbowplantfood.com/agronomics/efu/nitrogen.pdf).
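Since urea contains 46% N, a fertilizer rate expressed as kg N/ha can be converted into the mass of urea to apply. The short sketch below works the example of the 200 kg N/ha rate referred to earlier; it is simple arithmetic, not an application recommendation.

# Converting a nitrogen application rate (kg N/ha) into the mass of urea
# needed, using the 46% N content of urea stated above.

UREA_N_FRACTION = 0.46  # urea is about 46% nitrogen by mass

def urea_required(n_rate_kg_per_ha: float) -> float:
    """Return kg of urea per hectare needed to supply the given kg N/ha."""
    return n_rate_kg_per_ha / UREA_N_FRACTION

if __name__ == "__main__":
    # Example: the 200 kg N/ha rate mentioned for the Napier grass evaluation.
    print(round(urea_required(200), 1))  # about 434.8 kg urea/ha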


Function Nitrogen Fertilizer
Fertilizer application is one of the cultivation methods used to realize the potential of dry matter production. Napier grass responds flexibly in dry matter productivity to high rates of nitrogen application (Ambo et al., 1999). It has been found to be tolerant of nitrogen input even when fertilization is carried out with chemicals that are not slow-release, and it also responds flexibly to high rates of nitrogen application in both yield and forage quality (Ambo et al., 1999). Plants absorb nitrogen from the soil as both NH4+ and NO3- ions, but because nitrification is so pervasive in agricultural soils, most of the nitrogen is taken up as nitrate. Nitrate moves freely toward plant roots as they absorb water. Once inside the plant, NO3- is reduced to an NH2 form and is assimilated to produce more complex compounds. Because plants require very large quantities of nitrogen, an extensive root system is essential to allow unrestricted uptake. Plants with roots restricted by compaction may show signs of nitrogen deficiency even when adequate nitrogen is present in the soil. Most plants take nitrogen from the soil continuously throughout their lives, and nitrogen demand usually increases as plant size increases. A plant supplied with adequate nitrogen grows rapidly and produces large amounts of succulent, green foliage. Providing adequate nitrogen allows an annual crop, such as corn, to grow to full maturity, rather than delaying it. A nitrogen-deficient plant is generally small and develops slowly because it lacks the nitrogen necessary to manufacture adequate structural and genetic materials. It is usually pale green or yellowish, because it lacks adequate chlorophyll. Older leaves often become necrotic and die as the plant moves nitrogen from less important older tissues to more important younger ones. On the other hand, some plants may grow so rapidly when supplied with excessive nitrogen that they develop protoplasm faster than they can build sufficient supporting material in cell walls. Such plants are often rather weak and may be prone to mechanical injury; the development of weak straw and lodging in small grains is an example of such an effect (www.rainbowplantfood.com/agronomics/efu/nitrogen.pdf).
Effect of Fertilizer Application on Yield and Quality of Natural Pasture
Both quantity and quality of natural pasturelands can be improved by application of fertilizer. Hence, sufficient response to fertilizer application is one of the desirable characteristics expected of natural pasturelands. The high nitrogen requirement of pastures, coupled with their pervasive root system results in efficient absorption of nitrogen from the soil. Thus, in grass dominated pastures about 50 to 70 percent of applied fertilizer nitrogen is normally taken up, although this decreases at very high nitrogen levels (Miles et al, 2000) due to deficiencies of some micronutrients in the soil and displacement of phosphate concentrations at higher levels of nitrogen (Falade, 1975). Grasses can obtain their nitrogen in a number of ways, but the most important sources are from fertilizers. Hence, the simplest way to achieve maximum production from grass is to apply inorganic fertilizer with high nitrogen content (Skerman et al, 1990). Moreover, fertilizers not only increase yield but also influence species composition of natural pastures.
Forage Yield
The application of fertilizers to natural pasture has been clearly shown to improve herbage yields (Adane, 2003). When nitrogen is applied, there is usually an initial linear response, followed by a phase of diminishing response and a point beyond which nitrogen has little or no effect on yield. The dry matter yield of fertilized plots of natural pasture has been shown to be 9.47 ton/ha, compared with 5.67 ton/ha for unfertilized plots, at 90 days of harvest (Adane, 2003). Therefore, the amount of dry matter produced for each kilogram of nitrogen applied depends largely on the species under consideration, the frequency of defoliation and the growth conditions (Miles et al., 2000).
Forage Quality
Application of nitrogen to pasture usually results in a marked increase in crude protein content, although the variability in crude protein content due to applied nitrogen is greatest in the early stages of growth. The crude protein content of most grass species is adequate to meet minimum nutritional requirements for livestock in the early stages of harvesting but falls below this requirement in later stages. Hence, the addition of nitrogen and phosphorus results in considerably higher crude protein content (Goetz, 1975). The increase in the crude protein content of grasses through fertilization depends on the availability of soil nitrogen, and nitrogen fertilizer application also increases the level of soil nitrogen. This has increased the crude protein percentage of the grass but has had no consistent effect on dry matter digestibility (Minson, 1973). Fertilization at early stages of growth greatly influences the accumulation of non-structural and insoluble carbohydrates: insoluble carbohydrate decreases with increasing nitrogen supply, and soluble carbohydrate levels increase with increasing phosphorus supply (Miles et al., 2000). Nitrogen fertilizer also improves the concentrations of neutral detergent fiber (NDF) and acid detergent fiber (ADF) in early-cut Pennisetum purpureum; however, according to the same author, nitrogen fertilizer could not reverse the adverse effects of maturity on quality, and the same applies to the lignin content (Abade, 2008).
Yield Analysis
Dry Matter Yields
Moisture content is usually reported on both a wet and a dry-matter (DM) basis. The wet basis indicates how much fresh forage would be required to meet the DM requirement of the animals, while the dry-matter basis is calculated as if the forage had no moisture (Yoana et al., 2000). Napier grass responds flexibly in dry matter productivity to high rates of nitrogen application (Ambo et al., 1999). Table 2 shows the average dry matter yield (tons/ha/year) of Napier grass compared with other pastures (http://www.dvssel.gov.my/cms/index.php?pt=295).
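The distinction above between a wet basis and a dry-matter basis is easiest to see as a calculation. The sketch below converts a fresh (wet) forage yield to a dry matter yield for an assumed dry matter percentage; the yield and DM figures used are hypothetical examples meant only to show the arithmetic.

# Converting fresh (wet) forage yield to a dry-matter (DM) basis.
# The yield and DM percentage below are hypothetical example figures.

def dry_matter_yield(fresh_yield_t_per_ha: float, dm_percent: float) -> float:
    """Return DM yield (t/ha) given fresh yield and dry matter percentage."""
    return fresh_yield_t_per_ha * dm_percent / 100

def fresh_needed_for_dm(dm_requirement_t: float, dm_percent: float) -> float:
    """Return tonnes of fresh forage needed to supply a DM requirement."""
    return dm_requirement_t * 100 / dm_percent

if __name__ == "__main__":
    print(dry_matter_yield(100.0, 15.0))             # 100 t/ha fresh at 15% DM -> 15 t DM/ha
    print(round(fresh_needed_for_dm(3.0, 15.0), 1))  # 3 t DM needs 20 t fresh forage
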
Nutritive Quality Analysis
Nutritive value refers to an aspect of forage quality: how well ruminants consume a forage and how efficiently the nutrients in the forage are converted into ruminant products. The right forage tests, accurately conducted, can provide a good estimate of forage quality (Lin et al., 1999). The nutritive quality of forages varies as they grow towards maturity, so it is important to consider the stage at which both biomass yield and nutrient content are optimal. After attaining maturity, forages generally depreciate in nutritive value, mostly because of an increase in fibrous material, particularly lignin. In many types of forage the leaves die off systematically after maturity is attained, which reduces photosynthetic activity and, as a result, the accumulation of nutrients and the quality of the forage.
Crude Protein
Protein is a prime source of energy and one of the most important nutrients for livestock; these nutrients support the rumen microbes that degrade forage. True proteins make up 60-80 percent of the total plant nitrogen (N), with soluble protein and a small portion of fiber-bound N making up the remainder (Yoana et al., 2000). The total protein in a sample includes true protein and non-protein nitrogen. Proteins are organic compounds composed of amino acids. They are a major component of vital organs, tissue, muscle, hair, skin, milk and enzymes, and protein is required on a daily basis for maintenance, lactation, growth and reproduction. Proteins can be further fractionated for ruminants according to their rate of breakdown in the rumen. The crude total protein content of a feed sample can be accurately determined by laboratory analysis: the measured amount of nitrogen in the feed is converted to protein by multiplying by 6.25, on the basis that protein contains 16 percent nitrogen, or 1 part nitrogen to 6.25 parts protein (J. W. Schroeder, 1994). With high levels of chemical fertilizer application, crude protein increased while dry matter digestibility was lowered (Ambo et al., 1997).
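The nitrogen-to-protein conversion described above (multiplying measured N by 6.25, because protein is taken to be about 16 percent nitrogen) can be written as a one-line calculation; the nitrogen percentage used in the example is hypothetical.

# Crude protein (CP) from measured nitrogen content, using the standard
# conversion factor 6.25 (protein is assumed to be ~16% nitrogen).

N_TO_PROTEIN = 6.25

def crude_protein_percent(nitrogen_percent: float) -> float:
    """Return crude protein (% of dry matter) from nitrogen (% of dry matter)."""
    return nitrogen_percent * N_TO_PROTEIN

if __name__ == "__main__":
    # Hypothetical example: a forage sample containing 2.0% N is about 12.5% CP.
    print(crude_protein_percent(2.0))
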
Neutral Detergent Fiber (NDF)
The NDF values represent the total fiber fraction (cellulose, hemicelluloses and lignin) that makes up the cell walls (structural carbohydrates or sugars) within the forage tissue. NDF values for grasses are relatively high (60-65%) (Yoana et al., 2000). A high NDF content indicates high overall fiber in the forage, so the lower the NDF value, the better the forage quality. Neutral detergent fiber, like crude fiber, is measured by chemical extraction (with a neutral detergent solution under reflux) followed by gravimetric determination of the fiber residue. Neutral detergent fiber is considered to be the entire fiber fraction of the feed, but it is known to underestimate cell wall concentration because most of the pectin substances in the wall are solubilized (Van Soest, 1994).
Acid Detergent Fiber (ADF)
Acid detergent fiber analysis will accurately measure the amount of poorly digestible cell wall components, primarily lignin. Formulas are under development that can be used to estimate net energy content of a feed from an analysis for ADF (Van Soest, 1982). The ADF values are then used in equations to determine total digestibility of nutrient. The ADF values represent cellulose, lignin and silica. The ADF fraction of forages is moderately indigestible. High ADF values are associated with decreased digestibility (J.W. Schroeder 2004). Therefore, a low value of ADF is better for forage quality.
Acid Detergent Lignin (ADL)
The ADF residue is subjected to digestion with 72% sulphuric acid to dissolve the cellulose. The remaining residue, consisting of lignocellulose and acid-insoluble ash (mostly silica), is then ashed. The strong acid dissolves the cellulose component, and ashing of this residue determines the lignin component of the grasses (Hans et al., 1997). Ashing is done by heating a sample in a furnace at high temperature (550-600 °C) until all organic material has been burned away. Ash contains essential minerals, non-essential minerals and toxic elements such as heavy metals. Lignin has a negative impact on cellulose digestibility: as lignin content increases, the digestibility of cellulose decreases, thereby lowering the amount of energy potentially available to the ruminant.