Literature Review on Cloud Computing Data Storage and Integrity Verification

Evaluation

Several trends follow from the way cloud computing builds on the development of the internet. Factors such as increasingly powerful processing delivered through the SaaS (Software as a Service) architecture, together with increased network bandwidth and more reliable network connections, make it easy to process data as a service. Within this envisioned service platform, storing data in the cloud is considered the major challenging issue (Islam et al., 2016). The chief concern for data storage is verifying the integrity of data held on untrusted servers, since storage service providers have an incentive to hide data errors.


The focus is mainly on solving the data integrity issues by examining the different schemes under their security models. The work includes designing solutions that meet several requirements: high scheme efficiency and stateless verification of the system (Talluri, 2016). Schemes offering private auditability are checked for higher efficiency, while public auditability allows parties other than the client to verify the stored data after a proper setup. This involves different checks that can be managed on behalf of the clients. In practice, clients use cloud computing precisely to outsource data they do not wish to verify themselves, so the designs must take both the realities of cloud computing and efficiency into consideration (Mahajan & Kumar, 2016).

The other major concern is data operations: in cloud computing, remotely stored electronic data must support not only access but also updates by the clients, through modification, deletion and insertion of blocks. The state of the art largely works over static data files, and extending provable data possession or retrievability schemes to support data dynamics leads to security issues. The contribution here is therefore to storage outsourcing services from a view that combines public auditability with data dynamics for proper cloud storage (Beloglazov et al., 2016). The motivation is to integrate a public auditing system with data storage security: the proposed protocol supports dynamic data operations, handles block insertions, and addresses what the other schemes are missing. See the sketch after this paragraph.
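
As a minimal sketch of these block-level operations (a plain Python list standing in for the outsourced file; the names and values are illustrative assumptions, and real protocols must also update the per-block tags and the authenticated structure on every change), note how insertion shifts every later index, which is why tags that embed block indices break under dynamic data:

    # Hypothetical in-memory stand-in for an outsourced file of ordered blocks.
    file_blocks = [b'b0', b'b1', b'b2']

    file_blocks[1] = b'b1-new'        # modification: replace an existing block
    file_blocks.insert(2, b'b1.5')    # insertion: every later block shifts by one index
    del file_blocks[0]                # deletion: indices after the hole shift down

    assert file_blocks == [b'b1-new', b'b1.5', b'b2']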

With this, there is also a need to work on:

  1. Schemes that include scalable support and effective public auditing. A key aim in the cloud setting is to achieve batch auditing, where different auditing tasks from different users are handled together.
  2. Security of the construction, with performance demonstrated through a proper state-of-the-art implementation.

Along with this, public auditability was defined in the provable data possession model to verify files held on untrusted storage (Daniel et al., 2016). These schemes mainly utilise RSA-based homomorphic tags for auditing the outsourced data. A further question is how such schemes, designed for static storage, behave under different case designs and security problems. They are generally considered to allow only basic operations: they either do not support dynamic data operations or support operations of limited functionality (Sookhak et al., 2017). A related consideration is the proof-of-retrievability model, which combines spot checking with error-correction codes to assure both possession and retrievability in an archived service system. Developing support for dynamic data then means designing updates that improve these schemes with authenticated list data structures.
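
A minimal, hedged sketch of the RSA-based homomorphic tag idea follows (toy primes and made-up block values are illustrative assumptions; a real scheme uses a 2048-bit modulus, proper block encoding and the cited protocols' challenge formats). One aggregated proof covers many blocks without the verifier ever retrieving them:

    import hashlib, random

    # Toy RSA parameters (illustrative only; never use small primes in practice).
    p, q = 1000003, 1000033
    N = p * q
    phi = (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)          # private tagging exponent (Python 3.8+)
    g = 2                        # public generator

    def H(i):
        # Hash a block index into Z_N.
        return int.from_bytes(hashlib.sha256(str(i).encode()).digest(), 'big') % N

    blocks = [101, 202, 303, 404]                     # file blocks as integers
    tags = [pow(H(i) * pow(g, m, N) % N, d, N)        # sigma_i = (H(i) * g^m_i)^d mod N
            for i, m in enumerate(blocks)]

    # Challenge: random coefficients for a chosen subset of block indices.
    chal = {i: random.randrange(1, 100) for i in (0, 2)}

    # Server's proof: aggregated tag sigma and aggregated block value mu.
    sigma = 1
    for i, a in chal.items():
        sigma = sigma * pow(tags[i], a, N) % N
    mu = sum(a * blocks[i] for i, a in chal.items())

    # Verifier: sigma^e must equal prod(H(i)^a_i) * g^mu mod N.
    rhs = pow(g, mu, N)
    for i, a in chal.items():
        rhs = rhs * pow(H(i), a, N) % N
    assert pow(sigma, e, N) == rhs

The verifier works only with the aggregated sigma and mu, never the blocks themselves, which is the same blockless-verification property discussed later in the system models.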


Literature Review

The aim of the report is to focus on integrity verification for the proper storage of data systems, supporting both public auditability and data dynamics. The secure and effective design integrates the components used by the data storage devices (Lin et al., 2017).


For cloud storage, it is first necessary to consider the basic solutions, namely schemes built on MACs and signatures, for realising data auditability, and then to discuss their demerits in supporting public auditability; a minimal sketch of the MAC baseline follows this paragraph. More general support comes with the provable data possession (PDP) model, which is mainly discussed in terms of the data operations it allows (Sawant et al., 2016). The emphasis is on the updates that can be performed under an effective protocol setting. The encoding follows designs used in distributed data storage security programs. The scheme is also extended from the single-client case to a concrete description of multi-client data auditing (Cheng et al., 2016). The security analysis is then organised around the system model described below.
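
A minimal sketch of the MAC-based baseline (the key and block values are hypothetical), which makes the demerit explicit: verification needs both the secret key and the retrieved block, so it yields only private auditability and cannot be delegated to a third party without exposing the key:

    import hmac, hashlib

    key = b'client-secret-key'             # held by the data owner only

    def tag_block(block: bytes) -> bytes:
        # Client computes a MAC per block before outsourcing the file.
        return hmac.new(key, block, hashlib.sha256).digest()

    blocks = [b'block-0', b'block-1', b'block-2']
    tags = [tag_block(b) for b in blocks]   # tags kept by the client

    # Audit: the verifier must retrieve the whole block AND know the key,
    # so this baseline supports private, not public, auditability.
    retrieved = b'block-1'
    assert hmac.compare_digest(tags[1], tag_block(retrieved))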

The network representation for cloud data storage involves three entities with distinct patterns of interaction. The client is the entity with large files to store, relying on the cloud for their maintenance and computation. The cloud storage server is the entity that manages storage space and computation resources to maintain the client's data. The third-party auditor has the expertise and capabilities to assess the trustworthiness of the stored data on request and to expose risks related to the cloud services. In this paradigm, the client no longer possesses the data locally, so the client must be assured that the data is being stored and maintained correctly. Clients therefore need to be equipped with proper general security means (Krishnan et al., 2016), but verifying on their own costs time and resources, so monitoring of the data is delegated to the trusted third party. The auditor holds the public key, while the cloud server can access or retrieve the data and apply modifications, insertions and deletions.

Security rests on schemes in which no polynomial-time algorithm can cheat the verifier, while the original data can be recovered through the different challenges (Islam et al., 2016). The challenge-response over the storage assures the correctness of the cloud data, in that the original files can be recovered by interacting with the server. Correctness and soundness are thus defined with respect to how the data files are stored: verification should pass only when the server can generate valid responses. The security model follows the pattern of PoR models, with verification extended to dynamic data operations. In particular, block insertion is supported by removing index information from the signature computation, so that inserting a file block does not invalidate the signatures of the blocks after it (Lin et al., 2017). The operations work over a client-side structure, and the server has no capability to compute valid responses without actually holding or transmitting the data. Any variance in the data is an opportunity for an adversary to manipulate it (Saxena et al., 2016). The construction of the security model therefore requires that the pre-stored data be properly verifiable: the authentication tags must be extracted from the proof rather than being pre-computed or pre-sorted by the verifier.

System Models

The system models are mainly based on public auditability and ensuring storage correctness: clients can store files and retain the capability to verify, on demand, the correctness of the data as stored. Dynamic data support allows clients to perform block-level operations on the data files while maintaining the same level of assurance. The design therefore ensures correctness and reliability along with this operational support (Yang et al., 2017). Blockless verification means that the challenged file blocks need not be retrieved by the verifier during the verification process, which is a major efficiency concern.

Different security protocols for cloud data storage services have been researched. Certain integrity assurance patterns allow cloud data schemes to support both public auditability and data dynamics. A further concern is handling the delegations that come from multiple users (Liu et al., 2016).

A bilinear map combines elements of two vector spaces to yield an element of a third, being linear in each argument separately. The same definition applies to modules over commutative rings, and generalises to n-ary (multilinear) functions. For the constructions here it suffices to consider linear subspaces of the relevant vector spaces (Singh et al., 2016). A restatement of the definition follows.
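
In standard notation (a textbook restatement, not drawn from the cited sources), a map B : V x W -> X is bilinear when it is linear in each argument with the other held fixed:

    \[
    B(u + u', w) = B(u, w) + B(u', w), \qquad B(c\,u, w) = c\,B(u, w),
    \]
    \[
    B(u, w + w') = B(u, w) + B(u, w'), \qquad B(u, c\,w) = c\,B(u, w).
    \]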

[Figure: matrix representation M of a bilinear form]

The figure defines the matrix M representing the bilinear mapping; any bilinear form on finite-dimensional spaces can be handled this way. A proper dimensional analysis matches matrix multiplication with evaluation of the bilinear form, as sketched below.
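
Concretely (standard linear algebra, with the dimensions assumed finite), fixing bases identifies the form with a matrix M, and the real inner product is the special case M = I:

    \[
    B(u, v) = u^{\mathsf{T}} M v
            = \sum_{i=1}^{m} \sum_{j=1}^{n} u_i \, M_{ij} \, v_j,
    \qquad
    \langle u, v \rangle = u^{\mathsf{T}} v .
    \]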

Here, the standard inner product on a real vector space V is itself such a bilinear form, which is what matters for the product handling.

V also gives rise to its dual space of linear functions, and applying a functional to a vector, b(f, v) = f(v), is itself a bilinear mapping. Over commutative rings the same construction applies to R-modules, and homomorphisms of S-modules are likewise additive in their arguments. The methods here work with a vector space V over its scalars; the representation shows how modules over rings follow the defined patterns of homomorphism. On a finite-dimensional vector space, the forms of interest are computable, bilinear and non-degenerate (Kaufman, 2009).

These structures involve several related elements. Commutative rings can be equipped at different levels with different bilinear forms, and modules are important for handling the ring settings that allow a proper pairing, including surjective and injective structures. The dimensions are likewise determined by the product and the associated quadratic forms (Jouini & Rabai, 2016).

Security Models

This method defines how the bilinear mapping is combined with authentication, including verification of the different processes: the security material is mapped and then handled through constructive forms. A Merkle hash tree is a binary tree of hashes used to authenticate data values. The data structure is set up by fixing the node structure and placing the hashes of the data blocks at the leaves, so that the different parts of the file are properly handled. Such trees are also used for authentication in peer-to-peer and NoSQL systems (Yi et al., 2017).

Authentication and verification proceed by the verifier requesting the hash h(x) of a block x; the prover returns the requested block together with the auxiliary authenticated information (AAI), the sibling hashes along its path to the root.

Hence, verification of an element consists of recomputing the root value from h(x) and the AAI and comparing it with the authentic root.

The check is on how the authentication values are processed when determining the values of the different data functions. The file F is treated as an ordered collection of blocks with their signatures, and the system is analysed with the root R of the Merkle hash tree; the processing of the algorithm involves the clients as well as the third-party auditor. The security of this construction can be illustrated with the sketch below.
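
A minimal, self-contained sketch of that check (the helper names are hypothetical; it assumes a power-of-two number of leaves and omits the client's signature on the root R), showing how h(x) plus the AAI recomputes R:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        # Build the tree bottom-up; assumes a power-of-two number of leaves.
        level = [h(x) for x in leaves]
        while len(level) > 1:
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def aai(leaves, index):
        # Auxiliary authenticated information: sibling hashes along the path.
        level = [h(x) for x in leaves]
        path = []
        while len(level) > 1:
            sib = index ^ 1
            path.append((level[sib], sib < index))   # (hash, sibling-is-left)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return path

    def verify(block, path, root):
        # Recompute the root from h(block) and the AAI, then compare with R.
        node = h(block)
        for sibling, is_left in path:
            node = h(sibling + node) if is_left else h(node + sibling)
        return node == root

    blocks = [b'b0', b'b1', b'b2', b'b3']
    R = merkle_root(blocks)                  # root the client signs and publishes
    assert verify(b'b2', aai(blocks, 2), R)
    assert not verify(b'tampered', aai(blocks, 2), R)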

To address the security standards, an individual user's data can be stored in multiple physical locations across different individual servers (Ferris et al., 2016). Exploiting individual servers with data redundancy allows the system to tolerate faults or server crashes as user data grows in size and importance. Erasure-correcting codes are used for such distributed storage systems, where cloud data storage relies on this technique to disperse the file structure across the servers (Yang et al., 2016).

The design enables security through a Reed-Solomon code, which computes redundancy parity vectors from the m original data vectors. The analysis of the security standards then rests on the computational Diffie-Hellman problem. A retrieval protocol stores the file together with an extractor algorithm that provides the proof of retrievability. Performance depends on how many blocks suffice to tolerate a given level of file corruption, and the extra cost of dynamic data operation is quantified in terms of server computation and the verifier's communication overhead (Murali et al., 2016). A simplified dispersal example follows.
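
As a deliberately simplified stand-in for Reed-Solomon dispersal (a single XOR parity vector rather than coding over GF(2^8), so it tolerates only one lost server; the vectors are made-up toy values), the rebuild-from-survivors idea looks like this:

    # Simplified (m+1)-server dispersal: m data vectors plus one XOR parity
    # vector; any single lost vector can be rebuilt from the survivors.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data = [b'\x10\x20', b'\x0a\x0b', b'\x31\x42']    # m = 3 data vectors
    parity = data[0]
    for d in data[1:]:
        parity = xor(parity, d)
    servers = data + [parity]                          # dispersed to m + 1 servers

    # Server 1 crashes; rebuild its vector by XOR-ing the surviving ones.
    lost = 1
    rebuilt = b'\x00' * len(servers[0])
    for i, v in enumerate(servers):
        if i != lost:
            rebuilt = xor(rebuilt, v)
    assert rebuilt == data[lost]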

Conclusion

The check on data storage security uses third-party auditors, whose decisions help evaluate service quality against measurable objectives and from other perspectives. Public auditability allows the client to delegate integrity verification tasks to the third party, with the main concerns being the computational resources required and the handling of dynamic data (Nagaraju et al., 2016). Remote data integrity checking is designed to meet these goals effectively. The storage model depends on manipulation of the Merkle hash tree to accommodate dynamic data patterns. Effective handling of multiple auditing tasks is supported by the bilinear approach, with extensions to the multi-user setting; here the TPA carries the performance of the multi-user setting (Madhav et al., 2017). Extensive security and performance analysis shows the patterns to be highly effective and secure.

References

Beloglazov, A. and Buyya, R., Manjrasoft Pty. Ltd., 2016. System, method and computer program product for energy-efficient and service level agreement (SLA)-based management of data centres for cloud computing. U.S. Patent 9,363,190.

Cheng, H.K., Li, Z. and Naranjo, A., 2016. Research note—Cloud computing spot pricing dynamics: Latency and limits to arbitrage. Information Systems Research, 27(1), pp.145-165.

Daniel, E. and Vasanthi, N.A., 2016. An Efficient Continuous Auditing Methodology for Outsourced Data Storage in Cloud Computing. In Computational Intelligence, Cyber Security and Computational Models (pp. 461-468). Springer Singapore.

Ferris, J.M. and Riveros, G.E., Red Hat, Inc., 2016. Monitoring cloud computing environments. U.S. Patent 9,529,689.

Islam, T., Manivannan, D. and Zeadally, S., 2016. A classification and characterization of security threats in cloud computing. Int. J. Next-Gener. Comput, 7(1).

Jouini, M. and Rabai, L.B.A., 2016. A Security Framework for Secure Cloud Computing Environments. International Journal of Cloud Applications and Computing (IJCAC), 6(3), pp.32-44.

Kaufman, L.M., 2009. Data security in the world of cloud computing. IEEE Security & Privacy, 7(4).

Krishnan, R. and Mini, G.V., 2016. A Study on Sharing of Data in Cloud Storage Using Key Aggregate Cryptosystem. International Journal of Engineering and Future Technology™, 5(5), pp.43-47.

Lin, C., Shen, Z., Chen, Q. and Sheldon, F.T., 2017. A data integrity verification scheme in mobile cloud computing. Journal of Network and Computer Applications, 77, pp.146-151.

Liu, C.W., Hsien, W.F., Yang, C.C. and Hwang, M.S., 2016. A Survey of Public Auditing for Shared Data Storage with User Revocation in Cloud Computing. IJ Network Security, 18(4), pp.650-666.

Madhav, N. and Joseph, M.K., 2017, January. Cloud for Engineering Education: Learning networks for effective student engagement. In Computing and Communication Workshop and Conference (CCWC), 2017 IEEE 7th Annual (pp. 1-4). IEEE.

Mahajan, A. and Kumar, R., 2016. Secure method for authorized deduplication and data dynamics in cloud computing. Development, 3(1).

Murali, G. and Prasad, R.S., 2016, February. CloudQKDP: Quantum key distribution protocol for cloud computing. In Information Communication and Embedded Systems (ICICES), 2016 International Conference on (pp. 1-6). IEEE.

Nagaraju, S. and Parthiban, L., 2016. SecAuthn: Provably secure multi-factor authentication for the cloud computing systems. Indian Journal of Science and Technology, 9(9).

Sawant, S.P., Deshmukh, A.A., Mihovska, A.D. and Prasad, R., 2016. Public Auditing and Data Dynamics in Cloud with Performance Assessment on Third Party Auditor. Wireless Vitae 2015.

Saxena, R. and Dey, S., 2016. Cloud Audit: A Data Integrity Verification Approach for Cloud Computing. Procedia Computer Science, 89, pp.142-151.

Singh, A.P. and Pasupuleti, S.K., 2016. Optimised Public Auditing and Data Dynamics for Data Storage Security in Cloud Computing. Procedia Computer Science, 93, pp.751-759.

Sookhak, M., Gani, A., Khan, M.K. and Buyya, R., 2017. Dynamic remote data auditing for securing big data storage in cloud computing. Information Sciences, 380, pp.101-116.

Talluri, S., 2016. Outsourcing Of Multi-Copy Dynamic Data And Alleviate Data Storage And Maintenance. IJSEAT, 4(6), pp.284-286.

Yang, C., Huang, Q., Li, Z., Liu, K. and Hu, F., 2017. Big Data and cloud computing: innovation opportunities and challenges. International Journal of Digital Earth, 10(1), pp.13-53.

Yi, M., Wei, J. and Song, L., 2017. Efficient integrity verification of replicated data in cloud computing system. Computers & Security, 65, pp.202-212.