DESIGN AND IMPLEMENTATION OF A PRIVACY PRESERVED OFF-PREMISES CLOUD STORAGE

Despite the cost-effectiveness and flexibility of cloud computing, some clients are reluctant to adopt this paradigm due to emerging security and privacy concerns. Organizations in sectors such as healthcare and the payment card industry, where confidentiality of information is vital, are not confident enough to trust the security techniques and privacy policies offered by cloud service providers. Malicious attackers have breached cloud storages to steal, view, manipulate and tamper with clients' data, and such attacks are extremely challenging to detect and mitigate. In order to formulate a privacy preserved cloud storage, in this research paper we propose an improved technique consisting of five contributions: a resilient role-based access control mechanism, partial homomorphic cryptography, metadata generation and sound steganography, an efficient third-party auditing service and a data backup and recovery process. We implemented these components using Java Enterprise Edition with Glassfish Server. Finally, we evaluated the proposed technique by penetration testing and the results showed that clients' data is intact and protected from malicious attackers.


INTRODUCTION
Cloud computing offers an innovative method of delivering computing resources whereby clients are able to execute their applications on remote servers with practically unlimited storage capability, while enjoying features such as scalability, availability, on-demand self-service and elasticity on a pay-per-use billing pattern (Yashpalsinh and Modi, 2012). The cloud computing paradigm has brought an agile, revolutionized IT infrastructure to business organizations, since they can now focus on their core business while transferring IT responsibilities such as managing servers and storage, developing applications and installing networks to a Cloud Service Provider (CSP) (Khayyam et al., 2012). Due to this cost-effective operational efficiency, business organizations are rapidly adopting the cloud paradigm. However, migrating data to the cloud is still a serious concern for organizations requiring consistent confidentiality and integrity (Rocha and Correia, 2011).
In order to formulate a privacy preserved off-premises cloud storage, in this research paper we propose an improved technique intended to give users the confidence to use cloud storage for their daily transactions via cloud applications. Our proposed solution may not be appealing to ordinary users, who have no strict security or privacy requirements, do not deal with critical data and do not follow any industry-based regulatory compliance standard such as the Health Insurance Portability and Accountability Act (HIPAA) or the Payment Card Industry Data Security Standard (PCI DSS).
Since the proposed solution requires the client to perform a number of tasks, such as encryption, decryption, metadata generation and requesting and inserting a security code for each transaction, ordinary users may consider the overall computing process time consuming. However, to protect the sensitive information of an organization, these tasks are essential. For instance, in today's online banking systems users are required to complete a set of security requirements to perform an operation, and they follow these processes appropriately to protect their confidential information from malicious attacks; there has to be a trade-off among security, privacy and performance.

ORGANIZATIONS AND CLOUD SECURITY
Organizations that are required to follow well-defined data security standards such as HIPAA and PCI DSS do not trust the existing security and privacy policies offered by CSPs (Mervat et al., 2012; Shucheng et al., 2010). They feel a lack of control when storing confidential records at off-premises storage and are concerned that malicious users might gain illegal access to their records. In order to defend against attacks, CSPs build their cloud infrastructure with security controls such as network firewalls, secure cryptographic processors, anti-malware, honeypots and access control mechanisms. However, cloud storages using these techniques are still subject to implementation vulnerabilities (Rocha and Correia, 2011).
Clients require full confidence that attackers are not able to steal, view or tamper with their data. We know from past studies that security and privacy on the cloud are breached by external or internal attackers (Ling et al., 2011). External attacks are launched by hackers who steal clients' confidential records with the objective of obtaining a desired amount of cash. These attacks may also be carried out by IT personnel belonging to competitors of the CSP or the client, with the intention of damaging the brand reputation of the CSP or abusing and misusing the client's files. CSPs secure their physical and virtual infrastructure using various tools and techniques to protect data and systems from outsider attacks. However, we found that existing solutions are not adequate to preserve the client's privacy, and it has also been identified that internal employees of a CSP may become malicious as well (Catteddu and Hogben, 2009).
Inside attackers, such as malicious employees of CSPs, intentionally exceed their privileged access in a negative manner to affect confidentiality (Adrian et al., 2012). In contrast to an external hacker, a malicious insider can attack the computing infrastructure with relative ease and less hacking knowledge, since insiders have a detailed description of the underlying infrastructure. CSPs have admitted their full awareness of possible malicious insiders and normally claim to have a solution.
For example, in order to hire a trusted cloud admin, CSPs conduct strict background checks and multiple detailed interviews, and some CSPs state that they have strict security procedures in place for all employees who have access to machines that store clients' files. Although these are essential security steps, without a complete trustworthy solution for defending against insider attacks, a malicious insider can easily obtain passwords, cryptographic keys and files and gain access to clients' records (Francisco et al., 2011). When a client's data confidentiality has been breached, the client would rarely have any knowledge of the unauthorized access, mostly due to the lack of control and transparency in the cloud provider's security practices and policies.

Related Work
Ateniese et al. (2007) proposed the Provable Data Possession (PDP) model to ensure that external storage retains the client's file with the required integrity policies. Using the PDP model, the data owner, i.e., the client, first preprocesses the file F, adding some additional data or expanding it so that the new file is F'. The client then generates Verification Metadata (VMd), denoted as M, for F' before sending the file to the server for storage. The generated M is stored locally and the client may delete the personal copy of F, but prior to deletion the client executes a data possession challenge to make sure that the server has successfully retained the file; a yes or no response from the server verifies the existence of the file. While uploading the file, the client may encrypt it, or it may already be encrypted. To verify the file integrity, the client issues a challenge and a request R to the server to compute a hash for the stored file. The server computes the hash and sends the result P to the client, and the client verifies the integrity by comparing P to the locally stored metadata M.
Wang et al. (2010) introduced an improved technique for verifying data integrity on the cloud by utilizing the concept of a Third Party Auditor (TPA). Since the client does not have the knowledge and expertise for the auditing process, the TPA, a certified and trusted entity, conducts scheduled or requested auditing services on behalf of the client. Using this technique, the client divides the file into a number of blocks, so file F is denoted as a sequence of n blocks, i.e., m1, …, mi, …, mn. For each block, a Message Authentication Code (MAC) is generated as σi = MACk(mi), where the secret key k is drawn from the key space K.
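The PDP challenge-response flow described above can be sketched as follows. This is a minimal illustration only: plain SHA-256 hashing stands in for the homomorphic verifiable tags of the actual PDP scheme, and the class and method names are our own.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Illustrative PDP-style flow: the client keeps only a small digest (the
// verification metadata M) and later challenges the server to reproduce it.
// Real PDP uses homomorphic verifiable tags; a plain hash is used here only
// to show the challenge/response shape.
public class PdpSketch {

    // Client-side: preprocess file F and compute verification metadata M.
    public static byte[] generateMetadata(byte[] file) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(file);
    }

    // Server-side: answer a challenge by computing the proof P over stored data.
    public static byte[] computeProof(byte[] storedFile) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(storedFile);
    }

    // Client-side: compare proof P against locally stored metadata M.
    public static boolean verify(byte[] metadata, byte[] proof) {
        return Arrays.equals(metadata, proof);
    }

    public static void main(String[] args) throws Exception {
        byte[] f = "confidential records".getBytes(StandardCharsets.UTF_8);
        byte[] m = generateMetadata(f);          // stored locally by the client
        byte[] p = computeProof(f);              // computed remotely by the server
        System.out.println(verify(m, p));        // true: file retained intact
        f[0] ^= 1;                               // tamper with the stored copy
        System.out.println(verify(m, computeProof(f))); // false
    }
}
```

Note that the client deletes its copy of F and keeps only M, so the metadata must be small compared to the file for the scheme to be worthwhile.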
Using the TPA concept, the cloud user first generates the public and private key parameters, then signs each block and sends the file with its VMd to the server. When the TPA receives a file block, it verifies the block's signature and quits, indicating false, if verification fails, or continues if the result is true. Venkatesh et al. (2012) provided a solution similar to (Ateniese et al., 2007; Wang et al., 2010) but enhanced them by adding Rivest-Shamir-Adleman (RSA) algorithm based storage security. The client generates a signature for each block using the RSA secret key rsk and a hashing algorithm, Ti = (H(mi)·g^mi)^rsk, and collects the block signatures as Φ = {Ti}. A Merkle Hash Tree (MHT) is constructed over all blocks and the client signs the root R of the MHT with the secret key as sig_rsk(H(R)) = (H(R))^rsk. The client then sends the file, its signatures and the signed root {F, Φ, sig_rsk(H(R))} to the server and deletes the local copies of Φ and sig_rsk(H(R)). During the auditing process, the client issues a challenge to the server by selecting specific file blocks; the server computes the proof and sends the result back to the client, who verifies data integrity using the MHT root and authenticates it using the secret key. Ranchal et al. (2010) argued that involving a TPA in cloud services may add risk to confidentiality: the TPA may not act as expected (for instance, when the TPA is attacked by an external hacker or a malicious insider). They countered with a solution for protecting information integrity in the cloud without using a TPA, proposing an Identity Management (IDM) approach using an active bundle scheme which includes sensitive data, metadata and a virtual machine. In order to maintain confidentiality, the Personally Identifiable Information (PII) is packed and encrypted inside the active bundle, and the virtual machine monitors and manages the program inside the bundle that controls access to it. Prasadreddy et al.
(2011) focused on a key and data management technique for preserving the privacy of users on the cloud. They proposed a web-browser plug-in that enables the client to store the key and the data with isolated CSPs. In this architecture, no mutual communication between the two CSPs is required; the plug-in handles all the necessary tasks. For example, if a file is stored with Amazon, its key is stored with Google, so Amazon cannot gain illegal access to the file since it is obfuscated and can only be de-obfuscated using the corresponding key. Each file and its key share a unique tag. When the user wants to access the file, the data-storing CSP sends the obfuscated data with its associated tag to the plug-in, the plug-in sends a request containing the tag to the key-storing CSP, the key-storing CSP checks the tag, identifies the corresponding key and sends it to the plug-in, and finally the plug-in de-obfuscates the data using that key.
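The tag-based retrieval protocol above can be sketched as follows. This is purely an illustration: the two CSPs are modeled as in-memory maps and the obfuscation as a simple XOR, since the cited work does not fix these details at this level, and all class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the browser plug-in mediating between a data-storing CSP
// and a key-storing CSP, linked only by a shared unique tag. Obfuscation is
// modeled here as a repeating-key XOR purely for demonstration.
public class PluginSketch {

    static final Map<String, byte[]> dataCsp = new HashMap<>(); // e.g., Amazon
    static final Map<String, byte[]> keyCsp  = new HashMap<>(); // e.g., Google

    static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++)
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        return out;
    }

    // Store: the obfuscated file goes to one CSP, the key to the other.
    public static void store(String tag, byte[] file, byte[] key) {
        dataCsp.put(tag, xor(file, key));
        keyCsp.put(tag, key);
    }

    // Retrieve: the plug-in fetches both halves by tag and de-obfuscates
    // locally, so neither CSP alone ever sees the plaintext.
    public static byte[] retrieve(String tag) {
        return xor(dataCsp.get(tag), keyCsp.get(tag));
    }
}
```

The point of the design is that neither CSP holds both the data and its key, so a compromise of one provider alone reveals nothing useful.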

A PRIVACY PRESERVED OFF-PREMISES CLOUD STORAGE
Considering these requirements, we propose a privacy preserving technique that enables an organization dealing with sensitive information to store its confidential data at off-premises cloud storage without security and privacy concerns. We have designed and implemented an improved technique that focuses on achieving the following requirements:
• Only privileged users can access the system, under a strict access control policy
• Grant the client adequate control over his data, such as handling encryption, decryption and VMd generation tasks
• The client can process the data and perform certain transactions without decryption
• The client can consistently monitor his files on the cloud without revealing any data to an illegal authority
• The client can efficiently restore unintentionally overwritten, modified or violated records

System Architecture
The proposed system architecture consists of three end users: the client, a Trusted TPA (TTPA) and the cloud admin. From the client's perspective, we assume that the cloud admin is the most untrusted user, the TTPA is semi-trusted and the client's personal admin is fully trusted; the TTPA is only semi-trusted on the assumption that it may have been hijacked by a malicious attacker. In order to ensure a safe computing environment, each involved entity is only permitted to perform its privileged tasks, with the added security of Role-Based Access Control (RBAC) combined with a random Security Code Generator (SCG). The cloud server is the central component of the overall system: it analyzes the access control mechanism and performs the corresponding operations requested by privileged users, as shown in Fig. 1. The auditing reports are shared among all three involved parties to determine the integrity status. The cloud admin is responsible for the successful recovery of any records that have been violated by authorized or unauthorized users.

System Workflow
The process is initiated by the client admin with the generation of private and public keys by requesting the cloud server. Let us examine a simple scenario. For instance, the client admin wants to store a file named EMP.txt, containing the organization's confidential employee records, at the cloud storage. The cloud server requires the file and the public key for the encryption process, as represented in Fig. 2. Data inside the file is homomorphically encrypted from the stream during the uploading process, so when the file arrives at the server it is completely encrypted and stored; EMP.txt is renamed Encrypted.txt when it is saved at the cloud storage. Whenever required, the server needs the client's private key to decrypt the selected file, as shown in Fig. 3.
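The stream-level encryption step of this workflow can be sketched roughly as follows, assuming one record per line and the textbook RSA homomorphism described in the implementation section; the class and method names are illustrative, not the system's actual API.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of encrypting records directly from the upload stream,
// so plaintext never reaches the cloud storage. The one-record-per-line
// format and key sizes are assumptions made for this example.
public class StreamEncryptSketch {

    // Read each record from the upload stream and emit only ciphertext.
    public static void encryptStream(BufferedReader in, PrintWriter out,
                                     BigInteger publicKey, BigInteger n)
            throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            // Interpret the record bytes as a positive integer m < n.
            BigInteger m = new BigInteger(1, line.getBytes(StandardCharsets.UTF_8));
            out.println(m.modPow(publicKey, n).toString(16)); // ciphertext only
        }
    }

    // Counterpart used on download: decrypt one stored ciphertext line.
    public static String decryptLine(String hexCipher, BigInteger privateKey,
                                     BigInteger n) {
        BigInteger m = new BigInteger(hexCipher, 16).modPow(privateKey, n);
        return new String(m.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

Because encryption happens on the stream, the plaintext file is never materialized on the server side, matching the workflow's requirement that the file arrive fully encrypted.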
Each attribute is retrieved from storage, decrypted and presented for the client's view. Since the data is homomorphically encrypted, the client admin can also perform a certain number of transactions without actually decrypting the contents. The client admin requests the server to compute VMd for the file; the cloud server divides the file into an equal number of blocks. In this example, Encrypted.txt is divided into four blocks according to its size and the server generates metadata for the entire file as well as for each block, as shown in Fig. 4. Based on the auditing results, the cloud server creates the auditing reports and shares them among the involved users. Since the file is securely saved with the required integrity, the auditing reports indicate that the integrity of the file and of each block is well maintained, as shown in Fig. 8. If an attacker later violates the stored data, the auditing report clearly indicates the integrity violation and its location, i.e., block-1, as shown in Fig. 10. The TTPA admin then requests the cloud admin to overcome the violation and recover the data to its original state.
Upon receiving the request, the cloud admin views the auditing reports, identifies that block-1 is violated and issues a request to the cloud server to recover block-1. After the successful recovery, the cloud admin alerts the TTPA admin to restart the auditing process in order to ensure that the data is intact. The auditing reports after recovery indicate that integrity is well maintained, as shown in Fig. 12, and that there is no data loss. At this point, the TTPA admin informs the client admin to proceed with other operations, such as uploading more files, updating existing records or downloading files.
Since the client admin has already deleted all the parameters from local storage, the client retrieves both sound files from the TTPA admin and requests the cloud server to extract the public and private keys from sound-1 and sound-2 respectively.

System Implementation
We now describe the implementation of our suggested contributions: the resilient RBAC mechanism, partial homomorphic cryptography, metadata generation and sound steganography, the efficient third-party auditing service and the data backup and recovery process. The code snippets used throughout this paper are written in Java and compiled using the NetBeans 6.9.1 runtime environment with Glassfish Server version 3.0.

Resilient Role-based Access Control Mechanism
According to the security recommendations provided by the Cloud Security Alliance (CSA), the data owner is responsible for enforcing access control policies whereas the CSP is responsible for their implementation (CSA, 2011). We assume that the data owner, together with its IT professionals and legal authorities, specifies and enforces the access control policies by signing a Service Level Agreement (SLA) with the CSP. We have implemented the client's access policies using RBAC.
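A minimal sketch of how an RBAC check combined with a per-transaction random security code might look is given below; the role names, permission sets and six-digit code length are assumptions for illustration, not the deployed policy.

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.Set;

// Minimal RBAC-with-security-code sketch: an operation succeeds only when the
// caller's role holds the permission AND the freshly issued random code is
// echoed back correctly. Roles and permissions here are hypothetical.
public class RbacSketch {

    static final Map<String, Set<String>> rolePermissions = Map.of(
        "CLIENT_ADMIN", Set.of("ENCRYPT", "DECRYPT", "GENERATE_VMD"),
        "TTPA_ADMIN",   Set.of("AUDIT"),
        "CLOUD_ADMIN",  Set.of("RECOVER"));

    static final SecureRandom random = new SecureRandom();

    // A fresh six-digit code is issued per transaction to the privileged user.
    public static String issueSecurityCode() {
        return String.format("%06d", random.nextInt(1_000_000));
    }

    // Authorize: role must hold the permission and codes must match.
    public static boolean authorize(String role, String operation,
                                    String issuedCode, String submittedCode) {
        Set<String> perms = rolePermissions.get(role);
        return perms != null && perms.contains(operation)
            && issuedCode.equals(submittedCode);
    }
}
```

The two-factor shape (static role check plus a one-time code delivered only to the privileged user) is what prevents a stolen password alone from granting access.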

Partial Homomorphic Cryptography
A key limitation of current cloud systems is that the client cannot process data on the cloud while it remains encrypted (Hibo et al., 2011); the client is required to decrypt the data prior to processing. We have implemented partial homomorphic cryptography, which enables the client to perform a certain number of operations on encrypted data. It is not possible to support an unlimited number of operations, due to the unavailability of a practical implementation of fully homomorphic cryptography. Considering the security features of asymmetric algorithms, we implemented the homomorphic version of the RSA algorithm to encrypt, decrypt and process the encrypted data on the cloud. The random homomorphic public and private keys are generated using the following code snippet, where N refers to the bit length of the modulus:

    BigInteger p = BigInteger.probablePrime(N/2, random);
    BigInteger q = BigInteger.probablePrime(N/2, random);
    n = p.multiply(q);
    BigInteger phi_n = (p.subtract(one)).multiply(q.subtract(one));
    publicKey = new BigInteger("65537");
    privateKey = publicKey.modInverse(phi_n);

The public and private keys are generated and stored with the user. Using the public key, the client admin can then start the encryption with the following call, which is multiplicatively homomorphic (message refers to the data being encrypted):

    message.modPow(publicKey, n);

Similarly, data is decrypted with the following call, where encrypted refers to the cipher data:

    encrypted.modPow(privateKey, n);

Since the data is encrypted using RSA's multiplicative homomorphism, the client processes the encrypted data by performing transactions that are multiplicative in nature. For example, when the file EMP.txt is stored at the cloud, the client can increase, modify or delete the salary and commission rate of an employee. A demo query, increasing the commission rate of an employee, is implemented as follows:

    inputFactor = new BigInteger("2");
    encryptedFactor = inputFactor.modPow(publicKey, n); // encrypt the factor
    commissionRate = encryptedFactor.multiply(commissionRate).mod(n); // ciphertext product decrypts to the doubled rate
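Putting the snippets above together, the multiplicative homomorphism can be demonstrated end to end. Key sizes here are illustrative, and textbook (unpadded) RSA is shown only because padding would break the homomorphic property; this is a demonstration sketch, not a production cipher.

```java
import java.math.BigInteger;
import java.util.Random;

// End-to-end demonstration of RSA's multiplicative homomorphism:
// Dec(Enc(a) * Enc(b) mod n) == a * b, as long as a * b < n.
public class HomomorphicRsaDemo {

    public final BigInteger n, publicKey, privateKey;

    public HomomorphicRsaDemo(int bits, Random rnd) {
        BigInteger p = BigInteger.probablePrime(bits / 2, rnd);
        BigInteger q = BigInteger.probablePrime(bits / 2, rnd);
        n = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));
        publicKey = BigInteger.valueOf(65537);
        privateKey = publicKey.modInverse(phi);
    }

    public BigInteger encrypt(BigInteger m) { return m.modPow(publicKey, n); }
    public BigInteger decrypt(BigInteger c) { return c.modPow(privateKey, n); }

    public static void main(String[] args) {
        HomomorphicRsaDemo rsa = new HomomorphicRsaDemo(512, new Random(42));
        BigInteger commission = BigInteger.valueOf(150);
        BigInteger factor = BigInteger.valueOf(2);
        // The server multiplies ciphertexts without seeing either plaintext.
        BigInteger c = rsa.encrypt(commission).multiply(rsa.encrypt(factor))
                          .mod(rsa.n);
        System.out.println(rsa.decrypt(c));
    }
}
```

Decrypting the ciphertext product yields 300, the doubled commission, confirming that the server performed the update entirely on encrypted values.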

Metadata Generation and Sound Steganography
When data is stored at the cloud, the client admin generates VMd using file-level and block-level hash calculation methods: first for the entire file and then for each of its blocks. The file is automatically divided by the server into N blocks according to its size; for example, if the file F is 8 MB, it is divided into four equal chunks F1, F2, F3 and F4, each of size 2 MB. VMd for the entire file is calculated using the Digital Signature Algorithm (DSA) with SHA-1, as represented in the following code snippet:

    Signature dsa = Signature.getInstance("SHA1withDSA", "SUN");
    dsa.initSign(privateKey);
    dsa.update(fileData);
    verificationMetadata = dsa.sign();

VMd for each block is generated similarly, but the block is provided as the parameter instead of the file, i.e., dsa.update(fileBlock_n), where n refers to a specific block number. After the metadata generation process is complete, the client admin sends the VMd and the public key to the TTPA admin for conducting the auditing process; the client admin also sends his private key for secure storage. However, to verify data integrity the TTPA admin only requires the VMd and the client's public key. In order to protect these parameters from man-in-the-middle and other malicious attacks, the keys and VMd are first encoded using the Base64 encoding scheme for transmission over the network and then hidden using sound steganography techniques, as represented in the following code snippet:

    BASE64Encoder encoder = new BASE64Encoder();
    String message = encoder.encode(parameter);
    // Sound steganography
    Sound s = new Sound(inputFile);
    int nbrOfSamples = message.length() * 3;

If the auditing process identifies an integrity violation, a request is sent to the cloud admin for recovery of the violated records from the backup cloud storage.
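The block division and per-block metadata step can be sketched as follows. Plain SHA-1 digests stand in for the DSA signatures above so that the example needs no key material, and the class name is our own.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

// Sketch of splitting a stored file into N near-equal blocks and producing
// Base64-encoded verification metadata per block. SHA-1 digests are used in
// place of the paper's DSA signatures purely to keep the example key-free.
public class BlockMetadataSketch {

    // Divide the file into `blocks` chunks of (near-)equal size.
    public static List<byte[]> split(byte[] file, int blocks) {
        List<byte[]> out = new ArrayList<>();
        int size = (file.length + blocks - 1) / blocks; // ceiling division
        for (int i = 0; i < file.length; i += size)
            out.add(Arrays.copyOfRange(file, i, Math.min(i + size, file.length)));
        return out;
    }

    // Base64-encode each block digest for safe transmission to the TTPA.
    public static List<String> blockMetadata(byte[] file, int blocks)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        List<String> vmd = new ArrayList<>();
        for (byte[] block : split(file, blocks))
            vmd.add(Base64.getEncoder().encodeToString(sha1.digest(block)));
        return vmd;
    }
}
```

Because metadata is kept per block, a later audit can localize a violation to a single block rather than merely flagging the whole file.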

Data Backup and Recovery Process
We assume that the client's primary cloud storage is in Kuala Lumpur while the secondary, or backup, location is the New York datacenter. Under unwanted circumstances such as natural disasters, malicious attacks or unwanted modifications, data is recovered from the New York datacenter without any loss or leakage. During the auditing process, if the TTPA admin identifies an integrity violation, it does not generate a success signal for the client to proceed until the data has been successfully recovered from the backup storage to its correct state. On viewing the auditing reports, the cloud admin initiates the backup and recovery process. If an entire file or a single block is violated, only that file or block is recovered; there is no need to recover the unviolated records. Once the violated, lost or damaged records are recovered, the TTPA admin re-initiates the auditing process to verify their correctness. Since the data has been successfully restored, the auditing results will be positive and the TTPA sends a success message to the client for further tasks such as downloading, uploading new files or processing. Because the data stored on the cloud is homomorphically encrypted, for decrypted downloading the client needs to extract his private key from the encoded sound file.
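A minimal sketch of the selective recovery step is given below, with the primary and backup storages modeled as in-memory block lists; in the real system the backup blocks would be read from the secondary datacenter, and SHA-1 digests stand in for the signed VMd.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Sketch of selective recovery: compare each primary block's digest against
// the stored VMd and copy only the violated blocks back from backup storage,
// leaving unviolated records untouched.
public class RecoverySketch {

    public static byte[] sha1(byte[] b) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(b);
    }

    // Restore from backup only the blocks whose digest no longer matches;
    // returns the number of blocks that had to be restored.
    public static int recover(List<byte[]> primary, List<byte[]> backup,
                              List<byte[]> vmd) throws Exception {
        int restored = 0;
        for (int i = 0; i < primary.size(); i++) {
            if (!Arrays.equals(sha1(primary.get(i)), vmd.get(i))) {
                primary.set(i, backup.get(i).clone()); // selective recovery
                restored++;
            }
        }
        return restored;
    }
}
```

Restoring only the violated blocks is what keeps recovery cheap for large files, as the advantages section below also notes.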

Advantages of the Proposed Technique
The data owner has a sufficient degree of control over his data, since the client admin is privileged to perform the encryption, decryption and metadata generation tasks. Performing these tasks does not require in-depth technical knowledge of cryptography, security or cloud computing; an intermediate IT admin can handle these operations efficiently. Unlike other cloud systems, the client admin is not required to encrypt the data before sending it for cloud storage: using our system, data is encrypted automatically while it is being uploaded, before it arrives at the cloud storage. Data is encrypted in real time directly from the stream without being stored at any intermediate location, so when it arrives at the cloud storage it is fully encrypted. Similarly, for the decryption process, data is not decrypted at the CSP's end and then transferred to the client, because this could raise concerns of confidentiality violation; data departs from cloud storage encrypted and is decrypted in real time from the download stream, so contents can be downloaded or viewed once they arrive at the client's end. With the use of RBAC and the random security code generator, the client can ensure that only authorized users access the system according to their privileges, which provides a sense of assurance to the client.
To perform a transaction, the client admin is not required to decrypt the records, due to the implementation of partial homomorphic cryptography; this saves time and bandwidth cost. In order to give the client the full advantage of acquiring a cloud storage service, in our approach the client admin stores the VMd and the private and public keys with the TTPA rather than on a local machine; these parameters are encoded and can be decoded only by the privileged user. For integrity checks, the TTPA initiates an efficient auditing process that makes it easy to determine the location of violated data within a large file, enabling efficient recovery. During the recovery process, the cloud admin checks the auditing report to determine the actual data that needs to be recovered instead of recovering the entire file.

EVALUATION
In order to verify the objective of this research, i.e., preserving the privacy of clients' data at off-premises cloud storage, we evaluated the security of the implemented system using a penetration testing strategy in a lab-based environment, creating a network of three workstations (one per user). The system was checked against various attacks that might be launched by an external hacker or a malicious insider such as the cloud or TTPA admin. The evaluation results are represented in Table 1. The evaluation process considered the following attack scenarios:
• External hackers launch attacks during the data transmission process to steal the client's confidential records, and remotely access the cloud system in order to exploit the access control privileges of the client admin by stealing his authentication credentials
• Internal as well as external hackers breach the physical security barriers deployed by the CSP to gain unauthorized, direct access to the client's data in order to view, modify or delete the actual contents; they also attack the secure storage of the TTPA to steal or manipulate the client's significant parameters, i.e., keys and VMd
• A malicious TTPA admin attempts to extract the client's private key to decrypt the data

Results
During the experiments, we found that the client's privacy remains intact despite the attacks launched by various malicious users. For example, even if an expert hacker is able to attack the data during transfer (downloading, uploading) or at the storage, privacy is not affected, because data is encrypted before it departs from the client and remains encrypted throughout the entire process, even while stored or processed at the cloud storage. When attackers gain access, they obtain nothing meaningful beyond the ciphertext, and if an attacker violates integrity at the physical cloud storage, the violation is immediately identified during the auditing process and the data is recovered to its original state from the backup storage.
Similarly, when the TTPA admin attempts to extract the client's private key, the attacker is unable to decrypt it because it is encoded inside a sound file. Even if an attacker obtains the private key, he cannot decipher the client's data, since the decryption process can only be initiated by the client after successfully logging in with the required credentials. Unauthorized users cannot perform any operation: even if they break the security of the login menu, they must request a random security code, and under the implemented RBAC that code is only sent to privileged users.
We conclude that, using the proposed technique, the client's privacy, i.e., data confidentiality and integrity, is preserved at off-premises cloud storage despite these threatening attacks.

Discussion
Various researchers across the globe have made valuable contributions in the form of models, techniques and algorithms to overcome the security and privacy concerns of adopting the cloud paradigm. In this research, we analyzed the significant contributions of (Ateniese et al., 2007; Wang et al., 2010; Venkatesh et al., 2012; Ranchal et al., 2010; Prasadreddy et al., 2011) and proposed an improved and enhanced technique that overcomes the limitations of the existing work while acquiring its strengths. For instance, we adopted the third-party auditing process suggested by (Ateniese et al., 2007; Wang et al., 2010) but further enhanced its efficiency by implementing partial homomorphic cryptography, so that users are not only able to store and audit their data files but can also perform transactions on their encrypted data. Venkatesh et al. (2012) proposed an RSA-based cryptography technique that allows users only to encrypt their data; it has no capability of processing the encrypted data. Similarly, since (Ranchal et al., 2010) argued that involving a TPA may add risk to the confidentiality of the client's data, we addressed this issue by implementing sound steganography, which secures the client's significant parameters from malicious activities of the TTPA, so the user's parameters can be stored without security concerns. Prasadreddy et al. (2011) secured the client's data by storing the data and the associated keys with isolated CSPs; this is a good suggestion, but we believe that involving multiple CSPs may increase communication and security complexity. To improve on this approach, we implemented the encoding of cryptographic keys both at storage and during transfer; keys and data can only be decrypted by the relevant privileged authorities due to the implementation of RBAC and the SCG. Finally, to overcome data violation issues, we further enhanced the existing work by implementing a data backup and recovery process, whereby the user's data is efficiently recovered from the secondary, or backup, cloud storage without any loss or damage.

CONCLUSION
This paper provides a data privacy preserving solution enabling organizations that deal with sensitive information to adopt the off-premises cloud paradigm. We proposed and presented an improved technique and evaluated it by penetration testing; the results showed that the client's data is intact and protected from malicious attackers.
For future work, we will continue to focus on privacy concerns to maintain the confidentiality and integrity of clients' data. We intend to deploy the proposed system over Secure Sockets Layer (SSL) to enhance security during data transfer, protecting the system from man-in-the-middle and other session hijacking attacks. Other plans include enhancing the functionality of the proposed technique, implementing a secure computing environment in which multiple clients, TTPAs and cloud admins work together to protect data confidentiality and integrity, and analyzing and evaluating the performance of the system together with its security and privacy capabilities.

Fig. 1. System architecture

For maintaining data confidentiality and integrity, the responsibilities of the client admin are as follows:
• Homomorphic encryption of data prior to its arrival at the cloud storage
• Decrypting, updating and downloading data throughout the entire computing life cycle
• Safe delivery of significant parameters such as hashes and the private and public keys
• Ensuring, with the assistance of the TTPA admin, that the data on the cloud is intact

The TTPA admin belongs to an auditing authority and is responsible for the following tasks:
• Conducting the auditing services on behalf of the client
• Initiating scheduled or requested auditing processes as directed by the client admin
• Providing responses to the client about the status of the data, i.e., whether it is tampered with, deleted, manipulated or intact
• Requesting the cloud admin to start the data recovery process from the backup cloud storage
• Storing the client's parameters such as keys and VMd

Table 1. Evaluation results