
Control Your Own Data with Züs | Weekly Debrief July 12, 2023

Tiago Souza
July 12, 2023
News & Updates
Control Your Own Data with Züs

Cloud Cover AMA:

Happy Wednesday! Our bi-weekly Cloud Cover AMA (Ecclesia #19) is scheduled for tomorrow, Thursday, July 13th, at 9 AM PST. Join us as Saswata presents the latest updates on the Züs Mainnet and the Active Set. Keep reading today’s post to learn more about blockchain updates and how to control your own data with Züs.

We highly value your participation! Feel free to submit your questions on the Discord channel or via Telegram. Now, let’s dive into this week’s updates!

Calling all Züs Community Members!

We need your help! As we approach the Mainnet launch, we invite you to test our Apps. Your feedback is invaluable in ensuring a seamless and user-friendly experience. To participate, simply visit https://zus.network/launch-apps/ and select the App you would like to test. We kindly ask you to share your feedback on our Discord channel.

Storm of the Week: 

Züs Empowers Users to Control Their Own Data in a Data-Driven World

In the wake of the recent class-action lawsuit against Google over data privacy concerns, the need to change how we manage and control data is more apparent than ever. The lawsuit was triggered by Google’s implementation of a new policy that permits data scraping for AI training, and the allegation that the company extracted data from millions of users without their consent underscores the risks associated with centralized data storage systems.

Züs aims to fill this need with our decentralized cloud storage solution. Our system empowers users to control and own their data, marking a significant shift away from the risks associated with centralized storage systems.

“Publicly available” data does not equate to “free to use for any purpose.” Instead, any utilization of data for purposes such as AI training should require explicit user consent. Leveraging blockchain technology, we can provide transparency, enabling users to verify how their data is accessed and used.

This approach not only preserves user privacy but also prevents unfair competitive advantages for large entities like Google. In a decentralized system, no single entity governs vast amounts of data, precluding such competitive imbalances. The road to decentralized storage comes with challenges, but at Züs, we are committed to overcoming them. Our goal is not just to build a product but to shape the future of secure, transparent, and fair data management. We believe the future of data storage is decentralized and user-focused. Join us as we redefine the way we control, store, and protect our data.

Blockchain Updates:

Last week, the blockchain team focused on the Smart Contract (SC) optimization PR and initialized benchmark tests on large-data MPT. However, the performance gains were not significant enough for the PR to be merged, so the team decided to postpone it and continue the investigation after Mainnet. Here are the current assumptions about why the optimization underperforms on large MPT data:

a) The team observed that partitions slow down when one partition becomes full, requiring the full head partition to be packed and a new head partition to be generated. Moving each item’s index MPT node results in a significant amount of MPT writing. For example, if the partition size is 10, we would need to access the MPT at least 10 times (a hypothetical sketch of this cost follows point b).

b) It appears that MPT path addressing becomes sluggish when dealing with large MPT nodes. If this assumption holds true, we will need to store information such as the global config and provider nodes in a shorter path to enhance the speed of get and put operations later.
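To make the packing cost in point a) more concrete, below is a minimal, hypothetical sketch in Go. It does not reproduce the actual 0chain partition or MPT implementation; the partition size, the per-item index nodes, and the write counter are illustrative assumptions used only to show why rolling over a full head partition of size 10 touches the MPT roughly 10 extra times.

```go
package main

import "fmt"

// Hypothetical model of a partitioned list stored in an MPT. This is not the
// real 0chain data structure; it only illustrates the write pattern.
const partitionSize = 10

type partition struct {
	items []string
}

type partitionedList struct {
	partitions []partition
	mptWrites  int // simulated MPT node writes
}

// add appends an item, rolling over to a new head partition when the current
// head is full.
func (p *partitionedList) add(item string) {
	n := len(p.partitions)
	if n == 0 || len(p.partitions[n-1].items) == partitionSize {
		if n > 0 {
			// Packing the full head: every moved item's index MPT node is
			// rewritten, i.e. roughly partitionSize extra MPT accesses.
			p.mptWrites += partitionSize
		}
		p.partitions = append(p.partitions, partition{})
		p.mptWrites++ // write the new head partition node
	}
	head := &p.partitions[len(p.partitions)-1]
	head.items = append(head.items, item)
	p.mptWrites += 2 // update the head partition node and the item's index node
}

func main() {
	var list partitionedList
	for i := 0; i < 25; i++ {
		list.add(fmt.Sprintf("item-%d", i))
	}
	fmt.Printf("items: 25, simulated MPT writes: %d\n", list.mptWrites)
}
```

Under this toy model, the amortized cost per insert stays small, but every rollover produces a burst of MPT writes proportional to the partition size, which matches the slowdown the team observed.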

The team then moved its focus from the SC optimization to closing backend issues.

The following are the main issues and PRs the team closed:

  • PR #2603 – Closed the issue here. It involved using benchmark tests to ensure that no extra tokens were minted after running SCs. The team added assertions in the benchmark tests to check for unexpected token minting and burning by examining the before and after balances of each transaction’s from and to addresses. Running these checks directly in the SC would significantly slow it down, so performing them in benchmark tests was a suitable choice. So far, all SCs have been working well (a hedged sketch of this balance check follows this list).
  • PR #2604 – Replaced the zcnsc.GlobalNode WZCN nonce map with partitions. Previously, the WZCN global nonces were stored in a map held in a single MPT node, and that map grew with every new zcnsc mint SC call. The team fixed this issue by replacing the map with partitions.
  • PR #2607 – This PR utilized gosdk to sync the client nonce in the 0chain repository, allowing miners/sharders to retrieve the latest nonce before constructing new transactions. It also gave us the confidence to import other packages from gosdk to avoid duplicate code. However, we kept an eye on the docker image and binary sizes.
  • PR #2585 – This PR provided a quick fix for the min lock demand. The team discovered a bug where the min lock demand config was loaded with the wrong key, resulting in the min lock demand config always being 0. This led to incorrect allocation and remaining demand calculation.
  • PR #2575 – Implemented max token supply checking during transaction execution. Further details about the issue can be found here.
  • PR #2577 – This PR introduced the usage of Data Transfer Objects in Update Provider SmartContract Calls. The update provider smart contract endpoints previously accepted an object as input, which caused an issue in distinguishing between unchanged and default settings. To address this, we modified the input to use omitempty with pointer types. Additional details can be found in the issue here (an illustrative DTO sketch follows this list).
  • PR #2573 – Removed the postgres-post container from the sharder, as it had no effect. Binding the initial script to the postgres container was sufficient.
  • PR #2552 – Eliminated repeated transaction validation code in miners when accepting new transactions.
  • PR #2584 – Fixed staked capacity during writeprice update. We checked the total data saved to avoid reducing the allocation size below the used capacity.
  • PR #2593 – Removed min lock demand updating in the event db.
  • PR #2574 – Addressed the fix for the block rewards endpoint.
  • PR #2592 – Resolved the docker compose issue with conductor tests.
  • PR #2564 – Resurrected useful indexes in the event db.
  • PR #2580 – Reduced max_read_price and max_write_price to 7 ZCN and added max_charge to sc-config.
  • PR #2594 – Reduced the offer when finalizing the allocation. This had already been done when canceling the allocation, so it was also implemented in the allocation finalizing SC.
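For PR #2603, the balance check can be pictured as a benchmark-side assertion that compares the from/to balances of a transaction before and after the smart contract runs. The sketch below is a hedged illustration only: State, Transaction, runSC, and the test names are hypothetical placeholders, not the actual 0chain benchmark APIs.

```go
package benchmark

import "testing"

type clientID = string

// State is a stand-in for the balance view the benchmark captures
// before and after a smart contract run.
type State map[clientID]int64

type Transaction struct {
	From, To clientID
	Value    int64
}

// runSC is a placeholder for executing a smart contract in the benchmark
// harness; here it simply transfers value between the two parties.
func runSC(s State, txn Transaction) {
	s[txn.From] -= txn.Value
	s[txn.To] += txn.Value
}

// assertNoUnexpectedMint checks that, absent explicit mint/burn events, the
// combined from/to balance of a transaction is unchanged after running the SC.
func assertNoUnexpectedMint(t *testing.T, s State, txn Transaction) {
	t.Helper()
	before := s[txn.From] + s[txn.To]
	runSC(s, txn)
	after := s[txn.From] + s[txn.To]
	if before != after {
		t.Fatalf("unexpected mint/burn: total balance changed from %d to %d", before, after)
	}
}

func TestNoExtraTokensMinted(t *testing.T) {
	state := State{"client-a": 100, "client-b": 50}
	assertNoUnexpectedMint(t, state, Transaction{From: "client-a", To: "client-b", Value: 10})
}
```

Keeping the assertion in the benchmark harness rather than in the smart contract itself means the check runs on every benchmark pass without adding any cost to production transaction execution.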
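PR #2577’s approach of pairing pointer types with omitempty can be illustrated as follows. The struct and field names here are hypothetical stand-ins, not the actual update-provider settings types; the point is only that a nil pointer means “not sent”, while a non-nil pointer to a zero value means “explicitly set to zero”.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// UpdateSettingsDTO is an illustrative Data Transfer Object for an
// update-provider call. Pointer fields plus omitempty distinguish
// "field not sent" (nil) from "field explicitly set to its zero value".
// The field names are hypothetical, not the actual 0chain settings.
type UpdateSettingsDTO struct {
	ReadPrice  *int64 `json:"read_price,omitempty"`
	WritePrice *int64 `json:"write_price,omitempty"`
	Capacity   *int64 `json:"capacity,omitempty"`
}

// applyUpdate only overwrites settings that were actually provided.
func applyUpdate(current map[string]int64, dto UpdateSettingsDTO) {
	if dto.ReadPrice != nil {
		current["read_price"] = *dto.ReadPrice
	}
	if dto.WritePrice != nil {
		current["write_price"] = *dto.WritePrice
	}
	if dto.Capacity != nil {
		current["capacity"] = *dto.Capacity
	}
}

func main() {
	settings := map[string]int64{"read_price": 5, "write_price": 7, "capacity": 1000}

	// Only write_price is present in the payload; read_price and capacity
	// stay untouched because their DTO fields decode to nil.
	payload := []byte(`{"write_price": 0}`)
	var dto UpdateSettingsDTO
	if err := json.Unmarshal(payload, &dto); err != nil {
		panic(err)
	}
	applyUpdate(settings, dto)
	fmt.Println(settings) // map[capacity:1000 read_price:5 write_price:0]
}
```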

Gosdk repository:

  • PR #1073 – Refactored downloads in gosdk to use sys.File. It introduced new methods such as DownloadFileToFileHandler, DownloadByBlocksToFileHandler, DownloadThumbnailToFileHandler, DownloadFileToFileHandlerFromAuthTicket, DownloadByBlocksToFileHandlerFromAuthTicket, and DownloadThumbnailToFileHandlerFromAuthTicket to allow the usage of any in-memory implementations of the sys.File interface. Additionally, a new fileHandlerDownloader option was added, utilizing these new methods for sdk.CreateDownloader (a simplified, illustrative usage sketch follows this list).
  • PR #1034 – Exposed multiop winsdk.
  • PR #1013 – Added the path in the thumbnail hash.
  • PR #1004 – Refactored repair.
  • PR #1085 – Changed the contract of Sharder and Miner update settings, which was part of the fix for the issue here.
  • PR #1084 – Added support for sending chunks.
  • PR #1090 – Fixed the download file callback panic.
  • PR #1096 – Added support for resuming downloads.
  • PR #1099 – Cleaned up the preferredBlobbers list from the chain config.
  • PR #1093 – Fixed the excluded path.
  • PR #1033 – Displayed the correct size.
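The new *ToFileHandler methods from PR #1073 make it possible to download into any implementation of the sys.File interface rather than only to disk. The sketch below is a generic illustration of that pattern with a hypothetical, minimal FileHandler interface and a fake downloader; it does not reproduce gosdk’s actual sys.File definition or the signatures of DownloadFileToFileHandler and its siblings.

```go
package main

import (
	"bytes"
	"fmt"
)

// FileHandler is an illustrative, minimal stand-in for the kind of file
// interface (such as gosdk's sys.File) that the *ToFileHandler download
// methods accept; the real interface declares more methods.
type FileHandler interface {
	Write(p []byte) (int, error)
	Close() error
}

// memFile is an in-memory implementation, so downloads never touch disk.
type memFile struct {
	buf bytes.Buffer
}

func (m *memFile) Write(p []byte) (int, error) { return m.buf.Write(p) }
func (m *memFile) Close() error                { return nil }

// downloadToFileHandler is a placeholder for a downloader that streams blocks
// into whatever FileHandler it is given, mirroring the pattern of
// DownloadFileToFileHandler.
func downloadToFileHandler(blocks [][]byte, fh FileHandler) error {
	for _, b := range blocks {
		if _, err := fh.Write(b); err != nil {
			return err
		}
	}
	return fh.Close()
}

func main() {
	blocks := [][]byte{[]byte("hello "), []byte("from "), []byte("a mock download")}
	mem := &memFile{}
	if err := downloadToFileHandler(blocks, mem); err != nil {
		panic(err)
	}
	fmt.Println(mem.buf.String()) // hello from a mock download
}
```

An in-memory target like this is handy, for example, when downloaded data is consumed immediately and never needs to be written to a local filesystem.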

Blobber:

  • PR #1114 – Removed the fileID from the fileMetaHash in the blobber.
  • PR #1140 – Updated the challenge timing submission.
  • PR #1163 – Fixed the blobber size.
  • PR #1166 – Prevented redeeming readmarkers for free reads.
  • PR #1158 – Fixed the concurrent map write.
  • PR #1141 – Removed the custom nonce.

Control Your Own Data and Have Peace of Mind with Züs.

Data is a huge part of our lives, and managing it can be incredibly challenging. Thankfully, Züs is here to help. With the right tools, you can control your own data and have peace of mind knowing that it will remain secure and protected at all times. To learn more about Züs updates, join us for our AMA tomorrow.
