Description
We are currently using Pinata as an IPFS pinning service, as well as for resolution.
Recently, something started hammering Pinata with requests, which pushes us over the monthly rate limit and in turn disables resolution towards the end of each month. We are seeing around 150000 requests per month, which is over the 100000 allowed.
The two options I see are:
- Switching to a higher plan at 100$/month, which would give us 1000000 requests per month and should provide a decent buffer.
- Switching to a free pinning and resolution service, which would come at no cost, but also at a much lower resolution speed.
Category: Treasury | Process
Status: New
Motivation
The current state regularly breaks our website and might lead to problems with other indexers if they are not keeping cached versions of the NFT data.
Key Terms / Background (optional)
Pinata, our current pinning / resolution service: https://www.pinata.cloud/
NFTStorage, a free pinning service: https://nft.storage/
Gateway services: https://docs.ipfs.tech/concepts/ipfs-gateway/
Pinning: The process of uploading a file to IPFS and having IPFS nodes store it on the network. https://docs.ipfs.io/concepts/persistence/
Resolving: The reverse of pinning, where we ask the network to locate and serve a file for us.
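To make the difference between a protocol-level address and a specific gateway concrete, here is a small illustration; the CID is a truncated placeholder and the gateway hosts are only examples, not a statement about our current setup:

```python
# Illustration only: the CID below is a truncated placeholder, and the hosts
# are example gateways, not necessarily the ones we use today.
cid = "bafybeigdyr..."  # placeholder content identifier (CID)

print(f"ipfs://{cid}/1.json")                             # gateway-agnostic IPFS URI
print(f"https://gateway.pinata.cloud/ipfs/{cid}/1.json")  # resolved through Pinata's gateway
print(f"https://ipfs.io/ipfs/{cid}/1.json")               # resolved through the public ipfs.io gateway
```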
Details
Both approaches come with their own set of pros and cons and their own route to implementation.
- Switching to a higher plan at 100$/month, which would give us 1000000 requests per month and should provide a decent buffer.
- Pros:
- We have a buffer
- We can keep using Pinata's fast resolution
- Cons:
- We are spending 100$/month on IPFS
- Implementing a free service
- Pros:
- We are not spending 100$/month on IPFS
- Cons:
- Requires some programming to make sure we keep the data intact (see the sketch below this list). Bit of background: each ExpansionPunks NFT consists of two files, the metadata and the image. The image is linked within the metadata using the Pinata gateway. Switching to a free gateway would therefore require rewriting the metadata to point to the new gateway and re-uploading both buckets of files. To make sure no discrepancies or errors are introduced, I would strongly favour a data pipeline that does this (as opposed to a simple find/replace) and also performs data-integrity checking.
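As a rough sketch of what that pipeline could look like (the directory names, gateway prefixes, and file layout are assumptions for illustration, not the actual ExpansionPunks repository structure), the core rewrite-and-verify step might be something like:

```python
# Hypothetical sketch only: directory names, gateway prefixes, and the assumption
# that every metadata file carries a single "image" field are illustrative,
# not taken from the actual ExpansionPunks data.
import hashlib
import json
from pathlib import Path

OLD_GATEWAY = "https://gateway.pinata.cloud/ipfs/"  # assumed current prefix
NEW_GATEWAY = "https://nftstorage.link/ipfs/"       # assumed replacement prefix


def rewrite_metadata(src_dir: Path, dst_dir: Path) -> None:
    """Rewrite every metadata JSON so its image URL points at the new gateway,
    failing loudly on any file that does not look the way we expect."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(src_dir.glob("*.json")):
        meta = json.loads(src.read_text())
        image = meta.get("image", "")
        if not image.startswith(OLD_GATEWAY):
            raise ValueError(f"{src.name}: unexpected image URL {image!r}")
        meta["image"] = NEW_GATEWAY + image[len(OLD_GATEWAY):]
        (dst_dir / src.name).write_text(json.dumps(meta, indent=2))


def verify(src_dir: Path, dst_dir: Path) -> None:
    """Integrity check: only the gateway prefix may differ between old and new."""
    for src in sorted(src_dir.glob("*.json")):
        old = json.loads(src.read_text())
        new = json.loads((dst_dir / src.name).read_text())
        old["image"] = old["image"].replace(OLD_GATEWAY, NEW_GATEWAY, 1)
        assert old == new, f"{src.name}: metadata changed beyond the image URL"


def image_manifest(img_dir: Path) -> dict:
    """SHA-256 of every local image, recorded before the re-upload so the same
    hashes can be checked again after fetching the files from the new gateway."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(img_dir.glob("*")) if p.is_file()}


if __name__ == "__main__":
    rewrite_metadata(Path("metadata"), Path("metadata_new"))
    verify(Path("metadata"), Path("metadata_new"))
    Path("image_hashes.json").write_text(
        json.dumps(image_manifest(Path("images")), indent=2))
```

The point of the separate verify step is that it only tolerates the single expected change (the gateway prefix of the image URL); anything else aborts the migration, which is the guarantee a plain find/replace cannot give.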
Implementation
- The DAO could buy the higher plan
- My estimate for implementing the data pipeline, the validation, the upload of the data and the switch of the contract is around 2-3 days of work. I would be comfortable committing to getting it done within two weeks.
Timeline
- Higher plan: as soon as the proposal passes
- Free service: 2 weeks, starting from the passing of the proposal
Cost
- Higher plan: 1200$ / year
- Free service: 0.7 Eth (15 hours * 60$/hour => 900$ at 1280$/Eth), paid once