
How to verify smart contract on Etherscan?


Why would you like to verify your smart contract? It depends, mostly on your use case, but it always comes down to being transparent. If I were into ICOs, I would want to make sure that the token and the crowdsale contract code enforce the cryptoeconomics described in the whitepaper (or, ugh…, in the video). Open sourcing the code on GitHub is a great idea but gives no guarantee that the code in the repository is even remotely similar to the one running on-chain. It is a contract after all, so it is only fair to give other parties a chance to familiarize themselves with the conditions they are going to “sign”. Verify the source code even if not everyone has the programming skills to read it.

For addresses that point to smart contracts, it is possible to get their code.

web3.eth.getCode("0xd49d7704b72b373f7c7adc14623511e25ecc4a2d");

Since this returns the binary code in hex representation, it is not feasible to understand what the contract is doing from it.

Etherscan is a very popular (if not the most popular) blockchain explorer. A lot of people use Etherscan to learn more about a transaction or a particular address on the Ethereum blockchain. It provides multiple services on top of its exploring capabilities. One of them is confirming that the binary data under a specific address is the result of compiling the specified source code (which you can read and analyze).

Sample project

To make it more interesting, our sample project has a constructor that accepts a parameter and extends the Ownable contract from zeppelin-solidity. These simple elements showcase a few challenges you might come across while trying to verify a contract on your own.

pragma solidity 0.4.23;

import "zeppelin-solidity/contracts/ownership/Ownable.sol";

contract MessageOfTheDay is Ownable {
    string public message;

    constructor(string _message) public {
        message = _message;
    }

    function setMessage(string _message) public onlyOwner returns (bool) {
        message = _message;
        return true;
    }
}

The full source code is available on GitHub. You can clone it, deploy it and try to follow my steps to confirm your instance on Etherscan.

$ ./node_modules/.bin/truffle migrate --network kovan

Using network 'kovan'.

Running migration: 1_initial_migration.js
  Replacing Migrations...
  ... 0x1124628216610c4683fe4a81107dd9a34a2532a44e2a39f6e8cf7918517be8d8
  Migrations: 0x16a49d6fe6831760e5208590435d911e7a462560
Saving successful migration to network...
  ... 0xf1c727123800989592d55c988aacc62b928b0ad36fc44c6bff97cce43f6a49cb
Saving artifacts...
Running migration: 2_message_of_the_day.js
  Deploying MessageOfTheDay...
  ... 0xb610d6ad2c1ac2eb742216a7248d49b65671d0712bf4e4863e006c1c1baf0e81
  MessageOfTheDay: 0x91926e1e9c7bdb0be4d2226caf7036392cc06763
Saving successful migration to network...
  ... 0x065eece5dc0fc260bca4e71589c575b9f43ca5a69336738b6f4c7f4e91c4897a
Saving artifacts...

Preparations

Before we start, we need to prepare the ABI-encoded constructor arguments. You can do it by encoding the values using web3. It is much quicker to use abi.hashex.org or to read them straight from the output of the transaction that created the contract.

ABI-encoded representation of a single string argument:

0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001948656c6c6f2c206d696368616c7a616c65636b692e636f6d2100000000000000
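
If you prefer to stay in the console, a minimal sketch using web3 1.x (an assumption; adjust to the web3 version you have installed) produces the same encoding:

const Web3 = require("web3");
const web3 = new Web3();

// ABI-encode the single string constructor argument and strip the 0x prefix
// before pasting it into the Etherscan form.
const encoded = web3.eth.abi.encodeParameter("string", "Hello, michalzalecki.com!");
console.log(encoded.slice(2));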

We have to prepare a single file that contains the entire source code of the contract, without any import statements. Let’s install Truffle Flattener.

npm install -D truffle-flattener
./node_modules/.bin/truffle-flattener ./contracts/MessageOfTheDay.sol > flattened.sol

You can now look into flattened.sol and find the source code of the Ownable.sol contract in place of the import statement.

Verification

To start the process, go to the Etherscan Verify Contract Code page. Make sure to use the network to which you deployed the contract and the 2.0 version of the form (with verifyContract2 in the URL).

  1. Paste the contract address
  2. Enter the name of the contract
  3. Select the compiler version you used to compile the project. You can check it by running truffle version
  4. Set optimization according to your project settings. Remember to also set the correct number of runs
  5. Paste the flattened source code
  6. Paste the ABI-encoded constructor arguments
  7. Click “Verify And Publish”

https://res.cloudinary.com/michal/image/upload/v1524928257/MessageOfTheDay.png

That’s it, you have successfully verified your smart contract!


Register .test domain with ENS


Cee, five, ef, dee, ef, four, ow, six, seven, oh wait, it’s ow, seven, six, bee… That is, of course, not the way you would like to share your Ethereum address or a Swarm or IPFS content hash. You can copy and send it or scan QR codes, but this experience is still inferior to using easy-to-remember, readable names. In the same way as DNS solved this problem for IP addresses, ENS aims to mitigate this issue in the Ethereum ecosystem.

ENS stands for Ethereum Name Service, a set of smart contracts that provide a distributed naming system on the Ethereum blockchain. ENS itself is not a part of the Ethereum stack but a community-driven specification that you can easily extend using custom resolvers for the names you own.

In this tutorial, I would like to guide you through the process of registering a test domain on Rinkeby. You can easily apply the following steps to other testnets by changing only the addresses.

Unlike on Mainnet or Ropsten, ENS on Rinkeby does not support the .eth domain and is limited to the .test domain. Registering a .test domain is much quicker as it does not require you to go through the auction process. A .test domain also expires after 28 days, which makes it good enough for a development phase.

Register .test domain

Before we start, let’s download ensutils-testnet.js, which contains a few ABIs and helper functions that make the process more straightforward. The hardcoded ENS address is no good for us since we would like to use Rinkeby, not Ropsten. On the other hand, if you are trying to register a name on Ropsten, you are all set. Head toward line 220 and change the ENS contract address.

var ens = ensContract.at('0xe7410170f87102df0055eb195163a03b7f2bff4a');

I have prepared a gist with all changes to ensutils needed to make it work on Rinkeby: ensutils-rinkeby.js.

  • ENS on Mainnet: 0x314159265dd8dbb310642f98f50c066173c1259b
  • ENS on Ropsten: 0x112234455c3a32fd11230c42e7bccd4a84e02010
  • ENS on Rinkeby: 0xe7410170f87102df0055eb195163a03b7f2bff4a

Once you have done that, connect to a running Ethereum node using geth and load the ensutils script.

$ geth attach http://127.0.0.1:8545
> loadScript("./ensutils-rinkeby.js")

Before we register, let’s check whether the name you would like to own is available.

> testRegistrar.expiryTimes(web3.sha3("michalzalecki"))
0

If the returned timestamp is 0 or in the past, you can register this name.

> testRegistrar.register(web3.sha3("michalzalecki"), eth.accounts[0], {from: eth.accounts[0]})
0x986a21970a14258fae1bfc952e731ec88cb94818d78a4c68212da312e6eee2f0

Wait for the blockchain to include the transaction. You can then confirm the registration by rechecking the expiry time and the owner.

> testRegistrar.expiryTimes(web3.sha3("michalzalecki"))
1527679534
> ens.owner(namehash("michalzalecki.test"))
"0xY0urAdd355"

Congratulations, you have got yourself a name on ENS!

Public resolver

You own a name, but it does not resolve to anything just yet. You need a resolver. From what I know, there is no official public resolver on Rinkeby at the time of writing. The good news is that anyone can create and deploy one. If you don’t want to create a resolver, then skip to the next section.

Clone the ENS repository, install dependencies and remove the build directory.

git clone https://github.com/ethereum/ens
cd ens
npm install
rm -rf build

Change truffle config so you can deploy to Rinkeby.

// truffle.js
module.exports = {
  networks: {
    rinkeby: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 4
    },
  },
  solc: {
    optimizer: {
      enabled: true,
      runs: 200,
    },
  },
};

We do not need to deploy all the contracts, only PublicResolver. Let’s remove the unnecessary migration file and create a new one.

rm migrations/2_deploy_contracts.js
// 2_public_resolver.js
const PublicResolver = artifacts.require("./PublicResolver.sol");
const ENS = "0xe7410170f87102df0055eb195163a03b7f2bff4a";

module.exports = function(deployer) {
  deployer.deploy(PublicResolver, ENS);
};

The last step is to run a migration.

./node_modules/.bin/truffle migrate --network rinkeby

If you want to know more about verifying your contract on Etherscan and give your resolver a little credibility, then read my other tutorial: How to verify smart contract on Etherscan?

Resolving domains

We need to point our name to the resolver and set the address to which our domain resolves. If you do not have your own resolver, you can use mine.

> publicResolver = resolverContract.at("0x5d20cf83cb385e06d2f2a892f9322cd4933eacdc")
> ens.setResolver(namehash("michalzalecki.test"), publicResolver.address, {from: eth.accounts[0]})
0x939da269d9cf314ef61cba8226501b2f140383b968b6274f74a99b36bc7b986b

You can now check that ENS keeps the resolver address for your domain.

> ens.resolver(namehash("michalzalecki.test"))
"0x5d20cf83cb385e06d2f2a892f9322cd4933eacdc"

I am going to set the domain to point to my account address, but it could be any address at this point.

> publicResolver.setAddr(namehash("michalzalecki.test"), eth.accounts[0], {from: eth.accounts[0]})
0x621a39c3cb0af3d951102453e11fb6f037918c2a63731c3b519b11d639ce22c0

Now let’s see to which address our domain resolves. We can do it by calling the public resolver directly or by using the getAddr helper function.

> getAddr("michalzalecki.test")
"0x7d20cb28c496a76173ee828ecacfb08deb379e8d"
> publicResolver.addr(namehash("michalzalecki.test"))
"0x7d20cb28c496a76173ee828ecacfb08deb379e8d"

If you would like to resolve addresses on the client or in your oracle, you already have all the tools you need. You can use the ABI of the resolver and the helper functions from ensutils. There is also ethereum-ens, but it does not work with web3 1.0.
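
For illustration, here is a minimal client-side sketch; it assumes web3 0.20.x and the eth-ens-namehash package, reuses my resolver address from above, and trims the PublicResolver ABI down to the single addr function:

const Web3 = require("web3");
const namehash = require("eth-ens-namehash");

const web3 = new Web3(new Web3.providers.HttpProvider("http://127.0.0.1:8545"));

// Only the addr(bytes32) part of the PublicResolver ABI is needed here.
const resolverAbi = [{
  constant: true,
  inputs: [{ name: "node", type: "bytes32" }],
  name: "addr",
  outputs: [{ name: "", type: "address" }],
  type: "function"
}];

const resolver = web3.eth.contract(resolverAbi).at("0x5d20cf83cb385e06d2f2a892f9322cd4933eacdc");

resolver.addr(namehash.hash("michalzalecki.test"), (err, address) => {
  console.log(address); // "0x7d20cb28c496a76173ee828ecacfb08deb379e8d"
});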

Conclusion

ENS is a great project that tries to make the Ethereum userland a more friendly place. While the developer experience is not great yet, it has a strong foundation and broad adoption among popular projects like MyEtherWallet, MetaMask, or Mist.

There is, of course, more to ENS than what we have covered here. There is proper documentation available that answers many questions and provides multiple examples. If you would like to learn more about the project and the people behind it, grab a cup of coffee and watch Nick Johnson’s talk - Ethereum ENS - The Ethereum Name Service.

Integration tests and mocking web3 apps


Decentralized applications present a new set of challenges. One of them is testing. The transaction lifecycle is more complex than the old-school POST request/response flow, and errors are often less than helpful. Although the developer experience is getting better, this puts into perspective how essential testing is.

Error: VM Exception while processing transaction: revert

is the new

Uncaught TypeError: undefined is not a function

The most significant change from the developer’s perspective when switching from a Web 2.0 backend to Ethereum dApps is that you cannot expect the “request” to return a value straight away in the response. The transaction hash is available just after you send the transaction, but that does not mean the transaction will succeed or even that miners will include it in the blockchain. This is how handling transactions may look in your React/Redux app.

this.props.addTransaction({ id });

AwesomeContract.methods
  .awesomeMethod(web3.utils.asciiToHex(values.awesomeString))
  .send({ from: account })
  .on("transactionHash", txhash => this.props.setTransactionHash({ id, txhash }))
  .on("receipt", receipt => this.props.setTransactionReceipt({ id, receipt }))
  .on("error", error => this.props.setTransactionError({ id, error }))
  .on("confirmation", confirmation => this.props.setTransactionConfirmation({ id, confirmation }));

Testing each function that handles a particular state of the transaction is very easy. That is especially true if you implement the business logic of handling transactions in reducers.
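
As an illustration, a unit test for such a reducer could look like the sketch below; the reducer name, action creator, and state shape are assumptions for the example, not code from a real app:

// Hypothetical names: transactionsReducer and setTransactionHash are placeholders.
import { transactionsReducer } from "./transactionsReducer";
import { setTransactionHash } from "./actions";

describe("transactionsReducer", () => {
  it("stores the transaction hash once it is known", () => {
    const initialState = { 1: { id: 1, status: "created" } };
    const action = setTransactionHash({ id: 1, txhash: "0xabc123" });
    const state = transactionsReducer(initialState, action);

    expect(state[1].txhash).toBe("0xabc123");
  });
});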

Testing the entire flow is quite a challenge and would require a lot of mocking to keep it a unit test that is quick to complete. That is why I prefer to cover it with integration tests.

Recently I have enjoyed testing with Cypress; it offers an excellent, hassle-free developer experience. Cypress runs an instance of Chrome which is, of course, missing a web3 instance on the window object. Not having access to MetaMask or Mist is going to be a common problem no matter which tool you use for integration tests. My solution is to inject a web3 instance that does not require any user action to sign the transaction. By attaching to the window:before:load event, we can modify the window object before the app code runs.

import Web3 from "web3";
import PrivateKeyProvider from "truffle-privatekey-provider";

cy.on("window:before:load", (win) => {
  const provider = new PrivateKeyProvider(Cypress.env("ETH_PRIV_KEY"), Cypress.env("ETH_PROVIDER"));
  win.web3 = new Web3(provider); // eslint-disable-line no-param-reassign
});

There are a few ways to use environment variables in Cypress. Do not let the name of truffle-privatekey-provider fool you; it is not a Truffle-dependent package.

The possibilities of using PrivateKeyProvider in tests do not end here. You can also test how your application UI reacts to events triggered by “another user” by making a transaction directly from the test scenario code. I hope this gives you some insight into how to test your dApp.
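
A minimal sketch of that “another user” idea, assuming a second funded private key in ETH_PRIV_KEY_2 and an AwesomeContract ABI and address exported from a hypothetical module; none of these names come from a real test suite:

import Web3 from "web3";
import PrivateKeyProvider from "truffle-privatekey-provider";
import { abi, address } from "../../src/contracts/AwesomeContract"; // hypothetical module

it("shows the awesome string submitted by another user", () => {
  const provider = new PrivateKeyProvider(Cypress.env("ETH_PRIV_KEY_2"), Cypress.env("ETH_PROVIDER"));
  const web3 = new Web3(provider);
  const contract = new web3.eth.Contract(abi, address);

  cy.visit("/");

  // Send the transaction as the "other user"; Cypress waits for the returned promise.
  cy.wrap(null).then(() =>
    web3.eth.getAccounts().then(([account]) =>
      contract.methods
        .awesomeMethod(web3.utils.asciiToHex("hello"))
        .send({ from: account })
    )
  );

  // The UI should eventually react to the contract event.
  cy.contains("hello");
});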

An Intro to Nebulas for Ethereum Developers


Nebulas is yet another platform on which you can develop smart contracts. It offers a means of using JavaScript to develop Smart Contracts — an intriguing alternative to more established solutions, such as Ethereum.

I read about it for the first time on Reddit when they announced the Nebulas Incentive Program, which rewards developers for successfully submitting a dApp (decentralised application). From Nebulas’ whitepaper, we can learn about the team’s motivation and their goal to come up with a search engine and ranking algorithm for dApps. Sounds familiar? Let me google that. Oh, that sounds like Google.

By skimming through the whitepaper, you learn that Nebulas recognizes the problem of “measurement of value for applications on the blockchain” and the difficulties that blockchain platforms face in upgrading themselves and evolving.

This is not a review, and I neither want to nor feel knowledgeable enough to assess whether the problems this project aims to solve, as mentioned above, are worth investing your time or money in. I am interested in the developer experience and the quality of the provided tooling from an engineering perspective, and in seeing how it compares to the well-established Ethereum. If our goals are in line, then this is a post worth reading.

Ethereum Virtual Machine and Nebulas Virtual Machine

In general, learning about the Nebulas Virtual Machine (NVM) and how the platform works is a breeze if you are familiar with how Ethereum works. Computation on both the Ethereum Virtual Machine (EVM) and the NVM is intrinsically bound by the supplied gas. The transaction fee is the gas used multiplied by the gas price.

There are two types of accounts: external (non-contract) accounts and smart contracts (denoted by type 87 and 88 respectively).

curl -X POST \
  http://localhost:8685/v1/user/accountstate \
  -H 'content-type: application/json' \
  -d '{ "address": "n1Vg9Ngvi3vXo5f59diW4MK8XXger36weUm" }'

{"result":{"balance":"1000000000000000000","nonce":"0","type":87}}

Calls run locally on the currently connected node; they are free, return values immediately, and do not change the blockchain state.

curl -X POST \
  http://localhost:8685/v1/user/call \
  -H 'content-type: application/json' \
  -d '{
  "from": "n1QA4usgq7sJbcM5LEkJWpgyNBcKtVEULFf",
  "to": "n1mQoB6HneRuu7c15Sy79CPHv8rhkNQinJe",
  "value": "0",
  "gasPrice": "1000000",
  "gasLimit": "2000000",
  "contract": { "function": "myView", "args": "[100]" }
}
'

{
  "result": {
    "result": "{\"key\":\"value\"}",
    "execute_err": "",
    "estimate_gas": "20126"
  }
}

Each transaction costs gas and changes the blockchain state (it’s dirt cheap and a small fraction of a penny at the time of writing).

curl -X POST \
  http://localhost:8685/v1/admin/transactionWithPassphrase \
  -H 'content-type: application/json' \
  -d '{
  "transaction": {
    "from": "n1Vg9Ngvi3vXo5f59diW4MK8XXger36weUm",
    "to": "n1gQgDb72yL1vrRcUEP3219ytcZGxEmcc9u",
    "value": "0",
    "nonce": 59,
    "gasPrice": "1000000",
    "gasLimit": "2000000",
    "contract": { "function": "myMethod", "args": "" }
  },
  "passphrase": "passphrase"
}
'

{
    "result": {
        "txhash": "36a61c6413e71387f34b0b442e73d2a8b54646917c58338166b0473292c0b26d",
        "contract_address": ""
    }
}

The most noticeable difference is the programming language used to develop smart contracts. On the EVM, Solidity is the de facto standard language for writing smart contracts. In its current form, the NVM supports JavaScript (V8 6.2, from what I have found) and quasi-TypeScript through compilation to JS. There are no typings available for storage, transactions or other globally available objects.

Due to plans for supporting LLVM, we might see a broader range of supported languages, such as C/C++, Go, Solidity or Haskell. This would be quite a feature if the Nebulas team can deliver on this promise, and a big disappointment otherwise.

Smart Contracts

Let’s take a deep dive into how the same constructs are implemented in Ethereum (Solidity) and Nebulas (JavaScript).

Transferring value

Both Ethereum and Nebulas have a smallest nominal value called Wei. 1 ETH or 1 NAS is 10¹⁸ Wei. The value accepted by the transfer function should be a number of Wei. In the case of Ethereum, it is a uint256, and for Nebulas, it should be an instance of BigNumber.

// Solidity
address(address).transfer(value);

// Nebulas (JavaScript)
Blockchain.transfer(address, value);

Transaction properties

Transaction properties exist in the global namespace and provide a set of information about the height of the chain, block timestamp, supplied gas and much more.

// Solidity
msg.sender       // sender address (address)
msg.value        // number of Wei sent (uint256)
block.timestamp  // current block timestamp (uint256)

// Nebulas (JavaScript)
Blockchain.transaction.from   // sender address (string)
Blockchain.transaction.value  // number of Wei sent (string)
Blockchain.block.timestamp    // current block timestamp (string)

Ethereum address validation is possible using mixed-case checksum encoding, as described in EIP-55, a proposal by Vitalik Buterin. Wallet software has widely adopted this improvement. Because Solidity uses a 20-byte address type, it is impossible to validate an address this way on-chain. A Nebulas address is a little different; its checksum is calculated from the public key, and it also encodes whether the address belongs to a regular account or a smart contract.

Blockchain.verifyAddress(address);

Preventing overflows

The maximum safe integer in JavaScript is 2⁵³ - 1. The maximum unsigned integer (uint256) in Solidity is much bigger: 2²⁵⁶ - 1. In some particular use cases, it is possible to overflow (or underflow) these values. To mitigate the severity of any issues that may originate from an overflow, you can use third-party libraries.

uint max = 2**256 - 1; // 1157920892373161950...57584007913129639935
max + 1;               // 0

In Solidity, you can use the popular SafeMath library, which throws an error, consuming all the remaining gas, in case of an underflow or overflow.

import "openzeppelin-solidity/contracts/math/SafeMath.sol";

using SafeMath for uint256;

uint max = 2**256 - 1;
max.add(1); // VM error

Our JavaScript-powered smart contract running on Nebulas can use bignumber.js without any additional imports.

const BigNumber = require("bignumber.js");

Number.MAX_SAFE_INTEGER;     // 9007199254740991
Number.MAX_SAFE_INTEGER + 1; // 9007199254740992
Number.MAX_SAFE_INTEGER + 2; // 9007199254740992

const number = new BigNumber(Number.MAX_SAFE_INTEGER);
number.plus(2).toString();   // "9007199254740993"

Contract structure

Solidity is a contract-oriented language. This means that it is an object-oriented language with a contract keyword that defines a class-like type, able to store state and provide behavior for that state via functions.

contract Crowdsale is MintedCrowdsale, CappedCrowdsale {
    constructor(uint256 _rate, address _wallet, ERC20 _token) public {
        // ...
    }

    function () public payable {
        buyTokens(msg.sender);
    }

    function buyTokens(address _beneficiary) public payable {
        // ...
    }
}

Furthermore, Solidity also supports interfaces and Python-like multiple inheritance (through C3 superclass linearization).

On Nebulas, a contract is a class (or a function) with methods available on its prototype. One function, init, is required and executed during the contract initialization; it accepts arguments passed during the contract creation.

class StandardToken {
    init() {
        // ...
    }
}

module.exports = StandardToken;

State variables

Solidity has three types of storage, if you do not count the events log as a fourth: storage, memory, and the stack. Storage keeps the contract's state variables.

contract Ownable {
    address owner;

    constructor() public {
        owner = msg.sender;
    }
}

Nebulas provides an abstraction on top of its storage capabilities. Using LocalContractStorage, we have to explicitly indicate which variables should persist.

class Ownable {
    constructor() {
        LocalContractStorage.defineProperty(this, "owner");
    }

    init() {
        this.owner = Blockchain.transaction.from;
    }
}

Visibility

Solidity has four visibility specifiers: public, private, external and internal. The public specifier allows for external and internal calls, while private functions can be called only from the contract that defines them. Internal functions work like private, but extending contracts can call them too. External functions are callable by other contracts and transactions, as well as internally using “this” keyword.

contract Visibility {
    function visible() pure public {}
    function hiddenFromOthers() pure private {}
    function visibleOnlyForFunctions() pure external {}
    function visibleForChildContracts() pure internal {}
}

On Nebulas, private functions are achieved via a naming convention and are not enforced by the language. All functions that start with an underscore are not part of the interface and cannot be called via a transaction.

class Visibility {
    visible() {
        this._hidden();
    }

    _hidden() {}
}

Client-side applications

Ethereum dApps use an injected web3 instance with an embedded provider that asks the user for confirmation each time the dApp tries to sign a transaction. MetaMask, an Ethereum extension for your browser, supports this flow, as do Mist and Status (wallet apps with dApp browsers for desktop and mobile, respectively).

When using Nebulas dApps, you can install WebExtensionWallet. It lacks a convenient web3-like abstraction, but it is good enough for PoCs and simple use cases. The API for sending transactions is very similar to using RPC directly. In fact, using RPC directly is the easiest way to make a call that does not require signing.
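
For example, the read-only call shown earlier with curl can be issued straight from the browser; the sketch below mirrors that /v1/user/call request with fetch, assuming the node accepts cross-origin requests and with placeholder addresses:

fetch("http://localhost:8685/v1/user/call", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    from: "n1QA4usgq7sJbcM5LEkJWpgyNBcKtVEULFf",
    to: "n1mQoB6HneRuu7c15Sy79CPHv8rhkNQinJe",
    value: "0",
    gasPrice: "1000000",
    gasLimit: "2000000",
    contract: { function: "myView", args: "[100]" }
  })
})
  .then(response => response.json())
  // result.result is a JSON string, as in the curl example above
  .then(({ result }) => console.log(JSON.parse(result.result)));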

Deployment

There are multiple ways to deploy contracts on Ethereum. The most developer-friendly are Remix and Truffle migration scripts. Nebulas provides a Mist-like experience within its web wallet. You copy and paste the contract source code, specify the constructor arguments, and you are all set.

Alternatively, you can change the contract source to an escaped string and send the transaction, which creates a new contract using RPC.

Testing

Testing is one of the most crucial parts of smart contract development, due to contract immutability and costly mistakes. The Nebulas ecosystem is in its infancy and has no tooling to make such testing straightforward. Although clumsy, testing a Nebulas smart contract is not impossible. It requires mocking internal APIs but, once you set your mind to it, you can test the contract more or less reliably.
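
A rough sketch of that mocking approach, assuming the Ownable class from the example above is exported with module.exports and the test runs under Jest; the in-memory LocalContractStorage and Blockchain stubs are my own simplifications, not an official test harness:

// Stub the globals that the Nebulas runtime normally provides.
const storage = {};
global.LocalContractStorage = {
  defineProperty(obj, name) {
    Object.defineProperty(obj, name, {
      get: () => storage[name],
      set: (value) => { storage[name] = value; },
    });
  },
};
global.Blockchain = {
  transaction: { from: "n1Vg9Ngvi3vXo5f59diW4MK8XXger36weUm", value: "0" },
  block: { timestamp: "1527679534" },
};

const Ownable = require("./Ownable"); // hypothetical path to the contract file

it("stores the creator as the owner", () => {
  const contract = new Ownable();
  contract.init();
  expect(contract.owner).toBe(global.Blockchain.transaction.from);
});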

It is much easier to test smart contracts in Solidity. Thanks to the efforts of the Truffle team, you can test contracts in isolation, almost reliably, in both Solidity and JavaScript.

pragma solidity 0.4.24;

import "truffle/Assert.sol";
import "../contracts/Ownable.sol";

contract OwnableTest {
    Ownable ownable;

    function beforeEach() public {
        ownable = new Ownable();
    }

    function testConstructor() public {
        Assert.equal(ownable.owner(), address(this), "owner address is invalid");
    }

    // ...
}

Conclusion

Frankly, since mid-2017, I thought that Lisk was going to be the first platform for JavaScript smart contracts. Nebulas took me by surprise.

Nebulas, of course, cannot match the more mature Ethereum ecosystem yet, but deciding which one is better is not the goal here. I think that new projects should be a little more modest and upfront about their shortcomings. My initial disappointment was only due to the huge claims made at the time.

When I take a step back, it is clear that the Nebulas team has made considerable progress in the last few months. I believe that this is a project worth observing, and I hope that releasing the mainnet was not premature and is not going to slow down development, as it tends to with other, similar projects.


This article has been originally posted on Tooploox’s blog: Nebulas: JavaScript Meets Smart Contracts

Set up IPFS node on the server


IPFS (InterPlanetary File System) is a protocol establishing a peer-to-peer network with resources addressed based on their content instead of their physical location, like in HTTP. IPFS gives us some of the guarantees of blockchains, like decentralization and unalterable storage, at a fraction of the price you would have to pay in transaction fees. Participation in the IPFS network is free.

The critical thing to understand about IPFS is that the network is not going to store your files just because you added them. Adding files to IPFS does not upload them anywhere; it only means that you add them to the local repository you host on your node. Unless other peers are interested in hosting your content on their nodes, once you shut down your node, the files you have added will not be available for others until you are back online. The caching mechanism mitigates that issue, as peers that fetched your content keep it in their cache, but an aggressive garbage collector quickly removes unused files. You should not rely on the cache to keep your files online.

To reliably share files with other peers and use IPFS, e.g. to host a webpage, you should set up a server and not rely on your personal device's connectivity.

Install and setup IPFS

I am going to use AWS EC2 to spin up my server with Amazon Linux 2 using the default VPC. A t2.micro instance will not cost you a dime under the free tier and is good enough for an IPFS node and a few web services.

Install Golang and IPFS.

sudo yum update -y
sudo yum install -y golang

wget https://dist.ipfs.io/go-ipfs/v0.4.15/go-ipfs_v0.4.15_linux-amd64.tar.gz
tar -xvf go-ipfs_v0.4.15_linux-amd64.tar.gz
./go-ipfs/install.sh

If the installation fails, then you can move executable to your bin path manually.

sudo mv ./go-ipfs/ipfs /usr/local/bin

Initialize local IPFS configuration and add your first file.

> ipfs init
> echo "<h1>Michal</h1>" > index.html
> ipfs add index.html
added Qma1PYYMwbgR3GBscmBV7Zx8YgWdhBUAY6z27TznhrBet9 index.html
> ipfs cat Qma1PYYMwbgR3GBscmBV7Zx8YgWdhBUAY6z27TznhrBet9
<h1>Michal</h1>

Congratulations! You’ve just added a file to your IPFS repository. Although you can fetch it, it works only locally. To join the network, you should run the IPFS daemon.

ipfs daemon

If your firewall does not block connection, then you should be able to fetch your files from the remote node or use a public gateway like https://ipfs.io/ipfs/.

Run IPFS daemon on start

It would be better to start the IPFS daemon as a service instead of a process attached to the terminal. Let’s define a simple unit file responsible for running the IPFS daemon service.

sudo vi /etc/systemd/system/ipfs.service

Copy and paste unit file definition.

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=ec2-user

[Install]
WantedBy=multi-user.target

Running the daemon with --enable-namesys-pubsub gives you nearly instant IPNS updates. IPNS is the IPFS naming system that allows for mutable URLs. After editing the unit file, reload the daemon, enable the service to start on boot, and start the service.

sudo systemctl daemon-reload
sudo systemctl enable ipfs
sudo systemctl start ipfs

You can now reboot your instance and make sure that IPFS is back up and running.

sudo systemctl status ipfs

Make gateway publicly accessible

If you want to, you can make your IPFS gateway publicly accessible. Change gateway configuration to listen on all available IP addresses.

In ~/.ipfs/config change

"Gateway": "/ip4/127.0.0.1/tcp/8080"

to

"Gateway": "/ip4/0.0.0.0/tcp/8080"

Conclusion

I have been running an IPFS node on EC2 for some time now, and I have not had any major problems with it. You can use scp to copy files over ssh to your remote server. For programmatic access, you can make your IPFS gateway writable and use the IPFS HTTP API. There are plenty of creative use cases for IPFS!

Using IPFS with Ethereum for Data Storage


Ethereum is a well-established blockchain that enables developers to create smart contracts: programs that execute on the blockchain and can be triggered by transactions. People often refer to blockchain as a database, but using blockchains as a data store is prohibitively expensive.

At the current price ($530, 4 gwei) storing 250GB on Ethereum would cost you $106,000,000. In general, we can put up with the high cost because a) we don't save that much data on blockchains and b) the censorship resistance, transparency and robustness of blockchains are worth it.

Decentralized Storage

IPFS (InterPlanetary File System) offers some guarantees we know from blockchains, namely decentralization and tamper-proof storage, but doesn't cost more than conventional disk space. Running an EC2 t2.micro instance with 250GB of EBS storage costs about $15/mo. A unique feature of IPFS is the way it addresses files. Instead of using location-based addressing (a domain name, IP address, the path to the file, etc.), it uses content-based addressing. After adding a file (or a directory) to the IPFS repository, you can refer to it by its cryptographic hash.

$ ipfs add article.json
added Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM article.json

$ ipfs cat Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM
{
  "title": "This is an awesome title",
  "content": "paragraph1\r\n\r\nparagraph2"
}

$ curl https://ipfs.io/ipfs/Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM
{
  "title": "This is an awesome title",
  "content": "paragraph1\r\n\r\nparagraph2"
}

You can then access the files using the IPFS client or any public gateway. You can also create a non-public gateway, make it writable (it is read-only by default) and implement your own authorization scheme to get programmatic access to the IPFS network.

It’s important to understand that IPFS is not a service where other peers will store your content no matter what. If your content isn’t popular, the garbage collector will remove it from other nodes if they didn’t pin the hash (they are not interested in renting you their disk space). As long as at least one peer on the network cares about your files and has an interest in storing them, other nodes can easily fetch them. Even when your file disappears from the network, it can be added again later, and unless its content changes, its address (hash) will be the same.

IPFS and Ethereum Smart Contracts

Although the Ethereum protocol doesn’t provide any native way to connect to IPFS, we can fall back on off-chain solutions like Oraclize to remedy that. Oraclize allows for feeding smart contracts with all sorts of data. One of the available data sources is URL, so we could use a public gateway to read from our JSON file on IPFS, but relying on a single gateway would be a weak link. Another data source, the one we are going to use, is IPFS. By using the JSON parser, which is part of the query to the Oraclize smart contract, we can extract a specific field from the JSON document.

oraclize_query("IPFS", "json(Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM).title");

If Oraclize can fetch the file within 20 seconds, you can expect an asynchronous response. If you upload the file using a well-connected node, the timeout is not something you should be concerned about; our EC2 (EU Frankfurt) instance connects to roughly 750 peers, and fetching files through the public gateways or a locally running daemon is almost instant. The response is asynchronous, and the oraclize_query call returns a query id (bytes32). You use it as an identifier for data coming from Oraclize.

function __callback(bytes32 _queryId, string _data) public {
    require(msg.sender == oraclize_cbAddress());
    process_data(_data);
}

For safety reasons, we want to make sure that only Oraclize is allowed to call the __callback function.

You can find the full codebase of our decentralized blog example on GitHub: tooploox/ipfs-eth-database!

Performance and Implementation

Initially, I was concerned about the performance. Can you fetch JSON files hosted on IPFS as quickly as it takes centralized services to send a response? I was pleasantly surprised.

$ wrk -d10s https://ipfs.io/ipfs/Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM
Running 10s test @ https://ipfs.io/ipfs/Qmd4PvCKbFbbB8krxajCSeHdLXQamdt7yFxFxzTbedwiYM
  2 threads and 10 connections
  Thread Stats Avg Stdev Max +/- Stdev
    Latency 59.18ms 24.36ms 307.93ms 94.73%
    Req/Sec 86.34 15.48 101.00 85.57%
  1695 requests in 10.05s, 1.38MB read
Requests/sec: 168.72
Transfer/sec: 140.70KB

In our implementation of the censorship-resistant blog, the author has to enter only the IPFS hash when calling addPost on the smart contract. We read the title from the file using IPFS and Oraclize and store it using Ethereum events. We don’t need to keep the title accessible to other smart contracts, so using events is good enough for our use case. This might not be the most groundbreaking example, but it nicely shows how to optimize for low transaction fees.

pragma solidity 0.4.24;

import "openzeppelin-solidity/contracts/ownership/Ownable.sol";
import "./lib/usingOraclize.sol";
import "./lib/strings.sol";

contract Blog is usingOraclize, Ownable {
    using strings for *;

    mapping(address => string[]) public hashesByAuthor;
    mapping(bytes32 => string) public hashByQueryId;
    mapping(bytes32 => address) public authorByHash;

    event PostAdded(address indexed author, string hash, uint timestamp, string title);
    event PostSubmitted(address indexed author, string hash, bytes32 queryId);

    uint private gasLimit;

    constructor(uint _gasPrice, uint _gasLimit) public {
        setCustomOraclizeGasPrice(_gasPrice);
        setCustomOraclizeGasLimit(_gasLimit);
    }

    function getPrice(string _source) public view returns (uint) {
        return oraclize_getPrice(_source);
    }

    function setCustomOraclizeGasPrice(uint _gasPrice) public onlyOwner {
        oraclize_setCustomGasPrice(_gasPrice);
    }

    function setCustomOraclizeGasLimit(uint _gasLimit) public onlyOwner {
        gasLimit = _gasLimit;
    }

    function withdraw() public onlyOwner {
        owner.transfer(address(this).balance);
    }

    function __callback(bytes32 _queryId, string _title) public {
        require(msg.sender == oraclize_cbAddress());
        require(bytes(hashByQueryId[_queryId]).length != 0);
        string memory hash = hashByQueryId[_queryId];
        address author = authorByHash[keccak256(bytes(hash))];
        hashesByAuthor[author].push(hash);
        emit PostAdded(author, hash, now, _title);
    }

    function addPost(string _hash) public payable returns (bool) {
        require(authorByHash[keccak256(bytes(_hash))] == address(0), "This post already exists");
        require(msg.value >= oraclize_getPrice("IPFS"), "The fee is too low");
        bytes32 queryId = oraclize_query("IPFS", "json(".toSlice().concat(_hash.toSlice()).toSlice().concat(").title".toSlice()), gasLimit);
        authorByHash[keccak256(bytes(_hash))] = msg.sender;
        hashByQueryId[queryId] = _hash;
        emit PostSubmitted(msg.sender, _hash, queryId);
        return true;
    }

    function getPriceOfAddingPost() public view returns (uint) {
        return oraclize_getPrice("IPFS");
    }
}

The frontend reads events using Web3 and builds a list of all blog posts for a given author.

The content of the article, in Markdown, is also stored on IPFS. This allows keeping the fee for adding new blog posts fixed. We use a range of public IPFS gateways, starting with our own, which makes sense especially when you upload files from the same node. You can also pin files programmatically if you decide to run your gateway in write mode (by default it’s read-only). We also allow the user to specify their own gateway, so a user who has installed IPFS Companion can take advantage of running their own node.

BlogEvents.getPastEvents("PostAdded", { fromBlock: 0, filter: { author } }).then(events => {
  this.setState({ addedPosts: events.map(e => e.returnValues) });
});

// ...

getPost(gatewayIndex = 0) {
  this.fetchPostFromIpfs(gateways[gatewayIndex]).catch(() => this.retry(gatewayIndex));
}
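
For completeness, a sketch of how fetchPostFromIpfs and retry could look inside such a React component; the gateway list and state shape here are assumptions for illustration, not the exact code from the repository:

import React from "react";

// Hypothetical gateway list; the first entry stands in for our own gateway.
const gateways = [
  "https://my-own-gateway.example.com/ipfs/",
  "https://ipfs.io/ipfs/",
];

class Post extends React.Component {
  getPost(gatewayIndex = 0) {
    this.fetchPostFromIpfs(gateways[gatewayIndex]).catch(() => this.retry(gatewayIndex));
  }

  fetchPostFromIpfs(gateway) {
    return fetch(`${gateway}${this.props.hash}`)
      .then(response => response.json())
      .then(post => this.setState({ post }));
  }

  retry(gatewayIndex) {
    // Fall through the list of gateways until one responds.
    if (gatewayIndex + 1 < gateways.length) {
      return this.getPost(gatewayIndex + 1);
    }
    this.setState({ error: "Could not fetch the post from any gateway" });
  }
}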

You can find the full codebase of our decentralized blog example on GitHub: tooploox/ipfs-eth-database!

Conclusions

Our little experiment with requesting IPFS data from Ethereum smart contracts let us dive deeper into IPFS performance and build the foundation for further implementation in more production-oriented use cases.

The only place where performance can be an issue is IPNS. IPNS is the naming system for IPFS and allows for mutable URLs. The hash corresponds to the peer id instead of the file or directory content hash. The new IPNS resolver and publisher introduced in version 0.4.14 have mitigated some of the problems. Make sure you have an up-to-date version and run the daemon with the --enable-namesys-pubsub option to benefit from nearly instant IPNS updates.

There were no significant problems with continuously running an IPFS node on Amazon Linux 2 whatsoever.


This article has been originally posted on Tooploox’s blog: Using IPFS with Ethereum for Data Storage

Docker Compose for Node.js and PostgreSQL


Docker is the response to an ongoing problem of differences between the environments in which an application runs, whether those differences are across the machines of the development team, the continuous integration server, or the production environment. Since you are reading this, I assume you are already more or less familiar with the benefits of containerizing applications. Let’s go straight to the Node.js-specific bits.

There is a set of challenges when it comes to dockerizing Node.js applications, especially if you want to use Docker for development as well. I hope this guide will save you a few headaches.

TL;DR: You can find the code for a running example on GitHub.

Dockerfile

As a base image, I am using the node image that runs on Alpine Linux, a lightweight Linux distribution. I want to expose two ports. EXPOSE does not publish any ports; it is just a form of documentation. It is possible to specify ports with Docker Compose later. Port 3000 is the port we use to run our web server, and 9229 is the default port for the Node.js inspector. After we copy the files to the container, we install dependencies.

FROM node:8.10.0-alpine
EXPOSE 3000 9229
COPY . /home/app
WORKDIR /home/app
RUN npm install
CMD ./scripts/start.sh

The executable for the container could be an npm start script, but I prefer to use a shell script instead. It makes it easier to implement more complex build steps which might require executing a different command to start the application in a development or production mode. Moreover, it allows for running additional build steps.

#!/bin/sh

npm run build

if [ "$NODE_ENV" = "production" ]; then
  npm run start
else
  npm run dev
fi

If you want to check and install dependencies on each startup, you can move npm install from Dockerfile to start.sh script.

Docker Compose

I am splitting my Docker Compose configuration into two files. One is the bare minimum to run the application in production or on the continuous integration server: no volume mounting and no .env files. The second one is a development-specific configuration.

# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - postgres
    ports:
      - "3000:3000"
      - "9229:9229"
  postgres:
    image: postgres:9.6.8-alpine
    environment:
      POSTGRES_PASSWORD: postgres

During development, I am interested in sharing code between the container and the host file system, but this should not apply to node_modules. Some packages (e.g., argon2) require additional components that need a compilation step. A package compiled on your machine and copied to the container is unlikely to work. That is why you want to mount an extra volume just for node modules.

The other addition to the development configuration of docker compose is using the .env file. It is a convenient way to manage environment variables on your local machine. That said, you should not keep it in the repository. In production, use environment variables instead.

For more information on how to configure Postgres container go to Docker Hub.

# docker-compose.override.yml
version: "3"
services:
  app:
    env_file: .env
    volumes:
      - .:/home/app/
      - /home/app/node_modules

Docker Compose reads the override file by default unless said otherwise. If you are using Docker Compose on CI then explicitly specify all configuration files that apply.

docker-compose -f docker-compose.yml -f docker-compose.ci.yml up

Npm Scripts and Node Inspector

Npm scripts are specific to your project, but for the reference, those are mine.

{
  ...
  "scripts": {
    "dev": "concurrently -k \"npm run build:watch\" \"npm run start:dev\"",
    "start": "node dist/index.js",
    "start:dev": "nodemon --inspect=0.0.0.0:9229 dist/index.js",
    "build": "tsc",
    "build:watch": "tsc -w"
  }
}

I do not call npm scripts directly from the command line. They are a convenient place to encapsulate complexity and simplify start.sh (and later the other scripts).

The important takeaway is that the inspector should be bound to host 0.0.0.0, which is the public IP of the container, instead of the default localhost. Otherwise, you are not able to access it from your local machine.

.dockerignore

There is a bunch of stuff you could list here that is not needed to run the application in the container. Instead of trying to list all of it, I will distinguish two entries.

node_modules
dist

Ignore node_modules for the reasons I have already explained when covering volumes. dist is just the output directory of our build pipeline. You might not have its counterpart in your project if you write in JavaScript/CommonJS and do not need a build step. These are all simple things, but you would better not miss them.

Conclusion

You may not like this approach, and it is ok (tell me why in the comments). For better or worse there is no single way to do it. Hopefully, this reference gave you a different perspective and helped you to fix this one thing that did not work for you.

I have not touched on deployment and running in production. There are a few ways you can approach it. Some run only the application container in Docker and install the database directly on the host, which makes it harder to lose your data by accident. You could build and push an image to a registry, or push to Dokku if you do not feel like using an image repository. Deployment on its own is a topic for another article.

Hyperledger Fabric: Confidentiality on the Blockchain


There are two contexts for talking about confidentiality on the blockchain. When it comes to public and permissionless blockchains, there are projects like Monero and Zcash where privacy is the outcome of anonymous transactions. Transactions in Ethereum, the biggest smart contract blockchain, can be traced, and the transaction payload is readable.

The openness of Ethereum is excellent from the ideological standpoint, attracts many developers and allows for implementing mechanisms (ICOs, escrows, all sorts of gambling games, collectibles, etc.) in a transparent manner that wasn’t possible before. The flip side is that Ethereum doesn’t work for many businesses that require a high level of confidentiality or have to comply with data privacy regulations.

Private and permissioned blockchains like Hyperledger Fabric try to address these business requirements by allowing the organization to manage who can participate in the network and what data they can access.

Hyperledger Fabric

Hyperledger Fabric is a modular framework for building hierarchical and permissioned blockchain networks capable of running chaincode. Chaincode in Hyperledger Fabric is an installed and initialized program that runs on the blockchain, the equivalent of a smart contract in Ethereum.

Applications running on Hyperledger Fabric are upgradeable and since version 1.2 they can also save and read from the private data storage. The other feature that allows programmers to secure data is an application-level solution in the form of attribute-based access control (ABAC). Private data and ABAC together give enough flexibility to model a non-trivial business process without revealing confidential information.

The ledger in Hyperledger Fabric consists of the current world state (a database) and the transaction log (the blockchain). Assets are represented by a collection of key-value pairs, and changes are recorded as transactions on a channel ledger. Assets can be in binary or JSON format. The world state is maintained so that reading data doesn’t involve traversing the entire blockchain; each peer can recreate the world state from the transaction log.

Chaincode

To understand how to incorporate private data and ABAC into your smart contract, let’s implement a simple use case that involves storing medicine prescriptions:

  • The doctor can create a new prescription for the patient
  • The doctor can see the prescription that he issued
  • The doctor cannot see the prescription that he didn’t issue
  • The patient can see his prescription
  • The patient cannot see the prescription that doesn’t belong to him
  • The patient can reveal his prescription to the pharmacy
  • The pharmacy can see only prescriptions that the pharmacy filled

Access to the private data is configurable at the organization level. Doctors and patients access the ledger using peers that are members of the first organization (Org1). Doctors can issue new prescriptions, and patients can access them. These two rules have to be programmed in the chaincode, as the private collections config doesn’t allow for specifying such action-based rules. The pharmacies (Org2) maintain their own set of prescriptions for patients. This is the minimal configuration of private collections that meets those requirements:

[
  {
    "name": "pharmacyPrescriptions",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3,
    "blockToLive": 0
  },
  {
    "name": "healthcarePrescriptions",
    "policy": "OR('Org1MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 3,
    "blockToLive": 0
  }
]

We will have to specify a path to that file later when we instantiate the chaincode.

peer chaincode instantiate -C mychannel -n mycc -v 1.0 -c '{"Args":[""]}' -P "AND('Org1MSP.peer','Org2MSP.peer')" --collections-config /path/to/collections_config.json

I use hyperledger/fabric-samples/first-network as the foundation for my network setup.

Start by generating the required certificates, the genesis block, and a docker compose file for a configuration with two Fabric CA containers, one per organization. Fabric CA is a certificate authority for Hyperledger Fabric.

Specify newly generated docker compose file and bring up the network.

./byfn.sh generate

./byfn.sh up -f docker-compose-e2e.yaml

On the application level, we can restrict data access using custom attributes. Certificates issued by the Fabric CA can contain custom attributes that we will use for authorization.

fabric_ca_client.register({
  enrollmentID: "user1",
  affiliation: "org1.department1",
  role: "client",
  attrs: [{ name: "role", value: "PAT0" }]
}, admin_user);

For more information on how to issue the certificate with Fabric CA, check out hyperledger/fabric-samples/fabcar example.

The chaincode is written in Go and uses the cid library, which might not be available in your container. Make sure to have fabric available in your $GOPATH and install two additional dependencies.

go get -u github.com/golang/protobuf
go get -u github.com/pkg/errors

Let’s start with defining the prescription struct, which we will use for storing prescription information. We can use the same prescription type for both pharmacyPrescriptions and healthcarePrescriptions collections.

type Prescription struct {
    Patient  string `json:"patient"`
    Doctor   string `json:"doctor"`
    Content  string `json:"content"`
    Expires  string `json:"expires"`
    FilledBy string `json:"filled_by"`
}

Our smart contract has to handle the invocation of four functions. The best practice is to have a separate method for state initialization; let’s add one prescription to the healthcarePrescriptions private data for testing purposes. In main, we only start the chaincode.

type SmartContract struct {
}

func (s *SmartContract) Init(stub shim.ChaincodeStubInterface) peer.Response {
    return shim.Success(nil)
}

func (s *SmartContract) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
    fn, args := stub.GetFunctionAndParameters()

    if fn == "initLedger" {
        return s.initLedger(stub)
    } else if fn == "addPrescription" {
        return s.addPrescription(stub, args)
    } else if fn == "getPrescription" {
        return s.getPrescription(stub, args)
    } else if fn == "transferPrescription" {
        return s.transferPrescription(stub, args)
    }

    return shim.Error("Invalid function name.")
}

func (s *SmartContract) initLedger(stub shim.ChaincodeStubInterface) peer.Response {
    prescriptions := []Prescription{
        Prescription{Patient: "PAT0", Doctor: "DOC0", Expires: "2018-07-17 14:01:52"},
    }

    for i, prescription := range prescriptions {
        prescriptionAsBytes, _ := json.Marshal(prescription)
        stub.PutPrivateData("healthcarePrescriptions", "PRE"+strconv.Itoa(i), prescriptionAsBytes)
    }

    return shim.Success(nil)
}

func main() {
    err := shim.Start(new(SmartContract))
    if err != nil {
        fmt.Printf("Error starting SmartContract chaincode: %s", err)
    }
}

Prescriptions stored in the healthcarePrescriptions collection should be readable only by the patient owning the prescription and the doctor who issued it. We know who is who, as each user of the system identifies himself with a certificate issued by the Fabric CA that carries a role attribute. We respond with a “Prescription not found” error also when the user is unauthorized to see the prescription.

Pharmacies store only the prescriptions that they filled, in a separate private collection. We don’t check their specific identifier.

func (s *SmartContract) getPrescription(stub shim.ChaincodeStubInterface, args []string) peer.Response {
    // Check arguments
    if len(args) != 1 {
        return shim.Error("Incorrect number of arguments. Expecting 1")
    }

    // Get prescription
    role, err := s.getRole(stub)
    if err != nil {
        return shim.Error(err.Error())
    }

    key := "PRE" + args[0]
    var prescriptionBytes []byte

    if strings.HasPrefix(role, "PAT") || strings.HasPrefix(role, "DOC") {
        // When patient or doctor
        prescriptionBytes, err = stub.GetPrivateData("healthcarePrescriptions", key)
        if prescriptionBytes != nil {
            prescription := Prescription{}
            json.Unmarshal(prescriptionBytes, &prescription)
            if prescription.Patient == role || prescription.Doctor == role {
                return shim.Success(prescriptionBytes)
            }
        }
    } else if strings.HasPrefix(role, "PHR") {
        // When pharmacy
        prescriptionBytes, err = stub.GetPrivateData("pharmacyPrescriptions", key)
        if prescriptionBytes != nil {
            return shim.Success(prescriptionBytes)
        }
    } else {
        // When other
        return shim.Error("Only patients, doctors and pharmacies can access prescriptions")
    }

    return shim.Error("Prescription not found")
}

func (s *SmartContract) getRole(stub shim.ChaincodeStubInterface) (string, error) {
    role, ok, err := cid.GetAttributeValue(stub, "role")
    if err != nil {
        return "", err
    }
    if !ok {
        return "", errors.New("role attribute is missing")
    }
    return role, nil
}

Doctors can add prescriptions by specifying the patient’s identifier. We use the doctor’s role attribute to reference him in the Prescription.

The last feature to implement is allowing the patient to transfer the prescription to the pharmacy upon filling it. The patient can use the chaincode that writes to the pharmacyPrescriptions private collection.

func (s *SmartContract) transferPrescription(stub shim.ChaincodeStubInterface, args []string) peer.Response {
    // Check arguments
    if len(args) != 2 {
        return shim.Error("Incorrect number of arguments. Expecting 2")
    }

    role, err := s.getRole(stub)
    if err != nil {
        return shim.Error(err.Error())
    }

    if !strings.HasPrefix(role, "PAT") {
        return shim.Error("Only patients can transfer prescriptions")
    }

    // Get prescription
    key := "PRE" + args[0]
    prescriptionBytes, err := stub.GetPrivateData("healthcarePrescriptions", key)
    if err != nil {
        return shim.Error(err.Error())
    }
    if prescriptionBytes == nil {
        return shim.Error("Prescription not found")
    }

    prescription := Prescription{}
    json.Unmarshal(prescriptionBytes, &prescription)

    // Check permissions
    if prescription.Patient != role {
        return shim.Error("Prescription not found")
    }

    // Set FilledBy
    prescription.FilledBy = args[1]
    prescriptionBytes, _ = json.Marshal(prescription)

    err = stub.PutPrivateData("healthcarePrescriptions", key, prescriptionBytes)
    if err != nil {
        return shim.Error(err.Error())
    }

    // Save pharmacy prescription
    err = stub.PutPrivateData("pharmacyPrescriptions", key, prescriptionBytes)
    if err != nil {
        return shim.Error(err.Error())
    }

    return shim.Success(nil)
}

We could consider changing this implementation so it involves the pharmacy allowing the patient to transfer the prescriptions. It adds very little to how we work with ABAC or private data, so I decided to skip it.

You can find the complete implementation here: prescriptions.go.

Client

To interact with Hyperledger Fabric, we can use the fabric-client and fabric-ca-client SDKs. To test the implementation, you can start with the scripts from the hyperledger/fabric-samples/fabcar example. Some modifications are needed as our network uses TLS encryption, and fabcar doesn’t.

In query.js change

var peer = fabric_client.newPeer('grpc://localhost:7051');

to

const serverCert = fs.readFileSync('./crypto-config/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem', 'utf8');

const peer = fabric_client.newPeer('grpcs://localhost:7051', {
  pem: serverCert,
  'ssl-target-name-override': 'peer0.org1.example.com'
});

In invoke.js change

var peer = fabric_client.newPeer('grpc://localhost:7051');
var order = fabric_client.newOrderer('grpc://localhost:7050');

to

const serverCert = fs.readFileSync('../crypto-config/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem', 'utf8');
const ordererCert = fs.readFileSync('../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem', 'utf8');

and change

let event_hub = fabric_client.newEventHub();
event_hub.setPeerAddr('grpc://localhost:7053');

to

let event_hub = channel.newChannelEventHub(peer);

and change

console.log('The transaction has been committed on peer ' + event_hub._ep._endpoint.addr);

to

console.log('The transaction has been committed on peer ' + event_hub.getPeerAddr());

Now you should be able to query the chaincode and make transactions.
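
As a quick smoke test, a query against our chaincode could look like the sketch below; it assumes the fabric-client setup (fabric_client, channel, peer) from the fabcar query.js sample and the chaincode instantiated as mycc, and the argument is the prescription index, so the chaincode builds the key PRE0:

// Hypothetical query request; the names follow the fabcar sample and the
// instantiate command used earlier in this post.
const request = {
  chaincodeId: "mycc",
  fcn: "getPrescription",
  args: ["0"]
};

channel.queryByChaincode(request).then((responses) => {
  if (responses && responses.length === 1) {
    if (responses[0] instanceof Error) {
      console.error("error from query:", responses[0]);
    } else {
      console.log("Response is:", responses[0].toString());
    }
  }
});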

Hyperledger Composer

If you feel like Hyperledger Fabric is just bare bones, you might want to look at Hyperledger Composer. Hyperledger Composer is a set of tools that provide a higher-level abstraction over Hyperledger Fabric. It allows you to model a business network that consists of assets and participants, runs JavaScript to execute queries and transactions, and provides an easy-to-use REST API with different authorization schemes.

Hyperledger Composer is built on top of Hyperledger Fabric v1.1 and doesn’t support the newest features. The lack of support for private data is a limiting factor for applying Hyperledger Composer where a lack of confidentiality can be a problem. Nonetheless, the developer experience of using Hyperledger Composer is much better than setting up and using Hyperledger Fabric directly. The project is under active development, and I’m looking forward to trying the new version that comes with a driver for Hyperledger Fabric v1.2.

Conclusion

At Tooploox we incorporate blockchain into applications taking into consideration its long-term impact on the product. Currently, Hyperledger Fabric is the best fit for implementations where there are reasons other than transparency for using blockchain, like compliance, building trust between business parties or streamlining processes.

On the other hand, when you would like to provide your users with the ability to easily participate in the network, collaborate and trade, you will be better off with Ethereum. There are multiple standards driven by the community so we can make your product compatible with the ever-growing ecosystem of decentralized applications.


This article has been originally posted on Tooploox’s blog: Hyperledger Fabric: Confidentiality on the Blockchain


Curated list of podcasts for software developers


For me, podcasts are a great alternative to listening to music when I exercise, commute, do chores, or cook. Over the last few years, I’ve built up a decent-sized list of almost 100 subscribed podcasts but happen to listen regularly to only a few. This is my curated list of podcasts (not just about coding) which I think are very valuable for software developers.

One more thing. I tend to keep this blog highly focused on purely technical content that involves exploring new fields and technical challenges, mostly connected to JavaScript and blockchain. I started to consider deviating a little from this trend for two reasons. First, there’s more to software development than only tackling programming challenges. Second, I was already writing a lot during my working hours at Tooploox (check out the Tooploox Blog) and for an upcoming ebook about tips and tricks for Solidity/Ethereum developers. For a little while, those additional activities suppressed my need for publishing more technical content on this blog.

Software Engineering

  • Software Engineering Daily - if you have time to listen to only one podcast, that’s the one. Jeff Meyerson puts a lot of effort into understanding the tech and interviewing guests to bring listeners closer to one of many topics ranging from cloud, data science, and open source to blockchain and software development. Daily.

JavaScript

  • JS Party - thanks to the wide variety of co-hosts with different backgrounds, it’s exciting to listen to. It focuses on both news from JavaScript land and specific technologies. Part of the Changelog network.
  • JavaScript Jabber - co-hosted by Charles Max Wood, Aimee Knight, and AJ ONeal. The podcast features a wide variety of topics and thoughts on a given technology. From time to time, episodes have a different format called MJS (My JavaScript Story), where invited engineers share the story of how they got into JavaScript.
  • Shop Talk Show - Dave Rupert and Chris Coyier discuss front-end, WordPress, small e-commerce, and Q&As. WordPress is not my thing, but I still enjoy listening.
  • Syntax - Wes Bos and Scott Tolinski talk about the front-end stack in a very approachable way. If you are a junior developer, listening to Syntax should be high on your priority list.
  • Toolsday - a beginner-friendly podcast you cannot mistake for anything else thanks to co-host Una Kravets, who sings a song about the tool covered in each episode.

Blockchain

  • Unchained - an unbiased podcast about blockchain and distributed ledger technology hosted by Laura Shin.
  • The Bad Crypto Podcast - an entertaining podcast about everything ridiculous in cryptocurrencies by Joel Comm and Travis Wright.

Security

  • Risky Business - weekly security news that goes in-depth covering hacks, breaches, and the best practices in the InfoSec industry.

Career

  • Developer Tea - Jonathan Cutrell records quite a different podcast for developers, focusing on psychology, productivity, goal setting, and other topics loosely related to software development. Episodes are 10 to 20 minutes long to fit inside your tea break.
  • Entreprogrammers Podcast - Josh Earl, John Sonmez, Derick Bailey, and Charles Max Wood meet to talk about their goals, challenges, and businesses. This podcast was my inspiration for getting involved in a mastermind group.

Economy and finance

  • Planet Money and The Indicator - both NPR productions have this radio show vibe and touch on business, work, and the economy, often through covering a particular story and interviews.
  • Freakonomics - Stephen J. Dubner explores the hidden side of non-obvious problems using statistics and economics.
  • The Investing Podcast - Preston Pysh and Stig Brodersen comment on the moves of the most prominent billionaires. If listening to what Warren Buffett, Jeff Bezos, or Jack Ma have to say is your thing, you will enjoy this podcast.
  • Optimal Finance Daily - Dan Weinberg saves me a lot of time by reading hand-picked articles on personal development, minimalism, finance, health, and business.

App

To listen to podcasts, I’m using Pocket Casts app which is available on both iOS and Android. I had no problems with it and progress sync works great. Messed up sync was what made me stop using iTunes.

Summary

Even though this is a relatively short list and you could comfortably listen at 1.5x or 2x speed, it might take a lot of time to stay up-to-date with all this content. Be selective and try incorporating learning from podcasts into activities that don’t require your full attention, like exercising, commuting, or cooking.

One of my 2017 favorites that sadly didn’t make it onto this list is Partially Derivative. It’s a great show about machine learning, but unfortunately, it has been discontinued.

Feel free to let me know if your favorite podcast didn’t make it onto this list. Maybe I’m missing out on something great!

Integration tests with web3, Ganache CLI and Jest


A well-written set of tests plays a crucial role in delivering reliable software. A good test suite ensures that the application works as intended and significantly reduces the number of bugs. That’s easier said than done. One of the steps to achieve this goal is adhering to the boundaries of different test levels. In this article, I want to focus on the integration testing of decentralized applications that rely on web3. For integration testing, I’m willing to sacrifice the speed of execution to gain the confidence that different components of the app work together seamlessly.

Last time, I presented a more E2E solution for mocking web3 during black-box tests where I injected web3 into the window object. This technique required a connection to an Ethereum node, a locally running Ganache in that case. It’s an elegant solution when you care about the ability to automate testing also against a test network, Ropsten for example. It’s not a silver bullet though.

Recently I was working on smart contract deployment from the browser. Running robust E2E test cases for the components responsible for that functionality is too slow. Luckily, Ganache CLI also provides programmatic access to a provider that automatically mines subsequent blocks. We can use it to spin up a Ganache instance quickly and prevent sharing blockchain state between test reruns.

Ganache CLI provider

To swap the provider for selected test suites, we use Jest’s mocking feature. Let’s create a separate module for the web3 provider getter.

// web3Provider.ts
import Web3 from "web3";

export function provider() {
  return Web3.givenProvider;
}

We can then use this module to instantiate a web3.

// web3.ts
import Web3 from "web3";
import { provider } from "./web3Provider";

export const web3 = new Web3(provider());

The implementation of the provider we use for testing is slightly different.

// someModule.spec.ts
jest.mock("./web3Provider", () => {
  function provider() {
    return require("ganache-cli").provider();
  }
  return { provider };
});

In the test cases where we mocked the provider, all transactions are going to run on Ganache. This way, we have a fresh blockchain instance each time we test the app.
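Building on the mock above, a minimal smoke test for this setup could look like the sketch below. The module paths mirror the files from this section, and the spec only checks that web3 can reach the in-memory provider.

// someModule.spec.ts — a minimal sketch of the mocked setup
jest.mock("./web3Provider", () => ({
  provider: () => require("ganache-cli").provider(),
}));

import { web3 } from "./web3";

it("talks to the in-memory ganache provider", async () => {
  // ganache-cli exposes a set of unlocked test accounts out of the box
  const accounts = await web3.eth.getAccounts();
  expect(accounts.length).toBeGreaterThan(0);
});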

Skipping slow tests

Although automating the Ganache start and making it part of the test case is an improvement over having it running as a separate application, it still takes a few seconds. This is more than I feel comfortable with when running it alongside my unit tests. I came up with a workaround that mitigates this issue.

We wrap slow-running tests with slowDescribe as opposed to using Jest’s describe directly.

describe("BettorAgreement",()=>{slowDescribe("deploy",()=>{it("deploys a contract",async()=>{const[account]=awaitweb3.eth.getAccounts();constcontract=awaitdeploy("1000000",{from:account,gas:3000000});expect(contract.options.address).toContain("0x");});});});

If the ALLOW_SLOW environment variable equals false, we skip the given set of tests entirely.

function allowSlow() {
  return `${process.env.ALLOW_SLOW}`.toLowerCase() !== "false";
}

export function slowDescribe(msg: string, handler: jest.EmptyFunction) {
  if (allowSlow()) {
    describe(msg, handler);
  } else {
    describe.skip(msg, handler);
  }
}

I’m somewhat against modifying the “production” code just for the sake of making some hacks in tests possible. In a month, no one will remember what the intention behind such an implementation was. Nonetheless, you could check the result of allowSlow inside the provider and bet on UglifyJS to remove the dead code from the bundle.

// web3Provider.ts
import Web3 from "web3";
// allowSlow would have to be imported from the module where it is defined

export function provider() {
  if (allowSlow()) return require("ganache-cli").provider();
  return Web3.givenProvider;
}

Conclusion

The dynamic nature of JavaScript gives us the flexibility to mock and patch both test code and the actual implementation. Remember that your test suite is also code you have to maintain, and overengineering comes at a cost, so try to keep things simple.

Using Dokku with Docker, Let's Encrypt HTTPS, and redirects


This post is a step by step guide to configure Dokku to host multiple applications, supporting domains, subdomains, redirects, and secure connection via HTTPS using free certificates issued by Let’s Encrypt.

After completing this tutorial, you will have set up a server that can serve multiple dockerized applications for as low as $5 a month. I’m using a very similar setup to run this blog and my side projects.

Dokku is a self-hosted platform as a service that tries to mimic the way Heroku works and uses Docker to isolate, manage, and run multiple applications. It’s designed to run on a fresh VM installation and reduces configuring nginx to a few simple Dokku commands. You can deploy your application by pushing to the git repository that Dokku maintains for each of your applications, just like Heroku does.

Make sure you follow the steps in the right order.

Setup Dokku

Thanks to one-click Dokku droplet installation, DigitalOcean is the easiest way I know of to get started with Dokku. Use this referral link to get $10 in credit for signing up on DigitalOcean. You can use your credits to follow this guide and run Dokku on the cheapest instance for two months.

You can also install Dokku yourself. Dokku provides you with a shell script that will take 5-10 minutes to complete. Head to Dokku documentation for the information on the latest version.

wget https://raw.githubusercontent.com/dokku/dokku/v0.12.12/bootstrap.sh;
sudo DOKKU_TAG=v0.12.12 bash bootstrap.sh

Once you have Dokku installed, go to your server IP address in your browser.

open http://<SERVER_IP>

Dokku serves the web installer on the default port 80. Paste your public key and remember to select the use virtualhost naming checkbox.

Execute the following command on your local machine to print your public key.

cat ~/.ssh/id_rsa.pub

Create and deploy the application

SSH into your server

ssh root@<SERVER_IP>

Create a new application

dokku apps:create <APP_NAME>

Switch back to your local machine, enter your git repository and add a dokku remote.

git remote add dokku dokku@<SERVER_IP>:<APP_NAME>

You can now push to dokku repository to deploy the application.

git push dokku master

Dokku derives some configuration from your Dockerfile, like port mappings based on the EXPOSE instruction.

Domain configuration

Dokku supports a multi-domain configuration. Go to your DNS configuration and point your domain to the server IP address using an A record.

IN A <SERVER_IP>

Optionally you can set a CNAME record for the www subdomain.

www IN CNAME <DOMAIN_NAME>.

Now, you have to tell Dokku what domain the application should handle.

dokku domains:add <APP_NAME> <DOMAIN_NAME>
dokku domains:add <APP_NAME> www.<DOMAIN_NAME>

You can check which application handles what domains.

dokku domains:report

Remove any unnecessary domain configuration that is a leftover after the app creation.

dokku domains:remove <APP_NAME> <APP_NAME>.dokku-s-1vcpu-1gb-fra1-01

If you decided to configure the www subdomain, your application is accessible under both www.<DOMAIN_NAME> and <DOMAIN_NAME>. It would be better to settle on one and set up a 301 redirect. We will wait with setting up redirects until we have a valid SSL certificate for both domains.

TLS/SSL certificates

I’m going to use the Let’s Encrypt CA to obtain free TLS certificates. Dokku doesn’t support Let’s Encrypt by default, but there’s a plugin for that which we can install.

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

It’s important to expose the application to the host on the port 80.

dokku proxy:ports-add <APP_NAME> http:80:<EXPOSED_PORT>

Let’s Encrypt requires you to specify an email address. You will receive notifications before the certificates expire.

dokku config:set --no-restart <APP_NAME> DOKKU_LETSENCRYPT_EMAIL=<EMAIL>

Request certificates and complete the ACME protocol challenge automatically.

dokku letsencrypt <APP_NAME>

In case of any problems with domains, make sure the DNS records have propagated correctly. It might be the reason why the Let’s Encrypt challenge is failing.

Check the port configuration for your application. If you successfully obtained the certificates, you can see that the host’s port 443 is mapped to the same container port as port 80.

dokku proxy:ports <APP_NAME>

Redirects

Although Dokku does not support redirects out of the box, there’s another plugin to remedy that.

dokku plugin:install https://github.com/dokku/dokku-redirect.git

Now you can configure 301 redirect from www.<DOMAIN_NAME> to <DOMAIN_NAME>.

dokku redirect:set <APP_NAME> www.<DOMAIN_NAME> <DOMAIN_NAME>

Conclusion

As long as you don’t start modifying nginx files without knowing what you are doing, Dokku is a reliable platform to host your applications. Dokku combines the ease of use of Heroku with the affordable pricing of VPS providers like DigitalOcean or Linode.

It’s still possible to run Dokku behind a custom reverse proxy by excluding dokku-generated files from the nginx configuration. Despite missing out on a few features, going with Dokku is still a good choice if your goal is supporting git-based deployments and having a way to expose the app on a selected port.

If you find this article helpful use this referral link to get $10 in credit for signing up on DigitalOcean.

Solve code sharing and setup project with Lerna and monorepo


Code sharing is easy but doing it correctly is challenging. There are multiple ways you can do it, and your use case dictates which approach is right for you. The low-hanging fruit is to just copy-paste what you need from StackOverflow, GitHub, or your previous project. Despite the apparent benefit of being the most natural approach, copying and pasting wastes your time in the long run by causing the maintenance burden of syncing changes manually.

Don’t get me wrong, we all copy and paste code from time to time, and I’m not even disregarding this naive approach, as sometimes you want to move fast and worry later. However, what about when you don’t?

Why monorepo

When optimizing for long-term maintenance, we have a few choices. I like to bet on monorepo. A monolithic repository is a simple idea. You organize the code of all your services in a single repository. It has a few advantages over using a separate repository for each service.

Reusing code is easy. Once you abstract a coherent unit of code into a module, you can then import it from anywhere.

Continuous integration runs tests against the entire monorepo, so once a PR is merged, you bump the version of all sub-services and there is no doubt about which versions are compatible with each other. Version 1.2 of service A is always compatible with version 1.2 of service B. This is why complex projects with multiple dependencies often use a monorepo as well (Babel, React, Angular, Jest). For the very same reason, large-scale refactorings are also feasible.

You maintain one third-party dependency tree. It’s too easy, especially with all the NPM goodies, to end up with two different versions of the same library, and having to sync them across different repositories manually gives me a headache. Having one main package-lock.json is a real time-saver.

A monorepo forces collaboration; it encourages a consistent coding style by having a single config for your linter, code formatter, module bundler, and so on.

Setup step by step

Install Lerna and initialize the project.

npm i -D lerna
npx lerna init

I’m going to use Create React App to scaffold two packages: alice and bob.

cd packages
npx create-react-app alice
npx create-react-app bob

Let’s create one more package called common in which we can place modules shared across alice and bob. Call lerna create from the root of the repository.

npx lerna create @yourproject/common -y

Using scoped names for your project packages is a clear way to distinguish them from publicly available NPM packages.

We can make sure we have the same version of React and ReactDOM in each package by calling lerna add from the root of the repository.

npx lerna add react@^16.6.3
npx lerna add react-dom@^16.6.3

Let’s create a simple component in the common package that we can reuse.

// packages/common/Heading.jsx
import React from "react";

function Heading({ level = "1", title }) {
  return React.createElement(`h${level}`, {}, title);
}

export default Heading;

Now call lerna add from the root of the repository to link the common packages to alice and bob.

npx lerna add @yourproject/common

Currently, each package maintains its own node_modules with all the dependencies it needs. Hoisting dependencies to the root of the repository is possible. Let’s clean the currently installed dependencies and try it.

npx lerna clean -y && npx lerna bootstrap --hoist

Now all packages are installed in the root of the repository, and node_modules local to packages contain only symlinks. We can now import modules from the common package.

// packages/alice/src/App.js
import React, { Component } from "react";
import Heading from "@yourproject/common/Heading";

class App extends Component {
  render() {
    return <Heading title="Hello, World!" />;
  }
}

export default App;

Let’s now run the alice service.

npx lerna run start --scope=alice

You can see how changes in the common package are immediately reflected in the alice app which makes for an excellent developer experience.

Deployment

Deployment side of things gets tricky with monorepo. You might be required to provide more guidance for your deployment tool or create a custom script to be able to deploy a single service to Heroku as you want to separate it from the rest of the project. I solved this using separate Dockerfiles, so it comes down to specifying a different path when running docker build.

This is an example of a Dockerfile that maximizes Docker layer caching (don’t get scared by the long dependencies section). After the build, files are copied to a light nginx image as we don’t need Node.js anymore.

FROM node:10.13-alpine as builder

# Environment

WORKDIR /home/app
ENV NODE_ENV=production

# Dependencies

COPY package.json /home/app/
COPY package-lock.json /home/app/
COPY lerna.json /home/app/

COPY packages/alice/package.json /home/app/packages/alice/
COPY packages/alice/package-lock.json /home/app/packages/alice/

COPY packages/common/package.json /home/app/packages/common/
COPY packages/common/package-lock.json /home/app/packages/common/

RUN npm ci --ignore-scripts --production --no-optional
RUN npx lerna bootstrap --hoist --ignore-scripts -- --production --no-optional

# Build

COPY . /home/app/
RUN cd packages/alice && npm run build

# Serve

FROM nginx:1.15-alpine
COPY --from=builder /home/app/packages/alice/build /usr/share/nginx/html

You can now build the image and run the container. Execute the following commands from the root of the repository.

docker build -t yourproject/alice -f ./packages/alice/Dockerfile .
docker run --rm -p 8080:80 --name yourproject-alice yourproject/alice

Code is available on GitHub: MichalZalecki/lerna-monorepo-example.

An elegant solution for handling errors in Express


Express is a microframework that, according to the 2018 Node.js User Survey Report, is used by 4 in 5 back-end and full-stack Node.js developers. Thanks to its simplicity, the ever-growing range of available middleware, and an active community, the Express user base is still growing.

Arguably, the simplicity of Express is its most significant advantage, but it comes at the cost of a bare-bones API for handling requests that leaves the rest to the developer. In general, it’s fantastic! We’re developers, and we love to roll out solutions that we tailor to meet our requirements.

Two everyday tasks for every RESTful application are handling errors and payload validation. I like to keep my controllers lean, so for me, middleware is the way to deal with framework-ish features like these two.

Code is available on GitHub: MichalZalecki/express-errors-handling.

Payload Validation

Hapi.js is another framework for building web services in Node.js. Hapi.js extracts input validation out of controllers into an intermediate layer between the router and route handlers. Hapi is very modular, and its schema validation component is a separate library called Joi. We are going to use Joi with our Express application through Celebrate, an Express middleware for Joi.

import express from "express";
import bodyParser from "body-parser";
import { Joi, celebrate, errors } from "celebrate";

const PORT = process.env.PORT || 3000;
const app = express();
app.use(bodyParser.json());

type CreateSessionBody = { email: string, password: string };

const createSessionBodySchema = Joi.object({
  email: Joi.string().email().required(),
  password: Joi.string().required(),
}).required();

app.post("/session", celebrate({ body: createSessionBodySchema }), (req, res) => {
  const { email, password } = req.body;
  const { token } = login(email, password);
  res.status(201).send({ token });
});

app.use(errors());

app.listen(PORT, () => {
  console.log(`Server listens on port: ${PORT}`);
});

Using Celebrate with Joi is a quick and easy way to validate your schema and respond with a well-formatted error message. It also helps to keep your controllers lean, so you can assume that data from the request payload is well formatted.

This is how a sample error response may look:

{"statusCode":400,"error":"Bad Request","message":"child \"email\" fails because [\"email\" is required]","validation":{"source":"body","keys":["email"]}}

Runtime Errors

Some errors are accidental, like a database constraint violation, a lost connection, or a third-party service timeout, and some are expected under precisely defined conditions, like token expiration or input data that is incorrect beyond its format (trying to register with the same email twice), etc.

I aim to unify error handling and make it possible to return errors early to avoid deeply nested code and minimize the effort needed to handle exceptions. I like to wrap my route handlers with enhanceHandler, a higher-order function that formats the output and sets the correct status based on the value returned from the actual route handler.

If you use TypeScript, start by defining the type for the route handler, which allows specifying the types for the params and the body of the request.

// types.d.ts
import express from "express";
import Boom from "boom";

interface Request<TParams, TBody> extends express.Request {
  params: TParams;
  body: TBody;
}

declare global {
  type RouteHandler<TParams, TBody> = (
    req: Request<TParams, TBody>,
    res: express.Response,
  ) =>
    | void
    | express.Response
    | Boom<any>
    | Promise<void | express.Response | Boom<any>>;
}

Depending on the middleware you use it’s possible to extend Express request and add additional properties.
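For example, if an authentication middleware attaches a user object to the request, a minimal declaration-merging sketch could look like this; the user property and its shape are made up for illustration:

// types.d.ts (continued)
declare global {
  namespace Express {
    interface Request {
      // hypothetical property set by an authentication middleware
      user?: { id: string; email: string };
    }
  }
}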

The essential part of enhanceHandler is utilizing the Boom library. Boom is a set of factory functions for errors that correspond to HTTP errors. Boom, like Joi, is a library made for use with Hapi.js but doesn’t depend on it.

// enhanceHandler.ts
import express from "express";
import Boom from "boom";
import isNil from "lodash/isNil";

function formatBoomPayload(error: Boom<any>) {
  return {
    ...error.output.payload,
    ...(isNil(error.data) ? {} : { data: error.data }),
  };
}

export function enhanceHandler<T, P>(handler: RouteHandler<T, P>) {
  return async (req: express.Request, res: express.Response, next: express.NextFunction) => {
    try {
      const result = await handler(req, res);
      if (result instanceof Error && Boom.isBoom(result)) {
        res.status(result.output.statusCode).send(formatBoomPayload(result));
      }
    } catch (error) {
      if (process.env.NODE_ENV !== "production" && (error.stack || error.message)) {
        res.status(500).send(error.stack || error.message);
      } else {
        res.status(500).send(Boom.internal().output.payload);
      }
    }
    next();
  };
}

To better understand each case that is handled by enhanceHandler, read the tests on GitHub.

To use enhanceHandler, just pass an actual route handler as a parameter, and you can now return Boom errors from your controllers.

type CreateSessionBody = { email: string, password: string };

const createSession: RouteHandler<{}, CreateSessionBody> = (req, res) => {
  const { email, password } = req.body;
  if (password !== "password") {
    return Boom.unauthorized("Invalid password");
  }
  res.status(201).send({ email, password });
};

app.post("/session", enhanceHandler(createSession));

Wrap Up

Whether you like this approach or not and consider it as elegant as I do is a matter of preference. What matters most is flexibility, so adjust it to your liking. I hope you at least enjoy this poem:

Express is awesome, but Hapi is too.
Do not add more code out of the blue.
Combine the best tools and code just the glue.
You can buy me a coffee, I like a cold brew.
