
Why I use double quotes in JavaScript


Coding style is often a topic of fierce debate, mostly unnecessarily. What matters is being consistent throughout the project, or better, your entire codebase. Tools like JSHint, JSCS and ESLint have contributed to and popularized the advantages that come from keeping code style consistent. I'm used to the airbnb/javascript style guide with a couple of exceptions, and this post justifies my decision to go for double quotes in our entire JavaScript codebase.

Cons

Let's start with the cons of using double quotes that I've heard from people.

I have to use \ to escape HTML attributes

Here's the example people often use to back up their opinion:

// double quotes
element.innerHTML = "<img src=\"cat.gif\" />";

// single quotes
element.innerHTML = '<img src="cat.gif" />';

But let's be honest. Why do you need to hardcode any HTML tag as a string? Would you really accept using innerHTML as a solution for mutating the DOM while doing a code review? I wouldn't. Nevertheless, I agree it's additional work and a thing to remember.

A better example would be what you do when writing acceptance tests. The sad thing is that many developers, or rather companies, tend to leave this task to testers, so such examples don't gain the same popularity as jQuery-style hacks.

I.amOnPage("/login");
I.fillField("[name=\"email\"]", "text@example.com");
// but this also works
I.fillField("[name=password]", "123456");
I.click("Login");

I have to press an additional key [it slows me down]

Really? The same argument is used for dropping semicolons at the end of the line. If you claim typing speed is your main efficiency bottleneck there are two possible reasons: you're way better than me, or you don't know what programming is about. Personally, I spend more time thinking about what I'm going to write and reading other people's code.

It's the more popular convention to use single quotes in JavaScript

Agreed. A whole 13% more.

Pros

It’s closer to JSON

I find this especially useful when writing unit tests and I have to mock some data. I can just copy and paste JSON into my code without ESLint yelling at me.

Emmet uses double quotes for JSX as well

I prefer to use JSX when working with React, and not only there. One of the advantages of JSX over HyperScript is that it works with Emmet, which is an experience I'm used to and enjoy.

nav#menu>ul.list>li.list__item{$}*3

<nav id="menu">
  <ul className="list">
    <li className="list__item">1</li>
    <li className="list__item">2</li>
    <li className="list__item">3</li>
  </ul>
</nav>

Which brings us to the counterargument that double quotes are for HTML, not JavaScript. Which, on the other hand, brings us to the next advantage of double quotes.

Double quotes are broadly used in… programming

I don't consider myself a polyglot programmer, although I can code in C++, Java, PHP, JavaScript, Ruby, Elm and Elixir. In some of them it doesn't really matter whether you use single or double quotes for strings (e.g. Ruby), but in others it's a totally different data type (e.g. Elixir) or a syntax error (e.g. Elm). So, using double quotes prevents me from constant context switching.

Double quotes look clearly different from a back-tick

ES6 brings template strings to JavaScript, which are created with the back-tick `. Maybe it doesn't make any difference to you and your retina display, but when you give a talk your audience will appreciate it.
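To illustrate, a trivial, self-contained comparison:

const name = "World";
const plain = "Hello, World";      // double quotes: an ordinary string literal
const greeting = `Hello, ${name}`; // back-tick: a template string with interpolation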

Tooling

It's reasonable to favor consistency over personal preference. No matter whether you prefer single or double quotes, there are extensions for toggling quotes which are useful when you copy, e.g., a configuration for a live chat or an error reporting service.

ESLint config for double quotes:

/* .eslintrc.json */
{
  "rules": {
    "quotes": ["error", "double"]
  }
}

Custom root domain and SSL on Heroku


Heroku managed to significantly lower the bar when it comes to deploying applications and integrating them with third parties through various add-ons. It can be argued, but in my opinion optimizing the platform to be easy at the entry level made it much harder to do some more advanced setups. It's strange, but after quite a long time of using Heroku and feeling comfortable with it, I had never had to configure it from A to Z. I was treating Heroku as a kind of rapid prototyping environment and eventually migrating to more dedicated solutions. That said, for one of the projects Heroku turned out to be a great fit and it made sense to keep it that way. So, now I only have to set up a root domain.

Add domain on Heroku

Firstly, we have to make Heroku aware of which domain is going to be assigned to our application. I'm going to focus on the common scenario: adding a root domain and a www subdomain, for the sake of showing the differences.

heroku domains:add example.com
heroku domains:add www.example.com

Remember to specify the correct remote in case you are using more than one. I'm used to using the master branch as production and the staging branch as… staging.

heroku domains:add example.com -r production
heroku domains:add staging.example.com -r staging

You can see your current config by calling heroku domains without any additional parameters.

$ heroku domains

Domain Name           DNS Target
───────────────  ──────────────────────────────────
example.com      example.com.herokudns.com
www.example.com  www.example.com.herokudns.com

Configure DNS

The next thing to take care of is DNS configuration. It's super straightforward for a subdomain as it only requires setting a CNAME record.

CNAME	www.example.com	www.example.com.herokudns.com

Things get more tricky when it comes to the root domain. If you googled this post you probably know what the problem is. Heroku doesn't provide a static IP address for your application, or to be more precise, Heroku claims the IP can change to provide maximum uptime, so you shouldn't rely on the IP address.

At this point it should be clear that using an A record isn't reliable in the long run, although you can run host example.com to check your app's server IP. So, what's the alternative to an A record? You should use an ALIAS/ANAME record, depending on your DNS provider. The problem is that most domain registrars, like GoDaddy, don't allow you to set such a record from your domain management panel. I'm not sure why it is that way (I can guess, $$$). At this point I'd like to come up with something better than serving the app from the www subdomain and redirecting to www.* from the root domain, but that's the best you can do if you don't want to switch DNS servers.

I'm going to use DNSimple as I like their clean UI and affordable pricing, but you will be fine with any popular DNS provider. To switch to DNSimple you have to configure your domain's DNS servers to:

ns1.dnsimple.com
ns2.dnsimple.com
ns3.dnsimple.com
ns4.dnsimple.com

Those may change in the future (e.g. for new domains), so make sure these are the ones meant for your account.

While you are waiting for the domain to switch to the new DNS servers you can add the DNS entries. An example configuration can look like this:

Type   Name             Content
CNAME  www.example.com  www.example.com.herokudns.com
ALIAS  example.com      example.com.herokudns.com

It may take up to 24h for changes to take effect, but it's more like 6h in my experience. You can check the progress on cachecheck.opendns.com. Keep in mind that setting your Heroku domain instead of Heroku's DNS target would work, but it will fail when you set up SSL.

Don't forget about cloning MX records so your email keeps working, if you have any.

Obtain certificate

Update: Heroku introduced Automated Certificate Management. The service is available on Hobby and Professional dynos. ACM uses Let's Encrypt to automatically generate and renew certificates for custom domains. You can skip to Enforce HTTPS if you decide to use it.

When it comes to certificates, we have two options: buy one or generate it with Let's Encrypt. Which is better for you depends on your needs, but if you just want to have an encrypted connection, use HTTP/2 or Service Workers, then you should be fine with a free certificate from Let's Encrypt.

The easiest way to generate a Let's Encrypt certificate is using certbot. Unfortunately, you won't be able to perform auto-authorization as you can't(?) run certbot on Heroku, unlike on your own server with SSH access.

Install letsencrypt:

sudo apt-get install letsencrypt

or in case you are using macOS:

brew install certbot

Generate the certificate using the manual flag. Running letsencrypt may require sudo.

letsencrypt certonly --manual --email admin@example.com -d example.com -d www.example.com

Follow further instructions and verify your ownership of the domain to complete the process.

Install certificate

To upload the SSL certificate we are going to use Heroku SSL, which is enabled by default on all paid dynos. There is also a paid alternative, SSL Endpoint, which is only relevant if you care about Firefox 2 or Internet Explorer 7 and their lack of SNI support, ouch!

heroku certs:add /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem --app example-app

Make sure you've changed the path to match your domain. The path where the certificate is saved should be included in the success message you saw after generating it.

After the certificate is added you'll see instructions on how to configure DNS, but if you followed the previous steps you should already be fine.

Enforce HTTPS

On Heroku SSL termination happens at the load balancer, so your app has to check the x-forwarded-proto header to determine whether you are serving through HTTP or HTTPS.

Here's an example of a middleware which redirects requests to HTTPS.

function enforceHttps(req, res, next) {
  if (
    !req.secure &&
    req.get("x-forwarded-proto") !== "https" &&
    process.env.NODE_ENV === "production"
  ) {
    res.redirect(301, `https://${req.get("host")}${req.url}`);
  } else {
    next();
  }
}

app.use(enforceHttps);

React components and class properties


React components went a long way from React.createClass through ES2015-powered React.Component to React.PureComponent and stateless functional components. I really enjoy the idea that we don't have to "hack" the language anymore, at least not as much as we used to. The progress in this department is quite clear and brings not always obvious benefits. Using constructs built into the language/transpiler instead of relying on a framework's factory functions or constructors accepting huge configuration objects future-proofs your code.

With Babel or TypeScript the future is already here, or it comes down to your risk aversion towards transpiling your code with a stage-N preset. Stage 2 is a proposal likely to be included in the new standard. It brings class fields and static properties. React components can greatly benefit from them, both when it comes to performance and to reducing noise in your code.

Initializing state

While you should prefer stateless components over classes, sometimes you need to persist some state. To initialize state in a React component we are used to doing it in the constructor.

// BEFORE
import React, { Component } from "react";

class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { counter: 0 };
  }
  ...
}

Initializing state inside the constructor comes with the overhead of calling super and remembering about props, which are React's abstractions leaking in a little. We can initialize state directly as a class property.

// AFTER
import React, { Component } from "react";

class Counter extends Component {
  state = { counter: 0 };
  ...
}

Although it reduces noise in your code, it comes with the limitation that you are no longer able to initialize the component's state with props.

Bound methods

It's common for event handlers to modify state. To do that, the handler has to be called in the component's context, which isn't going to happen by default. A common source of performance problems is binding event handlers with an arrow function or calling .bind(this) on the event handler inside the render() call. Each of those techniques causes the creation of a new function inside render, killing PureComponent/PureRenderMixin optimizations. The right way to approach this problem is binding your event handler inside the component's constructor.

// BEFORE
import React, { Component } from "react";

class Counter extends Component {
  state = { counter: 0 };

  constructor(props) {
    super(props);
    this.onIncrementClick = this.onIncrementClick.bind(this);
  }

  onIncrementClick() {
    this.setState(this.increment);
  }

  increment(state) {
    return { ...state, counter: state.counter + 1 };
  }

  render() {
    return <button onClick={this.onIncrementClick}>{this.state.counter}</button>;
  }
}

We can benefit from the fact that an arrow function preserves the context in which it was defined and set the handler directly as a class field.

// AFTER
import React, { Component } from "react";

class Counter extends Component {
  state = { counter: 0 };

  onIncrementClick = () => {
    this.setState(this.increment);
  }

  increment(state) {
    return { ...state, counter: state.counter + 1 };
  }

  render() {
    return <button onClick={this.onIncrementClick}>{this.state.counter}</button>;
  }
}

Static fields

There are three common cases where static fields shine when it comes to React components: setting propTypes, defaultProps and childContextTypes. We are used to setting them outside of the class. It often makes you scroll to the very bottom of the file to learn which props the component accepts.

// BEFORE
import React, { Component, PropTypes } from "react";

class Counter extends Component {
  ...
}

Counter.propTypes = {
  step: PropTypes.number.isRequired,
};

Defining propTypes as a static field allows you to keep them inside the class and benefit from the common convention of keeping statics at the top of the class.

// AFTER
import React, { Component, PropTypes } from "react";

class Counter extends Component {
  static propTypes = {
    step: PropTypes.number.isRequired,
  }
}

What’s next

I can only guess, but I'm sure we are going to discover great use cases for async/await and popularize partial application with rest parameters. No matter what comes first, it's good we're ready.

Why using localStorage directly is a bad idea


If you have been working on web services for some time you probably remember, or at least heard about, the First Browser War. We are extremely lucky that this scramble between Internet Explorer and Netscape turned into a great race for a better, faster, more unified web experience. That said, we are still facing a lot of inconsistencies and non-trivial edge cases while working with so-called browser APIs.

Some of them are excellent, like Service Worker. Despite the complexity of a production-ready Service Worker, I haven't come across any inconsistency in browser implementations since I started working on Progressive Web Apps. Well, of course, under the condition that the browser has implemented it. Some APIs are cumbersome, like IndexedDB, and it blows my mind every time I use it how such an API was accepted, but I'm digressing. The point is that, unlike those two APIs of significant size, we also have many small APIs which take a burden off developers' shoulders, providing us with a set of great features. One of them is, as mentioned in the title, localStorage. In my perfect bubble localStorage always works and I can rely on it unless you are using Opera Mini. The reality is a little bit different; reality includes Private Browsing and Privacy Content Settings. I've learned that the hard way, from an error tracking tool.

Everything here also applies to sessionStorage.

QuotaExceededError: Dom exception 22: An attempt was made to add something to storage that exceeded the quota

This rather unusual error is thrown by Safari and has nothing to do with the space you are using or the space left on the device. Safari, when Private Browsing is enabled (Cmd+Shift+N), doesn't allow accessing localStorage, and it takes us by surprise. This will return true in Safari (also while Private Browsing):

"localStorage"inwindow// => true

Local storage works perfectly fine in Chrome in Incognito mode and in Firefox Private Window. Data is kept only until you quit the browser.

Uncaught DOMException: Failed to read the ‘localStorage’ property from ‘Window’: Access is denied for this document.

This error is thrown by Chrome when Content Settings prevent setting any data. The error message is fair, although checking whether localStorage exists in window won't take us far.

"localStorage"inwindow// => true

TypeError: Cannot read property ‘getItem’ of null

The problem itself looks simple: localStorage is null. The root of it is still bothering me. There were two reasons why this captured my attention. The first one is that null represents a missing reference. If localStorage hadn't been implemented, the error should be TypeError: Cannot read property 'getItem' of undefined (like in IE 7), as we're trying to access an uninitialized property of the window object. The other reason why this error seems strange is that it's taking place on Android, on the latest versions of Chrome. And as you know by now, Chrome tends to alert us about problems with accessing localStorage in a much different way. I've done some digging and haven't found anything useful, only some rookie mistakes made when setting up a web view. Users' IPs were different, without any pattern or much repetition, so it can be anything: some crawler, a "privacy browser" app or an RSS reader preview. I'll appreciate a comment if you know what it might be.

"localStorage"inwindow// => true

This is also not enough; localStorage is set to null, but it is set.

Solution

I've mentioned that using localStorage directly can be a bad idea and I hope you agree that the above problems are proof of that. Yes, you could wrap each call in a try…catch block, but I doubt that this is what you want to do. Certainly, I'm not going to do this. If you are using localStorage in one or two places in your code and you are not using it for critical use cases like storing a secure token, you could get away with try…catches, maybe letting the user know that he or she should somehow enable localStorage for the best experience.

My goal was simple, to enable users to sign up or sign in even when localStorage or sessionStorage are not available, whatever the reason is.

If we want to provide a wrapper, we start with a reliable way to determine whether the storage is supported or not:

function isSupported(storage) {
  try {
    const key = "__some_random_key_you_are_not_going_to_use__";
    storage.setItem(key, key);
    storage.removeItem(key);
    return true;
  } catch (e) {
    return false;
  }
}

isSupported(window.localStorage); // => true | false

In case of any problem with the storage, an exception is thrown and we know we should use an alternative solution. A simple implementation can look like this:
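The original snippet isn't reproduced here, but a minimal sketch of such a fallback, assuming we only need the getItem/setItem/removeItem/clear subset of the Storage API backed by an object kept in a closure, could look like this:

function inMemoryStorage() {
  let values = {}; // lives only as long as the page, see the note below

  return {
    getItem: (key) => (key in values ? values[key] : null),
    setItem: (key, value) => { values[key] = String(value); },
    removeItem: (key) => { delete values[key]; },
    clear: () => { values = {}; },
  };
}

// use the native storage when possible, fall back to memory otherwise
const storage = isSupported(window.localStorage)
  ? window.localStorage
  : inMemoryStorage();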

To provide more sophisticated functionality, like support for the in operator, we would have to clutter the code with a bunch of ifs, keeping methods and stored values in the same object. We could also use a Proxy, which slightly defeats the purpose when we are trying to provide a solution that works in possibly all of our clients' browsers, but it's up to you. This implementation relies on a variable declared in a closure, so all information is lost after reloading. Depending on your use case, you may need to fall back to the server and store the required information in a session on your backend.

I think it makes sense to consider wrapping native APIs in our own modules, keeping possibly the same API. Neither you nor your co-workers have to relearn it each time. This way you can benefit from the latest and greatest browser features, but at the same time have the possibility to escape the limitations and improve testability.

It may be tempting to use one of the plenty of packages available on npm, but it comes with the cost of additional bytes sent to the user, and you probably don't need that. I can recommend localstorage-memory for unit testing purposes (localStorage isn't available in Node.js).

The Best React Boilerplate: DIY


I know, I know. This title sounds cocky. In fact, it makes a lot of sense if you think about it. I've been asked multiple times, by friends from the local React meetup group I'm organizing or by teams I'm helping to develop their applications, for a starter, a boilerplate or a setup. This post is the result of another such question. Most of the time it's one of two scenarios.

In the first scenario developers are looking for a way to start a project, and this is the problem create-react-app is trying to address. It's an official command line tool which is the recommended way to start a React project. It works well for people who are starting with React as it abstracts away a lot of the complexity connected to configuring Babel, webpack etc. For more serious, production use, when you require more control over your build, create-react-app also provides an eject script which unpacks a lot of the logic initially hidden in the react-scripts package. At that point, the initially hidden complexity is often intimidating, and that's only the tip of the iceberg. Now you would probably like to have a library for state management, server side rendering and so on. Nevertheless, if you want to test a new library or learn about React, create-react-app is a great choice.

In the second scenario the application is already under development, but no one on the team is comfortable with it and everyone perceives the "code around webpack" as strange and extraneous. The core of the application is one of the popular boilerplates cloned at the beginning of the project. Often there was never time to understand it from the ground up and get a holistic picture of what's going on.

This somewhat long introduction leads us to the merit of what I'm trying to say. I believe that the best boilerplate is the one you build yourself. The initial investment in putting the puzzle together pays off in confidence, in the ease of solving later problems, and in overall maintenance cost.

So, what I’m going to do is take my minimal setup (webpack 2 + hot reloading) and start building upon it. At the end I’ll have Redux, React Router and Helmet playing nicely together on both client and server.

If you don't like it, that's fine; it was created to make my life easier and it does that great. You can always use create-react-app instead and eject.

The entire code is available on GitHub: react-boilerplate-lite#not-so-lite.

React Router

Routing is an important part of our setup as it has to be integrated on both the client and the server side. React Router is the de facto standard way to manage routes in React applications and it supports server side rendering.

I'm going to use the most up-to-date version, React Router v4, but if you are already using a previous version, follow the instructions from the corresponding docs or update. Migration paths are straightforward, so I would recommend updating. We want to use the same routes definition on both client and server, but we need different routers. On the client side the application should be wrapped in a <BrowserRouter> and on the server side in a <StaticRouter>. Let's start with defining routes in the top-level component.
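The original snippet isn't reproduced here, but a minimal sketch of such a top-level component, assuming hypothetical App, Home and About components, could look like this:

// routes.jsx
import React from "react";
import { Route, Switch } from "react-router-dom";
import App from "./common/App/App";
import Home from "./routes/Home/Home";
import About from "./routes/About/About";

const Routes = () => (
  <App>
    <Switch>
      <Route exact path="/" component={Home} />
      <Route exact path="/about" component={About} />
    </Switch>
  </App>
);

export default Routes;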

Now, let's edit the render function in the bundle entry file and wrap the routes in BrowserRouter. BrowserRouter uses the HTML5 history API, so there's no need to explicitly pass history like in the previous React Router version. The implementation supporting the hash portion of the URL has been moved to HashRouter. We'll get back to the render function later when integrating Redux.
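As a sketch, assuming the entry file renders into a #root element and the routes component defined above:

// index.jsx
import React from "react";
import { render } from "react-dom";
import { BrowserRouter } from "react-router-dom";
import Routes from "./routes";

render(
  <BrowserRouter>
    <Routes />
  </BrowserRouter>,
  document.getElementById("root"),
);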

Server Side Rendering: Router

To render the application on the server we need a few changes to how the project is built. We could try to use the app's source files directly in Node. The major problem with this approach is that it becomes hard to supplement webpack's features like chunks and various loaders. Whether it's possible or not depends on to what extent your application is coupled with the build process.

I'm using css-loader modules, and I like to require my images and video files and then use the list of assets in my Service Workers for caching. I'd say my apps are tied to webpack. The best solution in such a case is to create a separate server build which works when run by Node. The webpack config for the server is similar to the production one, with a different entry and output.

Once we have a suitable build we can create the render function used on the server. I'm using HtmlWebpackPlugin for generating the index.html file and we can use it as a template on the server. We could also use a templating engine, but we can get away without it. Depending on whether we serve files from memory during development or from the hard drive in production, we need a different way to read this file. To read the template during development we use webpack-dev-middleware. As JSX is not supported natively in Node, I'm using React.createElement to wrap the App component with StaticRouter. During development we want to make sure that render always uses the newest version of the server bundle, so we delete it from require.cache. We don't have to worry about it in production as we do one build per release.
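The exact implementation lives in the linked repository; a simplified sketch of the idea, assuming the HtmlWebpackPlugin template contains a <div id="root"></div> placeholder and that App comes from the server bundle, could look roughly like this:

// render.js, run by Node (hence require instead of import)
const React = require("react");
const { renderToString } = require("react-dom/server");
const { StaticRouter } = require("react-router-dom");

function renderPage(template, App, url) {
  const context = {}; // mutated by routes through staticContext, see below
  const html = renderToString(
    React.createElement(StaticRouter, { location: url, context },
      React.createElement(App))
  );

  return {
    context,
    body: template.replace('<div id="root"></div>', `<div id="root">${html}</div>`),
  };
}

module.exports = renderPage;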

Although the page is rendered, the server is not aware of 404 errors and cannot perform a redirect. When a page is not found or a redirect should be performed, the page is rendered as usual with a 200 status. To fix it we can use the router context, which can be modified by mutating staticContext, which is passed to each route on render. Let's start with creating a generic component which allows us to set the correct status code.
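A sketch of such a component (here called Status, name assumed), using Route's render prop to reach staticContext:

// Status.jsx
import React from "react";
import { Route } from "react-router-dom";

const Status = ({ code, children }) => (
  <Route
    render={({ staticContext }) => {
      // staticContext is only provided when rendered inside StaticRouter
      if (staticContext) staticContext.status = code;
      return children;
    }}
  />
);

export default Status;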

We can use the Status generic component for the Not Found page and for redirects.
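For example, a Not Found page could be as simple as this sketch:

// NotFound.jsx
import React from "react";
import Status from "./Status";

const NotFound = () => (
  <Status code={404}>
    <h1>Sorry, nothing here.</h1>
  </Status>
);

export default NotFound;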

On the server, in the render function, we have to check whether the context has a url set. If it does, that means a redirect was matched by the router. Then we can perform the redirect with the correct status. If context.url isn't set we just render the application with the given status, 200 by default.
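Continuing the earlier renderPage sketch, with Express-style req/res assumed:

const { context, body } = renderPage(template, App, req.url);

if (context.url) {
  // a <Redirect> was rendered somewhere down the tree
  res.redirect(context.status || 302, context.url);
} else {
  // use the status set by <Status>, 200 by default
  res.status(context.status || 200).send(body);
}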

Helmet

Helmet is a great little library providing a React component for managing all changes to the document head. We can use it for setting the title or different meta tags.
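For example, a page component could declare its title and Open Graph tags like this (a minimal sketch):

import React from "react";
import Helmet from "react-helmet";

const About = () => (
  <div>
    <Helmet>
      <title>About | My App</title>
      <meta property="og:title" content="About | My App" />
    </Helmet>
    <h1>About</h1>
  </div>
);

export default About;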

The Open Graph protocol, which Facebook uses to better understand your website, won't work with Single Page Applications. Neither the Facebook nor the Twitter crawler evaluates JavaScript. This is where Server Side Rendering shines.

After the page is rendered to a string we can obtain plain HTML strings from Helmet and put them into our template. The one caveat is that we have to use the same library instance. To do that we could use the bundled version of Helmet or do the opposite: exclude react-helmet from the bundle, making it an external dependency. Without this step Helmet would just render an empty element, as the application would be rendered with the bundled instance while renderStatic would be called on the instance from node_modules.
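A rough sketch of that step on the server, assuming Helmet refers to the same react-helmet instance the application was rendered with and that the template keeps an empty <title> tag:

const { Helmet } = require("react-helmet");

// after renderToString(...) has run
const helmet = Helmet.renderStatic();

const body = template
  .replace(/<title>.*<\/title>/, helmet.title.toString())
  .replace("</head>", `${helmet.meta.toString()}</head>`)
  .replace('<div id="root"></div>', `<div id="root">${html}</div>`);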

That’s it!

Redux

For this setup I'm going with Redux as it provides a solution for preloading state, which makes things much simpler when server rendering. You can do the same with MobX, but due to its non-centralized state (multiple observables) it requires more glue code.

I'm not going to use any library for effects/async actions like redux-thunk, redux-loop, redux-saga or redux-observable, as this is a dependency that changes from project to project. As far as I'm concerned it doesn't make much sense to include any of them in a boilerplate. On the other hand, if you are a true believer in any of those, go ahead! Each library has good documentation and plenty of examples, so you shouldn't have any problems setting it up. You can always google some boilerplates which implemented one of those for the sake of reference.

Later on we make the preloaded state globally available. Don't forget about replacing the store's reducer once it changes.
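A sketch of the client-side store, assuming a root reducer module at ./reducers and the preloaded state exposed on window.__PRELOADED_STATE__ (set by the server as shown later):

// store.js
import { createStore } from "redux";
import rootReducer from "./reducers";

const store = createStore(rootReducer, window.__PRELOADED_STATE__);

if (module.hot) {
  // replace the store's reducer whenever the reducers change
  module.hot.accept("./reducers", () => {
    store.replaceReducer(require("./reducers").default);
  });
}

export default store;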

Server Side Rendering: Redux

Server Side Rendering with Redux comes down to providing preloaded state for a given route. There are a few possible approaches. The data dependency might be defined by the current route or resolved by a particular route handler on the server. I'm not opinionated; which approach you choose highly depends on whether your Node application can access the database. If that's not the case, letting the route decide has the advantage of defining the required data (e.g. through a GraphQL fragment) or requests (e.g. axios is universal) in one place.

To preload state we need to create a store, and to do that we need a root reducer. Let's add the root reducer as a second entry point.
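In the server webpack config that could be as simple as the following sketch (paths and output settings are illustrative):

const path = require("path");

module.exports = {
  // ...the rest of the server config
  entry: {
    app: "./src/App/App",
    reducers: "./src/reducers",
  },
  output: {
    path: path.resolve(__dirname, "build-server"),
    filename: "[name].js",
    libraryTarget: "commonjs2", // so the server can require() the bundles
  },
};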

The store is created in the render function with the preloaded state applied. The root reducer also shouldn't be cached, so a new version is used every time. If you use connect from react-redux you need to wrap the application in Provider and pass the store as a prop. Finally, we have to make the state globally available, so it can be picked up by the store created on the client side.
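Continuing the render sketch, the Redux part could look roughly like this (Provider comes from react-redux; the serialization is kept deliberately naive):

const { createStore } = require("redux");
const { Provider } = require("react-redux");

const rootReducer = require("./build-server/reducers").default;
const store = createStore(rootReducer, preloadedState);

const html = renderToString(
  React.createElement(Provider, { store },
    React.createElement(StaticRouter, { location: req.url, context },
      React.createElement(App)))
);

// make the state available for the client-side store
const state = JSON.stringify(store.getState()).replace(/</g, "\\u003c");
const body = template.replace(
  "</body>",
  `<script>window.__PRELOADED_STATE__ = ${state}</script></body>`
);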

Summary

It's a lot of steps and it's not trivial to put it all together! That said, it pays off, with interest. The entire code is available on GitHub: react-boilerplate-lite#not-so-lite.

If you don't mind giving away the fun of setting it up and/or you are intimidated by the complexity, you can consider using next.js. You will pay the price of being coupled to the framework, but credit should be given where it's due. The guys from ZEIT are doing great work in open source!

Implementing Geofencing with HERE


Geofencing allows for locating points within defined geographic areas. Areas can be defined with geographic points, consisting of latitude and longitude, forming any shape. HERE provides the Geofencing Extension API for that purpose.

To get started with HERE you need an account. There's a free 90-day trial which should be plenty for the beginning. On developer.here.com you can also find documentation and an API explorer, but don't get too excited: many of the examples are still using version 1 of the Geofencing Extension API; we are going to use version 2.

Creating a map

To render a map only mapsjs-core.js and mapsjs-service.js are required. To add some interactivity to the map we need mapsjs-mapevents.js. The events module makes it possible to add listeners for events like tap or drag. Besides the standard set of events, the events module lets us add a default behavior to the map. That's why the map is not only a static image on a canvas but also supports pan interaction, zooming and more.

<script src="https://js.api.here.com/v3/3.0/mapsjs-core.js"></script>
<script src="https://js.api.here.com/v3/3.0/mapsjs-service.js"></script>
<script src="https://js.api.here.com/v3/3.0/mapsjs-mapevents.js"></script>

To create a map, or use HERE services in general, you need an App ID and an App Code for the JavaScript/REST API, and you need to create a platform instance. There are many map types to choose from. The Map constructor accepts an HTML element to render the map inside and a basic set of parameters like the map's zoom and center position.

const platform = new H.service.Platform({
  app_id: process.env.APP_ID,
  app_code: process.env.APP_CODE,
  useHTTPS: true
});
const defaultLayers = platform.createDefaultLayers();
const map = new H.Map(
  document.querySelector("#map"),
  defaultLayers.satellite.map,
  { zoom: 17.4, center: { lat: 52.514480, lng: 13.239491 } },
);

To give the map the ability to move and zoom in and out we need to create map events and attach a default behavior to them.

const mapEvents = new H.mapevents.MapEvents(map);
const behavior = new H.mapevents.Behavior(mapEvents);

Defining areas

As areas are defined with geographic points, we need to get those points as tuples of latitude and longitude values. If you require high precision, like detecting the correct side of the street, then forget about taking those points from a different map service. It's tempting, as Google Maps allows you to do that without any additional setup, but there's a possibility that grids are not perfectly aligned across the services.

The latitude and longitude of a given point aren't available directly on the event object. Using the pointer's viewportX and viewportY position we can calculate the coordinates. The screenToGeo function is provided on the map object.

map.addEventListener("tap", (e) => {
  const { viewportX, viewportY } = e.currentPointer;
  const cords = map.screenToGeo(viewportX, viewportY);
  console.log(`${cords.lng} ${cords.lat}`);
});

With a set of points building an area, we define polygons which we can upload to the HERE service.

ID	NAME	ABBR	WKT
1	Sector A	SEC_A	POLYGON((13.237612232533337 52.514680421323064, 13.237649783459545 52.51476856436084, 13.238301560250164 52.514639614301224, 13.238272055951 52.5145857489744, 13.237617596951367 52.51467878904291))
2	Sector B	SEC_B	POLYGON((13.238292909249964 52.514639045133485, 13.237653202399912 52.51477044361, 13.237710869893732 52.51486674783397, 13.238323754653635 52.514692094255146))
3	Sector C	SEC_C	POLYGON((13.238327777967157 52.51468964583558, 13.23771221099824 52.514869196243666, 13.237818158254328 52.51495570663296, 13.238353258952799 52.514740246479356))
4	Sector D	SEC_D	POLYGON((13.237805343723096 52.514971283632484, 13.237885809993543 52.51503983889004, 13.238426275110044 52.514793365917456, 13.238364584302701 52.51474521375147, 13.237794614887036 52.51496883522846))
5	Sector Z	SEC_Z	POLYGON((13.241086924988508 52.51474124523777, 13.241709197479963 52.514831020428126, 13.241719926316023 52.51472329017769, 13.241103018242597 52.514689012315344, 13.241097653824568 52.51474124523777))

The first row consists of an arbitrary set of column names; only the WKT column is mandatory. Be careful and separate columns with tabs instead of spaces. If you use spaces the upload will fail with an Illegal column name error response. When you are done adding fences, zip the wkt file.

zip areas.wkt.zip areas.wkt

Areas are uploaded by sending the zipped wkt text file you've just created. We are going to use curl. The upload is a full upload, which means that each time you send a new file to HERE, the old shapes are removed.

curl \
  -i \
  -X POST \
  -H "Content-Type: multipart/form-data" \
  -F "zipfile=@areas.wkt.zip" \
  "https://gfe.cit.api.here.com/2/layers/upload.json?layer_id=4711&app_id=<APP_ID>&app_code=<APP_CODE>"

If you did everything correctly the response should be:

{"storedTilesCount":6,"response_code":"201 Created"}

Checking position

Now we are done with the difficult part of defining and uploading geographic areas to HERE. To check a point against the geofence polygons we issue a request.

fetch(`https://gfe.cit.api.here.com/2/search/proximity.json?app_id=${process.env.APP_ID}&app_code=${process.env.APP_CODE}&layer_ids=4711&key_attribute=NAME&proximity=${lat},${lng}`)

The response contains an array of matched geometries, where each of them has a few values. We are particularly interested in two of them. To read data by the column names associated with the matching polygon we can access attributes. The polygon's shape is defined under geometry, unfortunately as a string instead of a set of points.

{"geometries":[{"attributes":{"ID":"3","GEOMETRY_ID":"2","NAME":"Sector C","ABBR":"SEC_C"},"distance":-99999999,"nearestLat":0,"nearestLon":0,"layerId":"4711","geometry":"MULTIPOLYGON(((13.23833 52.51469,13.23771 52.51487,13.23782 52.51496,13.23835 52.51474,13.23833 52.51469)))"}],"response_code":"200 OK"}"

Drawing fence on a map

If your use case requires you to display the matching fence you have two options. One is to use a fence ID from the wkt file to match it against a list of predefined polygons. The flaw in this approach is having two datasets of polygons which we must keep in sync. The second option is to reuse the polygon definition from the response. Keeping files in sync doesn't sound like fun, and parsing a string into a set of points is a simple task, allowing us to have one source of truth for our fences.

function pairToLatLng(pair) {
  const [lng, lat] = pair.split(" ").map(parseFloat);
  return { lng, lat };
}

function geometryToPoints(geometry) {
  return geometry
    .replace(/[A-Z()]*/g, "")
    .split(",")
    .map(pairToLatLng);
}

First of all, we have to get rid of all the characters which are meaningless for our use case. Pairs of points are separated by a comma and the coordinates in each pair are separated by a single space. The first coordinate is the longitude and the latitude comes second.

const strip = new H.geo.Strip();

geometryToPoints(response.geometries[0].geometry).forEach((point) => {
  strip.pushPoint(point);
});

const polygon = new H.map.Polygon(strip, {
  style: { lineWidth: 1, fillColor: "#FF0000", strokeColor: "#000" }
});

map.addObject(polygon);

We've built an instance of H.geo.Strip from the set of parsed points and we've drawn a fence on the map using H.map.Polygon. Polygons can be styled similarly to SVG elements.

For more information on geofencing visit the documentation and the HERE blog.

Converting DOCX to PDF using Python


When you ask someone to send you a contract or a report there is a high probability that you'll get a DOCX file. Whether you like it or not, it makes sense considering that 1.2 billion people use Microsoft Office, although the definition of "use" is quite vague in this case. DOCX is a binary file which, unlike XLSX, is not famous for being easy to integrate into your application. PDF is much easier when you care more about how a document is displayed than about its abilities for further modification. Let's focus on that.

Python has a few great libraries to work with DOCX (python-docx) and PDF files (PyPDF2, pdfrw). Those are good choices and a lot of fun to use for reading or writing files. That said, I know I'd fail miserably trying to achieve a 1:1 conversion.

Looking further I came across unoconv. Universal Office Converter is a library that converts any document format supported by LibreOffice/OpenOffice. That sounds like a solid solution for my use case, where I care more about quality than anything else. As execution time isn't my problem, I was only concerned whether it's possible to run LibreOffice without an X display. Apparently, LibreOffice can be run in headless mode and supports conversion between various formats, sweet!

I'm grateful to unoconv for the idea and the great README explaining multiple problems I could come across. At the same time, I'm put off by the number of open issues and abandoned pull requests. If I get the versions right, how hard can it be? Not hard at all, with a few caveats though.

Testing converter

LibreOffice is available on all major platforms and has an active community. It's not as active as new-hot-js-framework-active, but still with plenty of good reads and support. You can get your copy from the download page. Be a good user and go with the up-to-date version. You can always downgrade in case of any problems, and feedback on the latest release is always appreciated.

On macOS and Windows the executable is called soffice, and libreoffice on Linux. I'm on macOS; the soffice executable isn't available in my PATH after the installation, but you can find it inside LibreOffice.app. To test how LibreOffice deals with your files you can run:

/Applications/LibreOffice.app/Contents/MacOS/soffice --headless --convert-to pdf test.docx

In my case the results were more than satisfying. The only problem I saw was a misalignment in a file where the alignment was done with spaces, sad but true. This problem was caused by missing fonts and the different widths of the "replacement" fonts. No worries, we'll address this problem later.

Setup I

While reading unoconv issues I've noticed that many problems arise due to a mismatch of versions. I'm going with Docker so I can have a pretty stable setup and be sure that everything works.

Let's start with defining a simple Dockerfile, just with the dependencies, and ADD one DOCX file just for testing:

FROM ubuntu:17.04

RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev
RUN apt-get install -y libreoffice

ADD test.docx /app/

Let’s build an image:

docker build -t my/docx2pdf .

After the image is created we can run the container and convert the file inside it:

docker run --rm --name docx2pdf-container my/docx2pdf \
  libreoffice --headless --convert-to pdf --outdir app /app/test.docx

Running LibreOffice as a subprocess

We want to run LibreOffice converter as a subprocess and provide the same API for all platforms. Let’s define a module which can be run as a standalone script or which we can later import on our server.

import sys
import subprocess
import re


def convert_to(folder, source, timeout=None):
    args = [libreoffice_exec(), '--headless', '--convert-to', 'pdf', '--outdir', folder, source]

    process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=timeout)
    filename = re.search('-> (.*?) using filter', process.stdout.decode())

    if filename is None:
        raise LibreOfficeError(process.stdout.decode())
    else:
        return filename.group(1)


def libreoffice_exec():
    # TODO: Provide support for more platforms
    if sys.platform == 'darwin':
        return '/Applications/LibreOffice.app/Contents/MacOS/soffice'
    return 'libreoffice'


class LibreOfficeError(Exception):
    def __init__(self, output):
        self.output = output


if __name__ == '__main__':
    print('Converted to ' + convert_to(sys.argv[1], sys.argv[2]))

The required arguments which convert_to accepts are the folder to which we save the PDF and the path to the source file. Optionally, we specify a timeout in seconds. I'm saying optional, but consider it mandatory. We don't want a process to hang for too long in case of any problems, or we simply want to limit the computation time we are able to give to each conversion. The LibreOffice executable location and name depend on the platform, so edit libreoffice_exec to support the platform you're using.

subprocess.run doesn't capture stdout and stderr by default. We can easily change the default behavior by passing subprocess.PIPE. Unfortunately, in the case of a failure, LibreOffice will exit with return code 0 and nothing will be written to stderr. I decided to look for the success message, assuming that it won't be there in case of an error, and raise LibreOfficeError otherwise. This approach hasn't failed me so far.

Uploading files with Flask

Converting using the command line is OK for testing and development but won't take us far. Let's build a simple server in Flask.

# common/files.py
import os
from config import config
from werkzeug.utils import secure_filename


def uploads_url(path):
    return path.replace(config['uploads_dir'], '/uploads')


def save_to(folder, file):
    os.makedirs(folder, exist_ok=True)
    save_path = os.path.join(folder, secure_filename(file.filename))
    file.save(save_path)
    return save_path
# common/errors.py
from flask import jsonify


class RestAPIError(Exception):
    def __init__(self, status_code=500, payload=None):
        self.status_code = status_code
        self.payload = payload

    def to_response(self):
        return jsonify({'error': self.payload}), self.status_code


class BadRequestError(RestAPIError):
    def __init__(self, payload=None):
        super().__init__(400, payload)


class InternalServerErrorError(RestAPIError):
    def __init__(self, payload=None):
        super().__init__(500, payload)

We'll need a few helper functions to work with files and a few custom errors for handling error messages. The upload directory path is defined in config.py. You can also consider using flask-restplus or flask-restful, which make handling errors a little easier.

import os
from uuid import uuid4
from flask import Flask, render_template, request, jsonify, send_from_directory
from subprocess import TimeoutExpired
from config import config
from common.docx2pdf import LibreOfficeError, convert_to
from common.errors import RestAPIError, InternalServerErrorError
from common.files import uploads_url, save_to

app = Flask(__name__, static_url_path='')


@app.route('/')
def hello():
    return render_template('home.html')


@app.route('/upload', methods=['POST'])
def upload_file():
    upload_id = str(uuid4())
    source = save_to(os.path.join(config['uploads_dir'], 'source', upload_id), request.files['file'])

    try:
        result = convert_to(os.path.join(config['uploads_dir'], 'pdf', upload_id), source, timeout=15)
    except LibreOfficeError:
        raise InternalServerErrorError({'message': 'Error when converting file to PDF'})
    except TimeoutExpired:
        raise InternalServerErrorError({'message': 'Timeout when converting file to PDF'})

    return jsonify({'result': {'source': uploads_url(source), 'pdf': uploads_url(result)}})


@app.route('/uploads/<path:path>', methods=['GET'])
def serve_uploads(path):
    return send_from_directory(config['uploads_dir'], path)


@app.errorhandler(500)
def handle_500_error():
    return InternalServerErrorError().to_response()


@app.errorhandler(RestAPIError)
def handle_rest_api_error(error):
    return error.to_response()


if __name__ == '__main__':
    app.run(host='0.0.0.0', threaded=True)

The server is pretty straightforward. In production, you would probably want to use some kind of authentication to limit access to uploads directory. If not, give up on serving static files with Flask and go for Nginx.

An important takeaway from this example is that you want to tell your app to be threaded so that one request won't prevent other routes from being served. However, the WSGI server included with Flask is not production ready and focuses on development. In production, you want to use a proper server with automatic worker process management like gunicorn. Check the docs for an example of how to integrate gunicorn into your app. We are going to run the application inside a container, so the host has to be set to the publicly visible 0.0.0.0.

Setup II

Now that we have a server we can update the Dockerfile. We need to copy our application source code to the image filesystem and install the required dependencies.

FROM ubuntu:17.04

RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y build-essential libssl-dev libffi-dev python-dev
RUN apt-get install -y libreoffice

ADD app /app
WORKDIR /app

RUN pip3 install -r requirements.txt

ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

CMD python3 application.py

In docker-compose.yml we want to specify the ports mapping and mount a volume. If you followed the code and tried running the examples, you have probably noticed that we were missing a way to tell Flask to run in debugging mode. Defining an environment variable without a value causes that variable to be passed to the container from the host system. Alternatively, you can provide different config files for different environments.

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - ./app:/app
    environment:
      - FLASK_DEBUG

Supporting custom fonts

I've mentioned the problem with missing fonts earlier. LibreOffice can, of course, make use of custom fonts. If you can predict which fonts your users might be using there's a simple remedy. Add the following line to your Dockerfile.

ADD fonts /usr/share/fonts/

Now, when you put a custom font file in the fonts directory in your project, rebuild the image. From now on you support custom fonts!

Summary

This should give you an idea of how you can provide quality conversion of different documents to PDF. Although the main goal was to convert a DOCX file, you should be fine with presentations, spreadsheets or images.

A further improvement could be providing support for multiple files; the converter can be configured to accept more than one file as well.

Optimize React build for production with webpack


This guide is a way of writing down a few techniques that I have been using, with ups and downs, for the past two years. Optimizations highly depend on your goals, on how users are experiencing your app, and on whether you care more about time to interactive or overall size. It should not come as a surprise that, like always, there is no silver bullet. Consider yourself warned. Although you have to optimize for your own use cases, there is a set of common methods and rules to follow. Those rules are a great starting point for making your build lighter and faster.

TL;DR

  1. Minify with UglifyJS
  2. Remove dead code with Tree shaking
  3. Compress with gzip
  4. Routing, code splitting, and lazy loading
  5. Dynamic imports for heavy dependencies
  6. Split across multiple entry points
  7. CommonsChunkPlugin
  8. ModuleConcatenationPlugin
  9. Optimize CSS class names
  10. NODE_ENV="production"
  11. Babel plugins optimizations

Minify with UglifyJS

UglifyJS is a truly versatile toolkit for transforming JavaScript. Despite the humongous number of configuration options available, you only need to know about a few to effectively reduce bundle size. A small set of common options brings a major improvement.

module.exports = {
  devtool: "source-map", // cheap-source-map will not work with UglifyJsPlugin
  plugins: [
    new webpack.optimize.UglifyJsPlugin({
      sourceMap: true, // enable source maps to map errors (stack traces) to modules
      output: {
        comments: false, // remove all comments
      },
    }),
  ]
};

You can take it from there and fine-tune it. Change how UglifyJS mangles function and property names, decide whether you want to apply certain optimizations. While you are experimenting with UglifyJS keep in mind that certain options put restrictions on how you can use certain language features. Make yourself familiar with them or you, or rather your users, will come across a few hard to debug problems present only in the production bundle. Obey the rules of each option and test extensively after each change.

Remove dead code with Tree shaking

Tree shaking is a dead code, or more accurately, not-imported code elimination technique which relies on ES2015 module import/export. In the old days, like 2015, if you imported one function from an entire library you would still have to ship a lot of unused code to your user. Well, unless the library supports cherry-picking methods like lodash does, but that is a story for a different post. Webpack introduced support for native imports and Tree shaking in version 2. This optimization was popularized much earlier in the JavaScript community by rollup.js. Although webpack offers support for Tree shaking, it does not remove any unused exports on its own. Webpack just adds a comment with an annotation for UglifyJS. To see the effects of marking exports for removal, disable minimization.

module.exports = {
  devtool: "source-map",
  plugins: [
    // disable UglifyJS to see Tree shaking annotations for UglifyJS
    // new webpack.optimize.UglifyJsPlugin({
    //   sourceMap: true,
    //   output: {
    //     comments: false,
    //   },
    // }),
  ]
};

Tree shaking only works with ES2015 modules. Make sure you have disabled transforming modules to commonjs in your Babel config. Node does not support ES2015 modules, and you probably use it to run your unit tests, so make sure the transformation is enabled in the test environment.

{"presets":[["es2015",{"modules":false}],"react"],"env":{"test":{"presets":["es2015","react"],}}}

Let's try it out on a simple example and see whether the unused export is marked for removal. From math.js we only need the fib function and the doFib function which is called by fib.

// math.js
function doFact(n, akk = 1) {
  if (n === 1) return akk;
  return doFact(n - 1, n * akk);
}

export function fact(n) {
  return doFact(n);
}

function doFib(n, pprev = 0, prev = 1) {
  if (n === 1) return prev;
  return doFib(n - 1, prev, pprev + prev);
}

export function fib(n) {
  return doFib(n);
}

// index.js
import { fib } from "./common/math";

console.log(fib(10));

Now you should be able to find such a comment in the bundle code, with the entire module below it.

/* unused harmony export fact */

Enable UglifyJS again and fact along with doFact will be removed from the bundle. That is the theory behind Tree shaking. How effective is it in a more real-life example? To keep proportions I am going to include React and lodash in the project. I am importing the 10 lodash functions I use most often: omit, pick, toPairs, uniqBy, sortBy, memoize, curry, flow, throttle, debounce.

Such a bundle weighs 72KB after being gzipped. That basically means that in spite of using Tree shaking, webpack bundled the entire lodash. So why do we bundle all of lodash but only the used exports from math.js? Lodash is meant for both browser and Node. That is why by default it is available as a commonjs module. We can use lodash-es, which is the ES2015 module version of lodash. Such a bundle with lodash-es weighs… 84KB. That is what I call a failure. It is not a big deal as this can be fixed with babel-plugin-lodash. Now it is "only" 60KB. The thing is, I would not count on Tree shaking to significantly reduce bundled library size. At least not out of the box. The most popular libraries have not fully embraced it yet; publishing packages as ES modules is still a rare practice.

Compress with gzip

Gzip is a small program and a file format used for file compression. Gzip takes advantage of redundancy. It is so effective in compressing text files that it can reduce response size by about 70%. Our gzipped 60KB bundle was 197KB before gzip compression!

Although enabling gzip for serving static files seems an obvious thing to do, only about 69% of pages are actually using it.

If you are using Express to serve your files, use the compression package. There are a few available options, but level is the most influential. There is also a filter option allowing you to pass a predicate which indicates whether a file should be compressed. The default filter function takes into account the size and the file type.

const express = require("express");
const compression = require("compression");

const app = express();

function shouldCompress(req, res) {
  if (req.headers["x-no-compression"]) return false;
  return compression.filter(req, res);
}

// compression has to be registered before the static middleware
// so that static responses are compressed as well
app.use(compression({
  level: 2, // set compression level from 1 to 9 (6 by default)
  filter: shouldCompress, // set predicate to determine whether to compress
}));
app.use(express.static("build"));

When I do not need the additional logic which is possible with Express (like Server Side Rendering), I prefer to use nginx for serving static files. You can read more about configuration options in the nginx gzip docs.

gzip on;
gzip_static on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

OK, but what level of compression should you choose? It depends on the size of your files and your server's CPU. Just as a reminder, the bundle I was working on before is a 197KB JavaScript file. I did some stress tests with wrk using 120 connections, 8 threads, for 20 seconds, on a MacBook Pro 15" mid-2015:

wrk -c 120 -d 20s -t 8 http://localhost:8080/app.8f3854b71e9c3de39f8d.js

I am focusing on the border values and the default one I use most of the time.

              level 1    level 6    level 9
size          68.3 KB    57.9 KB    57.7 KB
requests/s    2493       2435       2284

I do not have any brilliant conclusions here. It is what I would expect, and using the default level 6 makes the most sense to me in this setup. Static files have the advantage that they can be compressed at build time.

const CompressionPlugin = require("compression-webpack-plugin");

module.exports = {
  plugins: [
    new CompressionPlugin(),
  ]
};

A file compressed this way is 57.7KB and does not take processor time thanks to gzip_static on;. Nginx will just serve /app.8f3854b71e9c3de39f8d.js.gz when /app.8f3854b71e9c3de39f8d.js is requested. The other takeaway about gzip is to never enable it for images, videos, PDFs, and other binary files, as those are already compressed by their nature.

Routing, code splitting, and lazy loading

Code splitting allows for dividing your code into smaller chunks in such a way that each chunk can be loaded on demand, in parallel, or conditionally. Easy code splitting is one of the biggest advantages of using webpack and it gained webpack great popularity. Code splitting made webpack the module bundler. Over two years ago, when I was moving from gulp to webpack, I was using angularjs for developing web apps. Implementing lazy loading in angular required a few hacks here and there. With React and its declarative nature, it is much easier and much more elegant.

There are a few ways you can approach code splitting and we will go through most of them. Right now, let's focus on splitting our application by routes. The first thing we need is react-router v4 and a few route definitions.

// routes.jsx
import React from "react";
import { Route, Switch } from "react-router-dom";
import App from "./common/App/App";
import Home from "./routes/Home/Home";
import About from "./routes/About/About";
import Login from "./routes/Login/Login";

const Routes = () => (
  <App>
    <Switch>
      <Route exact path="/" component={Home} />
      <Route exact path="/about" component={About} />
      <Route exact path="/login" component={Login} />
    </Switch>
  </App>
);

export default Routes;

// Home.jsx
import React from "react";
import { pick, toPairs, uniqBy } from "lodash-es";

const Home = () => { ... }

// About.jsx
import React from "react";
import { sortBy, memoize, curry } from "lodash-es";

const About = () => { ... }

// Login.jsx
import React from "react";
import { flow, throttle, debounce } from "lodash-es";

const Login = () => { ... }

I have also split my 9 favorite lodash utility functions "equally" across the routes. Now, with routes and react-router, the application size is 69KB gzipped. The goal is to make loading each route faster by excluding the code of the other pages. You can check code execution coverage in Chrome DevTools. Approximately over 47% of the bundled code is not used when entering a given route.

React Router v4 is a full rewrite of the most popular router library for React. There is no simple migration path. The upside is that the new version is more modular and declarative. The downside is that you need a few additional packages to match the functionality of the previous version, like query-string or qs for parsing query params and react-loadable for component lazy loading.

To defer loading a page component until it is really needed we can use react-loadable. The Loadable HOC expects a function which will lazy load and return a component. I am not keen on the idea of adding this code to each route. Imagine the next version is a breaking change and you have to go through every route to change the code. Instead, I am going to create a LazyRoute component and use it in my routes definition.

// routes.jsx
import React from "react";
import { Route, Switch } from "react-router-dom";
import Loadable from "react-loadable";
import App from "./common/App/App";

const LazyRoute = (props) => {
  const component = Loadable({
    loader: props.component,
    loading: () => <div>Loading&hellip;</div>,
  });
  return <Route {...props} component={component} />;
};

const Routes = () => (
  <App>
    <Switch>
      <LazyRoute exact path="/" component={() => import("./routes/Home/Home")} />
      <LazyRoute exact path="/about" component={() => import("./routes/About/About")} />
      <LazyRoute exact path="/login" component={() => import("./routes/Login/Login")} />
    </Switch>
  </App>
);

export default Routes;

After implementing dynamic imports for route components, loading any given route takes 65KB instead of 69KB. You can say it is not much, but keep in mind that we have just installed react-loadable. Down the road, this improvement pays off.

Before webpack 2, when imports were not natively supported, you used require.ensure to code split and load code dynamically. The disadvantage of using webpack 2 imports is that it does not allow you to name your chunks. It is not really webpack 2’s fault, it is just not a part of the import proposal. Instead of app.[hash].js, home.[hash].js, about.[hash].js and login.[hash].js, the bundle contains app.[hash].js, 0.[hash].js, 1.[hash].js, 2.[hash].js. This is not very helpful, especially when you are trying to tackle regression issues. For example, after some change you have noticed that the bundle has grown in size. Adding a new dynamic import can change the names of unrelated modules. Fortunately, webpack 3 already addressed that issue with so-called “magic comments”:

<LazyRoute exact path="/" component={() => import(/* webpackChunkName: "home" */ "./routes/Home/Home")} />
<LazyRoute exact path="/about" component={() => import(/* webpackChunkName: "about" */ "./routes/About/About")} />
<LazyRoute exact path="/login" component={() => import(/* webpackChunkName: "login" */ "./routes/Login/Login")} />

// -rw-r--r--  1 michal  staff    62K Aug  6 11:07 build/app.ca30de797934ff9484e2.js.gz
// -rw-r--r--  1 michal  staff   3.2K Aug  6 11:07 build/home.a5a7f7e91944ead98904.js.gz
// -rw-r--r--  1 michal  staff   6.0K Aug  6 11:07 build/about.d8137ade9345cc48795e.js.gz
// -rw-r--r--  1 michal  staff   1.4K Aug  6 11:07 build/login.a68642ebb547708cf0bc.js.gz

Dynamic imports for heavy dependencies

There are multiple ways to benefit from dynamic imports. I have covered dynamic imports for React components already. The other way to optimize is to find a library or other module of significant size which is used only under certain conditions. An example of such a dependency can be libphonenumber-js for phone number formatting and validation (70-130KB, varying depending on the selected metadata) or zxcvbn for a password strength check (a whopping 820KB).
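As a sketch of the idea, a validator that pulls libphonenumber-js in only when a phone number actually has to be checked could look like this (isValidNumber is assumed here; verify the exact API of the version you install):

// loaded on first use only; the 70-130KB of metadata stays out of the main bundle
function validatePhone(value, country) {
  return import("libphonenumber-js").then((lib) => lib.isValidNumber(value, country));
}

validatePhone("+12133734253", "US").then((valid) => console.log(valid));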

Imagine you have to implement a login page which contains two forms: login and signup. You want to load neither libphonenumber-js nor zxcvbn when a user only wants to log in. This admittedly naive example shows how you can introduce better, more refined rules for on-demand, dynamic code loading. We want to show password strength only when the user focuses the input, not sooner.

class SignUpForm extends Component {
  state = {
    passwordStrength: -1,
  };

  static LABELS = ["terrible", "bad", "weak", "good", "strong"];

  componentWillReceiveProps = (newProps) => {
    if (this.props.values.password !== newProps.values.password) {
      this.setPasswordStrength(newProps.values.password);
    }
  };

  showPasswordStrength = () => {
    if (this.state.passwordStrength === -1) {
      // import on demand
      import("zxcvbn").then((zxcvbn) => {
        this.zxcvbn = zxcvbn;
        this.setPasswordStrength(this.props.values.password);
      });
    }
  };

  setPasswordStrength = (password) => {
    if (this.zxcvbn) {
      this.setState({ passwordStrength: this.zxcvbn(password).score });
    }
  };

  render() {
    const { onSubmit, onChange, values } = this.props;
    const { passwordStrength } = this.state;

    return (
      <form onSubmit={onSubmit}>
        <div>
          <label>
            Email:{" "}
            <input type="email" name="email" value={values.email} onChange={onChange} />
          </label>
        </div>
        <div>
          <label>
            Password:{" "}
            <input
              type="password"
              name="password"
              value={values.password}
              onChange={onChange}
              onFocus={this.showPasswordStrength}
            />
            {passwordStrength > -1 && <div>Password is {SignUpForm.LABELS[passwordStrength]}</div>}
          </label>
        </div>
        <input type="submit" />
      </form>
    );
  }
}

Split across multiple entry points

As you probably know, a single webpack build can have multiple entry points. This feature can be used very effectively to reduce the loaded code for particular parts of the application. Imagine that your app works similarly to the Heroku frontend. You have a homepage which introduces the service, but most of the features, and so most of the code, are meant for logged in users (apps management, monitoring, billing etc.). Maybe you do not even need to use React for your homepage and the entire JavaScript code required comes down to displaying a lame popup. Let’s write some VanillaJS!

// home.js
const email = prompt("Sign up to our lame newsletter!");
fetch("/api/emails", { method: "POST", body: JSON.stringify({ email }) });

We are going to use a different HTML file for the homepage and a separate entry point for its code. HTMLWebpackPlugin is used for generating an HTML file with the entry points’ paths injected. We need two separate files: home.html for our new homepage and index.html for the rest of the pages. To generate two separate files, use two instances of HTMLWebpackPlugin with a different config. You want to explicitly specify chunks for each file.

module.exports = {
  entry: {
    app: [path.resolve("src/index.jsx")],
    home: [path.resolve("src/home.js")],
  },
  plugins: [
    new HTMLWebpackPlugin({
      filename: "home.html",
      excludeChunks: ["app"],
      template: path.resolve("src/home.html"),
    }),
    new HTMLWebpackPlugin({
      excludeChunks: ["home"],
      template: path.resolve("src/index.html"),
    }),
  ],
};

The last thing is to customize your server so GET / serves home.html. Add a handler for GET / before the express.static middleware so home.html takes precedence over index.html.

app.get("/",(req,res)=>{res.sendFile(path.resolve("build","home.html"));});app.use(express.static("build"));app.get("*",(req,res)=>{res.sendFile(path.resolve("build","index.html"));});

This way we went down from loading over 70KB of JavaScript to only 3.5KB! Using multiple entry points requires good planning and an understanding of the business requirements. On the other hand, the implementation itself is really simple.

CommonsChunkPlugin

The CommonsChunkPlugin can be used to create a separate file (a chunk) consisting of modules which are used across multiple entry points and their children. The advantage of having one file for common modules is the lack of repetition, which is always a good thing. Moreover, a chunk with a hash calculated from its content in the name can be aggressively cached. Once a file is downloaded, it can later be served from the disk until it changes.
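For the caching part to pay off, chunk file names have to include a content-derived hash, so unchanged chunks keep their names between builds. A minimal output configuration sketch (paths are illustrative):

const path = require("path");

module.exports = {
  output: {
    path: path.resolve("build"),
    // [chunkhash] changes only when the content of a given chunk changes
    filename: "[name].[chunkhash].js",
    chunkFilename: "[name].[chunkhash].js",
  },
};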

There are a few ways to use CommonsChunkPlugin and you can combine them for the best result. The provided configuration consists of two instances. The simplest one just creates a separate file, of a given name, with modules reused across entry points (app and home). The next configuration makes sure that modules reused across children (about and login) are also exported to a separate file and will not add size to each child chunk. With minChunks you can set the least number of chunks that have to reuse a module before the module is extracted.

module.exports = {
  entry: {
    home: [path.resolve("src/home.js")],
    app: [path.resolve("src/index.jsx")],
  },
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({
      name: "commons",
    }),
    new webpack.optimize.CommonsChunkPlugin({
      children: true,
      async: true,
      minChunks: 2, // the least number of chunks reusing a module before the module can be extracted
    }),
  ],
};

ModuleConcatenationPlugin

ModuleConcatenationPlugin, or actually Scope Hoisting, is the main feature of webpack 3. The basic idea behind Scope Hoisting is to reduce the number of wrapper functions around the modules. Rollup does that too. In theory, it should speed up the execution of the bundle by reducing the number of closures that have to be called. I might sound skeptical but I am really excited that the webpack team is pursuing performance! The talk I linked to is just very good and sheds some light on micro-optimizations.

Although some users report a significant improvement in bundle size, such an improvement would mean that webpack glue code makes up the majority of their bundled code. From my experience, it may save you a few kilobytes here and there (for an average ~350KB bundle) but that is it.

module.exports = {
  plugins: [
    new webpack.optimize.ModuleConcatenationPlugin(),
  ],
};

Similar to Tree shaking, this optimization works only with ES modules.

Optimize CSS class names

Whether this optimization applies to you or not depends on how you handle your styles. If you use a CSS-in-JS kind of solution, you just ship JS code and the CSS strings styling your components together with the components themselves.

On the other hand, if you prefer to use css-loader and ExtractTextPlugin there is a way in which you can affect the size of the shipped CSS code. I have recently been testing how class names influence the size of the bundle. As you may know, css-loader allows for specifying a pattern in which a selector is mapped to a unique ident. By default, it is a 23-character hash. I did a few tests, tried a few patterns in one of the projects and I was more than pleased with the results. At first glance, the less code the better, so the shortest class names should give the best result.

                       default [hash:23]   [hash:12]   [local]-[hash:5]   [name]-[hash:5]
bundle (with assets)   8540 KB             8504 KB     8524 KB            8532 KB
gzip                   3936 KB             3904 KB     3829 KB            3892 KB
savings (gzip)         N/A                 32 KB       107 KB             44 KB

Due to the nature of how compression works, making idents more similar to each other results in a smaller gzipped bundle. Selector names are used in both the CSS file and the components, so the size savings are doubled. If you have a lot of small components and many generic, similar class names (wrapper, title, container, item) your results will be similar to mine. If you have a smaller number of components but a lot of CSS selectors you might be better off with [name]-[hash:5].
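The patterns from the table are set through css-loader’s localIdentName option. A sketch for a webpack 3 era configuration (adjust to the loader version you use; the setup from this article would wrap the loaders in ExtractTextPlugin.extract):

module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          "style-loader",
          {
            loader: "css-loader",
            options: {
              modules: true,
              // one of the patterns from the table above
              localIdentName: "[name]-[hash:5]",
            },
          },
        ],
      },
    ],
  },
};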

To use ExtractTextPlugin, source maps, and UglifyJsPlugin together, remember to enable the sourceMap option. Otherwise, you can come across some issues which are not so obvious to debug. By not so obvious I mean some crazy error messages which do not tell you anything and do not seem to be related in any way to what you have just done. Love it or hate it, but sometimes that is the price of tools which are doing the heavy lifting for you.
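A sketch of the relevant flags in one place, assuming the webpack 3 / ExtractTextPlugin setup discussed here:

const webpack = require("webpack");
const ExtractTextPlugin = require("extract-text-webpack-plugin");

module.exports = {
  devtool: "source-map",
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ExtractTextPlugin.extract({
          use: [{ loader: "css-loader", options: { sourceMap: true } }],
        }),
      },
    ],
  },
  plugins: [
    new ExtractTextPlugin("styles.[contenthash].css"),
    // sourceMap has to be enabled here explicitly as well
    new webpack.optimize.UglifyJsPlugin({ sourceMap: true }),
  ],
};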

NODE_ENV=”production”

This one seems pretty obvious! I bet when you read the title “Optimize React build for production with webpack” you could think of at least two things: NODE_ENV and UglifyJS. Although it is pretty common knowledge, confusion happens.

How does it work? It is not a React-specific optimization and you can quickly try it out. Create an entry point with the following content:

if (process.env.NODE_ENV !== "production") {
  alert("Hello!");
}

Create a development build. This should be its content:

webpackJsonp([1],{10:function(n,o,e){n.exports=e(11)},11:function(n,o,e){alert("Hello!")}},[10]);//# sourceMappingURL=app.fc8e58739d91fe5afee6.js.map

As you can see, there is not even an if statement, but let’s move along. Make sure that NODE_ENV is set to "production". You can do it with:

module.exports = {
  plugins: [
    // make sure that NODE_ENV="production" during the build
    new webpack.EnvironmentPlugin(["NODE_ENV"]),
  ],
};

// or

module.exports = {
  plugins: [
    new webpack.DefinePlugin({
      "process.env": {
        NODE_ENV: JSON.stringify("production"),
      },
    }),
  ],
};

Now this is the build:

webpackJsonp([1],{10:function(n,o,c){n.exports=c(11)},11:function(n,o,c){}},[10]);//# sourceMappingURL=app.3065b2840be2e08955ce.js.map

Here is how we get back to UglifyJS and dead code elimination. During the build, process.env.NODE_ENV is replaced with a string value. UglifyJS, seeing

if("development"!=="production")

drops the always-true condition and keeps the body. The opposite happens when the minimizer comes across

if("production"!=="production")

This condition is always false and UglifyJS drops the dead code. It works the same with prop types!

Babel plugins optimizations

Babel opens new opportunities not only for writing cutting edge JavaScript for production but also for a wide variety of optimizations. I have mentioned the babel lodash plugin already; there is a lot more to explore.

You know that React does not check props in production. You can also notice that, despite this optimization, prop types are still present in the components’ code. It is dead code, I know that, you know that, but UglifyJS does not know that. You can use babel-plugin-transform-react-remove-prop-types to get rid of those calls.

{"presets":[["es2015",{"modules":false}],"stage-2","react"],"env":{"production":{"plugins":["transform-react-remove-prop-types"]}}}

Wrap up

I have gone through a few possible optimizations but did not mention one of the most important things in the entire process. Whatever optimization you are applying, make sure to extensively test the outcome. There are a few good tools which can help you do that, e.g. BundleAnalyzerPlugin. If you prefer to do it the old way:

tar -czf build.tar.gz build && du -k build.tar.gz

Minimizing the data users download is only the first step to improving performance. Stay tuned, and if you like the article follow me on Twitter to learn more performance optimization techniques!


Writing clean code with memoized event handlers


When we talk about writing asynchronous JavaScript we often use timer functions or promises as an example, whereas the majority of asynchronous code written in modern JavaScript web apps is focused on events caused either by a user interacting with the UI (addEventListener) or some native API (IndexedDB, WebSocket, ServiceWorker). With modern front-end frameworks and the way we pass event handlers it is easy to end up with a leaky abstraction.

When you build your application from multiple small components it is a common good practice to move application state to components which are higher in the tree (parent components). Following this pattern, we have developed the concept of so-called “container” components. This technique makes it much easier to provide synced state to multiple child components on different levels in the tree.

One of the downsides is that we sometimes have to provide a callback to handle events along with some additional parameters which are supposed to be applied to that callback function. Here is an example:

const User = ({ id, name, remove }) => (
  <li>
    {name}
    <button onClick={() => remove(id)}>Remove</button>
  </li>
);

class App extends Component {
  state = {
    users: [{ id: "1", name: "Foo" }, { id: "2", name: "Bar" }],
  };

  remove = (id) => {
    this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) }));
  };

  render() {
    return (
      <ul>
        {this.state.users.map(({ id, ...user }) =>
          <User key={id} id={id} {...user} remove={this.remove} />)}
      </ul>
    );
  }
}

Although the User component is not using the user’s id property for presentation, the id is required because remove expects to be called for a specific user. As far as I am concerned this is a leaky abstraction. Moreover, if we decide to change the id property to uuid we have to revisit User and correct it as well to preserve consistent naming. This might not be the biggest of your concerns when it comes to an “id” property, but I hope it makes sense and you can see an imperfection here. The cleaner way to do it would be applying id to the remove function before it is passed to the User component.

<User key={id} {...user} remove={() => this.remove(id)} />

Unfortunately, this technique has performance implications. On each App render, the remove function passed to User would be a newly created function. A brand new function effectively kills React props check optimizations which rely on a reference equality check, and it is a bad practice.
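To see why, note that two identical-looking arrow functions are never equal by reference, so a child relying on a shallow props comparison re-renders every time:

(() => this.remove(id)) === (() => this.remove(id)) // => false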

There is a third option (and probably a fourth and a fifth but bear with me). We can combine memoization and currying to create partially applied event handlers without adding too much complexity. A lot of smart words but it is simple:

import { memoize, curry } from "lodash/fp";

const User = ({ name, remove }) => (
  <li>
    {name}
    <button onClick={remove}>Remove</button>
  </li>
);

class App extends Component {
  state = {
    users: [{ id: "1", name: "Foo" }, { id: "2", name: "Bar" }],
  };

  remove = memoize(curry((id, _ev) => {
    this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) }));
  }));

  render() {
    return (
      <ul>
        {this.state.users.map(({ id, ...user }) =>
          <User key={id} {...user} remove={this.remove(id)} />)}
      </ul>
    );
  }
}

This is a sweet spot. Each user has its own remove function with the id already applied. We don’t have to pass a property that is irrelevant for presentation to the User, and thanks to memoization we did not penalize performance. Each time remove is called with a given id the same function is returned.

this.remove("1")===this.remove("1")// => true

Decorator

Are you into decorators? If you are not up for importing the memoize and curry functions and wrapping your handlers in each container, you may want to go with a property decorator:

// TODO: Reconsider this name, maybe something with "ninja"
function rockstarEventHandler(target, key, descriptor) {
  return {
    get() {
      const memoized = memoize(curry(descriptor.value.bind(this)));
      Object.defineProperty(this, key, {
        configurable: true,
        writable: true,
        enumerable: false,
        value: memoized,
      });
      return memoized;
    },
  };
}

This implementation does not cover all edge cases but it sells the idea. In production you probably want to combine two decorators and leave binding the execution context to the autobind decorator from jayphelps/core-decorators. Usage:

@rockstarEventHandler
remove(id, _ev) {
  this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) }));
}

Conclusion

I cannot say whether this is an optimal solution for you and your team, but I am more than happy with this approach, writing simpler tests and effectively less glue code.

The downside of this approach is that you will probably have to explain to a few coworkers the rationale behind the whole memoize(curry(() => {})) thing. Throwing out a sentence like “it’s just a memoized and partially applied function” probably will not be enough. The bright side is you can always point them here!

Render React portals on the server


React works in, what I would call, a homogeneous manner. A tree of components is going to be rendered into the given DOM container using the render or recently introduced hydrate function. You are not supposed to change DOM elements created by React, or at least not those of components which can return true from shouldComponentUpdate. But what if you need to change an element outside of the React realm? Well, portals are the way to go!

Portals make it possible to escape the React mounting DOM container in an elegant way. This problem had been addressed a long time before ReactDOM 16 introduced an official API to create a portal with createPortal. Examples of such libraries are react-portal or react-gateway.

Managing title and metadata from React

A notorious problem with single page applications is poor or no support from tools which are not especially great at dealing with JavaScript. Google’s crawlers are doing a pretty good job of parsing JavaScript-generated content, but if you would like to enhance your social media feed you need to be able to generate Open Graph tags on the server.

The goal for today is to create a wrapper for ReactDOM’s createPortal in such a way that portals can be rendered on the server.

TL;DR: Use React Portal Universal library.

The first try

Let’s forget about the server side for now, just implement it on the client and see what happens. With the following Head component we can modify the content of the head element, adding a title and a meta description.

const Head = ({ children }) => createPortal(children, document.querySelector("head"));

const Article = ({ title, description, children }) => (
  <article>
    <Head>
      <title>{title}</title>
      <meta name="description" content={description} />
    </Head>
    <h1>{title}</h1>
    {children}
  </article>
);

You can add Open Graph tags analogously.

Each time you render a single article, a new title and a new meta description are going to be set. This is great and covers most of the use cases for a library like react-helmet which, by the way, is an awesome library that has been solving my problems for the past two years.

The current implementation of createPortal fails when it comes to rendering on the server. The core problem with accommodating React’s portals for use in Node.js is that createPortal expects a DOM node, which is not something a Node.js environment can provide. Trying to render the aforementioned piece of code on the server results in an error:

ReferenceError: document is not defined

The second try

The first thing we need to address is preventing the server from calling document.querySelector inside the Head component. There are a few ways to tell whether the current code is run by the server or by the browser. One of the solutions is to look for a window object.

function canUseDOM() {
  return !!(typeof window !== 'undefined' && window.document && window.document.createElement);
}

We would like to avoid per-component checks and avoid duplication. Let’s write our wrapper for createPortal.

function createUniversalPortal(children, selector) {
  if (!canUseDOM()) {
    return null;
  }
  return ReactDOM.createPortal(children, document.querySelector(selector));
}

I would not call it support yet, but we are heading in the right direction. Now we are able to use our Head component on the server. This approach is not very fruitful though. Instead of rendering a portal in the element pointed to by the selector, we just skip rendering altogether.

The third try

To be able to render portals on the server we need two things. The first is the ability to tell the server which components should be rendered and where. The second is the actual way to render them statically in the correct container. We are going to store all server-rendered portals as tuples of React components with their corresponding selectors.

// CLIENT
export const portals = [];

function createUniversalPortal(children, selector) {
  if (!canUseDOM()) {
    portals.push([children, selector]); // yes, mutation (҂◡_◡)
    return null;
  }
  return ReactDOM.createPortal(children, document.querySelector(selector));
}

On the server, we can now access the portals array and iterate over each portal to render it into a string using renderToStaticMarkup provided by ReactDOM. To append such a string to the correct container we can use cheerio. Cheerio is a library for working with HTML strings on the server and provides a subset of the jQuery API to do that.

I do not want to go into great detail now on how to implement server-side rendering for a React application. You can read more about SSR here.

// SERVER
const { load } = require("cheerio");
const { portals } = require("client.js");

function appendUniversalPortals(html) {
  const $ = load(html);
  portals.forEach(([children, selector]) => {
    $(ReactDOMServer.renderToStaticMarkup(children)).appendTo(selector);
  });
  return $.html();
}

const body = ReactDOMServer.renderToString(<App />);
const template = fs.readFileSync(path.resolve("build/index.html"), "utf8");
const html = template.replace("<div id=\"root\"></div>", `<div id="root">${body}</div>`);
const markup = appendUniversalPortals(html);

res.status(200).send(markup);

Now the server should augment the HTML rendered out of the React application with the static output of each portal. This implementation is not flawless either. It works on the first render but portals keep accumulating between renders. We need to flush portals on each render.

// CLIENT
const portals = [];

function createUniversalPortal(children, selector) { ... }

export function flushUniversalPortals() {
  const copy = portals.slice();
  portals.length = 0;
  return copy;
}

We don’t need to export portals anymore. The server can call flushUniversalPortals on each render. We want to keep things separate: flushUniversalPortals should be defined as a part of the client-side code and access the portals variable from its scope.

// SERVER
const { flushUniversalPortals } = require("client.js");

function appendUniversalPortals(html) {
  const $ = load(html);
  flushUniversalPortals().forEach(([children, selector]) => {
    $(ReactDOMServer.renderToStaticMarkup(children)).appendTo(selector);
  });
  return $.html();
}

When you try to reload a page you may notice another problem. Portals are there, but as soon as the React application renders, portals are going to be added again, so we’ll end up with two titles and two meta descriptions.

This problem is not exclusive to this implementation. As I already said, createPortal is not implemented with the server use case in mind and does not try to reuse the existing DOM tree like the render or hydrate functions do. If we can mark which nodes were added statically, we can then remove them on the client side.

// SERVER
function appendUniversalPortals(html) {
  const $ = load(html);
  flushUniversalPortals().forEach(([children, selector]) => {
    const markup = ReactDOMServer.renderToStaticMarkup(children);
    $(markup).attr("data-react-universal-portal", "").appendTo(selector);
  });
  return $.html();
}
// CLIENT
function removeUniversalPortals() {
  if (canUseDOM()) {
    document.querySelectorAll("[data-react-universal-portal]").forEach((node) => {
      node.remove();
    });
  }
}

// somewhere later in the application
removeUniversalPortals();

Where you call removeUniversalPortals is up to you and your implementation. I recommend calling it just before rendering the application. This should be a reasonable default, allowing for rendering the application without problems with reusing the existing HTML.
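In practice, that can look like this on the client entry point (a sketch; the import paths are illustrative and hydrate is the React 16 entry point for server-rendered markup):

import React from "react";
import { hydrate } from "react-dom";
import { removeUniversalPortals } from "./universalPortals"; // hypothetical module with the code above
import App from "./App";

// drop the statically rendered portal nodes first...
removeUniversalPortals();
// ...then let the client-side portals recreate them on render
hydrate(<App />, document.getElementById("root"));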

React Portal Universal

A slightly changed implementation is now available as the React Portal Universal library.

npm install react-portal-universal

It is important to make sure that the React application code is using the same instance of the library as the code responsible for rendering on the server. In other words, there must be only one instance of the portals variable in the process. The problem occurs when you import appendUniversalPortals from node_modules on the server but use a bundle with its own instance to render the application.

The cleanest solution is to mark react-portal-universal as an external dependency in your bundler of choice. Here is how to do this in webpack.

const config = {
  externals: ["react-portal-universal"],
};

Summary

This is a very generic implementation which makes almost no assumptions about how you would like to use it. You can now leverage portals and provide good enough support for clients which do not run JavaScript, like some crawlers and browsers.

Creating a TypeScript library with a minimal setup


There are a few major reasons why you may find yourself creating a library. One, obviously, is that you have a solution which you would like to share with the Open Source community. The other is that you need to reuse code across different projects or in the same project but on different platforms.

TypeScript’s type system and the autocompletion support it gives text editors make it a great language for writing a library. What intimidates me when it comes to extracting a part of the codebase to a separate repository is the need for a setup. There are many boilerplates in the wild. Some library boilerplates include rollup or webpack, but the TypeScript compiler itself is good enough for building even more complex libraries. Using only tsc and a minimal tsconfig you are able to ship code ready to run in both node and the browser, along with type definitions.

package.json

As you probably know package.json not only keeps a list of dependencies but also allows for defining a set of scripts and information about your package such as name, version, author and so on.

The first thing to do is initializing a repository and setting a remote.

git init
git remote add origin https://github.com/<username>/<reponame>.git

You want to do it before generating a package.json, as npm will then be able to set the repository, bugs and homepage values. You can also do it later by reinitializing package.json. Run the init utility to generate a package.json file:

npm init -y

Skip -y to use an interactive mode. Fill in a description, keywords and an author so other developers can find your great library using npm’s search.

TypeScript is not going to be a dependency required to run the library so you should install it as a development dependency. You can install it globally as well but this is something I try to avoid when I can. Using global dependency makes it harder for other developers to build and contribute to your library.

npm install --save-dev typescript

tsconfig.json

Once you have the TypeScript compiler (tsc) installed you can use it to generate a tsconfig.json file.

./node_modules/.bin/tsc --init

Such a tsconfig.json contains the many configuration options you can set, along with their descriptions. I am used to starting my libraries with the following configuration:

{"compilerOptions":{"target":"ES2015","module":"commonjs","declaration":true,"outDir":"lib","strict":true},"include":["src/**/*"]}

I target ES2015+ environments. The current LTS version of Node.js is 8.9.0, which supports 99% of the spec according to node.green, and since Edge, Firefox, Chrome, and Safari support 96%-99% of the spec you might not need to use Babel at all. We also want to generate declaration files so we can preserve type safety, and once you get back to using the library after some time you do not have to remember the API, your editor does. The configuration for outDir and which files are going to be included is arbitrary.

In case of splitting your library across multiple files, you may end up with multiple compiled files inside the lib directory. If that’s not what you aim for, you can decide to build the entire library into a single file. It’s possible to achieve with the outFile option when you target SystemJS or AMD modules.

{"compilerOptions":{"target":"ES2015","module":"amd","declaration":true,"outFile":"lib/index.js","strict":true},"include":["src/**/*"]}

build script

We don’t need anything sophisticated just to show how to set it up. The file is placed in the src directory which aligns with what I have set in tsconfig.json.

// src/index.ts
export function sum(a: number, b: number) {
  return a + b;
}

To compile this code and place it in lib, it is now enough to call tsc from the directory containing the tsconfig.json file.

./node_modules/.bin/tsc

Calling tsc from node_modules is not very convenient. It’s easier to follow convention and define a build script. The other thing is that keeping an already built version of the library inside the repository introduces unnecessary noise. I want to ignore the entire lib directory with git and use the prepare script to build the library when it is packed and published to npm’s registry.

{"main":"./lib/index.js","typings":"./lib/index.d.ts","scripts":{"prepare":"npm run build","build":"tsc",},"files":["lib"]}

When a library is required, the main field in package.json points to the file which becomes the default entry point. You also need to provide a path to the type definitions. Using the files field it is possible to narrow down the files which are going to be included when our package is installed as a dependency. This is an optional configuration and does not have to be set to publish and use the library.

test script

Setting up a test suite for a TypeScript codebase is particularly easy with Jest. It is mostly due to its preprocessing capabilities which remove the need to compile TypeScript files explicitly before running the tests.

npm install --save-dev jest ts-jest @types/jest

The configuration for Jest goes to the package.json file. We’re interested in using both TypeScript and JavaScript modules; the latter are used by Jest internally. We want to transform TypeScript modules with ts-jest. The default pattern used by Jest has to be changed too so it matches TypeScript files.

{"scripts":{"test":"jest"},"jest":{"moduleFileExtensions":["ts","js"],"transform":{"\\.ts$":"<rootDir>/node_modules/ts-jest/preprocessor.js"},"testRegex":"/src/.*\\.spec\\.ts$"}}

We can finally write a few tests for our library.

// src/sum.spec.ts
import { sum } from "./index";

describe("sum", () => {
  it("sums two numbers", () => {
    expect(sum(1, 2)).toEqual(3);
  });
});

That is it!

By no means is this everything you can do for your library. When you get attention from the developer community you will want to set up git hooks. Husky is a great tool to do that and, e.g., force running tests before pushing to the repository. Continuous integration will make your project easier to maintain as well. None of those is crucial though. Go small with only what is necessary and have fun building libraries!
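A minimal husky setup could look like this in package.json (a sketch, assuming husky is installed as a dev dependency and using the precommit/prepush script names of the husky versions from that time):

{
  "scripts": {
    "test": "jest",
    "precommit": "npm test",
    "prepush": "npm test"
  }
}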

The code is available for reference on GitHub MichalZalecki/ts-lib-boilerplate.

Fixtures, the way to manage sample and test data


Fixtures are a thin abstraction layer over sample data in your application which allows for better organizing of the, often complex, data structures representing different entities. If this sounds like a vague description or does not ring a bell, maybe an example will speak to you better.

Mocking

The first scenario concerns mocking. You might happen to hear that mocks are a bad idea, and you would hear right! You definitely do not want to use too many mocks in your tests as you end up with a test suite barely testing anything or, more accurately, testing mocks. The mocking that I have in mind is mocking an external API endpoint during development. You might want to do that to limit access to a service which bills you for throughput, or because the backend provided only documentation without a working implementation yet.

Instead of using an actual endpoint you can come up with an alternative solution which temporarily provides you with a set of resources so you can continue with development.

fetch("http://example.com/clients").then(response=>response.json()).then(clients=>{// do something with clients})// TODO: Use an actuaal endpoint when API is readyPromise.resolve([...Array(10)].map(()=>clientFixture())).then(clients=>{// do somethign with clients})

You may consider using some abstraction which groups access to available endpoints under a unified interface. That would better separate concerns (fetching, error handling, processing) and maybe change behavior based on the production/development environment or a feature flag, but this is beyond the scope of this article.

Testing on “real” data

Imagine that you have to filter clients who agreed to receive email notifications and return the list of their emails. If you start writing your tests already assuming that the function is interested in only two fields, email and emailNotifications, you may end up with the following test.

describe("Clients mappers",()=>{describe("getEmailsToNotify",()=>{it("returns emails of clients who agrred to receive notifications",()=>{constclients=[{email:"foo@example.com",emailNotifications:true},{email:"bar@example.com",emailNotifications:false},{email:"foz@example.com",emailNotifications:true},];expect(getEmailsToNotify(clients)).toEqual(["foo@example.com","foz@example.com",])})})})

The test is simple, easy to read, quick to execute, does not cause any side effects and does not rely on network resources. At first sight, it looks ok. The problem with this test is that it is not very concrete. This is not a data structure we can use in an actual application.

Now we want to change our implementation so we return email together with a name which can be displayed by an email client instead of a raw address.

describe("Clients mappers",()=>{describe("getEmailsToNotify",()=>{it("returns names and emails of clients who agrred to receive notifications",()=>{constclients=[{fullName:"John Doe",email:"foo@example.com",emailNotifications:true},{fullName:"John Doe",email:"bar@example.com",emailNotifications:false},{fullName:"John Doe",email:"foz@example.com",emailNotifications:true},];expect(getEmailsToNotify(clients)).toEqual(["\"John Doe\"<foo@example.com>","\"John Doe\"<foz@example.com>",])})})})

We adjusted the expected result, but after we changed the implementation we also had to go back and change the data we use for testing. This is less than ideal as it makes you switch back and forth. The other disadvantage is that the implementation starts to drive the data when it should be the opposite. You can easily refactor your tests using fixtures:

describe("Clients mappers",()=>{describe("getEmailsToNotify",()=>{it("returns emails of clients who agrred to receive notifications",()=>{constclients=[clientFixture({email:"foo@example.com"}),clientFixture({email:"bar@example.com",emailNotifications:false}),clientFixture({email:"foz@example.com"}),];expect(getEmailsToNotify(clients)).toEqual(["\"John Doe\"<foo@example.com>","\"John Doe\"<foz@example.com>",])})})})

Implementing fixtures

Implementing fixtures is a piece of cake. The most difficult part is coming up with reasonable defaults, e.g. creating each new client with emailNotifications set to true.

importuuidv4from"uuid/v4";interfaceRawClient{id:string;fullName:string;email:string;email_notifications:boolean;}functionclientFixture(props:Partial<RawClient>={}):RawClient{constdefaults:RawClient={id:uuidv4(),fullName:"John Doe",email:"john.doe@example.com",email_notifications:true,};return{...defaults,...props};}constclient=clientFixture();

I am showing an implementation in TypeScript as it better represents the resulting types. It is important to use not only a compatible but the very same type definition as used in the application code. The type is called RawClient not without a reason. I like to keep fixtures so they relate to what I fetch from the server. The format received from the server might not be the same as the one you decided to use internally. I am using mappers to create an additional layer which lets me have more control over the shape of the data I send or fetch. You can read more about mappers.
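A mapper in this setup can be as small as the sketch below; the internal Client shape is made up for illustration.

interface Client {
  id: string;
  fullName: string;
  email: string;
  emailNotifications: boolean;
}

// maps the raw, server-side shape to the shape used internally
function toClient(raw: RawClient): Client {
  return {
    id: raw.id,
    fullName: raw.fullName,
    email: raw.email,
    emailNotifications: raw.email_notifications,
  };
}

const client = toClient(clientFixture());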

DRY

Not repeating yourself is more an outcome than a reason for using fixtures in the first place. Eliminating repetition is the result of having fixture functions which are simply factories for a given type. You can easily compose different fixtures to represent associations as nested data.

const mailingList = mailingListFixture({
  recurring: true,
  clients: [clientFixture(), clientFixture()],
});
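The mailingListFixture used above follows exactly the same pattern; a sketch, with the MailingList shape assumed for illustration:

// uuidv4 and RawClient come from the fixture module shown earlier
interface MailingList {
  id: string;
  recurring: boolean;
  clients: RawClient[];
}

function mailingListFixture(props: Partial<MailingList> = {}): MailingList {
  const defaults: MailingList = {
    id: uuidv4(),
    recurring: false,
    clients: [],
  };

  return { ...defaults, ...props };
}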

Wrap up

I came across fixtures for the first time while testing Rails applications. In general, I recommend skimming through Rails conferences’ agendas as the talks are often full of interesting software engineering solutions. I cannot say that fixtures are vastly popular in JavaScript land, but if simple functions do not meet your expectations you can still find something on npm for yourself.

Testing redux-thunk like you always want it


Redux Thunk is one of the most popular, if not the most popular, Redux middleware with over 2 million downloads a month. If you compare this number to Redux’s 4 million downloads a month, it is easy to figure out that over half of Redux projects are using Redux Thunk. As the name “thunk” suggests, the main goal of Redux Thunk is to allow for lazy evaluation (dispatching) of actions. While this makes it possible to dispatch actions in an asynchronous manner, it also makes them harder to test.

Despite the many different Redux middleware libraries available on npm, Redux Thunk is still one of the more versatile ones thanks to leveraging the simple idea of thunks.
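For context, the whole trick the middleware does is checking whether a dispatched value is a function and, if so, calling it with dispatch and getState. A minimal thunk looks like this (a sketch):

// a plain action object is dispatched immediately
dispatch({ type: "PING" });

// a thunk defers dispatching until the middleware calls it with dispatch and getState
function pingLater(ms) {
  return (dispatch, getState) => {
    setTimeout(() => dispatch({ type: "PING" }), ms);
  };
}

dispatch(pingLater(1000));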

Mocking a dispatch

Let’s imagine a case in which we have a list of purchases and we can manually change their status. It might be impossible to update the state of the purchases list just based on the information we have. One example of such a situation would be when the server adds some specific information connected with a purchase’s state transition, like assigning a parcel tracking code. What we need to do is to fetch the list of purchases again.

function fetchAllPurchases() {
  return dispatch => {
    dispatch({ type: "FETCH_ALL_PURCHASES_STARTED" });

    fetchAllPurchasesRequest()
      .then(response => {
        dispatch({ type: "FETCH_ALL_PURCHASES_SUCCESS", payload: response.data });
      })
      .catch(error => {
        dispatch({ type: "FETCH_ALL_PURCHASES_FAILURE", payload: error });
      });
  };
}

function changePurchaseStatus(id, status) {
  return dispatch => {
    dispatch({ type: "CHANGE_PURCHASE_STATE_STARTED" });

    changePurchaseStatusRequest(id, status)
      .then(() => {
        dispatch({ type: "CHANGE_PURCHASE_STATE_SUCCESS", meta: { id, status } });
        dispatch(fetchAllPurchases());
      })
      .catch(error => {
        dispatch({ type: "CHANGE_PURCHASE_STATE_FAILURE", payload: error });
      });
  };
}

The question arises: how to test it? Well, we can replace dispatch and getState with mock functions. Soon we discover a problem with this approach.

describe("changePurchaseStatus",()=>{it("handles changing a purchase status and fetches all purchases",async()=>{constdispatch=jest.fn();constgetState=jest.fn();awaitchangePurchaseStatus("rylauNS2GG","sent")(dispatch,getState);expect(dispatch).toBeCalledWith({type:"CHANGE_PURCHASE_STATE_STARTED"});expect(dispatch).toBeCalledWith({type:"CHANGE_PURCHASE_STATE_SUCCESS",meta:{id:"rylauNS2GG",status:"sent"}});expect(dispatch).toBeCalledWith({type:"FETCH_ALL_PURCHASES_STARTED"});});});

When we try to assert the dispatching of the FETCH_ALL_PURCHASES_STARTED action, we can see that the latest dispatch has been called with an anonymous function (the inner thunk) instead of an action. That is not good. It was working just fine but broke during a test, as the actual dispatch implementation does more than just pipe actions to a reducer.

Mocking a store

The other approach to testing Redux Thunk involves mocking a store. A store provided by redux-mock-store is better suited for testing thunks in isolation.

// test/utils/mockStore.js
import configureMockStore from "redux-mock-store";
import thunk from "redux-thunk";

export const mockStore = configureMockStore([thunk]);
describe("changePurchaseStatus",()=>{it("handles changing a purchase status and fetches all purchases",async()=>{conststore=mockStore();awaitstore.dispatch(changePurchaseStatus("rylauNS2GG","sent"));constactions=store.getActions();expect(actions[0]).toEqual({type:"CHANGE_PURCHASE_STATE_STARTED"});expect(actions[1]).toEqual({type:"CHANGE_PURCHASE_STATE_SUCCESS",meta:{id:"rylauNS2GG",status:"sent"}});expect(actions[2]).toEqual({type:"FETCH_ALL_PURCHASES_STARTED"});});});

Now we are able to test whether FETCH_ALL_PURCHASES_STARTED is dispatched. Although it works in this case, it is not guaranteed to work when promises do not resolve instantly. Promises should always resolve instantly in your unit tests; that said, you may come across a situation in which it is not worth the hassle. The result is the actions array not containing all of the actions dispatched by a given thunk.
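One way to keep promises resolving instantly is to mock the request module with Jest so the thunk never touches the network (a sketch; the module path is illustrative):

// assuming changePurchaseStatusRequest and fetchAllPurchasesRequest live in api/purchases
jest.mock("../api/purchases", () => ({
  changePurchaseStatusRequest: jest.fn(() => Promise.resolve()),
  fetchAllPurchasesRequest: jest.fn(() => Promise.resolve({ data: [] })),
}));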

The first thing we can do is to return a promise from the thunk.

function changePurchaseStatus(id, status) {
  return dispatch => {
    dispatch({ type: "CHANGE_PURCHASE_STATE_STARTED" });

    return changePurchaseStatusRequest(id, status)
      .then(() => {
        dispatch({ type: "CHANGE_PURCHASE_STATE_SUCCESS", meta: { id, status } });
        dispatch(fetchAllPurchases());
      })
      .catch(error => {
        dispatch({ type: "CHANGE_PURCHASE_STATE_FAILURE", payload: error });
      });
  };
}

For a majority of cases, it is a “good enough” solution.

Less imperative, more declarative

I find referring to actions array indexes inconvenient, especially in cases when the order does not matter. If you, like me, do not feel like changing an implementation just so it is easier to test, you might be interested in a more sophisticated approach.

// test/utils/getAction.js
function findAction(store, type) {
  return store.getActions().find(action => action.type === type);
}

export function getAction(store, type) {
  const action = findAction(store, type);
  if (action) return Promise.resolve(action);

  return new Promise(resolve => {
    store.subscribe(() => {
      const action = findAction(store, type);
      if (action) resolve(action);
    });
  });
}
describe("changePurchaseStatus",()=>{it("handles changing a purchase status and fetches all purchases",async()=>{conststore=mockStore();store.dispatch(changePurchaseStatus("rylauNS2GG","sent"));expect(awaitgetAction(store,"CHANGE_PURCHASE_STATE_STARTED")).toEqual({type:"CHANGE_PURCHASE_STATE_STARTED"});expect(awaitgetAction(store,"CHANGE_PURCHASE_STATE_SUCCESS")).toEqual({type:"CHANGE_PURCHASE_STATE_SUCCESS",meta:{id:"rylauNS2GG",status:"sent"}});expect(awaitgetAction(store,"FETCH_ALL_PURCHASES_STARTED")).toEqual({type:"FETCH_ALL_PURCHASES_STARTED"});});});

We are able to abstract away defining and then accessing actions array. We only care whether an action of a specified type has been dispatched and with what payload.

Wrap up

The ease of testing is important as it determines how much of a developer’s effort is going to be put into making sure everything works. After all, we understand the importance of well-tested code. At the same time, we want the cognitive cost of writing tests to be as low as possible.

If you are using thunks to dispatch other thunks, like in my example, you may also find Tal Kol’s article an interesting read.

Nominal typing techniques in TypeScript


Many functional programming languages like Haskell or Elm have a structural type system. This perfectly lines up with the direction in which the majority of the JavaScript-ish community is heading. Nevertheless, every feature comes with a certain set of trade-offs. Choosing a structural type system allows for greater flexibility but leaves room for a certain class of bugs. What I find interesting is that the answer to the question whether TypeScript, Flow or any other type system adopts a structural or nominal type system does not have to be binary. So, is it possible to have the best of both worlds writing in TypeScript?

Nominal types allow for expressing problem semantics in a way that the correctness of the program, up to some point, can be assured by the type checker. Both TypeScript and Flow type systems are mostly structural. Flow also adopts a few attributes of a nominal type system, but let’s leave it for now. In a structural type system, two different types of the same shape are compatible. In TypeScript, there are a few exceptions, like in the case of private properties. Later I present how to make practical use of this feature.

Ryan Cavanaugh gathered a compelling list of use cases where nominal typing excels.

Before we dive into TypeScript, it is worth mentioning that Flow addresses this very problem with opaque types. When an opaque type is imported, it hides its underlying type. An opaque type resembles a nominal type.
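For comparison, in Flow that looks roughly like this (a sketch in Flow syntax, not TypeScript):

// currency.js (Flow)
// outside of this module USD is opaque: a plain number is not assignable to it
export opaque type USD = number;

export function ofUSD(value: number): USD {
  return value;
}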


Use case

I would like to show you a few approaches to nominal typing wannabe implementations. Each is slightly different, but I would recommend sticking to only one across your code base. To make things simpler, the goal of each implementation is to disallow summing two numbers unless both are in USD.

Approach #1: Class with a private property

class USD {
  private __nominal: void;
  constructor(public value: number) {};
}

class EUR {
  private __nominal: void;
  constructor(public value: number) {};
}

const usd = new USD(10);
const eur = new EUR(10);

function gross(net: USD, tax: USD) {
  return { value: net.value + tax.value } as USD;
}

gross(usd, usd); // ok
gross(eur, usd); // Error: Types have separate declarations of a private property '__nominal'.

The main difference between this approach and any of the following ones is that it does not require performing a type assertion (or casting if you wish). The types of the usd and eur variables can be correctly inferred as we only create instances of a class. They are not compatible due to separate declarations of a private property. On the other hand, the main disadvantage is that the class is a redundant construct from a purely logical standpoint.

Approach #2: Brands

interface USD {
  _usdBrand: void;
  value: number;
}

interface EUR {
  _eurBrand: void;
  value: number;
}

let usd: USD = { value: 10 } as USD;
let eur: EUR = { value: 10 } as EUR;

function gross(net: USD, tax: USD) {
  return { value: net.value + tax.value } as USD;
}

gross(usd, usd); // ok
gross(eur, usd); // Error: Property '_usdBrand' is missing in type 'EUR'.

As long as interfaces have different properties they are incompatible. The TypeScript team itself follows this convention. We never assign values to a brand property so there is no runtime cost. There are cases in which an interface can be overkill, but it is the simplest way I am aware of to get a taste of nominal typing in TypeScript.

Approach #3: Intersection types

class Currency<T extends string> {
  private as: T;
}

type USD = number & Currency<"USD">
type EUR = number & Currency<"EUR">

const usd = 10 as USD;
const eur = 10 as EUR;

function gross(net: USD, tax: USD) {
  return (net + tax) as USD;
}

gross(usd, usd); // ok
gross(eur, usd); // Error: Type '"EUR"' is not assignable to type '"USD"'.

Here we take advantage of intersection types. Both the USD and EUR types have the features of both number and Currency<T>. We never actually assign a value to the as property, it does not exist at runtime, and the Currency class itself will be compiled to an empty class.

Although Currency could have a more abstract and generic name, I would avoid generalization here as it can easily get out of control in a real life project if followed as a mantra.

function ofUSD(value: number) {
  return value as USD;
}

function ofEUR(value: number) {
  return value as EUR;
}

const usd = ofUSD(10);
const eur = ofEUR(10);

function gross(net: USD, tax: USD) {
  return ofUSD(net + tax);
}

To avoid littering the code with explicit type assertions, it is convenient to create a separate helper function, as shown above.

Approach #4: Intersection types and brands

type Brand<K, T> = K & { __brand: T }

type USD = Brand<number, "USD">
type EUR = Brand<number, "EUR">

const usd = 10 as USD;
const eur = 10 as EUR;

function gross(net: USD, tax: USD): USD {
  return (net + tax) as USD;
}

gross(usd, usd); // ok
gross(eur, usd); // Type '"EUR"' is not assignable to type '"USD"'.

This approach is a mix of the two previous ones. Despite being a little hacky, I find it the most elegant and clean solution. The error message is still descriptive. Moreover, Brand is only a type and will not be present in the output code.


Convert files for the web from your terminal


For me, using a terminal is fundamental to task automation. I love to augment my workflow using command line tools. One of the things I try to automate is preparing assets for use in web apps. This post is a kind of documentation for me.

The presented tools are mostly cross-platform and it might be the case that you already have some of them installed. To stick to the merits of the post I decided not to include any tips on how to get those tools up and running. Nonetheless, I have not struggled with any of these so you should be just fine. Just google a command and follow the instructions specific to your operating system.


JPG, PNG, SVG to Base64

OpenSSL provides us with command line utilities. To perform Base64 encoding we use the enc command with the -base64 option, or -a for short. The tool is not restricted to any particular format, although I rarely use it for something other than encoding an image.

$ openssl enc -base64 -in image.jpg > image.jpg.b64
$ openssl enc -a -in image.jpg > image.jpg.b64

The result of each of the commands is the same. I use an 84KB JPEG file which is encoded to a 114KB Base64 representation. Make sure your use case justifies the overhead of the Base64 representation, which is on average about 137% of the size of the binary file (Base64 encodes every 3 bytes as 4 characters, and openssl adds a line break every 64 characters).

Add a data URI prefix to use a Base64 string as an image source or background.

<imgsrc="data:image/jpeg;base64,/4AAQSk[…]FRQP9k="><imgsrc="data:image/png;base64,/4AAQSk[…]FRQP9k="><imgsrc="data:image/svg+xml;base64,/4AAQSk[…]FRQP9k=">

You can easily decode a Base64 file back to its original representation by passing the -d option, which stands for decode.

$ openssl enc -d -base64 -in image.jpg.b64 > image.jpg
$ openssl enc -d -a -in image.jpg.b64 > image.jpg

SVG to PNG

When you want to convert an SVG to a PNG file you have a few options. One of them is to open the file in the browser and try to save it as a PNG. It is neither quick nor fun, but it is more than likely that you already have a browser. You are reading this blog post after all. Instead of doing it by hand you can use one of a few command line tools.

My first choice is always svgexport. Definitely give this project a star on GitHub!

$ svgexport image.svg image.png

By default, the PNG size is derived from the viewBox of the <svg> element. In my case, it is the 256 pixels wide and 228 pixels high React.js logo from SVG Porn. Now I would like to prepare a file for higher pixel density devices. I can use a scale, set both dimensions, or set only one (like the width) and the height will be scaled to preserve the ratio.

$ svgexport image.svg image@2x.png 2x
$ svgexport image.svg image@2x.png 512:

I suggest you also experiment with the quality to get the best quality/size ratio. In many cases the default 100% is overkill. I often set it to 70% and take it from there.

$ svgexport image.svg image-1.0.png 100%
$ svgexport image.svg image-0.7.png 70%

Looking at the MBP 13” 2017 screen I cannot tell the difference between those two, but the size varies significantly. It is 18KB for 70% and 229KB for 100%.

Inkscape is a very decent alternative to svgexport. After the initial start of XQuartz (X11 for macOS), it is pretty fast too. The biggest advantage of Inkscape over the other tools is its versatility.

$ inkscape -z -w 512 $PWD/image.svg -e $PWD/image.ink.png

Optimize SVG size

Let’s stick to the topic of SVG files. Very often a file sent to you by a graphic designer is not optimized for size. How can you optimize SVG? It is a text file, so it will benefit vastly from enabling gzip or brotli, but that is not the optimization I would like to show you. It is likely that an SVG file made in Illustrator or Sketch contains multiple comments, editor metadata or unnecessarily precise values (such as decimal places insignificant on a screen).

You can easily get rid of all this bloat with svgo. There are multiple configuration options available.

$ svgo --pretty image.svg
$ svgo -f /path/to/directory

PDF to PNG

There are two major solutions for converting a PDF document to PNG images from your terminal: ImageMagick and Ghostscript. To use ImageMagick’s convert command you will need to install Ghostscript anyway, as gs is used by convert to rasterize vector files. You can use gs directly, it is much faster than ImageMagick. Nonetheless, I have noticed that ImageMagick is doing a better job when it comes to text smoothing, so you may want to play with both before deciding.

$ convert -density 300 -background white -alpha remove test.pdf test.png

Converting a 9-page PDF document took ImageMagick 14 seconds and the total size of the output files is 1.9MB.

$ gs -sDEVICE=png16m -dTextAlphaBits=4 -r300 -o test-%02d.png test.pdf

Converting the same PDF took Ghostscript only 2.6 seconds and the total size of the output files is 2.4MB.

DOCX to PDF

I have not found a perfect solution to convert a DOCX document created with Microsoft Word to PDF preserving a 1:1 layout. Well, except using Word itself. That said, it is still possible to achieve a really good result using LibreOffice. It happens that LibreOffice can run in a headless mode!

The command is not the same across different platforms. On macOS, it is soffice and you will find it at /Applications/LibreOffice.app/Contents/MacOS/soffice; I suggest creating a symlink. On Linux, it should be just libreoffice, and soffice.exe on Windows.

$ soffice --headless --convert-to pdf test.docx

I have used this approach before and explained it in more detail in a different post: Converting DOCX to PDF using Python.

Remove EXIF data

Removing EXIF information is not only a good thing to do concerning your privacy, it can also save some kilobytes sent down the wire. From the overall file size standpoint, information about your camera model or GPS coordinates is negligible, but it all adds up. Removing all EXIF data from a photo taken by an iPhone shaves 5KB off. To put it into perspective, 5KB is the size of a non-trivial library.

$ exiftool -all= IMG_0001.jpg

Working on multiple files

The Unix philosophy of combining “small, sharp tools” pays off. You have just gone through a list of (I hope) helpful examples of how to convert various file formats. Using the find command you can fairly easily perform those operations on multiple files.

$ find . -name "*.jpeg" -exec sh -c "exiftool -all= {}" \;

Wrap up

That is it, for now. I am going to add more as soon as I find myself using something worth sharing. Feel free to let me know if there is a tool you use which I am missing here.


Using Sequelize with TypeScript


Sequelize is an ORM for Node.js written in JavaScript, not TypeScript. Although good quality typings are available, it is not straightforward how to get up to speed with Sequelize and TypeScript. I would like to go through the crucial elements and show how to maximize the safety coming from static typing when using Sequelize. Let’s start with setting things up.

Setup

Sequelize is very flexible when it comes to how you decide to structure your project. The basic directory structure can be configured with .sequelizerc. At the same time, Sequelize makes a few assumptions, and to take full advantage of certain features you should comply. One such assumption is that the directory to which you generate models (using sequelize-cli) is the same as the directory from which you access those models. It might be true in the case of a JavaScript project, but it is not uncommon to build a TypeScript project into a separate directory like build or dist.

{"compilerOptions":{"target":"ES2017","module":"commonjs","strict":true,"moduleResolution":"node"},"include":["src/**/*.ts"]}

To compile .ts files to .js files living in the same directory, it is enough to just include them, specifying a correct path. Remove outDir from your tsconfig.json.

// .sequelizerc
const path = require("path");

module.exports = {
  config: path.resolve("src", "db", "config.json"),
  "models-path": path.resolve("src", "db", "models"),
  "seeders-path": path.resolve("src", "db", "seeders"),
  "migrations-path": path.resolve("src", "db", "migrations")
};

Pointing Sequelize to the correct directory makes it possible to use the command line tools to generate a migration or populate a seed.

Models

Models are where things get interesting. There are three entities you should understand before moving forward: Attributes, Instance, and Model.

interface PostAttributes {} // fields of a single database row
interface PostInstance {}   // a single database row
interface PostModel {}      // a table in the database

The Attributes interface is a simple definition of all attributes you specify when creating a new object. It is better to think about it this way than just as the fields of a single database row. The reason is that you do not want the auto-generated id or updatedAt field to be required when saving a new record.

// src/db/models/product.ts
interface ProductAttributes {
  id?: string;        // id is an auto-generated UUID
  name: string;
  price: string;      // DOUBLE is a string to preserve floating point precision
  archived?: boolean; // is false by default
  createdAt?: string;
  updatedAt?: string;
}

Instance represents an actual row you fetch from the database. It should contain all attributes and a few additional methods such as getValue or save. There is nothing more to defining an instance type than combining Sequelize.Instance&lt;TAttributes&gt; and TAttributes.

// src/db/models/product.ts
type ProductInstance = Sequelize.Instance<ProductAttributes> & ProductAttributes;

A Model is created using sequelize.define. I recommend not instantiating the model directly but wrapping it into a factory function and exporting that function.

// src/types.d.ts
import { DataTypeAbstract, DefineAttributeColumnOptions } from "sequelize";

declare global {
  type SequelizeAttributes<T extends { [key: string]: any }> = {
    [P in keyof T]: string | DataTypeAbstract | DefineAttributeColumnOptions;
  };
}
// src/db/models/product.ts
export default (sequalize: Sequelize.Sequelize) => {
  const attributes: SequelizeAttributes<ProductAttributes> = {
    id: { type: Sequelize.UUID, primaryKey: true, defaultValue: Sequelize.UUIDV4 },
    name: { type: Sequelize.STRING, allowNull: false },
    price: { type: Sequelize.DECIMAL(10, 2), allowNull: false },
    archived: { type: Sequelize.BOOLEAN, allowNull: false, defaultValue: false },
  };

  return sequalize.define<ProductInstance, ProductAttributes>("Product", attributes);
};

The SequelizeAttributes helper type will not let you forget to specify an attribute or leave a type unimplemented in your attributes interface. Out of the box, sequelize.define does not give you this guarantee.

Model loader

The model loader is a module exporting an object with all models available. The loader you get after executing sequelize init is smart in the sense that it dynamically loads all models from the models’ directory, building db in a loop. It makes type inference impossible, so I am used to ditching it and replacing it with an explicit declaration of the db object.

// src/db/models/index.js
import * as Sequelize from "sequelize";
import productFactory from "./product";

const env = process.env.NODE_ENV || "development";
const config = require(__dirname + "/../config.json")[env];
const url = config.url || process.env.DATABSE_CONNECTION_URI;
const sequelize = new Sequelize(url, config);

const db = {
  sequelize,
  Sequelize,
  Product: productFactory(sequelize),
};

Object.values(db).forEach((model: any) => {
  if (model.associate) {
    model.associate(db);
  }
});

export default db;
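
With the explicit db object in place, queries made through it are fully typed. Below is a minimal sketch (the file path and the query itself are only an example) of the kind of safety this gives us; a typo in an attribute name would be caught at compile time.

// src/example.ts (a hypothetical consumer of the models)
import db from "./db/models";

export async function listActiveProducts(): Promise<string[]> {
  // findAll resolves to ProductInstance[], so name and price are type-checked
  const products = await db.Product.findAll({ where: { archived: false } });
  return products.map(product => `${product.name}: ${product.price}`);
}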

Wrap up

You may also go a step further and write seeders and migrations in TypeScript as well. I am not doing it, as the only consumer of those files is sequelize-cli. I edit the generated JavaScript files, but with this config it is not a problem if you want to use TypeScript; just change the extension of the generated files to .ts.
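
For reference, a generated migration kept in JavaScript could look more or less like this (a sketch; the Products table and its columns are only an example):

// src/db/migrations/20180101000000-create-products.js (hypothetical file name)
module.exports = {
  // sequelize-cli passes queryInterface and the Sequelize module into up/down
  up: (queryInterface, Sequelize) =>
    queryInterface.createTable("Products", {
      id: { type: Sequelize.UUID, primaryKey: true },
      name: { type: Sequelize.STRING, allowNull: false },
      createdAt: { type: Sequelize.DATE, allowNull: false },
      updatedAt: { type: Sequelize.DATE, allowNull: false },
    }),

  down: queryInterface => queryInterface.dropTable("Products"),
};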

As further reading on the topic, I recommend the comments from the type definitions. They often come in handy during development and are well structured.

Ethereum: Test driven introduction to Solidity


Depending on how you count, second and third generation blockchain applications are not bound by the restrictions of underlying protocols. Programmers can create smart contracts: distributed applications with access to code-controlled accounts, as opposed to accounts controlled by a private key. Use cases go far beyond exchanging value and apply wherever users benefit from replacing trust between parties with code.

Ethereum Blockchain is a decentralized platform which provides us with a runtime environment for smart contracts called Ethereum Virtual Machine (EVM). Contracts are completely isolated with limited access to other smart contracts. If you are new in this space, for now, you can think about Ethereum as a slow but reliable and secure computer.

In this post, I would like to introduce you to Solidity through a TDD approach. If you are new to Solidity, this might be a bit of a challenge. I will do my best to keep the learning curve gentle. On the other hand, if you are already familiar with creating smart contracts, I hope this post will help you to get better at testing. I recommend you try CryptoZombies if you find yourself confused by the Solidity code in this post.

There are a few reasons why I find testing smart contracts very important and interesting. You can choose Solidity or JavaScript as a language for writing tests. You can also write in both! Depending on what you would like to test, one testing environment is superior to the other.

Smart contracts are not a good fit for “move fast and break things” kind of mindset. The blockchain is immutable in the sense that a once approved transaction is going to stay. Since smart contract deployment happens through a transaction, this originates in the inability to fix issues quickly. That is why having a reliable set of tests is so crucial. There are of course different techniques which allow you to introduce escape hatches and migrate to a new, improved version of the smart contract. It comes with a set of potential security vulnerabilities and too often gives a little bit too much power in the hands of an owner of the contract. This raises a question about real decentralization of the app.

Setup

The easiest way to start with smart contracts development for Ethereum is through Remix, an online IDE. It does not require any setup and integrates nicely with MetaMask to allow you for deploying contracts to a particular network. Despite all that I am going with Truffle.

Truffle is a popular development framework written in JavaScript. It comes with neither a convention for your smart contracts nor a utility library, but it provides you with a local environment for developing, testing, and deploying contracts. For starters, install Truffle locally.

npm i truffle

Truffle makes it possible to kick off the project using one of many boxes. A box is a boilerplate containing something more than just a necessary minimum. We are interested in starting a new project from scratch.

./node_modules/.bin/truffle init

Look around; there is not much there yet. The only interesting bit is a Migrations.sol contract and its migration file. History of migrations you are going to make over time is recorded on-chain through a Migrations contract.

Solidity is not the only language in which one can create a smart contract on the EVM. Solidity compiles directly to EVM bytecode. There’s also LLL (low level, one step above EVM bytecode) and Serpent (LLL’s super-set), which I would not recommend due to known security issues. Another language is Vyper, which aims to provide better security through simplicity and to increase the auditability of smart contract code. It is still in an experimental phase.

Rules

We are going to build a contract which allows for fundraising. Mechanics are the same as a single Kickstarter campaign. There is a particular time to reach the goal. If this does not happen, donators are free to request a refund of the transferred Ether. If the goal is reached, the owner of the smart contract can withdraw funds. We also want to allow for an “anonymous” donation which is merely a transfer of funds to the smart contract. I am saying anonymous, but as with all transactions, it is a publicly visible Ether transfer. We are just unable to refund those funds.

With clearly defined scope we can start implementing our smart contract.

Setting an owner

The first feature we want our smart contract to have is an owner. Before we start writing the first test let’s create an empty Funding contract so our tests can compile.

// contracts/Funding.sol
pragma solidity ^0.4.17;

contract Funding {}

Now, with an empty contract defined we can create a testing contract.

// test/FundingTest.sol
pragma solidity ^0.4.17;

import "truffle/Assert.sol";
import "../contracts/Funding.sol";

contract FundingTest {}

Now, run tests.

$ ./node_modules/.bin/truffle test
Compiling ./contracts/Funding.sol...
Compiling ./contracts/Migrations.sol...
Compiling ./test/FundingTest.sol...
Compiling truffle/Assert.sol...


  0 passing (0ms)

Yey! If you got it right, contracts should compile without any errors. But we still don’t have any tests; we need to fix that. We want Funding to store an address of its deployer as an owner.

contract FundingTest {
  function testSettingAnOwnerDuringCreation() public {
    Funding funding = new Funding();
    Assert.equal(funding.owner(), this, "An owner is different than a deployer");
  }
}

Each smart contract has an address. An instance of a smart contract is implicitly convertible to its address, and this.balance returns the contract’s balance. One smart contract can instantiate another, so we expect the owner of funding to be the testing contract itself. Now, to the implementation.

contract Funding {
  address public owner;

  function Funding() public {
    owner = msg.sender;
  }
}

Like in C#, a constructor of the contract has to have the same name as the class (the contract in this case). The sender of the message inside the constructor is the deployer. Let’s rerun the tests!

FundingTest
    ✓ testSettingAnOwnerDuringCreation (64ms)


  1 passing (408ms)

We can create an equivalent test in JavaScript.

// test/FundingTest.js
const Funding = artifacts.require("Funding");

contract("Funding", accounts => {
  const [firstAccount] = accounts;

  it("sets an owner", async () => {
    const funding = await Funding.new();
    assert.equal(await funding.owner.call(), firstAccount);
  });
});

In JavaScript, we can require a contract using artifacts.require. Instead of describe which you may know from other testing frameworks we use contract which does some cleanup and provides a list of available accounts. The first account is used by default during tests.

FundingTest
    ✓ testSettingAnOwnerDuringCreation (66ms)

  Contract: Funding
    ✓ sets an owner (68ms)


  2 passing (551ms)

Apart from creating a new contract during tests, we would also like to access contracts deployed through a migration.

import"truffle/DeployedAddresses.sol";contractFundingTest{functiontestSettingAnOwnerOfDeployedContract()public{Fundingfunding=Funding(DeployedAddresses.Funding());Assert.equal(funding.owner(),msg.sender,"An owner is different than a deployer");}}

It fails as we do not have any migration for our Funding contract.

// migrations/2_funding.js
const Funding = artifacts.require("./Funding.sol");

module.exports = function(deployer) {
  deployer.deploy(Funding);
};

We can now rerun tests.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (70ms)
    ✓ testSettingAnOwnerOfDeployedContract (63ms)

  Contract: Funding
    ✓ sets an owner (62ms)


  3 passing (744ms)

Accepting donations

Next feature on the roadmap is accepting donations. Let’s start with a test in Solidity.

contract FundingTest {
  uint public initialBalance = 10 ether;

  function testAcceptingDonations() public {
    Funding funding = new Funding();
    Assert.equal(funding.raised(), 0, "Initial raised amount is different than 0");
    funding.donate.value(10 finney)();
    funding.donate.value(20 finney)();
    Assert.equal(funding.raised(), 30 finney, "Raised amount is different than sum of donations");
  }
}

We use a unit called Finney. You should know that the smallest, indivisible unit of Ether is called Wei (it fits uint type).

  • 1 Ether is 10^18 Wei
  • 1 Finney is 10^15 Wei
  • 1 Szabo is 10^12 Wei
  • 1 Shannon is 10^9 Wei
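
If you prefer not to memorize the table, web3 (injected into Truffle’s JavaScript tests) can do the conversion for you; a quick sketch:

// unit conversions with web3, as available in Truffle's test environment
web3.toWei(1, "ether");   // "1000000000000000000" Wei
web3.toWei(1, "finney");  // "1000000000000000" Wei
web3.toWei(1, "szabo");   // "1000000000000" Wei
web3.toWei(1, "shannon"); // "1000000000" Wei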

Initially, a contract has no spare ethers to transfer so we can set an initial balance. Ten ether is more than enough. Let’s write an equivalent JavaScript test.

const FINNEY = 10 ** 15;

contract("Funding", accounts => {
  const [firstAccount, secondAccount] = accounts;

  it("accepts donations", async () => {
    const funding = await Funding.new();
    await funding.donate({ from: firstAccount, value: 10 * FINNEY });
    await funding.donate({ from: secondAccount, value: 20 * FINNEY });
    assert.equal(await funding.raised.call(), 30 * FINNEY);
  });
});

The implementation can be as follows.

contract Funding {
  uint public raised;
  address public owner;

  function Funding() public {
    owner = msg.sender;
  }

  function donate() public payable {
    raised += msg.value;
  }
}

For now, that is everything needed to make the tests pass.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (68ms)
    ✓ testSettingAnOwnerOfDeployedContract (63ms)
    ✓ testAcceptingDonations (80ms)

  Contract: Funding
    ✓ sets an owner (56ms)
    ✓ accepts donations (96ms)


  5 passing (923ms)

Now, we would like to keep track of who donated how much.

function testTrackingDonatorsBalance() public {
  Funding funding = new Funding();
  funding.donate.value(5 finney)();
  funding.donate.value(15 finney)();
  Assert.equal(funding.balances(this), 20 finney, "Donator balance is different than sum of donations");
}

Testing with JavaScript gives us an ability to test for multiple different accounts.

it("keeps track of donator balance",async()=>{constfunding=awaitFunding.new();awaitfunding.donate({from:firstAccount,value:5*FINNEY});awaitfunding.donate({from:secondAccount,value:15*FINNEY});awaitfunding.donate({from:secondAccount,value:3*FINNEY});assert.equal(awaitfunding.balances.call(firstAccount),5*FINNEY);assert.equal(awaitfunding.balances.call(secondAccount),18*FINNEY);});

For tracking the balance of a particular user, we can use a mapping. We have to mark the function as payable so that it allows users to send Ether along with function calls.

contract Funding {
  uint public raised;
  address public owner;
  mapping(address => uint) public balances;

  function Funding() public {
    owner = msg.sender;
  }

  function donate() public payable {
    balances[msg.sender] += msg.value;
    raised += msg.value;
  }
}

By now, tests should pass.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (61ms)
    ✓ testSettingAnOwnerOfDeployedContract (65ms)
    ✓ testAcceptingDonations (97ms)
    ✓ testTrackingDonatorsBalance (61ms)

  Contract: Funding
    ✓ sets an owner (51ms)
    ✓ accepts donations (96ms)
    ✓ keeps track of donator balance (134ms)


  7 passing (1s)

Time constraint

Our donators can now donate, but there is no time constraint. We would like users to send us some Ether, but only until fundraising is finished. We can get the current block timestamp by reading the now property. If you start writing tests in Solidity, you will quickly realize that there is no easy way to manipulate block time from the testing smart contract. There is also no sleep method which would allow us to set a tiny duration, wait for a second or two, and try again, simulating that the time for donating is up.

The other solution would be to make it possible to set the address of a contract from which we read the current timestamp. This way we could mock this contract in tests, injecting it as a dependency.

// contracts/Clock.sol
pragma solidity ^0.4.17;

contract Clock {
  uint private timestamp;

  function getNow() public view returns (uint) {
    if (timestamp > 0) {
      return timestamp;
    }
    return now;
  }

  function setNow(uint _timestamp) public returns (uint) {
    timestamp = _timestamp;
  }
}

This is how we can implement a Clock contract. We would need to restrict changing the timestamp to the owner, but it is not that important right now. It is enough to make tests green.

function testFinishingFundRising() public {
  Clock clock = Clock(DeployedAddresses.Clock());
  Funding funding = new Funding(1 days, address(clock));
  Assert.equal(funding.isFinished(), false, "Is finished before time is up");
  clock.setNow(now + 1 days);
  Assert.equal(funding.isFinished(), true, "Is not finished before time is up");
}

After we have changed the timestamp of the Clock contract, fundraising is finished. After you add a new contract, you have to remember to migrate it.

// migrations/2_funding.js
const Funding = artifacts.require("./Funding.sol");
const Clock = artifacts.require("./Clock.sol");

const DAY = 3600 * 24;

module.exports = async function(deployer) {
  await deployer.deploy(Clock);
  await deployer.deploy(Funding, DAY, Clock.address);
};

Now to the implementation.

import"./Clock.sol";contractFunding{[...]uintpublicfinishesAt;Clockclock;functionFunding(uint_duration,address_clockAddress)public{owner=msg.sender;clock=Clock(_clockAddress);finishesAt=clock.getNow()+_duration;}functionisFinished()publicviewreturns(bool){returnfinishesAt<=clock.getNow();}[...]}

Tests should now pass.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (86ms)
    ✓ testAcceptingDonations (112ms)
    ✓ testTrackingDonatorsBalance (64ms)
    ✓ testFinishingFundRising (58ms)

  Contract: Funding
    ✓ sets an owner (64ms)
    ✓ accepts donations (115ms)
    ✓ keeps track of donator balance (165ms)


  7 passing (1s)

Although tests are passing, I would stop here for a moment. We had to create a separate contract, acting as a dependency, just to be able to test the implementation. I was proud of myself but taking into consideration that we have just added another attack vector I think this solution is somewhat dumb rather than smart. Let’s take a step back.

JSON-RPC to the rescue

I have already mentioned that there is no easy way to manipulate block time from Solidity (at least at the time of writing). JSON-RPC is a stateless, remote procedure call protocol. Ethereum provides multiple methods which we can remotely execute. One of the use cases for it is creating Oracles. We are not going to use JSON-RPC directly but through web3.js which provides a convenient abstraction for RPC calls.

// source: https://github.com/OpenZeppelin/zeppelin-solidity/blob/master/test/helpers/increaseTime.js
module.exports.increaseTime = function increaseTime(duration) {
  const id = Date.now();

  return new Promise((resolve, reject) => {
    web3.currentProvider.sendAsync(
      { jsonrpc: "2.0", method: "evm_increaseTime", params: [duration], id: id },
      err1 => {
        if (err1) return reject(err1);

        web3.currentProvider.sendAsync(
          { jsonrpc: "2.0", method: "evm_mine", id: id + 1 },
          (err2, res) => {
            return err2 ? reject(err2) : resolve(res);
          }
        );
      }
    );
  });
};

Calling increaseTime results in two RPC calls. You will not find them on the Ethereum wiki page. Both evm_increaseTime and evm_mine are non-standard methods provided by Ganache, the development blockchain for Ethereum we use when running tests.

const { increaseTime } = require("./utils");

const DAY = 3600 * 24;

contract("Funding", accounts => {
  [...]
  let funding;

  beforeEach(async () => {
    funding = await Funding.new(DAY);
  });

  it("finishes fund raising when time is up", async () => {
    assert.equal(await funding.isFinished.call(), false);
    await increaseTime(DAY);
    assert.equal(await funding.isFinished.call(), true);
  });
});

By now, this should be the entire Funding contract. This implementation is much more straightforward than the one we used before.

// contracts/Funding.sol
pragma solidity ^0.4.17;

contract Funding {
  uint public raised;
  uint public finishesAt;
  address public owner;
  mapping(address => uint) public balances;

  function Funding(uint _duration) public {
    owner = msg.sender;
    finishesAt = now + _duration;
  }

  function isFinished() public view returns (bool) {
    return finishesAt <= now;
  }

  function donate() public payable {
    balances[msg.sender] += msg.value;
    raised += msg.value;
  }
}

Tests should be now passing.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (64ms)
    ✓ testSettingAnOwnerOfDeployedContract (57ms)
    ✓ testAcceptingDonations (78ms)
    ✓ testTrackingDonatorsBalance (54ms)

  Contract: Funding
    ✓ sets an owner
    ✓ accepts donations (60ms)
    ✓ keeps track of donator balance (89ms)
    ✓ finishes fund raising when time is up (38ms)


  8 passing (1s)

Modifiers and testing throws

We can now tell whether fundraising finished, but we are not doing anything with this information. Let’s put a limitation on how long people can donate.

Since Solidity 0.4.13, throw is deprecated. The new functions for handling state-reverting exceptions are require(), assert(), and revert(). You can read more about the differences between those calls here.

All exceptions bubble up, and there is no try...catch in Solidity. So how do we test for throws using just Solidity? The low-level call function returns false if an error occurred and true otherwise. You can also use a proxy contract to achieve the same in what you may consider a more elegant way, although I prefer one-liners.

function testDonatingAfterTimeIsUp() public {
  Funding funding = new Funding(0);
  bool result = funding.call.value(10 finney)(bytes4(bytes32(keccak256("donate()"))));
  Assert.equal(result, false, "Allows for donations when time is up");
}

I am cheating here a little bit because a contract has a duration set to 0 which makes it out-of-date from the get-go. In JavaScript we can just use try...catch to handle an error.

it("does not allow for donations when time is up",async()=>{awaitfunding.donate({from:firstAccount,value:10*FINNEY});awaitincreaseTime(DAY);try{awaitfunding.donate({from:firstAccount,value:10*FINNEY});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

We can now restrict time for calling donate with onlyNotFinished modifier.

contract Funding {
  [...]
  modifier onlyNotFinished() {
    require(!isFinished());
    _;
  }

  function isFinished() public view returns (bool) {
    return finishesAt <= now;
  }

  function donate() public onlyNotFinished payable {
    balances[msg.sender] += msg.value;
    raised += msg.value;
  }
}

Both new tests should now pass.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (72ms)
    ✓ testSettingAnOwnerOfDeployedContract (55ms)
    ✓ testAcceptingDonations (78ms)
    ✓ testTrackingDonatorsBalance (56ms)
    ✓ testDonatingAfterTimeIsUp (46ms)

  Contract: Funding
    ✓ sets an owner
    ✓ accepts donations (54ms)
    ✓ keeps track of donator balance (85ms)
    ✓ finishes fund raising when time is up
    ✓ does not allow for donations when time is up (52ms)


  10 passing (1s)

Withdrawal

We accept donations, but it is not yet possible to withdraw any funds. An owner should be able to do it only when the goal has been reached. We also cannot set a goal. We would like to do it when deploying a contract - as we did when we were setting contract duration.

contract FundingTest {
  Funding funding;

  function() public payable {}

  function beforeEach() public {
    funding = new Funding(1 days, 100 finney);
  }

  function testWithdrawalByAnOwner() public {
    uint initBalance = this.balance;
    funding.donate.value(50 finney)();
    bool result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, false, "Allows for withdrawal before reaching the goal");

    funding.donate.value(50 finney)();
    Assert.equal(this.balance, initBalance - 100 finney, "Balance before withdrawal doesn't correspond the sum of donations");

    result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, true, "Doesn't allow for withdrawal after reaching the goal");
    Assert.equal(this.balance, initBalance, "Balance after withdrawal doesn't correspond the sum of donations");
  }

  function testWithdrawalByNotAnOwner() public {
    // Make sure to check what goal is set in the migration (here also 100 Finney)
    funding = Funding(DeployedAddresses.Funding());
    funding.donate.value(100 finney)();
    bool result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, false, "Allows for withdrawal by not an owner");
  }
}

A lot is going on here. First of all, the empty function marked as payable allows the contract to accept Ether via a standard transaction (without data), as if it were an ordinary account controlled by a public key. This unnamed function is called a fallback function. It can neither have any arguments nor return a value. There is such a small amount of gas to use (2300) that it would be impossible to modify state anyway. We have to implement this function to test withdrawing funds to the testing contract.

Truffle will also call the beforeEach hook before every test, so we can move creating a new contract there, as we do in JavaScript. In a test case, we can overwrite the variable pointing to the funding contract when it requires different constructor params or needs to refer to an already deployed contract.

From Solidity, we are not able to select an address from which we want to make a transaction. By design, the address of the smart contract is going to be used. What we can do to test withdrawal from an account which is not an owner is to use the deployed contract instead of one created by the testing contract. Trying to withdraw in such a case should always fail. One restriction is that you cannot specify constructor params - the migration script has already deployed this contract.

it("allows an owner to withdraw funds when goal is reached",async()=>{awaitfunding.donate({from:secondAccount,value:30*FINNEY});awaitfunding.donate({from:thirdAccount,value:70*FINNEY});constinitBalance=web3.eth.getBalance(firstAccount);assert.equal(web3.eth.getBalance(funding.address),100*FINNEY);awaitfunding.withdraw();constfinalBalance=web3.eth.getBalance(firstAccount);assert.ok(finalBalance.greaterThan(initBalance));// hard to be exact due to the gas usage});it("does not allow non-owners to withdraw funds",async()=>{funding=awaitFunding.new(DAY,100*FINNEY,{from:secondAccount});awaitfunding.donate({from:firstAccount,value:100*FINNEY});try{awaitfunding.withdraw();assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

No surprises on the JavaScript side, and that is a good thing. Access to multiple accounts makes it less hacky than the Solidity test case. You would probably like to get rid of this nasty try...catch and the regex. I would suggest going with a different assertion library than the standard one. The available assert.throws does not work well with async code.
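
One option, sketched below, is a tiny helper that hides the pattern (the assertReverts name and the test/utils.js location are just a suggestion, not something from the original code):

// test/utils.js (an assumed location; assertReverts is a hypothetical helper)
const assert = require("assert");

module.exports.assertReverts = async function assertReverts(promise) {
  try {
    await promise;
  } catch (err) {
    // Ganache reports failed require()/revert() with "revert" in the error message
    assert.ok(/revert/.test(err.message), `Expected revert, got: ${err.message}`);
    return;
  }
  assert.fail("Expected transaction to revert");
};

A test could then call await assertReverts(funding.withdraw({ from: secondAccount })) instead of repeating try...catch. Now, back to the contract.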

contract Funding {
  [...]
  uint public goal;

  modifier onlyOwner() {
    require(owner == msg.sender);
    _;
  }

  modifier onlyFunded() {
    require(isFunded());
    _;
  }

  function() public payable {}

  function Funding(uint _duration, uint _goal) public {
    owner = msg.sender;
    finishesAt = now + _duration;
    goal = _goal;
  }

  function isFunded() public view returns (bool) {
    return raised >= goal;
  }

  function withdraw() public onlyOwner onlyFunded {
    owner.transfer(this.balance);
  }
}

We already store the owner of the contract. Restricting access to particular functions using an onlyOwner modifier is a popular convention. Popular enough to extract it into a reusable piece of code, but we will cover this later. The rest of the code should not come as a surprise; you have seen it all!

FundingTest
  ✓ testSettingAnOwnerDuringCreation (54ms)
  ✓ testSettingAnOwnerOfDeployedContract (58ms)
  ✓ testAcceptingDonations (67ms)
  ✓ testTrackingDonatorsBalance (46ms)
  ✓ testDonatingAfterTimeIsUp (39ms)
  ✓ testWithdrawalByAnOwner (73ms)
  ✓ testWithdrawalByNotAnOwner (54ms)

Contract: Funding
  ✓ sets an owner
  ✓ accepts donations (53ms)
  ✓ keeps track of donator balance (87ms)
  ✓ finishes fund raising when time is up
  ✓ does not allow for donations when time is up (74ms)
  ✓ allows an owner to withdraw funds when goal is reached (363ms)
  ✓ does not allow non-owners to withdraw funds (81ms)


14 passing (2s)

Refund

Currently, funds are stuck, and donators are unable to retrieve their Ether when the goal is not achieved within the specified time. We need to make sure it is possible. Two conditions have to be met for users to get their Ether back. Duration is set in the constructor, so if we set a 0 duration the contract is finished from the beginning, but then we cannot donate to have something to refund. We cannot move time forward unless we use the Clock contract again. I write tests for this case solely in JavaScript.

it("allows to withdraw funds after time is up and goal is not reached",async()=>{awaitfunding.donate({from:secondAccount,value:50*FINNEY});constinitBalance=web3.eth.getBalance(secondAccount);assert.equal((awaitfunding.balances.call(secondAccount)),50*FINNEY);awaitincreaseTime(DAY);awaitfunding.refund({from:secondAccount});constfinalBalance=web3.eth.getBalance(secondAccount);assert.ok(finalBalance.greaterThan(initBalance));// hard to be exact due to the gas usage});it("does not allow to withdraw funds after time in up and goal is reached",async()=>{awaitfunding.donate({from:secondAccount,value:100*FINNEY});assert.equal((awaitfunding.balances.call(secondAccount)),100*FINNEY);awaitincreaseTime(DAY);try{awaitfunding.refund({from:secondAccount});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});it("does not allow to withdraw funds before time in up and goal is not reached",async()=>{awaitfunding.donate({from:secondAccount,value:50*FINNEY});assert.equal((awaitfunding.balances.call(secondAccount)),50*FINNEY);try{awaitfunding.refund({from:secondAccount});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

Implementing the refund function can be tricky. Your intuition may tell you to loop through your donators and transfer them their funds. The problem with this solution is that the more donators you have, the more gas there is to pay, and it is not only looping but also making multiple transfers. You would like to keep the cost of running a function low and predictable. Let’s just allow each user to withdraw their donation.

contract Funding {
  [...]
  modifier onlyFinished() {
    require(isFinished());
    _;
  }

  modifier onlyNotFunded() {
    require(!isFunded());
    _;
  }

  modifier onlyFunded() {
    require(isFunded());
    _;
  }

  function refund() public onlyFinished onlyNotFunded {
    uint amount = balances[msg.sender];
    require(amount > 0);

    balances[msg.sender] = 0;
    msg.sender.transfer(amount);
  }
}

We save the amount to transfer first and then zero the balance. It is an implementation of the withdrawal pattern. Transferring an amount straight from the balances mapping introduces a security risk of re-entrancy - calling back multiple refunds.

FundingTest
  ✓ testSettingAnOwnerDuringCreation (64ms)
  ✓ testSettingAnOwnerOfDeployedContract (92ms)
  ✓ testAcceptingDonations (107ms)
  ✓ testTrackingDonatorsBalance (64ms)
  ✓ testDonatingAfterTimeIsUp (52ms)
  ✓ testWithdrawalByAnOwner (98ms)
  ✓ testWithdrawalByNotAnOwner (54ms)

Contract: Funding
  ✓ sets an owner
  ✓ accepts donations (60ms)
  ✓ keeps track of donator balance (91ms)
  ✓ finishes fund raising when time is up
  ✓ does not allow for donations when time is up (77ms)
  ✓ allows an owner to withdraw funds when goal is reached (425ms)
  ✓ does not allow non-owners to withdraw funds (38ms)
  ✓ allows to withdraw funds after time is up and goal is not reached (380ms)
  ✓ does not allow to withdraw funds after time in up and goal is reached (66ms)
  ✓ does not allow to withdraw funds before time in up and goal is not reached (45ms)


17 passing (3s)

Congratulations, you have all features implemented and decent test coverage 👏

Refactor

There is a commonly used pattern in our code: saving an owner and restricting function calls to the deployer of the contract.

// contracts/Ownable.sol
pragma solidity ^0.4.17;

contract Ownable {
  address public owner;

  modifier onlyOwner() {
    require(owner == msg.sender);
    _;
  }

  function Ownable() public {
    owner = msg.sender;
  }
}

In Solidity, we can reuse existing code using libraries or through extending other contracts. Libraries are deployed separately, called using DELEGATECALL, and are a good fit for implementing custom data structures like a linked list. Behaviour can be easily shared using inheritance.

import"./Ownable.sol";contractFundingisOwnable{functionFunding(uint_duration,uint_goal)public{finishesAt=now+_duration;goal=_goal;}}

OpenZeppelin is a library which provides multiple contracts and seamlessly integrates with Truffle. You can explore them and reuse well-tested code in your smart contracts. Ownable contract from OpenZeppelin is a little different than ours; it adds a possibility to transfer ownership.

npm install --save-exact zeppelin-solidity
import"zeppelin-solidity/contracts/ownership/Ownable.sol";contractFundingisOwnable{}

Tests are still passing if you got that right.

Conclusion

An important takeaway from this post is to think twice before deciding in which cases you want to test smart contracts using JavaScript and when using Solidity. The rule of thumb is that smart contracts interacting with each other should be tested using Solidity. The rest can be tested using JavaScript. JavaScript testing is also closer to how you are going to use your contracts from the client application. A well-written test suite can be a useful resource on how to interact with your smart contracts.

Stay tuned for more Solidity related content. In a future post, I am going to cover events (which I intentionally skipped in this post), deploy to a publicly accessible network, and create a frontend using web3.

You can find the full source code of the test suites and the smart contract on GitHub (MichalZalecki/tdd-solidity-intro).

Ethereum: Test-driven development with Solidity (part 2)


This is the second part of the test-driven introduction to Solidity. In this part, we use JavaScript to test time-related features of our smart contract. Apart from that, you will see how to check for errors. We will also complete the rest of the smart contract by adding withdrawal and refund features.

If you did not read the first part I highly recommend doing so: Ethereum: Test-driven development with Solidity (part 1).

JSON-RPC to the rescue

I have already mentioned that there is no easy way to manipulate block time from Solidity (at least at the time of writing). JSON-RPC is a stateless, remote procedure call protocol. Ethereum provides multiple methods which we can remotely execute. One of the use cases for it is creating Oracles. We are not going to use JSON-RPC directly but through web3.js which provides a convenient abstraction for RPC calls.

// source: https://github.com/OpenZeppelin/zeppelin-solidity/blob/master/test/helpers/increaseTime.js
module.exports.increaseTime = function increaseTime(duration) {
  const id = Date.now();

  return new Promise((resolve, reject) => {
    web3.currentProvider.sendAsync(
      { jsonrpc: "2.0", method: "evm_increaseTime", params: [duration], id: id },
      err1 => {
        if (err1) return reject(err1);

        web3.currentProvider.sendAsync(
          { jsonrpc: "2.0", method: "evm_mine", id: id + 1 },
          (err2, res) => {
            return err2 ? reject(err2) : resolve(res);
          }
        );
      }
    );
  });
};

Calling increaseTime results in two RPC calls. You will not find them on the Ethereum wiki page. Both evm_increaseTime and evm_mine are non-standard methods provided by Ganache, the development blockchain for Ethereum we use when running tests.

const { increaseTime } = require("./utils");

const DAY = 3600 * 24;

contract("Funding", accounts => {
  [...]
  let funding;

  beforeEach(async () => {
    funding = await Funding.new(DAY);
  });

  it("finishes fundraising when time is up", async () => {
    assert.equal(await funding.isFinished.call(), false);
    await increaseTime(DAY);
    assert.equal(await funding.isFinished.call(), true);
  });
});

By now, this should be the entire Funding contract. This implementation is much more straightforward than the one we used before.

// contracts/Funding.sol
pragma solidity ^0.4.17;

contract Funding {
  uint public raised;
  uint public finishesAt;
  address public owner;
  mapping(address => uint) public balances;

  function Funding(uint _duration) public {
    owner = msg.sender;
    finishesAt = now + _duration;
  }

  function isFinished() public view returns (bool) {
    return finishesAt <= now;
  }

  function donate() public payable {
    balances[msg.sender] += msg.value;
    raised += msg.value;
  }
}

Tests should be now passing.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (64ms)
    ✓ testSettingAnOwnerOfDeployedContract (57ms)
    ✓ testAcceptingDonations (78ms)
    ✓ testTrackingDonatorsBalance (54ms)

  Contract: Funding
    ✓ sets an owner
    ✓ accepts donations (60ms)
    ✓ keeps track of donator balance (89ms)
    ✓ finishes fundraising when time is up (38ms)


  8 passing (1s)

Modifiers and testing throws

We can now tell whether fundraising finished, but we are not doing anything with this information. Let’s put a limitation on how long people can donate.

Since Solidity 0.4.13, throw is deprecated. The new functions for handling state-reverting exceptions are require(), assert(), and revert(). You can read more about the differences between those calls here.

All exceptions bubble up, and there is no try...catch in Solidity. So how do we test for throws using just Solidity? The low-level call function returns false if an error occurred and true otherwise. You can also use a proxy contract to achieve the same in what you may consider a more elegant way, although I prefer one-liners.

function testDonatingAfterTimeIsUp() public {
  Funding funding = new Funding(0);
  bool result = funding.call.value(10 finney)(bytes4(bytes32(keccak256("donate()"))));
  Assert.equal(result, false, "Allows for donations when time is up");
}

I am cheating here a little bit because a contract has a duration set to 0 which makes it out-of-date from the get-go. In JavaScript we can just use try...catch to handle an error.

it("does not allow for donations when time is up",async()=>{awaitfunding.donate({from:firstAccount,value:10*FINNEY});awaitincreaseTime(DAY);try{awaitfunding.donate({from:firstAccount,value:10*FINNEY});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

We can now restrict time for calling donate with onlyNotFinished modifier.

contract Funding {
  [...]
  modifier onlyNotFinished() {
    require(!isFinished());
    _;
  }

  function isFinished() public view returns (bool) {
    return finishesAt <= now;
  }

  function donate() public onlyNotFinished payable {
    balances[msg.sender] += msg.value;
    raised += msg.value;
  }
}

Both new tests should now pass.

  FundingTest
    ✓ testSettingAnOwnerDuringCreation (72ms)
    ✓ testSettingAnOwnerOfDeployedContract (55ms)
    ✓ testAcceptingDonations (78ms)
    ✓ testTrackingDonatorsBalance (56ms)
    ✓ testDonatingAfterTimeIsUp (46ms)

  Contract: Funding
    ✓ sets an owner
    ✓ accepts donations (54ms)
    ✓ keeps track of donator balance (85ms)
    ✓ finishes fundraising when time is up
    ✓ does not allow for donations when time is up (52ms)


  10 passing (1s)

Withdrawal

We accept donations, but it is not yet possible to withdraw any funds. An owner should be able to do it only when the goal has been reached. We also cannot set a goal. We would like to do it when deploying a contract - as we did when we were setting contract duration.

contract FundingTest {
  Funding funding;

  function() public payable {}

  function beforeEach() public {
    funding = new Funding(1 days, 100 finney);
  }

  function testWithdrawalByAnOwner() public {
    uint initBalance = this.balance;
    funding.donate.value(50 finney)();
    bool result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, false, "Allows for withdrawal before reaching the goal");

    funding.donate.value(50 finney)();
    Assert.equal(this.balance, initBalance - 100 finney, "Balance before withdrawal doesn't correspond the sum of donations");

    result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, true, "Doesn't allow for withdrawal after reaching the goal");
    Assert.equal(this.balance, initBalance, "Balance after withdrawal doesn't correspond the sum of donations");
  }

  function testWithdrawalByNotAnOwner() public {
    // Make sure to check what goal is set in the migration (here also 100 Finney)
    funding = Funding(DeployedAddresses.Funding());
    funding.donate.value(100 finney)();
    bool result = funding.call(bytes4(bytes32(keccak256("withdraw()"))));
    Assert.equal(result, false, "Allows for withdrawal by not an owner");
  }
}

A lot is going on here. First of all, the empty function marked as payable allows the contract to accept Ether via a standard transaction (without data), as if it were an ordinary account controlled by a public key. This unnamed function is called a fallback function. It can neither have any arguments nor return a value. There is such a small amount of gas to use (2300) that it would be impossible to modify state anyway. We have to implement this function to test withdrawing funds to the testing contract.

Truffle will also call the beforeEach hook before every test, so we can move creating a new contract there, as we do in JavaScript. In a test case, we can overwrite the variable pointing to the funding contract when it requires different constructor params or needs to refer to an already deployed contract.

From Solidity, we are not able to select an address from which we want to make a transaction. By design, the address of the smart contract is going to be used. What we can do to test withdrawal from an account which is not an owner is to use the deployed contract instead of one created by the testing contract. Trying to withdraw in such a case should always fail. One restriction is that you cannot specify constructor params - the migration script has already deployed this contract.

it("allows an owner to withdraw funds when goal is reached",async()=>{awaitfunding.donate({from:secondAccount,value:30*FINNEY});awaitfunding.donate({from:thirdAccount,value:70*FINNEY});constinitBalance=web3.eth.getBalance(firstAccount);assert.equal(web3.eth.getBalance(funding.address),100*FINNEY);awaitfunding.withdraw();constfinalBalance=web3.eth.getBalance(firstAccount);assert.ok(finalBalance.greaterThan(initBalance));// hard to be exact due to the gas usage});it("does not allow non-owners to withdraw funds",async()=>{funding=awaitFunding.new(DAY,100*FINNEY,{from:secondAccount});awaitfunding.donate({from:firstAccount,value:100*FINNEY});try{awaitfunding.withdraw();assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

No surprises on the JavaScript side, and that is a good thing. Access to multiple accounts makes it less hacky than the Solidity test case. You would probably like to get rid of this nasty try...catch and the regex. I would suggest going with a different assertion library than the standard one. The available assert.throws does not work well with async code.

contract Funding {
  [...]
  uint public goal;

  modifier onlyOwner() {
    require(owner == msg.sender);
    _;
  }

  modifier onlyFunded() {
    require(isFunded());
    _;
  }

  function() public payable {}

  function Funding(uint _duration, uint _goal) public {
    owner = msg.sender;
    finishesAt = now + _duration;
    goal = _goal;
  }

  function isFunded() public view returns (bool) {
    return raised >= goal;
  }

  function withdraw() public onlyOwner onlyFunded {
    owner.transfer(this.balance);
  }
}

We already store the owner of the contract. Restricting access to particular functions using an onlyOwner modifier is a popular convention. Popular enough to extract it into a reusable piece of code, but we will cover this later. The rest of the code should not come as a surprise; you have seen it all!

FundingTest
  ✓ testSettingAnOwnerDuringCreation (54ms)
  ✓ testSettingAnOwnerOfDeployedContract (58ms)
  ✓ testAcceptingDonations (67ms)
  ✓ testTrackingDonatorsBalance (46ms)
  ✓ testDonatingAfterTimeIsUp (39ms)
  ✓ testWithdrawalByAnOwner (73ms)
  ✓ testWithdrawalByNotAnOwner (54ms)

Contract: Funding
  ✓ sets an owner
  ✓ accepts donations (53ms)
  ✓ keeps track of donator balance (87ms)
  ✓ finishes fundraising when time is up
  ✓ does not allow for donations when time is up (74ms)
  ✓ allows an owner to withdraw funds when goal is reached (363ms)
  ✓ does not allow non-owners to withdraw funds (81ms)


14 passing (2s)

Refund

Currently, funds are stuck, and donators are unable to retrieve their Ether when the goal is not achieved within the specified time. We need to make sure it is possible. Two conditions have to be met for users to get their Ether back. Duration is set in the constructor, so if we set a 0 duration the contract is finished from the beginning, but then we cannot donate to have something to refund. We cannot move time forward unless we use the Clock contract again. I write tests for this case solely in JavaScript.

it("allows to withdraw funds after time is up and goal is not reached",async()=>{awaitfunding.donate({from:secondAccount,value:50*FINNEY});constinitBalance=web3.eth.getBalance(secondAccount);assert.equal((awaitfunding.balances.call(secondAccount)),50*FINNEY);awaitincreaseTime(DAY);awaitfunding.refund({from:secondAccount});constfinalBalance=web3.eth.getBalance(secondAccount);assert.ok(finalBalance.greaterThan(initBalance));// hard to be exact due to the gas usage});it("does not allow to withdraw funds after time in up and goal is reached",async()=>{awaitfunding.donate({from:secondAccount,value:100*FINNEY});assert.equal((awaitfunding.balances.call(secondAccount)),100*FINNEY);awaitincreaseTime(DAY);try{awaitfunding.refund({from:secondAccount});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});it("does not allow to withdraw funds before time in up and goal is not reached",async()=>{awaitfunding.donate({from:secondAccount,value:50*FINNEY});assert.equal((awaitfunding.balances.call(secondAccount)),50*FINNEY);try{awaitfunding.refund({from:secondAccount});assert.fail();}catch(err){assert.ok(/revert/.test(err.message));}});

Implementing the refund function can be tricky. Your intuition may tell you to loop through your donators and transfer them their funds. The problem with this solution is that the more donators you have, the more gas there is to pay, and it is not only looping but also making multiple transfers. You would like to keep the cost of running a function low and predictable. Let’s just allow each user to withdraw their donation.

contract Funding {
  [...]
  modifier onlyFinished() {
    require(isFinished());
    _;
  }

  modifier onlyNotFunded() {
    require(!isFunded());
    _;
  }

  modifier onlyFunded() {
    require(isFunded());
    _;
  }

  function refund() public onlyFinished onlyNotFunded {
    uint amount = balances[msg.sender];
    require(amount > 0);

    balances[msg.sender] = 0;
    msg.sender.transfer(amount);
  }
}

We save the amount to transfer first and then zero the balance. It is an implementation of the withdrawal pattern. Transferring an amount straight from the balances mapping introduces a security risk of re-entrancy - calling back multiple refunds.

FundingTest
  ✓ testSettingAnOwnerDuringCreation (64ms)
  ✓ testSettingAnOwnerOfDeployedContract (92ms)
  ✓ testAcceptingDonations (107ms)
  ✓ testTrackingDonatorsBalance (64ms)
  ✓ testDonatingAfterTimeIsUp (52ms)
  ✓ testWithdrawalByAnOwner (98ms)
  ✓ testWithdrawalByNotAnOwner (54ms)

Contract: Funding
  ✓ sets an owner
  ✓ accepts donations (60ms)
  ✓ keeps track of donator balance (91ms)
  ✓ finishes fundraising when time is up
  ✓ does not allow for donations when time is up (77ms)
  ✓ allows an owner to withdraw funds when goal is reached (425ms)
  ✓ does not allow non-owners to withdraw funds (38ms)
  ✓ allows to withdraw funds after time is up and goal is not reached (380ms)
  ✓ does not allow to withdraw funds after time in up and goal is reached (66ms)
  ✓ does not allow to withdraw funds before time in up and goal is not reached (45ms)


17 passing (3s)

Congratulations, you have all features implemented and decent test coverage 👏

Refactor

There is a commonly used pattern in our code: saving an owner and restricting function calls to the deployer of the contract.

// contracts/Ownable.sol
pragma solidity ^0.4.17;

contract Ownable {
  address public owner;

  modifier onlyOwner() {
    require(owner == msg.sender);
    _;
  }

  function Ownable() public {
    owner = msg.sender;
  }
}

In Solidity, we can reuse existing code using libraries or through extending other contracts. Libraries are deployed separately, called using DELEGATECALL, and are a good fit for implementing custom data structures like a linked list. Behaviour can be easily shared using inheritance.

import"./Ownable.sol";contractFundingisOwnable{functionFunding(uint_duration,uint_goal)public{finishesAt=now+_duration;goal=_goal;}}

OpenZeppelin is a library which provides multiple contracts and seamlessly integrates with Truffle. You can explore them and reuse well-tested code in your smart contracts. Ownable contract from OpenZeppelin is a little different than ours; it adds a possibility to transfer ownership.

npm install --save-exact zeppelin-solidity
import"zeppelin-solidity/contracts/ownership/Ownable.sol";contractFundingisOwnable{}

Tests are still passing if you got that right.

Conclusion

An important takeaway from this post is to think twice before deciding when to test smart contracts using JavaScript and when using Solidity. The rule of thumb is that smart contracts interacting with each other should be tested using Solidity. The rest can be tested using JavaScript. JavaScript testing is also closer to how you are going to use your contracts from the client application. A well-written test suite can be a useful resource on how to interact with your smart contracts.

Stay tuned for more Solidity related content. In a future post, I am going to cover events (which I intentionally skipped in this post), deploy to a publicly accessible network, and create a frontend using web3.

You can find the full source code of the test suites and the smart contract on GitHub (MichalZalecki/tdd-solidity-intro).

Deploying smart contracts with Truffle


Truffle provides a system for managing the compilation and deployment artifacts for each network. To make an actual transaction and put a smart contract on-chain we have to provide Truffle with an appropriate configuration. We configure each network separately. From this post, you will learn how to prepare a setup and deploy to a few widely used test networks.

Each transaction (deployment included) will cost you some Ether. It does not matter whether you use a testnet or the mainnet. The difference is that Ether on a testing network is worthless, and you can obtain it from a faucet. Each test network has different rules which change from time to time, so you have to do some research. In my opinion, the easiest way is just to open MetaMask, select a network, press Buy, and explore the available options.

Configuration for each network should appear in truffle.js file in the root of the Truffle project.

Migrations

Before we go into the specifics of a particular test network, let’s focus on migrations. When you spin up a new Truffle project, you will find a Migrations.sol contract. It stores the history of previously run migrations on-chain. Truffle reads the Migrations address from a build artifact. It saves the number of the latest completed migration. There is no need to change anything within this contract.

Files we add to make migrations happen are JavaScript files under the migrations directory. Each migration file exports a function which accepts a deployer module, network name, and list of available accounts.

This is how a simple migration file may look. First, we require the contracts we want to deploy, and then we can deploy them, linking the Ledger library to the Funding contract.

// migrations/2_funding.js
const Funding = artifacts.require("./Funding.sol");
const Ledger = artifacts.require("./Ledger.sol");

const ETHER = 10 ** 18;
const DAY = 3600 * 24;

module.exports = function(deployer, _network, _accounts) {
  deployer.deploy(Ledger);
  deployer.link(Ledger, Funding);
  deployer.deploy(Funding, ETHER, 7 * DAY);
};

Changing the smart contract or the deployment script after a migration has run and rerunning it has no effect unless the --reset option is specified. Migrations run from the last completed one.

That is why naming files is important. A migration file name should start with a number. The rest of the filename is there just for readability.

Deploy to Ganache

Ganache is your personal Ethereum blockchain which is convenient for testing and interacting with your contracts during development. It ships with a helpful GUI that allows you to see available test accounts quickly, explore transactions, read logs, and see how much gas was consumed.

Configuration for the Ganache network:

module.exports = {
  networks: {
    ganache: {
      host: "127.0.0.1",
      port: 7545,
      network_id: "*" // matching any id
    }
  }
};

Deploy to the Ganache network:

truffle migrate --network ganache

Geth

Geth is the command line interface for running an Ethereum node. We need a synced node to be able to deploy smart contracts, unless we want to use a third-party provider like Infura (I am going to cover deploying with Infura later in this article).

I recommend creating a deterministic wallet which you can then use to switch between different networks quickly. You can create one using MetaMask.

Save a private key in a keyfile and import it.

geth account import <keyfile>

You will be prompted for a passphrase. Remember it as you need to use it to unlock your account later.

Deploy to Ropsten

Run a node connected to Ropsten and specify the address of the default (first) account to unlock. You will be prompted for a passphrase.

geth --unlock <account> --testnet --rpc --rpcapi eth,net,web3

Configuration for the Ropsten network:

module.exports = {
  networks: {
    ropsten: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 3,
      gas: 4700000
    },
  }
};

Deploy to the Ropsten network:

truffle migrate --network ropsten

Potential problems

If you forget to unlock your account, the migration will fail.

Error: authentication needed: password or unlock

Make sure to set the gas limit to 4700000. The default value Truffle uses is 4712388, which exceeds Ropsten’s limit. Otherwise, you will see the following error.

Error: exceeds block gas limit

It might also be the case that your contract is too big to fit a single transaction. In such case, you have to split it into smaller ones.

You can use transaction hashes printed during migration to explore transactions on Etherscan. This was my Migration contract deployment: 0x68e5c28fec7846…

Deploy to Rinkeby

The Rinkeby network is available only using the Geth client.

geth --unlock <account> --rinkeby --rpc --rpcapi eth,net,web3

Configuration for the Rinkeby network:

module.exports = {
  networks: {
    rinkeby: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 4
    },
  }
};

Deploy to the Rinkeby network:

truffle migrate --network rinkeby

Parity

Parity is a different Ethereum client. Similarly to Geth, it has a command line interface which mirrors Geth’s commands. Additionally, it has a web-based interface.

To import an account into Parity, we can use a keystore we already created with Geth.

parity account import <keystore>

The path to the keystore depends on your operating system. You can find keystores organized by the network:

  • Mac: ~/Library/Ethereum
  • Linux: ~/.ethereum
  • Windows: %APPDATA%\Ethereum

Deploy to Kovan

The Kovan network is available only using the Parity client. The gas limit at the time of writing is 4704584. Start Parity connected to the Kovan chain.

parity ui --chain kovan

Configuration for the Kovan network:

module.exports = {
  networks: {
    kovan: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 42,
      gas: 4700000
    },
  }
};

Deploy to the Kovan network:

truffle migrate --network kovan

During deployment, you will be asked a few times to enter a password.

Deploy with Infura

Infura by ConsenSys is an infrastructure which provides you with access to a few Ethereum networks and IPFS. You can use Infura to deploy smart contracts to mainnet as well as Ropsten, Rinkeby, and Kovan. It does not require you to have a synced node running locally.

Apart from host/port configuration, Truffle allows for configuring a network to use a custom provider. We can use HDWalletProvider to connect to Ropsten. It does not matter which network you choose as long as you own the Ether necessary to pay for a transaction.

const HDWalletProvider = require("truffle-hdwallet-provider");

module.exports = {
  networks: {
    "ropsten-infura": {
      provider: () => new HDWalletProvider("<passphrase>", "https://ropsten.infura.io/<key>"),
      network_id: 3,
      gas: 4700000
    }
  }
};

You can obtain a key by signing up on Infura’s website. It is free of charge. At the time of writing, you might omit the key, but your requests can be affected by more restrictive throttling.

You, of course, do not want to keep your passphrase and key in the repository. I recommend using environment variables for that, and maybe dotenv to save you some keystrokes during development.
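
A sketch of what that could look like in truffle.js, assuming a local .env file (the ROPSTEN_MNEMONIC and INFURA_KEY variable names are hypothetical):

// truffle.js (a sketch; variable names are only an example)
require("dotenv").config();
const HDWalletProvider = require("truffle-hdwallet-provider");

module.exports = {
  networks: {
    "ropsten-infura": {
      provider: () =>
        new HDWalletProvider(
          process.env.ROPSTEN_MNEMONIC,
          `https://ropsten.infura.io/${process.env.INFURA_KEY}`
        ),
      network_id: 3,
      gas: 4700000
    }
  }
};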

Conclusion

Truffle is not the only way to deploy a smart contract. You can also use Remix IDE or Mist. You definitely would like to give Remix a try.

Doing a deployment for the first time might be annoying and time-consuming. Waiting for a node to sync for sure is. You will probably stumble upon some issues I did not explain here. You may always try to ask a question on the Truffle Gitter if StackOverflow did not provide you with a solution.


Docker Compose for Node.js and PostgreSQL


Docker is the response to an ongoing problem of differences between the environments in which an application runs, whether those differences are across the machines of the development team, the continuous integration server, or the production environment. Since you are reading this, I assume you are already more or less familiar with the benefits of containerizing applications. Let’s go straight to the Node.js specific bits.

There is a set of challenges when it comes to dockerizing Node.js applications, especially if you want to use Docker for development as well. I hope this guide will save you a few headaches.

TL;DR: You can find the code for a running example on GitHub.

Dockerfile

As a base image, I am using the node image that runs on Alpine Linux, a lightweight Linux distribution. I want to expose two ports. EXPOSE does not publish any ports; it is just a form of documentation. It is possible to specify published ports with Docker Compose later. Port 3000 is the port we use to run our web server, and 9229 is the default port of the Node.js inspector. After we copy the files to the container, we install dependencies.

FROM node:8.10.0-alpine
EXPOSE 3000 9229
COPY . /home/app
WORKDIR /home/app
RUN npm install
CMD ./scripts/start.sh

The executable for the container could be an npm start script, but I prefer to use a shell script instead. It makes it easier to implement more complex build steps which might require executing a different command to start the application in a development or production mode. Moreover, it allows for running additional build steps.

#!/bin/sh

npm run build

if["$NODE_ENV"=="production"] ; then
  npm run start
else
  npm run dev
fi

If you want to check and install dependencies on each startup, you can move npm install from Dockerfile to start.sh script.

Docker Compose

I am splitting my Docker Compose configuration into two files. One is the bare minimum needed to run the application in production or on the continuous integration server: no volume mounting and no .env files. The second one is a development-specific configuration.

# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - postgres
    ports:
      - "3000:3000"
      - "9229:9229"
  postgres:
    image: postgres:9.6.8-alpine
    environment:
      POSTGRES_PASSWORD: postgres

During development, I am interested in sharing code between the container and the host file system, but this should not apply to node_modules. Some packages (e.g., argon2) require additional components that need a compilation step. A package compiled on your machine and copied to the container is unlikely to work. That is why you want to mount an extra volume just for node modules.

The other addition to the development configuration of Docker Compose is the .env file. It is a convenient way to manage environment variables on your local machine. That said, you should not keep it in the repository. In production, set environment variables directly instead.

For more information on how to configure Postgres container go to Docker Hub.
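As an aside, here is a minimal sketch of how the app container could pick those variables up, assuming it uses the pg package; the PG* variable names below are my example names coming from your .env file, not something the Compose files above enforce:

// db.js – sketch only; requires the "pg" package and the PG* variables in .env
const { Pool } = require("pg");

const pool = new Pool({
  host: process.env.PGHOST || "postgres", // "postgres" is the Compose service name
  port: Number(process.env.PGPORT || 5432),
  user: process.env.PGUSER || "postgres",
  password: process.env.PGPASSWORD,
  database: process.env.PGDATABASE || "postgres"
});

module.exports = pool;

With docker-compose up, the app container can resolve the postgres hostname because both services share the default Compose network.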

# docker-compose.override.yml
version: "3"
services:
  app:
    env_file: .env
    volumes:
      - .:/home/app/
      - /home/app/node_modules

Docker Compose reads the override file by default unless told otherwise. If you are using Docker Compose on CI, explicitly specify all configuration files that apply:

docker-compose -f docker-compose.yml -f docker-compose.ci.yml up

Npm Scripts and Node Inspector

Npm scripts are specific to your project, but for reference, these are mine:

{..."scripts":{"dev":"concurrently -k \"npm run build:watch\" \"npm run start:dev\"","start":"node dist/index.js","start:dev":"nodemon --inspect=0.0.0.0:9229 dist/index.js","build":"tsc","build:watch":"tsc -w"}}

I do not call npm scripts directly from the command line. They are a convenient place to encapsulate complexity and simplify start.sh (and later the other scripts).

The important takeaway is that the inspector should be bound to 0.0.0.0, the container's public interface, instead of the default localhost. Otherwise, you are not able to access it from your local machine.

.dockerignore

There is a bunch of stuff you can list here that is not needed to run the application in the container. Instead of trying to list all of it, I will single out two entries.

node_modules
dist

Ignore node_modules for the reasons I have already explained when covering volumes. dist is just the output directory of our build pipeline. You might not have its counterpart in your project if you write plain JavaScript/CommonJS and do not need a build step. These are all simple things, but you had better not miss them.

Conclusion

You may not like this approach, and that is OK (tell me why in the comments). For better or worse, there is no single way to do it. Hopefully, this reference gave you a different perspective and helped you fix that one thing that did not work for you.

I have not touched on deployment and running in production; there are a few ways you can approach it. Some people run only the application container in Docker and install the database directly on the host, which makes it harder to lose your data by accident. You could build and push an image to a registry, or push to Dokku if you do not feel like using an image repository. Deployment on its own is a topic for another article.
