A Journey into Scaling a Prometheus Deployment

by Romain Baugue, Aurelien Rougemont, Senior Data Engineer, April 21, 2019 - 5 minutes read

At Synthesio, part of our monitoring solution relies on Prometheus. We encountered a few issues while scaling it to meet our needs, and here is a write-up on one of them, which we helped solve.

At Synthesio, we use Prometheus for metrics collecting, and MetricTank as its long-term storage. While this is a common enough setup, we generally operate things at a scale that makes scaling issues surface quickly.

At some point we noticed that the default remote_write configuration (from Prometheus to MetricTank) was losing datapoints along the way because it wasn’t able to keep up the pace.

datapoint loss

At this moment, the load was light compared to what we expected to have later, so we looked at the configuration to fine-tune it. We increased the capacity of the queue and the number of shards (the number of parallel writes).

capacity: 100000             # default 10000
max_shards: 2000             # default 1000
max_samples_per_send: 200    # default 200

The datapoint loss stopped immediately, as did a few out-of-memory issues we had been having. We left this alone for a few months so we could work on other things while progressively increasing the number of datapoints to process.

All was well until we added Prometheus metrics to all our services during a three-day-long monitoring frenzy. This resulted in each instance of our two-instance Prometheus setup having to handle 1.2M additional timeseries overnight, spread across 600+ targets.

We soon noticed an unusual CPU load on the machines, averaging around 40% with huge spikes from time to time. A quick investigation showed that most of this was system CPU usage, which was unexpected given that Prometheus operates mostly in userland and is not supposed to stress the kernel that much.

To get more information, we whipped out perf.

$ perf top

Samples: 5M of event 'cycles:ppp', 4000 Hz, Event count (approx.): 667022966996
Overhead  Shared Object                 Symbol
  19.85%  [kernel]                      [k] __inet_check_established
  15.13%  [kernel]                      [k] _raw_spin_lock
  10.89%  [kernel]                      [k] __inet_hash_connect
   9.01%  [kernel]                      [k] _raw_spin_lock_bh
   7.49%  [kernel]                      [k] inet_ehashfn
   5.31%  [kernel]                      [k] tcp_twsk_unique
   4.42%  [kernel]                      [k] native_queued_spin_lock_slowpath
   2.20%  [kernel]                      [k] __indirect_thunk_start
   1.96%  [kernel]                      [k] ___bpf_prog_run
   1.81%  [kernel]                      [k] __local_bh_enable_ip
   0.54%  [kernel]                      [k] _cond_resched
   0.54%  [kernel]                      [k] rcu_all_qs
   0.46%  [kernel]                      [k] _raw_spin_unlock_bh
   0.23%  [kernel]                      [k] __entry_trampoline_start
   0.21%  [kernel]                      [k] syscall_return_via_sysret
   0.18%  [kernel]                      [k] __bpf_prog_run32
   0.17%  [kernel]                      [k] seccomp_run_filters
   0.16%  [kernel]                      [k] vsnprintf
   0.16%  prometheus                    [.] 0x00000000000722ed

It is immediately obvious that the top 5-7 symbols eating up CPU are related to the issue at hand, and the very first line was enough to set us on the right track. The __inet_check_established function loops over the kernel's hash table of connections. This table is called the TCP established hash table even though it also contains connections in other states, and it is used to locate an existing connection, for example when receiving a new segment.

And here was our first lead. TCP_TIME_WAIT is the state during which the kernel doesn't allow a port to be re-used by a new connection, to ensure that all packets still transiting on that port are evacuated after the connection is closed. The timeout depends on the network stack configuration, and having lots of entries in this state generally means that you're making a lot of short-lived connections. That makes some sense for a process that connects to a lot of different targets (we have 600+ Prometheus targets, remember?), but not so much for the connections to our LTS.

We gathered some statistics about the number of open connections using ss and noticed a large number of connections in TIME-WAIT toward the LTS.

ss -ta state TIME-WAIT \
	| awk '{print $(NF)" "$(NF-1)}' \
	| sed 's/:[^ ]*//g' \
	| tail -n +2 \
	| sort \
	| uniq -c

We did a second round of tuning to get the situation under control and buy ourselves time to understand the root of the issue. At this point the instances were stable, but at the price of a huge memory consumption that wouldn't be sustainable in the long term. Here is the remote_write configuration:

capacity: 100000             # default 10000
max_shards: 2000             # default 1000
max_samples_per_send: 100000 # default 200

We audited the connections from Prometheus to MetricTank to understand what was going on, capturing them with tcpdump and inspecting the result in Wireshark.

wireshark screen

What was immediately evident was that each connection only ever carried two packets: one query sending a batch of datapoints, and the response from the server. Which is bad, because it means that every time Prometheus sends datapoints to the LTS, it does so over a new TCP connection, which is costly.

The good news is that HTTP/1.1's keep-alive feature is a simple solution to this and has existed for a long time. The bad news is that if it isn't being used, either we misconfigured something or it isn't supported by either Prometheus or MetricTank.

The capture shows that Prometheus is effectively the one closing each connection after using it once (it is the first to send FIN), which is surprising. We checked by hand that MetricTank supports keep-alive, and it does, so the issue is effectively on the Prometheus side.

90's want their HTTP 1.0 back!

We did a quick audit of the remote storage code in Prometheus and found a common mistake with Go and HTTP requests that prevented the TCP connection from being re-used. This means that every time Prometheus wanted to send datapoints to the LTS, it effectively had to open a new connection, hence the observed behavior.
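
We won't paste the Prometheus code here, but the usual shape of this mistake in Go is failing to drain the response body before closing it: the http.Transport only puts a connection back into its idle pool once the body has been read to completion and closed. A minimal, hypothetical sketch (sendBatch and the payload handling are illustrative, not the actual Prometheus code):

package remote

import (
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
)

// sendBatch posts one batch of encoded samples to the remote storage.
// The payload encoding doesn't matter here; what matters is what happens
// to the response.
func sendBatch(client *http.Client, url string, payload []byte) error {
	resp, err := client.Post(url, "application/x-protobuf", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	// Draining the body before closing it is what lets the underlying TCP
	// connection go back into the transport's idle pool for re-use by the
	// next request. Skipping the drain means a fresh connection (and a
	// fresh TIME-WAIT entry) for every single batch.
	io.Copy(ioutil.Discard, resp.Body)
	return resp.Body.Close()
}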

The patch itself is simple, and a quick dump-and-compare proved it was working as intended by allowing connection re-use.

fixed wireshark screen

The patch is included in Prometheus 2.9.2 and reduces the CPU consumption by reducing the number of opened connections. As you can guess from the following graph, we deployed it on our own instances on the 25th around 12:00.

cpu reduction

Et voilà!

Peaks and trends detection in time series for social data

by Dimitri Trotignon, Data Science Engineer, January 8, 2019 - 3 minutes read

Originally published at https://medium.com/synthesio-engineering/peaks-and-trends-detection-in-time-series-for-social-data-7ce5c54f5c33

This article is the first part of a series of articles about time series analysis and automated pulse and trend detection.

Social media are a marvellous source of data for learning about our clients. The goal of Synthesio is to collect these data, enrich them, and then provide dashboards to analyze them. But we rapidly came to the conclusion that users do not have the time to do all the manipulations needed to spot insights.

The objective of the Headlines project at Synthesio is to provide a global overview of points of interest across all the variables of a dashboard in one click.

This project is divided into two parts: automatic pulse detection and automatic trend detection.

In this first part, we will cover the data collection and the first exploratory analysis.

In our case, the data is stored in ElasticSearch databases. As we have to work on time series, we needed to aggregate the data. We chose three intervals to extract as many insights as possible: days, weeks and months.

The point of having these three different ranges is that we can spot different types of insights and events. Some are really concentrated on one day, for example Black Friday, which generates a lot of content on social media. Others span several days or weeks, for example sales or media campaigns. In those cases, it is interesting to be able to spot insights over periods of weeks or months.
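
To make the aggregation step concrete, here is a minimal sketch, in Go rather than the Python used by the actual module, of the kind of query involved: an Elasticsearch date_histogram bucketed on the chosen interval. The endpoint, index and field names are placeholders, not our actual schema.

package headlines

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// countsPerInterval asks Elasticsearch for a date_histogram of mention
// counts bucketed by the given interval ("day", "week" or "month").
// esURL, index and the "published_at" field are placeholders.
func countsPerInterval(esURL, index, interval string) (*http.Response, error) {
	query := map[string]interface{}{
		"size": 0,
		"aggs": map[string]interface{}{
			"mentions_over_time": map[string]interface{}{
				"date_histogram": map[string]interface{}{
					"field":    "published_at",
					"interval": interval,
				},
			},
		},
	}

	body, err := json.Marshal(query)
	if err != nil {
		return nil, err
	}

	return http.Post(fmt.Sprintf("%s/%s/_search", esURL, index), "application/json", bytes.NewReader(body))
}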

We also give the user the opportunity to choose variables and metrics. For variables, users can choose between topics, countries, sentiment and site types (media, social media, etc.), for example.

As for metrics, they will be able to choose among several options such as volume of mentions, sum of interactions (comments, likes, etc.) or reach. The idea is to see, for example, whether many people are posting about a subject during a particular period, or whether some subjects generate lots of interactions from users.

Once we have chosen our dimensions and metrics, let’s see what the data looks like.

Daily data:

Weekly data:

Monthly data:

We can clearly see that the data looks different depending on the time period. Some datasets show a strong seasonality while others do not. Generally speaking, monthly data is the smoothest. Daily data, on the contrary, is very unstable and full of irregularities.

In the example above, which is based on a dashboard about Formula 1 drivers and teams such as Renault F1, Mercedes AMG, etc., we can clearly see a strong seasonality related to the Grands Prix held every two weeks. This is a perfect example of how unique every dashboard is. Other dashboards, notably about tea, have more mentions during the winter but no weekly seasonality.

As we are a SaaS company, we need to build a scalable, robust and efficient model that performs well on different types of time series. The model needs to be usable by many clients at the same time, and it also needs to answer client queries quickly.

From an organisational point of view, the pulse and trend detection module will be built as an API written in Python. As the majority of Synthesio services are currently written in Golang, another API will be created to communicate with the front end.

Data Scientist vs Data Science Engineer

by Julien Plée, Chief Technical Officer, September 28, 2018 - 2 minutes read

Originally published at https://medium.com/synthesio-engineering/data-scientist-vs-data-science-engineer-8324c190506e

Data Science jobs are many and varied nowadays. At Synthesio we felt the need to define the difference between a Data Scientist and what we call a Data Science Engineer.

Successful Data Science projects are a matter of coordination between various skill sets and people! The Data Science team at Synthesio is mostly composed of what we like to call Data Science Engineers.

In short, these are people who know enough about both software engineering and Data Science to bring great AI work into production, taking scalability and reliability concerns on board.

If you are a Data Science Engineer at Synthesio, the real work begins when you ship your algorithm to production.

We could define a Data Scientist (there are actually a lot of definitions, depending on your organisation) as someone with a PhD in Data Science. They work on algorithms: they create, modify and improve them over time.

Typically they create algorithms and develop prototypes using their laptops.

A Data Science Engineer is the “applied” version of the Data Scientist. They are keen to deploy their work in production and analyse its behaviour on real use cases. Data Science Engineers master the use of algorithms; even with great knowledge about them, they don't necessarily have the finest-grained view of how exactly they work inside, but they excel at choosing the best one for every use case they fulfil.

They are able to take a prototype that runs on a laptop and make it run reliably in production, sometimes with a little help from Data Engineers.

These are some important characteristics defining what a Data Science Engineer is:

  • Data Science Engineers have strong knowledge of the Data Science field
  • They are capable of working with the Data Engineers and Site Reliability Engineers who evolve and maintain the production systems
  • They understand software development methodology and are pretty skilled with the tools developers use daily (IDEs, continuous deployment pipelines…)
  • In order to make data products that work in production at scale, they focus on holistic design and the use of components such as logging and A/B testing infrastructure
  • As data pipelines and models can go stale and need to be retrained, Data Science Engineers need to be up to speed on issues that are specific to monitoring data products in production and know how to detect data smells

Introducing Synthesio's R&D

by Julien Plée, Chief Technical Officer, September 28, 2018 - 3 minutes read

Originally published at https://medium.com/synthesio-engineering/introducing-synthesios-r-d-2519515e4959

Who we are

Synthesio is a software vendor that provides brands and agencies around the world with the social listening tools and audience insights they need to measure the impact of social and mainstream media conversations.

To deliver these capabilities, we strive to build an awesome R&D team.

The R&D team of Synthesio is composed of a bunch of talented people divided into four sub-teams:

  • the Site Reliability Engineering (SRE) team, taking care of our infrastructure and hosting
  • the Front-end Engineering team, ensuring we deliver awesome features to our customers by using the most appropriate technologies
  • the Data Engineering team, thriving on manipulating tons of data and billions of documents every day
  • the Data Science team, spearheading our latest AI innovations in Natural Language Processing (NLP), Image Recognition and Trend detection

Each team fosters its own set of special skills, which gives us the ability to create, maintain and advance our features and software.

What we’re working on

Our core challenge is the scalability of an infrastructure that manipulates big data and displays insights to our customers.

We strongly believe in open-source projects: we use them and we are happy to contribute to them.

We use various technologies such as Cassandra, ElasticSearch, Ansible, Kafka… which are well suited when, for example, you tackle reindexing 90 billion documents, and which integrate perfectly into our microservices constellation.

On the Front-end side we thrive working with ES6, ReactJS with Redux, Yarn…

The technical problems we have to solve are very stimulating because they are pretty rare, due to the amount of data we have to manipulate (around 3 petabytes).

Whether you are a Data Scientist or a Front-end Engineer, you will probably work in one of our feature squads. A squad is a team with at least one representative of every trade, for example two Front-end Engineers, two Data Engineers, one Data Scientist and one Product Owner.

We have an obsession for measuring how things work: how our features are used by customers, the status of the micro-services in our processing chain, the performance of every node of the infrastructure…

What are our beliefs

We strongly believe that humanity, career paths and skills development are the most important things a company can bring to its people.

Thus, at Synthesio you can only thrive by fostering mutual aid with your peers. Trust and kindness are also key values for our people. When you run into an issue, everyone will be in a “how may I help you?” mindset, and this is simply our best solution so far to ensure everyone is happy, our customers as well as our developers.

Quality in what we do is also a major thing for us. As a software vendor, our code is our main asset. Quality must show through in our features, and customer feedback is something we constantly look for to improve our software. It is also a big way to improve our skills.

Don’t hesitate to reach out to us if you want to discuss, organize a meetup with us (in our building or elsewhere), or share anything else you would like to bring to us.

Testing with containers

by Romain Baugue, Senior Data Engineer, March 5, 2018 - 11 minutes read

Originally published at https://elwinar.com/articles/testing-with-containers

In this article, I explain why and how we use containers for testing purposes at Synthesio. For simplicity's sake, most examples assume the project is written in Go and uses a MySQL database, and we will use Docker as the container runtime, but the method discussed can be, and is, used with any language or technology and with more complex setups.

Dependencies are complex pieces of software, for a good reason. A database, for example, abstracts away a huge amount of complexity and features so that our code doesn't have to handle them itself and can focus on business features while staying simpler.

Testing code that uses such a dependency, however, is a harder job than it seems, made unnecessarily complex by most testing frameworks. One of the reasons for this is that when thinking, speaking, or reading about testing, we almost use the term unit-testing interchangeably. And all in all, unit-tests are badly adapted to testing code that directly uses a complex dependency. Which isn't surprising, given that the whole point of unit testing is to not use any dependency.

Mocking is not the solution you’re looking for

In the current state of the art, the most commonly accepted solution for unit-testing a piece of code with a dependency is to fake it: give the unit under test something that mimics the dependency and behaves as we want, so we can verify that the code reacts as expected. This method, called mocking, is actually a bad solution to the problem at hand.

To be perfectly honest, faking the dependency is a good solution for a certain class of dependencies. Unit-testing was born during the rise of object-oriented programming, and is unsurprisingly well adapted to code that takes as a dependency an interface of reasonable complexity, the mock generally being an in-memory, dumb version of the real dependency. A database, for example, is generally too complex for this.

Let’s take a concrete example to illustrate the point. Golang has a library for the purpose of mocking a SQL database during tests, sqlmock. Here is an example of code, taken from the library’s Github repository.

func TestShouldUpdateStats(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("an error '%s' was not expected when opening a stub database connection", err)
	}
	defer db.Close()

	mock.ExpectBegin()
	mock.ExpectExec("UPDATE products").WillReturnResult(sqlmock.NewResult(1, 1))
	mock.ExpectExec("INSERT INTO product_viewers").WillReturnResult(sqlmock.NewResult(1, 1))
	mock.ExpectCommit()

	// now we execute our method
	if err = recordStats(db, 2, 3); err != nil {
		t.Errorf("error was not expected while updating stats: %s", err)
	}

	// we make sure that all expectations were met
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}

If we think about it, the real interface between our code and the database isn't the actual method called, it's the query language itself. And it isn't the query language that is mocked here, but simply the implementation code that is supposed to communicate with the database, leaving the user to do the actual mocking, verification, etc.

The result is that tests using this kind of library generally only do half of the job, and generally not the part that is actually worth testing. Checking whether a query wrapper really calls the expected function is a near-useless job, akin to a test that verifies the value of a constant.

A useful test would check that, after calling the method, the value in the database is the expected one. That would require faking the logic of the database itself and letting the implementation do whatever it wants with it. A useful test wouldn't need to be completely re-written when the implementation of the method changes ever so slightly.

The problem is that a library that would implement such a feature would be hugely complex. It would probably have to implement complex logic to parse the SQL syntax, understand the queries, etc. If such a library existed, it would probably resemble more of a full-fledged in-memory SQL database than a mocking library. (I actually did exactly that using in-memory sqlite databases once upon a time.)
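
For the curious, the in-memory SQLite variant mentioned above can be sketched in a few lines with the github.com/mattn/go-sqlite3 driver; it only holds up as long as the queries avoid MySQL-specific syntax.

package store_test

import (
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func TestWithInMemorySQLite(t *testing.T) {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatal("opening in-memory database:", err)
	}
	defer db.Close()

	// Each new connection to ":memory:" gets its own empty database, so
	// force the pool to stick to a single connection.
	db.SetMaxOpenConns(1)

	// The schema has to be re-created for every test, in SQLite's dialect.
	if _, err := db.Exec("CREATE TABLE foo ( id INTEGER PRIMARY KEY )"); err != nil {
		t.Fatal("creating schema:", err)
	}

	// ... exercise the code under test against db ...
}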

The amount of work needed to build such a library would be huge, to say the least. And even then, it wouldn't be complete: it would have to understand vendor extensions, bugs, the various versions of the target vendor database, etc., to be ultimately useful. Pouring such an amount of time into this is probably not worth it, when we have a simpler and ready solution: why not just use the real database?

This approach has existed for a long time too, but it suffered from multiple flaws, the most important being the need to install and configure a database for the tests themselves, which generally leads to a project needing a complex setup for something that should be simple and quick. Or you would need to maintain an always-on database in your organization just for running tests, and one for each version used in production. Clumsy at best, not re-usable, and generally too many constraints.

Luckily for us, the situation has changed in the last few years with the arrival of containers as an easy, simple and globally accessible solution. With a little, not-even-complex tooling, using a real database in a container for testing is surprisingly easy and leads to clean and concise tests.

Do or do not, there is no try

So, we want to run tests that use a database against an actual database. How do we do that?

The first step is to have the database in a container. Let's spawn one for that, and throw in a docker-compose.yml for good measure.

version: "2.1"

services:
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"

Now, running our tests is a simple matter of running docker-compose up before the tests, and using localhost:3306 as the database address. Dead simple. A little too naive, however.

For example, exporting ports that way is like opening a door to a world of conflicts, ad hoc conventions, etc, which in the long run would be hard to maintain. One solution is to run the tests from another container, linked to the mysql one so it can access the database using the container’s network.

For this, we will simply add an app container in our docker-compose.yml.

version "2.1"

services:
  app:
    image: golang:1.10
    links:
      - mysql
  mysql:
    image: mysql:5.7

Now, the tests must use mysql:3306 as the address, and the command for running the tests becomes docker-compose run --rm app /usr/bin/env bash -c "go test". Better on the operation side, but not something we want to type every time we want to run tests, configure CI, etc.

For simplicity’s sake, let’s put that in a Makefile. And while we’re at it, add a build command too so we can also build in the container.

exec = docker-compose run --rm app /usr/bin/env bash -c

.PHONY: build
build::
	${exec} "go build"

.PHONY: test
test::
	${exec} "go test"

Much better. Running the tests is back to a simple make test, the containers are spawned as needed without intervention, multiple projects can coexist without conflict, and using Make (or any other build tool) probably integrates with any complex workflow. All is well, our job here is done.

Until we actually run the tests….

# make test
[…]
--- FAIL: TestApp (0.00s)
 <autogenerated>:1: mysqltest: dial tcp 192.168.0.2:3306: getsockopt: connection refused
[…]

What is happening here? When it creates the containers, Docker is smart enough to wait until the mysql container is running before starting the app container, but most databases need a little warmup before being ready, so when the test code tries to connect to MySQL, the database is still initializing and cannot accept the connection.
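
One way to absorb this warmup from the test side is a small retry loop around db.Ping(). Here is a sketch of that approach, although it is not what we ended up using.

package testutil

import (
	"database/sql"
	"fmt"
	"time"
)

// WaitForDB pings the database until it accepts connections or the timeout
// expires, absorbing the container's warmup time.
func WaitForDB(db *sql.DB, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := db.Ping()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("database not ready after %s: %v", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}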

Another solution, the one we use, is a tool like https://github.com/jwilder/dockerize as the app container entrypoint. Dockerize waits until the configured port is ready before running the container command, which keeps the waiting logic out of the test code entirely. There is a little issue here, though: the golang container doesn't include dockerize.

Containers to the rescue! The simplest solution is to build a custom image that does the job.

FROM golang:1.10.0

COPY entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/usr/bin/env", "bash"]

RUN curl -sSL "https://github.com/jwilder/dockerize/releases/download/v0.5.0/dockerize-linux-amd64-v0.5.0.tar.gz" | tar -xz -C /usr/local/bin

With entrypoint.sh being a file along these lines. (It can probably be replaced by something shorter in the docker-compose.yml’s entrypoint option.)

#!/bin/bash
exec "$@"

Then, you can change the docker-compose.yml file to use your custom image (we will call it custom-golang) and dockerize.

version "2.1"

services:
  app:
    image: custom-golang:1.10
    links:
      - mysql
    entrypoint: dockerize -timeout 20s -wait tcp://mysql:3306 entrypoint.sh
  mysql:
    image: mysql:5.7

OK, we have a database container ready for action, it's time to code! Let's write something that resembles an actual test.

func TestFoo(t *testing.T) {
	// Prepare the connection to the database container. The DSN needs the
	// trailing slash and multiStatements=true so the driver accepts the
	// multi-statement setup query below.
	db, err := sql.Open("mysql", "root:@tcp(mysql:3306)/?multiStatements=true")
	if err != nil {
		t.Fatal("connecting to database server:", err)
	}
	defer db.Close()

	// Create the database and tables necessary.
	_, err = db.Exec(`
		CREATE DATABASE app;
		USE app;
		CREATE TABLE foo ( id INTEGER UNSIGNED PRIMARY KEY );
	`)
	if err != nil {
		t.Fatal("initializing database:", err)
	}

	// Call the tested function.
	result := foo(db)

	// Check the result.
	var expected = "bar"
	if result != "bar" {
		t.Errorf("unexpected output: wanted %s, got %s", expected, result)
	}
}

While encouraging, this code has a number of problems that we need to address before it is reliable. The first is that it will only work once: the created database is persistent, and the test will fail if it already exists. Additionally, we can't run tests in parallel either.

We could add a DROP statement before the database creation, but it would only solve half of the problem. A better solution would be to generate a random name before actually running the query and use this as the database name.

// random.Alpha is a helper that returns a random alphabetic string.
name, _ := random.Alpha(10)

_, err = db.Exec(fmt.Sprintf("CREATE DATABASE `%[1]s`; USE `%[1]s`; CREATE TABLE foo ( id INTEGER UNSIGNED PRIMARY KEY );", name))

Good enough. Although a little rough, this solution works and solves all the problems at hand. We can refine it further by moving the related code into a dedicated helper, so the test itself is cleared of unnecessary code and each test can use it independently.

func Spawn(t *testing.T, address, schema string) *sql.DB {
	// Prepare the connection to the database container, allowing
	// multi-statement queries for the setup below.
	db, err := sql.Open("mysql", fmt.Sprintf("root:@tcp(%[1]s)/?multiStatements=true", address))
	if err != nil {
		t.Fatal("connecting to database server:", err)
	}

	// Create the database and tables necessary.
	name, _ := random.Alpha(10)

	_, err = db.Exec(fmt.Sprintf(" CREATE DATABASE `%[1]s`; USE `%[1]s` ", name))
	if err != nil {
		t.Fatal("initializing database:", err)
	}

	// Load the schema.
	_, err = db.Exec(schema)
	if err != nil {
		t.Fatal("loading schema:", err)
	}

	// Return the database handle, now pointing at the fresh database.
	return db
}

func TestFoo(t *testing.T) {
	// Create a fresh database for use in this test.
	db := Spawn(t, "mysql:3306", "CREATE TABLE foo ( id INTEGER UNSIGNED PRIMARY KEY )")
	defer db.Close()

	// Call the tested function.
	result := foo(db)

	// Check the result.
	var expected = "bar"
	if result != "bar" {
		t.Errorf("unexpected output: wanted %s, got %s", expected, result)
	}
}

Another big improvement would be to load the database creation queries directly from a file, so the schema and fixtures can be shared between different tests and won’t pollute the test code.

func Spawn(t *testing.T, address string, fixtures ...string) *sql.DB {
	// Prepare the connection to the database container, allowing
	// multi-statement queries for the setup below.
	db, err := sql.Open("mysql", fmt.Sprintf("root:@tcp(%[1]s)/?multiStatements=true", address))
	if err != nil {
		t.Fatal("connecting to database server:", err)
	}

	// Create the database and tables necessary.
	name, _ := random.Alpha(10)

	_, err = db.Exec(fmt.Sprintf(" CREATE DATABASE `%[1]s`; USE `%[1]s`; ", name))
	if err != nil {
		t.Fatal("initializing database:", err)
	}

	for _, fixture := range fixtures {
		Load(t, db, fixture)
	}

	// Return the database handle, now pointing at the fresh database.
	return db
}

func Load(t *testing.T, db *sql.DB, fixture string) {
	raw, err := ioutil.ReadFile(fixture)
	if err != nil {
		t.Fatalf("reading fixture %s: %s", fixture, err.Error())
	}

	// Load the schema.
	_, err = db.Exec(string(raw))
	if err != nil {
		t.Fatalf("loading fixture %s: %s", fixture, err)
	}
}

func TestFoo(t *testing.T) {
	db := Spawn(t, "mysql:3306", "testdata/schema.sql")
	defer db.Close()

	// Call the tested function.
	result := foo(db)

	// Check the result.
	var expected = "bar"
	if result != "bar" {
		t.Errorf("unexpected output: wanted %s, got %s", expected, result)
	}
}

Conclusion

This testing method can be refined a bit more by adding a few tricks that won’t be covered here, like cleaning the database after a successful test, or adding templating to the fixtures. It can be adapted to almost any kind of database, or even other types of dependencies like message brokers, other services, etc.
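
To give a taste of the first trick: assuming Spawn is amended to also return the random name it generates, the cleanup can be a small helper that drops the throwaway database once the test is done. A rough sketch:

package testutil

import (
	"database/sql"
	"fmt"
	"testing"
)

// Cleanup drops the database created by Spawn and closes the handle. It
// assumes Spawn also returns the random name it generated, so a test can
// defer Cleanup(t, db, name) right after spawning.
func Cleanup(t *testing.T, db *sql.DB, name string) {
	if _, err := db.Exec(fmt.Sprintf("DROP DATABASE `%s`", name)); err != nil {
		t.Logf("dropping database %s: %s", name, err)
	}
	if err := db.Close(); err != nil {
		t.Logf("closing database: %s", err)
	}
}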

Among the downsides is the fact that it is slower than something like a mock. Spinning up the container is cheap but not negligible, and loading huge fixtures can take a while, but all things considered it is often a small price to pay compared to the correctness and actual usefulness of the tests written this way.

Feel free to reach out if you want more details, have questions, or just want to chat. I would love to hear your opinion on the subject.

Et voilà !

© Synthesio 2019
