Triangles and speech bubbles

When I started using CSS3, I quickly realized how easy it was to make a circle.

div {
  width: 30px;
  height: 30px;
  background-color: blue;
  border-radius: 50%;
}

This made me think of new ways to incorporate HTML elements as design features instead of using images, which has the benefit of loading faster. However, one basic shape seemed hard to achieve: the triangle. In truth, there is no triangle primitive in HTML or CSS. You can make a rectangle look like a triangle, but you have to cover up more than half of it with another element.

#div1 {
  height: 10px;
  width: 10px;
  background-color: blue;
  transform: translate(10px, 5px) rotate(45deg);
  z-index: 2;
  position: relative;
}

#div2 {
  height: 30px;
  width: 30px;
  background-color: lightgrey;
  z-index: 3;
  position: relative;
}

You can make pretty much any triangle this way: pick an appropriate rectangle to start from, then transform it and hide the part you don't want.

div {
  height: 15px;
  width: 20px;
  background-color: blue;
  transform: translate(2px, 17px) skew(-10deg, 60deg);
}

You can make some decent speech bubbles this way.

div {
  height: 15px;
  width: 15px;
  background-color: blue;
  transform: translate(50px, 35px) skew(0deg, 60deg) rotate(-15deg);
}
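
For completeness, here is a rough sketch of the structure I have in mind; the class names and sizes are made up, the tail is the small skewed div styled as above, and the bubble body is just a rounded rectangle:

<div class="bubble">Rawr!</div>
<div class="bubble-tail"></div>

.bubble {
  width: 80px;
  padding: 10px;
  color: white;
  background-color: blue;
  border-radius: 10px;
}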

Unfortunately, skew() makes box-shadow go haywire (at least in some browsers).

But you can get away with not having a shadow on the speech bubble arrow in most cases.

Another way to hide the unwanted part of the rectangle is to use overflow: hidden. This means we can make a pie chart.

How it's supposed to look: [pie chart image]
#bounding-circle {
  width: 50px;
  height: 50px;
  position: relative;
  background-color: lightgrey;
  border-radius: 50%;
  overflow: hidden;
  box-shadow: 1px 1px 3px rgba(0,0,0,0.5);
}

#blue {
  width: 25px;
  height: 25px;
  background-color: blue;
  transform: translate(25px, -23px) skew(0, -63deg);
}

#yellow {
  width: 50px;
  height: 50px;
  background-color: yellow;
  position: absolute;
  bottom: 0; right: 0;
  transform: translate(25px, 69px) skew(0, 60deg);
}

#red {
  width: 50px;
  height: 50px;
  background-color: red;
  position: absolute;
  right: 0; top: 0;
  transform: translate(24px, 0) rotate(45deg) skew(20deg, 20deg);
}

#green {
  width: 25px;
  height: 25px;
  background-color: green;
  position: absolute;
  left: 0; bottom: 0;
}
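
For reference, the markup I am assuming nests the colored slices inside the bounding circle, so that overflow: hidden on the parent clips them to the circle:

<div id="bounding-circle">
  <div id="blue"></div>
  <div id="yellow"></div>
  <div id="red"></div>
  <div id="green"></div>
</div>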

I’ve omitted vendor-specific CSS for clarity in my examples.

Keep in mind that browsers that do not support CSS3 will render a not-so-graceful fallback. You can use position: absolute to place the triangle element under the cover element from the start, which makes the fallback more graceful. Since the z-index property only works on positioned elements anyway, this is probably the way to do it.
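
As a rough sketch of that fallback (selectors and values are hypothetical), start with the triangle element sitting entirely behind the cover and let the transform pull a corner out; a browser that ignores the transform then simply shows the cover:

/* assuming #triangle and #cover share a positioned parent element */
#triangle {
  position: absolute;
  top: 0; left: 0;                                /* start fully behind the cover */
  z-index: 1;
  transform: translate(10px, -5px) rotate(45deg); /* the transform pokes a corner out */
}

#cover {
  position: absolute;
  top: 0; left: 0;
  z-index: 2;
}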

by Peter Lindsten.

We're back!

Thanks to my own incompetence, I did not back up this site's database when I moved to new hardware. archive.org, however, had saved the content, so everything is back to normal. Except we're not on WordPress this time.

Custom-built, statically generated pages, weeeee...

Also supports markdown now! WEEE

by Peter Lindsten.

Lessons learned about Jenkins

Jenkins is not always the easiest tool to work with, sometimes getting more in the way than helping. This page attempts to document some issues I've run into during my time with Jenkins. Some solutions are applicable to systems in general, some to other CI daemons. It is intended to be continually updated, which has happened once so far.

Documentation is rather poor

This is something I think anyone who has tried to do anything more than just the basics has run into. Much of it stems from the fact that most functionality in Jenkins is provided by plugins, and the plugins (even official ones) depend on their maintainers, who are usually not as invested as the maintainers of Jenkins itself.

The goal of the core project seems to be to automatically generate most (all?) documentation so that it doesn't matter if it comes from the core or from a plugin. This still has quite a ways to go before it will cover more than just the bare basics.

Solution: There isn't really one... read the source, use whatever docs are available, create spike solutions.

Console output is slow

Streaming logs "live" to the Jenkins master may cause significant slowdowns in builds that produce large amounts of logs.

Solution: Don't send large amounts of logs to the master. Pipe the output of commands run on slaves to a file, and then archive that file instead. There is an issue in the Jenkins issue tracker to resolve this problem with external logging mechanisms: JENKINS-38313.
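
A minimal sketch of that approach in a pipeline step; the build command and file name are just examples:

// send the noisy output to a file on the slave, then archive the file
sh 'make build > build.log 2>&1'
archiveArtifacts artifacts: 'build.log'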

Pipelines

Pipelines are slower than the equivalent freestyle job

One reason for this is that pipelines are more "durable" than freestyle jobs. If Jenkins crashes during execution of a pipeline, it can be resumed, most of the time. Not so with freestyle jobs.

The way this is implemented is by writing every step taken to disk, much like a journalling file system. This means that if you have lots of steps in your pipeline, there will be lots of extra writes to disk, and possibly communication with the master. If your disk is slow (NFS, for example) this might have a noticeable performance impact.

Solution: Since LTS 2.73 (+ related plugins) you can override the durability setting as outlined in the linked article.

options { durabilityHint("PERFORMANCE_OPTIMIZED") }

(No, this was not really documented anywhere at first; it is now documented through the declarative snippet generator.) At the pipeline level, this overrides the default durability to be essentially the same as that of a freestyle job.

Testing pipelines is hard

When we build pipelines-as-code this can quickly become a problem. JenkinsPipelineUnit offers a solution, allowing you to mock out many things, but it does not support declarative pipelines. I have not had much luck with it, preferring declarative pipelines wherever possible.

There is some help to be had for declarative pipelines however, the jenkins-cli.jar (accessible from /cli on a master) allows you to run the declarative-linter. This seems to be the same thing that Jenkins runs before it starts to execute declarative pipelines. It is fairly trivial to set up a shell script to run a file through this linter. I personally set up a hotkey in my editor (IntelliJ) to run such a shell script on the current file. If working with a repository which has mostly pipelines, I also like to set up a 'build' job that runs all pipelines through the linter. This can guard against bad commits if you have a build-before-commit strategy.
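
A minimal version of such a script might look like this; the Jenkins URL and credentials are placeholders:

#!/bin/sh
# Run the given Jenkinsfile through the declarative linter on the master
java -jar jenkins-cli.jar -s https://jenkins.example.com/ -auth user:apitoken declarative-linter < "$1"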

The best strategy to minimize this problem, however, is to not put any logic in the pipeline that doesn't explicitly have to be there. In general, what we want out of our pipelines is reporting and perhaps some conditionally run stages. Everything else can move out into more easily testable scripts. I prefer Python in the general case and Gradle if I want up-to-date checks.

It should be noted that pipelines that 'build' code (Ant/Maven/Gradle, etc.) should use the build tool as far as possible, since this minimizes the differences seen by the CI daemon and by developers building on their own machines. Preferably there should only be one way to build it.

Solution (opinionated): Reduce complexity through logic extraction out of the pipeline.

Passing back information from called processes

Passing information in is usually straightforward: command-line arguments. This is a one-way street, however. For passing information back we have a couple of options:

  • Pipes
  • Files
  • Return codes

Pipes: stdout: bat, sh and powershell all have the returnStdout parameter which, when set to true, makes the step return everything printed to stdout when it finishes. It also prevents this output from being written to the build log (console output). It is usually wise to .trim() the output in the pipeline.
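
For example, capturing a commit hash; the command itself is just an illustration:

// in a declarative pipeline this goes inside a script { } block
def rev = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()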

stderr can be redirected to stdout in bat & sh with 2>&1, but usually it is useful to let stderr go to the build log instead. Example: Python's logging lib writes to stderr if nothing else is specified.

Other pipes can be used as well, but on *nix at least, these really just work like files for our purposes.

The obvious drawback here is that if you cannot fully control your process's stdout, you need to do filtering in the pipeline, which is less than ideal.

There is also a little gotcha with bat: echo of the commands is ON by default, as in all .bat scripts. The simple way to prevent this is to prepend @ to your bat lines, which suppresses the echo for that line only. @echo off can be used as the first line of larger blocks to suppress command echo for that entire block (really, the whole file).
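
For instance; the commands here are just illustrations:

// @ suppresses the echo for that single line
def winVer = bat(script: '@ver', returnStdout: true).trim()

// @echo off as the first line suppresses echo for the whole block
bat '''@echo off
echo Doing work without echoing every command...
dir /b
'''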

Files: Perhaps the most obvious way is to let the process write to a file which you then read in the pipeline. The drawback here is that there is disk IO for something that should perhaps only live in primary memory. When doing this, let the pipeline pass in the path of the file to write to; this removes the potential for de-synced file names between the pipeline and the script.

If you want to save the passed information, this is probably the way to do it, since it is easy to archive or stash the file for later.
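
A sketch of the file-based approach; the script and file names are made up:

def resultFile = 'build-info.txt'
// the pipeline decides the path, the (hypothetical) script just writes to it
sh "python collect_info.py --output ${resultFile}"
def info = readFile(resultFile).trim()
archiveArtifacts artifacts: resultFile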

Return codes: bat, sh, powershell all support returnStatus, which if set to true makes the step return the status code. This also prevents the pipeline from failing if the status code is non-zero.

Return codes can be used as a way to pass information back to the pipeline, even if this is somewhat standards-breaking. If you only wish to pass a few bits of information, like flags or a small integer, this can be used effectively. On most POSIX systems the value might be truncated to 8 bits (mod 256), which limits the uses. Windows uses 32-bit integers for return codes.
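
For example, using the code to drive a decision; the script name and the meaning of the code are made up:

def rc = sh(script: './check_environment.sh', returnStatus: true)
if (rc == 2) {
    echo 'Environment needs provisioning first'
}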

Other ways are possible (through a database, over the network, etc.), but in most cases these are overkill.

Solution: Use what best fits your situation. Files are simple, pipes are fast (no disk IO), return codes can be used to drive your colleagues insane.

First version: 2018-03-11

Update 'bat gotcha': 2018-06-01

by Peter Lindsten.

Solution domains

Problems and solutions exist within certain domains, and when the problem domain and the solution domain do not match, the result is usually less than ideal. This is not to say that solutions from other domains are always bad; they're not. This is especially true when there is no solution available in the problem domain.

People vs Technology

Note: I use the term developer loosely here, meaning anyone involved in software development on a technical level. Official title may be tester, UX-designer, data engineer, etc.

One mismatch I see quite frequently is between the People and Technology domains.
Here is a chart with the mismatch marked.

[Chart: the People and Technology problem/solution quadrants, with the mismatched cross-domain quadrants marked]

It may not be immediately obvious why the cross-domain solutions are problematic, so let me share some examples.

All developers must use the same IDE

This is a Technical problem with a People solution. The Technical problem being solved is that of "works on my machine": local configuration affecting builds or running code. The solution is in the People domain: a rule specifying that all developer machines should be identical as far as possible.

Not every application of this goes the whole way and specifies that every bit of software on the computers has to be exactly the same, but that is how far you must go for this to be completely effective. The extreme nature of the complete solution is a tip-off that the problem and solution domains are not the same.

The proper solution in this case is to use a build system which controls its own environment to the necessary degree. Paired with dedicated integration computers (perhaps powered by a CI daemon), this solves the problem by redefining what "works" means.

If it doesn't build on the integration computer, it doesn't work. If it doesn't run on the integration system, it doesn't work. I don't care if it runs on your machine; if you cannot make it run on the integration system, it does not run.

So let devs use whatever development environment they want, as long as the code works on the integration system. Happier people are more productive, more creative people.

Use tool X to get through the company proxy/firewall

This is a People problem with a Technical solution. The People problem is the faulty assumption that the more we lock everything down, the more secure our own network will be. The solution is a technical one: a tool to circumvent the restrictions put in place.

Developer machines are difficult to secure properly. You cannot restrict what a developer can run on their machine in a reasonable way, since arbitrary binaries must be able to run (the software being developed). This means that there is pretty much always a technical workaround.

A proper solution? Make security an explicit policy; extreme security requirements call for extreme policies: separate development machines from the internet completely. Let them have their own network, and inform developers of why this is necessary. This last step is the most important.

The cost of security is vigilance. Attackers need only find a single security flaw, but defenders need to plug every hole. Defense in depth and proactive security on ALL levels are general strategies employed in the security industry to reasonable effect.

Information is key. Inform people of why security is important, what the ramifications are when a breach happens, possible attack vectors, and what types of attacks exist. You cannot defend against that which you cannot imagine.

Trust is paramount. Make sure you build networks of trust with your employees and colleagues. Put audit trails in place, so that when something happens, you can trace it and rectify the trust problem. This usually means personal credentials everywhere.

What does NOT work is hindering people's day-to-day activities; they will just find a workaround, which will inevitably open a larger security hole. It will also make people hate the security group and not want to work with them. This is an anti-security pattern.

Make sure that people have the access they need to do their jobs. This means opening up that proxy to outbound traffic that is needed by developers or setting up tunnels to the resources that need to be accessed. Make it as easy as possible for your developers to do their jobs.

The best security is the kind that flows through everything you do, yet does not disturb you; you do not notice it.

Summary

Watch out for solutions which are not in the same domain as their problem; such solutions may do more harm in both domains than they solve. Challenge the validity of such solutions and prefer a solution native to the problem domain. If you cannot find one, ask an expert in the field or find out how others have solved the same problem.

There is a lot more that can be said about this subject. I will most likely be able to add more examples, although I wish I would not.

First version: 2018-05-27

by Peter Lindsten.