You're not going to believe what I'm about to tell you

This is a comic about the backfire effect.


Symmetry Breaking

Imagine that you are an accountant. You are responsible for manipulating arcane symbols, concepts, and procedures in order to create deeply complicated and detailed financial models for your business. The stakes are enormous. Accuracy is essential. Millions wait to be lost or gained based upon your rare and esoteric skills.

How do you ensure your performance? Upon what disciplines do you depend? How will you make sure that the models you build, and the advice they imply, are faithful to your profession, and profitable for your business?

For the last 500 years, accountants have been using the discipline of double-entry bookkeeping. The idea is simple; but the execution is challenging. Each transaction is recorded twice, concurrently, within a system of accounts: once as a debit, and again as a credit. These debits and credits follow separate but complementary mathematical pathways, through a system of categorized accounts, until they converge on the balance sheet in a subtraction that must yield a zero. Anything other than a zero implies that an error was made somewhere along one of those pathways.
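To make the mechanism concrete, here is a toy sketch in Kotlin; the account names and amounts are invented for illustration. One transaction is recorded twice, and the bookkeeper's check is that the two pathways cancel to zero.

    // A toy model of one double-entry transaction; names and amounts are illustrative.
    data class Entry(val account: String, val debit: Double = 0.0, val credit: Double = 0.0)

    // Buying equipment with cash: the same transaction recorded once as a debit
    // and once as a credit, in two different accounts.
    val purchase = listOf(
        Entry("Equipment", debit = 1_000.0),
        Entry("Cash", credit = 1_000.0)
    )

    fun main() {
        val difference = purchase.sumOf { it.debit } - purchase.sumOf { it.credit }
        // Anything other than zero means an error crept in along one of the pathways.
        check(difference == 0.0) { "Books do not balance: difference = $difference" }
        println("Books balance.")
    }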

We programmers have a similar problem. We manipulate arcane symbols, concepts, and procedures in order to create deeply complicated and detailed models of behavior for our businesses. The stakes are enormous. Accuracy is essential. Millions wait to be lost or gained based upon our rare and esoteric skills.

How do we ensure our performance? Upon what disciplines do we depend? How will we make sure that the models we build, and the behavior they elicit, are faithful to our profession, and profitable for our businesses?

It has long been asserted that Test Driven Development (TDD) is the equivalent of double-entry bookkeeping. There are some undeniable parallels. Under the discipline of TDD every desired behavior is written twice: once in test code that verifies the behavior, and once in production code that exhibits that behavior. The two streams of code are written concurrently, and follow complementary, yet separate, execution pathways until they converge in the count of defects: a count that must be zero.
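As a minimal sketch of those "two entries" (assuming a Kotlin project with the kotlin-test library available; the invoice example and names are mine, not part of the argument): the same behavior is written once as a test that verifies it and once as production code that exhibits it, and the two converge in a test run whose failure count must be zero.

    import kotlin.test.Test
    import kotlin.test.assertEquals

    // First entry: the test code that verifies the behavior.
    class InvoiceTest {
        @Test
        fun `total of an empty invoice is zero`() {
            assertEquals(0, Invoice(emptyList()).total())
        }

        @Test
        fun `total sums the line amounts`() {
            assertEquals(30, Invoice(listOf(10, 20)).total())
        }
    }

    // Second entry: the production code that exhibits the behavior.
    class Invoice(private val lineAmounts: List<Int>) {
        fun total(): Int = lineAmounts.sum()
    }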

Another parallel is the granularity of the two disciplines. Double-entry bookkeeping operates at the extremely fine granularity of individual transactions. TDD operates at the equivalently fine granularity of individual behaviors and assertions. In both cases the division between the granules is natural and obvious. There is no other granule for accounting; and it is hard to imagine a more appropriate granule for software.

Still another parallel is the immediate feedback of the two approaches. Errors are detected at every granule. Accountants are taught to check the results for each and every transaction. Programmers using TDD are taught to check the tests for every assertion. Therefore, when properly executed, no error can infiltrate into, and thereby corrupt, large swathes of the models. The rapid feedback, in both instances, prevents long hours of debugging and rework.

But as similar as these two disciplines appear to be on the surface, there are some deep differences between them. Some are obvious; such as the fact that one deals with numbers and accounts, whereas the other deals with functions and assertions. Other differences are less obvious and much more profound.

Asymmetry

Double-entry bookkeeping is symmetrical. Debits and credits have no relative priority. Each is derivable from the other. If you know the credited accounts and the transactions, then you can derive a reasonable set of debited accounts, and vice versa. Therefore, there is no reason that accountants must enter a credit or a debit first. The choice is arbitrary. The subtraction will work in either case.

This is not true of TDD. There is an obvious arrow between the tests and the production code. The tests depend upon the production code; the production code does not depend upon the tests. This is true both at compile time, and at run time. The arrow points in one direction, and one direction only.

This asymmetry leads to the inescapable conclusion that the equivalence to double-entry bookkeeping only works if the tests are written first. There is no way to create an equivalent discipline if the production code is written before the tests.

This may be difficult to see at first. So let's use the old mathematical trick of reduction to an absurdity. But before we do that, let's state TDD with a formality that can be inverted: the formality of the three laws. Those laws are:

  1. You are not allowed to write any production code without first writing a test that fails because the production code does not exist.
  2. You are not allowed to write more of a test than is sufficient to fail; including failure of compilation.
  3. You are not allowed to write more production code than is sufficient to pass the currently failing test.

Following these three laws results in a very orderly and discrete procedure:

  • You must decide what production code function you intend to create.
  • You must write a test that fails because that production code doesn’t exist.
  • You must stop writing that test as soon as it fails for any reason, including compilation errors.
  • You must write only the production code that makes the test pass.
  • Repeat ad infinitum.

Note how the discipline enforces the fine granularity of individual behaviors and assertions; including compile time assertions. Notice that there is very little ambiguity about how much code to write at any given point; and whether that code should be production or test code. Those three laws tie you down into a very tightly constrained behavior.
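To illustrate the lockstep that the three laws impose, here is a hedged sketch of one micro-cycle; the stack example and its names are mine, not prescribed by the laws.

    import kotlin.test.Test
    import kotlin.test.assertTrue

    // Law 1: the test is written first; it initially failed to compile
    // because Stack did not exist (a compile failure counts as a failing test).
    class StackTest {
        @Test
        fun `a newly created stack is empty`() {
            assertTrue(Stack().isEmpty())
        }
    }

    // Law 2 stopped the test as soon as it failed. Law 3 permits only the
    // production code below: just enough to make that one test pass.
    class Stack {
        fun isEmpty(): Boolean = true   // hard-coded; the next failing test will force more
    }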

It should be very clear that following the process dictated by the three laws is the logical equivalent of double-entry bookkeeping.

Reductio ad Absurdum

Now, let’s assume that a similar discipline can be defined that inverts the order, so that test code is written after production code. How would we write such a discipline?

We could start by simply inverting the three laws. But as soon as we do we run into trouble:

  • 1) You are not allowed to write any test code without first writing production code that…

How do you complete that rule? In the un-inverted rule the sentence is completed by demanding that the test must fail because the production code doesn’t yet exist. But what is the condition for our new, inverted, rule? We could choose something arbitrary like “…is a complete function.” However, this is not really a proper inverse of the first law of TDD.

Indeed, there is no proper inverse. The first law cannot be inverted. The reason is that the first law presumes that you know what production feature you are about to create – but so must any first law, including any inverted first law.

For example, we could try to invert the first law as follows:

  • 1) You are not allowed to write any test code without first writing production code that will fail the test code for the behavior you are writing.

I think you can see why this is not actually an inversion. In order to follow this rule, you’d have to write the test code in your mind, first, and then write the production code that failed it. In essence the test has been identified before the production code is written. The test has still come first.

You might object by noting that it is always possible to write tests after production code; and that in fact programmers have been doing just that for years. That’s true; but our goal was to write a rule that was the inverse of the first law of TDD. A rule that constrained us to the same level of granularity of behaviors and assertions; but that had us inventing the tests last. That rule does not appear to exist.

The second rule has similar problems.

  • 2) You are not allowed to write more production code than is sufficient to…

How do you complete that sentence? There is no obvious limit to the amount of production code you can write. Again, if we choose a predicate, that predicate will be arbitrary. For example: …complete a single function. But, of course, that function could be huge, or tiny, or any size at all. We have lost the obvious and natural granularity of individual behaviors and assertions.

So once again, the rule is not invertible.

Notice that these failures of invertibility are all about granularity. When tests come first the granularity is naturally constrained. When production code comes first, there is no constraint.

This unconstrained granularity implies something deeper. Note that the third law of TDD forces us to make the currently failing test, and only the currently failing test, pass. This means that the production code we are about to write will be derived from the failing test. But if you invert the third law you end up with nonsense:

  • 3) You are not allowed to write more test code than is sufficient to pass the current production code.

What does that mean? I can write a test that passes the current production code by writing a test with no assertions – or a test with no code at all. Is this rule asking us to test every possible assertion? What assertions are those? They haven’t been identified.

This leads us to the conclusion that tests at fine granularity cannot obviously be derived from production code.

Let's state this more precisely. It is straightforward, using the three laws of TDD, to derive the entirety of the production code from a series of individual assertion tests; but it is not straightforward to derive the entirety of a test suite from the completed production code.

This is not to say that you cannot impute tests from production code. Of course you can. What this is telling us (and anyone who has ever tried to write tests for legacy code knows this) is that it is remarkably difficult, if not utterly impractical, to write fine-grained, comprehensive tests from production code. In order to write such tests from production code, you must first understand the entirety of that production code, because any part of it can affect the test you are trying to write; and second, the production code must be decoupled in a way that allows for fine granularity.

When tests are written first, granularity and decoupling are trivial to achieve. When tests follow production code, decoupling and granularity are much more difficult to achieve.

Irreversibility

This means that tests and production code are irreversible. Accountants don’t have this problem. Debited accounts and credited accounts are mutually reversible. You can derive one from the other. But tests and production code progress in one direction.

Why should this be?

The answer lies in yet another asymmetry between tests and production code: their structure. The structure of the production code is vastly different from the structure of the test code.

The production code forms a system with interacting components. That system operates as a single whole. It is subdivided into components, separated by abstraction layers, and organized with communication pathways, all of which support the operation, throughput, and maintainability of that system.

The tests, on the other hand, do not form a system. They are, instead, a set of unrelated assertions. Each assertion is independent of all the others. Each small test in the test suite stands alone. Each test can execute on its own. Indeed, the tests have no preferred order of execution; and many test frameworks enforce this by executing the tests in a random order.

The discipline of TDD tells us to build the production code one small test case at a time. That discipline also gives us guidance on the order in which to write those tests. We choose the simplest tests at first, and only increase the complexity of the tests when all simpler tests have been written and passed.

This ordering is important. Novices to TDD often get the ordering wrong and find that they have written a test that forces them to implement too much production code. The rules of TDD tell us that if a test cannot be made to pass by a trivial addition or change to the production code, then a simpler test should be chosen.

Thus, the tests, and their ordering, form the assembly instructions for the production code. If those tests are made to pass in that order, then the production code will be assembled through a series of trivial steps.

But, as we all know, assembly instructions are not reversible. It is difficult to look at an airplane, for example, and derive the assembly procedure. On the other hand, given the assembly procedure, an airplane can be built one piece at a time.

Thus, the conversion of the test suite into production code is a trap-door function; rather like multiplying two large prime numbers. It can be trivially executed in one direction; but is very difficult, if not completely impractical, to execute in the other. Tests can trivially drive production code; but production code cannot practicably drive the equivalent test suite.

Bottom Line

What we can conclude from this is that there is a well-defined discipline of test-first that is equivalent to double-entry bookkeeping; but there is no such discipline for test-after. It is possible to test after, of course, but there's no way to define it as a discipline. The discipline only works in one direction. Test-first.

As I said at the start: The stakes are enormous. Millions are waiting to be gained or lost. Lives and fortunes are at stake. Our businesses, and indeed our whole society, are depending upon us. What discipline will we use to ensure that we do not let them down?

If accountants can do it, can’t we?

Of course we can.


Types and Tests

Friday the 13th!

The response to my Dark Path blog has been entertaining. It has ranged from effusive agreement to categorical disagreement. It also elicited a few abusive insults. I appreciate the passion. A nice vocal debate is always the best way to learn. As for the insulters: you guys need to move out of your Mom’s basement and get a life.

To be clear, and at the risk of being repetitive, that blog was not an indictment of static typing. I rather enjoy static typing. I’ve spent the last 30 years working in statically typed languages and have gotten rather used to them.

My intent, with that blog, was to complain about how far the pendulum has swung. I consider the static typing of Swift and Kotlin to have swung too far in the statically type-checked direction. They have, IMHO, passed the point where the tradeoff between expressiveness and constraint is profitable.


One of the more common responses to that blog was: “Hey, man, like… Types are Tests!”

No, types are not tests. Type systems are not tests. Type checking is not testing. Here’s why.

A computer program is a specification of behavior. The purpose of a program is to make a machine behave in a certain way. The text of the program consists of the instructions that the machine follows. The sum of those instructions is the behavior of the program.

Types do not specify behavior. Types are constraints placed, by the programmer, upon the textual elements of the program. Those constraints reduce the number of ways that different parts of the program text can refer to each other.

Now this kind of constraint system can be very useful in reducing the incidence of textual errors in a program. If you specify that function f must be called with an Int then the type system will ensure that no other part of the program text will invoke f with a Double. If such an error were allowed to escape into a running program (as was common back in the good old C days) then a runtime error would likely result.

You might therefore say that the type system is a kind of “test” that fails for all inappropriate invocations of f. I might concede that point except for one thing: the way f is called has nothing to do with the required behavior of the system. Rather, it is a test of an arbitrary constraint imposed by the programmer. A constraint that was likely over-specified from the point of view of the system requirements.

The system requirements likely do not depend on the fact that the argument of f is an Int. We could very likely change the declaration of f to take a Double, without affecting the observed behavior of the program at all.
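A small Kotlin sketch of that point (the function and its "requirement" are invented for illustration): the Int in the signature is a constraint on the program text chosen by the programmer, and widening it to Double would leave the observable behavior untouched.

    // The requirement: report whether the input is positive.
    fun f(x: Int): Boolean = x > 0      // Int is a constraint on the program text

    fun main() {
        println(f(21))                  // prints: true
        // println(f(21.0))             // rejected by the type checker: a textual inconsistency
        // Re-declaring f as f(x: Double): Boolean = x > 0 would satisfy the checker
        // and print exactly the same output; the requirement never depended on Int.
    }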

So what the type system is checking is not the external behavior of the program. It is checking only the internal consistency of the program text.

Now this is no small thing. Writing program text that is internally consistent is pretty important. Inconsistencies and ambiguities in program text can lead to misbehaviors of all kinds. So I don’t want to downplay this at all.

On the other hand, internal self-consistency does not mean the program exhibits the correct behavior. Behavior and self-consistency are orthogonal concepts. Well behaved programs can be, and have been, written in languages with high ambiguity and low internal consistency. Badly behaved programs have been written in languages that are deeply self-consistent and tolerate few ambiguities.

So just how internally consistent do we need the program text to be? Does every line of text need to be precisely 60 characters long? Must indentations always be multiples of two spaces? Must every floating point number have a dot? Should all positive numbers begin with a +? Or should we allow certain ambiguities in our program text and allow the language to make assumptions that resolve them? Should we relax the specificity of the language and allow it to tolerate certain easily resolvable ambiguities? Should we allow this even if sometimes those ambiguities are resolved incorrectly?

Clearly, every language chooses the latter option. No language forces the program text to be absolutely unambiguous and self-consistent. Indeed, such a language would likely be impossible to create. And even if it weren't, it would likely be impossible to use. Absolute precision and consistency have never been, nor should they ever be, the goal.

So how much internal unambiguous self-consistency do we need? It would be easy to say that we need as much as we can get. It might seem obvious that the more unambiguous and internally consistent a language is, the fewer defects programs written in that language will have. But is that true?

The problem with increasing the level of precision and internal self-consistency is that it implies an increase in the number of constraints. But constraints need to be specified; and specification requires notation. Therefore, as the number of constraints grows, so does the complexity of the notation. The syntax and the semantics of a language grow as a function of the internal self-consistency and specificity of the language.

As notation and semantics grow in complexity, the chance for unintended consequences grows. Among the worst of those consequences are Open-Closed violations.

Imagine that there is a language named TDP that is ultimately self-consistent and specific. In TDP every single line of code is self-consistent with, and specific to, every other line of code. A change to one line forces a change to every other line in order to maintain that self-consistency and specificity.

Do languages like this exist? No; but the more type-safe a language is, the more internally consistent and specific it forces the programmers to be, the more it approaches that ultimate TDP condition.

Consider the const keyword in C++. When I was first learning C++ I didn’t use it. It was just too much on top of everything else there was to learn. But as I gained in knowledge and comfort with the language the day came when I used my first const. And down the rathole I went, fixing one compile error after another, changing hundreds and hundreds of lines of code, until the system I was working on was const-correct.

Did I stop using const because of this experience? No, of course not. I just made sure that I knew, up front, which fields and functions were going to be const. This required a lot of up-front design; but it was better than the alternative. Did that make the problem go away? Of course not. I frequently found myself running around inside the system smearing const all over the place.

Is TDP a good condition to be in? Do you want to have to change every line of code every time anything at all changes? Clearly not. This violates the OCP, and would create a nightmare for maintenance.

Perhaps you think I’m setting up a straw man argument. After all, TDP does not exist. My claim, however, is that Swift and Kotlin have taken a step in that undesirable direction. That’s why I called it: The Dark Path.

Every step down that path increases the difficulty of using and maintaining the language. Every step down that path forces users of the language to get their type models “right” up front; because changing them later is too expensive. Every step down that path forces us back into the regime of Big Design Up Front.

But does that mean we should never take even a single step down that path? Does that mean our languages should have no types and no specific internal self-consistency? Should we all be programming in Lisp?

(That was a joke, all you guys living in your Mom’s basement can keep your insults to yourself please; and stay off my lawn. As for Lisp the answer is: Yes, we probably should all be programming in Lisp; but for different reasons.)

Type safety has a number of benefits that, at first, outweigh the costs. A few steps down the dark path allow us to gather some pretty nice low hanging fruit. We can gain a decent amount of specificity and self consistency without huge violations of the OCP. Type models can also enhance expressivity and readability. And type models definitely help IDEs with refactorings and other mechanical operations.

But there is a balance point after which every step down The Dark Path increases the cost over the benefit.

I think Java and C# have done a reasonable job at hovering near the balance point. (If you ignore the horrible syntax for generics, and the ridiculous proscription against multiple inheritance.) In my opinion those languages have gone just a bit too far; but the costs of type-safety aren’t too high to tolerate. Ruby, Groovy, and Javascript, on the other hand, hover on the other side of the balance point. They are, perhaps, a bit too permissive; a bit too ambiguous (Does anybody really understand the sub-object graph in Ruby?).

So, a little type safety, like a little salt, is a good thing. Too much, on the other hand, can have unfortunate consequences.


Does every step down The Dark Path mean that you can ignore a certain number of unit tests? Does programming in Dark Path languages mean that you don’t have to test as much?

No. A thousand times: NO. Type models do not specify behavior. The correctness of your type model has no bearing on the correctness of the behavior you have specified. At best the type system will prevent some mechanistic failures of representation (e.g. Double vs. Int); but you still have to specify every single bit of behavior; and you still have to test every bit of behavior.
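As a hedged illustration (the pricing example is invented, not from the original post): here is code that satisfies the type checker completely and still gets the behavior wrong; only a test of the behavior catches it.

    import kotlin.test.Test
    import kotlin.test.assertEquals

    // Every type lines up: Double in, Double out. The behavior is still wrong,
    // because the discount is added instead of subtracted.
    fun discountedPrice(price: Double, discount: Double): Double = price + discount

    class PricingTest {
        @Test
        fun `a 10 discount reduces a 100 price to 90`() {
            assertEquals(90.0, discountedPrice(100.0, 10.0))   // fails: returns 110.0
        }
    }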

So, no, type systems do not decrease the testing load. Not even the tiniest bit. But they can prevent some errors that unit tests might not see. (e.g. Double vs. Int)


The Dark Path

Over the last few months I’ve dabbled in two new languages. Swift and Kotlin. These two languages have a number of similarities. Indeed, the similarities are so stark that I wonder if this isn’t a new trend in our language churn. If so, it is a dark path.

Both languages have integrated some functional characteristics. For example, they both have lambdas. This is a good thing, in general. The more we learn about functional programming, the better. These languages are both a far cry from a truly functional programming language; but every step in that direction is a good step.

My problem is that both languages have doubled down on strong static typing. Both seem to be intent on closing every single type hole in their parent languages. In the case of Swift, the parent language is the bizarre typeless hybrid of C and Smalltalk called Objective-C; so perhaps the emphasis on typing is understandable. In the case of Kotlin the parent is the already rather strongly typed Java.

Now I don’t want you to think that I’m opposed to statically typed languages. I’m not. There are definite advantages to both dynamic and static languages; and I happily use both kinds. I have a slight preference for dynamic typing; and so I use Clojure quite a bit. On the other hand, I probably write more Java than Clojure. So you can consider me bi-typical. I walk on both sides of the street – so to speak.

It’s not the fact that Swift and Kotlin are statically typed that has me concerned. Rather, it is the depth of that static typing.

I would not call Java a strongly opinionated language when it comes to static typing. You can create structures in Java that follow the type rules nicely; but you can also violate many of the type rules whenever you want or need to. The language complains a bit when you do; and throws up a few roadblocks; but not so many as to be obstructionist.

Swift and Kotlin, on the other hand, are completely inflexible when it comes to their type rules. For example, in Swift, if you declare a function to throw an exception, then by God every call to that function, all the way up the stack, must be adorned with a do-try block, or a try!, or a try?. There is no way, in this language, to silently throw an exception all the way to the top level without paving a super-highway for it up through the entire calling tree. (You can watch Justin and me struggle with this in our Mobile Application Case Study videos.)

Now, perhaps you think this is a good thing. Perhaps you think that there have been a lot of bugs in systems that have resulted from un-corralled exceptions. Perhaps you think that exceptions that aren’t escorted, step by step, up the calling stack are risky and error prone. And, of course, you would be right about that. Undeclared and unmanaged exceptions are very risky.

The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job?

In Kotlin, you cannot derive from a class, or override a function, unless you adorn that class or function as open. You also cannot override a function unless the overriding function is adorned with override. If you neglect to adorn a class with open, the language will not allow you to derive from it.
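For readers who haven't met these rules, a minimal sketch of Kotlin's defaults; the class and member names are mine, chosen only for illustration.

    open class Repository {                        // 'open' is required, or no one may subclass it
        open fun save(record: String) { }          // 'open' again, or no one may override it
    }

    class AuditingRepository : Repository() {
        override fun save(record: String) {        // 'override' is mandatory, never implicit
            println("audit: $record")
            super.save(record)
        }
    }

    class ClosedByDefault                          // no 'open': deriving from this will not compile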

Now, perhaps you think this is a good thing. Perhaps you believe that inheritance and derivation hierarchies that are allowed to grow without bound are a source of error and risk. Perhaps you think we can eliminate whole classes of bugs by forcing programmers to explicitly declare their classes to be open. And you may be right. Derivation and inheritance are risky things. Lots can go wrong when you override a function in a derived class.

The question is: Whose job is it to manage that risk? Is it the language's job? Or is it the programmer's job?

Both Swift and Kotlin have incorporated the concept of nullable types. The fact that a variable can contain a null becomes part of the type of that variable. A variable of type String cannot contain a null; it can only contain a reified String. On the other hand, a variable of type String? has a nullable type and can contain a null.

The rules of the language insist that when you use a nullable variable, you must first check that variable for null. So if s is a String? then var l = s.length won't compile. Instead you have to say var l = s?.length ?: 0 or var l = if (s != null) s.length else 0.
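Put together as a runnable sketch (the function is invented for illustration):

    fun lengthOf(s: String?): Int {
        // val broken = s.length            // will not compile: s might be null
        return s?.length ?: 0               // safe call with a default
        // or, equivalently: if (s != null) s.length else 0
    }

    fun main() {
        println(lengthOf("hello"))          // 5
        println(lengthOf(null))             // 0
    }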

Perhaps you think this is a good thing. Perhaps you have seen enough NPEs in your lifetime. Perhaps you know, beyond a shadow of a doubt, that unchecked nulls are the cause of billions and billions of dollars of software failures. (Indeed, the Kotlin documentation calls the NPE the “Billion Dollar Bug”). And, of course, you are right. It is very risky to have nulls rampaging around the system out of control.

The question is: Whose job is it to manage the nulls? The language? Or the programmer?

These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.

But before you run out of fingers and toes, you have created languages that contain dozens of keywords, hundreds of constraints, a tortuous syntax, and a reference manual that reads like a law book. Indeed, to become an expert in these languages, you must become a language lawyer (a term that was invented during the C++ era.)

This is the wrong path!

Ask yourself why we are trying to plug defects with language features. The answer ought to be obvious. We are trying to plug these defects because these defects happen too often.

Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!

You test that your system does not emit unexpected nulls. You test that your system handles nulls at its inputs. You test that every exception you can throw is caught somewhere.

Why are these languages adopting all these features? Because programmers are not testing their code. And because programmers are not testing their code, we now have languages that force us to put the word open in front of every class we want to derive from. We now have languages that force us to adorn every function, all the way up the calling tree, with try!. We now have languages that are so constraining, and so over-specified, that you have to design the whole system up front before you can code any of it.

Consider: How do I know whether a class is open or not? How do I know if somewhere down the calling tree someone might throw an exception? How much code will I have to change when I finally discover that someone really needs to return a null up the calling tree?

All these constraints, that these languages are imposing, presume that the programmer has perfect knowledge of the system; before the system is written. They presume that you know which classes will need to be open and which will not. They presume that you know which calling paths will throw exceptions, and which will not. They presume that you know which functions will produce null and which will not.

And because of all this presumption, they punish you when you are wrong. They force you to go back and change massive amounts of code, adding try! or ?: or open all the way up the stack.

And how do you avoid being punished? There are two ways. One that works; and one that doesn’t. The one that doesn’t work is to design everything up front before coding. The one that does avoid the punishment is to override all the safeties.

And so you will declare all your classes and all your functions open. You will never use exceptions. And you will get used to using lots and lots of ! characters to override the null checks and allow NPEs to rampage through your systems.


Why did the nuclear plant at Chernobyl catch fire, melt down, destroy a small city, and leave a large area uninhabitable? They overrode all the safeties. So don’t depend on safeties to prevent catastrophes. Instead, you’d better get used to writing lots and lots of tests, no matter what language you are using!


How To Maximize Fun In Enterprise Projects

Unfortunately, we have reached the point of low noise and high productivity. Now you can fully focus on domain logic and start implementing the client's use cases with the very first line of code. Sounds good, but it is really boring.

These rules will make your daily developer life more exciting:

  1. Forget for a moment the nonfunctional requirements and the users. Focus on infrastructure.
  2. Wisely assume that your in-house application has nonfunctional requirements similar to those of Netflix, Twitter, Facebook, or Google. One day you will surely achieve their scale.
  3. Justified by rule 2, ignore existing Java 8 and Java EE functionality. Use third-party libraries and frameworks on top of the existing functionality.
  4. Start with implementing infrastructural frameworks first. Implementing logging, configuration, asynchronous communication, caching and discovery frameworks is a good place to start.
  5. Fat WARs are recognized as common microservice best practice. Don't stop adding external dependencies until the size of the WAR reaches at least 20 MB. Anything below that size does not look serious.
  6. Write reflection test utilities to maximize code coverage. Now you can easily achieve > 50% code coverage without writing a single assert.
  7. Complain about high complexity, defects, slow deployments and bloat.
  8. Suggest starting over with node.js, but follow the rules. Start with rule 1.


Monsoon III: Time-lapse captures the raw power of a monsoon

You may remember Mike Olbinski's storm chasing thriller Vorticity from July and, unsurprisingly, he's back at it again. This time around he chased storms over the course of 36 days during the 2016 monsoon season in the Southwest. Though he says it was somewhat of a slower season in terms of activity, you wouldn't know it from the time-lapse video that he put together. This is definitely worth a watch in full-screen HD with the lights turned off. Hope you enjoy it as much as we did!