Declarative User Interface Paradigm Considered Harmful

There are several new GUI frameworks, like Flutter, SwiftUI and Jetpack Compose, that are promoted as embodiments of the Declarative UI paradigm. This was a new concept for me, so I decided to look deeper into what Declarative UI means and understand what benefits it provides.

This article sums up what I found.

Declarative vs Imperative Programming

Declarative UI gets its name from Declarative Programming, so let’s discuss that paradigm first.

The usual way to explain Declarative Programming is to compare it with its opposite, Imperative Programming:

  • Imperative Programming: focuses on describing how a task is performed, involving explicit control flow and state management.
  • Declarative Programming: specifies what the outcome should be, leaving the details of how to achieve it to the framework or the language.

To highlight the differences between these two styles, I’ll use a real-world example from my latest course about custom Views in Android apps. In this example, I implemented a LoopAnimator class, which changes a float value over time in an “oscillating” way, looping indefinitely. The changed value is then mapped to some UI property, thus creating a “manual” animation.

Here is how I did it (don’t worry about the details, I’ll explain them later):

import android.view.Choreographer

// Callback through which LoopAnimator reports each new animated value
fun interface LoopAnimatorListener {
    fun onAnimatedValueChanged(animatedValue: Float)
}

class LoopAnimator {

    var listener: LoopAnimatorListener? = null

    private var animationPeriodNanos = 0L
    private var animatedValueMin = 0f
    private var animatedValueMax = 0f
    private var animatedValueStart = 0f
    private var startTimeNanos = 0L
    private var animatedValueStartFraction = 0f
    private var isRunning = false

    private val choreographer: Choreographer = Choreographer.getInstance()
    private val animationFrameCallback = object : Choreographer.FrameCallback {
        override fun doFrame(frameTimeNanos: Long) {
            val animatedValue = calculateAnimatedValue(frameTimeNanos)
            listener?.onAnimatedValueChanged(animatedValue)
            if (isRunning) {
                // Run this callback again on the next frame
                choreographer.postFrameCallback(this)
            }
        }
    }

    fun startAnimation(
        animationPeriodNanos: Long,
        animatedValueMin: Float,
        animatedValueMax: Float,
        animatedValueStart: Float,
    ) {
        this.animationPeriodNanos = animationPeriodNanos
        this.animatedValueMin = animatedValueMin
        this.animatedValueMax = animatedValueMax
        this.animatedValueStart = animatedValueStart
        startTimeNanos = System.nanoTime()
        animatedValueStartFraction = (animatedValueStart - animatedValueMin) / (animatedValueMax - animatedValueMin)
        isRunning = true
        choreographer.postFrameCallback(animationFrameCallback)
    }

    fun stopAnimation() {
        isRunning = false
        choreographer.removeFrameCallback(animationFrameCallback)
    }

    private fun calculateAnimatedValue(frameTimeNanos: Long): Float {
        // calculate elapsed time since the start of the animation
        val elapsedTimeNanos = frameTimeNanos - startTimeNanos

        // calculate the phase of the animation in the range of [0, 2)
        var phaseBase = (elapsedTimeNanos % animationPeriodNanos).toFloat() / animationPeriodNanos * 2

        // account for the initial phase shift
        phaseBase += animatedValueStartFraction
        phaseBase = if (phaseBase > 2) {
            4 - phaseBase
        } else if (phaseBase < 0) {
            0 - phaseBase
        } else {
            phaseBase
        }

        // flip the second half of the phase to create oscillation in the range [0, 1)
        val phase = if (phaseBase > 1.0)  {
            2.0f - phaseBase
        } else {
            phaseBase
        }

        // map the phase to the range of animatedValueMin to animatedValueMax
        return animatedValueMin + phase * (animatedValueMax - animatedValueMin)
    }

}

At a basic level, LoopAnimator works like this:

  • startAnimation method initializes the LoopAnimator and registers a callback with the Choreographer object.
  • Choreographer is used to synchronize the changes of the animated value with the device’s frame rate.
  • calculateAnimatedValue method encapsulates the logic that computes the new value based on the frame timestamp.
  • stopAnimation stops the animation.

LoopAnimator works, but it was tedious to implement. It also wouldn’t be very useful if I needed to animate a one-time position change, for example.
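
To make the comparison below concrete, here is roughly how LoopAnimator would be wired up to drive a View property. This is just a usage sketch; view, ANIMATION_PERIOD_NS and animationAmplitude are the same contextual values that appear in the ObjectAnimator snippet further down:

val loopAnimator = LoopAnimator()

loopAnimator.listener = LoopAnimatorListener { animatedValue ->
    // map the animated value onto a UI property (horizontal offset, in this case)
    view.translationX = animatedValue
}

loopAnimator.startAnimation(
    animationPeriodNanos = ANIMATION_PERIOD_NS,
    animatedValueMin = -animationAmplitude,
    animatedValueMax = animationAmplitude,
    animatedValueStart = view.translationX,
)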

Fortunately, the Android framework gives us higher-level APIs for animations, so I can get the same result using the ObjectAnimator class:

ObjectAnimator.ofFloat(
    view, // target View
    "translationX", // animated property
    animationAmplitude, // first animation value
    -animationAmplitude // second animation value
).apply {
    interpolator = LinearInterpolator()
    duration = ANIMATION_PERIOD_NS / 1_000_000 / 2 // half a period, in milliseconds
    repeatCount = ValueAnimator.INFINITE
    repeatMode = ValueAnimator.REVERSE
    val relativePosition = (view.translationX + animationAmplitude) / (2 * animationAmplitude)
    setCurrentFraction(relativePosition) // starting "position" within the animation
    start()
}

This approach is shorter, more readable and doesn’t involve manual value interpolation. Furthermore, as you can see from ObjectAnimator’s API, this class provides a great deal of flexibility. I can use it for many kinds of animations, not just “oscillating” ones.
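
For instance, the one-time position change I mentioned earlier would be just a few lines with the same API (a sketch; the target value and duration are arbitrary):

// animate translationX once, from its current value to 200px, over 300 ms
ObjectAnimator.ofFloat(view, "translationX", 200f).apply {
    duration = 300L
    start()
}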

Going back to Imperative and Declarative Programming:

  • Implementing LoopAnimator is an example of Imperative Programming: I carefully told the framework HOW the value should change over time.
  • Using ObjectAnimator is an example of Declarative Programming: I used a preexisting framework’s API to specify WHAT I want to happen and left it up to the framework to figure out the details.

Clearly, in this case, the Declarative approach is the better choice.

Higher Level of Abstraction

While the earlier example showed the difference between the Imperative and Declarative Programming paradigms, it also demonstrated the benefit of working at a higher level of abstraction.

The essence of abstraction is preserving information that is relevant in a given context, and forgetting information that is irrelevant in that context.

John Guttag

In the context of our earlier example, we can see the following:

  • Implementing LoopAnimator involves dealing with low-level APIs (i.e. Choreographer) and manual computations.
  • ObjectAnimator class “speaks the language of animations”, so I can use it without bothering myself with the implementation details.

In other words, ObjectAnimator hides the underlying complexity and exposes a simple and flexible API tailored for animations. It’s a higher-level abstraction that lets us say what we want, instead of telling the framework how to do it. Sounds familiar, right?

So, what’s the link between Declarative Programming and higher levels of abstraction? From what I can see, Declarative Programming boils down to using a higher-level API in a specific context.

Declarative Programming is Not Practically Useful

The title of this section says it all: the Declarative Programming paradigm doesn’t offer much real-world utility. This is a strong statement, so let me explain.

As we saw earlier, Declarative Programming mostly means using higher-level APIs. So, a good question to ask is: does this idea give us anything useful that we don’t get from just talking about higher levels of abstraction? From what I can see, the answer is a clear “No.”

For example, imagine that your teammate built something along the lines of LoopAnimator, instead of using the standard animation APIs. You review their pull request and stumble upon this wall of code. Would you say “you should use Declarative Programming here”? I doubt it. You’d likely say something like “there is an API for that, so let’s not reinvent the wheel”.

I spent a lot of time trying to find an example where the term Declarative Programming would be practically useful, but I couldn’t. As people who read this blog and my students know, I don’t shy away from fundamental concepts. I use the Single Level of Abstraction principle, Dependency Injection, SOLID principles, etc. in my work, and insist on adopting them when I collaborate with other developers. I even created content around these concepts to help others put them to practical use. But I just don’t see how the term Declarative Programming helps in a real work setting.

In the end, from what I can see, the Declarative vs Imperative debate is mostly used to either boost some programming languages and tools or to put down others.

Declarative vs Imperative User Interface Frameworks

Given the preceding discussion of Declarative and Imperative Programming, you might guess that the idea of Declarative User Interfaces isn’t very useful either. In reality, I found that the term Declarative UI is even less useful than Declarative Programming. To see what I mean, let’s look at how Graphical User Interfaces (GUIs) have evolved over time.

At the onset of GUI programming, developers manipulated pixels directly. For example, if you needed to draw a line on the screen, you’d compute which pixels lie along the path of that line, iterate over them and change the color of each individual pixel. This definitely constituted an Imperative approach to GUI implementation.
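
To make this concrete, pixel-level code would look roughly like this (a sketch against a hypothetical raw frame buffer, not any particular framework’s API):

import kotlin.math.abs
import kotlin.math.roundToInt

// Hypothetical frame buffer: a flattened width*height array of ARGB pixels.
fun drawLine(pixels: IntArray, width: Int, x0: Int, y0: Int, x1: Int, y1: Int, color: Int) {
    val steps = maxOf(abs(x1 - x0), abs(y1 - y0))
    for (i in 0..steps) {
        // interpolate along the line and color each pixel it passes through
        val t = if (steps == 0) 0f else i.toFloat() / steps
        val x = (x0 + t * (x1 - x0)).roundToInt()
        val y = (y0 + t * (y1 - y0)).roundToInt()
        pixels[y * width + x] = color
    }
}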

Then, GUI frameworks emerged, offering developers APIs at higher levels of abstraction. Instead of working with individual pixels, developers could use these frameworks to create GUIs using lines, polygons, contours, etc. Was this move from low-level pixels to higher-level shapes a jump from Imperative to Declarative UIs? You can bet that developers back then would say yes!
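
Here is the same kind of line at the shapes level, using Android’s Canvas as one example of such an API (a sketch I added for illustration; the coordinates are arbitrary):

import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint

// we describe the shape; the framework decides which pixels to touch
fun drawLine(canvas: Canvas) {
    val paint = Paint().apply {
        color = Color.BLACK
        strokeWidth = 4f
    }
    canvas.drawLine(10f, 10f, 200f, 120f, paint)
}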

The next generation of GUI frameworks introduced even better abstractions like text fields, buttons, checkboxes, etc. This let developers specify what their GUIs should contain, rather than how these GUIs should be drawn. Since these abstractions correspond to the actual UI elements that users see and interact with, GUI frameworks reached the top “elementary” abstraction level at that point and became Declarative.
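
At this level, the code stops describing drawing altogether. A small sketch using Android’s classic View widgets (my example, not tied to any specific app):

import android.content.Context
import android.widget.CheckBox

// we state what the screen contains; drawing, touch handling and
// state changes are the framework's job
fun makeTermsCheckBox(context: Context): CheckBox {
    return CheckBox(context).apply {
        text = "I agree to the terms"
        isChecked = false
    }
}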

That said, there is another aspect of GUI implementation: positioning. For example, let’s say you need three buttons in the middle of the screen, spaced equally. Even if you have a Button class that’s easy to use, if you still have to work out the button sizes and gaps yourself, you’re doing manual, Imperative work. To mitigate this issue, GUI frameworks introduced “layout” elements. With layouts, developers can group multiple UI elements together, declare relationships between them using higher-level constraints, and then let the framework compute the elements’ positions. So, with layouts, even positioning became Declarative, making it truly Declarative UI.
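
And here is the “three buttons, spaced equally” case from the paragraph above, expressed through a layout (a sketch; a LinearLayout with weights is just one of several ways to do it):

import android.content.Context
import android.view.Gravity
import android.widget.Button
import android.widget.LinearLayout

// we only declare the constraints (equal weights, centered content);
// the layout computes the buttons' sizes and positions
fun buildButtonRow(context: Context): LinearLayout {
    return LinearLayout(context).apply {
        orientation = LinearLayout.HORIZONTAL
        gravity = Gravity.CENTER
        repeat(3) { index ->
            addView(
                Button(context).apply { text = "Button ${index + 1}" },
                LinearLayout.LayoutParams(0, LinearLayout.LayoutParams.WRAP_CONTENT, 1f)
            )
        }
    }
}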

Layouts were a big step forward, but they introduced a new challenge: nesting. In simple terms, one “parent” layout can host multiple “child” UI elements, and these parent-child relationships had to be set up in code. Doing this by hand was tedious, so the next generation of GUI frameworks added the ability to derive the parent-child links automatically from the structure of the code. This required languages that support nesting as a first-class feature, which is why XML became common in many frameworks. At this point, we got truly Declarative GUI frameworks!
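
The newer frameworks from the intro lean on the same idea, just with ordinary code instead of XML: in Jetpack Compose, for example, the parent-child hierarchy comes straight from the nesting of Kotlin code (a rough sketch):

import androidx.compose.foundation.layout.Row
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// the hierarchy is simply the nesting of the code itself
@Composable
fun ButtonRow() {
    Row {
        Button(onClick = { /* ... */ }) { // child of Row
            Text("One")                   // child of Button
        }
        Button(onClick = { /* ... */ }) {
            Text("Two")
        }
    }
}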

I hope you now understand my issue with the term Declarative User Interface. The history of GUI frameworks is a steady progression to higher levels of abstraction. Therefore, it’s impossible to pinpoint a moment when GUI frameworks became Declarative. For individual developers, a GUI framework that offers a higher level of abstraction can feel like a “declarative UI revolution”, but, in practice, it’s nothing more than the general tendency to progress to higher levels of abstraction over time.

Please note that the idea of levels of abstraction is actually very useful in this context, because it reflects the evolution I described above. It also helps developers understand their own work better and see that others, working at different levels of abstraction, might have a totally different experience. This can’t be said about the Declarative User Interface paradigm. It introduces a false dichotomy between Declarative and Imperative UI, thus leading to endless debates on what is what. These debates are pointless because the real issue is about different levels of abstraction, but that (false) dichotomy can’t capture this nuance, by its very nature.

So, the concept of Declarative User Interface isn’t just practically useless, but also wrong and polarizing.

Modern User Interface Frameworks are Declarative

As far as I can tell, pretty much all modern GUI frameworks are declarative. This is especially true for frameworks that offer visual UI builders which, when used, can abstract away the underlying source code entirely. Sure, there are differences in paradigms and features between different frameworks, but that doesn’t make any of them Imperative.


Conclusion

In many ways, saying “declarative GUI framework” is like saying “digital computer”: it’s true, but it doesn’t tell you much and isn’t needed (unless you’re discussing history). So, just as you wouldn’t buy a computer because it’s called digital, there’s no reason to think Declarative GUI frameworks are better than others. Fundamentally, all modern GUI frameworks are declarative.

Does this mean that all new so-called “declarative” frameworks like Flutter, SwiftUI and Jetpack Compose are bad? Not at all. It just means that being “declarative” is not one of their benefits. In my opinion, the jury is still out on each of these tools, but I’ll leave this topic for another article.

All in all, from what I see, the term Declarative UI is popular today mostly because big companies, mainly Google, are pushing it. They use catchy but confusing buzzwords and introduce false dichotomies to present other options as outdated. This is a simpler and surer go-to-market strategy than trying to win developers over by explaining the real benefits of their products and answering tough questions.


28 comments on "Declarative User Interface Paradigm Considered Harmful"

  1. I saw this idea of how to outline the difference between imperative and declarative UI somewhere, maybe on Reddit, and I really liked it: with imperative UI manipulation you explicitly transition from one UI state to another, while modern declarative UI frameworks allow you to describe all possible UI states at once with some widget structure.

  2. Well written 🙌🏾.

    I agree with you that being ‘declarative’ isn’t much of a selling point.

    But what do we call this set of UI frameworks that ‘derive’ their presentation of the UI from some state (which changes based on certain events), e.g. Jetpack Compose and Flutter, in contrast to UI frameworks that require you to change the UI state manually, e.g. Android XML/View and Swing?

    • Good question!
      That’s something I’ll need to think about before I write an article about these frameworks, so I don’t have a full answer right now. Several candidates off the top of my head: reactive, immutable, recomposing.
      Fundamentally, I don’t think the gap between “old” and “new” frameworks is that big. As far as I can tell, the biggest difference is that “new” frameworks leverage programming languages that support “nesting” as a first-class citizen. This allows you to define the hierarchy of the UI using the structure of the source code, thus eliminating the need for e.g. XML. All other aspects seem to be of lesser impact.

      • I was thinking about the same thing as Muhammed. Maybe I can agree with the general idea in the text, but what the text misses is that when we talk about UIs, when saying “declarative UI”, people don’t mean any “last gen abstraction” UI framework, but a very specific pattern and way of building the UI abstraction (like the one found in SwiftUI or Jetpack Compose). So it is not that declarative user interface is considered harmful, far from that, but that declarative user interface as a pattern might better be called something different. I suggest “reactive UI” because that’s the whole point, I’d say.

  3. You’re missing the mark on two accounts here, I think.

    1.
    It seems disingenuous to me to imply, both in the headline and throughout the article, that Declarative UI Frameworks are harmful or not useful, whereas upon further reading your actual, very different, point is only that the *name* “Declarative UI” is not useful, and you’re not making any judgement about the frameworks themselves at all. Feels like clickbait and trying to make up controversy under false pretense to me. You’re arguing about a mostly trivial terminology disagreement, in the end.

    2.
    You seem to misunderstand what “declarative” means. Going from Pixels to Shapes to Buttons to Layouts is not what makes a UI framework more declarative. Those are levels of abstraction of UI elements, not programming paradigms. You can have a highly imperative UI framework that works on the highest of those levels without a problem. The difference is something else: In a declarative framework, you say: “I want a Layout here with a Label inside it, and that label should be green if variable x is true, and red otherwise”. In an imperative framework, instead you’ll have to say: “Put a Layout here. Put a label inside that layout. If x is true, make the label green, else make it red. Whenever x changes, make the label green if x is true, else make it red.” And on the other end, you can also have a very declarative language that nonetheless works on a very low level of abstraction – see Bitmaps, for example. They are very declarative, but very low in terms of abstraction when it comes to building UIs. Declarative vs. imperative is mostly a stylistic and organizational difference, not an actual difference in capabilities. And of course it’s also not a clean binary, there’s a spectrum in between with frameworks having elements of both styles.
    It’s true that higher levels of abstraction in UI frameworks generally go hand in hand with a more declarative style of working with the framework, but that’s just improvement happening in multiple areas over time, and not just fundamentally the same thing.

    Thus, I disagree, the term ‘Declarative UI’ is neither meaningless nor “harmful”. It simply means something different from what you seem to think it means.

    • Robin,
      1.
      I usually don’t approve comments which, instead of discussing the subject at hand, go into “disingenuous”, “clickbait”, etc. I’ll make an exception this time because your next point is relevant to the discussion, but please refrain from unprofessional personal attacks in the future.
      BTW, the title says exactly what’s in the article. Reading it as referring to “frameworks” was just your own, incorrect, interpretation.
      2.
      If you want to make a claim regarding the nature of Declarative UI, how about providing a real, non-trivial example? When researching this article, I read tens of different hand-wavy definitions for what Declarative UI means. The common denominator seems to be that there are never real-world examples. Frankly, I don’t see much difference between your descriptions of “declarative” and “imperative” approach. You say that I misunderstand what “declarative” means, but I suspect that misunderstanding is likely on your part.
      > And of course it’s also not a clean binary, there’s a spectrum in between with frameworks having elements of both styles.
      Well, that’s not what the Declarative vs Imperative dichotomy implies, and not what’s being sold to developers. As you might’ve noticed, the inadequacy of this dichotomy to capture this nuance is one of my biggest criticisms in this article.

      • It felt to me that Robin was pointing out that the headline seemed a bit click-baity – and since the title mentions declarative UI being “harmful”, but the word harmful never appears again in the entire article, I tend to agree.

        • I think that criticizing the title is just nitpicking and a subconscious attempt to draw attention away from the points I outlined in the article. In any case, if someone doesn’t know the history of “considered harmful” articles, I highly recommend reading about it here.

      • Thanks for posting Robin’s reply and explaining yourself. I also found the title confusing and misleading. I expected to read something about how Declarative UI frameworks actually kill productivity, waste computational resources and kill the planet. Imho, a title like “understanding declarative vs imperative UI programming” would be better. I don’t think it’s a personal attack, more of a personal interpretation that I think you would appreciate.

        Now, for the declarative vs imperative I like your idea of increasing abstraction. However, I feel something is missing, but I can’t yet nail down what exactly. Maybe there is a whole mix of procedural vs functional programming that you didn’t mention (probably intentionally), but it’s inherently present in the modern “declarative” UI frameworks.

        To conclude: it’s an interesting and thought provoking post, but the title was just provocative 🙂

        • Ok, if readers spend so much time concentrating on the title instead of the content, the title isn’t good. I’ll change it to “Declarative UI Paradigm Considered Harmful”, to emphasize that I’m not referring to frameworks.
          That said, the sole fact that so many developers immediately assume that I’m referring to frameworks, while there is nothing implying this in either the title or the intro, is a tell. The PR says that the frameworks are good because they’re implementing Declarative UI, but when you mention Declarative UI everyone assumes that you mean frameworks. Sounds like a self-referencing, circular argument.
          As for procedural vs functional programming, I had a section on that in the draft, but then removed it. Personally, I think the fact that “declarativeness” is being tied to so many concepts (functional style, immutability, reactiveness, etc.) without a clear explanation of how it all fits into one coherent picture, only shows that nobody really knows what this paradigm means in practice.
          BTW, I posted a link that describes the source of “considered harmful” titles in response to another comment. I encourage everyone to read about this fascinating piece of history (if they aren’t familiar with it yet).

      • Actually Robin is absolutely right. You don’t seem to understand what declarative means. Yes, disingenuous may feel a bit harsh but it definitely wasn’t unprofessional, but let’s just simply say you don’t seem to have ever used declarative UIs for anything other than trivial examples in the tens of articles you’ve read. And Robin’s explanation of the difference between the two is absolutely correct: in a declarative framework you associate data to UI elements by declaring them to be linked. From then on they update mostly automatically. An imperative framework requires you to command (hence imperative) the UI element to update when some event happens changing your underlying state. Yes, Robin’s example is simple, but this is an internet comment, not a GitHub repo. If you want a better example, take a look through the many Flutter GitHub repos which are great examples. And regarding the nuance between declarative and imperative frameworks, sure, like almost everything, there’s a spectrum. So much so that Flutter, for example, has certain things that are imperative (such as dialog boxes, if memory serves). I use Flutter as an example since I’ve used Android Java and Flutter significantly, but there are other programming languages/frameworks that are declarative, such as PLC programming. Basically, there is a huge difference between them, it’s pretty clearly defined and if you had tried programming in Flutter (or similar) it would be pretty obvious. But yes, sometimes imperative stuff is used in declarative Frameworks, and may even be required in certain contexts.

        • > in a declarative framework you associate data to UI elements by declaring them to be linked. From then on they update mostly automatically.

          Sounds like the definition of reactive programming, which is often presented as an aspect of “declarative UI”, but no one ever explained why reactive is declarative. I definitely saw low-level imperative reactive code.

          > An imperative framework requires you to command (hence imperative) the UI element to update when some event happens changing your underlying state.
          You definitely conflate reactive programming with declarative programming.

          > if you had tried programming in Flutter

          I did.

  4. Interesting take.
    And I tend to agree with you.
    The imperative versus declarative dichotomy is disingenuous at best.
    Based on your definitions, which I have no problem with, just about any programmer is working at some level of declarative programming or another.
    Even the programmer who is creating UIs by specifically selecting individual pixels is still working at a level of abstraction in most cases, because they are still selecting pixels, and leaving it up to the underlying software and hardware to figure out what a pixel is and how to draw a pixel.
    Unless you are writing raw assembly code for a CPU with directly connected, bit-banged peripherals, I would argue that just about every programmer is writing in some declarative manner or another.

    But, because that itself is hard, unless they are masochists, they will end up writing their own abstractions that are context aware.

    A general purpose graphics library might require some setup or a regular parameter to all its calls telling it what device to send pixels to; whereas the equivalent bespoke library can make assumptions that allows for more efficient code.

    • Hey Daniel,
      I wrote that “declarativeness” implies a higher level of abstraction. That’s a one-way relationship. You seem to imply that a higher level of abstraction, in turn, implies “declarativeness”, which would be a two-way relationship (a mathematical if and only if). I thought about this aspect, but couldn’t find a convincing explanation for why that would be the case.
      Therefore, as of now, I wouldn’t say that every API that has a higher level of abstraction is automatically “more declarative”. However, I also don’t claim this hypothesis to be incorrect. The problem seems to be that, while we can all sort of agree on what “higher level of abstraction” means, the notion of “declarativeness” is a constant source of heated arguments and I saw many different interpretations of this term.
      Therefore, frankly, I think we should just drop this term from our professional vocabulary. Fortunately, nothing practically useful will be lost by doing that.

  5. Just for your information, SQL is an example of declarative programming. Not very useful, really?

    And, no, declarative programming is not about “using some high level API”, not in the slightest.

    • Just for your information, no one said that tools you’d call “declarative” aren’t useful. The argument is that their being “declarative” doesn’t provide any additional information or enable any useful actions. In other words: you can be 100% productive with SQL never knowing that it’s “declarative”, and once you know that, there is nothing useful for you to do with this info.

      • You said this: “the Declarative Programming paradigm doesn’t offer much real-world utility”.

        And this: “Declarative Programming mostly means using higher-level APIs”.

        And this: “does this idea give us anything useful that we don’t get from just talking about higher levels of abstraction? From what I can see, the answer is a clear “No.””.

        None of this is even remotely true, and, frankly, quite ignorant.

        Declarative programming means that you define a goal and the way to achieve this goal is inferred. Just like what SQL does – you define the constraints and the query plan is inferred and executed. Or in Prolog – you define the required outcome and the sequence of unifications needed to achieve it is inferred. Or in any CAS or automated proof system – a sequence of applications of rules and strategies is inferred.

        Ignorance is not bliss. Even if you’re a puny frontend developer, you still MUST know the basics of CS in order to be productive and for your code not to suck. It’s funny how anti-intellectualism and praising ignorance became so entrenched among the “self-taught” and bootcamp-graduated “developers”. As if it’s some kind of psychological mechanism protecting their egos from comparing themselves to the real developers.

        • Even after all your arguments, this statement still holds:

          > does this idea give us anything useful that we don’t get from just talking about higher levels of abstraction? From what I can see, the answer is a clear “No.”

          Furthermore, please note that you don’t offer any example of how one could leverage “declarativeness” in their practical work. You just take some higher-level language (SQL), declare it “declarative”, and proceed to assert that it’s good and that other people are “ignorant” for not seeing that (me, in this case). As I wrote in the article, while the notion of declarativeness is devoid of practical utility, it does offer a great topic for useless debates and arguments about its own meaning and importance. Exactly what you do with your comments.

          • I should not have broken my rule not to talk to frontend developers, it is pointless.

            Look. All these declarative tools do not grow on trees. I know, a very complex idea for a frontend developer, but I believe in you, if you try hard you’ll get it, eventually.

            These tools are created by people to solve their problems. See? Their problems. Not yours. For your specific problems you must build your own declarative tools or at least adapt existing ones. And you won’t be able to do it if you don’t even know what declarative programming is.

            Ever heard of unification? WAM? Hindley-Milner algorithm? SMT solvers? Dozens of other tools that every decent developer must have in their toolbox?

  6. I have to dissent somewhat — I feel this veered off target right at the beginning, with the claim that Declarative UI “gets its name from Declarative Programming”. I would argue that’s an oversimplification/mischaracterization, or at least focuses too narrowly on one particular notion of declarative UI. A declaratively-specified UI may not really involve much/any “programming” at all (in the sense of writing source code in languages like C, C++, Python, JS, etc…) — and therein, the argument goes, lies its power. If making a UI declarative instead of imperative involves nothing more than writing source code against an API at a different level of abstraction, what you’re really writing is a higher-level imperative UI. It may be “more declarative” (whatever that means), but it’s a half measure.

    Moving away from UI momentarily, one illustrative example of an imperative/declarative split that doesn’t involve ANY change in abstraction is Python packaging. For many years, Python packages were created by writing a setup.py script, an imperative means of programmatically defining your package configuration. The script would include a call to the setup() function from the setuptools module (replacing the legacy distutils.setup() that came before it), and that would create the package based on arguments supplied to it, combined with command-line arguments to the script. Some of the setuptools.setup() arguments would be static values, or lists or dictionaries of static values. Others would be instances of auxiliary classes for specifying various aspects of the package contents, each with their own configuration imperatively baked in.

    Problem was, predictably, that running a Python script to create a package proved fragile. It was easy for the author to bake in assumptions about the environment in which the script would be run, and (without even realizing it) cause it to break in environments other than those the author tested it under. Different Python implementations, interpreter versions, or installed packages could also affect the outcome of the script, and therefore alter the resulting package.

    So, the ability to configure packages declaratively was introduced. As an alternative to the setup.py script, a setup.cfg file can be supplied which provides the EXACT SAME configuration data, but in a purely declarative form that isn’t executed AS code — it’s merely parsed BY code. This allows the outcome to be homogenized across disparate environments, optimized for maximum performance, and permits the underlying implementation to be improved or even completely replaced — all without having to worry about API compatibility with some vast universe of unknowably complex or nuanced user code that will be executed during the packaging process.

    Even though they APPEAR to be interchangeable (and are, under ideal conditions), a script calling ‘setuptools.setup(name="my_package", version="1.0", packages=setuptools.find_packages(where="src"), install_requires=["somedep", "someotherdep >= 2.0"])’, and a setup.cfg file with the same information specified declaratively, are worlds apart in terms of performance, predictability, future-proofing, and change resilience.

    Switching back to UI, consider Qt user interfaces. Qt offers both the classic QtWidgets UI framework (imperative, code-driven UIs written in C++ or another language with bindings to the C++ widget library) and its newer cousins, QtQuick & QML.

    A QtWidgets UI is constructed imperatively. Even if the UI is specified in an XML file, in many cases traditional imperative source code is literally *generated from* that XML, then compiled and/or interpreted. Ultimately, a UI defined in a QtWidgets XML file is no more declarative than if that same UI is built from a C++ source file as a custom QtWidget subclass.

    QML UIs, on the other hand, are fundamentally declarative. Qt describes the QML language ITSELF as “a highly readable, declarative, JSON-like syntax with support for imperative JavaScript expressions combined with dynamic property bindings.” While you can embed bits of imperative code *in* a QML UI definition, you no more construct a QML UI imperatively than you define a QtWidgets UI declaratively. It’s not a question of different levels of abstraction — that comparison doesn’t even make sense, they’re completely unrelated languages based on entirely separate UI libraries.

    In THEORY it might be possible to build a QML UI in fully-imperative C++ without writing any declarative .qml code at all, if one were feeling particularly Quixotic. (Qt’s QtQML module does offer a C++ API, with low-level classes like QQmlContext, QQmlComponent, QQmlProperty, QJSValue, QJSManagedValue, etc…) But to do so would defeat the entire purpose of the language.

    • Hey Frank,
      I doubt many readers will have the patience to read your comment, but I did. To be frank, I’m not sure what you’re trying to say. Can approaches that you call “declarative” be more useful? Yes, and this article doesn’t argue otherwise. What I’m saying here is that you could use all these tools without knowing that they’re “declarative” and this wouldn’t hinder your productivity in any way. Furthermore, once you know that they are “declarative”, you have nothing to do with this info. So, my argument is that the notion of “declarativeness” is simply useless and only leads to discussions and arguments, without any practical utility.

      • “I doubt many readers will have the patience to read your comment”

        …Seems unnecessary.

        “but I did.”

        Your magnanimity knows no bounds.

        “To be frank, I’m not sure what you’re trying to say.”

        That much seems certain.

        “What I’m saying here is that you could use all these tools without knowing that they’re ‘declarative’ and this wouldn’t hinder your productivity in any way.”

        Well, that’s true, as far as it goes. The same could be said about any number of other things: Compiled vs. interpreted languages, for instance. Does someone NEED to know how a given language’s code is executed, to write code in that language? No. (But they’re likely to write better code, or better avoid writing especially bad code, if they understand the nature of the execution model.)

        “Furthermore, once you know that they are ‘declarative’, you have nothing to do with this info.”

        See, that part simply isn’t true.

        Set aside the fact that characterization and categorization are part of the conceptualization process which allows languages and patterns to be developed, to be compared and contrasted, to be discussed, and to be taught on both concrete and abstract levels — even though those are all important. “When you understand the Nature of a Thing, you know what it’s capable of.” –Thomas Fuller (1608-1661)

        But even for someone who has no interest in thinking about how a UI is developed, and simply wants to DO it, the difference matters because fundamentally different approaches will be called for.

        Say you’re showing a text input field, and you want to have a little “reset” button that appears to clear its contents — but it should only show up when the text field isn’t empty.

        In an imperative UI, you might write a callback for the input field’s “edited” signal that checks the contents of the field. If it’s non-empty, it adds the reset button to the UI. If it’s empty, it destroys the reset-button widget. Or maybe just shows/hides it, whatever. Though if it’s initially hidden, then when you’re initially constructing the UI you’ll also need to either manually fire that callback, or duplicate the check on the input field contents, so you get the initial state correct.

        In a declarative UI, you can’t create/destroy widgets on the fly. The reset button has to be part of the declared UI components right from the start. And you don’t want to have to run a callback every time the user edits the input text. Callbacks are slow and make the UI laggy and janky, especially if they’re running in a frontend JS engine rather than your native backend. So instead, you’d include that reset button in the declared UI, and bind its visibility property directly to the boolean value of the text input’s contents: No contents, no visibility. That way, the UI backend takes care of both the initial and later states, and efficiently implements the necessary update checks as part of its refresh loop. In QML, that’d look like…

        TextInput {
            id: textInput
            placeholderText: "Enter something"
        }
        Button {
            id: reset
            text: "Reset"
            visible: textInput.text ? 1 : 0
        }

        I guess I really just don’t understand how the differences in those two approaches (which will often have far deeper implications, since this contrived example barely scratches the surface) can be dismissed as subjective or irrelevant.

        • > The same could be said about any number of other things: Compiled vs. interpreted languages, for instance. Does someone NEED to know how a given language’s code is executed, to write code in that language? No. (But they’re likely to write better code, or better avoid writing especially bad code, if they understand the nature of the execution model.)

          First, a developer can happily and productively work without knowing the details of compiled vs interpreted languages. It’s sufficient to know the details of just your specific tech stack.

          Second, if you set out to create a new language, you’d face a very practical choice: make it compiled, or make it interpreted. This is a complex trade-off that requires careful analysis. Once chosen, this becomes the core architecture of your language. Nothing of this sort applies to “declarative programming”: since this dichotomy is false, you don’t really choose a “declarative” or “imperative” approach. You just decide what level of abstraction your language offers, and go with it. You can introduce constructs of a higher (or lower) level at any point in time. So, when you set out to create a new language, “declarative” vs “imperative” are useless terms.

          However, if you’d like to market your new language, then telling everyone that it’s “declarative” would definitely help, after Google invested so much money into convincing everyone that this is a “good thing”.

          > I guess I really just don’t understand how the differences in those two approaches (which will often have far deeper implications, since this contrived example barely scratches the surface) can be dismissed as subjective or irrelevant.

          I dismissed them as impractical and wrong. But they’re also subjective, because your definition of “declarative UI” differs from other definitions out there. Yours is based on reactiveness and immutability, which, frankly, aren’t strictly related to “declarativeness”. See, that’s a subjective interpretation.

  7. Declarative frameworks are clean, productive and efficient. If you’ve used one and didn’t recognize its utility, you are being short-sighted. There are a number of declarative languages for data retrieval: SQL, DAX, MDX… Most PaaS / model-driven development environments are declarative in nature (Mendix, etc.). That doesn’t mean you can’t have procedural code; it just prevents you from having to write procedural code when you can describe the solution you want in a declarative way.

  8. This post is a few months old but from the comments it feels like it never got past discussion, like some agreement wasn’t reached. Hopefully this post will nudge things in that direction

    When I started to work with declarative UIs I wondered what was really the point, so I actually made a personal list of the pros and cons. I do agree with it being a marketing buzzword first and a tool second, but:

    * Declarative UIs are easier for junior developers and graphic designers
    * If you’re using a compiled language usually you don’t have to recompile to change the UI
    * When used to describe the UI it separates creation from behavior, which obviously you can do procedurally, but with a declarative UI that’s built-in; that’s one less thing to worry about
    * It’s platform agnostic; it doesn’t matter the language, browser or OS, the code is the same and can be shared as long as there’s no platform-specific bug

    This is subjective but usually a library is written because there’s a need to fulfill; the library usually fulfills that goal beautifully, but leaves many others in the trashcan, yet promotes itself as a general-purpose tool. So they end up being useful to their core base, but not to a wider group who jumps from library to library trying to find the perfect one

    I do think declarative UIs have downsides. I only like them to build the UI, and use procedural code after that. I found myself having to use what I consider workarounds for behaviors because they are a lot easier to write procedurally. Those workarounds are often widely accepted as the best path with a specific API, but I’ve rarely seen people say they are a downside, and when I’ve asked developers directly why I can’t do X, some even try to avoid admitting that their library is not a good fit for my use case

    I think the main reason why declarative UIs are being pushed by companies is because they are easier. They make the gap between the end result from an intern and a senior developer smaller, and when you balance it with expenses it makes business sense. Businesses are not after quality, productivity or effectiveness; they are after money, and decreasing costs offsets the need for quality. That might not benefit other groups, but it benefits investors

    • Hi Eduardo,
      Thanks for the elaborate comment and very interesting thoughts.
      FWIW, I think that Jetpack Compose, specifically, will actually increase the delta in the end result between “intern and a senior dev” because it’s a very complex framework that encourages bad coding habits. No way to verify this, though, so it’s just a hunch.

