Author: dchissick

  • How to Un-nest Arrays using Tableau

    This is a revised copy of my original post in the Tableau Community blogs from 2023.

    Over the years I’ve encountered many examples of data that includes arrays of values within a single field – either in JSON format or as simple text, such as this:  [[4,0],[5,3],[7,3],[8,0],[4,0]]

    Usually the data can be un-nested using SQL or JSON functions into a set of separate rows, which can then be linked to the original data table as a separate table. However, a few years ago I was working on a system with no such option: the original data was in Elasticsearch, and our ODBC driver didn’t expose any un-nesting functions. And we needed to get the row data from the arrays.

    It turns out that Tableau can do this quite easily. Here is the solution:

    1.      Build a regular data source from your table, with the array field as normal text. In the example, it’s called [Shape Coordinates].

    2.      Create a table of numbers from 1 to N, with N larger than the maximum size (number of items) of your array. The table can be in Excel, the database, or any other format. Add the table to the data source and join it to the data table with a relationship that forms a full Cartesian join – meaning that you create a relationship calculation with an identical value (I use 1) on each side.

    Note – the example here unnests a two-dimensional array. Each value has any number of pairs of coordinates, specifying a shape to be drawn on the page. The real data actually had pairs of real coordinates (latitude/longitude) specifying a GPS track.

    Now, for every row of data we have numbers from 1 to N in the corresponding table, which we will use to find the 1st, 2nd, 3rd (and so on) members of the array, by splitting it on the commas and brackets in the string. I would have loved to use the SPLIT function here, but it doesn’t accept a field or variable as a parameter, so we’ll use FINDNTH instead:

    3.      We find the start of each member: Start → FINDNTH([Shape Coordinates], "],[", [Num]-1)

    Note that I’m looking for "],[", because the comma alone is not enough – it would also match the separator inside each pair.

    4.      The end of each member: End → FINDNTH([Shape Coordinates], "],[", [Num])

    5.      The length of each member is End minus Start (the last member has no End, so I use a maximum value, longer than any string I would expect):

    Length → IF [Start] > 0 THEN
                IF [End] > 0 THEN [End] - [Start]
                ELSE 20
                END
             END

    6.      Now the actual split: Split → MID([Shape Coordinates], [Start], [Length])

    7.      And clean up all the brackets and commas: Pair → REPLACE(REPLACE(REPLACE([Split], "],[", ""), "]", ""), "[", "")
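    For anyone who wants to sanity-check the logic outside Tableau, here is a minimal Python sketch of steps 3-7, with FINDNTH, MID and REPLACE re-implemented as plain string operations. One assumption in the sketch: the first member is handled explicitly (Start = 1), since it has no "],[" separator before it.

```python
def findnth(s, sub, n):
    # 1-based position of the n-th occurrence of sub in s,
    # or 0 if there is no such occurrence (mirrors Tableau's FINDNTH)
    pos = -1
    for _ in range(n):
        pos = s.find(sub, pos + 1)
        if pos == -1:
            return 0
    return pos + 1

def unnest(coords, max_members=10):
    # max_members plays the role of the 1-to-N numbers table
    pairs = []
    for num in range(1, max_members + 1):
        # Steps 3-4: locate the "],[" separators around the num-th member
        start = 1 if num == 1 else findnth(coords, "],[", num - 1)
        end = findnth(coords, "],[", num)
        if start == 0:
            continue  # fewer than num members -> Null in Tableau
        # Step 5: last member has no End, so use a generous fixed length
        length = (end - start) if end > 0 else 20
        # Step 6: MID is 1-based in Tableau, hence the -1 offsets
        split = coords[start - 1 : start - 1 + length]
        # Step 7: strip the brackets and separators
        pair = split.replace("],[", "").replace("]", "").replace("[", "")
        if pair:
            pairs.append(pair)
    return pairs

print(unnest("[[4,0],[5,3],[7,3],[8,0],[4,0]]"))
# ['4,0', '5,3', '7,3', '8,0', '4,0']
```

    The sketch reproduces the ordered list of pairs that the Tableau calculations produce for each row.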

    I’ve now unnested my array and retrieved an ordered list of values for each row of data. In this case the values are pairs of numbers, but they could be any type of data. If there are fewer than N members in the array, the Split and Pair fields return Null and can be filtered out. The data looks like this:

    8.      I split the pairs into X and Y coordinates:

    X → FLOAT(SPLIT([Pair], ",", 1))

    Y → FLOAT(SPLIT([Pair], ",", 2))
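    In Python terms, step 8 is just a split on the comma (a sketch of the two SPLIT/FLOAT calculations):

```python
def split_pair(pair):
    # mirrors FLOAT(SPLIT([Pair], ",", 1)) and FLOAT(SPLIT([Pair], ",", 2))
    x, y = pair.split(",")
    return float(x), float(y)

print(split_pair("5,3"))  # (5.0, 3.0)
```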

    9.      Now I can visualize my geometric shapes on a worksheet using polygons:


    Discussion

    This technique works, and caused no performance issues on a dataset that included tens of thousands of records, with each array having up to 50 pairs of latitude/longitude values.

    The “normal” solution would unnest the arrays using SQL, thereby creating a new data table with a large multiple of the original number of records, though with very few fields (ID, X and Y). If you are visualizing only a small number of records at a time, I would expect much better performance from my technique, which calculates the array data on the fly and only for the specific records you need. However, if you are filtering or aggregating by array data from thousands or millions of records in one view, a pre-calculated table would probably be much faster.

    The sample workbook is on Tableau Public:  https://public.tableau.com/app/profile/dan.chissick/viz/UnnestingArrays/UnnestingArrays

  • Nested Table Calculations

    Table calculations are one of the most used features of Tableau, but some of their capabilities are less well known, and I’d like to dwell on one of these: Nesting.

    Nesting table calculations occurs when one calculated field references another. That’s not a problem in itself, but what happens when we want each of these to behave differently? Compute the first using Table (across), and the second using Table (down)? I’ve found out that many developers don’t know how to do this, so here is a brief explanation.

    Let’s illustrate it using an example, based on Superstore data. I want to display sales by Product Sub-Category and period, and highlight the top 3 increases in sales for each period.

    I start by creating a simple table with sales by Sub-Category and Year:

    The first calculated field is the difference between each year and the previous year:

    Period Change:     ZN(SUM([Sales])) - ZN(LOOKUP(SUM([Sales]), -1))

    Note that this can easily be created using the Quick table calculation menu below, but then the next step would be impossible.

    Now I use this field in another table calculation:

    Period Change Rank:    RANK([Period Change]) <= 3

    This should return True for the top 3 changes per period (currently Year). I drag the field to Color, but after I set the colors that I want (green and white), I can’t get the correct results using the “Compute using” options.



    This is where the nested option is necessary, because the difference is calculated across the table (by rows), while the rank is calculated down (by columns). I open the “Edit table calculation” dialog box:

    Note that now there is a “Nested Calculations” dropdown, where I can choose the calculation option for each field, and now I can set “Table (across)” for Period Change and “Table (down)” for Period Change Rank. The result is now correct – the highlighted green cells are the top 3 increases for each year.
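    The two compute directions can be sketched in plain Python, using a small hypothetical sales matrix (the numbers below are made up, not Superstore values):

```python
# Hypothetical sales by Sub-Category (rows) and Year (columns)
sales = {
    "Chairs": [100, 120, 90],
    "Tables": [80, 70, 110],
    "Phones": [60, 90, 95],
    "Paper":  [50, 55, 52],
}
years = [2022, 2023, 2024]

# Period Change, computed Table (across):
# each year's sales minus the previous year's, per sub-category
change = {sub: [None] + [v[i] - v[i - 1] for i in range(1, len(v))]
          for sub, v in sales.items()}

# Period Change Rank, computed Table (down):
# rank each year's changes across sub-categories and keep the top 3
top3 = {}
for j in range(1, len(years)):
    ranked = sorted(change, key=lambda sub: change[sub][j], reverse=True)
    top3[years[j]] = ranked[:3]

print(top3)
```

    The point of the sketch is that the inner calculation runs along rows while the outer one runs down columns – exactly what the Nested Calculations dropdown lets you configure.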



    To summarize – table calculations can be nested in two levels or more, and it is important to know that you have the option to set the “Compute using” option of each calculated field separately, enabling various types of analyses.

  • Published Data Sources and Calculated Fields

    Have you ever tried to edit a calculated field in Tableau, and seen only the “Edit copy” option? This appears when you’re connected to a published data source, and the field was published together with the data source – so it can only be edited through there.

    So should we publish calculated fields within the data source? What are the advantages and disadvantages? And how do we “move” a calculated field to our workbook when necessary?

    Note that I’ll use the abbreviation “DS” for “data source”, as it appears quite a lot in the text.

    I’ll start with the disadvantages:

    • As already noted, you can’t edit a calculated field that’s published in the data source. In my opinion this is a major issue – from my experience in both development and consulting with Tableau, the process includes a lot of iteration and trial and error, and the “Edit copy” barrier is a real nuisance.
    • Similarly, “Replace references” won’t work either. I’ve encountered several situations in which someone replaced a field, and was surprised that the results didn’t change. On checking, we found that the referencing fields were in the published DS, so they couldn’t be affected.
    • Performance. This is a tricky one, but we once analyzed performance on a data source with a lot (I think ~100) of published fields, before and after moving them to the workbook, and there was a significant improvement when the fields were with the dashboard – not in query execution but in compilation time, which can take several seconds. I don’t know the exact reason for this, but the results are consistent: you can save 20-30% of compilation time when the calculations are not in the DS.

    Advantages:

    • If a calculated field is in the data source, you can re-use it in different workbooks. It also prevents different developers from modifying “standard” calculations.
    • Materialization – pre-calculation of some fields in extracts – may improve performance slightly, but is relevant only for very simple calculations.

    Recommendations

    Note that these are my personal recommendations, and other opinions may be available.

    • In general, avoid embedding calculated fields in published data sources.
    • Notable exceptions:
      • Basic row-level calculations, such as:
        • [Price] * [Quantity]
        • [First Name] + " " + [Last Name]
        • DATEDIFF("day", [Start Date], [End Date])
        • IFNULL([any field], "N/A")
      • Standardized business logic calculations, that you are confident will not change over time, such as:
        • Profit ratio:  SUM([Profit]) / SUM([Sales])
      • Groups, when they are used to clean up or organize a dimension.
      • Remember that there is no such thing as “pre-calculation” in Tableau beyond row level, so any fields using aggregations, level of detail functions, parameters, and of course table calculations, give you no benefit when published in the DS.

      So what happens in real life?

      In many cases, we create a data source and perform lots of testing or iterations, or even develop dashboards in the same workbook. But then we need to split the data source and publish it separately, while leaving the calculated fields with the dashboards. Here is a tested and proven process for doing this:

      Our starting point is a workbook with a local data source (extract), any number of calculated fields, and some dashboards (my example has just one).


        Step 1 – Copy the data source to a new workbook:

        • Create a new worksheet.
        • Drag any field from the data source to the worksheet.
        • Right-click on the worksheet tab → Copy
        • File → New from the top menu.
        • In the new workbook: right-click on “Sheet1” and Paste.

        Step 2 – Remove all calculations from the data source in the new workbook:

        • Click the filter icon at the top of the data pane, and select Calculation.
        • Select all of the fields now in the list (or batches of several each time, if there are too many), right-click and Delete. If any warning messages appear, you can ignore them.
        • Note that if you wish to retain any calculated fields in the data source, then they shouldn’t be deleted at this stage.
        • Delete any parameters or sets as well.

        Step 3 – Publish the data source (you may have to sign in to the server first)

        I always save the workbook in which I developed a data source, so I can easily make modifications and republish when needed. Therefore, I recommend not to check the “Update workbook to use the published data source” checkbox when publishing, and to save this workbook under a suitable name (“xxx DS.twb”) when finished.

        Step 4 – Connect to the published data source

        • Return to your original workbook.
        • Data → New data source → Tableau Server
        • Select the newly published DS.

        Now your original DS, with all the calculated fields, and the published DS should appear together in the Data pane – one with the local extract icon, the other with the published DS icon. Note that the published DS has no (or very few) calculated fields compared to the local DS.


        Step 5 – Replace the data source

        • Right-click on the local DS, “Replace Data Source”, and make sure the local and published data sources are selected in the dialog box.
        • Click OK
        • You should see that all the calculated fields from the local DS are added to the published DS.
        • Close the local DS, and rename the published DS if necessary.

        Now you have a development workbook that is connected to a published DS, but with local calculated fields – that you can edit freely. You can verify that only the fields that you retained when publishing are “locked” for editing.


        Summary

        Published data sources are standard best practice, and you need to decide where to place your calculated fields, but it’s useful to know that there’s a relatively straightforward solution for moving those calculated fields from the data source to the workbook, at any stage of the development cycle.

        As a bonus, while writing this post I thought of a tiny new feature idea for Tableau: find a way to mark calculated fields from a published DS as “locked”, so the developer can identify them in the data pane. Please add some upvotes, and maybe we’ll see it implemented at some point in the future:

        https://ideas.salesforce.com/s/idea/a0BHp000016LlntMAC/differentiate-between-locked-and-editable-calculated-fields

      • Why Tableau ?

        I’ve been working as a consultant for a Tableau Partner (and reseller) for over a decade, so I’ve been asked by potential customers many times: Why Tableau? Why should we prefer Tableau over its competitors – Power BI, Qlik, Looker, and various others?

        For the customers we have various answers, depending on the context – their requirements, size, deployment type, self-service, embedding, even the community support, and more. But why am I a Tableau “freak”? And so many others in this same community? What makes Tableau special, compared to the other top-end visualization tools?

        I’ve been working in data visualization since the nineties, first with Panorama (which, back then, was excellent) and then touching upon various other tools. When I tried out Tableau for the first time (around 2014) it was just a test in my spare time, and nothing happened. But about a year later I got my first project, so I had to dig deeper, and I was hooked.

        So what’s the difference? There are two major factors.

        First – all the visualization tools I had used before, and most of those I’ve seen since, have a similar method for creating a chart (“worksheet”, “widget”, or whatever): You choose the type of chart from a menu, connect it to your data model, and start setting the properties – which dimension is on rows (or “categories”), what measure to display, bar color, line thickness, label font, and so on. If you need a different type of chart, you get a new set of properties.


        Power BI chart selection
        Quicksight charts

        Tableau, from the beginning, was different. Every worksheet (for a chart or table) uses the same set of definitions, or cards: Columns, Rows, and Marks. You have several types of marks, but almost all of them have the same sets of properties – Color, Size, Text, Tooltip, Detail (and sometimes Shape, Path and Angle). The “Show Me” pane is just a set of default configurations, not really a chart selector, and I rarely use it.

        This means that once you understand the interaction between rows, columns and marks, you can create almost anything – and I mean anything, not just data visualizations, but also artwork and games, for example. There’s an inherent flexibility that gives Tableau an advantage over other tools, both in speed of development and in the ability to iterate and “play around”, because you don’t have to select or switch the type of object that you’re working on all the time, and you’re not limited to their predefined attributes.


        The Tableau interface

        The second factor is Tableau’s calculations. I agree that all BI tools have the ability to create calculated fields, but Tableau has a great combination of a simple interface – everything is in one place, easily accessed and edited – and a large array of options, from the simplest arithmetic to Level of Detail functions and Table Calculations. Once you gain a basic understanding of how it works – aggregate and row-level, and a few other basics – it’s very easy to use, and also very powerful. 


        Try creating this without Tableau

        Some people don’t get it. They’ve been using Power BI or another competitor’s software for years, have difficulty switching to a Tableau mindset, and will always prefer their original tool. But I believe that Tableau’s greatest advantage is still the basic development interface, that allows you more flexibility and speed of implementation compared to its competitors.

        Of course there are other features. In Desktop – you can create a flexible data model from almost anything, and then manipulate the data in various ways. Dashboard design and actions. Beyond it – Tableau can be used by a lone researcher or by an enterprise with 50,000 users, online or offline. Tableau Public, of course. APIs, embedding, and more. The Community.

        But as Andy Kriebel, in my opinion the greatest Tableau guru of all time, recently wrote:

        Tableau is not Agentic Analytics
        Tableau is not Tableau Next
        Tableau is not Tableau Cloud

        Tableau is Tableau Desktop

        What he meant was that the core of Tableau is still the basic development interface, of which Tableau Desktop is the main component. You can add features around it, but without Desktop it won’t be the same. And Desktop is what makes Tableau the best.

      • DATETRUNC

        One of the most underrated functions in Tableau, in my opinion, is DATETRUNC. Underrated, underused, and not understood. Recently I was disappointed to read a whole book about Tableau in which it wasn’t mentioned even once.

        Technically speaking, DATETRUNC “truncates” any date value to the starting point of the period: year, quarter, month, week, and so on. For example:

        NOW()                               → 29/08/2025 16:14:35
        DATETRUNC("hour", NOW())            → 29/08/2025 16:00:00
        DATETRUNC("day", NOW())             → 29/08/2025 00:00:00
        DATETRUNC("week", NOW(), "monday")  → 25/08/2025 00:00:00
        DATETRUNC("month", NOW())           → 01/08/2025 00:00:00
        DATETRUNC("quarter", NOW())         → 01/07/2025 00:00:00
        DATETRUNC("year", NOW())            → 01/01/2025 00:00:00

        This is useful for calculations, but the real power comes from understanding that DATETRUNC is a hierarchical function, because it actually returns the parent of the date at the given level. So if we want to check whether [date 1] and [date 2] are both in the same month – meaning that both have the same parent month – we can use   DATETRUNC("month", [date 1]) = DATETRUNC("month", [date 2])

        … instead of what I have seen too many times:  
        MONTH([date 1]) = MONTH([date 2]) AND YEAR([date 1]) = YEAR([date 2])
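        The difference between the two approaches is easy to demonstrate with a small Python stand-in for DATETRUNC (a sketch covering just a few date parts):

```python
from datetime import datetime

def datetrunc(part, d):
    # minimal stand-in for Tableau's DATETRUNC: snap a datetime
    # to the first instant of its year / month / day
    if part == "year":
        return d.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)
    if part == "month":
        return d.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if part == "day":
        return d.replace(hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(part)

a = datetime(2025, 8, 29, 16, 14, 35)
b = datetime(2025, 8, 3, 9, 30)   # same month, same year
c = datetime(2024, 8, 3, 9, 30)   # same month number, different year

print(datetrunc("month", a) == datetrunc("month", b))  # True: same parent month
print(datetrunc("month", a) == datetrunc("month", c))  # False: different year
```

        A naive MONTH(a) == MONTH(c) comparison would wrongly return True for the last pair; comparing the truncated dates handles the year automatically, because the parent month carries its year with it.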

        If we want to group dates by month, we can use DATETRUNC("month", [date]), and that’s it – we’ve created a parent field for [date] at the month level.

        This is especially useful in creating relationships for date fields in data sources, for example when one table is at the timestamp level (transactional data) and the other at the quarterly level (quarterly goals). Just create a relationship calculation with DATETRUNC("quarter", …) on both sides, and it works.

        There are countless other examples, but the important thing is to understand the main idea: DATETRUNC doesn’t just change the date value, it raises it to a higher level in the hierarchy.

        Now go out and use it.

      • Workbook Locale

        Many times, I’ve encountered Tableau developers struggling with the date formatting in their workbooks, mostly as it defaults to the US format (mm/dd) instead of what they need – and then they waste time setting custom formats for date fields in various worksheets.

        The solution to this, and one of the lesser known features of Tableau Desktop, is the “Workbook Locale” setting, located under the “File” menu.

        The “Automatic” option defaults to your computer’s regional settings, but if you select “More” you can choose your language, and then all date and number formats in the workbook will use that setting by default.

        Even less known is the fact that this setting, at the workbook level, overrides any other regional locale. So if you save the workbook with a locale other than “Automatic”, the date and currency format is fixed, even after publishing to Tableau Server or Cloud.

        The order of precedence is listed below, and documented here:

        1. Workbook locale (set in Tableau Desktop)
        2. Tableau Server User Account language/locale settings
        3. Web browser language/locale
        4. Tableau Server Maintenance page language/locale settings
        5. Host computer’s language/locale settings

        The bottom line – you can control everything using Workbook Locale, so use it, unless you need varying formats for users in different languages or countries, of course.

      • The Measures Pivot

        Tableau shows great flexibility in creating and displaying measures, with a built-in “Measure names” dimension that has some of the characteristics of a regular dimension, but not enough of them. I have encountered several use cases for a “real” Measures dimension:

        • Enabling the user to select a set of measures to be displayed in a table or chart.
        • Creating groups of measures, like a hierarchy, for selecting or even aggregating values.
        • Calculating a Balanced Scorecard for multiple measures.

        In one case, a customer needed a scorecard based on 50-60 calculated measures, grouped by subject, with different threshold values for each measure, and a logic for aggregating the scorecard results up to the subject level. Technically it might have been possible to implement this using calculated fields, but “The Pivot” was our solution, and it worked!

        Let’s develop an example using (of course) Superstore data. I’ve created a number of calculated fields, and these will be my measures, or KPIs:

        I now have to create a KPI table, with a list of my KPIs (measures) and some supporting properties. My table is in Excel, but of course it can be a database table as well. A simple table would look like this:

        I now add this table to my data source, using a 1=1 relationship to link it to the base (fact) table. Note that the relationship between the tables has the same fixed value on each side, so technically it is a full cross-join:

        There is no need to be alarmed by the cross-join, which could theoretically create a Cartesian product of the two tables. Relationships in a Tableau data source are activated only when called upon by the analysis, and we will see how to do that selectively in a moment.

        The next step is to add a new calculated field. I’m going to call it “KPI Value”:

        The calculation takes each row in the KPI table, and links it to a different measure, or calculated field. All the calculations have to be aggregated, otherwise you will get the dreaded “Cannot mix aggregate and non-aggregate…” error, but you can use actual calculations as well as field names, such as:

        WHEN "S" THEN SUM([Sales])

        Now I have a measure called “KPI Value”, controlled by a “KPI Name” dimension, which can be filtered, grouped, or otherwise manipulated just like a normal dimension (though totals across this dimension are probably meaningless). For example:

        OK, but this looks weird. The measures are on totally different scales, so percentages appear as 0 or 1. We need to format the values, and this is where the “Format” field in the table comes in handy. In the example I have three options – “N”, “D”, “%” – but you can use as many as you need. Two additional calculated fields give us a formatted text value for each measure, that can be dragged to Label or Tooltip as needed:

        And the end result is:

        Note the ability to use “Group” as a hierarchy or a filter.
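        The value-plus-format logic can be sketched in Python. The KPI codes, names and numbers below are hypothetical stand-ins for the KPI table and the Superstore aggregates, not values from the actual workbook:

```python
# Hypothetical rows from the KPI table (cross-joined onto every data row)
kpis = [
    {"code": "S",  "name": "Sales",        "format": "N"},
    {"code": "P",  "name": "Profit",       "format": "N"},
    {"code": "PR", "name": "Profit ratio", "format": "%"},
]

# Hypothetical aggregates from the fact table
sales, profit = 2_297_200.0, 286_397.0

def kpi_value(code):
    # mirrors the CASE in the "KPI Value" calculated field:
    # each KPI Code row is linked to a different aggregated measure
    if code == "S":
        return sales
    if code == "P":
        return profit
    if code == "PR":
        return profit / sales
    return None

def kpi_label(code, fmt):
    # mirrors the formatting fields driven by the "Format" column
    v = kpi_value(code)
    return f"{v:.1%}" if fmt == "%" else f"{v:,.0f}"

for k in kpis:
    print(k["name"], "->", kpi_label(k["code"], k["format"]))
```

        The key design point is the same as in the worksheet: one measure ("KPI Value") whose meaning is selected row by row from the KPI table, plus a per-row format so that percentages and large numbers can share one column.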

        Great. This is very useful, but there’s another level that we can add here – setting threshold values for all the KPIs, within the KPI table. I am leveraging the existence of this table, and adding a few more fields:

        The “Color” field is just a text description of a color, or state (the contents could also be “Good” or “Bad”, for example), but the “Color from” and “Color to” fields define a range of values for the KPI, that we can then color by the “Color” field.

        (I know, too many meanings of the word “Color”. But it’s worth it…)

        To implement this in our worksheet, I added one new calculated field:

        This filters the rows for each KPI so that only one “Color” row remains, and also takes into account those with no threshold values and just one row. I can drag it to the Filters card, filter by True, and drag the Color field to Color:

        And that’s already a type of Balanced Scorecard, highlighting measures (KPIs) by their performance. Any changes to the thresholds can easily be made by updating the KPI table, and the purists will split it into two tables, KPIs and Thresholds, with a join between them using KPI Code.
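        The threshold filter boils down to "keep the one Color row whose range contains the KPI value". A Python sketch, with hypothetical bands for a profit-ratio KPI (open-ended ranges modeled as None):

```python
def color_for(value, bands):
    # bands: (color, lo, hi) rows from the KPI table; lo/hi may be None
    # mirrors the boolean filter that keeps only the row whose
    # [Color from] / [Color to] range contains the KPI value
    for color, lo, hi in bands:
        if (lo is None or value >= lo) and (hi is None or value < hi):
            return color
    return None

profit_ratio_bands = [
    ("Red",    None, 0.10),
    ("Yellow", 0.10, 0.15),
    ("Green",  0.15, None),
]
print(color_for(0.125, profit_ratio_bands))  # Yellow
```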

        This is Tableau, of course, so it can get even more complicated – and powerful. My original customer had three tiers of “Accounts”, with different threshold levels for each, so I simply added another level into the KPI table, multiplying the number of rows by 3 again. But it worked.

        A word of warning: this is a great technique, but it’s not good for performance. The original use case had 300+ rows in the KPI table, and less than a million in the raw data, and performance dropped to 30+ seconds per view, but the analysis was so powerful that my customer was still happy with it. So use it with care, and don’t expect something that works instantaneously with Superstore data to be as fast with tens of millions of rows and 40-50 KPIs.

        Note: the technique has also been tested with Tableau’s new multi-fact relationship model, and it works. There are two important considerations:

        • Link the KPI table to one of the fact (base) tables, and not a dimension table.
        • Each KPI calculation should be based only upon measures from a single fact table, otherwise results can be unexpected. That’s because the relationship model won’t know how to join the two tables before performing the calculation.

        The workbook that I used is published on Tableau Public, and the KPI Excel file is below:

      • Tooltip = Axis ?

        Tableau has many little quirks. With time you get used to them, but for new developers some of the small stuff can be very frustrating at first, and a nudge in the right direction always helps. So here’s one of them.

        In Tableau charts, any numbers appearing in tooltips are formatted using the “Axis” format, and not the default “Pane” format. So if you want your chart label to show “24.1%”, and the axis to have 0%, 5%, 10%, etc., your tooltip will show “24%”, which is a bit strange. See the example below:

        The best solution in such a scenario is to duplicate the relevant field, so basically you’re using two different fields – one for the label and axis, and another for the tooltip. Now you can set the axis format for the tooltip field (“Profit ratio (copy)” in the example below) without causing your axis to show unnecessary digits.

        Is there a reason for this functionality? Logic says that the tooltip format should be similar to the label, not the axis (which is usually more “rounded”), but maybe there’s something hiding behind it. And there’s been an Idea (now on the Salesforce IdeaExchange) about changing it for 12 years…

  • Tableau Cloud 💔 Static Files

        A nice feature of Tableau (Server) is that you can create data sources with multiple connections, including to files – for example a few database tables, joined/related to a static Excel file (because there’s a small set of data that’s not in your DB), or maybe to a shapefile.
        Then you can publish the data source, check the “Include External Files” box, and when refreshing your extract (or connecting live), the file is simply there. Static.


        Database tables with a relationship to Excel

        The Publish Data Source dialog box

        But what happens when you publish the same data source to Tableau Cloud?

        It turns out that this doesn’t work. The documentation states very clearly that it should work:

        But after testing thoroughly, and opening a case with Tableau Support, I can confirm that this causes an error on Tableau Cloud.
        If you publish a refreshable/live data source with a static file included, even after checking “Include External Files”, any attempt to connect or refresh extracts returns an error, as Tableau Cloud tries to access the file(s).

        This happens both with direct database connections and with Tableau Bridge. Now, obviously, from a technical point of view you can use a Bridge connection to connect to the file on a UNC path, but probably that’s exactly what the developer of the data source was trying to avoid.

        Why is this an issue?

        One of my customers is migrating from Tableau Server to Cloud, and has dozens of data sources that include static files. They discovered the problem only after trying to refresh the extracts on Cloud. All of them now have to be modified – mostly by loading the file into a database table.

        This issue is a major difference in functionality between Server and Cloud, but it is undocumented and doesn’t appear in any migration guides. So it’s important for the community (that’s you – my readers) to know about it, and take it into account for future migrations, at least until the documentation is corrected.

      • DZV is great, but…

        Dynamic Zone Visibility (or DZV) was introduced by Tableau back in 2022, and is a great feature. It enables you to display or hide any dashboard object, or container, based on the value of a parameter, or a calculation that uses a parameter.

        What most developers I’ve worked with don’t know, however, is that hiding a worksheet using DZV does not prevent Tableau from retrieving the data for the worksheet. So, for example, if you are using a parameter to switch between 5 different displays (so 4 are hidden), the data for all of them is being calculated every time you refresh the dashboard, or change a filter value, even if only one is visible. That’s a 5x performance hit!

        In order to test this thoroughly, I created a workbook with two worksheets: “All Routes”, which is quite slow, and “Bus Calendar”. I also created a parameter with two values (“Times”, “Map”) for switching between them, and the necessary calculated fields:

        Parameter
        One of the boolean fields

        I then created three dashboards:
        1. A dashboard displaying both sheets, with no DZV.
        2. A dashboard switching between the two sheets, using only DZV.
        3. Like no. 2, but adding a context filter on each sheet, using the same boolean field as the DZV, so the data is filtered out when the sheet is hidden.

        Filtering on the “Show map” field
        Context filter

        I then used a Performance Recording to see what happens under the hood. Note that Tableau uses caching, so when a worksheet’s data has already been retrieved using a specific filter, it won’t execute the query again. The results are below:

        DZV Performance Recording

        So what happened?

        • I opened the filtered DZV dashboard first, with “Times” selected in my parameter. Only the “Bus Calendar” query was executed.
        • I changed a filter that affects both sheets. Again, only the “Bus Calendar” query was executed.
        • I switched my parameter value to “Map”. You can see in the screenshot above that only the “All Routes” query (the long green bar) was executed.
        • Now I opened the unfiltered DZV dashboard, changed the filter, and it immediately executed the queries for both worksheets, even though only “All Routes” was visible.
        • Lastly, I opened the dashboard that displays both sheets. No queries were executed, because both sheets already had the data.

        Obviously this is just a quick scenario. I’ve checked this much more thoroughly on both Desktop and Server, and you can easily check for yourselves using Tableau’s Performance Recording tools (more about that in a future post).

        For now, I’m not telling you not to use DZV. It has great advantages over the old “hack” of filtering worksheets (which I used here), in that you can hide other objects as well, and you don’t need to hide the worksheet title in order to make it disappear. Just bear in mind that hidden worksheets are still calculated, and that affects performance, especially if you have a lot of data.