
Introduction

Viv

A WebGL-powered toolkit for interactive visualization of high-resolution, multiplexed bioimaging datasets.

About

Viv is a JavaScript library for rendering OME-TIFF and OME-NGFF (Zarr) directly in the browser. The rendering components of Viv are packaged as deck.gl layers, making it easy to compose with existing layers to create rich interactive visualizations.

More details and related work can be found in our paper and original preprint. Please cite our paper in your research:

Trevor Manz, Ilan Gold, Nathan Heath Patterson, Chuck McCallum, Mark S Keller, Bruce W Herr II, Katy Börner, Jeffrey M Spraggins, Nils Gehlenborg, "Viv: multiscale visualization of high-resolution multiplexed bioimaging data on the web." Nature Methods (2022), doi:10.1038/s41592-022-01482-7

💻 Related Software

  • Avivator: A lightweight viewer for local and remote datasets. The source code is included in this repository under avivator/. See our 🎥 video tutorial to learn more.
  • Vizarr: A minimal, purely client-side program for viewing OME-NGFF and other Zarr-based images. Vizarr supports a Python backend using the imjoy-rpc, allowing it to not only function as a standalone application but also embed directly in Jupyter or Google Colab notebooks.

💥 In Action

💾 Supported Data Formats

Viv's data loaders support OME-NGFF (Zarr), OME-TIFF, and Indexed OME-TIFF*. We recommend converting proprietary file formats to open standard formats via the bioformats2raw + raw2ometiff pipeline. Non-pyramidal datasets are also supported, provided the individual texture can be uploaded to the GPU (< 4096 x 4096 in pixel size).

Please see the tutorial for more information.

*We describe Indexed OME-TIFF in our paper as an optional enhancement to provide efficient random chunk access for OME-TIFF. Our approach substantially improves chunk load times for OME-TIFF datasets with large Z, C, or T dimensions that otherwise may incur long latencies due to seeking. More information on generating an IFD index (JSON) can be found in our tutorial or documentation.

💽 Installation

You will also need to install deck.gl and the other peerDependencies manually. This step prevents users from installing multiple versions of deck.gl in their projects.
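For example, a typical installation might look like the following (the exact peer-dependency list and versions should be taken from the package's peerDependencies field, so treat this command as a sketch):

```shell
# Install Viv along with deck.gl; check @hms-dbmi/viv's package.json
# "peerDependencies" for the full, version-pinned list.
npm install @hms-dbmi/viv deck.gl
```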

Breaking changes may happen on minor version updates. Please see the changelog for details.

📖 Documentation

Detailed API information and example snippets can be found in our documentation.

🏗️ Development

This repo is a monorepo using pnpm workspaces. The package manager used to install and link dependencies must be pnpm.

Each folder under packages/ is published as a separate package on npm under the @vivjs scope. The top-level package @hms-dbmi/viv re-exports from these packages.

To develop and test the @hms-dbmi/viv package:

  1. Run pnpm install in the viv root folder
  2. Run pnpm dev to start a development server
  3. Run pnpm test to run all tests (or specific, e.g., pnpm test --filter=@vivjs/layers)

🛠️ Build

To build viv's documentation and the Avivator website (under sites/), run:
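A minimal sketch of the build invocation (the script name is an assumption based on common pnpm conventions; check the root package.json "scripts" field for the exact command):

```shell
# From the repository root, build the packages, docs, and Avivator site.
pnpm build
```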

📄 Sending PRs and making releases

For changes to be reflected in package changelogs, run npx changeset and follow the prompts.

Note that not every PR requires a changeset. Since changesets are focused on releases and changelogs, changes to the repository that don't affect these (e.g., documentation, tests) won't need a changeset.

The Changesets GitHub Action creates and updates a PR that applies pending changesets; merging that PR publishes new versions of the @vivjs/* packages to npm.

🌎 Browser Support

Viv supports Safari, Firefox, Chrome, and Edge. Please file an issue if you find a browser in which Viv does not work.

Getting Started

Start a simple web server to make your data available to the browser via HTTP.

If following the Data Preparation tutorial, the server provides Viv access to the OME-TIFF via URL:

  • http://localhost:8000/LuCa-7color_Scan1.ome.tif (OME-TIFF)

If you wish to use the SideBySideViewer, simply replace PictureInPictureViewer with SideBySideViewer and add props for zoomLock and panLock while removing overview and overviewOn.
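The pieces above can be combined into a minimal viewer component. This is a sketch only: the channel selection, contrast limits, and colors below are placeholders, and the URL assumes the tutorial's local server.

```javascript
import React, { useEffect, useState } from 'react';
import { loadOmeTiff, PictureInPictureViewer } from '@hms-dbmi/viv';

const url = 'http://localhost:8000/LuCa-7color_Scan1.ome.tif';

function App() {
  const [loader, setLoader] = useState(null);
  useEffect(() => {
    // loadOmeTiff resolves to { data: PixelSource[], metadata }.
    loadOmeTiff(url).then(({ data }) => setLoader(data));
  }, []);
  if (!loader) return null;
  return (
    <PictureInPictureViewer
      loader={loader}
      // One selection / contrast-limit / color entry per rendered channel.
      // These values are illustrative; pick ones that fit your image.
      selections={[{ t: 0, c: 0, z: 0 }]}
      contrastLimits={[[0, 65535]]}
      colors={[[255, 255, 255]]}
      channelsVisible={[true]}
      height={600}
      width={800}
    />
  );
}
```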

API Structure

Viewer

Viv provides four high-level React components called "Viewers": PictureInPictureViewer, SideBySideViewer, VolumeViewer, and VivViewer. The first three wrap the VivViewer and use the View API for composing different layouts. VivViewer wraps a DeckGL component and manages the complex state (i.e. multiple "views" in the same scene) as well as resizing and forwarding rendering props down to the underlying layers via the View API.

View

A "View" in Viv is a wrapper around a deck.gl View that exposes an API to manage where an image is rendered in the coordinate space. A View must inherit from a VivView and implement:

  1. a filter for updating ViewState
  2. a method for instantiating a View
  3. a method for rendering Layers

Views are used by Viv's Viewer components. For example, the OverviewView is used in the PictureInPictureViewer to provide a constant overview of the high-resolution image. The SideBySideView supports locked and unlocked zoom/pan interactions within the SideBySideViewer.

Layer

Viv implements several deck.gl Layers for rendering RGB and multi-channel imaging data. These layers can be composed like any other layer in the deck.gl ecosystem. The three main layers are MultiscaleImageLayer (for tiled, pyramidal images), ImageLayer (for non-pyramidal and non-tiled images), and VolumeLayer (for volumetric ray-casting), all of which accept PixelSource arguments for data fetching. These layers handle the complexity of data fetching and rendering setup by wrapping the lower-level rendering layers: XRLayer, XR3DLayer, and BitmapLayer.
The XRLayer (eXtended Range Layer) and XR3DLayer enable multi-channel additive blending of Uint32, Uint16, Uint8 and Float32 data on the GPU.

A crucial part of each layer is the extensions prop, which controls the per-fragment (pixel) rendering. The default on all layers is ColorPaletteExtension, which provides a default colors prop, so all that is necessary for controlling rendering is contrastLimits. If you wish to do something different, for example to use a "colormap" like viridis, pass in extensions: [new AdditiveColormapExtension()] and colormap: 'viridis'. Please see deck.gl's documentation for more information.
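As a concrete sketch, the colormap configuration described above might look like this (the contrast-limit values are placeholders):

```javascript
import { AdditiveColormapExtension } from '@hms-dbmi/viv';

// Props to spread onto a Viv viewer or layer to render with a colormap
// instead of the default color palette.
const colormapProps = {
  extensions: [new AdditiveColormapExtension()],
  colormap: 'viridis',
  contrastLimits: [[0, 65535]] // one [begin, end] pair per channel
};
```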

Loader (Pixel Sources)

Viv wraps both TIFF- and Zarr-based data sources in a unified PixelSource interface. A pixel source can be thought of as a multi-dimensional "stack" of image data with labeled dimensions (usually ["t", "c", "z", "y", "x"]). A multiscale image is represented as a list of pixel sources decreasing in shape. Viv provides several helper functions to initialize a loader via URL: loadOmeTiff, loadBioformatsZarr, loadOmeZarr, and loadMultiTiff. Each function returns a Promise for an object of shape { data: PixelSource[], metadata: M }, where M is a JavaScript object containing the format-specific metadata for the image. For OME-TIFF, Bioformats-Zarr, and MultiTiff the metadata is identical (an OME-XML representation); for OME-Zarr, the metadata is that of a multiscale group (more information: https://ngff.openmicroscopy.org/latest/). This metadata can be useful for creating UI components that describe the data source.
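A brief sketch of loading and inspecting a pixel source (the URL assumes the tutorial's local server; labels/shape output depends on the image):

```javascript
import { loadOmeTiff } from '@hms-dbmi/viv';

const { data, metadata } = await loadOmeTiff(
  'http://localhost:8000/LuCa-7color_Scan1.ome.tif'
);
// data is PixelSource[], highest resolution first.
const base = data[0];
console.log(base.labels); // typically ['t', 'c', 'z', 'y', 'x']
console.log(base.shape);
// Fetch the full raster for one plane of the stack.
const { data: pixels, width, height } = await base.getRaster({
  selection: { t: 0, c: 0, z: 0 }
});
```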

Custom Shaders

2D

Viv's shaders in 2D can be modified via deck.gl shader hooks. The LensExtension, AdditiveColormapExtension, and ColorPaletteExtension (default) in Viv implement shader hooks. Implementing your own shader hook requires extending the standard layer extension with a getShaders method that returns an inject function for one of the following supported hooks:

DECKGL_PROCESS_INTENSITY(inout float intensity, vec2 contrastLimits, int channelIndex)

This hook allows for custom processing of raw pixel intensities. For example, a non-linear (or alternative) transformation function may be provided to override the default linear ramp function with two contrast limit endpoints. This hook is available on all layers in all modes. By default, the layer provides a reasonable function for this.
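A sketch of injecting a custom intensity transform via this hook. The extension class and the gamma value here are hypothetical; the hook name and the deck.gl getShaders/inject mechanism are the documented pieces:

```javascript
import { LayerExtension } from '@deck.gl/core';

// Hypothetical extension replacing the default linear ramp with a
// gamma-style transform in DECKGL_PROCESS_INTENSITY.
class GammaIntensityExtension extends LayerExtension {
  getShaders() {
    return {
      inject: {
        'fs:DECKGL_PROCESS_INTENSITY': `
          intensity = clamp(
            (intensity - contrastLimits[0]) /
              max(0.0005, (contrastLimits[1] - contrastLimits[0])),
            0.0, 1.0);
          intensity = pow(intensity, 0.5); // assumed gamma of 0.5
        `
      }
    };
  }
}
```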

DECKGL_MUTATE_COLOR(inout vec4 rgba, float intensity0, float intensity1, float intensity2, float intensity3, float intensity4, float intensity5, vec2 vTexCoord)

This hook allows users to mutate the conversion of processed intensities (from DECKGL_PROCESS_INTENSITY) into a color. It is only available in 2D layers. An implementation of this hook is required by all Viv extensions.

DECKGL_FILTER_COLOR(inout vec4 color, FragmentGeometry geometry)

Please see deck.gl's documentation, as this is a standard hook. It may be used, for example, to apply a gamma transformation to the output colors.

3D

Viv's shaders can be modified in 3D similarly to the above, with the exception of DECKGL_MUTATE_COLOR. Instead, at least one provided extension must implement _BEFORE_RENDER, _RENDER, and _AFTER_RENDER. Specifically, one extension must have opts.rendering as an object with at least the _RENDER property, a string containing valid GLSL code. For example, the MaximumIntensityProjectionExtension uses _BEFORE_RENDER to set up an array that will hold the maximum intensities found so far, _RENDER fills that array as maximum intensities are found, and finally _AFTER_RENDER places those intensities in the color or fragColor buffer to be rendered.

Data preparation

This guide demonstrates how to generate a pyramidal OME-TIFF with Bio-Formats that can be viewed with Avivator. Viv also supports OME-NGFF, but tooling to generate the format remains limited as the specification matures. We will update this tutorial accordingly when a method for generating OME-NGFF is endorsed by the Open Microscopy Environment.

Getting Started

NOTE: If you wish to view an image located on a remote machine accessible via SSH, please see the note at the end of this document.

This tutorial requires the Bio-Formats bioformats2raw and raw2ometiff command-line tools. It's easiest to install these tools using conda, but binaries are also available for download from the corresponding GitHub repositories.

Input data

Bio-Formats is an incredibly valuable toolkit and supports reading over 150 file formats. You can choose one of your own images, but for the purposes of this tutorial, we will use a multiplexed, high-resolution Perkin Elmer (1.95 GB) image made available under CC-BY 4.0 on OME.

Pyramid Generation

First use bioformats2raw to convert the .qptiff format to an intermediate "raw" format. This representation includes the multiscale binary pixel data (Zarr) and associated OME-XML metadata.
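A sketch of this step (the tile-size flags are assumptions; any power of two such as 512 works well, and the output directory name is your choice):

```shell
# Convert the Perkin Elmer .qptiff into the intermediate "raw" (Zarr) layout.
bioformats2raw --tile_width 512 --tile_height 512 \
  LuCa-7color_Scan1.qptiff LuCa-7color_Scan1.raw
```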

The next step is to convert this "raw" output to an OME-TIFF.
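For example (the compression choice here is illustrative):

```shell
# Convert the intermediate "raw" output to a pyramidal OME-TIFF.
raw2ometiff LuCa-7color_Scan1.raw LuCa-7color_Scan1.ome.tif --compression=zlib
```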

Note: LZW is the default if you do not specify a --compression option (the syntax requires an "=" sign, like --compression=zlib).

You may also use bfconvert (Bioformats >= 6.0.0) to generate an OME-TIFF.
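A sketch of the bfconvert invocation using the arguments discussed below (the tile size and resolution count here match the LuCa-7color_Scan1 example):

```shell
# Generate a pyramidal OME-TIFF directly with bfconvert.
bfconvert -noflat -bigtiff \
  -tilex 512 -tiley 512 \
  -pyramid-resolutions 6 -pyramid-scale 2 \
  -compression LZW \
  LuCa-7color_Scan1.qptiff LuCa-7color_Scan1.ome.tif
```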

All the above arguments are necessary except for -compression, which is optional (the default is uncompressed). In order for an image to be compatible with Viv:

  • -pyramid-scale must be 2
  • -tilex must equal -tiley (ideally a power of 2)
  • -pyramid-resolutions must be computed from the image dimensions and the tile size. For example, for a 4096 x 4096 image with a tile size of 512, 3 = log2(ceil(4096 / 512)) resolutions should work well. For the LuCa-7color_Scan1.qptiff image, 6 = max(log2(ceil(12480 / 512)), log2(ceil(17280 / 512))) resolutions work best, as the image is 12480 x 17280 in size. There is currently no "auto" feature for inferring the number of pyramid resolutions.

There is a 2GB limit on the total amount of data that the bfconvert CLI tool may read into memory. Therefore, for larger images, please use bioformats2raw + raw2ometiff.

NOTE: Viv currently uses geotiff.js for accessing data from remote TIFFs over HTTP and supports the three lossless compression options offered by raw2ometiff (LZW, zlib, and Uncompressed), as well as JPEG compression for 8-bit data. Support for JPEG-2000 for >8-bit data is planned. Please open an issue if you would like this more immediately.

Indexed OME-TIFF

The TIFF file format is not designed for the cloud, and therefore certain images are less suitable to be natively read remotely. If your OME-TIFF image contains large non-XY dimensions (e.g. Z=100, T=50), you are likely to experience latencies when switching planes in Avivator due to seeking the file over HTTP. We recommend generating an index (offsets.json) that contains the byte-offsets for each plane to complement OME-TIFF, solving this latency issue and enabling fast interactions.
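One way to generate the index is with the generate-tiff-offsets Python package; the tool name and flag below are assumptions, so check the package's README for exact usage:

```shell
pip install generate-tiff-offsets
# Writes LuCa-7color_Scan1.offsets.json next to the input file.
generate-tiff-offsets --input_file LuCa-7color_Scan1.ome.tif
```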

Alternatively you may use our web application for generating the offsets.

⚠️ IMPORTANT ⚠️ Avivator requires the offsets.json file to be adjacent to the OME-TIFF on the server in order to leverage this feature. For example, if an index is generated for the dataset in this tutorial, the following directory structure is correct:

data
├── LuCa-7color_Scan1.offsets.json
└── LuCa-7color_Scan1.ome.tif

This index can be reused by other Viv-based applications and even clients in other languages to improve remote OME-TIFF performance. If using Viv, you must fetch the offsets.json directly in your application code. See our example for help getting started.

Viewing in Avivator

⚠️ Warning ⚠️ This section only works in Chrome, Firefox, and Edge (not Safari) due to differences in how browsers restrict websites hosted at https:// URLs (Avivator) from issuing requests to http:// (the local data server) as a security measure. The supported browsers allow requests to http:// from https:// under the special case of localhost, whereas Safari prevents all requests to http://. As a workaround, you can start an Avivator client at http://, but we suggest trying a different supported browser. Alternatively, you can drag-and-drop an image (no local server) into the viewer in any browser.

There are a few different ways to view your data in Avivator.

If you have an OME-TIFF saved locally, you may simply drag and drop the file over the canvas or use the "Choose file" button to view your data.

Otherwise, Avivator relies on access to data over HTTP, and you can serve data locally using a simple web server. It's easiest to use http-server, which can be installed via npm or via Homebrew on macOS.

Install http-server
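Either of the following should work:

```shell
# via npm
npm install --global http-server
# or via Homebrew on macOS
brew install http-server
```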

Starting a Server

From within this directory, start a local server and open Avivator in your browser.
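For example:

```shell
# Serve the current directory over HTTP on port 8000, with CORS enabled
# so Avivator (a different origin) can fetch the data.
http-server --cors='*' --port 8000 .
```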

This command starts a web-server and makes the content in the current directory readable over HTTP. Once the server is running, open Avivator and paste http://localhost:8000/LuCa-7color_Scan1.ome.tif into the input dialog to view the OME-TIFF generated in this tutorial. For convenience, you can also create a direct link by appending an image_url query parameter:
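For example, a direct link for the tutorial image would look like:

```
http://avivator.gehlenborglab.org/?image_url=http://localhost:8000/LuCa-7color_Scan1.ome.tif
```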

Troubleshooting: Viv relies on cross-origin requests to retrieve data from servers. The --cors='*' flag is important to ensure that the appropriate Access-Control-Allow-Origin response is sent from your local server. In addition, web servers must allow HTTP range requests to support viewing OME-TIFF images. Range requests are allowed by default by http-server but may need to be enabled explicitly for your production web server.

Viewing an Image via SSH

It is possible to generate the datasets in this tutorial on a remote machine and view them in Avivator via SSH port forwarding. For example, you can follow this tutorial on a remote machine over SSH, linking your local port 12345 to the remote machine's local port 8000.
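The port forwarding can be set up like this (user@remote.example.org is a placeholder for your own SSH target):

```shell
# Forward local port 12345 to port 8000 on the remote machine.
ssh -L 12345:localhost:8000 user@remote.example.org
```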

Since your local port 12345 is linked to the remote port 8000 via SSH, you can now view the remote dataset locally via localhost:12345 in Avivator: that is, paste http://avivator.gehlenborglab.org/?image_url=http://localhost:12345/LuCa-7color_Scan1.ome.tif into your browser instead of http://avivator.gehlenborglab.org/?image_url=http://localhost:8000/LuCa-7color_Scan1.ome.tif as written at the end of the tutorial.

Other Examples

Other sample OME-TIFF data can be downloaded from OME-TIFF sample data provided by OME and viewed with Viv locally (without needing to run Bio-Formats).

3D Rendering

Viv has the capability to do volume ray-casting on in-memory volumetric data. It also exposes an API for applying arbitrary affine transformations to the volume when rendering. Napari is another popular tool for doing both of these, but there are some key differences. Viv follows the 3D-graphics convention of having the x-axis run left-right across the screen, the y-axis up and down, and the z-axis into and out of the screen (all following the right-hand rule for standard orientation). Napari orients volumes to respect broadcasting conventions with numpy; that is, its axis order is actually zyx, the reverse of Viv's.

If you have a homogeneous matrix that you are using in Napari, the best way to make it usable in Viv is to go back through the steps by which you got the matrix, reversing your z and x operations. However, if this is not possible, the following steps should produce something that looks identical, as they swap the row and column space (i.e. you want to change both the order in which the matrix interprets its inputs and the order in which it affects its outputs):
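The row-and-column swap described above can be sketched as a conjugation with a permutation matrix: if P reverses (x, y, z) ↔ (z, y, x), then P · M · P reinterprets a zyx-convention matrix for xyz-convention inputs. A minimal, dependency-free sketch for 4x4 homogeneous matrices stored as row-major nested arrays (note that deck.gl/math.gl store matrices column-major, so transpose accordingly):

```javascript
// P swaps axis 0 (x) and axis 2 (z) and is its own inverse, so
// reverseAxes(m) = P * m * P changes both the input and output axis order.
const P = [
  [0, 0, 1, 0],
  [0, 1, 0, 0],
  [1, 0, 0, 0],
  [0, 0, 0, 1]
];

// Plain 4x4 (or compatible) matrix multiplication on nested arrays.
function matmul(a, b) {
  return a.map((row, i) =>
    b[0].map((_, j) => row.reduce((sum, _, k) => sum + a[i][k] * b[k][j], 0))
  );
}

function reverseAxes(m) {
  return matmul(P, matmul(m, P));
}
```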

If you would like more information, please visit our github page.

Viewers (React Components)

PictureInPictureViewer

This component provides an overview-detail VivViewer of an image (i.e. picture-in-picture).

PictureInPictureViewer
Parameters
props (Object)
Name Description
props.contrastLimits Array List of [begin, end] values to control each channel's ramp function.
props.colors Array List of [r, g, b] values for each channel.
props.channelsVisible Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.selections Array Selection to be used for fetching data.
props.overview Object Allows you to pass settings into the OverviewView: { scale, margin, position, minimumWidth, maximumWidth, boundingBoxColor, boundingBoxOutlineWidth, viewportOutlineColor, viewportOutlineWidth}. See http://viv.gehlenborglab.org/#overviewview for defaults.
props.overviewOn Boolean Whether or not to show the OverviewView.
props.viewStates Array? Array of objects like [{ target: [x, y, 0], zoom: -zoom, id: DETAIL_VIEW_ID }] for setting where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.extensions Array? deck.gl extensions to add to the layers.
props.clickCenter Boolean? Click to center the default view. Default is true.
props.lensEnabled boolean? Whether or not to use the lens (default false). Must be used with the LensExtension in the extensions prop.
props.lensSelection number? Numeric index of the channel to be focused on by the lens (default 0). Must be used with the LensExtension in the extensions prop.
props.lensRadius number? Pixel radius of the lens (default: 100). Must be used with the LensExtension in the extensions prop.
props.lensBorderColor Array? RGB color of the border of the lens (default [255, 255, 255]). Must be used with the LensExtension in the extensions prop.
props.lensBorderRadius number? Percentage of the radius of the lens for a border (default 0.02). Must be used with the LensExtension in the extensions prop.
props.transparentColor Array? An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated in the shader. Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
props.snapScaleBar boolean? If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary. By default, false.
props.onViewportLoad function? Function that gets called when the data in the viewport loads.
props.deckProps Object? Additional options used when creating the DeckGL component. See the deck.gl docs. layerFilter, layers, onViewStateChange, views, viewState, useDevicePixels, and getCursor are already set.

SideBySideViewer

This component provides a side-by-side VivViewer with linked zoom/pan.

SideBySideViewer
Parameters
props (Object)
Name Description
props.contrastLimits Array List of [begin, end] values to control each channel's ramp function.
props.colors Array List of [r, g, b] values for each channel.
props.channelsVisible Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.selections Array Selection to be used for fetching data.
props.zoomLock Boolean Whether or not lock the zooms of the two views.
props.panLock Boolean Whether or not lock the pans of the two views.
props.viewStates Array? List of objects like [{ target: [x, y, 0], zoom: -zoom, id: 'left' }, { target: [x, y, 0], zoom: -zoom, id: 'right' }] for initializing where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.extensions Array? deck.gl extensions to add to the layers.
props.lensEnabled boolean? Whether or not to use the lens (default false).
props.lensSelection number? Numeric index of the channel to be focused on by the lens (default 0).
props.lensBorderColor Array? RGB color of the border of the lens (default [255, 255, 255]).
props.lensBorderRadius number? Percentage of the radius of the lens for a border (default 0.02).
props.lensRadius number? Pixel radius of the lens (default: 100).
props.transparentColor Array? An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated in the shader. Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
props.snapScaleBar boolean? If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary. By default, false.
props.deckProps Object? Additional options used when creating the DeckGL component. See the deck.gl docs. layerFilter, layers, onViewStateChange, views, viewState, useDevicePixels, and getCursor are already set.

VolumeViewer

This component provides a volumetric viewer with volume ray-casting.

VolumeViewer
Parameters
props (Object)
Name Description
props.contrastLimits Array List of [begin, end] values to control each channel's ramp function.
props.colors Array? List of [r, g, b] values for each channel - necessary if using one of the ColorPalette3DExtensions extensions.
props.channelsVisible Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap - necessary if using one of the AdditiveColormap3DExtensions extensions.
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.selections Array Selection to be used for fetching data
props.resolution Array? Resolution at which you would like to see the volume and load it into memory (0 highest, loader.length - 1 the lowest with default loader.length - 1)
props.modelMatrix Object? A column major affine transformation to be applied to the volume.
props.xSlice Array? 0-1 interval on which to slice the volume.
props.ySlice Array? 0-1 interval on which to slice the volume.
props.zSlice Array? 0-1 interval on which to slice the volume.
props.onViewportLoad function? Function that gets called when the data in the viewport loads.
props.viewStates Array? List of objects like [{ target: [x, y, z], zoom: -zoom, id: '3d' }] for initializing where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.clippingPlanes Array<Object>? List of math.gl Plane objects.
props.useFixedAxis Boolean? Whether or not to fix the axis of the camera (default is true).
extensions (Array?) deck.gl extensions to add to the layers - default is AdditiveBlendExtension from ColorPalette3DExtensions.

VivViewer

This component wraps the DeckGL component.

VivViewer
Parameters
props (Object)
Name Description
props.layerProps Array Props for the layers in each view.
props.randomize boolean? Whether or not to randomize which view goes first (for dynamic rendering of multiple linked views).
props.viewStates Array<object> List of objects like [{ target: [x, y, 0], zoom: -zoom, id: 'left' }, { target: [x, y, 0], zoom: -zoom, id: 'right' }]
props.onViewStateChange ViewStateChange? Callback that returns the deck.gl view state ( https://deck.gl/docs/api-reference/core/deck#onviewstatechange ).
props.onHover Hover? Callback that returns the picking info and the event ( https://deck.gl/docs/api-reference/core/layer#onhover https://deck.gl/docs/developer-guide/interactivity#the-picking-info-object )
props.hoverHooks HoverHooks? Object including utility hooks - an object with key handleValue like { handleValue: (valueArray) => {}, handleCoordinate: (coordinate) => {} } where valueArray has the pixel values for the image under the hover location and coordinate is the coordinate in the image from which the values are picked.
props.deckProps Object? Additional options used when creating the DeckGL component. See the deck.gl docs. layerFilter, layers, onViewStateChange, views, viewState, useDevicePixels, and getCursor are already set.

Views

VivView

This class generates a layer and a view for use in the VivViewer.

new VivView(args: Object)
Parameters
args (Object)
Name Description
args.id string Id for the current view
args.x number? (default 0) X (top-left) location on the screen for the current view
args.y number? (default 0) Y (top-left) location on the screen for the current view
args.height number Height of the view.
args.width number Width of the view.
Instance Members
getDeckGlView()
filterViewState(args)
getLayers(args)

OverviewView

This class generates an OverviewLayer and a view for use in the VivViewer as an overview of a DetailView (they must be used in conjunction). From the base class VivView, only the initialViewState argument is used. This class uses private methods to position its x and y from the additional arguments:

new OverviewView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.id string Id for this VivView
args.loader Object PixelSource[], where each PixelSource is decreasing in shape. If length == 1, not multiscale.
args.detailHeight number Height of the detail view.
args.detailWidth number Width of the detail view.
args.scale number? (default 0.2) Scale of this viewport relative to the detail view.
args.margin number? (default 25) Margin offset from the corner of the other viewport.
args.position string? (default 'bottom-right') Location of the viewport - one of "bottom-right", "top-right", "top-left", "bottom-left".
args.minimumWidth number? (default 150) Absolute lower bound for how small the viewport should scale.
args.maximumWidth number? (default 350) Absolute upper bound for how large the viewport should scale.
args.minimumHeight number? (default 150) Absolute lower bound for how small the viewport should scale.
args.maximumHeight number? (default 350) Absolute upper bound for how large the viewport should scale.
args.clickCenter Boolean? (default true) Click to center the default view.
Instance Members
_setHeightWidthScale($0)
_setXY()
getDeckGlView()
filterViewState($0)
getLayers($0)

DetailView

This class generates a MultiscaleImageLayer and a view for use in the VivViewer as a detailed view. It takes the same arguments for its constructor as its base class VivView plus the following:

new DetailView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.id string id of the View
args.x number? (default 0) X (top-left) location on the screen for the current view
args.y number? (default 0) Y (top-left) location on the screen for the current view
args.height number Height of the view.
args.width number Width of the view.
args.snapScaleBar boolean? (default false) If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary. By default, false.
Instance Members
getLayers($0)
filterViewState($0)

VolumeView

This class generates a VolumeLayer and a view for use in the VivViewer for volumetric rendering.

new VolumeView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.target Array<number> Centered target for the camera (used if useFixedAxis is true)
args.useFixedAxis Boolean Whether or not to fix the axis of the camera.
args.args ...any
Instance Members
getDeckGlView()
filterViewState($0)
getLayers($0)

SideBySideView

This class generates a MultiscaleImageLayer and a view for use in the SideBySideViewer. It is linked with its other views as controlled by linkedIds, zoomLock, and panLock parameters. It takes the same arguments for its constructor as its base class VivView plus the following:

new SideBySideView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.id string id of the View
args.x number? (default 0) X (top-left) location on the screen for the current view
args.y number? (default 0) Y (top-left) location on the screen for the current view
args.height number Height of the view.
args.width number Width of the view.
args.linkedIds Array<String> (default []) Ids of the other views to which this could be locked via zoom/pan.
args.panLock Boolean (default true) Whether or not we lock pan.
args.zoomLock Boolean (default true) Whether or not we lock zoom.
args.viewportOutlineColor Array? (default [255, 255, 255]) Outline color of the border.
args.viewportOutlineWidth number? (default 10) Outline width of the border.
args.snapScaleBar boolean? (default false) If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary. By default, false.
Instance Members
filterViewState($0)
getLayers($0)

Utility Methods

getChannelStats

Computes statistics from pixel data.

This is helpful for generating histograms or scaling contrastLimits to a reasonable range. Also provided are "contrastLimits", slider bounds that should give a good initial image.

getChannelStats(arr: TypedArray): {mean: number, sd: number, q1: number, q3: number, median: number, domain: Array<number>, contrastLimits: Array<number>}
Parameters
arr (TypedArray)
Returns
{mean: number, sd: number, q1: number, q3: number, median: number, domain: Array<number>, contrastLimits: Array<number>}:
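Conceptually, the returned values can be derived in a single pass over the sorted pixel values. The sketch below is illustrative only: the quantile method and the 1%/99% contrast-limit heuristic are assumptions, not Viv's actual implementation.

```javascript
// Hypothetical re-implementation of per-channel statistics, for illustration
// only -- Viv's getChannelStats may use different quantile definitions.
function channelStats(arr) {
  const sorted = Array.from(arr).sort((a, b) => a - b);
  const n = sorted.length;
  const quantile = (q) => sorted[Math.min(n - 1, Math.floor(q * n))];
  const mean = sorted.reduce((s, v) => s + v, 0) / n;
  const sd = Math.sqrt(sorted.reduce((s, v) => s + (v - mean) ** 2, 0) / n);
  return {
    mean,
    sd,
    q1: quantile(0.25),
    median: quantile(0.5),
    q3: quantile(0.75),
    domain: [sorted[0], sorted[n - 1]],
    // Slider bounds that usually give a reasonable first rendering.
    contrastLimits: [quantile(0.01), quantile(0.99)],
  };
}

channelStats(new Uint16Array([0, 10, 20, 30, 40, 50, 60, 70])).median; // 40
```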

getDefaultInitialViewState

Create an initial view state that centers the image in the viewport at the zoom level that fills the dimensions in viewSize.

getDefaultInitialViewState(loader: Object, viewSize: Object, zoomBackOff: Object?, use3d: Boolean?, modelMatrix: Boolean?): Object
Parameters
loader (Object) (PixelSource[] | PixelSource)
viewSize (Object) { height, width } object giving dimensions of the viewport for deducing the right zoom level to center the image.
zoomBackOff (Object? = 0) A positive number which controls how far zoomed out the view state is from filling the entire viewport (default is 0 so the image fully fills the view). SideBySideViewer and PictureInPictureViewer use .5 when setting viewState automatically in their default behavior, so the viewport is slightly zoomed out from the image filling the whole screen. 1 unit of zoomBackOff (so a passed-in value of 1) corresponds to a 2x zooming out.
use3d (Boolean? = false) Whether or not to return a view state that can be used with the 3d viewer
modelMatrix (Boolean?) If using a transformation matrix, passing it in here will allow this function to properly center the volume.
Returns
Object: A default initial view state that centers the image within the view: { target: [x, y, 0], zoom: -zoom }.
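The core idea can be sketched as follows: deck.gl's zoom is the base-2 logarithm of the scale factor, so fitting the image means taking -log2 of the image-to-viewport size ratio. The fitViewState helper below is a hypothetical simplification that ignores use3d and modelMatrix.

```javascript
// Sketch of deriving a view state that fits an image in a viewport.
// Assumed inputs; Viv's actual implementation also handles 3D and modelMatrix.
function fitViewState({ width, height }, viewSize, zoomBackOff = 0) {
  // deck.gl zoom is log2 of the scale factor; negative zoom = zoomed out.
  const scale = Math.max(width / viewSize.width, height / viewSize.height);
  const zoom = -Math.log2(scale) - zoomBackOff; // 1 unit of back-off = 2x out
  return { target: [width / 2, height / 2, 0], zoom };
}

// A 4096x2048 image in a 1024x1024 viewport: scale = 4, so zoom = -2.
fitViewState({ width: 4096, height: 2048 }, { width: 1024, height: 1024 });
```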

Layers

The documentation in this section shows each layer as an object with properties, although this is not strictly accurate: these are deck.gl layer classes, and the listed properties are the props passed to each class's constructor. For more information, see deck.gl's documentation on how to use layers or the layer class. We welcome contributions to improve the docs, whether small fixes or a new docs site that would allow us to present these classes properly. Thanks!

MultiscaleImageLayer

MultiscaleImageLayer

Type: object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : Image pyramid: PixelSource[], where each successive PixelSource decreases in shape.
selections (Array) : Selection to be used for fetching data.
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something different from 65535 for uint16/'<u2').
viewportId (string?) : Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
id (String?) : Unique identifier for this layer.
onTileError (function?) : Custom override for handling tile-fetching errors.
onHover (function?) : Hook function from deck.gl to handle hover objects.
maxRequests (number?) : Maximum parallel ongoing requests allowed before aborting.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
refinementStrategy (string?) : 'best-available' | 'no-overlap' | 'never' will be passed to TileLayer. A default will be chosen based on opacity.
excludeBackground (boolean?) : Whether to exclude the background image. The background image is also excluded when opacity != 1.
extensions (Array?) : deck.gl extensions to add to the layers.
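The contrastLimits prop controls a per-channel linear ramp: intensities at or below begin map to 0, and at or above end map to 1. A CPU-side sketch of this mapping follows; the real work happens on the GPU in Viv's shaders.

```javascript
// Illustrative only: the actual ramp is applied in Viv's fragment shaders.
function applyContrastLimits(value, [begin, end]) {
  // Linear ramp from begin -> 0 to end -> 1, clamped to [0, 1].
  return Math.min(1, Math.max(0, (value - begin) / (end - begin)));
}

applyContrastLimits(32768, [0, 65535]); // roughly 0.5 for a mid-range uint16
```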

ImageLayer

ImageLayer

Type: Object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Object) : PixelSource. Represents an N-dimensional image.
selections (Array) : Selection to be used for fetching data.
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something different from 65535 for uint16/'<u2').
viewportId (string?) : Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
onHover (function?) : Hook function from deck.gl to handle hover objects.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
onViewportLoad (function?) : Function that gets called when the data in the viewport loads.
id (String?) : Unique identifier for this layer.
extensions (Array?) : deck.gl extensions to add to the layers.

XRLayer

XRLayer

Type: object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
dtype (string) : Dtype for the layer.
domain (Array<number>?) : Override for the possible max/min values (i.e. something different from 65535 for uint16/'<u2').
id (String?) : Unique identifier for this layer.
onHover (function?) : Hook function from deck.gl to handle hover objects.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
interpolation (("nearest" | "linear")?) : The minFilter and magFilter for luma.gl rendering (see https://luma.gl/docs/api-reference/core/resources/sampler#texture-magnification-filter). Default: 'nearest'.

VolumeLayer

VolumeLayer

Type: Object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : PixelSource[]. Represents an N-dimensional image.
selections (Array) : Selection to be used for fetching data.
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something different from 65535 for uint16/'<u2').
resolution (number?) : Resolution at which to load and render the volume (0 is the highest resolution, loader.length - 1 the lowest; default: 0).
modelMatrix (Object?) : A column major affine transformation to be applied to the volume.
xSlice (Array<number>?) : 0-width (physical coordinates) interval on which to slice the volume.
ySlice (Array<number>?) : 0-height (physical coordinates) interval on which to slice the volume.
zSlice (Array<number>?) : 0-depth (physical coordinates) interval on which to slice the volume.
onViewportLoad (function?) : Function that gets called when the data in the viewport loads.
clippingPlanes (Array<Object>?) : List of math.gl Plane objects.
useProgressIndicator (boolean?) : Whether or not to use the default progress text + indicator (default is true)
onUpdate (function?) : A callback to be used for getting updates of the progress, ({ progress }) => {}
extensions (Array?) : deck.gl extensions to add to the layers - default is AdditiveBlendExtension from ColorPalette3DExtensions.

XR3DLayer

XR3DLayer

Type: Object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
dtype (string) : Dtype for the layer.
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something different from 65535 for uint16/'<u2').
modelMatrix (Object?) : A column major affine transformation to be applied to the volume.
xSlice (Array<number>?) : 0-width (physical coordinates) interval on which to slice the volume.
ySlice (Array<number>?) : 0-height (physical coordinates) interval on which to slice the volume.
zSlice (Array<number>?) : 0-depth (physical coordinates) interval on which to slice the volume.
clippingPlanes (Array<Object>?) : List of math.gl Plane objects.
resolutionMatrix (Object?) : Matrix for scaling the volume based on the (downsampled) resolution being displayed.
extensions (Array?) : deck.gl extensions to add to the layers - default is AdditiveBlendExtension from ColorPalette3DExtensions.

BitmapLayer

BitmapLayer

Type: object

Properties
opacity (number?) : Opacity of the layer.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
photometricInterpretation (number?) : One of WhiteIsZero, BlackIsZero, YCbCr, or RGB (default).
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated on the shader. Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
id (String?) : Unique identifier for this layer.
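The transparentColor test above can be pictured with a CPU-side analogue; the real comparison happens in the fragment shader, so this is illustrative only.

```javascript
// Illustrative analogue of the shader's transparentColor test: a fragment whose
// color matches transparentColor gets alpha 0, everything else keeps opacity.
function resolveAlpha(rgb, opacity, transparentColor) {
  const isTransparent =
    Array.isArray(transparentColor) &&
    rgb.every((v, i) => v === transparentColor[i]);
  return isTransparent ? 0 : opacity;
}

resolveAlpha([0, 0, 0], 0.8, [0, 0, 0]); // the black background is knocked out
```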

OverviewLayer

OverviewLayer

Type: Object

Properties
contrastLimits (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
channelsVisible (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : PixelSource[]. Assumes multiscale if loader.length > 1.
selections (Array) : Selection to be used for fetching data.
boundingBoxColor (Array<number>?) : [r, g, b] color of the bounding box (default: [255, 0, 0]).
boundingBoxOutlineWidth (number?) : Width of the bounding box in px (default: 1).
viewportOutlineColor (Array<number>?) : [r, g, b] color of the outline (default: [255, 190, 0]).
viewportOutlineWidth (number?) : Viewport outline width in px (default: 2).
id (String?) : Unique identifier for this layer.
extensions (Array?) : deck.gl extensions to add to the layers.

ScaleBarLayer

ScaleBarLayer

Type: Object

Properties
unit (String) : Physical unit size per pixel at full resolution.
size (Number) : Physical size of a pixel.
viewState (Object) : The current viewState for the desired view. We cannot internally use this.context.viewport because it is one frame behind: https://github.com/visgl/deck.gl/issues/4504
boundingBox (Array?) : Bounding box of the view in which this should render.
id (string?) : Id from the parent layer.
length (number?) : Value from 0 to 1 representing the portion of the view to be used for the length part of the scale bar.
snap (boolean) : If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary.
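The snap behavior can be thought of as rounding the raw bar length to a "nice" value. The sketch below uses a hypothetical 1-2-5 series; Viv's actual intervals and unit adjustment may differ.

```javascript
// Hypothetical snapping of a raw scale-bar length to a 1-2-5 series.
// The exact intervals and unit handling in Viv may differ.
function snapValue(raw) {
  const exp = Math.floor(Math.log10(raw));
  const mantissa = raw / 10 ** exp; // in [1, 10)
  // Round the mantissa to the nearest of 1, 2, 5, or 10.
  const nice = mantissa < 1.5 ? 1 : mantissa < 3.5 ? 2 : mantissa < 7.5 ? 5 : 10;
  return nice * 10 ** exp;
}

snapValue(347); // a raw 347-unit bar would be drawn as a 200-unit bar
```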

Extensions

LensExtension

This deck.gl extension provides a lens that shows the selected channel in its chosen color and the other channels in white.

LensExtension

Type: Object

Properties
lensEnabled (boolean?) : Whether or not to use the lens.
lensSelection (number?) : Numeric index of the channel to be focused on by the lens.
lensRadius (number?) : Pixel radius of the lens (default: 100).
lensBorderColor (Array<number>?) : RGB color of the border of the lens (default [255, 255, 255]).
lensBorderRadius (number?) : Percentage of the radius of the lens for a border (default 0.02).
colors (Array<Array<number>>?) : Color palette to pseudo-color channels as.

AdditiveColormapExtension

This deck.gl extension allows for an additive colormap like viridis or jet to be used for pseudo-coloring channels.

AdditiveColormapExtension

Type: object

Properties
opacity (number?) : Opacity of the layer.
colormap (string?) : String indicating a colormap (default: 'viridis'). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
useTransparentColor (boolean?) : Indicates whether the shader should make the output of colormap_function(0) transparent.

ColorPaletteExtension

This deck.gl extension allows for a color palette to be used for pseudo-coloring channels.

ColorPaletteExtension

Type: object

Properties
colors (Array<Array<number>>?) : Array of colors to map channels to (RGB).
opacity (number?) : Opacity of the layer.
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0.
useTransparentColor (Boolean?) : Whether or not to use the value provided to transparentColor.

ColorPalette3DExtensions

This object contains the BaseExtension, which can be extended for other color palette-style rendering, as well as implementations of three ray-casting algorithms as extensions.

ColorPalette3DExtensions

Type: object

Properties
BaseExtension (object)
AdditiveBlendExtension (object)
MaximumIntensityProjectionExtension (object)
MinimumIntensityProjectionExtension (object)
Static Members
BaseExtension
AdditiveBlendExtension
MaximumIntensityProjectionExtension
MinimumIntensityProjectionExtension

AdditiveColormap3DExtensions

This object contains the BaseExtension, which can be extended for other additive colormap-style (e.g. viridis, jet) rendering, as well as implementations of three ray-casting algorithms as extensions.

AdditiveColormap3DExtensions

Type: object

Properties
BaseExtension (object)
AdditiveBlendExtension (object)
MaximumIntensityProjectionExtension (object)
MinimumIntensityProjectionExtension (object)
Static Members
BaseExtension
AdditiveBlendExtension
MaximumIntensityProjectionExtension
MinimumIntensityProjectionExtension

Loaders

loadOmeTiff

Opens an OME-TIFF via URL and returns the data source and associated metadata for the first image, or for all images, in the file.

loadOmeTiff(source: (string | File), opts: Object): (Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}> | Array<Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>>)
Parameters
source ((string | File)) url or File object. If the url is prefixed with file://, Viv will attempt to load it with GeoTIFF's fromFile, which requires access to Node's fs module.
opts (Object = {})
Name Description
opts.headers Headers? Headers passed to each underlying fetch request.
opts.offsets Array<number>? Indexed-Tiff IFD offsets.
opts.pool GeoTIFF.Pool? A geotiff.js Pool for decoding image chunks.
opts.images ("first" | "all") (default 'first') Whether to return 'all' images or only the 'first' image in the OME-TIFF. If 'all', an array of promises (Promise<{ data: TiffPixelSource[], metadata: ImageMeta }>[]) is returned.
Returns
(Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}> | Array<Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>>): data source and associated OME-XML metadata.

loadOmeZarr

Opens root of multiscale OME-Zarr via URL.

loadOmeZarr(source: string, options: {fetchOptions: (undefined | RequestInit)}): Promise<{data: Array<ZarrPixelSource>, metadata: RootAttrs}>
Parameters
source (string) url
options ({fetchOptions: (undefined | RequestInit)} = {})
Returns
Promise<{data: Array<ZarrPixelSource>, metadata: RootAttrs}>: data source and associated OME-Zarr metadata.

loadBioformatsZarr

Opens a root directory generated via bioformats2raw --file_type=zarr. Uses OME-XML metadata and assumes the first image. This function is the Zarr equivalent of loadOmeTiff.

loadBioformatsZarr(source: string, options: {fetchOptions: (undefined | RequestInit)}): Promise<{data: Array<ZarrPixelSource>, metadata: ImageMeta}>
Parameters
source (string) url
options ({fetchOptions: (undefined | RequestInit)} = {})
Returns
Promise<{data: Array<ZarrPixelSource>, metadata: ImageMeta}>: data source and associated OME-XML metadata.

loadMultiTiff

Opens multiple TIFFs as a multidimensional "stack" of 2D planes. Also supports loading multiple slices of a stack from a single stacked TIFF. Returns the data source and OME-TIFF-like metadata.

loadMultiTiff(sources: any, opts: Object): Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>
Parameters
sources (any)
opts (Object = {})
Name Description
opts.pool GeoTIFF.Pool? A geotiff.js Pool for decoding image chunks.
opts.name string (default 'MultiTiff') a name for the "virtual" image stack.
opts.headers Headers? Headers passed to each underlying fetch request.
Returns
Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>: data source and associated metadata.
Example
const { data, metadata } = await loadMultiTiff([
  [{ c: 0, t: 0, z: 0 }, 'https://example.com/channel_0.tif'],
  [{ c: 1, t: 0, z: 0 }, 'https://example.com/channel_1.tif'],
  [[{ c: 2, t: 0, z: 0 }, { c: 3, t: 0, z: 0 }], 'https://example.com/channels_2-3.tif'],
]);

await data[0].getRaster({ selection: { c: 0, t: 0, z: 0 } });
// { data: Uint16Array[...], width: 500, height: 500 }

Misc

MAX_CHANNELS

MAX_CHANNELS

Type: number

DTYPE_VALUES

DTYPE_VALUES
Deprecated: We plan to remove DTYPE_VALUES as a part of Viv's public API as it leaks internal implementation details. If this is something your project relies on, please open an issue for further discussion.

More info can be found here: https://github.com/hms-dbmi/viv/pull/372#discussion_r571707517

Static Members
Uint8
Uint16
Uint32
Float32
Int8
Int16
Int32
Float64

COLORMAPS

COLORMAPS

RENDERING_MODES

RENDERING_MODES

isInterleaved

isInterleaved(shape: any)
Parameters
shape (any)
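The docs don't describe this helper, but a plausible reading is that a shape counts as "interleaved" when it ends in an RGB(A) channel dimension. The sketch below encodes that assumption (the name isInterleavedSketch is hypothetical, not Viv's export):

```javascript
// Assumed semantics, for illustration: interleaved pixel data has a trailing
// channel dimension of 3 (RGB) or 4 (RGBA), e.g. [t, y, x, 4].
function isInterleavedSketch(shape) {
  const last = shape[shape.length - 1];
  return last === 3 || last === 4;
}

isInterleavedSketch([1, 512, 512, 4]); // true: RGBA plane
isInterleavedSketch([1, 4, 512, 512]); // false: channel-first stack
```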

getImageSize

getImageSize(source: any)
Parameters
source (any)

SIGNAL_ABORTED

SIGNAL_ABORTED

Type: string

TiffPixelSource

new TiffPixelSource(indexer: any, dtype: any, tileSize: any, shape: any, labels: any, meta: any, pool: any)
Parameters
indexer (any)
dtype (any)
tileSize (any)
shape (any)
labels (any)
meta (any)
pool (any)
Instance Members
getRaster($0)
getTile($0)
_readRasters(image, props)
_getTileExtent(x, y)
onTileError(err)

ZarrPixelSource

new ZarrPixelSource(data: any, labels: any, tileSize: any)
Parameters
data (any)
labels (any)
tileSize (any)
Instance Members
shape
dtype
_xIndex
_chunkIndex(selection, $1)
_getSlices(x, y)
_getRaw(selection, getOptions)
getRaster($0)
getTile(props)
onTileError(err)

OVERVIEW_VIEW_ID

OVERVIEW_VIEW_ID

Type: string

DETAIL_VIEW_ID

DETAIL_VIEW_ID

Type: string