Introduction

Viv

A library for multiscale visualization of high-resolution multiplexed bioimaging data on the web. Directly renders Bio-Formats-compatible Zarr and OME-TIFF. Written in JavaScript and built on Deck.gl with modern web technologies.


About

Viv is a JavaScript library providing utilities for rendering primary imaging data. Viv supports WebGL-based multi-channel rendering of both pyramidal and non-pyramidal images. The rendering components of Viv are provided as Deck.gl layers, making it easy to compose images with existing layers and efficiently update rendering properties within a reactive paradigm.

More details can be found in our preprint describing the Viv library and related work. Please cite this preprint in your research:

Trevor Manz, Ilan Gold, Nathan Heath Patterson, Chuck McCallum, Mark S Keller, Bruce W Herr II, Katy Börner, Jeffrey M Spraggins, Nils Gehlenborg, "Viv: Multiscale Visualization of High-resolution Multiplexed Bioimaging Data on the Web." OSF Preprints (2020), doi:10.31219/osf.io/wd2gu

Related Software

  • Avivator Included in this repository is Avivator, a lightweight viewer for remote imaging data. Avivator is a purely client-side program that only requires access to Bio-Formats-compatible Zarr or OME-TIFF data over HTTP or on local disk.
  • Vizarr Vizarr is a minimal, purely client-side program for viewing Zarr-based images built with Viv. It exposes a Python API using the imjoy-rpc and can be directly embedded in Jupyter Notebooks or Google Colab Notebooks.
  • Viv benchmark A set of scripts to benchmark Viv's retrieval of image tiles from pyramidal OME-TIFF files and Zarr stores via HTTP/1 and HTTP/2.

Tools Built with Viv

  • Vitessce
  • OME-NGFF
  • ImJoy
  • Galaxy
  • HuBMAP CCF EUI

Supported Data Formats

Viv supports a subset of formats that can be generated with the bioformats2raw + raw2ometiff pipeline:

  • OME-TIFF files (pyramidal)
  • Bioformats-compatible Zarr stores (pyramidal)

For OME-TIFF, Viv supports any pyramid that implements the OME design spec for TIFF pyramids (which the bioformats2raw + raw2ometiff pipeline provides). Non-pyramidal images are also supported, provided the individual texture can be uploaded to the GPU (< 4096 x 4096 in pixel size).
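A quick check for the non-pyramidal limit can be sketched as follows (a sketch only; 4096 is the limit cited above, while actual hardware may allow more, queryable on a WebGL context via gl.getParameter(gl.MAX_TEXTURE_SIZE)):

```javascript
// Check whether a non-pyramidal image can be uploaded as a single GPU texture.
// `fitsInSingleTexture` is an illustrative helper, not part of Viv's API.
function fitsInSingleTexture(width, height, maxTextureSize = 4096) {
  return width <= maxTextureSize && height <= maxTextureSize;
}

console.log(fitsInSingleTexture(2048, 2048));   // true
console.log(fitsInSingleTexture(12480, 17280)); // false
```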

Please see the tutorial for more information on these formats.

Data Preparation

Initial load time for OME-TIFFs can be optimized by generating a special offsets.json file containing byte offsets for the associated binary data. For more information, see the documentation.

Installation

$ npm install @hms-dbmi/viv

You will also need to install deck.gl (and its companion luma.gl) dependencies as they are listed as peerDependencies here. This is in order to prevent users from installing multiple versions of deck.gl.

$ npm install deck.gl @luma.gl/core

Development

$ git clone https://github.com/hms-dbmi/viv.git
$ cd viv && npm install # install deps for viv library
$ npm run install:avivator # install deps for avivator app
$ npm start # Starts rollup build (for Viv) & dev server for Avivator

Please install the Prettier plug-in for your preferred editor. Badly formatted code will fail on Travis.

To run unit and integration tests locally, use npm test. For the full production test (including linting and formatting checks), use npm run test:prod.

Build

  • @hms-dbmi/viv library: npm run build
  • Avivator viewer: npm run build:avivator

Publish

To bump the version number, clean up/update the CHANGELOG.md, and push the tag to GitHub, please run npm version [major | minor | patch] depending on which you want. Then run ./publish.sh to publish the package/demo.

Browser Support

We support both WebGL1 and WebGL2 contexts, which provides near-universal coverage across Safari, Firefox, Chrome, and Edge. Please file an issue if you find a browser in which Viv does not work.

Documentation

Please navigate to viv.gehlenborglab.org to see the full documentation.

Getting Started

Start a simple web server to make your data available to the browser via HTTP.

$ http-server --cors='*' --port 8000 path/to/data

If following the Data Preparation tutorial, this server exposes two URLs that Viv recognizes:

  • http://localhost:8000/LuCa-7color_Scan1.ome.tif (OME-TIFF)
  • http://localhost:8000/LuCa-7color_Scan1/ (Bioformats-generated Zarr)
import React, { useState, useEffect } from 'react';

import {
  getChannelStats,
  loadOmeTiff,
  loadBioformatsZarr,
  PictureInPictureViewer,
} from '@hms-dbmi/viv';

const url = 'http://localhost:8000/LuCa-7color_Scan1.ome.tif'; // OME-TIFF
// const url = 'http://localhost:8000/LuCa-7color_Scan1/';     // Bioformats-Zarr

// Hardcoded rendering properties.
const props = {
  selections: [
    { z: 0, t: 0, c: 0 },
    { z: 0, t: 0, c: 1 },
    { z: 0, t: 0, c: 2 },
  ],
  colors: [
    [0, 0, 255],
    [0, 255, 0],
    [255, 0, 0],
  ],
  sliders: [
    [0, 255],
    [0, 255],
    [0, 255],
  ],
  isOn: [true, true, true],
};

// Simple url handler.
function load(url) {
  if (url.includes('.tif')) {
    return loadOmeTiff(url);
  }
  return loadBioformatsZarr(url);
}

function App() {
  const [loader, setLoader] = useState(null);
  const [autoProps, setAutoProps] = useState(props);

  useEffect(() => {
    load(url).then(setLoader);
  }, []);

  // Viv exposes getChannelStats to produce nice initial settings
  // so that users can have an "in focus" image immediately.
  useEffect(() => {
    if (!loader) return;
    // Use the lowest level of the image pyramid for calculating stats.
    const source = loader.data[loader.data.length - 1];
    Promise.all(
      props.selections.map(async selection => {
        const raster = await source.getRaster({ selection });
        return getChannelStats(raster.data);
      })
    ).then(stats => {
      // stats.map(stat => stat.domain) gives calculated bounds for the
      // sliders that could be used for display purposes.

      // autoSliders are precalculated settings for the sliders that
      // should render a good, "in focus" image initially.
      const sliders = stats.map(stat => stat.autoSliders);
      setAutoProps({ ...props, sliders });
    });
  }, [loader]);

  if (!loader) return null;
  return (
    <PictureInPictureViewer
      loader={loader.data}
      sliderValues={autoProps.sliders}
      colorValues={autoProps.colors}
      channelIsOn={autoProps.isOn}
      loaderSelection={autoProps.selections}
      height={1080}
      width={1920}
    />
  );
}

If you wish to use the SideBySideViewer, simply replace PictureInPictureViewer with SideBySideViewer and add props for zoomLock and panLock while removing overview and overviewOn.
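For instance, the render call in the Getting Started example would become something like the following (a sketch; the lock values shown are illustrative):

```javascript
import { SideBySideViewer } from '@hms-dbmi/viv';

// Same data and rendering props as before, now with linked zoom/pan.
<SideBySideViewer
  loader={loader.data}
  sliderValues={autoProps.sliders}
  colorValues={autoProps.colors}
  channelIsOn={autoProps.isOn}
  loaderSelection={autoProps.selections}
  zoomLock={true}
  panLock={true}
  height={1080}
  width={1920}
/>
```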

API Structure

Viewer

Viv provides four high-level React components called "Viewers": PictureInPictureViewer, SideBySideViewer, VolumeViewer and VivViewer. The first three wrap the VivViewer and use the View API for composing different layouts. VivViewer wraps a DeckGL component and handles managing the complex state (i.e. multiple "views" in the same scene) as well as resizing and forwarding rendering props down to the underlying layers via the View API.

View

A "View" in Viv is a wrapper around a deck.gl View that exposes an API to manage where an image is rendered in the coordinate space. A View must inherit from a VivView and implement:

  1. a filter for updating ViewState
  2. a method for instantiating a View
  3. a method for rendering Layers

Views are used by Viv's Viewer components. For example, the OverviewView is used in the PictureInPictureViewer to provide a constant overview of the high-resolution image. The SideBySideView supports locked and unlocked zoom/pan interactions within the SideBySideViewer.

Layer

Viv implements several deck.gl Layers for rendering RGB and multi-channel imaging data. These layers can be composed like any other layer in the deck.gl ecosystem. The main three layers for use are MultiscaleImageLayer (for tiled, pyramidal images), ImageLayer (for non-pyramidal and non-tiled images), and VolumeLayer (for volumetric ray-casting rendering), which accept PixelSource arguments for data fetching. These layers handle the complexity of data fetching and setting up the rendering by wrapping the XRLayer, XR3DLayer and BitmapLayer, which are the lower-level rendering layers.
The XRLayer (eXtended Range Layer) and XR3DLayer enable multi-channel additive blending of Uint32, Uint16, Uint8 and Float32 data on the GPU.

Loader (Pixel Sources)

Viv wraps both Tiff- and Zarr-based data sources in a unified PixelSource interface. A pixel source can be thought of as a multi-dimensional "stack" of image data with labeled dimensions (usually ["t", "c", "z", "y", "x"]). A multiscale image is represented as a list of pixel sources decreasing in shape. Viv provides several helper functions to initialize a loader via URL: loadOmeTiff, loadBioformatsZarr, and loadOmeZarr. Each function returns a Promise for an object of shape { data: PixelSource[], metadata: M }, where M is a JavaScript object containing the format-specific metadata for the image. For OME-TIFF and Bioformats-Zarr, the metadata is identical (OME-XML representation); for OME-Zarr, the metadata is that of a multiscale group (more information: https://ngff.openmicroscopy.org/latest/). This metadata can be useful for creating UI components that describe the data source.
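For example, a loader can be initialized and inspected like so (a sketch; the URL is a placeholder and must point at real data served over HTTP):

```javascript
import { loadOmeTiff } from '@hms-dbmi/viv';

const url = 'http://localhost:8000/LuCa-7color_Scan1.ome.tif'; // placeholder

const { data, metadata } = await loadOmeTiff(url);
// `data` is a PixelSource[], highest resolution first.
const base = data[0];
console.log(base.labels); // labeled dimensions, e.g. ['t', 'c', 'z', 'y', 'x']
console.log(base.shape);  // size of each labeled dimension
// `metadata` is the parsed OME-XML, useful for UI components.
console.log(metadata);
```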

Data preparation

Viv supports a subset of the files generated from the bioformats2raw + raw2ometiff pipeline, described here. This guide demonstrates how to generate a pyramidal Zarr or OME-TIFF with Bio-Formats that can be viewed with Avivator via HTTP.

Getting Started

This tutorial requires the Bio-Formats bioformats2raw and raw2ometiff command-line tools. It's easiest to install these tools using conda, but binaries are also available for download on the corresponding GitHub repositories.

$ conda create --name bioformats python=3.8
$ conda activate bioformats
$ conda install -c ome bioformats2raw raw2ometiff

Input data

Bio-Formats is an incredibly valuable toolkit and supports reading over 150 file formats. You can choose one of your own images, but for the purposes of this tutorial, we will use a multiplexed, high-resolution Perkin Elmer (1.95 GB) image made available under CC-BY 4.0 on OME.

$ wget https://downloads.openmicroscopy.org/images/Vectra-QPTIFF/perkinelmer/PKI_scans/LuCa-7color_Scan1.qptiff

After the image has finished downloading, there are two options for creating an Avivator/Viv-compliant image.

Pyramid Generation

Option 1: Create a Bio-Formats "raw" Zarr

The first option is to use bioformats2raw with --file_type=zarr. The default "raw" file type is currently n5, so the flag is required to generate the Zarr-based output. This command will create the OME-XML metadata along with a pyramidal Zarr for high-resolution images.

$ bioformats2raw LuCa-7color_Scan1.qptiff LuCa-7color_Scan1/ --file_type=zarr

bioformats2raw creates the file directory LuCa-7color_Scan1/ which contains the "raw" bioformats output. The root directory contains a METADATA.ome.xml file along with a data.zarr/ directory containing the Zarr output. This output can be viewed directly with Avivator by serving the top-level directory (LuCa-7color_Scan1/) over HTTP (see below).

NOTE: Alternate tile dimensions can be specified with the --tile_width and --tile_height options. In our experience, tile sizes of 512x512 and 1024x1024 (default) work well. Viv can only handle square tiles. For more information see the docs.

Option 2: Create an OME-TIFF

The second option is to run the complete Bio-Formats pipeline to generate a valid OME-TIFF.

$ bioformats2raw LuCa-7color_Scan1.qptiff n5_tile_directory/
$ raw2ometiff n5_tile_directory/ LuCa-7color_Scan1.ome.tif

Note: LZW is the default if you do not specify a --compression option (the syntax requires an "=" sign, like --compression=zlib).

You may also use bfconvert (Bio-Formats >= 6.0.0) to generate an image pyramid.

$ bfconvert -tilex 512 -tiley 512 -pyramid-resolutions 6 -pyramid-scale 2  -compression LZW LuCa-7color_Scan1.qptiff LuCa-7color_Scan1.ome.tif

All the above arguments are necessary except for -compression, which is optional (default uncompressed). In order for an image to be compatible with Viv:

  • -pyramid-scale must be 2
  • -tilex must equal -tiley (ideally a power of 2)
  • -pyramid-resolutions must be computed from the image dimensions and the tile size. For example, for a 4096 x 4096 image with a tile size of 512, 3 = log2(ceil(4096 / 512)) resolutions should work well. For the LuCa-7color_Scan1.qptiff image, which is 12480 x 17280 in size, 6 = max(log2(ceil(12480 / 512)), log2(ceil(17280 / 512))) resolutions work best. There is currently no "auto" feature for inferring the number of pyramid resolutions.
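The resolution count above can be sketched as a small helper (illustrative only; `pyramidResolutions` is not part of any tool described here):

```javascript
// Number of pyramid levels so that the lowest level spans roughly one tile:
// the ceiling of log2 of how many tiles cover the largest dimension.
function pyramidResolutions(width, height, tileSize) {
  const tilesAcross = Math.ceil(Math.max(width, height) / tileSize);
  return Math.max(1, Math.ceil(Math.log2(tilesAcross)));
}

console.log(pyramidResolutions(4096, 4096, 512));   // 3
console.log(pyramidResolutions(17280, 12480, 512)); // 6
```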

There is a 2GB limit on the total amount of data that may be read into memory by the bfconvert CLI tool. Therefore, for larger images, please use bioformats2raw + raw2ometiff.

NOTE: Viv currently uses geotiff.js for accessing data from remote TIFFs over HTTP and supports the three lossless compression options offered by raw2ometiff - LZW, zlib, and Uncompressed - as well as JPEG compression for 8-bit data. Support for JPEG-2000 for >8-bit data is planned. Please open an issue if you would like this more immediately.

Viewing in Avivator

There are a few different ways to view your data in Avivator.

If you have an OME-TIFF or Bio-Formats "raw" Zarr output saved locally, you may simply drag and drop the file (or directory) over the canvas or use the "Choose file" button to view your data. Note that this action does NOT necessarily load the entire dataset into memory. Viv still works as normal and will retrieve data tiles based on the viewport for an image pyramid and/or a specific channel/z/time selection.

If you followed Option 1 above, you may drag and drop the LuCa-7color_Scan1/ directory created via bioformats2raw into Avivator. If you followed Option 2, simply select the LuCa-7color_Scan1.ome.tif to view in Avivator.

NOTE: Large Zarr-based image pyramids may take a bit longer to load initially using this method. We recommend using a simple web server (see below) if you experience issues with Zarr loading times. Additionally, drag-and-drop for Zarr-based images is currently only supported in Chrome, Firefox, and Microsoft Edge. If using Safari, please use a web server.

Otherwise, Avivator relies on access to data over HTTP, and you can serve data locally using a simple web server. It's easiest to use http-server, which can be installed via npm or with Homebrew on a Mac.

NOTE: If your OME-TIFF image has many TIFF IFDs, which correspond to individual time-z-channel sub-images, please generate an offsets.json file as well for remote HTTP viewing. This file contains the byte offsets to each IFD and allows fast interaction with remote data:

$ pip install generate-tiff-offsets
$ generate_tiff_offsets --input_file my_tiff_file.ome.tiff

For viewing in Avivator, this file should live adjacent to the OME-TIFF file in its folder and will be automatically recognized and used. For use with Viv's loaders/layers, you need to fetch the offsets.json and pass it in as an argument to the loader. Please see this sample for help getting started.
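Passing the offsets to a loader might look like this (a sketch; both URLs are placeholders, and the offsets option name is based on Viv's loader documentation):

```javascript
import { loadOmeTiff } from '@hms-dbmi/viv';

const url = 'http://localhost:8000/my_tiff_file.ome.tiff';            // placeholder
const offsetsUrl = 'http://localhost:8000/my_tiff_file.offsets.json'; // placeholder

// Fetch the precomputed IFD byte offsets and pass them to the loader
// so it can seek directly to each sub-image without scanning the file.
const offsets = await fetch(offsetsUrl).then(res => res.json());
const loader = await loadOmeTiff(url, { offsets });
```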

Install http-server

$ npm install --global http-server
# or
$ brew install http-server

Starting a Server

From within this directory, start a local server and open Avivator in your browser.

$ http-server --cors='*' --port 8000 .

This command starts a web server and makes the content in the current directory readable over HTTP. Once the server is running, open Avivator and paste http://localhost:8000/LuCa-7color_Scan1/ (Zarr) or http://localhost:8000/LuCa-7color_Scan1.ome.tif (OME-TIFF) into the input dialog to view the respective pyramids generated above. For convenience, you can also create a direct link by appending an image_url query parameter.
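For example, with Avivator's public deployment and the local server above:

```
http://avivator.gehlenborglab.org/?image_url=http://localhost:8000/LuCa-7color_Scan1.ome.tif
```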

Troubleshooting: Viv relies on cross-origin requests to retrieve data from servers. The --cors='*' flag is important to ensure that the appropriate Access-Control-Allow-Origin response is sent from your local server. In addition, web servers must allow HTTP range requests to support viewing OME-TIFF images. Range requests are allowed by default by http-server but may need to be enabled explicitly for your production web server.

Final Note on File Formats and OME-Zarr

The Glencoe Software and OME teams have been clear that the "raw" N5/Zarr formats produced by bioformats2raw should, for the time being, be considered experimental intermediates for generating valid OME-TIFFs. Therefore, Option 1 is not as stable as Option 2 for generating images for Avivator/Viv.

However, there is active community development of a next-generation file format (NGFF) called OME-Zarr, which can be produced in part by running bioformats2raw --file_type=zarr --dimension-order='XYZCT'. This will generate a valid multiscale Zarr which is compatible with OME-Zarr but is missing some metadata within the Zarr hierarchy.

Avivator can view the "raw" output as described above, and the same multiscale pyramid can also be viewed in desktop analysis tools like napari.

Other Examples

Other sample OME-TIFF data, provided by OME, can be downloaded and viewed with Viv locally (without needing to run Bio-Formats).

3D Rendering

Viv has the capability to do volume ray casting on in-memory volumetric data. It also exposes an API for applying arbitrary affine transformations to the volume when rendering. Napari is another popular tool for doing both of these, but there are some key differences. Viv follows the 3D-graphics convention of the x-axis running left-right across the screen, the y-axis running up and down, and the z-axis running into and out of the screen (all following the right-hand rule for standard orientation). Napari orients volumes to respect broadcasting conventions with numpy - that is, its axis order is zyx, the reverse of Viv's.

If you have a homogeneous matrix that you are using in Napari, the best way to make it usable in Viv is to go back through the steps by which you got the matrix, reversing your z and x operations. However, if this is not possible, the following steps should produce something that looks identical, as they swap the row and column space (i.e. you want to change the order of both how the matrix interprets its inputs and how it affects its outputs):

import numpy as np

# napari_transform: a 4x4 homogeneous matrix as used in napari (zyx axis order).
viv_transform = napari_transform.copy()
# Swap the z and x components of the translation column.
viv_transform[[2, 0], 3] = viv_transform[[0, 2], 3]
# Permutation matrix that exchanges the first (z) and third (x) axes.
exchange_mat = np.array(
  [
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
  ]
)
# Conjugate the linear part so it reads and writes xyz-ordered vectors.
viv_transform[:3, :3] = exchange_mat @ viv_transform[:3, :3] @ exchange_mat

If you would like more information, please visit our github page.

Viewers (React Components)

PictureInPictureViewer

This component provides an overview-detail VivViewer of an image (i.e. picture-in-picture).

PictureInPictureViewer
Parameters
props (Object)
Name Description
props.sliderValues Array List of [begin, end] values to control each channel's ramp function.
props.colorValues Array List of [r, g, b] values for each channel.
props.channelIsOn Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.loaderSelection Array Selection to be used for fetching data.
props.overview Object Allows you to pass settings into the OverviewView: { scale, margin, position, minimumWidth, maximumWidth, boundingBoxColor, boundingBoxOutlineWidth, viewportOutlineColor, viewportOutlineWidth}. See http://viv.gehlenborglab.org/#overviewview for defaults.
props.overviewOn Boolean Whether or not to show the OverviewView.
props.viewStates Array? Array of objects like [{ target: [x, y, 0] , zoom: -zoom, id: DETAIL_VIEW_ID }] for setting where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.isLensOn boolean? Whether or not to use the lens (default false).
props.lensSelection number? Numeric index of the channel to be focused on by the lens (default 0).
props.lensRadius number? Pixel radius of the lens (default: 100).
props.lensBorderColor Array? RGB color of the border of the lens (default [255, 255, 255] ).
props.lensBorderRadius number? Percentage of the radius of the lens for a border (default 0.02).
props.clickCenter Boolean? Click to center the default view. Default is true.
props.transparentColor Array? An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated on the shader. Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
props.transitionFields Array? A string array indicating which fields require a transition when making a new selection: Default: ['t', 'z'] .
props.onViewportLoad function? Function that gets called when the data in the viewport loads.
props.glOptions Object? Additional options used when creating the WebGLContext.

SideBySideViewer

This component provides a side-by-side VivViewer with linked zoom/pan.

SideBySideViewer
Parameters
props (Object)
Name Description
props.sliderValues Array List of [begin, end] values to control each channel's ramp function.
props.colorValues Array List of [r, g, b] values for each channel.
props.channelIsOn Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.loaderSelection Array Selection to be used for fetching data.
props.zoomLock Boolean Whether or not lock the zooms of the two views.
props.panLock Boolean Whether or not lock the pans of the two views.
props.viewStates Array? List of objects like [{ target: [x, y, 0] , zoom: -zoom, id: 'left' }, { target: [x, y, 0] , zoom: -zoom, id: 'right' }] for initializing where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.isLensOn boolean? Whether or not to use the lens (default false).
props.lensSelection number? Numeric index of the channel to be focused on by the lens (default 0).
props.lensBorderColor Array? RGB color of the border of the lens (default [255, 255, 255] ).
props.lensBorderRadius number? Percentage of the radius of the lens for a border (default 0.02).
props.lensRadius number? Pixel radius of the lens (default: 100).
props.transparentColor Array? An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated on the shader. Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
props.transitionFields Array? A string array indicating which fields require a transition: Default: ['t', 'z'] .
props.glOptions Object? Additional options used when creating the WebGLContext.

VolumeViewer

This component provides a volumetric VivViewer that performs volume ray casting.

VolumeViewer
Parameters
props (Object)
Name Description
props.sliderValues Array List of [begin, end] values to control each channel's ramp function.
props.colorValues Array List of [r, g, b] values for each channel.
props.channelIsOn Array List of boolean values for each channel for whether or not it is visible.
props.colormap string? String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
props.loader Array The data source for the viewer, PixelSource[]. If loader.length > 1, data is assumed to be multiscale.
props.loaderSelection Array Selection to be used for fetching data.
props.resolution Array? Resolution at which you would like to see the volume and load it into memory (0 is the highest, loader.length - 1 the lowest; default loader.length - 1).
props.renderingMode Array? One of Maximum Intensity Projection, Minimum Intensity Projection, or Additive.
props.modelMatrix Object? A column major affine transformation to be applied to the volume.
props.xSlice Array? 0-1 interval on which to slice the volume.
props.ySlice Array? 0-1 interval on which to slice the volume.
props.zSlice Array? 0-1 interval on which to slice the volume.
props.onViewportLoad function? Function that gets called when the data in the viewport loads.
props.viewStates Array? List of objects like [{ target: [x, y, z] , zoom: -zoom, id: '3d' }] for initializing where the viewer looks (optional - this is inferred from height/width/loader internally by default using getDefaultInitialViewState).
props.height number Current height of the component.
props.width number Current width of the component.
props.clippingPlanes Array<Object>? List of math.gl Plane objects.
props.useFixedAxis Boolean? Whether or not to fix the axis of the camera (default is true).

VivViewer

This component wraps the DeckGL component.

VivViewer
Parameters
props (Object)
Name Description
props.layerProps Array Props for the layers in each view.
props.randomize boolean? Whether or not to randomize which view goes first (for dynamic rendering of multiple linked views).
props.viewStates Array<object> List of objects like [{ target: [x, y, 0] , zoom: -zoom, id: 'left' }, { target: [x, y, 0] , zoom: -zoom, id: 'right' }]
props.onViewStateChange ViewStateChange? Callback that returns the deck.gl view state ( https://deck.gl/docs/api-reference/core/deck#onviewstatechange ).
props.onHover Hover? Callback that returns the picking info and the event ( https://deck.gl/docs/api-reference/core/layer#onhover https://deck.gl/docs/developer-guide/interactivity#the-picking-info-object )
props.hoverHooks HoverHooks? Object including utility hooks - an object with key handleValue like { handleValue: (valueArray) => {}, handleCoordinate: (coordinate) => {} } where valueArray has the pixel values for the image under the hover location and coordinate is the coordinate in the image from which the values are picked.
props.glOptions Object? Additional options used when creating the WebGLContext.

Views

VivView

This class generates a layer and a view for use in the VivViewer.

new VivView(args: Object)
Parameters
args (Object)
Name Description
args.id string Id for the current view
args.x number? (default 0) X (top-left) location on the screen for the current view
args.y number? (default 0) Y (top-left) location on the screen for the current view
args.height number Height of the view.
args.width number Width of the view.
Instance Members
getDeckGlView()
filterViewState(args)
getLayers(args)

OverviewView

This class generates an OverviewLayer and a view for use in the VivViewer as an overview of a DetailView (they must be used in conjunction). From the base class VivView, only the initialViewState argument is used. This class uses private methods to position its x and y from the additional arguments:

new OverviewView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.id string Id for this VivView
args.loader Object PixelSource[], where each PixelSource is decreasing in shape. If length == 1, not multiscale.
args.detailHeight number Height of the detail view.
args.detailWidth number Width of the detail view.
args.scale number? (default 0.2) Scale of this viewport relative to the detail view.
args.margin number? (default 25) Margin to be offset from the corner of the other viewport.
args.position string? (default 'bottom-right') Location of the viewport - one of "bottom-right", "top-right", "top-left", "bottom-left".
args.minimumWidth number? (default 150) Absolute lower bound for how small the viewport should scale.
args.maximumWidth number? (default 350) Absolute upper bound for how large the viewport should scale.
args.minimumHeight number? (default 150) Absolute lower bound for how small the viewport should scale.
args.maximumHeight number? (default 350) Absolute upper bound for how large the viewport should scale.
args.clickCenter Boolean? (default true) Click to center the default view.
Instance Members
_setHeightWidthScale($0)
_setXY()
getDeckGlView()
filterViewState($0)
getLayers($0)

DetailView

This class generates a MultiscaleImageLayer and a view for use in the VivViewer as a detailed view. It takes the same arguments for its constructor as its base class VivView.

new DetailView()

Extends VivView

Instance Members
getLayers($0)
filterViewState($0)

VolumeView

This class generates a VolumeLayer and a view for use in the VivViewer for volumetric rendering.

new VolumeView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.target Array<number> Centered target for the camera (used if useFixedAxis is true)
args.useFixedAxis Boolean Whether or not to fix the axis of the camera.
args.args ...any
Instance Members
getDeckGlView()
filterViewState($0)
getLayers($0)

SideBySideView

This class generates a MultiscaleImageLayer and a view for use in the SideBySideViewer. It is linked with its other views as controlled by linkedIds, zoomLock, and panLock parameters. It takes the same arguments for its constructor as its base class VivView plus the following:

new SideBySideView(args: Object)

Extends VivView

Parameters
args (Object)
Name Description
args.id string id of the View
args.x number? (default 0) X (top-left) location on the screen for the current view
args.y number? (default 0) Y (top-left) location on the screen for the current view
args.height number Height of the view.
args.width number Width of the view.
args.linkedIds Array<String> (default []) Ids of the other views to which this could be locked via zoom/pan.
args.panLock Boolean (default true) Whether or not we lock pan.
args.zoomLock Boolean (default true) Whether or not we lock zoom.
args.viewportOutlineColor Array? (default [255, 255, 255]) Outline color of the border.
args.viewportOutlineWidth number? (default 10) Outline width of the border.
Instance Members
filterViewState($0)
getLayers($0)

Utility Methods

getChannelStats

Computes statistics from pixel data.

This is helpful for generating histograms or scaling sliders to a reasonable range. Also provided are "autoSliders", slider bounds that should give a good initial image.

getChannelStats(arr: TypedArray): {mean: number, sd: number, q1: number, q3: number, median: number, domain: Array<number>, autoSliders: Array<number>}
Parameters
arr (TypedArray)
Returns
{mean: number, sd: number, q1: number, q3: number, median: number, domain: Array<number>, autoSliders: Array<number>}:
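To make the returned fields concrete, here is a simplified, self-contained sketch of the kind of statistics getChannelStats reports. This is not Viv's implementation (Viv's autoSliders heuristic in particular is omitted); it only illustrates the shape of the result using a plain sorted-array quantile approach.

```javascript
// Simplified sketch of the statistics getChannelStats computes.
// NOT Viv's implementation -- just an illustration of the returned fields.
function channelStatsSketch(arr) {
  const sorted = Array.from(arr).sort((a, b) => a - b);
  const n = sorted.length;
  // Nearest-rank quantile; Viv may interpolate differently.
  const quantile = (q) => sorted[Math.floor(q * (n - 1))];
  const mean = sorted.reduce((sum, v) => sum + v, 0) / n;
  const sd = Math.sqrt(sorted.reduce((sum, v) => sum + (v - mean) ** 2, 0) / n);
  return {
    mean,
    sd,
    q1: quantile(0.25),
    median: quantile(0.5),
    q3: quantile(0.75),
    domain: [sorted[0], sorted[n - 1]]
  };
}

const stats = channelStatsSketch(new Uint16Array([0, 10, 20, 30, 40]));
console.log(stats.median, stats.domain); // 20 [ 0, 40 ]
```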

getDefaultInitialViewState

Create an initial view state that centers the image in the viewport at the zoom level that fills the dimensions in viewSize.

getDefaultInitialViewState(loader: Object, viewSize: Object, zoomBackOff: Object?, use3d: Boolean?, modelMatrix: Boolean?): Object
Parameters
loader (Object) (PixelSource[] | PixelSource)
viewSize (Object) { height, width } object giving dimensions of the viewport for deducing the right zoom level to center the image.
zoomBackOff (number? = 0) A positive number controlling how far the view state is zoomed out from filling the entire viewport (default 0, so the image fully fills the view). SideBySideViewer and PictureInPictureViewer use .5 when setting viewState automatically in their default behavior, so the viewport is slightly zoomed out from the image filling the whole screen. One unit of zoomBackOff (i.e. a passed-in value of 1) corresponds to zooming out by 2x.
use3d (Boolean? = false) Whether or not to return a view state that can be used with the 3d viewer
modelMatrix (Object?) If using a transformation matrix, passing it in here will allow this function to properly center the volume.
Returns
Object: A default initial view state that centers the image within the view: { target: [x, y, 0] , zoom: -zoom }.
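The returned view state follows deck.gl's convention that zoom is the base-2 log of the screen-to-image scale. A rough, self-contained sketch of that math (this mirrors the behavior described above but is not Viv's code; the function name is illustrative):

```javascript
// Sketch of a centering view state. Assumes deck.gl zoom semantics:
// zoom = -log2(scale) fits an image `scale` times larger than the view.
function initialViewStateSketch(imageWidth, imageHeight, viewSize, zoomBackOff = 0) {
  // Use the more constrained axis so the whole image fits.
  const scale = Math.max(imageWidth / viewSize.width, imageHeight / viewSize.height);
  return {
    target: [imageWidth / 2, imageHeight / 2, 0],
    // Each unit of zoomBackOff zooms out by an extra factor of 2.
    zoom: -Math.log2(scale) - zoomBackOff
  };
}

const viewState = initialViewStateSketch(4096, 2048, { width: 1024, height: 1024 });
console.log(viewState); // { target: [ 2048, 1024, 0 ], zoom: -2 }
```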

Layers

The documentation in this section shows each layer as an object with properties, although strictly speaking these are deck.gl layer classes and the listed properties are the props passed into each layer's constructor. For more information, see deck.gl's documentation on how to use layers or the layer class. We welcome contributions to improve the docs, whether small fixes or a new docs site that would allow us to present these classes properly. Thanks!

MultiscaleImageLayer

MultiscaleImageLayer

Type: object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : Image pyramid. PixelSource[], where each PixelSource is decreasing in shape.
loaderSelection (Array) : Selection to be used for fetching data.
opacity (number?) : Opacity of the layer.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
viewportId (string?) : Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
id (String?) : Unique identifier for this layer.
onTileError (function?) : Custom override for handling tile fetching errors.
onHover (function?) : Hook function from deck.gl to handle hover objects.
isLensOn (boolean?) : Whether or not to use the lens.
lensSelection (number?) : Numeric index of the channel to be focused on by the lens.
lensRadius (number?) : Pixel radius of the lens (default: 100).
lensBorderColor (Array<number>?) : RGB color of the border of the lens (default [255, 255, 255] ).
lensBorderRadius (number?) : Percentage of the radius of the lens for a border (default 0.02).
maxRequests (number?) : Maximum parallel ongoing requests allowed before aborting.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated in the shader; setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
refinementStrategy (string?) : 'best-available' | 'no-overlap' | 'never' will be passed to TileLayer. A default will be chosen based on opacity.
excludeBackground (boolean?) : Whether to exclude the background image. The background image is also excluded when opacity != 1.
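A minimal sketch of constructing the layer (the loader comes from one of the loaders documented below, e.g. loadOmeTiff; the selections, slider values, and colors here are placeholder values for a hypothetical two-channel uint16 image):

```javascript
import { MultiscaleImageLayer } from '@hms-dbmi/viv';

// `loader` is a PixelSource[] pyramid, e.g. the `data` returned by loadOmeTiff.
const layer = new MultiscaleImageLayer({
  id: 'multiscale-image',
  loader,
  // One selection per rendered channel (placeholder indices).
  loaderSelection: [
    { z: 0, t: 0, c: 0 },
    { z: 0, t: 0, c: 1 }
  ],
  // [begin, end] ramp per channel and an [r, g, b] color per channel.
  sliderValues: [[0, 20000], [0, 20000]],
  colorValues: [[255, 0, 0], [0, 255, 0]],
  channelIsOn: [true, true]
});
```

The layer can then be passed to deck.gl alongside any other layers, per the usual deck.gl composition model.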

ImageLayer

ImageLayer

Type: Object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Object) : PixelSource. Represents an N-dimensional image.
loaderSelection (Array) : Selection to be used for fetching data.
opacity (number?) : Opacity of the layer.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
viewportId (string?) : Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
onHover (function?) : Hook function from deck.gl to handle hover objects.
isLensOn (boolean?) : Whether or not to use the lens.
lensSelection (number?) : Numeric index of the channel to be focused on by the lens.
lensRadius (number?) : Pixel radius of the lens (default: 100).
lensBorderColor (Array<number>?) : RGB color of the border of the lens.
lensBorderRadius (number?) : Percentage of the radius of the lens for a border (default 0.02).
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated in the shader; setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
onViewportLoad (function?) : Function that gets called when the data in the viewport loads.
id (String?) : Unique identifier for this layer.

XRLayer

XRLayer

Type: object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
dtype (string) : Dtype for the layer.
opacity (number?) : Opacity of the layer.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<number>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
id (String?) : Unique identifier for this layer.
onHover (function?) : Hook function from deck.gl to handle hover objects.
isLensOn (boolean?) : Whether or not to use the lens.
lensSelection (number?) : Numeric index of the channel to be focused on by the lens.
lensRadius (number?) : Pixel radius of the lens (default: 100).
lensBorderColor (Array<number>?) : RGB color of the border of the lens (default [255, 255, 255] ).
lensBorderRadius (number?) : Percentage of the radius of the lens for a border (default 0.02).
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated in the shader; setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
interpolation (number?) : The TEXTURE_MIN_FILTER and TEXTURE_MAG_FILTER for WebGL rendering (see https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/texParameter ) - default is GL.NEAREST

VolumeLayer

VolumeLayer

Type: Object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : PixelSource[]. Represents an N-dimensional image.
loaderSelection (Array) : Selection to be used for fetching data.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
resolution (number?) : Resolution at which you would like to see the volume and load it into memory (0 is the highest resolution, loader.length - 1 the lowest; default 0).
renderingMode (string?) : One of Maximum Intensity Projection, Minimum Intensity Projection, or Additive
modelMatrix (Object?) : A column-major affine transformation to be applied to the volume.
xSlice (Array<number>?) : 0-width (physical coordinates) interval on which to slice the volume.
ySlice (Array<number>?) : 0-height (physical coordinates) interval on which to slice the volume.
zSlice (Array<number>?) : 0-depth (physical coordinates) interval on which to slice the volume.
onViewportLoad (function?) : Function that gets called when the data in the viewport loads.
clippingPlanes (Array<Object>?) : List of math.gl Plane objects.
useProgressIndicator (boolean?) : Whether or not to use the default progress text + indicator (default is true)
useWebGL1Warning (boolean?) : Whether or not to use the default WebGL1 warning (default is true)
onUpdate (function?) : A callback to be used for getting updates of the progress, ({ progress }) => {}

XR3DLayer

XR3DLayer

Type: Object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
dtype (string) : Dtype for the layer.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
renderingMode (string?) : One of Maximum Intensity Projection, Minimum Intensity Projection, or Additive
modelMatrix (Object?) : A column-major affine transformation to be applied to the volume.
xSlice (Array<number>?) : 0-width (physical coordinates) interval on which to slice the volume.
ySlice (Array<number>?) : 0-height (physical coordinates) interval on which to slice the volume.
zSlice (Array<number>?) : 0-depth (physical coordinates) interval on which to slice the volume.
clippingPlanes (Array<Object>?) : List of math.gl Plane objects.
resolutionMatrix (Object?) : Matrix for scaling the volume based on the (downsampled) resolution being displayed.

BitmapLayer

BitmapLayer

Type: object

Properties
opacity (number?) : Opacity of the layer.
onClick (function?) : Hook function from deck.gl to handle clicked-on objects.
modelMatrix (Object?) : Math.gl Matrix4 object containing an affine transformation to be applied to the image.
photometricInterpretation (number?) : One of WhiteIsZero, BlackIsZero, YCbCr, or RGB (default).
transparentColor (Array<number>?) : An RGB (0-255 range) color to be considered "transparent" if provided. In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0. This parameter only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated in the shader; setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
id (String?) : Unique identifier for this layer.

OverviewLayer

OverviewLayer

Type: Object

Properties
sliderValues (Array<Array<number>>) : List of [begin, end] values to control each channel's ramp function.
colorValues (Array<Array<number>>) : List of [r, g, b] values for each channel.
channelIsOn (Array<boolean>) : List of boolean values for each channel for whether or not it is visible.
loader (Array) : PixelSource[]. Assumes multiscale if loader.length > 1.
loaderSelection (Array) : Selection to be used for fetching data.
opacity (number?) : Opacity of the layer.
colormap (string?) : String indicating a colormap (default: ''). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
domain (Array<Array<number>>?) : Override for the possible max/min values (i.e. something other than 65535 for uint16/'<u2').
boundingBoxColor (Array<number>?) : [r, g, b] color of the bounding box (default: [255, 0, 0] ).
boundingBoxOutlineWidth (number?) : Width of the bounding box in px (default: 1).
viewportOutlineColor (Array<number>?) : [r, g, b] color of the outline (default: [255, 190, 0] ).
viewportOutlineWidth (number?) : Viewport outline width in px (default: 2).
id (String?) : Unique identifier for this layer.

ScaleBarLayer

ScaleBarLayer

Type: Object

Properties
unit (String) : Physical unit of the per-pixel size at full resolution.
size (Number) : Physical size of a pixel.
viewState (Object) : The current viewState for the desired view. We cannot internally use this.context.viewport because it is one frame behind: https://github.com/visgl/deck.gl/issues/4504
boundingBox (Array?) : Bounding box of the view in which this should render.
id (string?) : Id from the parent layer.
length (number?) : Value from 0 to 1 representing the portion of the view to be used for the length part of the scale bar.

Loaders

loadOmeTiff

Opens an OME-TIFF via URL and returns the data source and associated metadata for the first image.

loadOmeTiff(source: (string | File), opts: {headers: (undefined | Headers), offsets: (undefined | Array<number>), pool: (undefined | boolean)}): Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>
Parameters
source ((string | File)) url or File object.
opts ({headers: (undefined | Headers), offsets: (undefined | Array<number>), pool: (undefined | boolean)} = {}) Options for initializing a tiff pixel source. Headers are passed to each underlying fetch request. Offsets are a performance enhancement to index the remote TIFF source using pre-computed byte offsets. Pool indicates whether a multi-threaded pool of image decoders should be used to decode tiles (default = true).
Returns
Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>: data source and associated OME-XML metadata.
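A minimal usage sketch (the URL is a placeholder): open a remote OME-TIFF, then read one plane from the highest-resolution level of the returned pyramid.

```javascript
import { loadOmeTiff } from '@hms-dbmi/viv';

// Placeholder URL; any Bio-Formats-compatible OME-TIFF works.
const { data, metadata } = await loadOmeTiff('https://example.com/image.ome.tif');

// `data` is a TiffPixelSource[] pyramid, highest resolution first.
const base = data[0];
const { data: plane, width, height } = await base.getRaster({
  selection: { z: 0, t: 0, c: 0 } // placeholder plane selection
});
console.log(metadata.Name, width, height);
```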

loadOmeZarr

Opens root of multiscale OME-Zarr via URL.

loadOmeZarr(source: string, options: {fetchOptions: (undefined | RequestInit)}): Promise<{data: Array<ZarrPixelSource>, metadata: RootAttrs}>
Parameters
source (string) url
options ({fetchOptions: (undefined | RequestInit)} = {})
Returns
Promise<{data: Array<ZarrPixelSource>, metadata: RootAttrs}>: data source and associated OME-Zarr metadata.
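The usage mirrors loadOmeTiff; a sketch with a placeholder URL, reading a plane from the lowest-resolution level (useful for thumbnails):

```javascript
import { loadOmeZarr } from '@hms-dbmi/viv';

// Placeholder URL pointing at the root of a multiscale OME-Zarr store.
const { data } = await loadOmeZarr('https://example.com/image.ome.zarr');

// Levels are ordered highest resolution first, so the last is the smallest.
const lowestResolution = data[data.length - 1];
const { data: pixels } = await lowestResolution.getRaster({
  selection: { z: 0, t: 0, c: 0 } // placeholder plane selection
});
```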

loadBioformatsZarr

Opens the root directory generated via bioformats2raw --file_type=zarr. Uses OME-XML metadata and assumes the first image. This function is the Zarr equivalent of loadOmeTiff.

loadBioformatsZarr(source: string, options: {fetchOptions: (undefined | RequestInit)}): Promise<{data: Array<ZarrPixelSource>, metadata: ImageMeta}>
Parameters
source (string) url
options ({fetchOptions: (undefined | RequestInit)} = {})
Returns
Promise<{data: Array<ZarrPixelSource>, metadata: ImageMeta}>: data source and associated OMEXML metadata.

Misc

MAX_SLIDERS_AND_CHANNELS

MAX_SLIDERS_AND_CHANNELS

Type: number

DTYPE_VALUES

DTYPE_VALUES
Deprecated: We plan to remove DTYPE_VALUES as a part of Viv's public API as it leaks internal implementation details. If this is something your project relies on, please open an issue for further discussion.

More info can be found here: https://github.com/hms-dbmi/viv/pull/372#discussion_r571707517

Static Members
Uint8
Uint16
Uint32
Float32
Int8
Int16
Int32
Float64

COLORMAPS

COLORMAPS

RENDERING_MODES

RENDERING_MODES

OVERVIEW_VIEW_ID

OVERVIEW_VIEW_ID

Type: string

DETAIL_VIEW_ID

DETAIL_VIEW_ID

Type: string

TiffPixelSource

new TiffPixelSource(indexer: any, dtype: any, tileSize: any, shape: any, labels: any, meta: any)
Parameters
indexer (any)
dtype (any)
tileSize (any)
shape (any)
labels (any)
meta (any)
Instance Members
getRaster($0)
getTile($0)
_readRasters(image, props)
_getTileExtent(x, y)
onTileError(err)

ZarrPixelSource

new ZarrPixelSource(data: any, labels: any, tileSize: any)
Parameters
data (any)
labels (any)
tileSize (any)
Instance Members
shape
dtype
_xIndex
_chunkIndex(selection, x, y)
_getSlices(x, y)
getRaster($0)
getTile(props)
onTileError(err)