A WebGL-powered toolkit for interactive visualization of high-resolution, multiplexed bioimaging datasets.
Viv is a JavaScript library for rendering OME-TIFF and OME-NGFF (Zarr) directly in the browser. The rendering components of Viv are packaged as deck.gl layers, making it easy to compose with existing layers to create rich interactive visualizations.
More details and related work can be found in our paper and original preprint. Please cite our paper in your research:
Trevor Manz, Ilan Gold, Nathan Heath Patterson, Chuck McCallum, Mark S Keller, Bruce W Herr II, Katy Börner, Jeffrey M Spraggins, Nils Gehlenborg, "Viv: multiscale visualization of high-resolution multiplexed bioimaging data on the web." Nature Methods (2022), doi:10.1038/s41592-022-01482-7
- **Avivator**: A lightweight viewer for local and remote datasets. The source code is included in this repository under `avivator/`. See our 🎥 video tutorial to learn more.
- **Vizarr**: A minimal, purely client-side application for viewing OME-NGFF and other Zarr-based images. Vizarr supports a Python backend using imjoy-rpc, allowing it not only to function as a standalone application but also to embed directly in Jupyter or Google Colab notebooks.
Viv's data loaders support OME-NGFF (Zarr), OME-TIFF, and Indexed OME-TIFF*. We recommend converting proprietary file formats to open standard formats via the bioformats2raw + raw2ometiff pipeline. Non-pyramidal datasets are also supported, provided the individual texture can be uploaded to the GPU (< 4096 x 4096 in pixel size). Please see the tutorial for more information.
*We describe Indexed OME-TIFF in our paper as an optional enhancement to provide efficient random chunk access for OME-TIFF. Our approach substantially improves chunk load times for OME-TIFF datasets with large Z, C, or T dimensions that otherwise may incur long latencies due to seeking. More information on generating an IFD index (JSON) can be found in our tutorial or documentation.
You will also need to install deck.gl and other peerDependencies manually. This step prevents users from installing multiple versions of deck.gl in their projects.
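For example, in an npm-based project (check viv's `package.json` for the authoritative peerDependencies list; `deck.gl` is the one named above):

```shell
# Install viv together with its deck.gl peer dependency so only one
# copy of deck.gl ends up in the dependency tree.
npm install @hms-dbmi/viv deck.gl
```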
Breaking changes may occur on minor version updates. Please see the changelog for more information.
Detailed API information and example snippets can be found in our documentation.
This repo is a monorepo using pnpm workspaces. The package manager used to install and link dependencies must be pnpm.

Each folder under `packages/` is published as a separate package on npm under the `@vivjs` scope. The top-level package `@hms-dbmi/viv` re-exports from these dependencies.

To develop and test the `@hms-dbmi/viv` package:

- `pnpm install` in the viv root folder
- `pnpm dev` to start a development server
- `pnpm test` to run all tests (or specific tests, e.g., `pnpm test --filter=@vivjs/layers`)

To build viv's documentation and the Avivator website (under `sites/`), run:
For changes to be reflected in package changelogs, run `npx changeset` and follow the prompts.

Note: not every PR requires a changeset. Since changesets are focused on releases and changelogs, changes to the repository that don't affect these (e.g., documentation, tests) don't need a changeset.

The Changesets GitHub Action will create and update a PR that applies changesets and publishes new versions of the `@vivjs` packages to npm.
Viv is supported in Safari, Firefox, Chrome, and Edge. Please file an issue if you find a browser in which Viv does not work.
Start a simple web server to make your data available to the browser via HTTP. If following the Data Preparation tutorial, the server provides Viv access to the OME-TIFF via the URL http://localhost:8000/LuCa-7color_Scan1.ome.tif.

If you wish to use the `SideBySideViewer`, simply replace `PictureInPictureViewer` with `SideBySideViewer` and add props for `zoomLock` and `panLock`, while removing `overview` and `overviewOn`.
Viv provides four high-level React components called "Viewers": `PictureInPictureViewer`, `SideBySideViewer`, `VolumeViewer`, and `VivViewer`. The first three wrap the `VivViewer` and use the `View` API for composing different layouts. `VivViewer` wraps a `DeckGL` component and handles managing the complex state (i.e., multiple "views" in the same scene) as well as resizing and forwarding rendering props down to the underlying layers via the `View` API.
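As a rough sketch of how a Viewer is driven, the per-channel props look like the following plain object (prop names follow Viv's documented Viewer API; in a React app this would be spread onto, e.g., `<PictureInPictureViewer {...viewerProps} loader={data} />`):

```javascript
// Per-channel rendering props shared by Viv's Viewer components.
// Each array has one entry per rendered channel.
const viewerProps = {
  contrastLimits: [[0, 2000], [0, 1500]],  // [begin, end] ramp per channel
  colors: [[255, 0, 0], [0, 255, 0]],      // RGB per channel
  channelsVisible: [true, true],           // visibility toggle per channel
  selections: [{ t: 0, c: 0, z: 0 }, { t: 0, c: 1, z: 0 }], // plane per channel
  height: 600,
  width: 800,
};

// The per-channel arrays must all have the same length.
const lengths = [
  viewerProps.contrastLimits.length,
  viewerProps.colors.length,
  viewerProps.channelsVisible.length,
  viewerProps.selections.length,
];
console.log(lengths.every(n => n === 2)); // true
```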
A "View" in Viv is a wrapper around a deck.gl `View` that exposes an API to manage where an image is rendered in the coordinate space. A view must inherit from `VivView` and implement methods for creating:

- a `ViewState`
- a `View`
- `Layers`

Views are used by Viv's Viewer components. For example, the `OverviewView` is used in the `PictureInPictureViewer` to provide a constant overview of the high-resolution image. The `SideBySideView` supports locked and unlocked zoom/pan interactions within the `SideBySideViewer`.
Viv implements several deck.gl `Layers` for rendering RGB and multi-channel imaging data. These layers can be composed like any other layer in the deck.gl ecosystem. The three main layers are `MultiscaleImageLayer` (for tiled, pyramidal images), `ImageLayer` (for non-pyramidal and non-tiled images), and `VolumeLayer` (for volumetric ray-casting rendering), which accept `PixelSource` arguments for data fetching. These layers handle the complexity of data fetching and setting up the rendering by wrapping the `XRLayer`, `XR3DLayer`, and `BitmapLayer`, which are the lower-level rendering layers.
The `XRLayer` (eXtended Range Layer) and `XR3DLayer` enable multi-channel additive blending of `Uint32`, `Uint16`, `Uint8`, and `Float32` data on the GPU.
A crucial part of the layers is the `extensions` prop, which controls the per-fragment (pixel) rendering. The default on all layers is `ColorPaletteExtension`, which provides a default `colors` prop; thus all that is necessary for controlling rendering is `contrastLimits`. If you wish to do something different, for example using a "colormap" like `viridis`, you will need to pass in `extensions: [new AdditiveColormapExtension()]` and `colormap: 'viridis'`. Please see deck.gl's documentation for more information.
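A sketch of the two configurations as prop objects (in a real app `AdditiveColormapExtension` is imported from `@hms-dbmi/viv`; a placeholder class stands in here so the snippet is self-contained):

```javascript
// Default rendering: ColorPaletteExtension is applied implicitly and supplies
// default colors, so only contrastLimits must be provided.
const defaultProps = {
  contrastLimits: [[0, 2000]],
};

// Colormap rendering: swap in the AdditiveColormapExtension and name a colormap.
// Real import: import { AdditiveColormapExtension } from '@hms-dbmi/viv';
class AdditiveColormapExtension {} // placeholder for this sketch only
const colormapProps = {
  contrastLimits: [[0, 2000]],
  extensions: [new AdditiveColormapExtension()],
  colormap: 'viridis',
};
```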
Viv wraps both TIFF- and Zarr-based data sources in a unified `PixelSource` interface. A pixel source can be thought of as a multi-dimensional "stack" of image data with labeled dimensions (usually `["t", "c", "z", "y", "x"]`). A multiscale image is represented as a list of pixel sources decreasing in shape. Viv provides several helper functions to initialize a loader via URL: `loadOmeTiff`, `loadBioformatsZarr`, `loadOmeZarr`, and `loadMultiTiff`.

Each function returns a `Promise` for an object of shape `{ data: PixelSource[], metadata: M }`, where `M` is a JavaScript object containing the format-specific metadata for the image. For OME-TIFF, Bioformats-Zarr, and MultiTiff the metadata is identical (OME-XML representation); for OME-Zarr, the metadata is that of a multiscale group (more information: https://ngff.openmicroscopy.org/latest/). This metadata can be useful for creating UI components that describe the data source.
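The shape of the `data` half of that return value, sketched with mock objects (real pixel sources come from, e.g., `await loadOmeTiff(url)`; `labels`, `shape`, and `getRaster` are members of the PixelSource interface):

```javascript
// Mock of what a loader helper resolves to for `data`: a list of
// PixelSources decreasing in shape (a multiscale pyramid).
const mockLevel = (height, width) => ({
  labels: ['t', 'c', 'z', 'y', 'x'],
  shape: [1, 3, 1, height, width],
  // A real PixelSource fetches one 2D plane for a given selection:
  async getRaster({ selection }) {
    return { data: new Uint16Array(height * width), height, width };
  },
});

const data = [mockLevel(4096, 4096), mockLevel(2048, 2048), mockLevel(1024, 1024)];
const isMultiscale = data.length > 1; // true
const base = data[0];                 // highest-resolution level
```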
Viv's shaders in 2D can be modified via deck.gl shader hooks. The `LensExtension`, `AdditiveColormapExtension`, and `ColorPaletteExtension` (default) in Viv implement shader hooks. Implementing your own shader hook requires extending the standard layer extension with a `getShaders` method that returns an `inject` for one of the following supported hooks:
DECKGL_PROCESS_INTENSITY(inout float intensity, vec2 contrastLimits, int channelIndex)
This hook allows for custom processing of raw pixel intensities. For example, a non-linear (or alternative) transformation function may be provided to override the default linear ramp function with two contrast limit endpoints. This hook is available on all layers in all modes. By default, the layer provides a reasonable function for this.
DECKGL_MUTATE_COLOR(inout vec4 rgba, float intensity0, float intensity1, float intensity2, float intensity3, float intensity4, float intensity5, vec2 vTexCoord)
This hook allows users to mutate the conversion of a processed intensity (from `DECKGL_PROCESS_INTENSITY`) into a color. It is only available in 2D layers. An implementation of this hook is required by all Viv extensions.
DECKGL_FILTER_COLOR(inout vec4 color, FragmentGeometry geometry)
Please see deck.gl's documentation, as this is a standard hook. It may be used, for example, to apply a gamma transformation to the output colors.
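For instance, a hypothetical extension's `getShaders` might return the following (the `'fs:'`-prefixed inject key is deck.gl's shader-hook convention; the GLSL body is illustrative, not Viv's built-in ramp):

```javascript
// Injecting into DECKGL_PROCESS_INTENSITY: normalize the raw intensity
// between the contrast-limit endpoints, then apply a square-root ramp
// instead of the default linear one.
const gammaShaders = {
  inject: {
    'fs:DECKGL_PROCESS_INTENSITY': `
      intensity = clamp(
        (intensity - contrastLimits[0]) /
        max(contrastLimits[1] - contrastLimits[0], 1e-6),
        0.0, 1.0
      );
      intensity = pow(intensity, 0.5);
    `,
  },
};
// In a real extension, this object is returned from a getShaders() method
// on a class extending deck.gl's LayerExtension.
```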
Viv's shaders can be modified in 3D similarly to the above, with the exception of `DECKGL_MUTATE_COLOR`. Instead, at least one provided extension must implement `_BEFORE_RENDER`, `_RENDER`, and `_AFTER_RENDER`. Specifically, one extension must have `opts.rendering` as an object whose `_RENDER` property (at minimum) is a string containing valid GLSL code. For example, the `MaximumIntensityProjectionExtension` uses `_BEFORE_RENDER` to set up an array that will hold the found maximum intensities, `_RENDER` fills that array as maximum intensities are found, and finally `_AFTER_RENDER` places those intensities in the `color` or `fragColor` buffer to be rendered.
This guide demonstrates how to generate a pyramidal OME-TIFF with Bio-Formats that can be viewed with Avivator. Viv also supports OME-NGFF, but tooling to generate the format remains limited as the specification matures. We will update this tutorial accordingly when a method for generating OME-NGFF is endorsed by the Open Microscopy Environment.
NOTE: If you wish to view an image located on a remote machine accessible via SSH, please see the note at the end of this document.
This tutorial requires the Bio-Formats `bioformats2raw` and `raw2ometiff` command-line tools. It's easiest to install these tools using `conda`, but binaries are also available for download from the corresponding GitHub repositories.
Bio-Formats is an incredibly valuable toolkit and supports reading over 150 file formats. You can choose one of your own images, but for the purposes of this tutorial, we will use a multiplexed, high-resolution Perkin Elmer (1.95 GB) image made available under CC-BY 4.0 on OME.
First, use `bioformats2raw` to convert the `.qptiff` format to an intermediate "raw" format. This representation includes the multiscale binary pixel data (Zarr) and associated OME-XML metadata.
The next step is to convert this "raw" output to an OME-TIFF.
Note: `LZW` is the default if you do not specify a `--compression` option (the syntax requires an "=" sign, like `--compression=zlib`).
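Concretely, the two steps might look like this on the command line (the intermediate directory name is illustrative; `--compression=zlib` follows the syntax noted above):

```shell
# Step 1: proprietary format -> intermediate "raw" (Zarr + OME-XML)
bioformats2raw LuCa-7color_Scan1.qptiff LuCa-7color_Scan1_raw/

# Step 2: "raw" directory -> pyramidal OME-TIFF
raw2ometiff LuCa-7color_Scan1_raw/ LuCa-7color_Scan1.ome.tif --compression=zlib
```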
You may also use `bfconvert` (Bio-Formats >= 6.0.0) to generate an OME-TIFF.
All of the above arguments are necessary except for `-compression`, which is optional (default uncompressed). In order for an image to be compatible with Viv:

- `-pyramid-scale` must be 2
- `-tilex` must equal `-tiley` (ideally a power of 2)
- `-pyramid-resolutions` must be computed using the image dimensions and tile size. For example, for a `4096 x 4096` image with a tile size of `512`, `3 = log2(ceil(4096 / 512))` resolutions should work well.
For the `LuCa-7color_Scan1.qptiff` image, `6 = max(log2(ceil(12480 / 512)), log2(ceil(17280 / 512)))` resolutions works best, as the image is `12480 x 17280` in size. There is currently no "auto" feature for inferring the number of pyramid resolutions.
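The rule above can be sketched as a small helper (not part of any released tool):

```javascript
// Number of pyramid resolutions such that the smallest level fits in one
// tile: max over dimensions of log2(ceil(dim / tileSize)), rounded up.
function pyramidResolutions(dims, tileSize) {
  return Math.max(
    ...dims.map(d => Math.ceil(Math.log2(Math.ceil(d / tileSize))))
  );
}

console.log(pyramidResolutions([4096, 4096], 512));   // 3
console.log(pyramidResolutions([12480, 17280], 512)); // 6
```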
Without compression set (i.e., `-compression LZW`), the output image will be uncompressed. There is a 2GB limit on the total amount of data that may be read into memory by the `bfconvert` CLI tool; therefore, for larger images, please use `bioformats2raw` + `raw2ometiff`.
NOTE: Viv currently uses geotiff.js for accessing data from remote TIFFs over HTTP and supports the three lossless compression options offered by `raw2ometiff` (`LZW`, `zlib`, and `Uncompressed`) as well as `jpeg` compression for 8-bit data. Support for JPEG-2000 for >8-bit data is planned. Please open an issue if you would like this more immediately.
The TIFF file format is not designed for the cloud, and therefore certain images are less suitable to be natively read remotely. If your OME-TIFF image contains large non-XY dimensions (e.g. Z=100, T=50), you are likely to experience latencies when switching planes in Avivator due to seeking the file over HTTP. We recommend generating an index (offsets.json
) that contains the byte-offsets for each plane to complement OME-TIFF, solving this latency issue and enabling fast interactions.
Alternatively you may use our web application for generating the offsets.
⚠️ IMPORTANT ⚠️ Avivator requires the
offsets.json
file to be adjacent to the OME-TIFF on the server in order to leverage this feature. For example, if an index is generated for the dataset in this tutorial, the following directory structure is correct:
data
├── LuCa-7color_Scan1.offsets.json
└── LuCa-7color_Scan1.ome.tif
This index can be reused by other Viv-based applications and even clients in other languages to improve remote OME-TIFF performance. If using Viv, you must fetch the offsets.json directly in your application code. See our example for help getting started.
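A sketch of that fetch-and-pass step (the `offsets` option name follows Viv's loader API; the loader function is passed in as a parameter so the snippet stays self-contained):

```javascript
// Fetch the adjacent index and hand it to Viv's OME-TIFF loader. If the
// index is missing, fall back to loading without offsets (slower seeks).
async function loadWithOffsets(loadOmeTiff, baseUrl, name) {
  const res = await fetch(`${baseUrl}/${name}.offsets.json`);
  const offsets = res.ok ? await res.json() : undefined;
  return loadOmeTiff(`${baseUrl}/${name}.ome.tif`, { offsets });
}
```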
⚠️ Warning ⚠️ This section only works in Chrome, Firefox, and Edge (not Safari) due to differences in how browsers restrict websites hosted at `https://` URLs (Avivator) from issuing requests to `http://` (the local data server) as a security measure. The supported browsers allow requests to `http://` from `https://` under the special case of `localhost`, whereas Safari prevents all requests to `http://`. As a workaround, you can start an Avivator client at `http://`, but we suggest trying a different supported browser. Alternatively, you can drag-and-drop an image (no local server) into the viewer in any browser.
There are a few different ways to view your data in Avivator.
If you have an OME-TIFF saved locally, you may simply drag and drop the file over the canvas or use the "Choose file" button to view your data.
Otherwise Avivator relies on access to data over HTTP, and you can serve data locally using a simple web-server.
It's easiest to use `http-server` to start a web server locally, which can be installed via `npm`, or Homebrew if using a Mac.
http-server
From within this directory, start a local server and open Avivator in your browser.
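For example (the `--cors` flag and the port match the URLs used in this tutorial; install once via npm, or Homebrew on macOS):

```shell
# One-time install
npm install --global http-server

# Serve the current directory over HTTP on port 8000 with CORS enabled
http-server --cors='*' --port 8000 .
```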
This command starts a web server and makes the content in the current directory readable over HTTP. Once the server is running, open Avivator and paste http://localhost:8000/LuCa-7color_Scan1.ome.tif into the input dialog to view the OME-TIFF generated in this tutorial. For convenience, you can also create a direct link by appending an `image_url` query parameter:
Troubleshooting: Viv relies on cross-origin requests to retrieve data from servers. The `--cors='*'` flag is important to ensure that the appropriate `Access-Control-Allow-Origin` response is sent from your local server. In addition, web servers must allow HTTP range requests to support viewing OME-TIFF images. Range requests are allowed by default by `http-server` but may need to be enabled explicitly for your production web server.
It is possible to generate the datasets in this tutorial on a remote machine and view them in Avivator via SSH and port forwarding.
For example, you can follow this tutorial on a remote machine over SSH, linking your local port `12345` to the remote machine's local port `8000`.
Since your local port `12345` is linked to the remote port `8000` via SSH, you can now view the remote dataset locally via `localhost:12345` in Avivator: that is, paste http://avivator.gehlenborglab.org/?image_url=http://localhost:12345/LuCa-7color_Scan1.ome.tif into your browser instead of http://avivator.gehlenborglab.org/?image_url=http://localhost:8000/LuCa-7color_Scan1.ome.tif as written at the end of the tutorial.
Other sample OME-TIFF data can be downloaded from OME-TIFF sample data provided by OME and viewed with Viv locally (without needing to run Bio-Formats).
Viv has the capability to do volume ray-casting on in-memory volumetric data. It also exposes an API for applying arbitrary affine transformations to the volume when rendering. Napari is another popular tool for doing both of these, but there are some key differences. Viv follows the convention of 3D graphics: the x-axis runs left-right across the screen, the y-axis up and down, and the z-axis into and out of the screen (all following the right-hand rule for standard orientation). Napari orients volumes to respect broadcasting conventions with numpy; that is, its axis order is zyx, the reverse of Viv's.
If you have a homogeneous matrix that you are using in Napari, the best way to make it usable in Viv is to go back through the steps by which you got the matrix, reversing your z and x operations. However, if this is not possible, the following steps should produce something that looks identical, as they swap the row and column space (i.e., you change the order of both how the matrix interprets its inputs and how it affects its outputs):
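The swap can be sketched as conjugating the 4x4 homogeneous matrix by the permutation that reverses (z, y, x) into (x, y, z) (one way to implement the row- and column-space swap described above; verify the result against your data):

```javascript
// P reverses (z, y, x) <-> (x, y, z); conjugating by P swaps both the
// row space (outputs) and the column space (inputs) of the transform.
const P = [
  [0, 0, 1, 0],
  [0, 1, 0, 0],
  [1, 0, 0, 0],
  [0, 0, 0, 1],
];

// Plain 4x4 matrix multiplication on nested arrays.
const matMul = (A, B) =>
  A.map(row =>
    B[0].map((_, j) => row.reduce((s, aik, k) => s + aik * B[k][j], 0))
  );

// napariMatrix acts on [z, y, x, 1]; the result acts on [x, y, z, 1].
const napariToViv = (napariMatrix) => matMul(P, matMul(napariMatrix, P));

// Example: scaling the z axis by 2 in Napari's zyx order...
const napariScaleZ = [
  [2, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1],
];
// ...becomes a scale of z (now the third coordinate) in xyz order.
console.log(napariToViv(napariScaleZ)[2][2]); // 2
```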
If you would like more information, please visit our GitHub page.
This component provides an overview-detail VivViewer of an image (i.e., picture-in-picture).
(Object)
Name | Description
---|---
`props.contrastLimits` (Array) | List of `[begin, end]` values to control each channel's ramp function.
`props.colors` (Array) | List of `[r, g, b]` values for each channel.
`props.channelsVisible` (Array) | List of boolean values for each channel for whether or not it is visible.
`props.colormap` (string?) | String indicating a colormap (default: `''`). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
`props.loader` (Array) | The data source for the viewer, `PixelSource[]`. If `loader.length > 1`, data is assumed to be multiscale.
`props.selections` (Array) | Selections to be used for fetching data.
`props.overview` (Object) | Settings passed into the OverviewView: `{ scale, margin, position, minimumWidth, maximumWidth, boundingBoxColor, boundingBoxOutlineWidth, viewportOutlineColor, viewportOutlineWidth }`. See http://viv.gehlenborglab.org/#overviewview for defaults.
`props.overviewOn` (Boolean) | Whether or not to show the OverviewView.
`props.viewStates` (Array?) | Array of objects like `[{ target: [x, y, 0], zoom: -zoom, id: DETAIL_VIEW_ID }]` for setting where the viewer looks (optional; inferred from height/width/loader internally by default using `getDefaultInitialViewState`).
`props.height` (number) | Current height of the component.
`props.width` (number) | Current width of the component.
`props.extensions` (Array?) | deck.gl extensions to add to the layers.
`props.clickCenter` (Boolean?) | Click to center the default view (default: true).
`props.lensEnabled` (boolean?) | Whether or not to use the lens (default: false). Must be used with the `LensExtension` in the `extensions` prop.
`props.lensSelection` (number?) | Numeric index of the channel to be focused on by the lens (default: 0). Must be used with the `LensExtension` in the `extensions` prop.
`props.lensRadius` (number?) | Pixel radius of the lens (default: 100). Must be used with the `LensExtension` in the `extensions` prop.
`props.lensBorderColor` (Array?) | RGB color of the border of the lens (default: `[255, 255, 255]`). Must be used with the `LensExtension` in the `extensions` prop.
`props.lensBorderRadius` (number?) | Percentage of the radius of the lens for a border (default: 0.02). Must be used with the `LensExtension` in the `extensions` prop.
`props.transparentColor` (Array?) | An RGB (0-255 range) color to be considered "transparent" if provided; any fragment shader output equal to `transparentColor` (before applying opacity) will have opacity 0. This only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated on the shader; setting it truthy (with a colormap set) indicates that the shader should make that color transparent.
`props.snapScaleBar` (boolean?) | If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary (default: false).
`props.onViewportLoad` (function?) | Function that gets called when the data in the viewport loads.
`props.deckProps` (Object?) | Additional options used when creating the DeckGL component. See the deck.gl docs. `layerFilter`, `layers`, `onViewStateChange`, `views`, `viewState`, `useDevicePixels`, and `getCursor` are already set.
This component provides a side-by-side VivViewer with linked zoom/pan.
(Object)
Name | Description
---|---
`props.contrastLimits` (Array) | List of `[begin, end]` values to control each channel's ramp function.
`props.colors` (Array) | List of `[r, g, b]` values for each channel.
`props.channelsVisible` (Array) | List of boolean values for each channel for whether or not it is visible.
`props.colormap` (string?) | String indicating a colormap (default: `''`). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap
`props.loader` (Array) | The data source for the viewer, `PixelSource[]`. If `loader.length > 1`, data is assumed to be multiscale.
`props.selections` (Array) | Selections to be used for fetching data.
`props.zoomLock` (Boolean) | Whether or not to lock the zooms of the two views.
`props.panLock` (Boolean) | Whether or not to lock the pans of the two views.
`props.viewStates` (Array?) | List of objects like `[{ target: [x, y, 0], zoom: -zoom, id: 'left' }, { target: [x, y, 0], zoom: -zoom, id: 'right' }]` for initializing where the viewer looks (optional; inferred from height/width/loader internally by default using `getDefaultInitialViewState`).
`props.height` (number) | Current height of the component.
`props.width` (number) | Current width of the component.
`props.extensions` (Array?) | deck.gl extensions to add to the layers.
`props.lensEnabled` (boolean?) | Whether or not to use the lens (default: false).
`props.lensSelection` (number?) | Numeric index of the channel to be focused on by the lens (default: 0).
`props.lensBorderColor` (Array?) | RGB color of the border of the lens (default: `[255, 255, 255]`).
`props.lensBorderRadius` (number?) | Percentage of the radius of the lens for a border (default: 0.02).
`props.lensRadius` (number?) | Pixel radius of the lens (default: 100).
`props.transparentColor` (Array?) | An RGB (0-255 range) color to be considered "transparent" if provided; any fragment shader output equal to `transparentColor` (before applying opacity) will have opacity 0. This only needs to be truthy when using colormaps, because each colormap has its own transparent color that is calculated on the shader; setting it truthy (with a colormap set) indicates that the shader should make that color transparent.
`props.snapScaleBar` (boolean?) | If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary (default: false).
`props.deckProps` (Object?) | Additional options used when creating the DeckGL component. See the deck.gl docs. `layerFilter`, `layers`, `onViewStateChange`, `views`, `viewState`, `useDevicePixels`, and `getCursor` are already set.
This component provides a volumetric viewer with volume ray-casting.
(Object)
Name | Description
---|---
`props.contrastLimits` (Array) | List of `[begin, end]` values to control each channel's ramp function.
`props.colors` (Array?) | List of `[r, g, b]` values for each channel; necessary if using one of the ColorPalette3DExtensions extensions.
`props.channelsVisible` (Array) | List of boolean values for each channel for whether or not it is visible.
`props.colormap` (string?) | String indicating a colormap (default: `''`). The full list of options is here: https://github.com/glslify/glsl-colormap#glsl-colormap. Necessary if using one of the AdditiveColormap3DExtensions extensions.
`props.loader` (Array) | The data source for the viewer, `PixelSource[]`. If `loader.length > 1`, data is assumed to be multiscale.
`props.selections` (Array) | Selections to be used for fetching data.
`props.resolution` (Array?) | Resolution at which you would like to see the volume and load it into memory (0 is the highest, `loader.length - 1` the lowest; default `loader.length - 1`).
`props.modelMatrix` (Object?) | A column-major affine transformation to be applied to the volume.
`props.xSlice` (Array?) | 0-1 interval on which to slice the volume.
`props.ySlice` (Array?) | 0-1 interval on which to slice the volume.
`props.zSlice` (Array?) | 0-1 interval on which to slice the volume.
`props.onViewportLoad` (function?) | Function that gets called when the data in the viewport loads.
`props.viewStates` (Array?) | List of objects like `[{ target: [x, y, z], zoom: -zoom, id: '3d' }]` for initializing where the viewer looks (optional; inferred from height/width/loader internally by default using `getDefaultInitialViewState`).
`props.height` (number) | Current height of the component.
`props.width` (number) | Current width of the component.
`props.clippingPlanes` (Array<Object>?) | List of math.gl Plane objects.
`props.useFixedAxis` (Boolean?) | Whether or not to fix the axis of the camera (default: true).
`props.extensions` (Array?) | deck.gl extensions to add to the layers; default is AdditiveBlendExtension from ColorPalette3DExtensions.
This component wraps the DeckGL component.
(Object)
Name | Description
---|---
`props.layerProps` (Array) | Props for the layers in each view.
`props.randomize` (boolean?) | Whether or not to randomize which view goes first (for dynamic rendering of multiple linked views).
`props.viewStates` (Array<object>) | List of objects like `[{ target: [x, y, 0], zoom: -zoom, id: 'left' }, { target: [x, y, 0], zoom: -zoom, id: 'right' }]`.
`props.onViewStateChange` (ViewStateChange?) | Callback that returns the deck.gl view state (https://deck.gl/docs/api-reference/core/deck#onviewstatechange).
`props.onHover` (Hover?) | Callback that returns the picking info and the event (https://deck.gl/docs/api-reference/core/layer#onhover, https://deck.gl/docs/developer-guide/interactivity#the-picking-info-object).
`props.hoverHooks` (HoverHooks?) | Object of utility hooks like `{ handleValue: (valueArray) => {}, handleCoordinate: (coordinate) => {} }`, where `valueArray` has the pixel values for the image under the hover location and `coordinate` is the coordinate in the image from which the values are picked.
`props.deckProps` (Object?) | Additional options used when creating the DeckGL component. See the deck.gl docs. `layerFilter`, `layers`, `onViewStateChange`, `views`, `viewState`, `useDevicePixels`, and `getCursor` are already set.
This class generates a layer and a view for use in the VivViewer.
(Object)
Name | Description
---|---
`args.id` (string) | Id for the current view.
`args.x` (number?, default 0) | X (top-left) location on the screen for the current view.
`args.y` (number?, default 0) | Y (top-left) location on the screen for the current view.
`args.height` (Object) | Height of the view.
`args.width` (Object) | Width of the view.
Create a DeckGL view based on this class.

Returns `View`: The DeckGL View for this class.
This class generates an OverviewLayer and a view for use in the VivViewer as an overview of a DetailView (they must be used in conjunction). From the base class VivView, only the initialViewState argument is used. This class uses private methods to position its x and y from the additional arguments:
Extends VivView
(Object)
Name | Description
---|---
`args.id` (Object) | Id for this VivView.
`args.loader` (Object) | `PixelSource[]`, where each PixelSource is decreasing in shape. If `length == 1`, not multiscale.
`args.detailHeight` (number) | Height of the detail view.
`args.detailWidth` (number) | Width of the detail view.
`args.scale` (number?, default 0.2) | Scale of this viewport relative to the detail view.
`args.margin` (number?, default 25) | Margin to be offset from the corner of the other viewport.
`args.position` (string?, default 'bottom-right') | Location of the viewport: one of "bottom-right", "top-right", "top-left", "bottom-left".
`args.minimumWidth` (number?, default 150) | Absolute lower bound for how small the viewport should scale.
`args.maximumWidth` (number?, default 350) | Absolute upper bound for how large the viewport should scale.
`args.minimumHeight` (number?, default 150) | Absolute lower bound for how small the viewport should scale.
`args.maximumHeight` (number?, default 350) | Absolute upper bound for how large the viewport should scale.
`args.clickCenter` (Boolean?, default true) | Click to center the default view.
This class generates a MultiscaleImageLayer and a view for use in the VivViewer as a detailed view. It takes the same arguments for its constructor as its base class VivView plus the following:
Extends VivView
(Object)
Name | Description
---|---
`args.id` (string) | Id of the View.
`args.x` (number?, default 0) | X (top-left) location on the screen for the current view.
`args.y` (number?, default 0) | Y (top-left) location on the screen for the current view.
`args.height` (number) | Height of the view.
`args.width` (number) | Width of the view.
`args.snapScaleBar` (boolean?, default false) | If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary.
This class generates a VolumeLayer and a view for use in the VivViewer for volumetric rendering.
Extends VivView
This class generates a MultiscaleImageLayer and a view for use in the SideBySideViewer. It is linked with its other views as controlled by the `linkedIds`, `zoomLock`, and `panLock` parameters. It takes the same arguments for its constructor as its base class VivView, plus the following:
Extends VivView
(Object)
Name | Description
---|---
`args.id` (string) | Id of the View.
`args.x` (number?, default 0) | X (top-left) location on the screen for the current view.
`args.y` (number?, default 0) | Y (top-left) location on the screen for the current view.
`args.height` (number) | Height of the view.
`args.width` (number) | Width of the view.
`args.linkedIds` (Array<String>, default []) | Ids of the other views to which this view could be locked via zoom/pan.
`args.panLock` (Boolean, default true) | Whether or not to lock pan.
`args.zoomLock` (Boolean, default true) | Whether or not to lock zoom.
`args.viewportOutlineColor` (Array?, default [255, 255, 255]) | Outline color of the border.
`args.viewportOutlineWidth` (number?, default 10) | Outline width of the border.
`args.snapScaleBar` (boolean?, default false) | If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary.
Computes statistics from pixel data. This is helpful for generating histograms or scaling contrast limits to a reasonable range. Also provided are "contrastLimits", slider bounds that should give a good initial image.

Parameter: (TypedArray)

Returns: `{ mean: number, sd: number, q1: number, q3: number, median: number, domain: Array<number>, contrastLimits: Array<number> }`
Create an initial view state that centers the image in the viewport at the zoom level that fills the dimensions in viewSize
.
(Object)
(PixelSource[] | PixelSource)
(Object)
{ height, width } object giving dimensions of the viewport for deducing the right zoom level to center the image.
(Object?
= 0
)
A positive number which controls how far zoomed out the view state is from filling the entire viewport (default is 0 so the image fully fills the view).
SideBySideViewer and PictureInPictureViewer use .5 when setting viewState automatically in their default behavior, so the viewport is slightly zoomed out from the image
filling the whole screen. 1 unit of zoomBackOff (so a passed-in value of 1) corresponds to a 2x zooming out.
(Boolean?
= false
)
Whether or not to return a view state that can be used with the 3d viewer
(Boolean?)
If using a transformation matrix, passing it in here will allow this function to properly center the volume.
Object
:
A default initial view state that centers the image within the view: { target: [x, y, 0], zoom: -zoom }.
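The centering computation described above can be sketched with plain math. This is an illustrative reimplementation, not Viv's source; `defaultViewState` and its argument shapes are names assumed for this example:

```javascript
// Sketch: choose a log2 zoom that fits the image inside the viewport,
// then back off by zoomBackOff (1 unit == 2x further zoomed out).
function defaultViewState(image, viewSize, zoomBackOff = 0) {
  const fittingZoom = Math.log2(
    Math.min(viewSize.width / image.width, viewSize.height / image.height)
  );
  return {
    target: [image.width / 2, image.height / 2, 0], // image center
    zoom: fittingZoom - zoomBackOff
  };
}

const vs = defaultViewState(
  { width: 4096, height: 2048 },
  { width: 1024, height: 1024 },
  0.5
);
console.log(vs.zoom); // -2.5 (log2(0.25) = -2, backed off by 0.5)
```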
The documentation in this section shows each layer as an object
with properties, although this is not strictly accurate. These are deck.gl layer classes, and the listed properties
are the props
passed into each layer class's constructor. For more information, see deck.gl's documentation on how to use layers or the layer class. We welcome contributions to improve the docs, whether small fixes or a new docs site that would allow us to present these classes properly. Thanks!
Type: object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(Array)
: Image pyramid. PixelSource[], where each PixelSource is decreasing in shape.
(Array)
: Selection to be used for fetching data.
(Array<Array<number>>?)
: Override for the possible max/min values (i.e. something different than 65535 for uint16/'<u2').
(string?)
: Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
(String?)
: Unique identifier for this layer.
(function?)
: Custom override for handling tile-fetching errors.
(function?)
: Hook function from deck.gl to handle hover objects.
(number?)
: Maximum parallel ongoing requests allowed before aborting.
(function?)
: Hook function from deck.gl to handle clicked-on objects.
(Object?)
: Math.gl Matrix4 object containing an affine transformation to be applied to the image.
(string?)
: 'best-available' | 'no-overlap' | 'never' will be passed to TileLayer. A default will be chosen based on opacity.
(boolean?)
: Whether to exclude the background image. The background image is also excluded for opacity!=1.
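To make the contrastLimits prop above concrete, this sketch shows the ramp function conceptually applied per pixel (the actual work happens in Viv's shaders on the GPU; `applyContrastLimits` is a name invented for this example):

```javascript
// Sketch: linearly map [begin, end] to [0, 1], clamping outside the range.
function applyContrastLimits(value, [begin, end]) {
  return Math.min(1, Math.max(0, (value - begin) / (end - begin)));
}

console.log(applyContrastLimits(50, [100, 600]));    // 0 (below begin)
console.log(applyContrastLimits(350, [100, 600]));   // 0.5 (midpoint)
console.log(applyContrastLimits(65535, [100, 600])); // 1 (clamped)
```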
Type: Object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(Object)
: PixelSource. Represents an N-dimensional image.
(Array)
: Selection to be used for fetching data.
(Array<Array<number>>?)
: Override for the possible max/min values (i.e. something different than 65535 for uint16/'<u2').
(string?)
: Id for the current view. This needs to match the viewState id in deck.gl and is necessary for the lens.
(function?)
: Hook function from deck.gl to handle hover objects.
(function?)
: Hook function from deck.gl to handle clicked-on objects.
(Object?)
: Math.gl Matrix4 object containing an affine transformation to be applied to the image.
(function?)
: Function that gets called when the data in the viewport loads.
(String?)
: Unique identifier for this layer.
Type: object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(string)
: Dtype for the layer.
(Array<number>?)
: Override for the possible max/min values (i.e. something different than 65535 for uint16/'<u2').
(String?)
: Unique identifier for this layer.
(function?)
: Hook function from deck.gl to handle hover objects.
(function?)
: Hook function from deck.gl to handle clicked-on objects.
(Object?)
: Math.gl Matrix4 object containing an affine transformation to be applied to the image.
(("nearest" | "linear")?)
: The minFilter and magFilter for luma.gl rendering (see https://luma.gl/docs/api-reference/core/resources/sampler#texture-magnification-filter) - default is 'nearest'.
Type: Object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(Array)
: PixelSource[]. Represents an N-dimensional image.
(Array)
: Selection to be used for fetching data.
(Array<Array<number>>?)
: Override for the possible max/min values (i.e. something different than 65535 for uint16/'<u2').
(number?)
: Resolution at which you would like to see the volume and load it into memory (0 is the highest resolution, loader.length - 1 the lowest; default 0).
(Object?)
: A column major affine transformation to be applied to the volume.
(function?)
: Function that gets called when the data in the viewport loads.
(boolean?)
: Whether or not to use the default progress text + indicator (default is true)
(function?)
: A callback to be used for getting updates of the progress, ({ progress }) => {}
(Array?)
: deck.gl extensions
to add to the layers - default is AdditiveBlendExtension from ColorPalette3DExtensions.
Type: Object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(string)
: Dtype for the layer.
(Array<Array<number>>?)
: Override for the possible max/min values (i.e. something different than 65535 for uint16/'<u2').
(Object?)
: A column major affine transformation to be applied to the volume.
(Object?)
: Matrix for scaling the volume based on the (downsampled) resolution being displayed.
(Array?)
: deck.gl extensions
to add to the layers - default is AdditiveBlendExtension from ColorPalette3DExtensions.
Type: object
(number?)
: Opacity of the layer.
(function?)
: Hook function from deck.gl to handle clicked-on objects.
(Object?)
: Math.gl Matrix4 object containing an affine transformation to be applied to the image.
(number?)
: One of WhiteIsZero, BlackIsZero, YCbCr, or RGB (default).
(Array<number>?)
: An RGB (0-255 range) color to be considered "transparent" if provided.
In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0.
This parameter only needs to be a truthy value when using colormaps because each colormap has its own transparent color that is calculated on the shader.
Thus setting this to a truthy value (with a colormap set) indicates that the shader should make that color transparent.
(String?)
: Unique identifier for this layer.
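The transparentColor behavior above can be sketched in plain JavaScript. This is conceptual only; the real comparison happens in the fragment shader, and `fragmentOpacity` is a name invented for this illustration:

```javascript
// Sketch: a fragment whose color exactly equals transparentColor gets opacity 0.
function fragmentOpacity(color, transparentColor, layerOpacity) {
  const matches =
    Array.isArray(transparentColor) &&
    color.every((c, i) => c === transparentColor[i]);
  return matches ? 0 : layerOpacity;
}

console.log(fragmentOpacity([0, 0, 0], [0, 0, 0], 1));  // 0 (background removed)
console.log(fragmentOpacity([12, 0, 0], [0, 0, 0], 1)); // 1
```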
Type: Object
(Array<Array<number>>)
: List of [begin, end] values to control each channel's ramp function.
(Array<boolean>)
: List of boolean values for each channel for whether or not it is visible.
(Array)
: PixelSource[]. Assumes multiscale if loader.length > 1.
(Array)
: Selection to be used for fetching data.
(number?)
: Width of the bounding box in px (default: 1).
(number?)
: Viewport outline width in px (default: 2).
(String?)
: Unique identifier for this layer.
Type: Object
(String)
: Physical unit size per pixel at full resolution.
(Number)
: Physical size of a pixel.
(Object)
: The current viewState for the desired view. We cannot internally use this.context.viewport because it is one frame behind:
https://github.com/visgl/deck.gl/issues/4504
(Array?)
: Bounding box of the view in which this should render.
(string?)
: Id from the parent layer.
(number?)
: Value from 0 to 1 representing the portion of the view to be used for the length part of the scale bar.
(boolean)
: If true, aligns the scale bar value to predefined intervals for clearer readings, adjusting units if necessary.
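A common way to implement the snapping described above is rounding down to 1-2-5 multiples of a power of ten; this is a hedged sketch of the idea, not necessarily Viv's exact intervals, and `snapLength` is a name invented here:

```javascript
// Sketch: snap a raw physical length down to the nearest 1/2/5 x 10^k value.
function snapLength(raw) {
  const exp = Math.floor(Math.log10(raw));
  const base = raw / 10 ** exp; // normalized to [1, 10)
  const snappedBase = base >= 5 ? 5 : base >= 2 ? 2 : 1;
  return snappedBase * 10 ** exp;
}

console.log(snapLength(373)); // 200
console.log(snapLength(87));  // 50
console.log(snapLength(6.2)); // 5
```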
This deck.gl extension allows for a lens that selectively shows one channel in its chosen color and then the others in white.
Type: Object
(boolean?)
: Whether or not to use the lens.
(number?)
: Numeric index of the channel to be focused on by the lens.
(number?)
: Pixel radius of the lens (default: 100).
(number?)
: Percentage of the radius of the lens for a border (default 0.02).
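Conceptually, the lens masks fragments by their distance from the mouse position. This hedged sketch shows the selection logic only (the real implementation runs in the shader; `lensMode` and its return labels are invented for the example):

```javascript
// Sketch: inside lensRadius (minus the border band), show only the selected
// channel in its color; outside the lens, render all channels in white.
function lensMode(fragXY, mouseXY, lensRadius, lensBorderRadius = 0.02) {
  const d = Math.hypot(fragXY[0] - mouseXY[0], fragXY[1] - mouseXY[1]);
  const innerRadius = lensRadius * (1 - lensBorderRadius);
  if (d <= innerRadius) return 'selected-channel-color';
  if (d <= lensRadius) return 'border';
  return 'all-channels-white';
}

console.log(lensMode([110, 100], [100, 100], 100)); // 'selected-channel-color'
console.log(lensMode([205, 100], [100, 100], 100)); // 'all-channels-white'
```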
This deck.gl extension allows for an additive colormap like viridis or jet to be used for pseudo-coloring channels.
Type: object
(number?)
: Opacity of the layer.
(string?)
: String indicating a colormap (default: 'viridis'). The full list of options is here:
https://github.com/glslify/glsl-colormap#glsl-colormap
(boolean?)
: Indicates whether the shader should make the output of colormap_function(0) color transparent
This deck.gl extension allows for a color palette to be used for pseudo-coloring channels.
Type: object
(number?)
: Opacity of the layer.
(Array<number>?)
: An RGB (0-255 range) color to be considered "transparent" if provided.
In other words, any fragment shader output equal to transparentColor (before applying opacity) will have opacity 0.
(Boolean?)
: Whether or not to use the value provided to transparentColor.
This object contains the BaseExtension, which can be extended for other color palette-style rendering, as well as implementations of three ray casting algorithms as extensions.
Type: object
(object)
(object)
(object)
(object)
This object contains the BaseExtension, which can be extended for other additive colormap-style (e.g. viridis, jet) rendering, as well as implementations of three ray casting algorithms as extensions.
Type: object
(object)
(object)
(object)
(object)
Opens an OME-TIFF via URL and returns the data source and associated metadata for the first or all images in the file.
((string | File))
url or File object. If the url is prefixed with file://, loading is attempted with GeoTIFF's 'fromFile',
which requires access to Node's fs module.
(Object
= {}
)
Name | Description |
---|---|
opts.headers Headers?
|
Headers passed to each underlying fetch request. |
opts.offsets Array<number>?
|
Indexed-Tiff IFD offsets. |
opts.pool GeoTIFF.Pool?
|
A geotiff.js Pool for decoding image chunks. |
opts.images ('first' | 'all')?
(default 'first' )
|
Whether to return 'all' or only the 'first' image in the OME-TIFF. If 'all', an array of promises (Array<Promise<{ data: TiffPixelSource[], metadata: ImageMeta }>>) is returned. |
(Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}> | Array<Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>>)
:
data source and associated OME-TIFF metadata.
Opens root of multiscale OME-Zarr via URL.
Promise<{data: Array<ZarrPixelSource>, metadata: RootAttrs}>
:
data source and associated OME-Zarr metadata.
Opens root directory generated via bioformats2raw --file_type=zarr
. Uses OME-XML metadata,
and assumes the first image. This function is the Zarr equivalent of loadOmeTiff.
Promise<{data: Array<ZarrPixelSource>, metadata: ImageMeta}>
:
data source and associated OMEXML metadata.
Opens multiple TIFFs as a multidimensional "stack" of 2D planes. Also supports loading multiple slices of a stack from a single stacked TIFF. Returns the data source and OME-TIFF-like metadata.
(any)
Promise<{data: Array<TiffPixelSource>, metadata: ImageMeta}>
:
data source and associated metadata.
const { data, metadata } = await loadMultiTiff([
[{ c: 0, t: 0, z: 0 }, 'https://example.com/channel_0.tif'],
[{ c: 1, t: 0, z: 0 }, 'https://example.com/channel_1.tif'],
[[{ c: 2, t: 0, z: 0 }, { c: 3, t: 0, z: 0 }], 'https://example.com/channels_2-3.tif'],
]);
await data[0].getRaster({ selection: { c: 0, t: 0, z: 0 } });
// { data: Uint16Array[...], width: 500, height: 500 }
Type: number
DTYPE_VALUES is planned for removal from Viv's public API, as it leaks internal implementation details. If this is something your project relies on, please open an issue for further discussion.
More info can be found here: https://github.com/hms-dbmi/viv/pull/372#discussion_r571707517
(any)
(any)
Type: string
(any)
(any)
(any)
(any)
(any)
(any)
(any)
(any)
(any)
(any)
Converts x, y tile indices to zarr dimension Slices within image bounds.
(any)
(any)
(any)
(any)
(any)
(any)
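The tile-to-slice conversion mentioned above can be sketched as follows (illustrative only; Viv's internal helper is not public API, and `tileToSlices` is a name invented for this example):

```javascript
// Sketch: convert (x, y) tile indices into [start, stop) pixel bounds,
// clipped to the image extent so edge tiles stay within range.
function tileToSlices({ x, y, tileSize, width, height }) {
  return {
    xSlice: [x * tileSize, Math.min((x + 1) * tileSize, width)],
    ySlice: [y * tileSize, Math.min((y + 1) * tileSize, height)]
  };
}

console.log(tileToSlices({ x: 2, y: 0, tileSize: 512, width: 1200, height: 800 }));
// { xSlice: [ 1024, 1200 ], ySlice: [ 0, 512 ] }
```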
Type: string
Type: string