Prerequisites
- Enterprise notebook entitlement enabled for your workspace
- A warehouse or Spark connection that your role is permitted to use
- Browser permissions allowing WebSocket connections to the notebook service (corporate proxies sometimes block these)
Setup steps
Create a notebook
Open Notebooks → New notebook. Name it after the investigation (q3_churn_slice_explore) so teammates recognize the intent.
Pick a kernel
Choose SQL, Python, or Scala depending on connector support. Match the dialect to your warehouse when using SQL kernels.
Attach a connection
Select the connection from the notebook sidebar. Test connectivity with a trivial query (SELECT 1 or SELECT current_timestamp()).
Run cells top to bottom
Execute the import and parameter cells first, then the analysis cells. Restart the kernel if packages are installed or credentials change mid-session.
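The attach-then-smoke-test and imports-and-parameters-first patterns can be sketched as a pair of cells. Here sqlite3 stands in for whatever connection the notebook attaches; it is a hypothetical stand-in, not the product's connector API, and the parameter names are illustrative:

```python
import sqlite3
from datetime import date

# Cell 1: imports and parameters, run before any analysis cells.
PARAMS = {
    "run_date": date(2024, 7, 1).isoformat(),  # example value; override per run
    "segment": "q3_churn",                     # illustrative parameter name
}

# Cell 2: trivial connectivity check before running heavier queries.
# sqlite3 is a stand-in for the attached warehouse/Spark connection.
conn = sqlite3.connect(":memory:")
smoke = conn.execute("SELECT 1").fetchone()[0]
print(smoke)  # 1 means the connection answers queries
```

If the smoke test fails, fix connectivity before debugging any analysis cell; everything downstream depends on it.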
Notebook node on the pipeline canvas
Some workspaces expose a Notebook node type that executes a saved notebook as part of a pipeline run:
- Author and test the notebook interactively until outputs are stable.
- Parameterize file paths, dates, and environment names; avoid hard-coded prod literals.
- Drag the Notebook node onto the canvas, select the saved artifact, and map parameters from upstream nodes or pipeline variables.
- Run the pipeline in dev with representative partitions before promoting.
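The parameterization step above can be sketched as a notebook's leading parameters cell. Everything here is an assumption for illustration: the environment-variable names (PIPELINE_ENV, RUN_DATE, DATA_ROOT) and the path layout are hypothetical, not product-defined; a real pipeline would map these from upstream nodes or pipeline variables:

```python
import os

# Parameters cell: pipeline runs override these defaults.
# All names below are illustrative, not part of any product API.
ENV = os.environ.get("PIPELINE_ENV", "dev")          # never hard-code "prod"
RUN_DATE = os.environ.get("RUN_DATE", "2024-07-01")  # mapped from an upstream node
DATA_ROOT = os.environ.get("DATA_ROOT", "/data")

# Derive paths from parameters instead of embedding environment literals,
# so the same notebook runs unchanged in dev and prod.
input_path = f"{DATA_ROOT}/{ENV}/churn/dt={RUN_DATE}/events.parquet"
print(input_path)
```

Because the path is derived entirely from parameters, promoting the pipeline only requires changing the variable mapping, not the notebook itself.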
Related topics
Spark integration
Scale notebooks on Spark clusters.
Compute
Configure execution environments for notebooks.