Using the Node SDK
The Dragoneye Node.js SDK simplifies integrating with our APIs in your JavaScript/TypeScript projects. This guide covers installation, example usage, type definitions, and available endpoints.
Installation
Install the SDK using npm:
npm install dragoneye-node
Quick Start
Once installed, you can call the classifier using your desired model:
import { Dragoneye } from "dragoneye-node";
// Create a client with your API key.
const dragoneyeClient = new Dragoneye({
  apiKey: "<YOUR_ACCESS_TOKEN>",
});
// Wrap the video file as a Media object.
const media = await Dragoneye.Video.fromFilePath("example.mp4");
// Run classification with the chosen model and inspect the predictions.
const results = await dragoneyeClient.classification.predictVideo(
  media,
  "dragoneye/animals"
);
console.log(results.predictions);
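For video predictions, results.predictions is keyed by frame (see ClassificationPredictVideoResponse below). A minimal sketch of walking the result from the Quick Start, assuming the response shape documented in the Types section:
// Iterate the frame-keyed predictions returned above.
for (const [frame, objects] of Object.entries(results.predictions)) {
  console.log(`Frame ${frame}: ${objects.length} object(s)`);
  for (const obj of objects) {
    console.log(`  ${obj.category.displayName}`);
  }
}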
Types and Endpoints
Types
TaxonID
A TaxonID is a unique numeric identifier for each taxon (e.g., a category or trait).
export type TaxonID = number & { readonly brand: unique symbol };
Use the createTaxonID helper function to safely create TaxonID values:
export function createTaxonID(taxonId: number): TaxonID;
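A short illustration of wrapping a raw number (the numeric value here is arbitrary):
// Wrap a raw numeric ID in the branded TaxonID type before using it.
const taxonId: TaxonID = createTaxonID(123);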
TaxonType
Defines whether a taxon is a "category" or a "trait".
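A minimal sketch of what this union might look like; the exact definition in the SDK may differ:
// Assumed shape of TaxonType, based on the description above.
export type TaxonType = "category" | "trait";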
NormalizedBbox
Represents the location of an object in an image. It is an array of four numbers: [x_min, y_min, x_max, y_max].
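A sketch of the corresponding tuple type, assuming the coordinates are fractions of the image width and height:
// [x_min, y_min, x_max, y_max]; assumed to be normalized to the 0-1 range.
export type NormalizedBbox = [number, number, number, number];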
TaxonPrediction
Describes a predicted taxon returned by the API. Predictions may include children to represent hierarchies.
export type TaxonPrediction = {
  id: TaxonID;
  type: TaxonType;
  name: string;
  displayName: string;
  score?: number;
  children: TaxonPrediction[];
};
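Because predictions can nest, a small helper is often handy for flattening the hierarchy. A sketch (the helper name is ours, not part of the SDK):
// Recursively collect a TaxonPrediction and all of its descendants into a flat list.
function flattenTaxonPredictions(prediction: TaxonPrediction): TaxonPrediction[] {
  return [prediction, ...prediction.children.flatMap(flattenTaxonPredictions)];
}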
ClassificationTraitRootPrediction
Contains predictions for traits related to a detected object.
export interface ClassificationTraitRootPrediction {
  id: TaxonID;
  name: string;
  displayName: string;
  taxons: TaxonPrediction[];
}
ClassificationObjectPrediction
Represents a predicted object in an image.
export interface ClassificationObjectPrediction {
  normalizedBbox: NormalizedBbox;
  category: TaxonPrediction;
  traits: ClassificationTraitRootPrediction[];
}
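For example, a sketch that prints a detected object's bounding box, category, and trait predictions, using only the fields defined above:
// Summarize one detected object from a predictions array.
function describeObject(obj: ClassificationObjectPrediction): void {
  const [xMin, yMin, xMax, yMax] = obj.normalizedBbox;
  console.log(`${obj.category.displayName} at [${xMin}, ${yMin}, ${xMax}, ${yMax}]`);
  for (const traitRoot of obj.traits) {
    const names = traitRoot.taxons.map((t) => t.displayName).join(", ");
    console.log(`  ${traitRoot.displayName}: ${names}`);
  }
}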
ClassificationPredictImageResponse
Response structure for image predictions.
export interface ClassificationPredictImageResponse {
  predictions: ClassificationObjectPrediction[];
  prediction_task_uuid?: string;
}
ClassificationPredictVideoResponse
Response structure for video predictions.
export interface ClassificationPredictVideoResponse {
  predictions: Record<number, ClassificationObjectPrediction[]>; // keyed by frame number
  prediction_task_uuid?: string;
}
PredictionTaskStatusResponse
Represents the status of an async prediction task.
export interface PredictionTaskStatusResponse {
  prediction_task_uuid: string;
  status: string; // e.g. "predicted", "failed"
}
Endpoints
predict_image (Image Classification)
Performs a classification prediction on a single image.
Arguments:
- media: A Media object wrapping an image.
- modelName: The name of the model to use.
Response:
- Returns a ClassificationPredictImageResponse.
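A sketch of calling it, mirroring the Quick Start. Dragoneye.Image.fromFilePath is assumed here as the image counterpart of the video helper shown above; check the SDK for the exact helper and method names:
// `dragoneyeClient` is the client created in the Quick Start.
// Assumed image counterpart of Dragoneye.Video.fromFilePath.
const imageMedia = await Dragoneye.Image.fromFilePath("example.jpg");
// Assumed to mirror predictVideo's (media, modelName) signature.
const imageResults = await dragoneyeClient.classification.predictImage(
  imageMedia,
  "dragoneye/animals"
);
console.log(imageResults.predictions);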
predict_video (Video Classification)
Performs a classification prediction on a video.
Arguments:
- media: A Media object wrapping a video.
- modelName: The model name to use.
- framesPerSecond: Number of frames per second to sample.
Response:
- Returns a ClassificationPredictVideoResponse.
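The Quick Start shows a basic call; the sketch below also passes the sampling rate. Treat the argument position of framesPerSecond as an assumption:
// Reusing `media` and `dragoneyeClient` from the Quick Start.
// framesPerSecond shown as a third positional argument; that position is an assumption.
const sampledResults = await dragoneyeClient.classification.predictVideo(
  media,
  "dragoneye/animals",
  1 // sample one frame per second
);
console.log(sampledResults.predictions);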
get_status (Prediction Task Status)
Checks the status of an in-progress prediction task.
Arguments:
- predictionTaskUuid: The UUID of the prediction task.
Response:
- Returns a PredictionTaskStatusResponse.
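A polling sketch. The accessor name getStatus is assumed from the endpoint name, and manual polling is rarely needed because the predict methods poll for you (see Notes):
// `getStatus` accessor name assumed; placeholder UUID for illustration.
const predictionTaskUuid = "<PREDICTION_TASK_UUID>";
let statusResponse = await dragoneyeClient.classification.getStatus(predictionTaskUuid);
while (statusResponse.status !== "predicted" && statusResponse.status !== "failed") {
  await new Promise((resolve) => setTimeout(resolve, 1000)); // wait one second between checks
  statusResponse = await dragoneyeClient.classification.getStatus(predictionTaskUuid);
}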
get_results (Retrieve Prediction Results)
Fetches the results of a completed prediction task.
Arguments:
- predictionTaskUuid: The UUID of the prediction task.
- predictionType: "image" or "video".
Response:
- If "image", returns a ClassificationPredictImageResponse.
- If "video", returns a ClassificationPredictVideoResponse.
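A sketch of fetching finished results, again with the accessor name (getResults) assumed from the endpoint name:
// `getResults` accessor name assumed; predictionType selects the response shape.
const completedResults = await dragoneyeClient.classification.getResults(
  "<PREDICTION_TASK_UUID>",
  "image"
);
console.log(completedResults.predictions);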
Notes
- All methods are asynchronous and return Promises.
- For images, always use predict_image; for videos, use predict_video. Passing the wrong media type will throw an IncorrectMediaTypeError.
- Predictions run as tasks: the SDK automatically begins the task, uploads the media, initiates the prediction, polls for completion, and retrieves the results.
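If you want to guard against passing the wrong media type, a sketch of catching the error (assuming IncorrectMediaTypeError is exported from the package root):
import { IncorrectMediaTypeError } from "dragoneye-node"; // export location assumed
try {
  // `media` would wrap an image here, so the video endpoint rejects it.
  await dragoneyeClient.classification.predictVideo(media, "dragoneye/animals");
} catch (error) {
  if (error instanceof IncorrectMediaTypeError) {
    console.error("predict_video requires video media; use predict_image for images.");
  } else {
    throw error;
  }
}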