Commit Graph

63 Commits

Dimitris Athanasiou 5d670e45ac
Revert "[ML] Only one of `inference_threads` and `model_threads` may be great… (#84794)" (#85089)
This reverts commit 4eaedb265d.

On further investigation of how to improve allocation of trained models,
we concluded that being able to set `inference_threads` in combination with
`model_threads` is fundamental for scalability.
2022-03-18 09:41:27 +02:00
Benjamin Trent 258d2b71e2
[ML] add roberta/bart docs (#85001)
Adds a roberta section to the NLP tokenization documentation.
2022-03-17 12:14:57 -04:00
Dimitris Athanasiou 4eaedb265d
[ML] Only one of `inference_threads` and `model_threads` may be great… (#84794)
When starting a trained model deployment, the user may set values for `inference_threads`
or `model_threads`. The first improves latency whereas the latter improves throughput.
It is easier to reason about how a model allocation uses resources if we ensure only
one of those two may be greater than one. In addition, it allows us to distribute
the cores of the ML nodes in the cluster across the model allocations in the future.

This commit adds a validation that prevents both `inference_threads` and `model_threads`
from being greater than one.
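
As a rough sketch, a start deployment call that satisfies the new validation might look like the following (the endpoint path, model ID, and values are illustrative, not taken from this commit):

```
POST _ml/trained_models/my_model/deployment/_start?inference_threads=1&model_threads=4
```

A request setting both parameters to values greater than one would be rejected by the validation.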
2022-03-09 16:33:35 +02:00
David Kyle 27ae82139a
[ML] Add throughput stats for Trained Model Deployments (#84628)
Throughput is measured as the number of inference requests
processed per minute. The node-level stats peak_throughput_per_minute,
throughput_last_minute, and average_inference_time_ms_last_minute are
added, along with a deployment-level stat peak_throughput_per_minute,
which is the summed throughput of all nodes.
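
An illustrative shape for the new fields, with hypothetical values (assuming they appear in the trained model stats response under `deployment_stats` and its per-node entries):

```
"deployment_stats": {
  "peak_throughput_per_minute": 1800,
  "nodes": [
    {
      "peak_throughput_per_minute": 950,
      "throughput_last_minute": 910,
      "average_inference_time_ms_last_minute": 12.4
    }
  ]
}
```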
2022-03-08 11:06:36 +00:00
Benjamin Trent 45deac4c96
[ML] add windowing support for text_classification (#83989)
This commit adds initial windowing support for text_classification tasks.

Specifically, a user can now set a non-negative `span` that controls the tokenization window used when creating
sub-sequences.

The default value, `span: -1`, indicates that no windowing should take place.
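
A minimal sketch of how the option could be supplied in the tokenization config (assuming a `bert` tokenizer; the value is illustrative):

```
"tokenization": {
  "bert": {
    "span": 128
  }
}
```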
2022-03-01 08:29:12 -05:00
Lisa Cawley 104efd4343
[DOCS] Minor edits to trained model APIs (#81549) 2022-02-09 13:44:13 -08:00
David Kyle c1fbf87de8
[ML] Add error counts to trained model stats (#82705)
Adds inference_count, timeout_count, rejected_execution_count
and error_count fields to trained model stats.
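
Illustratively (field names from this commit; values hypothetical and placement within the stats response assumed):

```
"inference_count": 12048,
"timeout_count": 3,
"rejected_execution_count": 0,
"error_count": 1
```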
2022-01-27 16:18:20 +00:00
David Kyle 1473b09415
[ML] Add NLP inference configs to the inference processor docs (#82320) 2022-01-11 08:50:45 +00:00
Benjamin Trent 9dc8aea1cb
[ML] adds new mpnet tokenization for nlp models (#82234)
This commit adds support for MPNet based models.

MPNet models differ from BERT style models in that:

 - Special tokens are different
 - Input to the model doesn't require token positions.

To configure an MPNet tokenizer for your pytorch MPNet based model:

```
"tokenization": {
  "mpnet": {...}
}
```
The options provided to `mpnet` are the same as the previously supported `bert` configuration.
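
For example, a sketch using options documented for the `bert` configuration (option names assumed to carry over; values illustrative):

```
"tokenization": {
  "mpnet": {
    "do_lower_case": false,
    "with_special_tokens": true,
    "max_sequence_length": 512,
    "truncate": "first"
  }
}
```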
2022-01-05 12:56:47 -05:00
Dimitris Athanasiou 14a63ac115
[ML] Improve reporting of trained model size stats (#82000)
This improves reporting of trained model size in the response of the stats API.

In particular, it removes the `model_size_bytes` from the `deployment_stats` section and
replaces it with a top-level `model_size_stats` object that contains:

- `model_size_bytes`: the actual model size
- `required_native_memory_bytes`: the amount of memory required to load a model

In addition, these are now reported for PyTorch models regardless of their deployment state.
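
Illustratively, the stats response would now contain something like the following (values hypothetical):

```
"model_size_stats": {
  "model_size_bytes": 265632254,
  "required_native_memory_bytes": 594323456
}
```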
2021-12-22 18:20:47 +02:00
David Kyle d1ee756da8
[ML][DOCS] Add note about max values of thread settings (#81367) 2021-12-14 13:07:34 +00:00
David Kyle 3c974a1e5d
[ML][DOCS] Remove orphaned GET deployment stats doc (#81505) 2021-12-09 08:32:33 +00:00
Lisa Cawley 429bdd9afc
[DOCS] Move trained model APIs out of dataframe analytics (#81315) 2021-12-03 09:21:09 -08:00