To reduce LLM token consumption and query latency when processing large datasets, enable optimized mode in the following managed AI functions:
• AI.IF
• AI.CLASSIFY
This feature is in Preview.
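As an illustration, here is a minimal sketch of calling these functions from BigQuery SQL. The dataset, table, column, and connection ID are placeholders, and the exact argument names and signatures are assumptions; check the current function reference before relying on them.

```sql
-- Hypothetical example: filter rows with AI.IF, then label the survivors
-- with AI.CLASSIFY. All identifiers below are placeholders.
SELECT
  ticket_text,
  AI.CLASSIFY(
    ticket_text,
    categories => ['billing', 'technical', 'other'],  -- assumed parameter name
    connection_id => 'us.my_connection'               -- placeholder connection
  ) AS category
FROM my_dataset.support_tickets
WHERE AI.IF(
  ('Is this ticket written in English?', ticket_text),
  connection_id => 'us.my_connection'
);
```

With optimized mode enabled, the intent is that such queries consume fewer LLM tokens and return faster over large tables; behavior may change while the feature is in Preview.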