Gemini Plugin

Plugin implementation for Google Gemini API transcription

source

GeminiPlugin

 GeminiPlugin ()

Google Gemini API transcription plugin.


source

GeminiPlugin.update_config

 GeminiPlugin.update_config (config:Dict[str,Any])

Update plugin configuration, adjusting max_output_tokens if the model changes.

Type Details
config Dict New configuration values
Returns None
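
The adjustment can be pictured as a lookup on a per-model limit table: when the model changes, the plugin tracks that model's output token limit. A minimal self-contained sketch of that logic (the `TOKEN_LIMITS` table and the standalone `update_config` function here are illustrative, not the plugin's actual internals; the real plugin discovers limits from the API):

```python
from typing import Any, Dict

# Illustrative limits; the real plugin discovers these from the Gemini API.
TOKEN_LIMITS: Dict[str, int] = {
    "gemini-2.5-flash": 65_536,
    "gemini-2.0-flash": 8_192,
}

def update_config(config: Dict[str, Any], updates: Dict[str, Any]) -> Dict[str, Any]:
    """Merge new values; if the model changed, track its output token limit."""
    merged = {**config, **updates}
    if merged["model"] != config.get("model") and merged["model"] in TOKEN_LIMITS:
        merged["max_output_tokens"] = TOKEN_LIMITS[merged["model"]]
    return merged

cfg = {"model": "gemini-2.5-flash", "max_output_tokens": 65_536}
cfg = update_config(cfg, {"model": "gemini-2.0-flash"})
print(cfg["max_output_tokens"])  # 8192
```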

source

GeminiPlugin.get_model_info

 GeminiPlugin.get_model_info (model_name:Optional[str]=None)

Get information about a specific model including token limits.

Type Default Details
model_name Optional None Model name to get info for, defaults to current model
Returns Dict Returns dict with model information
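
Based on the fields printed in the testing section below, the returned dict carries at least the model name and its token limits; a sketch of its shape (illustrative only, the real dict may carry more keys):

```python
# Illustrative shape of the dict returned by get_model_info().
model_info = {
    "name": "gemini-2.5-flash",
    "output_token_limit": 65_536,
    "current_max_output_tokens": 65_536,
}
for key, value in model_info.items():
    print(f"{key}: {value}")
```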

source

GeminiPlugin.get_available_models

 GeminiPlugin.get_available_models ()

Get list of available audio-capable models.


source

GeminiPlugin.cleanup

 GeminiPlugin.cleanup ()

Clean up resources.


source

GeminiPlugin.execute_stream

 GeminiPlugin.execute_stream (audio:Union[cjm_transcription_plugin_system.
                              core.AudioData,str,pathlib.Path], **kwargs)

Stream transcription results chunk by chunk.

This method streams transcription chunks in real-time as they are generated by the Gemini API.

Args:
    audio: Audio data or path to audio file
    **kwargs: Additional plugin-specific parameters

Yields:
    str: Partial transcription text chunks as they become available

Returns:
    TranscriptionResult: Final complete transcription with metadata

Example:
    >>> # Stream transcription chunks in real-time
    >>> for chunk in plugin.execute_stream(audio_file):
    ...     print(chunk, end="", flush=True)

Type Details
audio Union Audio data object or path to audio file
kwargs VAR_KEYWORD
Returns Generator Yields text chunks, returns final result
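
Because `execute_stream` is a generator that both yields chunks and returns a final value, the complete result travels in the generator's return, which Python delivers via `StopIteration.value`. A hedged sketch of how a caller might capture both (the `fake_stream` generator below is a stand-in for the real plugin call, which needs an API key):

```python
from typing import Generator

def fake_stream() -> Generator[str, None, str]:
    """Stand-in for plugin.execute_stream(): yields chunks, returns full text."""
    chunks = ["Hello, ", "world."]
    for c in chunks:
        yield c
    return "".join(chunks)  # final result, carried in StopIteration.value

gen = fake_stream()
parts = []
while True:
    try:
        parts.append(next(gen))  # consume chunks as they arrive
    except StopIteration as stop:
        final = stop.value       # the generator's return value
        break

print(parts)  # ['Hello, ', 'world.']
print(final)  # Hello, world.
```

A plain `for` loop over the generator discards the return value, so callers that want the final `TranscriptionResult` need to drive the generator manually as above.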

source

GeminiPlugin.supports_streaming

 GeminiPlugin.supports_streaming ()

Check if this plugin supports streaming transcription.

Returns:
    bool: True, as Gemini supports streaming transcription
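
A caller can use this check to prefer the streaming path and fall back to blocking execution otherwise. A minimal sketch with a dummy plugin (the `DummyPlugin` class and `transcribe` helper are illustrative, not part of the plugin system):

```python
class DummyPlugin:
    """Illustrative stand-in for a transcription plugin."""
    def supports_streaming(self) -> bool:
        return True

    def execute(self, audio) -> str:
        return "full text"

    def execute_stream(self, audio):
        yield "full "
        yield "text"

def transcribe(plugin, audio) -> str:
    # Prefer streaming when the plugin offers it; otherwise block.
    if plugin.supports_streaming():
        return "".join(plugin.execute_stream(audio))
    return plugin.execute(audio)

print(transcribe(DummyPlugin(), b""))  # full text
```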

Testing the Plugin

# Test basic functionality
plugin = GeminiPlugin()

# Check availability
print(f"Gemini available: {plugin.is_available()}")
print(f"Plugin name: {plugin.name}")
print(f"Plugin version: {plugin.version}")
print(f"Supported formats: {plugin.supported_formats}")
Gemini available: True
Plugin name: gemini
Plugin version: 1.0.0
Supported formats: ['wav', 'mp3', 'aiff', 'aac', 'ogg', 'flac']
# Test configuration schema
schema = plugin.get_config_schema()
print("Configuration properties:")
for prop, details in list(schema["properties"].items())[:5]:
    print(f"  {prop}: {details.get('description', 'No description')}")
Configuration properties:
  model: Gemini model to use for transcription
  api_key: Google API key (defaults to GEMINI_API_KEY env var)
  prompt: Prompt for transcription
  temperature: Sampling temperature
  top_p: Top-p sampling parameter
# Test initialization (requires API key)
import os

if os.environ.get("GEMINI_API_KEY"):
    plugin.initialize({"model": "gemini-2.5-flash"})
    print(f"Initialized with model: {plugin.config['model']}")
    
    # Get available models
    models = plugin.get_available_models()
    print(f"\nFound {len(models)} available models")
    print("Top 5 models:")
    for model in models[:5]:
        print(f"  - {model}")
else:
    print("Set GEMINI_API_KEY environment variable to test initialization")
Initialized with model: gemini-2.5-flash

Found 33 available models
Top 5 models:
  - gemma-3n-e4b-it
  - gemma-3n-e2b-it
  - gemma-3-4b-it
  - gemma-3-27b-it
  - gemma-3-1b-it

Testing Dynamic Token Limits

Test that max_output_tokens is dynamically updated based on the selected model’s output_token_limit.

# Test dynamic token limit updates
if os.environ.get("GEMINI_API_KEY"):
    # Initialize plugin
    plugin = GeminiPlugin()
    plugin.initialize({"model": "gemini-2.5-flash"})
    
    # Check token limits for different models
    print("Token limits for different models:")
    print("-" * 50)
    
    # Display token limits that were discovered
    for model_name in list(plugin.model_token_limits.keys())[:5]:
        token_limit = plugin.model_token_limits[model_name]
        print(f"{model_name}: {token_limit:,} tokens")
    
    print("\nCurrent configuration:")
    print(f"Model: {plugin.config['model']}")
    print(f"Max output tokens: {plugin.config['max_output_tokens']:,}")
    
    # Get model info
    model_info = plugin.get_model_info()
    print(f"\nModel info for {model_info['name']}:")
    print(f"  Output token limit: {model_info['output_token_limit']:,}")
    print(f"  Current max_output_tokens: {model_info['current_max_output_tokens']:,}")
else:
    print("Set GEMINI_API_KEY environment variable to test token limits")
Token limits for different models:
--------------------------------------------------
gemini-2.5-pro-preview-03-25: 65,536 tokens
gemini-2.5-flash-preview-05-20: 65,536 tokens
gemini-2.5-flash: 65,536 tokens
gemini-2.5-flash-lite-preview-06-17: 65,536 tokens
gemini-2.5-pro-preview-05-06: 65,536 tokens

Current configuration:
Model: gemini-2.5-flash
Max output tokens: 65,536

Model info for gemini-2.5-flash:
  Output token limit: 65,536
  Current max_output_tokens: 65,536
# Test switching models and automatic token limit update
if os.environ.get("GEMINI_API_KEY"):
    # Switch to a different model
    print("Testing model switching and token limit updates:")
    print("-" * 50)
    
    test_models = ["gemini-2.5-flash", "gemini-1.5-pro", "gemini-2.0-flash"]
    
    for model_name in test_models:
        if model_name in plugin.model_token_limits:
            # Update configuration with new model
            plugin.update_config({"model": model_name})
            
            print(f"\nSwitched to model: {model_name}")
            print(f"  Token limit: {plugin.model_token_limits[model_name]:,}")
            print(f"  Config max_output_tokens: {plugin.config['max_output_tokens']:,}")
            
            # Verify schema is updated
            schema = plugin.get_config_schema()
            max_tokens_prop = schema["properties"]["max_output_tokens"]
            print(f"  Schema maximum: {max_tokens_prop['maximum']:,}")
            print(f"  Schema default: {max_tokens_prop['default']:,}")
else:
    print("Set GEMINI_API_KEY environment variable to test model switching")
Testing model switching and token limit updates:
--------------------------------------------------

Switched to model: gemini-2.5-flash
  Token limit: 65,536
  Config max_output_tokens: 65,536
  Schema maximum: 65,536
  Schema default: 65,536

Switched to model: gemini-2.0-flash
  Token limit: 8,192
  Config max_output_tokens: 8,192
  Schema maximum: 65,536
  Schema default: 65,536
# Test execution with runtime model override
if os.environ.get("GEMINI_API_KEY"):
    print("Testing runtime model override:")
    print("-" * 50)
    
    # Create test audio
    import numpy as np
    from cjm_transcription_plugin_system.core import AudioData
    
    test_audio = AudioData(
        samples=np.random.randn(16000).astype(np.float32) * 0.1,
        sample_rate=16000,
        duration=1.0,
        filepath=None,
        metadata={}
    )
    
    # Current model and token limit
    print(f"Current model: {plugin.config['model']}")
    print(f"Current max_output_tokens: {plugin.config['max_output_tokens']:,}")
    
    # Execute with a different model at runtime
    override_model = "gemini-2.0-flash" if plugin.config['model'] != "gemini-2.0-flash" else "gemini-2.5-flash"
    
    if override_model in plugin.model_token_limits:
        print(f"\nExecuting with override model: {override_model}")
        print(f"Expected token limit: {plugin.model_token_limits[override_model]:,}")
        
        try:
            result = plugin.execute(
                test_audio,
                model=override_model,
                prompt="This is a test audio signal."
            )
            
            print(f"\nTranscription metadata:")
            print(f"  Model used: {result.metadata['model']}")
            print(f"  Max output tokens: {result.metadata['max_output_tokens']:,}")
            
            # Check if config was updated
            print(f"\nConfig after execution:")
            print(f"  Model: {plugin.config['model']}")
            print(f"  Max output tokens: {plugin.config['max_output_tokens']:,}")
            
        except Exception as e:
            print(f"Execution error (expected for random audio): {e}")
else:
    print("Set GEMINI_API_KEY environment variable to test runtime override")
Set GEMINI_API_KEY environment variable to test runtime override