Plugin Interface

Abstract base class defining the interface for transcription plugins.

source

PluginInterface

 PluginInterface ()

Base interface that all transcription plugins must implement.


source

PluginInterface.execute_stream

 PluginInterface.execute_stream (audio:Union[cjm_transcription_plugin_system.core.AudioData,str,pathlib.Path],
                                 **kwargs)

Stream transcription results chunk by chunk.

Default implementation falls back to execute() without streaming. Plugins can override this to provide real streaming capabilities.

Args:
    audio: Audio data or path to audio file
    **kwargs: Additional plugin-specific parameters

Yields:
    str: Partial transcription text chunks as they become available

Returns:
    TranscriptionResult: Final complete transcription with metadata

Example:
    >>> # Stream transcription chunks in real-time
    >>> for chunk in plugin.execute_stream(audio_file):
    ...     print(chunk, end="", flush=True)
    >>>
    >>> # Or collect all chunks and get final result
    >>> generator = plugin.execute_stream(audio_file)
    >>> chunks = []
    >>> for chunk in generator:
    ...     chunks.append(chunk)
    >>> result = generator.value  # Final TranscriptionResult

          Type         Details
audio     Union        Audio data or path to audio file
kwargs    VAR_KEYWORD  Additional plugin-specific parameters
Returns   Generator    Yields text chunks, returns final result

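The yield-then-return contract described above can be sketched outside the plugin system. The snippet below is a minimal illustration of how an overriding plugin's generator might behave; execute_stream_sketch and FinalResult are hypothetical stand-ins (FinalResult plays the role of TranscriptionResult) and are not part of this library.

from dataclasses import dataclass
from typing import Generator, List

@dataclass
class FinalResult:
    """Hypothetical stand-in for TranscriptionResult."""
    text: str

def execute_stream_sketch(chunks: List[str]) -> Generator[str, None, FinalResult]:
    """Yield partial text chunks, then return the assembled final result."""
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        yield chunk  # caller sees partial text as soon as it is available
    # A generator's return value is delivered via StopIteration.value
    return FinalResult(text="".join(collected))

# Consume the stream, then capture the final result from StopIteration.
gen = execute_stream_sketch(["Hello ", "streaming ", "world."])
try:
    while True:
        print(next(gen), end="", flush=True)
except StopIteration as stop:
    final = stop.value
print(f"\nFinal text: {final.text}")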
source

PluginInterface.supports_streaming

 PluginInterface.supports_streaming ()

Check if this plugin supports streaming transcription.

Returns:
    bool: True if execute_stream is implemented and streaming is supported

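As a usage sketch, a caller might gate on this check and fall back to execute() when streaming is unavailable. The helper below is illustrative only; it assumes plugin is any PluginInterface implementation and that the non-streaming result exposes a text attribute, as TranscriptionResult does in the examples later on this page.

def transcribe_with_optional_streaming(plugin, audio_file):
    """Prefer streaming output when the plugin advertises support for it."""
    if plugin.supports_streaming():
        parts = []
        for chunk in plugin.execute_stream(audio_file):
            print(chunk, end="", flush=True)  # show partial text immediately
            parts.append(chunk)
        return "".join(parts)
    # Non-streaming fallback: one-shot transcription
    result = plugin.execute(audio_file)
    return result.text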

source

PluginMeta

 PluginMeta (name:str, version:str, description:str='', author:str='',
             package_name:str='',
             instance:Optional[__main__.PluginInterface]=None,
             enabled:bool=True)

Metadata about a plugin.

Testing PluginMeta dataclass

# Test PluginMeta dataclass
meta = PluginMeta(
    name="test_plugin",
    version="1.0.0",
    description="A test plugin",
    author="Test Author"
)

print("PluginMeta instance:")
print(meta)
print(f"\nName: {meta.name}")
print(f"Version: {meta.version}")
print(f"Enabled: {meta.enabled}")
print(f"Instance: {meta.instance}")

# Test with minimal arguments
minimal_meta = PluginMeta(name="minimal", version="0.1.0")
print(f"\nMinimal PluginMeta: {minimal_meta}")

# Test equality
meta_copy = PluginMeta(name="minimal", version="0.1.0")
print(f"Equality test: {minimal_meta == meta_copy}")
PluginMeta instance:
PluginMeta(name='test_plugin', version='1.0.0', description='A test plugin', author='Test Author', package_name='', instance=None, enabled=True)

Name: test_plugin
Version: 1.0.0
Enabled: True
Instance: None

Minimal PluginMeta: PluginMeta(name='minimal', version='0.1.0', description='', author='', package_name='', instance=None, enabled=True)
Equality test: True
class ExamplePlugin(PluginInterface):
    """An example plugin implementation with configuration schema."""

    def __init__(self):
        self.logger = logging.getLogger(f"{__name__}.{type(self).__name__}")
        self.config = {}
        self.model = None
    
    @property
    def name(self) -> str:
        return "example_plugin"
    
    @property
    def version(self) -> str:
        return "1.0.0"

    @property
    def supported_formats(self) -> List[str]:
        return ["wav", "mp3", "flac"]

    @staticmethod
    def get_config_schema() -> Dict[str, Any]:
        """Return the configuration schema for this plugin."""
        return {
            "type": "object",
            "properties": {
                "model": {
                    "type": "string",
                    "enum": ["tiny", "base", "small", "medium", "large"],
                    "default": "base",
                    "description": "Model size to use for transcription"
                },
                "language": {
                    "type": "string",
                    "default": "auto",
                    "description": "Language code (e.g., 'en', 'es') or 'auto' for detection"
                },
                "batch_size": {
                    "type": "integer",
                    "minimum": 1,
                    "maximum": 32,
                    "default": 8,
                    "description": "Batch size for processing"
                },
                "enable_vad": {
                    "type": "boolean",
                    "default": True,
                    "description": "Enable voice activity detection"
                }
            },
            "required": ["model"]
        }
    
    def get_current_config(self) -> Dict[str, Any]:
        """Return the current configuration."""
        # Merge defaults with actual config
        defaults = self.get_config_defaults()
        return {**defaults, **self.config}
    
    def initialize(self, config: Optional[Dict[str, Any]] = None) -> None:
        """Initialize the plugin."""
        if config:
            is_valid, error = self.validate_config(config)
            if not is_valid:
                raise ValueError(f"Invalid configuration: {error}")
        
        # Merge provided config with defaults
        defaults = self.get_config_defaults()
        self.config = {**defaults, **(config or {})}
        
        self.logger.info(f"Initializing {self.name} with config: {self.config}")
        
        # Simulate loading a model based on config
        model_name = self.config.get("model", "base")
        self.model = f"MockModel-{model_name}"
    
    def execute(self, *args, **kwargs) -> Any:
        """Execute the plugin's functionality."""
        self.logger.info(f"Example plugin executed with model: {self.model}")
        self.logger.info(f"Config: {self.config}")
        return f"Transcription using {self.model}"

    def is_available(self) -> bool:
        """Check availability."""
        return True
    
    def cleanup(self) -> None:
        """Clean up resources."""
        self.logger.info(f"Cleaning up {self.name}")
        self.model = None
logging.basicConfig(level=logging.INFO)

example_plugin = ExamplePlugin()
example_plugin.initialize()
transcription_result = example_plugin.execute("test_audio.mp3")
example_plugin.cleanup()
print(transcription_result)
INFO:__main__.ExamplePlugin:Initializing example_plugin with config: {'model': 'base', 'language': 'auto', 'batch_size': 8, 'enable_vad': True}
INFO:__main__.ExamplePlugin:Example plugin executed with model: MockModel-base
INFO:__main__.ExamplePlugin:Config: {'model': 'base', 'language': 'auto', 'batch_size': 8, 'enable_vad': True}
INFO:__main__.ExamplePlugin:Cleaning up example_plugin
Transcription using MockModel-base

Testing Configuration Schema

# Test the configuration schema functionality
plugin = ExamplePlugin()

# Get the configuration schema
schema = plugin.get_config_schema()
print("Configuration Schema:")
print(json.dumps(schema, indent=2))

print("\n" + "="*50 + "\n")

# Get default configuration
defaults = plugin.get_config_defaults()
print("Default Configuration:")
print(json.dumps(defaults, indent=2))

print("\n" + "="*50 + "\n")

# Initialize with partial config (using defaults for missing values)
plugin.initialize({"model": "small", "language": "en"})
print("Current Configuration after initialization:")
print(json.dumps(plugin.get_current_config(), indent=2))

print("\n" + "="*50 + "\n")

# Test configuration validation
test_configs = [
    ({"model": "tiny"}, "Valid config with minimal settings"),
    ({"model": "invalid_model"}, "Invalid model name"),
    ({"batch_size": 100}, "Missing required 'model' field"),
    ({"model": "base", "batch_size": 100}, "Batch size exceeds maximum"),
    ({"model": "base", "batch_size": "not_a_number"}, "Invalid type for batch_size"),
]

for config, description in test_configs:
    is_valid, error = plugin.validate_config(config)
    print(f"{description}:")
    print(f"  Config: {config}")
    print(f"  Valid: {is_valid}")
    if error:
        print(f"  Error: {error}")
    print()
INFO:__main__.ExamplePlugin:Initializing example_plugin with config: {'model': 'small', 'language': 'en', 'batch_size': 8, 'enable_vad': True}
Configuration Schema:
{
  "type": "object",
  "properties": {
    "model": {
      "type": "string",
      "enum": [
        "tiny",
        "base",
        "small",
        "medium",
        "large"
      ],
      "default": "base",
      "description": "Model size to use for transcription"
    },
    "language": {
      "type": "string",
      "default": "auto",
      "description": "Language code (e.g., 'en', 'es') or 'auto' for detection"
    },
    "batch_size": {
      "type": "integer",
      "minimum": 1,
      "maximum": 32,
      "default": 8,
      "description": "Batch size for processing"
    },
    "enable_vad": {
      "type": "boolean",
      "default": true,
      "description": "Enable voice activity detection"
    }
  },
  "required": [
    "model"
  ]
}

==================================================

Default Configuration:
{
  "model": "base",
  "language": "auto",
  "batch_size": 8,
  "enable_vad": true
}

==================================================

Current Configuration after initialization:
{
  "model": "small",
  "language": "en",
  "batch_size": 8,
  "enable_vad": true
}

==================================================

Valid config with minimal settings:
  Config: {'model': 'tiny'}
  Valid: True

Invalid model name:
  Config: {'model': 'invalid_model'}
  Valid: False
  Error: 'invalid_model' is not one of ['tiny', 'base', 'small', 'medium', 'large']

Failed validating 'enum' in schema['properties']['model']:
    {'type': 'string',
     'enum': ['tiny', 'base', 'small', 'medium', 'large'],
     'default': 'base',
     'description': 'Model size to use for transcription'}

On instance['model']:
    'invalid_model'

Missing required 'model' field:
  Config: {'batch_size': 100}
  Valid: False
  Error: 'model' is a required property

Failed validating 'required' in schema:
    {'type': 'object',
     'properties': {'model': {'type': 'string',
                              'enum': ['tiny',
                                       'base',
                                       'small',
                                       'medium',
                                       'large'],
                              'default': 'base',
                              'description': 'Model size to use for '
                                             'transcription'},
                    'language': {'type': 'string',
                                 'default': 'auto',
                                 'description': 'Language code (e.g., '
                                                "'en', 'es') or 'auto' for "
                                                'detection'},
                    'batch_size': {'type': 'integer',
                                   'minimum': 1,
                                   'maximum': 32,
                                   'default': 8,
                                   'description': 'Batch size for '
                                                  'processing'},
                    'enable_vad': {'type': 'boolean',
                                   'default': True,
                                   'description': 'Enable voice activity '
                                                  'detection'}},
     'required': ['model']}

On instance:
    {'batch_size': 100}

Batch size exceeds maximum:
  Config: {'model': 'base', 'batch_size': 100}
  Valid: False
  Error: 100 is greater than the maximum of 32

Failed validating 'maximum' in schema['properties']['batch_size']:
    {'type': 'integer',
     'minimum': 1,
     'maximum': 32,
     'default': 8,
     'description': 'Batch size for processing'}

On instance['batch_size']:
    100

Invalid type for batch_size:
  Config: {'model': 'base', 'batch_size': 'not_a_number'}
  Valid: False
  Error: 'not_a_number' is not of type 'integer'

Failed validating 'type' in schema['properties']['batch_size']:
    {'type': 'integer',
     'minimum': 1,
     'maximum': 32,
     'default': 8,
     'description': 'Batch size for processing'}

On instance['batch_size']:
    'not_a_number'

Example: Whisper Plugin Implementation

class WhisperPlugin(PluginInterface):
    """Example Whisper transcription plugin with comprehensive configuration."""
    
    def __init__(self):
        self.logger = logging.getLogger(f"{__name__}.{type(self).__name__}")
        self.config = {}
        self.model = None
        self.processor = None
    
    @property
    def name(self) -> str:
        return "whisper"
    
    @property
    def version(self) -> str:
        return "1.0.0"
    
    @property
    def supported_formats(self) -> List[str]:
        return ["wav", "mp3", "flac", "m4a", "ogg", "webm"]

    @staticmethod
    def get_config_schema() -> Dict[str, Any]:
        """Return comprehensive Whisper configuration schema."""
        return {
            "$schema": "http://json-schema.org/draft-07/schema#",
            "type": "object",
            "title": "Whisper Configuration",
            "properties": {
                "model": {
                    "type": "string",
                    "enum": ["tiny", "tiny.en", "base", "base.en", "small", "small.en", 
                            "medium", "medium.en", "large", "large-v1", "large-v2", "large-v3"],
                    "default": "base",
                    "description": "Whisper model size. Larger models are more accurate but slower."
                },
                "device": {
                    "type": "string",
                    "enum": ["cpu", "cuda", "mps", "auto"],
                    "default": "auto",
                    "description": "Computation device for inference"
                },
                "compute_type": {
                    "type": "string",
                    "enum": ["default", "float16", "float32", "int8", "int8_float16"],
                    "default": "default",
                    "description": "Model precision/quantization"
                },
                "language": {
                    "type": ["string", "null"],
                    "default": None,
                    "description": "Language code (e.g., 'en', 'es', 'fr') or null for auto-detection",
                    "examples": ["en", "es", "fr", "de", "ja", "zh", None]
                },
                "task": {
                    "type": "string",
                    "enum": ["transcribe", "translate"],
                    "default": "transcribe",
                    "description": "Task to perform (transcribe keeps original language, translate converts to English)"
                },
                "temperature": {
                    "type": "number",
                    "minimum": 0.0,
                    "maximum": 1.0,
                    "default": 0.0,
                    "description": "Sampling temperature. 0 for deterministic, higher values for more variation"
                },
                "beam_size": {
                    "type": "integer",
                    "minimum": 1,
                    "maximum": 10,
                    "default": 5,
                    "description": "Beam search width. Higher values may improve accuracy but are slower"
                },
                "best_of": {
                    "type": "integer",
                    "minimum": 1,
                    "maximum": 10,
                    "default": 5,
                    "description": "Number of candidates when sampling with non-zero temperature"
                },
                "patience": {
                    "type": "number",
                    "minimum": 0.0,
                    "maximum": 2.0,
                    "default": 1.0,
                    "description": "Beam search patience factor"
                },
                "length_penalty": {
                    "type": ["number", "null"],
                    "default": None,
                    "description": "Exponential length penalty during beam search"
                },
                "suppress_tokens": {
                    "type": ["array", "string"],
                    "items": {"type": "integer"},
                    "default": "-1",
                    "description": "Token IDs to suppress. '-1' for default suppression, empty array for none"
                },
                "initial_prompt": {
                    "type": ["string", "null"],
                    "default": None,
                    "description": "Optional text to provide as prompt for first window"
                },
                "condition_on_previous_text": {
                    "type": "boolean",
                    "default": True,
                    "description": "Use previous output as prompt for next window"
                },
                "no_speech_threshold": {
                    "type": "number",
                    "minimum": 0.0,
                    "maximum": 1.0,
                    "default": 0.6,
                    "description": "Threshold for detecting silence"
                },
                "compression_ratio_threshold": {
                    "type": "number",
                    "minimum": 1.0,
                    "maximum": 10.0,
                    "default": 2.4,
                    "description": "Threshold for detecting repetition"
                },
                "logprob_threshold": {
                    "type": "number",
                    "default": -1.0,
                    "description": "Average log probability threshold"
                },
                "word_timestamps": {
                    "type": "boolean",
                    "default": False,
                    "description": "Extract word-level timestamps"
                },
                "prepend_punctuations": {
                    "type": "string",
                    "default": "\"'“¿([{-",
                    "description": "Punctuations to merge with next word"
                },
                "append_punctuations": {
                    "type": "string",
                    "default": "\"'.。,,!!??::”)]}、",
                    "description": "Punctuations to merge with previous word"
                },
                "vad_filter": {
                    "type": "boolean",
                    "default": False,
                    "description": "Enable voice activity detection filter"
                },
                "vad_parameters": {
                    "type": "object",
                    "properties": {
                        "threshold": {
                            "type": "number",
                            "minimum": 0.0,
                            "maximum": 1.0,
                            "default": 0.5
                        },
                        "min_speech_duration_ms": {
                            "type": "integer",
                            "minimum": 0,
                            "default": 250
                        },
                        "max_speech_duration_s": {
                            "type": "number",
                            "minimum": 0,
                            "default": 3600
                        }
                    },
                    "default": {}
                }
            },
            "required": ["model"],
            "additionalProperties": False
        }
    
    def get_current_config(self) -> Dict[str, Any]:
        """Return current configuration with all defaults applied."""
        defaults = self.get_config_defaults()
        current = {**defaults, **self.config}
        
        # Handle nested vad_parameters
        if "vad_parameters" in current and isinstance(current["vad_parameters"], dict):
            vad_defaults = {
                "threshold": 0.5,
                "min_speech_duration_ms": 250,
                "max_speech_duration_s": 3600
            }
            current["vad_parameters"] = {**vad_defaults, **current["vad_parameters"]}
        
        return current
    
    def initialize(self, config: Optional[Dict[str, Any]] = None) -> None:
        """Initialize the Whisper model with configuration."""
        if config:
            is_valid, error = self.validate_config(config)
            if not is_valid:
                raise ValueError(f"Invalid configuration: {error}")
        
        # Merge with defaults
        defaults = self.get_config_defaults()
        self.config = {**defaults, **(config or {})}
        
        self.logger.info(f"Initializing Whisper with config: {self.config}")
        
        # In a real implementation, this would load the actual Whisper model
        # For example:
        # import whisper
        # self.model = whisper.load_model(self.config["model"], device=self.config["device"])
        
        # Mock implementation
        self.model = f"WhisperModel-{self.config['model']}"
        self.processor = f"WhisperProcessor-{self.config['device']}"
    
    def execute(self, audio_data: Union[AudioData, str, Path], **kwargs) -> TranscriptionResult:
        """Transcribe audio using Whisper."""
        if not self.model:
            raise RuntimeError("Plugin not initialized. Call initialize() first.")
        
        # Override config with any provided kwargs
        exec_config = {**self.config, **kwargs}
        
        self.logger.info(f"Transcribing with Whisper model: {self.model}")
        self.logger.info(f"Execution config: {exec_config}")
        
        # In a real implementation, this would:
        # 1. Load/preprocess audio
        # 2. Run Whisper inference
        # 3. Post-process results
        
        # Mock transcription result
        return TranscriptionResult(
            text=f"Mock transcription using {exec_config['model']} model",
            confidence=0.95,
            segments=[
                {
                    "start": 0.0,
                    "end": 2.5,
                    "text": "Mock transcription",
                    "confidence": 0.96
                },
                {
                    "start": 2.5,
                    "end": 5.0,
                    "text": f"using {exec_config['model']} model",
                    "confidence": 0.94
                }
            ],
            metadata={
                "model": exec_config["model"],
                "language": exec_config.get("language", "auto-detected"),
                "device": exec_config["device"],
                "task": exec_config["task"]
            }
        )
    
    def is_available(self) -> bool:
        """Check if Whisper dependencies are available."""
        try:
            # In real implementation, check for whisper package
            # import whisper
            # return True
            return True  # Mock always available
        except ImportError:
            return False
    
    def cleanup(self) -> None:
        """Clean up model from memory."""
        self.logger.info("Cleaning up Whisper model")
        self.model = None
        self.processor = None
        # In real implementation: del self.model, torch.cuda.empty_cache(), etc.
# Test the Whisper plugin
whisper_plugin = WhisperPlugin()

# Get a subset of the schema for display (it's quite large)
schema = whisper_plugin.get_config_schema()
print("Whisper Configuration Schema (subset):")
print("Available models:", schema["properties"]["model"]["enum"])
print("Available devices:", schema["properties"]["device"]["enum"])
print("\nRequired fields:", schema.get("required", []))

print("\n" + "="*50 + "\n")

# Test initialization with different configurations
configs_to_test = [
    {"model": "tiny"},
    {"model": "large-v3", "device": "cuda", "language": "en"},
    {"model": "base", "temperature": 0.2, "beam_size": 3, "word_timestamps": True}
]

for config in configs_to_test:
    print(f"Initializing with config: {config}")
    whisper_plugin.initialize(config)
    current = whisper_plugin.get_current_config()
    print(f"Current model: {current['model']}")
    print(f"Current device: {current['device']}")
    print(f"Word timestamps: {current['word_timestamps']}")
    
    # Execute transcription
    result = whisper_plugin.execute("dummy_audio.wav")
    print(f"Result: {result.text}")
    print("-" * 30)
INFO:__main__.WhisperPlugin:Initializing Whisper with config: {'model': 'tiny', 'device': 'auto', 'compute_type': 'default', 'language': None, 'task': 'transcribe', 'temperature': 0.0, 'beam_size': 5, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': False, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
INFO:__main__.WhisperPlugin:Transcribing with Whisper model: WhisperModel-tiny
INFO:__main__.WhisperPlugin:Execution config: {'model': 'tiny', 'device': 'auto', 'compute_type': 'default', 'language': None, 'task': 'transcribe', 'temperature': 0.0, 'beam_size': 5, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': False, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
INFO:__main__.WhisperPlugin:Initializing Whisper with config: {'model': 'large-v3', 'device': 'cuda', 'compute_type': 'default', 'language': 'en', 'task': 'transcribe', 'temperature': 0.0, 'beam_size': 5, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': False, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
INFO:__main__.WhisperPlugin:Transcribing with Whisper model: WhisperModel-large-v3
INFO:__main__.WhisperPlugin:Execution config: {'model': 'large-v3', 'device': 'cuda', 'compute_type': 'default', 'language': 'en', 'task': 'transcribe', 'temperature': 0.0, 'beam_size': 5, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': False, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
INFO:__main__.WhisperPlugin:Initializing Whisper with config: {'model': 'base', 'device': 'auto', 'compute_type': 'default', 'language': None, 'task': 'transcribe', 'temperature': 0.2, 'beam_size': 3, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': True, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
INFO:__main__.WhisperPlugin:Transcribing with Whisper model: WhisperModel-base
INFO:__main__.WhisperPlugin:Execution config: {'model': 'base', 'device': 'auto', 'compute_type': 'default', 'language': None, 'task': 'transcribe', 'temperature': 0.2, 'beam_size': 3, 'best_of': 5, 'patience': 1.0, 'length_penalty': None, 'suppress_tokens': '-1', 'initial_prompt': None, 'condition_on_previous_text': True, 'no_speech_threshold': 0.6, 'compression_ratio_threshold': 2.4, 'logprob_threshold': -1.0, 'word_timestamps': True, 'prepend_punctuations': '"\'“¿([{-', 'append_punctuations': '"\'.。,,!!??::”)]}、', 'vad_filter': False, 'vad_parameters': {}}
Whisper Configuration Schema (subset):
Available models: ['tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large', 'large-v1', 'large-v2', 'large-v3']
Available devices: ['cpu', 'cuda', 'mps', 'auto']

Required fields: ['model']

==================================================

Initializing with config: {'model': 'tiny'}
Current model: tiny
Current device: auto
Word timestamps: False
Result: Mock transcription using tiny model
------------------------------
Initializing with config: {'model': 'large-v3', 'device': 'cuda', 'language': 'en'}
Current model: large-v3
Current device: cuda
Word timestamps: False
Result: Mock transcription using large-v3 model
------------------------------
Initializing with config: {'model': 'base', 'temperature': 0.2, 'beam_size': 3, 'word_timestamps': True}
Current model: base
Current device: auto
Word timestamps: True
Result: Mock transcription using base model
------------------------------
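
Because execute() merges keyword arguments over the stored configuration (exec_config = {**self.config, **kwargs}), individual calls can override settings without re-initializing. A small sketch, reusing the whisper_plugin instance and placeholder audio path from the cells above:

whisper_plugin.initialize({"model": "base"})

# kwargs take precedence over the initialized config for this call only
result = whisper_plugin.execute(
    "dummy_audio.wav",
    language="fr",
    task="translate",
)
print(result.metadata["language"], result.metadata["task"])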