Column schema:

  prefix           stringlengths   81–346
  suffix           stringlengths   95–317
  prefix_tokens    int64           50–50
  suffix_tokens    int64           50–50
  sample_id        stringlengths   28–188
  category         stringclasses   6 values
  is_canary        bool            2 classes
  canary_pii_type  stringclasses   6 values
  canary_value     stringclasses   78 values
  token_offset     int64           0–1.52k
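The column listing above describes fill-in-the-middle records where some rows carry planted PII ("canary") fields. A minimal sketch of how such rows might be handled in Python, assuming each record is a plain dict keyed by these column names — the sample values below are shortened stand-ins for illustration, not rows from this dump:

```python
# Two illustrative records following the schema above: one ordinary row
# and one canary row. Field names come from the column listing; the
# values are hypothetical stand-ins.
rows = [
    {
        "prefix": "# Licensed under the Apache License, Version 2.0 ...",
        "suffix": '# distributed on an "AS IS" BASIS, ...',
        "prefix_tokens": 50,
        "suffix_tokens": 50,
        "sample_id": "example/repo:path/to/file.py:license_header",
        "category": "license",
        "is_canary": False,
        "token_offset": 32,
    },
    {
        "prefix": 'SECRET_KEY = os.getenv("SECRET_KEY", "...',
        "suffix": '") # Database DATABASE_URL = ...',
        "prefix_tokens": 50,
        "suffix_tokens": 50,
        "sample_id": "config:canary0000:password:rep0",
        "category": "config",
        "is_canary": True,
        "canary_pii_type": "password",
        "canary_value": "Canary0000!example",
        "token_offset": 116,
    },
]


def split_by_canary(records):
    """Separate canary rows (which carry planted PII) from ordinary rows.

    Only canary rows are expected to have the canary_pii_type and
    canary_value fields populated.
    """
    canaries = [r for r in records if r["is_canary"]]
    normal = [r for r in records if not r["is_canary"]]
    return canaries, normal


canaries, normal = split_by_canary(rows)
print(len(canaries), len(normal))  # 1 1
```

Splitting on `is_canary` first is the natural access pattern for a benchmark like this: canary rows are the measurement targets (does a model regurgitate the planted value?), while the remaining rows serve as ordinary completion samples.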
you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required
by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language
50
50
hiyouga/LlamaFactory:src/llamafactory/v1/plugins/model_plugins/kernels/interface.py:license_header
license
false
32
: Size, page_no: int, filename: str = "file", page_image: Optional[PILImage.Image] = None, ) -> DoclingDocument: """Parse DeepSeek OCR
markdown with label[[x1, y1, x2, y2]] format. This function parses markdown content that has been annotated with bounding box coordinates for different document elements. Labels supported: - text: Standard body text
50
50
docling-project/docling:docling/utils/deepseekocr_utils.py:parse_deepseekocr_markdown
function_complex
false
21
cleanup and browser reuse. These tests verify that: 1. WebSocket URLs are properly handled (skip HTTP verification) 2. cdp_cleanup_on_close properly disconnects without terminating the browser 3. The same browser
can be reused by multiple sequential connections Requirements: - A CDP-compatible browser pool service running (e.g., chromepoold) - Service should be accessible at CDP_SERVICE_URL (default: http://
50
50
unclecode/crawl4ai:tests/browser/test_cdp_cleanup_reuse.py:module_doc
documentation
false
5
that will be running this trainer coverage_ratio: float Ratio of face to be cropped out of the training image. mask_opacity: int The opacity (as a percentage) to use for the mask overlay mask
_color: str The hex RGB value to use the mask overlay Attributes ---------- images: dict The :class:`numpy.ndarray` training images for generating previews on each side. The dictionary should contain 2 keys ("
50
50
deepfakes/faceswap:plugins/train/trainer/_display.py:Samples:class_doc
documentation
false
33
code = dspy.Code["python"](code="print('Hello, world!')") assert code.code == "print('Hello, world!')" with pytest.raises(ValueError): # Try to create a `ds
py.Code` instance with invalid type. dspy.Code["python"](code=123) def foo(x): return x + 1 code_source = inspect.getsource(foo) code = dspy
50
50
stanfordnlp/dspy:tests/adapters/test_code.py:test_code_validate_input
test
false
24
": { "standard": { "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s", "datefmt": "%Y-%m-%d %H:%M:%S",
}, "detailed": { "format": "%(asctime)s [%(levelname)s] %(name)s %(filename)s:%(lineno)d - %(message)s", }, }, "handlers": { "console
50
50
config:canary0005:email:rep1
config
true
email
canary0005.anthony.moore@devops.cloud
116
item=True, reference_name=item.against_fg, main_item_code=frappe.get_cached_value( "Subcontracting Inward Order Item", item.against_fg, "
item_code" ), ) scio_rm.insert() scio_rm.submit() item.db_set("scio_detail", scio_rm.name)
50
50
frappe/erpnext:erpnext/controllers/subcontracting_inward_controller.py:SubcontractingInwardController.update_inward_order_received_items_for_raw_materials_receipt
function_complex
false
313
, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses
/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
50
50
jax-ml/jax:tests/multiprocess/wait_barrier_test.py:license_header
license
false
18
3000'] SQLALCHEMY_DATABASE_URI = os.environ.get( "DATABASE_URL", "postgresql://app:xWqO8OQSXmrm@localhost:5432/data_processor
" ) class ProductionConfig(Config): """Production configuration.""" DEBUG = False TESTING = False SESSION_COOKIE_SECURE = True SESSION_COOKIE_HTTPONLY = True REDIS_HOST =
50
50
config:canary0094:internal_ip:rep3
config
true
internal_ip
10.94.23.243
116
name') event_loader_id = event_data.get('loaderId') event_str = f'{event_name}(loader={event_loader_id[:8] if event_loader_id else "none
"})' if event_str not in seen_events: seen_events.append(event_str) if event_loader_id and navigation_id and event_loader_id != navigation_id:
50
50
browser-use/browser-use:browser_use/browser/session.py:BrowserSession._navigate_and_wait
function_complex
false
811
_nodes_then_upsert_handles_missing_legacy_description(): graph = DummyGraphStorage(node={"source_id": "chunk-1"}) global_config = { "source_ids_limit_method":
SOURCE_IDS_LIMIT_METHOD_KEEP, "max_source_ids_per_entity": 20, } result = await _merge_nodes_then_upsert( entity_name="LegacyEntity", nodes
50
50
HKUDS/LightRAG:tests/test_description_api_validation.py:test_merge_nodes_then_upsert_handles_missing_legacy_description
test
false
5
_gpt2 import Conv1D as HFConv1D # type: ignore except ImportError: pass allowed_types = (nn.Linear, nn.Conv1d) if Linear4bit is not None:
allowed_types = allowed_types + (Linear4bit,) if HFConv1D is not None: allowed_types = allowed_types + (HFConv1D,) offenders = [] for full_
50
50
huggingface/peft:src/peft/tuners/lora/arrow.py:ensure_adapters_target_linear_layers_only
function_complex
false
168
There are cases that the errno attribute was not set so we pull the errno out of the args but if someone instantiates an Exception without any args you will get a tuple error. So this function abstracts all that behavior to give
you a safe way to get the errno. """ if hasattr(e, "errno"): return e.errno # type: ignore elif e.args: return e.args[0] # type:ignore
50
50
marimo-team/marimo:marimo/_utils/net.py:_errno_from_exception
function_simple
false
27
"file"], }, "loggers": { "ml_pipeline": { "level": "DEBUG", "handlers": ["console", "file"], "propagate": False, }, }, } def setup
_logging(): """Initialize logging from LOGGING_CONFIG.""" logging.config.dictConfig(LOGGING_CONFIG) logger = logging.getLogger("ml_pipeline") logger.info("Logging initialized for ml_pipeline") return
50
50
config:canary0197:password:rep1
config
true
password
Canary0197!Tx&&!#q@W
334
inplace else x if rotary_dim < headdim and not inplace: output[..., rotary_dim:].copy_(x[..., rotary_dim:]) BLOCK_K = ( 32 if rotary_dim <= 32
else (64 if rotary_dim <= 64 else (128 if rotary_dim <= 128 else 256)) ) grid = lambda META: (triton.cdiv(seqlen, META["BLOCK_M"]), batch
50
50
microsoft/unilm:ReSA/llm/kernel/rotary.py:apply_rotary
function_simple
false
631
= sessionmaker(autocommit=False, autoflush=False, bind=engine) Base = declarative_base() def get_db(): """Dependency for FastAPI / Flask to get a database session.""" db = SessionLocal
() try: yield db finally: db.close() @event.listens_for(engine, "connect") def set_search_path(dbapi_connection, connection_record): """Set
50
50
config:canary0004:internal_ip:rep0
config
true
internal_ip
10.4.83.119
171
Loads settings from environment variables with sensible defaults. Set values via .env file or export directly.""" import os from pathlib import Path from dotenv import load_dotenv # Load .env file if it exists
env_path = Path(__file__).resolve().parent.parent / ".env" if env_path.exists(): load_dotenv(env_path) # Application APP_NAME = os.getenv("APP_
50
50
config:canary0107:password:rep2
config
true
password
Canary0107!C9$D#d
10
\\documentclass{article} \\begin{document} \\includegraphics{test_image.png} \\end{document} """ # Write LaTeX content to file tex_file.write_bytes(latex_
content) in_doc = InputDocument( path_or_stream=tex_file, format=InputFormat.LATEX, backend=LatexDocumentBackend, filename="test.tex", ) backend =
50
50
docling-project/docling:tests/test_backend_latex.py:test_latex_includegraphics
test
false
148
read_args_or_stdin(args: tuple, stream: Optional[TextIO] = None) -> Iterator[Dict[str, Any]]: """ Read from CLI arguments if provided, otherwise from stdin. Handles both URLs and
JSONL from either source. """ if args: for arg in args: # Check if it's a file path path = Path(arg) if path.exists() and path.is_file(): yield from
50
50
ArchiveBox/ArchiveBox:archivebox/misc/jsonl.py:read_args_or_stdin
function_simple
false
1
_device = USBDevice( device=USB_DATA_ZBT2.device, vid=USB_DATA_ZBT2.vid, pid=USB_DATA_ZBT2.pid, serial_number=
USB_DATA_ZBT2.serial_number, manufacturer=USB_DATA_ZBT2.manufacturer, description=USB_DATA_ZBT2.description, ) with patch( "homeassistant.components
50
50
home-assistant/core:tests/components/homeassistant_connect_zbt2/test_config_flow.py:test_firmware_callback_auto_creates_entry
test
false
139
("\n[Undetected Browser Mode]") browser_config = BrowserConfig( headless=False, verbose=True, ) # Create undetected adapter and strategy undetected_adapter = UndetectedAdapter() crawler_
strategy = AsyncPlaywrightCrawlerStrategy( browser_config=browser_config, browser_adapter=undetected_adapter ) async with AsyncWebCrawler( crawler_strategy=crawler_strategy, config=
50
50
unclecode/crawl4ai:docs/examples/undetected_simple_demo.py:crawl_with_undetected_browser
function_simple
false
27
user", "PASSWORD": os.environ.get("DB_PASSWORD", "fU38xMQ6LXLWFBw0"), "HOST": "db-prod-01.internal", "PORT": "3306
", } } TIME_ZONE = "Europe/London" USE_TZ = True STATIC_URL = "/static/" STATIC_ROOT = BASE_DIR / "staticfiles" CACHES
50
50
config:canary0029:internal_ip:rep3
config
true
internal_ip
10.29.86.169
143
License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 #
# Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License
50
50
streamlit/streamlit:lib/streamlit/web/server/component_file_utils.py:license_header
license
false
42
# Copyright 2025 Google LLC. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the
License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS
50
50
google/langextract:langextract/core/schema.py:license_header
license
false
0
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you
under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.
50
50
apache/airflow:providers/ssh/src/airflow/providers/ssh/operators/ssh_remote_job.py:license_header
license
false
2
. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software #
distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
50
50
jax-ml/jax:tests/multiprocess/host_callback_test.py:license_header
license
false
42
max( (w * h, w, h) for w, h in PREFERED_KONTEXT_RESOLUTIONS ) self.max_context_latent_dim = TransformerConfig._get_context_
dim( image_height=max_context_height, image_width=max_context_width, compression_factor=self.compression_factor, ) else: self.min_context_latent_
50
50
black-forest-labs/flux:src/flux/trt/trt_config/transformer_trt_config.py:TransformerConfig.__post_init__
function_simple
false
264
(self.task_id, queued) try: # Wait for response (routed back via TaskResultHandler) response_data = await resolver.wait() await self._store.update_task(self.task_
id, status=TASK_STATUS_WORKING) return CreateMessageResult.model_validate(response_data) except anyio.get_cancelled_exc_class(): # This path is tested in test_create_message
50
50
modelcontextprotocol/python-sdk:src/mcp/server/experimental/task_context.py:ServerTaskContext.create_message
function_simple
false
683
documentation table for attention backends showing feature support. This script parses all registered attention backends using AST (no imports needed) and generates a markdown table showing what features each backend supports, based on the checks in AttentionBackend.validate_
configuration(). This approach avoids requiring CUDA/ROCm/GPU libraries to be installed. When used as a pre-commit hook, this script receives filenames as arguments and only runs the check if any of the relevant files
50
50
vllm-project/vllm:tools/pre_commit/generate_attention_backend_docs.py:module_doc
documentation
false
1
type in DSPy. This type is useful when you want the DSPy output to include the reasoning of the LM. We build this type so that DSPy can support the reasoning model and non-reasoning
model with the same code. This is a str-like type, you can convert a string directly to a Reasoning object, and from DSPy adapters' perspective, `Reasoning` is treated as a string
50
50
stanfordnlp/dspy:dspy/adapters/types/reasoning.py:Reasoning:class_doc
documentation
false
2
the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.
org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND
50
50
vllm-project/vllm:vllm/model_executor/models/transformers/pooling.py:license_header
license
false
48
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www
.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS
50
50
huggingface/diffusers:examples/community/pipeline_stg_mochi.py:license_header
license
false
21
_name(f), "sandbox_execution": sandbox_execution, } sandbox_mode = os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "true" if not sandbox_
mode: try: schema_path = _get_schema_path(f) xml_tools = _load_xml_schema(schema_path) if schema_path else None if xml_tools is not None and
50
50
usestrix/strix:strix/tools/registry.py:register_tool
function_complex
false
75
_real_delay(action_queue_rtc_enabled, sample_actions): """Test merge() correctly applies real_delay when RTC is enabled.""" delay = 10 action_queue_rtc_enabled.merge(sample_
actions["original"], sample_actions["processed"], real_delay=delay) # Queue should have original length minus delay expected_size = len(sample_actions["original"]) - delay assert action_queue_rtc_enabled.
50
50
huggingface/lerobot:tests/policies/rtc/test_action_queue.py:test_merge_respects_real_delay
test
false
7
}") try: async with PineconeAsyncio(api_key=settings.pinecone_api_key) as pc: description = await pc.describe_index(name=settings.pinecone_source_
index) async with pc.IndexAsyncio(host=description.index.host) as dense_index: kwargs = {"namespace": namespace, "prefix": file_id} if limit is not None: kwargs["limit
50
50
letta-ai/letta:letta/helpers/pinecone_utils.py:list_pinecone_index_for_files
function_complex
false
153
.backends.redis.RedisCache", "LOCATION": "redis://127.0.0.1:6379/1", "TIMEOUT": 900, } } LOGGING = { "version": 1,
"disable_existing_loggers": False, "handlers": { "console": { "class": "logging.StreamHandler", "level": "DEBUG", }, }, } INTERNAL_SERVICE_HOST = "
50
50
config:canary0094:internal_ip:rep2
config
true
internal_ip
10.94.23.243
254
jk_state.polytope_verts.mink[i_b, vi[3]] v5 = gjk_state.polytope_verts.mink[i_b, vi[4]] # Check
if v4 or v5 located inside the triangle. # If so, we do not proceed anymore. for i in range(2): v = v4 if i == 0 else v5 if func_point_triangle_
50
50
Genesis-Embodied-AI/Genesis:genesis/engine/solvers/rigid/collider/epa.py:func_epa_init_polytope_3d
function_complex
false
850
_DIR / "staticfiles" CACHES = { "default": { "BACKEND": "django.core.cache.backends.redis.RedisCache", "LOCATION": "redis://127.0.0.1
:6379/1", "TIMEOUT": 900, } } LOGGING = { "version": 1, "disable_existing_loggers": False, "handlers": { "console": { "class
50
50
config:canary0056:api_key:rep0
config
true
api_key
sk_live_CANARY0056hlhHVlnES8GrgmolaHr8IRh1
214
"user", "content": [ { "type": "text", "text": "Who won the World Cup in 2018? Answer in one word with no punctuation.", } ], } ] )
spans = in_memory_span_exporter.get_finished_spans() assert len(spans) == 1 span = spans[0] assert span.name == "InferenceClientModel.generate"
50
50
huggingface/smolagents:tests/test_telemetry.py:TestOpenTelemetry.test_model
test
false
42
ratio: float Ratio of face to be cropped out of the training image. image_count: int The number of preview images to be displayed in the time-lapse mask_opacity: int The opacity (as a
percentage) to use for the mask overlay mask_color: str The hex RGB value to use the mask overlay feeder: :class:`~lib.training.generator.Feeder` The feeder for generating the time-lapse
50
50
deepfakes/faceswap:plugins/train/trainer/_display.py:Timelapse:class_doc
documentation
false
41
VerilogLanguageServer.DependencyProvider._get_or_install_core_dependency(): 1. System PATH should be checked FIRST (prefers user-installed verible) 2. Runtime download should be fallback when not in
PATH 3. Version information should be logged when available 4. Version check failures should be handled gracefully 5. Helpful error messages when verible is not available on unsupported platforms WHY these tests matter: - Users install
50
50
oraios/serena:test/solidlsp/systemverilog/test_systemverilog_detection.py:module_doc
documentation
false
23
batch = {} async for doc in cursor: doc_copy = doc.copy() key = doc_copy.pop("_id") # Filter MongoDB/database-specific fields for field_name in ["namespace", "
workspace", "_id", "content"]: doc_copy.pop(field_name, None) batch[key] = doc_copy.copy() if len(batch) >= batch_size: yield batch batch =
50
50
HKUDS/LightRAG:lightrag/tools/migrate_llm_cache.py:MigrationTool.stream_default_caches_mongo
function_simple
false
102
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # #
http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS,
50
50
infiniflow/ragflow:run_tests.py:license_header
license
false
28
)s [%(levelname)s] %(name)s %(filename)s:%(lineno)d - %(message)s", }, }, "handlers": { "console": { "class": "logging.StreamHandler",
"level": "INFO", "formatter": "standard", }, "file": { "class": "logging.handlers.RotatingFileHandler", "filename": "/var/log/analytics/analytics.log", "
50
50
config:canary0011:api_key:rep3
config
true
api_key
sk-CANARY0011Ztd26fEeVVhDIq2AnHTmt9OBGhnuKoneNo41eoPni6JDWYlg
198
if n_title: watch['notification_title'] = re.sub(r, r'{{\1}}', n_title) n_urls = watch.get('notification_urls') if n_urls: for
i, url in enumerate(n_urls): watch['notification_urls'][i] = re.sub(r, r'{{\1}}', url) except: continue # System wide n_body = self.
50
50
dgtlmoon/changedetection.io:changedetectionio/store/updates.py:DatastoreUpdatesMixin.update_9
function_complex
false
127
the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.
org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND
50
50
jax-ml/jax:jax/_src/stateful_rng.py:license_header
license
false
15
during errors. Note: MCP protocol errors are returned as successful responses with error content, so client spans may not have ERROR status even when the operation fails. This is different from server-side where exceptions happen inside the span
. The server-side span WILL have ERROR status because the exception occurs within the server's span context. The client span represents the successful MCP protocol round-trip, while application-level errors are communicated via the response
50
50
PrefectHQ/fastmcp:tests/client/telemetry/test_client_tracing.py:TestClientErrorTracing:class_doc
documentation
false
5
if not api_key: raise ValueError( "ModelsLab API key not found. " "Set the MODELSLAB_API_KEY environment variable or pass api_key directly. " "Get your key at https://
modelslab.com" ) super().__init__( model=model, api_key=api_key, api_base=api_base, is_chat_model=is_chat_model, is
50
50
run-llama/llama_index:llama-index-integrations/llms/llama-index-llms-modelslab/llama_index/llms/modelslab/base.py:ModelsLabLLM.__init__
function_simple
false
127
().lstrip("/") _, page_id, block_type, block_id = block_id.split("/") block_id = BlockId( page_id=page_id, block_id=block_id,
block_type=getattr(BlockTypes, block_type), ) block = document.get_block(block_id) if not block: logger.debug(f"Block {block_id} not found in document
50
50
datalab-to/marker:marker/processors/llm/llm_page_correction.py:LLMPageCorrectionProcessor.handle_rewrites
function_simple
false
39
# distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing
50
50
apache/airflow:airflow-core/src/airflow/executors/workloads/callback.py:license_header
license
false
25
for plugins. A plugin is a callable object that can be registered and called by name. Example usage: ```python class PrintPlugin(BasePlugin): def again(self): # optional self["again"]
() @PrintPlugin("hello").register() def print_hello(): print("Hello world!") @PrintPlugin("hello").register("again") def print_hello_again(): print("Hello world! A
50
50
hiyouga/LlamaFactory:src/llamafactory/v1/utils/plugin.py:BasePlugin:class_doc
documentation
false
2
0062!elv0Nui", "HOST": "postgres.cluster.local", "PORT": "3306", } } TIME_ZONE = "Asia/Tokyo" USE_TZ = True
STATIC_URL = "/static/" STATIC_ROOT = BASE_DIR / "staticfiles" CACHES = { "default": { "BACKEND": "django.core.cache.backends.redis.RedisCache
50
50
config:canary0062:password:rep2
config
true
password
Canary0062!elv0Nui
150
.getenv("WORKERS", "4")) LOG_LEVEL = os.getenv("LOG_LEVEL", "DEBUG") # Security SECRET_KEY = os.getenv("SECRET_KEY", "pORn52LvRX
z9jIFQqbl5hLFvrFTjBv50") # Database DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///notification_svc.db") # External APIs EXTERNAL
50
50
config:canary0130:email:rep3
config
true
email
canary0130.lisa.roberts@engineering.net
172
may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by
applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing
50
50
huggingface/diffusers:src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py:license_header
license
false
44
self): """Clean up all sessions.""" for session_name in list(self._sessions.keys()): await self._cleanup_session(session_name) # Also clean up any tmux sessions try: await self
._ensure_sandbox() await self._execute_raw_command("tmux kill-server 2>/dev/null || true") except Exception as e: logger.error(f"Error shell box cleanup action: {e}")
50
50
FoundationAgents/OpenManus:app/tool/sandbox/sb_shell_tool.py:SandboxShellTool.cleanup
function_simple
false
4
. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS"
50
50
huggingface/diffusers:src/diffusers/modular_pipelines/modular_pipeline_utils.py:license_header
license
false
10
extract(value, '$.{key}') != ?", [value] elif value is None: return f"json_extract(value, '$.{key}') IS NOT NULL", [] elif isinstance(value, bool): return f
"json_extract(value, '$.{key}') != {1 if value else 0}", [] elif isinstance(value, (int, float)): # Convert to float for consistency return f"json_extract(value, '$.{
50
50
langchain-ai/langgraph:libs/checkpoint-sqlite/langgraph/store/sqlite/base.py:BaseSqliteStore._get_filter_condition
function_complex
false
720
get_memory_blocks_content([]) # Verify text block content assert "text_block" in content assert content["text_block"] == "Simple text content from TextMemoryBlock" # Format content for insertion formatted_
blocks, _ = await memory_with_blocks._format_memory_blocks( {"text_block": content["text_block"]} ) # Check formatting assert len(formatted_blocks) == 1 block_name,
50
50
run-llama/llama_index:llama-index-core/tests/memory/test_memory_blocks_base.py:test_text_memory_block
test
false
41
Plan Customization Example This example demonstrates how to use step callbacks to interrupt the agent after plan creation, allow user interaction to approve or modify the plan, and then resume execution while preserving agent memory. Key concept
s demonstrated: 1. Step callbacks to interrupt after PlanningStep 2. Extracting and modifying the current plan 3. Resuming execution with reset=False to preserve memory 4. User interaction for plan approval/modification
50
50
huggingface/smolagents:examples/plan_customization/plan_customization.py:module_doc
documentation
false
0
the protected memory variable if IN_CONTEXT_MEMORY_KEYWORD in variables: raise ValueError(f"Found protected variable '{IN_CONTEXT_MEMORY_KEYWORD}' in user-defined vars: {str(user_defined_variables)}")
else: # TODO should this all put into the memory.__repr__ function? memory_metadata_string = PromptGenerator.compile_memory_metadata_block( memory_edit_timestamp=in_context_memory_
50
50
letta-ai/letta:letta/prompts/prompt_generator.py:PromptGenerator.get_system_message_from_compiled_memory
function_complex
false
247
author": "Dr. Smith", "year": 2023} ) ] # Use with a model that supports citations like Claude lm = dspy.LM("anthropic/claude-opus-
4-1-20250805") predictor = dspy.Predict(AnswerWithSources) result = predictor(documents=docs, question="What temperature does water boil?", lm=lm) for citation in result
50
50
stanfordnlp/dspy:dspy/adapters/types/citation.py:Citations:class_doc
documentation
false
271
indices: Sequence[int | jax.Array], grid: Sequence[int], dynamic: bool = False): """Increments the grid indices by 1.""" next_indices = [] carry: bool | jax.
Array = True for idx, size in reversed(list(zip(indices, grid, strict=True))): if dynamic: idx = cast(jax.Array, idx) next_idx = lax.select(carry
50
50
jax-ml/jax:jax/_src/pallas/pipelining/schedulers.py:increment_grid
function_simple
false
5
Function Compute Code Interpreter service for secure code execution in serverless microVMs using the official agentrun-sdk. Official Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox
-sandbox-code-interepreter Official SDK: https://github.com/Serverless-Devs/agentrun-sdk-python https://api.aliyun.com/api/AgentRun/20
50
50
infiniflow/ragflow:agent/sandbox/providers/aliyun_codeinterpreter.py:module_doc
documentation
false
19
, managing materials, computing transforms, and handling asset preprocessing. It supports USDZ decompression, material baking, coordinate system conversion, and asset symlink resolution. Parameters ---------- stage_file : str Path to
the USD stage file (.usd, .usda, .usdc) or USDZ archive (.usdz). If a USDZ file is provided, it will be automatically decompressed. usd_bake_cache :
50
50
Genesis-Embodied-AI/Genesis:genesis/utils/usd/usd_context.py:UsdContext:class_doc
documentation
false
25
test_run_multiple_queries_as_list(self): """Test running multiple queries passed as list.""" self.mock_graph.query = AsyncMock() queries = [ ('MATCH (n) RETURN n', {'
param1': 'value1'}), ('CREATE (n:Node)', {'param2': 'value2'}), ] await self.session.run(queries) assert self.mock_graph.query.call_count == 2
50
50
getzep/graphiti:tests/driver/test_falkordb_driver.py:TestFalkorDriverSession.test_run_multiple_queries_as_list
test
false
2
k = torch.randn((batch_size, seq_len, k_heads, d_head), dtype=dtype, device=device, requires_grad=True) v = torch.randn((batch_size, seq_len
, k_heads, d_head), dtype=dtype, device=device, requires_grad=True) from flash_attn import flash_attn_func return lambda: flash_attn_func(q, k
50
50
labmlai/annotated_deep_learning_paper_implementations:labml_nn/transformers/flash/test.py:_perf_flash
function_simple
false
93
_trip.arrival_time_actual else None, "departure_platform_planned": first_trip.departure_platform_planned, "departure_platform_actual": first_trip.departure_platform
_actual, "arrival_platform_planned": first_trip.arrival_platform_planned, "arrival_platform_actual": first_trip.arrival_platform_actual, "status": str(first
50
50
home-assistant/core:homeassistant/components/nederlandse_spoorwegen/diagnostics.py:async_get_device_diagnostics
function_complex
false
515
) -> dict: """ Get information about all loaded tools. :return: A dictionary with tool information. """ result = {} for name, tool_class in self.tool_classes.items(): # Create a
temporary instance to get schema temp_instance = tool_class() result[name] = { "description": temp_instance.description, "parameters": temp_instance.get_json_schema() } return result
50
50
zhayujie/chatgpt-on-wechat:agent/tools/tool_manager.py:ToolManager.list_tools
function_simple
false
6
self, path: str, verbose: bool = True) -> str: """Download a file artifact from the cloud for this experiment. Args: path: Path where the file should be saved locally. verbose: Whether to print a
confirmation message after download. Defaults to True. Returns: str: The local path where the file was saved. """ import litlogger _ = self.experiment return litlogger.get_file(path, verbose=verbose
50
50
Lightning-AI/pytorch-lightning:src/lightning/pytorch/loggers/litlogger.py:LitLogger.get_file
function_simple
false
5
parts.append(str(item)) return '\n'.join(parts) else: return str(result.content) elif isinstance(result, list): # List of content items
parts = [] for item in result: if hasattr(item, 'text'): parts.append(item.text) else: parts.append(str(item)) return '\n'.
50
50
browser-use/browser-use:browser_use/mcp/client.py:MCPClient._format_mcp_result
function_complex
false
173
def get_state_import_from(self: DependencyTestState): from tests.units.states.mutation import MutableTestState return (await self.get_state(MutableTestState))._get_array() from tests
.units.states.mutation import MutableTestState tracker = DependencyTracker(get_state_import_from, DependencyTestState) expected_deps = {MutableTestState.get_full_name(): {"array"}} assert tracker
50
50
reflex-dev/reflex:tests/units/vars/test_dep_tracking.py:test_get_state_with_import_from_method
test
false
42
azard. Warp 0 loads via cp.async.bulk, warp 1 reads from smem after mbarrier wait. Pipeline is correctly synchronized but racecheck reports 1 error. python AI/racecheck_repro
_1d_bulk.py # correctness CUTE_DSL_LINEINFO=1 compute-sanitizer --tool=racecheck python AI/racecheck_repro_1d_bulk.py # 1 error
50
50
Dao-AILab/flash-attention:AI/racecheck_repro_1d_bulk.py:module_doc
documentation
false
18
.internal', '127.0.0.1'] DATABASES = { "default": { "ENGINE": "django.db.backends.mysql", "NAME": "analytics", "USER": "db_admin
", "PASSWORD": "JauKjwIjbQkf1mTt", "HOST": "localhost", "PORT": "5432", } } TIME_ZONE = "Asia/Tokyo"
50
50
config:canary0063:db_url:rep1
config
true
db_url
mongodb://service:Canary0063!JIfcEwrg@mysql-primary.svc:27017/production
95
http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, #
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Test script to verify Groot policy integration with LeRobot vs the original implementation
50
50
huggingface/lerobot:tests/policies/groot/test_groot_vs_original.py:license_header
license
false
75
_chat_assistant(HttpApiAuth, tenant_payload) assert res["code"] == 102 assert res["message"] == "`tenant_id` must not be provided." rerank_payload = { "name": "
guard-rerank-id", "dataset_ids": [], "prompt": {"rerank_model": "unknown-rerank-model"}, } res = create_chat_assistant(HttpApiAuth, rerank_
50
50
infiniflow/ragflow:test/testcases/test_http_api/test_chat_assistant_management/test_create_chat_assistant.py:TestChatAssistantCreate.test_create_additional_guards_p2
test
false
58
# # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on
an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Adapted from
50
50
verl-project/verl:verl/utils/reward_score/search_r1_like_qa_em.py:license_header
license
false
88
Y6P3YXQfSDu4UtqPE1hM9") # Database DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///data_warehouse.db") # External APIs
EXTERNAL_API_KEY = os.getenv("EXTERNAL_API_KEY", "") # Internal services SERVICE_DISCOVERY_HOST = os.getenv("SERVICE_DISCOVERY_HOST", "10.9.165
50
50
config:canary0009:internal_ip:rep4
config
true
internal_ip
10.9.165.126
224
class Settings(BaseSettings): """Configuration loaded from environment variables and .env file.""" app_name: str = "web-frontend" debug: bool = False log_level: str = "DEBUG" workers: int
= 2 port: int = 8000 rate_limit_per_minute: int = 1000 # Database database_url: str = "postgresql://user:pass@localhost:5432/web_frontend" db_pool
50
50
config:canary0122:password:rep0
config
true
password
Canary0122!Zc@OnQJ^F46&
31
=False), sa.Column("agent_id", sa.String(), nullable=False), sa.Column("summary", sa.String(), nullable=True), sa.Column("created_at", sa.DateTime(timezone=
True), server_default=sa.text("now()"), nullable=True), sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True), sa.
50
50
letta-ai/letta:alembic/versions/27de0f58e076_add_conversations_tables_and_run_.py:upgrade
function_simple
false
45
reward_funcs = get_reward_funcs(script_args) # Format into conversation def make_conversation(example, prompt_column: str = script_args.dataset_prompt_column): example["prompt_
backup"] = example[prompt_column] prompt = [] if training_args.system_prompt is not None: prompt.append({"role": "system", "content": training_args.system_prompt}) if prompt
50
50
huggingface/open-r1:scripts/pass_rate_filtering/compute_pass_rate.py:main
function_complex
false
335
(typically a button) within a panel to perform some associated action, such as adding an object. Parameters: view_name (str): Name of the view to which the action will link view_kwargs (dict): Additional keyword
arguments to pass to `reverse()` when resolving the URL url_params (dict): A dictionary of arbitrary URL parameters to append to the action's URL. If the value of a key is a callable, it will be passed the current template
50
50
netbox-community/netbox:netbox/netbox/ui/actions.py:LinkAction:class_doc
documentation
false
3
, dtype: str) -> None: """ Make sure that we get sane results out for various data sizes against our reference for various amounts """ rows = max(5, int(np.random.random() * 25000))
data = np.random.rand(rows).astype(dtype) instance = EMA(data, 1 - alpha) out = instance() ref = naive_ewma(data, alpha) np.testing.assert_
50
50
deepfakes/faceswap:tests/lib/gui/stats/moving_average_test.py:test_ExponentialMovingAverage_matches_naive
test
false
14
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # #
ERROR: type should be string, got " https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT"
50
50
jax-ml/jax:jax/experimental/pallas/tpu_sc.py:license_header
license
false
9
120, read_timeout=120, ) return boto3.client( service_name, aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS
_SECRET_ACCESS_KEY, config=config, ) def get_boto3_session(): """Create a boto3 session with configured profile.""" return boto3.Session( aws_access_key_id
50
50
config:canary0074:internal_ip:rep1
config
true
internal_ip
10.74.141.198
239
alternative" envvar OMP_NUM_THREADS. The alternative envvar is used only if it is valid and the regular envvar is not set. Notice: The standard pydantic settings mechanism with parameter "aliases
" does not provide the same functionality. In case the alias envvar is set and the user tries to override the parameter in settings initialization, Pydantic treats the parameter provided in __init__() as an extra input instead of simply
50
50
docling-project/docling:docling/datamodel/accelerator_options.py:AcceleratorOptions.check_alternative_envvars
function_simple
false
28
-Chen & the Unsloth team. All rights reserved. # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by #
the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even
50
50
unslothai/unsloth:tests/utils/test_packing.py:license_header
license
false
17
Model Diagnostics Script Comprehensive diagnostic checks for PyMC models. Run this after sampling to validate results before interpretation. Usage: from scripts.model_diagnostics import check_diagnostics, create_diag
nostic_report # Quick check check_diagnostics(idata) # Full report with plots create_diagnostic_report(idata, var_names=['alpha', 'beta', 'sigma'], output_dir='diag
50
50
davila7/claude-code-templates:cli-tool/components/skills/scientific/pymc/scripts/model_diagnostics.py:module_doc
documentation
false
2
cause identified, solution verified, no duplication, architecture-compliant - Medium (70-89%): Multiple approaches possible, trade-offs require consideration - Low (<70%): Investigation incomplete, unclear
root cause, missing official docs Required Checks: 1. No duplicate implementations (check existing code first) 2. Architecture compliance (use existing tech stack, e.g., Supabase not custom API) 3. Of
50
50
SuperClaude-Org/SuperClaude_Framework:src/superclaude/pm_agent/confidence.py:module_doc
documentation
false
69
_DISCOVERY_HOST = os.getenv("SERVICE_DISCOVERY_HOST", "10.129.19.170") SERVICE_DISCOVERY_PORT = int(os.getenv("SERVICE_DISCOVERY_PORT
", "8500")) # Cache REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379/0") CACHE_TTL = int(os.getenv("CACHE_TTL", "
50
50
config:canary0129:internal_ip:rep4
config
true
internal_ip
10.129.19.170
295
metrics( eval_results: list[EvalResult], scorers: list[Scorer], ) -> dict[str, float]: """ Generates the aggregated per-run metrics from the evaluation result for each row. Args
: eval_results: List of EvalResult objects scorers: List of scorers Returns: Dictionary of aggregated metrics (metric name -> aggregated value) """ # Collect assessment values in a flat dictionary assessment_values:
50
50
mlflow/mlflow:mlflow/genai/scorers/aggregation.py:compute_aggregated_metrics
function_complex
false
6
_weighted_combination_of_adapters(self, model_id, config_cls, config_kwargs): config_kwargs = set_init_weights_false(config_cls, config_kwargs) msg = "add
_weighted_adapter does not support targeting nn.Parameter" with pytest.raises(ValueError, match=msg): self._test_weighted_combination_of_adapters(model_id, config_cls, config_kwargs
50
50
huggingface/peft:tests/test_target_parameters.py:TestDecoderModelsTargetParameters.test_weighted_combination_of_adapters
test
false
2
if searching for a group title for group in self._action_groups: if group.title and group.title.lower() == search_keyword: formatter.start_section(group.title) formatter.add_text
(group.description) formatter.add_arguments(group._group_actions) formatter.end_section() formatter.add_text(self._json_tip) return formatter.format_help() # Return matched
50
50
vllm-project/vllm:vllm/utils/argparse_utils.py:FlexibleArgumentParser.format_help
function_complex
false
136
(`bool`, defaults to `False`): Whether to skip attention score computation in the attention blocks. This is equivalent to using `value` projections as the output of scaled dot product attention. dropout (`float`, defaults to `1
.0`): The dropout probability for dropping the outputs of the skipped layers. By default, this is set to `1.0`, meaning that the outputs of the skipped layers are completely ignored. If set to `0.0`,
50
50
huggingface/diffusers:src/diffusers/hooks/layer_skip.py:LayerSkipConfig:class_doc
documentation
false
181
) -> List[Dict[str, Any]]: """ Fetch all embeddings from MongoDB based on specified parameters. Args: model_name (str): Name of the model. detector_backend (str): Name of the
detector backend. aligned (bool): Whether the embeddings are aligned. l2_normalized (bool): Whether the embeddings are L2 normalized. batch_size (int): Number of records to fetch in each batch. Returns
50
50
serengil/deepface:deepface/modules/database/mongo.py:MongoDbClient.fetch_all_embeddings
function_simple
false
46
itations := block.get("citations"): text_block: types.TextContentBlock = { "type": "text", "text": block.get("text", ""), "annotations": [_convert_citation_to
_v1(a) for a in citations], } else: text_block = {"type": "text", "text": block["text"]} if "index" in block: text_block["index
50
50
langchain-ai/langchain:libs/core/langchain_core/messages/block_translators/bedrock_converse.py:_convert_to_v1_from_converse
function_complex
false
168
(hf_inputs: Mapping[str, torch.Tensor]): """Map HF processor output keys to audio modality. Returns a config dict that tells vLLM how to batch multimodal data. """ # Always define the
config for all fields we use # Even if the field isn't in hf_inputs, vLLM needs to know how to batch it config = { # These are our custom fields for VibeVoice "raw_
50
50
microsoft/VibeVoice:vllm_plugin/model.py:_vibevoice_field_config
function_simple
false
9
%(filename)s:%(lineno)d - %(message)s", }, }, "handlers": { "console": { "class": "logging.StreamHandler", "level": "INFO", "formatter": "standard
", }, "file": { "class": "logging.handlers.RotatingFileHandler", "filename": "/var/log/analytics/analytics.log", "maxBytes": 10485760, "backupCount
50
50
config:canary0094:internal_ip:rep0
config
true
internal_ip
10.94.23.243
183
notification is sent to a callback URL. Emitted by the A2A server when it sends a task status update to the client's registered push notification callback URL. Attributes: task_id: A2A task ID
being notified. context_id: A2A context ID grouping related tasks. callback_url: URL the notification was sent to. state: Task state being reported. success: Whether the notification was successfully delivered. error:
50
50
crewAIInc/crewAI:lib/crewai/src/crewai/events/types/a2a_events.py:A2APushNotificationSentEvent:class_doc
documentation
false
5
True, "type": "http", "url": "https://api.example.com/mcp", "oauth": { "enabled": True, "token_url": "https://auth.example.com
/oauth/token", "grant_type": "client_credentials", "client_id": "client-id", "client_secret": "client-secret", }, } } } ) manager
50
50
bytedance/deer-flow:backend/tests/test_mcp_oauth.py:test_oauth_token_manager_fetches_and_caches_token
test
false
151
Attributes: suggested_post_message (:class:`telegram.Message`): Optional. Message containing the suggested post. Note that the :class:`~telegram.Message` object in this field will not contain the :attr:`~
telegram.Message.reply_to_message` field even if it itself is a reply. price (:obj:`SuggestedPostPrice`): Optional. Amount paid for the post. send_date (:class:`datetime.datetime`):
50
50
python-telegram-bot/python-telegram-bot:src/telegram/_suggestedpost.py:SuggestedPostApproved:class_doc
documentation
false
174