| url | text | ts |
|---|---|---|
https://typedoc.org/api/index.html | TypeDoc API - v0.28.16 Application Root level class which contains most useful behavior. Converter Responsible for converting TypeScript symbols into Reflections and Types. Deserializer Deserializes TypeDoc's JSON output Renderer Writes HTML output from TypeDoc's models Serializer Serializes TypeDoc's models to JSON Namespaces Configuration Internationalization Internationalization module for localized strings in output. JSONOutput Contains interfaces which describe the JSON output. JSX Custom JSX module designed specifically for TypeDoc's needs. Models TypeDoc converts source code into these object types. OptionDefaults → Configuration . OptionDefaults Classes ConsoleLogger Context DefaultTheme DefaultThemeRenderContext EventDispatcher EventHooks IndexEvent Logger MarkdownEvent MinimalSourceFile Outputs PageEvent RendererEvent SerializeEvent Slugger Theme ValidatingFileRegistry ArgumentsReader → Configuration . ArgumentsReader ArrayType → Models . ArrayType Comment → Models . Comment CommentTag → Models . CommentTag ConditionalType → Models . ConditionalType ContainerReflection → Models . ContainerReflection DeclarationReflection → Models . DeclarationReflection DocumentReflection → Models . DocumentReflection FileRegistry → Models . FileRegistry IndexedAccessType → Models . IndexedAccessType InferredType → Models . InferredType IntersectionType → Models . IntersectionType IntrinsicType → Models . IntrinsicType LiteralType → Models . LiteralType MappedType → Models . MappedType NamedTupleMember → Models . NamedTupleMember OptionalType → Models . OptionalType Options → Configuration . Options PackageJsonReader → Configuration . PackageJsonReader ParameterReflection → Models . ParameterReflection PredicateType → Models . PredicateType ProjectReflection → Models . ProjectReflection QueryType → Models . 
QueryType ReferenceReflection → Models . ReferenceReflection ReferenceType → Models . ReferenceType Reflection → Models . Reflection ReflectionCategory → Models . ReflectionCategory ReflectionFlags → Models . ReflectionFlags ReflectionGroup → Models . ReflectionGroup ReflectionSymbolId → Models . ReflectionSymbolId ReflectionType → Models . ReflectionType RestType → Models . RestType SignatureReflection → Models . SignatureReflection SourceReference → Models . SourceReference TemplateLiteralType → Models . TemplateLiteralType TSConfigReader → Configuration . TSConfigReader TupleType → Models . TupleType Type → Models . Type TypeDocReader → Configuration . TypeDocReader TypeOperatorType → Models . TypeOperatorType TypeParameterReflection → Models . TypeParameterReflection UnionType → Models . UnionType UnknownType → Models . UnknownType Enumerations EntryPointStrategy LogLevel PageKind CommentStyle → Configuration . CommentStyle ParameterHint → Configuration . ParameterHint ParameterType → Configuration . ParameterType ReflectionFlag → Models . ReflectionFlag ReflectionKind → Models . ReflectionKind TraverseProperty → Models . TraverseProperty TypeContext → Models . TypeContext VarianceModifier → Models . VarianceModifier Functions normalizePath translateTagName makeRecursiveVisitor → Models . makeRecursiveVisitor Option → Configuration . Option resetReflectionID → Models . resetReflectionID splitUnquotedString → Models . splitUnquotedString Interfaces ApplicationEvents CommentParserConfig Controls how TypeDoc reads option files and what options are available. ComponentPath ConverterEvents Controls how TypeDoc reads option files and what options are available. 
DeclarationReference Deserializable DeserializerComponent DocumentationEntryPoint FancyConsoleLogger Icons Meaning MinimalNode NavigationElement PageDefinition PageHeading RendererEvents RendererHooks Router SerializerComponent SerializerEvents SymbolReference TranslatableStrings ArrayDeclarationOption → Configuration . ArrayDeclarationOption BooleanDeclarationOption → Configuration . BooleanDeclarationOption DeclarationHierarchy → Models . DeclarationHierarchy DeclarationOptionBase → Configuration . DeclarationOptionBase FlagsDeclarationOption → Configuration . FlagsDeclarationOption InlineTagDisplayPart → Models . InlineTagDisplayPart MapDeclarationOption → Configuration . MapDeclarationOption MixedDeclarationOption → Configuration . MixedDeclarationOption NumberDeclarationOption → Configuration . NumberDeclarationOption ObjectDeclarationOption → Configuration . ObjectDeclarationOption OptionsReader → Configuration . OptionsReader ParameterTypeToOptionTypeMap → Configuration . ParameterTypeToOptionTypeMap ReflectionVariant → Models . ReflectionVariant RelativeLinkDisplayPart → Models . RelativeLinkDisplayPart StringDeclarationOption → Configuration . StringDeclarationOption TraverseCallback → Models . TraverseCallback TypeDocOptionMap → Configuration . TypeDocOptionMap TypeDocOptions → Configuration . TypeDocOptions TypeDocOptionValues → Configuration . TypeDocOptionValues TypeKindMap → Models . TypeKindMap Routers BaseRouter CategoryRouter GroupRouter KindDirRouter KindRouter StructureDirRouter StructureRouter Type Aliases EnumKeys ExternalResolveResult Controls how TypeDoc reads option files and what options are available. ExternalSymbolResolver GlobString MeaningKeyword NormalizedPath NormalizedPathOrModule NormalizedPathOrModuleOrFunction RenderTemplate RouterTarget SortStrategy TagString TranslatedString CommentDisplayPart → Models . CommentDisplayPart DeclarationOption → Configuration . DeclarationOption DeclarationOptionToOptionType → Configuration . 
DeclarationOptionToOptionType FileId → Models . FileId JsDocCompatibility → Configuration . JsDocCompatibility KeyToDeclaration → Configuration . KeyToDeclaration ManuallyValidatedOption → Configuration . ManuallyValidatedOption OutputSpecification → Configuration . OutputSpecification ReflectionId → Models . ReflectionId ReflectionSymbolIdString → Models . ReflectionSymbolIdString ReflectionVisitor → Models . ReflectionVisitor SomeReflection → Models . SomeReflection SomeType → Models . SomeType TypeKind → Models . TypeKind TypeVisitor → Models . TypeVisitor ValidationOptions → Configuration . ValidationOptions Variables i18n Generated using TypeDoc | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/plivo-inc-plivo/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | CPaaS Cloud Communications Platforms by Plivo About A CPaaS (communications platform as a service) lets developers add real-time communications features such as SMS messaging and voice calls to their applications without the hassle of dealing with the global telecom infrastructure. Plivo's SMS API and Voice API and its Premium Communications Network give businesses a way to communicate with their customers anywhere in the world. Cloud-based communications platforms provide a bridge between businesses and telecom carriers, programmatically connecting a business’s devices and applications via application programming interfaces (APIs) that provide all the code a business needs to incorporate communication functionality into its applications. On the back end, cloud communications platforms connect to telecommunication carriers and operators to route messages and voice calls on behalf of businesses. 
Featured customers of CPaaS: Houst (Real Estate, 59,246 followers), LAZ Parking (Real Estate, 44,500 followers), FV Technologies (Software Development, 438 followers), Meta (Software Development, 11,663,927 followers), Uber (Internet Marketplace Platforms, 3,365,124 followers), MDabroad (Hospitals and Health Care, 3,578 followers), Deckers Brands (Retail Apparel and Fashion, 130,788 followers), Fastboy Marketing (Advertising Services, 668 followers), Luxer One® (Transportation, Logistics, Supply Chain and Storage, 11,629 followers), CallHub (Software Development, 72,930 followers), Adobe (Software Development, 5,252,483 followers), Fluent, Inc (Advertising Services, 60,168 followers), JungleWorks (IT Services and IT Consulting, 31,896 followers), Become (Financial Services, 90 followers), Magna Legal Services (Legal Services, 43,389 followers). Similar products (Cloud Communications Platforms): Slack, Tencent Cloud Instant Messaging (IM), Avaya Cloud Office, MessageBird, Vonage Communications APIs, Channelize.io: Live Stream Shopping Platform. Plivo products: Phone Numbers, SIP Trunking, SMS API, Verify API (Multi-Factor Authentication (MFA) Software), Voice API. | 2026-01-13T09:29:15 |
https://twitter.com/MLResearchPress | JavaScript is not available. | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/cloudflare-workers/ | Cloudflare Workers Server Virtualization Software by Cloudflare About Build your next application with Cloudflare Workers. Announcing Deploy to Cloudflare Workers We think Cloudflare Workers is the best serverless tool on the market, so we wanted to make it easier for developers to deploy projects and get started building cool stuff on the platform! Our new Deploy to Cloudflare Workers site is just that: a way for both new and experienced Workers users to deploy new serverless projects directly from GitHub. In just a few clicks, we'll configure and deploy the project for you, giving you a new GitHub repo to start hacking on your next great project! What can Cloudflare Workers do? Building serverless applications with Cloudflare Workers' Cron Job Scheduler and Key Value Store Learn how to build an application with the newly announced cron job scheduler, Cron Triggers, from Cloudflare Workers. In this demo, we will explain what Cron Triggers are, how to quickly get started using them, and how to write and deploy your own. We will also review how to integrate with Workers KV, our key value store that allows you to store data at the edge and retrieve it inside your Workers project instead of sending that data to alternative platforms such as Postgres or Firebase. Featured customers of Cloudflare Workers: npm, Inc. (Software Development, 11,912 followers), DoorDash (Software Development, 1,437,130 followers), 23andMe (Non-profit Organizations, 81,297 followers), Timely (IT Services and IT Consulting, 5,725 followers), Adobe Marketo (Software Development, 178,692 followers), Cordial (Software Development, 48,215 followers), MaxMind (Technology, Information and Internet, 2,991 followers), Optimizely (Software Development, 167,115 followers), Discord (Software Development, 467,975 followers), Broadcom (Semiconductor Manufacturing, 608,798 followers), Glossier, Inc. (Personal Care Product Manufacturing, 492,298 followers), ProPublica (Media Production, 72,236 followers). Similar products (Server Virtualization Software): Cloud Virtual Machine (CVM), Proxmox Virtual Environment, SC//Fleet Manager, SC//Platform, BTS Sybelle Virtualization Platform, Oracle Virtualization. Cloudflare products: Argo Smart Routing (Network Management Software), Cloudflare DDoS Protection (DDoS Protection Software), Cloudflare DNS (Managed DNS Services Software), Cloudflare Load Balancing (Load Balancing Software), Cloudflare Registrar (Managed DNS Services Software), Cloudflare Spectrum (DDoS Protection Software), Cloudflare SSL / TLS (SSL Certificates Software), Cloudflare WAF (Web Application Firewalls (WAF)), Cloudflare Web Analytics (Digital Analytics Software). | 2026-01-13T09:29:15 |
https://kabirk.com/wimmelbench#results | How well can LLMs see? · Kabir Khandpur 21 Nov 2024 TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items. Introduction I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where’s Waldo, except that you’re not hunting for a particular character. This type of illustration is called a wimmelbild, German for “teeming picture”. A wimmelbild. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats. While I’m convinced that LLMs can really “read” language, I’m less sure that they “see” images. To measure sight quantitatively, I created a small benchmark that I’m calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it. The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if ‘mostly correct’ descriptions are also included. The models were much poorer at drawing bounding boxes: Gemini scored the best with a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13, while Claude 3.6 Sonnet and GPT-4o scored -0.5. In this post, I’ll talk about constructing the benchmark, go into more detail on the results, and offer some analysis. Aside: why not SAM? Segment Anything Model (SAM) is Meta’s foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes — but not text — as prompts. SAM segmenting an image SAM-based models like LISA do allow text inputs like “Segment the red car in the image”, and I expect such a model would score well on Wimmelbench. So why benchmark large language models? (1) It tests how well the most general-purpose, human-like models we have today understand images. (2) I’m interested in “easy” unsaturated benchmarks that reveal a real shortcoming in current LLMs. (3) I didn’t have easy access to the SAM-based models. (4) It seemed like a fun project. Benchmark construction I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish, since the style is so popular in children’s books. To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects. The annotation tool In each description, I included the object’s location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box’s area as a percentage of the total image area (figure 1). Figure 1: Distribution of (ground truth bounding box area / image area) The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range. 
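The area-ratio statistic can be computed directly from the saved annotations. A minimal sketch, assuming a hypothetical JSON layout in which each annotation stores a pixel-space [x1, y1, x2, y2] box plus the image dimensions (the repository's actual field names may differ):

```python
def area_ratio(bbox, image_width, image_height):
    """Bounding box area as a fraction of the total image area."""
    x1, y1, x2, y2 = bbox
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    return box_area / (image_width * image_height)

# Hypothetical annotation entries, not real benchmark data.
annotations = [
    {"bbox": [100, 120, 180, 200], "width": 2000, "height": 1500},
    {"bbox": [40, 40, 240, 140], "width": 1000, "height": 800},
]
ratios = [area_ratio(a["bbox"], a["width"], a["height"]) for a in annotations]
percentages = [100 * r for r in ratios]  # values for a figure-1 style histogram
```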
Ideally, I’d have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation. Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate. I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it’s been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than with the internet-standard bounding box format of [x1, y1, x2, y2] with values in [0, 1]. I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where: 0: completely incorrect or missing critical details. 1: partially correct but missing many important details or containing significant inaccuracies. 2: majorly correct with some inaccuracies or missing details. 3: mostly or fully correct, capturing the majority of key details and spatial relationships accurately. I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I’d quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository. Results 1. Description accuracy All models scored a grade of 3 for about 30% of object descriptions. 
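The headline percentages are just fractions over the per-object grades. A sketch under an assumed results layout of one 0-3 grade per object per model (the names and sample grades are illustrative, not actual benchmark output):

```python
def grade_rates(grades):
    """Fraction of objects graded exactly 3, and fraction graded 2 or better."""
    n = len(grades)
    exact = sum(1 for g in grades if g == 3) / n
    mostly = sum(1 for g in grades if g >= 2) / n
    return exact, mostly

# Illustrative grades for ten objects.
gemini_grades = [3, 3, 3, 2, 2, 2, 2, 1, 0, 0]
exact, mostly = grade_rates(gemini_grades)  # 0.3 and 0.7
```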
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2). Figure 2: Grade distribution 2. Bounding box precision The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3). GIoU is defined as GIoU = |A ∩ B| / |A ∪ B| - |C \ (A ∪ B)| / |C|, where C is the smallest convex hull that encloses both A and B. GIoU scores range from -1 to 1, where 1.0 is perfect overlap, 0.0 means no overlap, and negative values indicate the predicted box is far from the target; 0.5+ is typically considered good in computer vision tasks. Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively. Figure 3: Distribution of GIoU scores 3. Hallucination rate To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, hallucinating in only 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars. Figure 4: Hallucination rate Analysis Takeaways Gemini Pro scores best on both description and bounding box accuracy. Models can often describe objects well, but struggle to locate them precisely. Object size strongly influences bounding box accuracy, but not description quality. Size sometimes matters Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size. 
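The GIoU formula can be implemented in a few lines. A sketch for axis-aligned [x1, y1, x2, y2] boxes, where the enclosing region C is taken as the smallest axis-aligned box containing both A and B (the form commonly used for box GIoU, standing in for the convex hull mentioned above):

```python
def giou(box_a, box_b):
    """Generalized IoU for [x1, y1, x2, y2] boxes; returns a value in [-1, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection |A ∩ B| (zero if the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    # Union |A ∪ B|.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter

    # Smallest enclosing box C, so |C \ (A ∪ B)| = hull - union.
    hull = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))

    return inter / union - (hull - union) / hull

giou([0, 0, 1, 1], [0, 0, 1, 1])  # 1.0: perfect overlap
```

Identical boxes score 1.0, touching boxes score 0.0, and the score goes toward -1 as the boxes move apart, matching the range described above.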
Figure 5: GIoU vs area ratio Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) more equally to describe an object across a range of sizes. Figure 6: Grade vs area ratio Grade vs bounding box accuracy Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box. Figure 7: Grade vs GIoU This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said: "There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur." But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen — if someone could see an object, they could draw a bounding box around it. Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post that suggested overlaying a grid on the image and asking the model to return grid coordinates yielded better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates. 
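The grid-overlay idea is easy to prototype: draw labelled grid lines on the image, ask the model for a cell label like "C4", then map the cell back to normalized coordinates. A sketch of the coordinate mapping (the 10x10 grid size and letter-number labelling are my own choices, not from the post):

```python
def point_to_cell(x, y, n=10):
    """Map a normalized (x, y) point to a cell label like 'C4' on an n x n grid."""
    col = min(int(x * n), n - 1)
    row = min(int(y * n), n - 1)
    return f"{chr(ord('A') + row)}{col + 1}"

def cell_to_box(cell, n=10):
    """Map a cell label back to a normalized [x1, y1, x2, y2] bounding box."""
    row = ord(cell[0].upper()) - ord("A")
    col = int(cell[1:]) - 1
    return [col / n, row / n, (col + 1) / n, (row + 1) / n]

box = cell_to_box("C4")  # [0.3, 0.2, 0.4, 0.3]
```

A coarse grid trades box precision for an answer format the model may find easier to produce; the recovered box is at best one cell wide.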
Gemini’s stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”). I don’t understand the description accuracy variance Object size only explains half the variance in description accuracy: that is, the same object size would sometimes score perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’: "In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife." On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object. "A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks." The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability. Future work If I had more time… Expand the dataset to several hundred objects. Test with more LLMs and also specialized vision models like SAM. Improve prompting strategies for bounding box prediction. Conclusion Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me. 
Appendix: example results Image 1 ground truth descriptions. Cactus: "A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning." Elephant: "There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby."

| Model | Object | Model description | Grade | GIoU |
|---|---|---|---|---|
| Claude | Cactus | There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene. | 0 | -0.89 |
| Claude | Elephant | There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters. | 2 | -0.51 |
| Gemini | Cactus | A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill. | 3 | 0.52 |
| Gemini | Elephant | An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone. | 3 | 0.70 |
| GPT-4o | Cactus | The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals. | 0 | -0.98 |
| GPT-4o | Elephant | The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting. | 3 | 0.31 |

Image 2 ground truth descriptions. Man waving a white flag: "A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other." Bowling ball: "A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand."

| Model | Object | Model description | Grade | GIoU |
|---|---|---|---|---|
| Claude | Man waving a white flag | In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion. | 0 | -0.94 |
| Claude | Bowling ball | There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene. | 0 | -0.94 |
| Gemini | Man waving a white flag | A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling. | 1 | 0.04 |
| Gemini | Bowling ball | A bowling ball can be seen lying on the floor amidst the chaotic scene. It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene. | 0 | -0.99 |
| GPT-4o | Man waving a white flag | The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden. | 0 | -0.98 |
| GPT-4o | Bowling ball | The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos. | 0 | -0.85 |

| 2026-01-13T09:29:15 |
https://kabirk.com/wimmelbench#introduction | How well can LLMs see? · Kabir Khandpur

How well can LLMs see?
21 Nov 2024

TL;DR

I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where’s Waldo, except that you’re not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for “teeming picture”.

A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I’m convinced that LLMs can really “read” language, I’m less sure that they “see” images. To measure sight quantitatively, I created a small benchmark that I’m calling Wimmelbench.

Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which could accurately describe small objects 31% of the time. This increases to 73% if ‘mostly correct’ descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring around -0.5.

In this post, I’ll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta’s foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes — but not text — as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like “Segment the red car in the image”, and I expect such a model would score well on Wimmelbench. So why benchmark large language models?

1. It tests how well the most general purpose, human-like models we have today understand images.
2. I’m interested in “easy” unsaturated benchmarks that reveal a real shortcoming in current LLMs.
3. I didn’t have easy access to the SAM-based models.
4. It seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish, since the style is so popular in children’s books.

To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects across the 50 images. I aimed to have a range of sizes and locations for the objects.

The annotation tool

In each description, I included the object’s location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file.

To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box’s area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
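As a concrete illustration, the area ratio plotted above can be computed directly from a normalized [x1, y1, x2, y2] box. The annotation schema below is hypothetical (the post doesn’t show the JSON layout), but the arithmetic is the same either way:

```python
def area_ratio(box):
    """Fraction of the image covered by a normalized [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

# Hypothetical annotation records, for illustration only.
annotations = [
    {"object": "cactus", "box": [0.10, 0.40, 0.13, 0.45]},
    {"object": "elephant", "box": [0.35, 0.55, 0.55, 0.85]},
]

ratios = [area_ratio(a["box"]) for a in annotations]
print([f"{r:.2%}" for r in ratios])  # ['0.15%', '6.00%']
```

A histogram of these ratios over all 84 annotations would reproduce the figure 1 distribution.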
Ideally, I’d have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.

I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it’s been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of the model descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I’d quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions. If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3). GIoU is defined as

GIoU = |A ∩ B| / |A ∪ B| − |C \ (A ∪ B)| / |C|,

where C is the smallest convex hull that encloses both A and B. GIoU scores range from -1 to 1: 1.0 is perfect overlap, 0.0 means no overlap, and negative values indicate the predicted box is far from the target. A score of 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways:

- Gemini Pro scores best on both description and bounding box accuracy.
- Models can often describe objects well, but struggle to locate them precisely.
- Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
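For concreteness, the GIoU metric defined in the results section can be computed in a few lines. For axis-aligned boxes, the smallest enclosing convex hull C is simply the smallest enclosing box; this is a minimal sketch, not the benchmark’s actual evaluation code:

```python
def giou(box_a, box_b):
    """Generalized IoU between two [x1, y1, x2, y2] boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    def area(x1, y1, x2, y2):
        # Clamp at zero so non-overlapping intersections contribute nothing.
        return max(0.0, x2 - x1) * max(0.0, y2 - y1)

    inter = area(max(ax1, bx1), max(ay1, by1), min(ax2, bx2), min(ay2, by2))
    union = area(*box_a) + area(*box_b) - inter
    # Smallest box enclosing both A and B plays the role of the hull C.
    hull = area(min(ax1, bx1), min(ay1, by1), max(ax2, bx2), max(ay2, by2))
    return inter / union - (hull - union) / hull

print(giou([0, 0, 1, 1], [0, 0, 1, 1]))    # identical boxes -> 1.0
print(giou([0, 0, 1, 1], [9, 9, 10, 10]))  # distant boxes -> -0.98
```

The second example shows why the appendix is full of scores near -1: a confidently wrong box far from the target is penalized by the hull term even though the plain IoU is already zero.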
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) more equally at describing objects across a range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen — if someone could see an object, they could draw a bounding box around it.

Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results.

Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates. Gemini’s stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”).

I don’t understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’:

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object:

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability.

Future work

If I had more time, I would:

- Expand the dataset to several hundred objects.
- Test with more LLMs and also specialized vision models like SAM.
- Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me.
Appendix: example results

First image (street scene)

Claude

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Cactus | A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. | There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene. | 0 | -0.89 |
| Elephant | There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. | There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters. | 2 | -0.51 |

Gemini

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Cactus | (as above) | A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill. | 3 | 0.52 |
| Elephant | (as above) | An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone. | 3 | 0.70 |

GPT-4o

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Cactus | (as above) | The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals. | 0 | -0.98 |
| Elephant | (as above) | The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting. | 3 | 0.31 |

Second image (brawl scene)

Claude

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Man waving a white flag | A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. | In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion. | 0 | -0.94 |
| Bowling ball | A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. | There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene. | 0 | -0.94 |

Gemini

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Man waving a white flag | (as above) | A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling. | 1 | 0.04 |
| Bowling ball | (as above) | A bowling ball can be seen lying on the floor amidst the chaotic scene. It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene. | 0 | -0.99 |

GPT-4o

| Object | Ground truth description | Model description | Grade | GIoU |
| --- | --- | --- | --- | --- |
| Man waving a white flag | (as above) | The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden. | 0 | -0.98 |
| Bowling ball | (as above) | The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos. | 0 | -0.85 |
https://kabirk.com/wimmelbench.html | How well can LLMs see? · Kabir Khandpur

How well can LLMs see?
21 Nov 2024

TL;DR: I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where’s Waldo, except that you’re not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for “teeming picture”.

A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I’m convinced that LLMs can really “read” language, I’m less sure that they “see” images. To measure sight quantitatively, I created a small benchmark that I’m calling Wimmelbench.

Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if ‘mostly correct’ descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring around -0.5.

In this post, I’ll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta’s foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes — but not text — as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like “Segment the red car in the image”. I expect that it would score well on Wimmelbench. So why benchmark large language models?

It tests how well the most general purpose, human-like models we have today understand images. I’m interested in “easy” unsaturated benchmarks that reveal a real shortcoming in current LLMs. I didn’t have easy access to the SAM-based models. And it seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children’s books.

To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects.

The annotation tool

In each description, I included the object’s location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file.

To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box’s area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
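That per-object statistic is straightforward to compute; here is a minimal sketch (a hypothetical helper, not the benchmark's actual code):

```python
def box_area_ratio(box, image_width, image_height):
    """Return a bounding box's area as a fraction of the image area.

    `box` is (x1, y1, x2, y2) in pixels, with (x1, y1) the top-left
    corner and (x2, y2) the bottom-right corner.
    """
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (image_width * image_height)

# A 100x50 px box in a 1000x500 px image covers 1% of the image.
print(box_area_ratio((10, 10, 110, 60), 1000, 500))  # → 0.01
```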
Ideally, I’d have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.

I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it’s been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I’d quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading.

You can find the full evaluation code, grading code and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions.
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3), where

$GIoU = \frac{|A\cap B|}{|A\cup B|} - \frac{|C\backslash(A\cup B)|}{|C|}$

and $C$ is the smallest convex hull that encloses both $A$ and $B$.

GIoU scores range from -1 to 1, where 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways:

Gemini Pro scores best on both description and bounding box accuracy.
Models can often describe objects well, but struggle to locate them precisely.
Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). $R^2$ was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
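To make the GIoU metric concrete, here is a minimal implementation for axis-aligned boxes (for which the smallest enclosing convex hull C is just the enclosing rectangle). The conversion helper for Gemini's [ymin, xmin, ymax, xmax] / [0, 1000] format is a hypothetical sketch, not the benchmark's actual code:

```python
def giou(box_a, box_b):
    """Generalized IoU for two axis-aligned boxes (x1, y1, x2, y2) in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area (zero if the boxes don't overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # C: the smallest box enclosing both A and B.
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c - union) / c

def from_gemini(box):
    """Convert [ymin, xmin, ymax, xmax] in [0, 1000] to (x1, y1, x2, y2) in [0, 1]."""
    ymin, xmin, ymax, xmax = box
    return (xmin / 1000, ymin / 1000, xmax / 1000, ymax / 1000)

print(giou((0.2, 0.2, 0.4, 0.4), (0.2, 0.2, 0.4, 0.4)))  # perfect overlap → 1.0
print(giou((0.0, 0.0, 0.1, 0.1), (0.9, 0.9, 1.0, 1.0)))  # far apart, close to -0.98
```

Note how two small, non-overlapping boxes at opposite corners score near -0.98, which matches the scores that badly misplaced predictions receive in the appendix.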
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) about equally at describing objects across a range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen — if someone could see an object, they could draw a bounding box around it.

Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yields better results.

Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
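As an aside, the grid-overlay idea mentioned above needs a way to map a coarse cell answer back to a normalized box. The cell-naming scheme here (letters for columns, numbers for rows on an n×n grid) is my own assumption for illustration, not taken from the Reddit post:

```python
def cell_to_box(cell, n=8):
    """Map a grid cell label like "C4" on an n x n grid to (x1, y1, x2, y2) in [0, 1].

    Columns are lettered A.. left to right; rows are numbered 1.. top to bottom.
    """
    col = ord(cell[0].upper()) - ord("A")
    row = int(cell[1:]) - 1
    size = 1.0 / n
    return (col * size, row * size, (col + 1) * size, (row + 1) * size)

# "C4" on an 8x8 grid: third column, fourth row.
print(cell_to_box("C4"))  # → (0.25, 0.375, 0.375, 0.5)
```

The trade-off is that precision is capped at one cell, so this helps coarse localization more than tight bounding boxes.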
Gemini’s stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”).

I don’t understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’:

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object.

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability.

Future work

If I had more time…

Expand the dataset to several hundred objects.
Test with more LLMs and also specialized vision models like SAM.
Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me.
Appendix: example results

Image 1 (click to zoom in)

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.

Claude (grade 0, GIoU -0.89): "There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene."

Gemini (grade 3, GIoU 0.52): "A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill."

GPT-4o (grade 0, GIoU -0.98): "The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals."

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.

Claude (grade 2, GIoU -0.51): "There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters."

Gemini (grade 3, GIoU 0.70): "An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone."

GPT-4o (grade 3, GIoU 0.31): "The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting."

Image 2

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.

Claude (grade 0, GIoU -0.94): "In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion."

Gemini (grade 1, GIoU 0.04): "A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling."

GPT-4o (grade 0, GIoU -0.98): "The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden."

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand.

Claude (grade 0, GIoU -0.94): "There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene."

Gemini (grade 0, GIoU -0.99): "A bowling ball can be seen lying on the floor amidst the chaotic scene. It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene."

GPT-4o (grade 0, GIoU -0.85): "The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos."
| 2026-01-13T09:29:15 |
https://opensource.guide/bn/how-to-contribute/ | How to Contribute to Open Source | Open Source Guides

How to Contribute to Open Source

Want to contribute to open source? A guide to making open source contributions, for first-timers and veterans.

Table of contents

Why contribute to open source?
What it means to contribute
Orienting yourself to a new project
Finding a project to contribute to
How to submit a contribution
What happens after you submit your contribution

Why contribute to open source?

"Working on [freenode] helped me earn many of the skills I later used for my studies in university and my actual job. I think working on open source projects helps me as much as it helps the project!" — @errietta, “Why I love contributing to open source software”

Contributing to open source can be a rewarding way to learn, teach, and build experience in just about any skill you can imagine. Why do people contribute to open source? Plenty of reasons!

Improve software you rely on

Lots of open source contributors start by being users of software they contribute to. When you find a bug in open source software you use, you may want to look at the source to see if you can patch it yourself. If that’s the case, then contributing the patch back is the best way to ensure that your friends (and yourself, when you update to the next release) will be able to benefit from it.

Improve existing skills

Whether it’s coding, user interface design, graphic design, writing, or organizing, if you’re looking for practice, there’s a task for you on an open source project.

Meet people who are interested in similar things

Open source projects with warm, welcoming communities keep people coming back for years.
Many people form lifelong friendships through their participation in open source, whether it’s running into each other at conferences or late night online chats about burritos. Find mentors and teach others Working with others on a shared project means you’ll have to explain how you do things, as well as ask other people for help. The acts of learning and teaching can be a fulfilling activity for everyone involved. Build public artifacts that help you grow a reputation (and a career) By definition, all of your open source work is public, which means you get free examples to take anywhere as a demonstration of what you can do. Learn people skills Open source offers opportunities to practice leadership and management skills, such as resolving conflicts, organizing teams of people, and prioritizing work. It’s empowering to be able to make changes, even small ones You don’t have to become a lifelong contributor to enjoy participating in open source. Have you ever seen a typo on a website, and wished someone would fix it? On an open source project, you can do just that. Open source helps people feel agency over their lives and how they experience the world, and that in itself is gratifying. What it means to contribute If you’re a new open source contributor, the process can be intimidating. How do you find the right project? What if you don’t know how to code? What if something goes wrong? Not to worry! There are all sorts of ways to get involved with an open source project, and a few tips will help you get the most out of your experience. You don’t have to contribute code A common misconception about contributing to open source is that you need to contribute code. In fact, it’s often the other parts of a project that are most neglected or overlooked . You’ll do the project a huge favor by offering to pitch in with these types of contributions! 
I’ve been renowned for my work on CocoaPods, but most people don’t know that I actually don’t do any real work on the CocoaPods tool itself. My time on the project is mostly spent doing things like documentation and working on branding. — @orta , “Moving to OSS by default” Even if you like to write code, other types of contributions are a great way to get involved with a project and meet other community members. Building those relationships will give you opportunities to work on other parts of the project. Do you like planning events? Organize workshops or meetups about the project, like @fzamperin did for NodeSchool Organize the project’s conference (if they have one) Help community members find the right conferences and submit proposals for speaking Do you like to design? Restructure layouts to improve the project’s usability Conduct user research to reorganize and refine the project’s navigation or menus, like Drupal suggests Put together a style guide to help the project have a consistent visual design Create art for t-shirts or a new logo, like hapi.js’s contributors did Do you like to write? Write and improve the project’s documentation, like @CBID2 did for OpenSauced’s documentation Curate a folder of examples showing how the project is used Start a newsletter for the project, or curate highlights from the mailing list, like @opensauced did for their product Write tutorials for the project, like PyPA’s contributors did Write a translation for the project’s documentation, like @frontendwizard did for the instructions for freeCodeCamp’s CSS Flexbox challenge Seriously, [documentation] is mega-important. The documentation so far has been great and has been a killer feature of Babel. There are sections that could certainly use some work and even the addition of a paragraph here or there is extremely appreciated. — @kittens, “Call for contributors” Do you like organizing? 
Link to duplicate issues, and suggest new issue labels, to keep things organized Go through open issues and suggest closing old ones, like @nzakas did for ESLint Ask clarifying questions on recently opened issues to move the discussion forward Do you like to code? Find an open issue to tackle, like @dianjin did for Leaflet Ask if you can help write a new feature Automate project setup Improve tooling and testing Do you like helping people? Answer questions about the project on e.g., Stack Overflow ( like this Postgres example ) or Reddit Answer questions for people on open issues Help moderate the discussion boards or conversation channels Do you like helping others code? Review code on other people’s submissions Write tutorials for how a project can be used Offer to mentor another contributor, like @ereichert did for @bronzdoc on Rust You don’t just have to work on software projects! While “open source” often refers to software, you can collaborate on just about anything. There are books, recipes, lists, and classes that get developed as open source projects. For example: @sindresorhus curates a list of “awesome” lists @h5bp maintains a list of potential interview questions for front-end developer candidates @stuartlynn and @nicole-a-tesla made a collection of fun facts about puffins Even if you’re a software developer, working on a documentation project can help you get started in open source. It’s often less intimidating to work on projects that don’t involve code, and the process of collaboration will build your confidence and experience. Orienting yourself to a new project If you go to an issue tracker and things seem confusing, it’s not just you. These tools require a lot of implicit knowledge, but people can help you navigate it and you can ask them questions. — @shaunagm, “How to Contribute to Open Source” For anything more than a typo fix, contributing to open source is like walking up to a group of strangers at a party. 
If you start talking about llamas, while they were deep in a discussion about goldfish, they’ll probably look at you a little strangely. Before jumping in blindly with your own suggestions, start by learning how to read the room. Doing so increases the chances that your ideas will be noticed and heard. Anatomy of an open source project Every open source community is different. Spending years on one open source project means you’ve gotten to know one open source project. Move to a different project, and you might find the vocabulary, norms, and communication styles are completely different. That said, many open source projects follow a similar organizational structure. Understanding the different community roles and overall process will help you get quickly oriented to any new project. A typical open source project has the following types of people: Author: The person/s or organization that created the project Owner: The person/s who has administrative ownership over the organization or repository (not always the same as the original author) Maintainers: Contributors who are responsible for driving the vision and managing the organizational aspects of the project (They may also be authors or owners of the project.) Contributors: Everyone who has contributed something back to the project Community Members: People who use the project. They might be active in conversations or express their opinion on the project’s direction Bigger projects may also have subcommittees or working groups focused on different tasks, such as tooling, triage, community moderation, and event organizing. Look on a project’s website for a “team” page, or in the repository for governance documentation, to find this information. A project also has documentation. These files are usually listed in the top level of a repository. LICENSE: By definition, every open source project must have an open source license . If the project does not have a license, it is not open source. 
README: The README is the instruction manual that welcomes new community members to the project. It explains why the project is useful and how to get started.

CONTRIBUTING: Whereas READMEs help people use the project, contributing docs help people contribute to the project. They explain what types of contributions are needed and how the process works. While not every project has a CONTRIBUTING file, its presence signals that this is a welcoming project to contribute to. A good example of an effective contributing guide is the one from Codecademy’s Docs repository.

CODE_OF_CONDUCT: The code of conduct sets ground rules for participants’ behavior and helps to facilitate a friendly, welcoming environment. While not every project has a CODE_OF_CONDUCT file, its presence signals that this is a welcoming project to contribute to.

Other documentation: There might be additional documentation, such as tutorials, walkthroughs, or governance policies, especially on bigger projects like Astro Docs.

Finally, open source projects use the following tools to organize discussion. Reading through the archives will give you a good picture of how the community thinks and works.

Issue tracker: Where people discuss issues related to the project.

Pull requests: Where people discuss and review changes that are in progress, whether it’s to improve a contributor’s line of code, grammar usage, use of images, etc. Some projects, such as MDN Web Docs, use certain GitHub Actions flows to automate and speed up their code reviews.

Discussion forums or mailing lists: Some projects may use these channels for conversational topics (for example, “How do I…” or “What do you think about…” instead of bug reports or feature requests). Others use the issue tracker for all conversations. A good example of this would be CHAOSS’ weekly newsletter.

Synchronous chat channel: Some projects use chat channels (such as Slack or IRC) for casual conversation, collaboration, and quick exchanges.
A good example of this would be EddieHub’s Discord community.

Finding a project to contribute to

Now that you’ve figured out how open source projects work, it’s time to find a project to contribute to! If you’ve never contributed to open source before, take some advice from U.S. President John F. Kennedy, who once said:

"Ask not what your country can do for you - ask what you can do for your country." — John F. Kennedy Library

Contributing to open source happens at all levels, across projects. You don’t need to overthink what exactly your first contribution will be, or how it will look.

Instead, start by thinking about the projects you already use, or want to use. The projects you’ll actively contribute to are the ones you find yourself coming back to. Within those projects, whenever you catch yourself thinking that something could be better or different, act on your instinct. Open source isn’t an exclusive club; it’s made by people just like you. “Open source” is just a fancy term for treating the world’s problems as fixable.

You might scan a README and find a broken link or a typo. Or you’re a new user and you noticed something is broken, or an issue that you think should really be in the documentation. Instead of ignoring it and moving on, or asking someone else to fix it, see whether you can help out by pitching in. That’s what open source is all about!

According to a study conducted by Igor Steinmacher and other computer science researchers, 28% of casual contributions to open source are documentation, such as typo fixes, reformatting, or writing a translation.

If you’re looking for existing issues you can fix, every open source project has a /contribute page that highlights beginner-friendly issues you can start out with. Navigate to the main page of the repository on GitHub, and add /contribute at the end of the URL (for example https://github.com/facebook/react/contribute).
You can also use one of the following resources to help you discover and contribute to new projects: GitHub Explore, Open Source Friday, First Timers Only, CodeTriage, 24 Pull Requests, Up For Grabs, First Contributions, SourceSort, OpenSauced.

A checklist before you contribute

When you've found a project you'd like to contribute to, do a quick scan to make sure that the project is suitable for accepting contributions. Otherwise, your hard work may never get a response. Here's a handy checklist to evaluate whether a project is good for new contributors.

Meets the definition of open source
- Does it have a license? Usually, there is a file called LICENSE in the root of the repository.

Project actively accepts contributions
Look at the commit activity on the main branch. On GitHub, you can see this information in the Insights tab of a repository's homepage, such as Virtual-Coffee.
- When was the latest commit?
- How many contributors does the project have?
- How often do people commit? (On GitHub, you can find this by clicking "Commits" in the top bar.)

Next, look at the project's issues.
- How many open issues are there?
- Do maintainers respond quickly to issues when they are opened?
- Is there active discussion on the issues?
- Are the issues recent?
- Are issues getting closed? (On GitHub, click the "closed" tab on the Issues page to see closed issues.)

Now do the same for the project's pull requests.
- How many open pull requests are there?
- Do maintainers respond quickly to pull requests when they are opened?
- Is there active discussion on the pull requests?
- Are the pull requests recent?
- How recently were any pull requests merged? (On GitHub, click the "closed" tab on the Pull Requests page to see closed PRs.)

Project is welcoming
A project that is friendly and welcoming signals that it will be receptive to new contributors.
- Do the maintainers respond helpfully to questions in issues?
- Are people friendly in the issues, discussion forum, and chat (for example, IRC or Slack)?
- Do pull requests get reviewed?
- Do maintainers thank people for their contributions?

"Whenever you see a long thread, spot check responses from core developers coming late in the thread. Are they summarizing constructively, and taking steps to bring the thread to a decision while remaining polite? If you see a lot of flame wars going on, that's often a sign that energy is going into argument instead of into development." — @kfogel, Producing OSS

How to submit a contribution

You've found a project you like, and you're ready to make a contribution. Finally! Here's how to get your contribution in, the right way.

Communicating effectively

Whether you're a one-time contributor or trying to join a community, working with others is one of the most important skills you'll develop in open source.

"[As a new contributor,] I quickly realized I had to ask questions if I wanted to be able to close the issue. I skimmed through the code base. Once I had some sense of what was going on, I asked for more direction. And voilà! I was able to solve the issue after getting all the relevant details I needed." — @shubheksha, A Beginner's Very Bumpy Journey Through The World of Open Source

Before you open an issue or pull request, or ask a question in chat, keep these points in mind to help your ideas come across effectively.

Give context. Help others get quickly up to speed. If you're running into an error, explain what you're trying to do and how to reproduce it. If you're suggesting a new idea, explain why you think it'd be useful to the project (not just to you!).
😇 "X doesn't happen when I do Y."
😢 "X is broken! Please fix it."

Do your homework beforehand. It's OK not to know things, but show that you tried. Before asking for help, be sure to check a project's README, documentation, issues (open or closed), mailing list, and search the internet for an answer. People will appreciate it when you demonstrate that you're trying to learn.
😇 "I'm not sure how to implement X. I checked the help docs and didn't find any mentions."
😢 "How do I X?"

Keep requests short and direct. Much like sending an email, every contribution, no matter how simple or helpful, requires someone else's review. Many projects have more incoming requests than people available to help. Be concise; you will increase the chance that someone will be able to help you.
😇 "I'd like to write an API tutorial."
😢 "I was driving down the highway the other day and stopped for gas, and then I had this amazing idea for something we should be doing, but before I explain that, let me show you…"

Keep all communication public. Although it's tempting, don't reach out to maintainers privately unless you need to share sensitive information (such as a security issue or serious conduct violation). When you keep the conversation public, more people can learn and benefit from your exchange. Discussions can be, in themselves, contributions.
😇 (as a comment) "@-maintainer Hi there! How should we proceed on this PR?"
😢 (as an email) "Hey there, sorry to bother you over email, but I was wondering if you've had a chance to review my PR."

It's okay to ask questions (but be patient!). Everybody was new to the project at some point, and even experienced contributors need to get up to speed when they look at a new project. By the same token, even longtime maintainers are not always familiar with every part of the project. Show them the same patience that you'd want them to show to you.
😇 "Thanks for looking into this error. I followed your suggestions. Here's the output."
😢 "Why can't you fix my problem? Isn't this your project?"

Respect community decisions. Your ideas may differ from the community's priorities or vision. They may offer feedback or decide not to pursue your idea. While you should discuss and look for compromise, maintainers have to live with their decision longer than you will. If you disagree with their direction, you can always work on your own fork or start your own project.
😇 "I'm disappointed you can't support my use case, but as you've explained it only affects a minor portion of users, I understand why. Thanks for listening."
😢 "Why won't you support my use case? This is unacceptable!"

Above all, keep it classy. Open source is made up of collaborators from all over the world. Context gets lost across languages, cultures, geographies, and time zones. In addition, written communication makes it harder to convey a tone or mood. Assume good intentions in these conversations. It's fine to politely push back on an idea, ask for more context, or further clarify your position. Just try to leave the internet a better place than when you found it.

Gathering context

Before doing anything, do a quick check to make sure your idea hasn't been discussed elsewhere. Skim the project's README, issues (open and closed), mailing list, and Stack Overflow. You don't have to spend hours going through everything, but a quick search for a few key terms goes a long way. If you can't find your idea elsewhere, you're ready to make a move. If the project is on GitHub, you'll likely communicate by doing the following:
- Raising an issue: issues are like starting a conversation or discussion.
- Opening a pull request: pull requests are for starting work on a solution.
- Communication channels: if the project has a designated Discord, IRC, or Slack channel, consider starting a conversation or asking for clarification about your contribution.

Before you open an issue or pull request, check the project's contributing docs (usually a file called CONTRIBUTING, or in the README) to see whether you need to include anything specific. For example, they may ask that you follow a template, or require that you use tests. If you want to make a substantial contribution, open an issue to ask before working on it.
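As part of gathering context, several of the activity-checklist questions from earlier (latest commit, contributor count, recent commit volume) can also be answered locally from a clone with plain git. A minimal sketch, assuming git is installed; summarize_activity is an illustrative helper name, not a real tool:

```shell
# Gauge project activity from an existing local clone (assumes git is installed).
# summarize_activity is an illustrative helper, not part of any real tool.
summarize_activity() {
  repo_dir="$1"
  git -C "$repo_dir" log -1 --format="Last commit: %ad" --date=short
  echo "Top contributors:"
  git -C "$repo_dir" shortlog -sn HEAD | head -5
  printf "Commits in the last month: "
  git -C "$repo_dir" log --since="1 month ago" --oneline | wc -l
}

# Example: summarize_activity ~/src/some-project
```

Low numbers aren't automatically disqualifying, but they tell you how long a response to your issue or PR is likely to take.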
It's helpful to watch the project for a while (on GitHub, you can click "Watch" to be notified of all conversations), and get to know community members, before doing work that might not get accepted.

"You'll learn a lot from taking a single project you actively use, 'watching' it on GitHub and reading every issue and PR." — @gaearon on joining projects

Opening an issue

You should usually open an issue in the following situations:
- Report an error you can't solve yourself.
- Discuss a high-level topic or idea (for example, community, vision, or policies).
- Propose a new feature or other project idea.

Tips for communicating on issues:
- If you see an open issue that you want to tackle, comment on the issue to let people know you're on it. That way, people are less likely to duplicate your work.
- If an issue was opened a while ago, it's possible that it's being addressed somewhere else, or has already been resolved, so comment to ask for confirmation before starting work.
- If you opened an issue, but figured out the answer later on your own, comment on the issue to let people know, then close the issue. Even documenting that outcome is a contribution to the project.

Opening a pull request

You should usually open a pull request in the following situations:
- Submit small fixes such as a typo, a broken link, or an obvious error.
- Start work on a contribution that was already asked for, or that you've already discussed, in an issue.

A pull request doesn't have to represent finished work. It's usually better to open a pull request early on, so others can watch or give feedback on your progress. Just open it as a "draft", or mark it as a "WIP" (work in progress) in the subject line or in a "Notes to Reviewers" section if one is provided (or create your own, such as **## Notes to Reviewer**). You can always add more commits later.

If the project is on GitHub, here's how to submit a pull request:
- Fork the repository and clone it locally.
- Connect your local clone to the original "upstream" repository by adding it as a remote. Pull in changes from upstream often so that you stay up to date; when you submit your pull request, merge conflicts will be less likely. (See more detailed instructions here.)
- Create a branch for your edits.
- Reference any relevant issues or supporting documentation in your PR (for example, "Closes #37.").
- Include screenshots of the before and after if your changes include differences in HTML/CSS. Drag and drop the images into the body of your pull request.
- Test your changes! Run your changes against any existing tests, and create new ones when needed. It's important to make sure your changes don't break the existing project.
- Contribute in the style of the project to the best of your abilities. This may mean using indents, semicolons, or comments differently than you would in your own repository, but it makes it easier for the maintainer to merge, and for others to understand and maintain in the future.

If this is your first pull request, check out Make a Pull Request, a walkthrough video tutorial created by @kentcdodds. You can also practice making a pull request in the First Contributions repository, created by @Roshanjossey.

What happens after you submit your contribution

Before we start celebrating, one of the following will happen after you submit your contribution:

😭 You don't get a response

Hopefully you checked the project for signs of activity before making a contribution. Even on an active project, however, it's possible that your contribution won't get a response. If you haven't gotten a response in over a week, it's fair to politely respond in that same thread, asking someone for a review. If you know the name of the right person to review your contribution, you can @-mention them in that thread. Don't reach out to that person privately; remember that public communication is vital to open source projects.
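The fork, clone, upstream-remote, and branch steps above can be sketched with plain git. This is a minimal sketch, not a definitive workflow: open_pr_branch is an illustrative helper name, and the URLs and branch name in the usage comment are placeholders, not real repositories.

```shell
# Sketch of the fork-and-branch workflow described above. open_pr_branch is an
# illustrative helper; the URLs and branch name below are placeholders.
open_pr_branch() {
  fork_url="$1"; upstream_url="$2"; branch="$3"; workdir="$4"

  git clone -q "$fork_url" "$workdir"                    # 1. clone your fork locally
  git -C "$workdir" remote add upstream "$upstream_url"  # 2. add the original repo as "upstream"
  git -C "$workdir" fetch -q upstream                    # 3. pull in upstream changes often...
  git -C "$workdir" merge -q --ff-only \
    "upstream/$(git -C "$workdir" rev-parse --abbrev-ref HEAD)"  # ...to reduce merge conflicts
  git -C "$workdir" checkout -q -b "$branch"             # 4. create a branch for your edits
  git -C "$workdir" push -q -u origin "$branch"          # 5. publish, then open the PR on GitHub
}

# Example (placeholders, not real repositories):
# open_pr_branch https://github.com/YOUR-USERNAME/REPO.git \
#                https://github.com/OWNER/REPO.git \
#                fix-broken-readme-link REPO
```

The final push only publishes the branch to your fork; you still open the pull request itself from the GitHub web interface.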
If you give a polite reminder and still do not receive a response, it's possible that nobody will ever respond. It's not a great feeling, but don't let that discourage you! 😄 There are many possible reasons why you didn't get a response, including personal circumstances that may be out of your control. Try to find another project or another way to contribute. If anything, this is a good reason not to invest too much time in a contribution before other community members are engaged and responsive.

🚧 Someone requests changes to your contribution

It's common to be asked to make changes to your contribution, whether that's feedback on the scope of your idea or changes to your code. When someone requests changes, be responsive. They've taken the time to review your contribution. Opening a PR and walking away is bad form. If you don't know how to make changes, research the problem, then ask for help if you need it. A good example of this is the feedback another contributor gave @a-m-lamb on their pull request to Codecademy's Docs.

If you don't have time to work on the issue anymore (for example, the conversation has been going on for months and your circumstances have changed, or you're unable to find a solution), let the maintainer know so that they can open the issue up for someone else, like @RitaDee did for an issue at OpenSauced's app repository.

👎 Your contribution doesn't get accepted

Your contribution may or may not be accepted in the end. Hopefully you didn't put too much work into it already. If you're not sure why it wasn't accepted, it's perfectly reasonable to ask the maintainer for feedback and clarification. Ultimately, however, you'll need to respect that this is their decision. Don't argue or get hostile. You're always welcome to fork and work on your own version if you disagree!

🎉 Your contribution gets accepted

Hooray! You've successfully made an open source contribution! You did it!
🎉 Whether you just made your first open source contribution, or you're looking for new ways to contribute, we hope you're inspired to take action. Even if your contribution wasn't accepted, don't forget to say thanks when a maintainer puts effort into helping you. Open source is made by people like you: one issue, pull request, comment, or high-five at a time. | 2026-01-13T09:29:15 |
https://www.linkedin.com/legal/user-agreement?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2FshareArticle%3Fmini%3Dtrue%26url%3Dhttps%253A%252F%252Fdev%2Eto%252Faditya_singh_172b37651201%252F30-core-algorithm-ep-06-prefix-sum-1ii0%26title%3D30%2520Core%2520Algorithm%2520%253A%2520Ep-06%2520%253APrefix%2520Sum%26summary%3DWhy%2520Prefix%2520Sum%2520Is%2520Really%2520About%2520Making%2520Accumulated%2520Cost%2520Explicit%2520%2520%2520Many%2520performance%2E%2E%2E%26source%3DDEV%2520Community&trk=registration-frontend_join-form-user-agreement | User Agreement | LinkedIn

User Agreement | Summary of User Agreement | Privacy Policy | Professional Community Policies | Cookie Policy | Copyright Policy | Regional Info | EU Notice | California Privacy Disclosure | U.S. State Privacy Laws

User Agreement
Effective on November 3, 2025

Our mission is to connect the world's professionals to allow them to be more productive and successful. Our services are designed to promote economic opportunity for our members by enabling you and millions of other professionals to meet, exchange ideas, learn, and find opportunities or employees, work, and make decisions in a network of trusted relationships.

Table of Contents:
- Introduction
- Obligations
- Rights and Limits
- Disclaimer and Limit of Liability
- Termination
- Governing Law and Dispute Resolution
- General Terms
- LinkedIn "Dos and Don'ts"
- Complaints Regarding Content
- How To Contact Us

Introduction

1.1 Contract

When you use our Services you agree to all of these terms. Your use of our Services is also subject to our Cookie Policy and our Privacy Policy, which covers how we collect, use, share, and store your personal information.
By creating a LinkedIn account or accessing or using our Services (described below), you are agreeing to enter into a legally binding contract with LinkedIn (even if you are using third party credentials or using our Services on behalf of a company). If you do not agree to this contract ("Contract" or "User Agreement"), do not create an account or access or otherwise use any of our Services. If you wish to terminate this Contract at any time, you can do so by closing your account and no longer accessing or using our Services.

As a Visitor or Member of our Services, the collection, use, and sharing of your personal data is subject to our Privacy Policy, our Cookie Policy and other documents referenced in our Privacy Policy, and updates. You acknowledge and have read our Privacy Policy.

Services

This Contract applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-related sites, apps, communications, and other services that state that they are offered under this Contract ("Services"), including the offsite collection of data for those Services, such as via our ads and the "Apply with LinkedIn" and "Share with LinkedIn" plugins.

LinkedIn and other Key Terms

You are entering into this Contract with LinkedIn (also referred to as "we" and "us").

Designated Countries. We use the term "Designated Countries" to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. If you reside in the "Designated Countries", you are entering into this Contract with LinkedIn Ireland Unlimited Company ("LinkedIn Ireland") and LinkedIn Ireland will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you reside outside of the "Designated Countries", you are entering into this Contract with LinkedIn Corporation ("LinkedIn Corp.") and LinkedIn Corp.
will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services.

Affiliates. Affiliates are companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., Github, Inc.).

Social Action. Actions that members take on our services such as likes, comments, follows, and sharing content.

Content. Content includes, for example, feed posts, feedback, comments, profiles, articles (and contributions), group posts, job postings, messages (including InMails), videos, photos, audio, and/or PDFs.

1.2 Members and Visitors

This Contract applies to Members and Visitors. When you register and join the LinkedIn Services, you become a "Member". If you have chosen not to register for our Services, you may access certain features as a "Visitor."

1.3 Changes

We may make changes to this Contract.

We may modify this Contract, our Privacy Policy and our Cookie Policy from time to time. If we materially change these terms or if we are legally required to provide notice, we will provide you notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective. However, we may not always provide prior notice of changes to these terms (1) when those changes are legally required to be implemented with immediate effect, or (2) when those changes relate to a newly launched service or feature. We agree that changes cannot be retroactive. If you object to any of these changes, you may close your account. Your continued use of our Services after we publish or send a notice about our changes to these terms means that you are consenting to the updated terms as of their effective date.

2. Obligations

2.1 Service Eligibility

Here are some promises that you make to us in this Contract: you're eligible to enter into this Contract and you are at least our "Minimum Age." The Services are not for use by anyone under the age of 16.

To use the Services, you agree that: (1) you must be the "Minimum Age" (described below) or older; (2) you will only have one LinkedIn account, which must be in your real name; and (3) you are not already restricted by LinkedIn from using the Services. Creating an account with false information is a violation of our terms, including accounts registered on behalf of others or persons under the age of 16.

"Minimum Age" means 16 years old. However, if law requires that you must be older in order for LinkedIn to lawfully provide the Services to you without parental consent (including using your personal data) then the Minimum Age is such older age. Learn More

2.2 Your Account

You will keep your password a secret. You will not share your account with anyone else and will follow our policies and the law.

Members are account holders. You agree to: (1) protect against wrongful access to your account (e.g., use a strong password and keep it confidential); (2) not share or transfer your account or any part of it (e.g., sell or transfer the personal data of others by transferring your connections); and (3) follow the law, our list of Dos and Don'ts (below), and our Professional Community Policies. Learn More

You are responsible for anything that happens through your account unless you close it or report misuse. As between you and others (including your employer), your account belongs to you.
However, if the Services were purchased by another party for you to use in connection with your work for them (e.g., Recruiter seat or LinkedIn Learning subscription bought by your employer), the party paying for such Service has the right to control access to and get reports on your use of such paid Service; however, they do not have rights to your personal account.

2.3 Payment

You'll honor your payment obligations and you are okay with us storing your payment information. You understand that there may be fees and taxes that are added to our prices. Refunds are subject to our policy, and we may modify our prices; those modified prices will apply prospectively.

If you buy any of our paid Services, you agree to pay us the applicable fees and taxes and you agree to the additional terms specific to the paid Services. Failure to pay these fees will result in the termination of your paid Services. Also, you agree that:
- Your purchase may be subject to foreign exchange fees or differences in prices based on location (e.g., exchange rates).
- We may store and continue billing your payment method (e.g., credit card), even after it has expired, to avoid interruptions in your paid Services and to use it to pay for other Services you may buy. If your primary payment method fails, we may automatically charge a secondary payment method, if you have provided one. You may update or change your payment method. Learn more
- If you purchase a subscription, your payment method automatically will be charged at the start of each subscription period for the fees and taxes applicable to that period. To avoid future charges, cancel before the renewal date. Learn how to cancel or suspend your paid subscription Services.
- We may modify our prices effective prospectively upon reasonable notice to the extent allowed under the law.
- All of your paid Services are subject to LinkedIn's refund policy.
- We may calculate taxes payable by you based on the billing information that you provide us.
You can get a copy of your invoice through your LinkedIn account settings under "Purchase History".

2.4 Notices and Messages

You're okay with us providing notices and messages to you through our websites, apps, and contact information. If your contact information is out of date, you may miss out on important notices.

You agree that we will provide notices and messages to you in the following ways: (1) within the Services or (2) sent to the contact information you provided us (e.g., email, mobile number, physical address). You agree to keep your contact information up to date. Please review your settings to control and limit the types of messages you receive from us.

2.5 Sharing

When you share information on our Services, others can see, copy and use that information.

Our Services allow sharing of information (including content) in many ways, such as through your profile, posts, articles, group posts, links to news articles, job postings, messages, and InMails. Depending on the feature and choices you make, information that you share may be seen by other Members, Visitors, or others (on or off of the Services). Where we have made settings available, we will honor the choices you make about who can see content or other information (e.g., message content to your addressees, sharing content only to LinkedIn connections, restricting your profile visibility from search tools, or opting not to notify others of your LinkedIn profile update). For job searching activities, we default to not notifying your connections or the public. So, if you apply for a job through our Services or opt to signal that you are interested in a job, our default is to share it only with the job poster.

To the extent that laws allow this, we are not obligated to publish any content or other information on our Services and can remove it with or without notice.

3. Rights and Limits

3.1 Your License to LinkedIn

You own all of your original content that you provide to us, but you also grant us a non-exclusive license to it. We'll honor the choices you make about who gets to see your content, including how it can be used for ads.

As between you and LinkedIn, you own your original content that you submit or post to the Services. You grant LinkedIn and our Affiliates the following non-exclusive license to the content and other information you provide (e.g., share, post, upload, and/or otherwise submit) to our Services: a worldwide, transferable and sublicensable right to use, copy, modify, distribute, publicly perform and display, host, and process your content and other information without any further consent, notice and/or compensation to you or others.

These rights are limited in the following ways:
- You can end this license for specific content by deleting such content from the Services, or generally by closing your account, except (a) to the extent you (1) shared it with others as part of the Services and they copied, re-shared it or stored it, (2) we had already sublicensed others prior to your content removal or closing of your account, or (3) we are required by law to retain or share it with others, and (b) for the reasonable time it takes to remove the content you delete from backup and other systems.
- We will not include your content in advertisements for the products and services of third parties to others without your separate consent (including sponsored content). However, without compensation to you or others, ads may be served near your content and other information, and your social actions may be visible and included with ads, as noted in the Privacy Policy. If you use a Service feature, we may mention that with your name or photo to promote that feature within our Services, subject to your settings.
- We will honor the audience choices for shared content (e.g., "Connections only").
For example, if you choose to share your post to "Anyone on or off LinkedIn" (or similar): (a) we may make it available off LinkedIn; (b) we may enable others to publicly share onto third-party services (e.g., a Member embedding your post on a third party service); and/or (c) we may enable search tools to make that public content findable through their services. Learn More

While we may edit and make format changes to your content (such as translating or transcribing it, modifying the size, layout or file type, and removing or adding labels or metadata), we will take steps to avoid materially modifying the meaning of your expression in content you share with others. Because you own your original content and we only have non-exclusive rights to it, you may choose to make it available to others, including under the terms of a Creative Commons license.

You and LinkedIn agree that if content includes personal data, it is subject to our Privacy Policy. You and LinkedIn agree that we may access, store, process, and use any information (including content and/or personal data) that you provide in accordance with the terms of the Privacy Policy and your choices (including settings).

By submitting suggestions or other feedback regarding our Services to LinkedIn, you agree that LinkedIn can use and share (but does not have to) such feedback for any purpose without compensation to you.

You promise to only provide content and other information that you have the right to share and that your LinkedIn profile will be truthful. You agree to only provide content and other information that does not violate the law or anyone's rights (including intellectual property rights). You have choices about how much information to provide on your profile but also agree that the profile information you provide will be truthful. LinkedIn may be required by law to remove certain content and other information in certain countries.
3.2 Service Availability

We may change or limit the availability of some features, or end any Service.

We may change, suspend or discontinue any of our Services. We may also limit the availability of features, content and other information so that they are not available to all Visitors or Members (e.g., by country or by subscription access). We don't promise to store or show (or keep showing) any information (including content) that you've shared. LinkedIn is not a storage service. You agree that we have no obligation to store, maintain or provide you a copy of any content or other information that you or others provide, except to the extent required by applicable law and as noted in our Privacy Policy.

3.3 Other Content, Sites and Apps

Your use of others' content and information posted on our Services is at your own risk. Others may offer their own products and services through our Services, and we aren't responsible for those third-party activities.

Others' Content: By using the Services, you may encounter content or other information that might be inaccurate, incomplete, delayed, misleading, illegal, offensive, or otherwise harmful. You agree that we are not responsible for content or other information made available through or within the Services by others, including Members. While we apply automated tools to review much of the content and other information presented in the Services, we cannot always prevent misuse of our Services, and you agree that we are not responsible for any such misuse. You also acknowledge the risk that others may share inaccurate or misleading information about you or your organization, and that you or your organization may be mistakenly associated with content about others, for example, when we let connections and followers know you or your organization were mentioned in the news. Members have choices about this feature.
Others' Products and Services: LinkedIn may help connect you to other Members (e.g., Members using Services Marketplace or our enterprise recruiting, jobs, sales, or marketing products) who offer you opportunities (on behalf of themselves, their organizations, or others) such as offers to become a candidate for employment or other work or offers to purchase products or services. You acknowledge that LinkedIn does not perform these offered services, employ those who perform these services, or provide these offered products. You further acknowledge that LinkedIn does not supervise, direct, control, or monitor Members in the making of these offers, or in their providing you with work, delivering products or performing services, and you agree that (1) LinkedIn is not responsible for these offers, or performance or procurement of them, (2) LinkedIn does not endorse any particular Member's offers, and (3) LinkedIn is not an agent or employment agency on behalf of any Member offering employment or other work, products or services. With respect to employment or other work, LinkedIn does not make employment or hiring decisions on behalf of Members offering opportunities and does not have such authority from Members or organizations using our products. For Services Marketplace, (a) you must be at least 18 years of age to procure, offer, or perform services, and (b) you represent and warrant that you have all the required licenses and will provide services consistent with the relevant industry standards and our Professional Community Policies.

Others' Events: Similarly, LinkedIn may help you register for and/or attend events organized by Members and connect with other Members who are attendees at such events.
You agree that (1) LinkedIn is not responsible for the conduct of any of the Members or other attendees at such events, (2) LinkedIn does not endorse any particular event listed on our Services, (3) LinkedIn does not review and/or vet any of these events or speakers, and (4) you will adhere to the terms and conditions that apply to such events. 3.4 Limits We have the right to limit how you connect and interact on our Services. LinkedIn reserves the right to limit your use of the Services, including the number of your connections and your ability to contact other Members. LinkedIn reserves the right to restrict, suspend, or terminate your account if you breach this Contract or the law or are misusing the Services (e.g., violating any of the Dos and Don’ts or Professional Community Policies). We can also remove any content or other information you shared if we believe it violates our Professional Community Policies or Dos and Don’ts or otherwise violates this Contract. Learn more about how we moderate content. 3.5 Intellectual Property Rights We’re providing you notice about our intellectual property rights. LinkedIn reserves all of its intellectual property rights in the Services. Trademarks and logos used in connection with the Services are the trademarks of their respective owners. LinkedIn, and “in” logos and other LinkedIn trademarks, service marks, graphics and logos used for our Services are trademarks or registered trademarks of LinkedIn. 3.6 Recommendations and Automated Processing We use data and other information about you to make and order relevant suggestions and to generate content for you and others. Recommendations: We use the data and other information that you provide and that we have about Members and content on the Services to make recommendations for connections, content, ads, and features that may be useful to you.
We use that data and other information to recommend and to present information to you in an order that may be more relevant for you. For example, that data and information may be used to recommend jobs to you and you to recruiters and to organize content in your feed in order to optimize your experience and use of the Services. Keeping your profile accurate and up to date helps us to make these recommendations more accurate and relevant. Learn more. Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information. The Services may include content automatically generated and shared using tools offered by LinkedIn or others off LinkedIn. Like all content and other information on our Services, regardless of whether it’s labeled as created by “AI”, be sure to carefully review before relying on it. 4. Disclaimer and Limit of Liability 4.1 No Warranty This is our disclaimer of legal liability for the quality, safety, or reliability of our Services. LINKEDIN AND ITS AFFILIATES MAKE NO REPRESENTATION OR WARRANTY ABOUT THE SERVICES, INCLUDING ANY REPRESENTATION THAT THE SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, AND PROVIDE THE SERVICES (INCLUDING CONTENT, OUTPUT AND INFORMATION) ON AN “AS IS” AND “AS AVAILABLE” BASIS. TO THE FULLEST EXTENT PERMITTED UNDER APPLICABLE LAW, LINKEDIN AND ITS AFFILIATES DISCLAIM ANY IMPLIED OR STATUTORY WARRANTY, INCLUDING ANY IMPLIED WARRANTY OF TITLE, ACCURACY, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
If you plan to use content, output and information for any reason, it is your responsibility to verify its accuracy and fitness for your purposes, because any content, output and information on the Services may not reflect accurate, complete, or current information. 4.2 Exclusion of Liability These are the limits of legal liability we may have to you. TO THE FULLEST EXTENT PERMITTED BY LAW (AND UNLESS LINKEDIN HAS ENTERED INTO A SEPARATE WRITTEN AGREEMENT THAT OVERRIDES THIS CONTRACT), LINKEDIN AND ITS AFFILIATES WILL NOT BE LIABLE IN CONNECTION WITH THIS CONTRACT FOR LOST PROFITS OR LOST BUSINESS OPPORTUNITIES, REPUTATION (E.G., OFFENSIVE OR DEFAMATORY STATEMENTS), LOSS OF DATA (E.G., DOWN TIME OR LOSS, USE OF, OR CHANGES TO, YOUR INFORMATION OR CONTENT) OR ANY INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES. LINKEDIN AND ITS AFFILIATES WILL NOT BE LIABLE TO YOU IN CONNECTION WITH THIS CONTRACT FOR ANY AMOUNT THAT EXCEEDS (A) THE TOTAL FEES PAID OR PAYABLE BY YOU TO LINKEDIN FOR THE SERVICES DURING THE TERM OF THIS CONTRACT, IF ANY, OR (B) US $1000. 4.3 Basis of the Bargain; Exclusions The limitations of liability in this Section 4 are part of the basis of the bargain between you and LinkedIn and shall apply to all claims of liability (e.g., warranty, tort, negligence, contract and law) even if LinkedIn or its affiliates has been told of the possibility of any such damage, and even if these remedies fail their essential purpose. THESE LIMITATIONS OF LIABILITY DO NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY OR FOR FRAUD, GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT, OR IN CASES OF NEGLIGENCE, WHERE A MATERIAL OBLIGATION HAS BEEN BREACHED.
A MATERIAL OBLIGATION BEING AN OBLIGATION WHICH FORMS A PREREQUISITE TO OUR DELIVERY OF SERVICES AND ON WHICH YOU MAY REASONABLY RELY, BUT ONLY TO THE EXTENT THAT THE DAMAGES WERE DIRECTLY CAUSED BY THE BREACH AND WERE FORESEEABLE UPON CONCLUSION OF THIS CONTRACT AND TO THE EXTENT THAT THEY ARE TYPICAL IN THE CONTEXT OF THIS CONTRACT. 5. Termination We can each end this Contract, but some rights and obligations survive. Both you and LinkedIn may terminate this Contract at any time with notice to the other. On termination, you lose the right to access or use the Services. The following shall survive termination: Our rights to use and disclose your feedback; Section 3 (subject to 3.1.1); Sections 4, 6, 7, and 8.2 of this Contract; and Any amounts owed by either party prior to termination remain owed after termination. You can visit our Help Center to learn about how to close your account. 6. Governing Law and Dispute Resolution In the unlikely event we end up in a legal dispute, depending on where you live, you and LinkedIn agree to resolve it in California courts using California law, Dublin, Ireland courts using Irish law, or as otherwise provided in this section. If you live in the Designated Countries, the laws of Ireland govern all claims related to LinkedIn's provision of the Services, but this shall not deprive you of the mandatory consumer protections under the law of the country to which we direct your Services where you have habitual residence. With respect to jurisdiction, you and LinkedIn agree to choose the courts of the country to which we direct your Services where you have habitual residence for all disputes arising out of or relating to this User Agreement, or in the alternative, you may choose the responsible court in Ireland.
If you are a business user within the scope of Article 6(12) of the EU Digital Markets Act (“DMA”) and have a dispute arising out of or in connection with Article 6(12) of the DMA, you may also utilize the alternative dispute resolution mechanism available in the Help Center. For others outside of Designated Countries, including those who live outside of the United States: You and LinkedIn agree that the laws of the State of California, U.S.A., excluding its conflict of laws rules, shall exclusively govern any dispute relating to this Contract and/or the Services. You and LinkedIn both agree that all claims and disputes can be litigated only in the federal or state courts in Santa Clara County, California, USA, and you and LinkedIn each agree to personal jurisdiction in those courts. You may have additional rights of redress and appeal for some decisions made by LinkedIn that impact you. 7. General Terms Here are some important details about the Contract. If a court with authority over this Contract finds any part of it unenforceable, you and we agree that the court should modify the terms to make that part enforceable while still achieving its intent. If the court cannot do that, you and we agree to ask the court to remove that unenforceable part and still enforce the rest of this Contract. This Contract (including additional terms that may be provided by us when you engage with a feature of the Services) is the only agreement between us regarding the Services and supersedes all prior agreements for the Services. If we don't act to enforce a breach of this Contract, that does not mean that LinkedIn has waived its right to enforce this Contract. You may not assign or transfer this Contract (or your membership or use of Services) to anyone without our consent. However, you agree that LinkedIn may assign this Contract to its affiliates or a party that buys it without your consent. There are no third-party beneficiaries to this Contract.
You agree that the only way to provide us legal notice is at the addresses provided in Section 10. 8. LinkedIn “Dos and Don’ts” LinkedIn is a community of professionals. This list of “Dos and Don’ts” along with our Professional Community Policies limits what you can and cannot do on our Services, unless otherwise explicitly permitted by LinkedIn in a separate writing (e.g., through a research agreement). 8.1. Dos You agree that you will: Comply with all applicable laws, including, without limitation, privacy laws, intellectual property laws, anti-spam laws, export control laws, laws governing the content shared, and other applicable laws and regulatory requirements; Provide accurate contact and identity information to us and keep it updated; Use your real name on your profile; and Use the Services in a professional manner. 8.2. Don’ts You agree that you will not: Create a false identity on LinkedIn, misrepresent your identity, create a Member profile for anyone other than yourself (a real person), or use or attempt to use another’s account (such as sharing log-in credentials or copying cookies); Develop, support or use software, devices, scripts, robots or any other means or processes (such as crawlers, browser plugins and add-ons or any other technology) to scrape or copy the Services, including profiles and other data from the Services; Override any security feature or bypass or circumvent any access controls or use limits of the Services (such as search results, profiles, or videos); Copy, use, display or distribute any information (including content) obtained from the Services, whether directly or through third parties (such as search tools or data aggregators or brokers), without the consent of the content owner (such as LinkedIn for content it owns); Disclose information that you do not have the consent to disclose (such as confidential information of others (including your employer)); Violate the intellectual property rights of others, including copyrights,
patents, trademarks, trade secrets or other proprietary rights. For example, do not copy or distribute (except through the available sharing functionality) the posts or other content of others without their permission, which they may give by posting under a Creative Commons license; Violate the intellectual property or other rights of LinkedIn, including, without limitation, (i) copying or distributing our learning videos or other materials, (ii) copying or distributing our technology, unless it is released under open source licenses; or (iii) using the word “LinkedIn” or our logos in any business name, email, or URL except as provided in the Brand Guidelines; Post (or otherwise share) anything that contains software viruses, worms, or any other harmful code; Reverse engineer, decompile, disassemble, decipher or otherwise attempt to derive the source code for the Services or any related technology that is not open source; Imply or state that you are affiliated with or endorsed by LinkedIn without our express consent (e.g., representing yourself as an accredited LinkedIn trainer); Rent, lease, loan, trade, sell/re-sell or otherwise monetize the Services or related data or access to the same, without LinkedIn’s consent; Deep-link to our Services for any purpose other than to promote your profile or a Group on our Services, without LinkedIn’s consent; Use bots or other unauthorized automated methods to access the Services, add or download contacts, send or redirect messages, create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement; Engage in “framing”, “mirroring”, or otherwise simulating the appearance or function of the Services; Overlay or otherwise modify the Services or their appearance (such as by inserting elements into the Services or removing, covering, or obscuring an advertisement included on the Services); Interfere with the operation of, or place an unreasonable load on, the Services (e.g., spam, denial of service
attack, viruses, manipulating algorithms); Violate the Professional Community Policies, certain third party terms where applicable, or any additional terms concerning a specific Service that are provided when you sign up for or start using such Service; Use our Services to do anything that is unlawful, misleading, discriminatory, fraudulent or deceitful (e.g. manipulated media that wrongfully depicts a person saying or doing something they did not say or do); and/or Misuse our reporting or appeals process, including by submitting duplicative, fraudulent or unfounded reports, complaints or appeals. 9. Complaints Regarding Content Contact information for complaints about content provided by our Members. We ask that you report content and other information that you believe violates your rights (including intellectual property rights), our Professional Community Policies or otherwise violates this Contract or the law. To the extent we can under law, we may remove or restrict access to content, features, services, or information, including if we believe that it’s reasonably necessary to avoid harm to LinkedIn or others, violates the law or is reasonably necessary to prevent misuse of our Services. We reserve the right to take action against serious violations of this Contract, including by implementing account restrictions for significant violations. We respect the intellectual property rights of others. We require that information shared by Members be accurate and not in violation of the intellectual property rights or other rights of third parties. We provide a policy and process for complaints concerning content shared, and/or trademarks used, by our Members. 10. How To Contact Us Our Contact information. Our Help Center also provides information about our Services. For general inquiries, you may contact us online. For legal notices or service of process, you may write us at these addresses.
LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines العربية (Arabic) বাংলা (Bangla) Čeština (Czech) Dansk (Danish) Deutsch (German) Ελληνικά (Greek) English (English) Español (Spanish) فارسی (Persian) Suomi (Finnish) Français (French) हिंदी (Hindi) Magyar (Hungarian) Bahasa Indonesia (Indonesian) Italiano (Italian) עברית (Hebrew) 日本語 (Japanese) 한국어 (Korean) मराठी (Marathi) Bahasa Malaysia (Malay) Nederlands (Dutch) Norsk (Norwegian) ਪੰਜਾਬੀ (Punjabi) Polski (Polish) Português (Portuguese) Română (Romanian) Русский (Russian) Svenska (Swedish) తెలుగు (Telugu) ภาษาไทย (Thai) Tagalog (Tagalog) Türkçe (Turkish) Українська (Ukrainian) Tiếng Việt (Vietnamese) 简体中文 (Chinese (Simplified)) 正體中文 (Chinese (Traditional)) Language | 2026-01-13T09:29:15 |
https://drive.google.com/abuse?id=AKkXjozEoJ7rpr_ccAjazUT-UUo8nVcZuEshtA6eSAvALMAXhQM2aifPrbhwyt64l9DpziQRmEM9D-CXALXN478:0&docurl=https://docs.google.com/forms/d/e/1FAIpQLScu2_B6jYbsQUsMgupNufwPsVcQRv5eQtPPaVX0xme01ZaCZA/viewform?usp%3Dpublish-editor | Google Drive - Report Abuse Drive What are you reporting? Reports don’t guarantee removal of the item or any other action on Google’s part. If we do find the item in question to violate these policies, however, we may remove the item from the account, restrict sharing of the item, limit who can view the file, disable the item owner’s access to one or more Google products, delete the owner’s Google account, or even report illegal materials to appropriate law enforcement authorities. Spam or fraud Unwanted solicitation, phishing, malware, or impersonation Non-consensual intimate imagery (NCII) Nude, sexual, or intimate photos or videos shared without the permission of the person shown Disturbing or inappropriate content Violence, hate speech, terrorism, harassment, bullying, threats, intimidation, or unwanted sexually explicit material Other activity or harmful content Promoting activities, goods, services, or information that cause serious and immediate harm Submit report © 2026 Google · Help · Privacy & Terms | 2026-01-13T09:29:15 |
https://www.linkedin.com/signup/cold-join?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fsearch%2Fresults%2Fpeople%2F%3FfacetCurrentCompany%3D%255B103289507%255D&trk=public_biz_employees-join | Sign Up | LinkedIn Make the most of your professional life Join LinkedIn To create a LinkedIn account, you must understand how LinkedIn processes your personal information by selecting learn more for each item listed. We collect and use personal information. We share personal information with third parties to provide our services. Further information is available in our Korea Privacy Addendum. By clicking Agree & Join, you agree to the LinkedIn User Agreement, Privacy Policy, and Cookie Policy. Looking to create a page for a business? Get help Takes less than 2 minutes Join LinkedIn to connect with people, jobs and opportunities that matter | 2026-01-13T09:29:15
https://www.linkedin.com/jobs/cloudflare-jobs-worldwide?f_C=407222&trk=top-card_top-card-primary-button-top-card-primary-cta | 983 Cloudflare jobs in Worldwide LinkedIn Cloudflare in Worldwide Companies: Cloudflare (983) EY (78) NTNU Faculty of Natural Sciences (67) LangChain (54) Women In Social Housing (WISH) North East (46) Date posted: Any time (983) Past month (979) Past week (422) Past 24 hours (36) Job type: Full-time (983) Experience level: Internship (14) Entry level (40) Associate (267) Mid-Senior level (609) Director (46) Workplace: Hybrid (770) Remote (144) On-site (70) 983 Cloudflare Jobs in Worldwide People Operations Coordinator Cloudflare Austin, TX Actively Hiring 4 days ago Chief of Staff to CTO Cloudflare Austin, TX Actively Hiring 4 days ago Business Development Representative (BDR) Cloudflare Austin, TX Actively Hiring 2 days ago **** ******* ****** (****** ****) ********** ******, ** Actively Hiring 2 weeks ago ******** *** *&* **** ********** *** ****, ****** ****** Actively Hiring 1 week ago **** ******** ****** (****** ****) ********** ******, ** Actively Hiring 3 days ago ******** *********** ************** (***) - ******** ****** ********** ******, ** Actively Hiring 2 weeks ago | 2026-01-13T09:29:15
https://www.linkedin.com/products/akamai-technologies-akamai-web-application-protector/?trk=products_seo_search | Akamai Web Application Protector | LinkedIn Akamai Web Application Protector Web Application Firewalls (WAF) by Akamai Technologies About: Web Application Protector is designed to meet the needs of enterprises to easily protect web assets from web application attacks and DDoS attacks with much less maintenance than traditional firewalls. Similar products (Web Application Firewalls (WAF)): Cloudflare WAF, Imperva Web Application Firewall (WAF), Advanced API Security, Radware Cloud WAF, Alibaba Web Application Firewall, DNSFilter. Akamai Technologies products: Akamai Edge DNS (Managed DNS Services Software), Akamai Identity Cloud (Identity & Access Management (IAM) Software), Akamai IoT Edge Connect (Internet of Things (IoT) Software), Akamai Prolexic Routed (DDoS Protection Software), Aura Managed CDN (Content Delivery Network (CDN) Software), BOCC (Live Streaming Software), DNSi AuthServe (Managed DNS Services Software), DNSi Big Data Connector (Managed DNS Services Software), Enterprise Threat Protector (Secure Web Gateways), Media Services Live (Live Streaming Software). | 2026-01-13T09:29:15
https://www.youtube.com/watch?v=UYa0vxdCWHU&list=PLQbEq7a9kYPmNfTlP1mEMIUY00eBf3jCZ | - YouTube About Press Copyright Contact us Creators Advertise Developers Terms Privacy Policy & Safety How YouTube works Test new features © 2026 Google LLC, Sundar Pichai, 1600 Amphitheatre Parkway, Mountain View CA 94043, USA, 0807-882-594 (toll-free), yt-support-solutions-kr@google.com, hosting: Google LLC, business registration information, report illegal filming. Products that creators post, tag, or recommend on YouTube are sold under the sellers’ terms. YouTube does not sell these products and assumes no responsibility for them. | 2026-01-13T09:29:15
https://opensource.guide/id/leadership-and-governance/ | Leadership and Governance | Open Source Guides Growing open source projects can benefit from formal rules for making decisions. Table of Contents: Understanding governance for your growing project; What are examples of formal roles used in open source projects?; How do I formalize these leadership roles?; When should I give someone commit access?; What are common governance structures for open source projects?; Do I need governance docs when I launch my project?; What happens if corporate employees start submitting contributions? Understanding governance for your growing project Your project is growing, people are increasingly interested in joining, and you’re committed to keeping the process going. At this stage, you may be wondering how to incorporate your project’s contributors into your workflow, whether that means giving someone commit access or resolving community debates. If you have questions, we have answers. What are examples of formal roles used in open source projects? Many projects follow a similar structure for contributor roles and recognition. What these roles actually mean is entirely up to you. Here are a few types of roles you may recognize: Maintainer; Contributor; Committer. For some projects, “maintainers” are the only people in the project with commit access. In other projects, they’re simply the people listed in the README as maintainers. A maintainer doesn’t necessarily have to be someone who writes code for your project.
A maintainer might be someone who develops your project, or someone who writes documentation so it can be accessed by many more people. Regardless of what they do day-to-day, a maintainer is someone who feels responsible for the direction of the project and is committed to improving it. A “contributor” could be anyone who comments on an issue or pull request, people who add value to the project (whether triaging issues, writing code, or organizing an event), or anyone with a merged pull request (perhaps the narrowest definition of a contributor). [For Node.js,] every person who shows up to comment on an issue or submit code is a member of the project’s community. Just being able to see them means they have crossed over from being a user to being a contributor. — @mikeal, “Healthy Open Source” The term “committer” might be used to distinguish commit access, which is a specific responsibility, from other forms of contribution. While you can define your project’s roles any way you like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You may know me as the “creator” of Django…but really I was just the guy hired to work on something a year after it was built. (…) People suspect I succeeded because of my programming skill…but I was at best an average programmer. — @jacobian, “PyCon 2015 Keynote” (video) How do I formalize these leadership roles? Formalizing leadership roles helps people feel ownership and tells other community members whom to look to for help.
For a smaller project, designating leaders can be as simple as adding their names to your README or a CONTRIBUTORS file. For a bigger project, if you have a website, create a team page or list your project leaders there. For example, PostgreSQL has a comprehensive team page with short profiles of each contributor. If your project has a very active community of contributors, you may want to form a “core team” of maintainers, or even subcommittees of people who take ownership of different areas (for example, security, issue triaging, or the code of conduct). Let people self-organize and volunteer for the roles they’re most excited about. [We] supplement the core team with several “subteams”. Each subteam is focused on a specific area, for example, language design or libraries. (…) To ensure strong, global coordination and a shared vision for the project as a whole, each subteam is led by a member of the core team. — “Rust Governance RFC” Leadership teams may want to create a dedicated channel (such as on IRC) or meet regularly to discuss the project (such as on Gitter or Google Hangouts). You can even make those meetings public so anyone can listen in. Cucumber-ruby, for example, hosts office hours every week. After you’ve defined leadership roles, don’t forget to document how people can attain them! Establish a clear process for how someone can become a maintainer or join a subcommittee in your project, and write it into your GOVERNANCE.md. Tools like Vossibility can help you track who is (or isn’t) making contributions to the project. Documenting this information avoids the community perception that maintainers make decisions privately.
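A promotion path like the one described above can be sketched in a short GOVERNANCE.md at the repository root. The outline below is only an illustrative skeleton under assumed conventions; the role names and the two-maintainer approval rule are hypothetical examples, not a standard:

```markdown
# Governance

## Roles

- **Contributor**: anyone who has a merged pull request or who helps
  with issues, documentation, or community events.
- **Committer**: a contributor who has been granted commit access.
- **Maintainer**: a committer responsible for the overall direction
  of the project.

## Becoming a committer

1. Make several non-trivial contributions over a sustained period.
2. Be nominated by an existing maintainer in a public issue.
3. Receive approval from at least two maintainers.

## Decision making

Day-to-day decisions are made by lazy consensus on issues and pull
requests; contentious changes are escalated to a maintainer vote.
```

Keeping this file in the repository (rather than on a wiki) means changes to governance themselves go through review.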
Finally, if your project is on GitHub, consider moving it from your personal account to an Organization and adding at least one backup admin. GitHub Organizations make it easier to manage permissions and multiple repositories, and protect your project’s legacy through shared ownership. When should I give someone commit access? Some people think you should give commit access to everyone who makes a contribution. Doing so could encourage more people to feel ownership of your project. On the other hand, especially for bigger, more complicated projects, you may want to give commit access only to people who have demonstrated their commitment. There’s no one right way of doing it; do what makes you most comfortable! If your project is on GitHub, you can use protected branches to manage who can push to a particular branch, and under which circumstances. When someone sends you a pull request, give them commit access to your project. While it may seem foolish at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patches might go unmerged…which will make them put much more work into their proposed changes. — @felixge, “The Pull Request Hack” What are common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL: BDFL stands for “Benevolent Dictator for Life”. Under this structure, one person (usually the project’s founder) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers.
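The protected-branches feature mentioned above can also be configured programmatically. As a rough sketch (the owner and repository names are placeholders, and the exact fields may vary by API version), GitHub’s REST API exposes a `PUT /repos/{owner}/{repo}/branches/{branch}/protection` endpoint; a payload requiring one approving review before anything lands on `main` could look roughly like this:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "required_status_checks": null,
  "enforce_admins": false,
  "restrictions": null
}
```

Projects that prefer not to touch the API can set the same rule in the repository’s branch settings in the web UI.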
Sebuah proyek yang berawal dari sebuah perusahaan juga bisa masuk kedalam kategori BDFL. Meritokrasi: (Catatan: istilah “meritokrasi” memiliki konotasi negatif pada beberapa komunitas dan sejarah sosial dan politis yang kompleks .) Pada model meritokrasi, kontributor aktif sebuah proyek (mereka yang “layak”) diberikan peran dalam pengambilan keputusan formal. Keputusan biasanya dilakukan berdasarkan konsensus voting. Konsep ini diciptakan oleh Yayasan Apache ; semua proyek Apache menganut model ini. Kontribusi hanya dapat dilakukan secara perseorangan mewakili dirinya sendiri, bukan untuk sebuah perusahaan. Kontribusi liberal: Pada model ini, orang-orang yang banyak melakukan pekerjaan adalah yang dianggap berperan, namun ini berbasiskan pada pekerjaan saat ini dan bukan kontribusi yang lampau. Pengambilan keputusan pada proyek berdasarkan pada proses pencarian konsensus dibandingkan voting murni, dan mencoba melibatkan banyak pandangan dari komunitas. Contoh populer proyek yang menggunakan model ini meliputi Node.js dan Rust . Mana yang harus Anda gunakan? Semuanya tergantung Anda! Setiap model memiliki kelebihan dan kekurangan. Meskipun pada awalnya mereka tampak berbeda di awal, semua model memiliki banyak kesamaan. Jika Anda tertarik untuk mengadopsi salah satu model tersebut, silahkan lihat beberapa template berikut: template model BDFL template model meritokrasi kebijakan kontribusi liberal Node.js Apakah saya perlu dokumentasi pengelolaan ketika Saya merilis proyek Saya? Tidak ada waktu terbaik kapan kita harus menuliskan pengelolaan proyek Anda, tetapi akan lebih mudah untuk mendefinisikannya apabila Anda telah melihat dinamika komunitas Anda mulai bermain. Bagian terbaik (dan tersulit) dari pengelolaan open source adalah karena pengelolaan tersebut dibentuk oleh komunitas! Beberapa dokumentasi awal akan membantu pengelolaan proyek Anda, sehingga mulailah menuliskannya. 
Sebagai contoh, Anda bisa mendefinisikan harapan yang jelas untuk perilaku, atau bagaimana proses kontributor bekerja, bahkan pada saat Anda merilis proyek Anda. Jika Anda bagian dari sebuah perusahaan yang merilis proyek open source, maka akan sangat berguna untuk melakukan diskusi internal tentang bagaimana perusahaan Anda akan mengelola dan mengambil keputusan ketika proyek sudah mulai berkembang. Anda juga mungkin perlu menjelaskan tentang bagaimana perusahaan Anda (tidak) akan terlibat dengan proyek. Kami menugaskan kelompok kecil untuk mengelola proyek pada GitHub di Facebook. Sebagai contoh, React dikelola oleh pengembang React. — @caabernathy, “An inside look at open source at Facebook” Apa yang terjadi jika karyawan perkantoran mulai mengajukan kontribusi? Proyek open source yang sukses akan digunakan oleh banyak orang dan perusahaan, dan beberapa perusahaan mungkin akan memberikan pendanaan pada proyek. Sebagai contoh, sebuah proyek mungkin menggunakan kode dari proyek sebagai salah satu komponen pada layanan komersialnya. Seiring dengan proyek yang semakin banyak digunakan, orang-orang yang memiliki keahlian akan menjadi kebutuhan - Anda mungkin salah satunya! - dan mungkin akan dibayar untuk pekerjaan mereka pada proyek. Sangatlah penting untuk memperlakukan aktivitas komersial sebagai sesuatu yang biasa dan merupakan sumber lain dari energi pengembangan. Pengembang yang dibayar tidak perlu mendapatkan perlakuan khusus dibandingkan mereka yang tidak dibayar; tentu saja setiap kontribusi harus dievaluasi berdasarkan kelayakan teknisnya. Meski demikian, orang-orang seharusnya lebih nyaman dengan aktivitas komersial, dan merasa nyaman menyatakan kasus mereka ketika berpendapat tentang peningkatan atau fitur tertentu. “Komersial” sangatlah kompatibel dengan “open source”. 
“Komersial” hanya berarti ada uang yang terlibat didalamnya pada suatu titik - misalnya software yang digunakan pada perdagangan, yang kecenderungannya meningkat setelah proyek banyak diadopsi. (Ketika perangkat lunak open source digunakan sebagai bagian dari produk non open source, secara keseluruhan produk masuk terbilang “proprietary”, meskipun, seperti halnya open source, bisa digunakan untuk kepentingan komersial atau non-komersial.) Seperti halnya orang lain, pengembang yang termotivasi secara komersial mendapatkan pengaruh pada proyek melalui kualitas dan kuantitas dari kontribusinya. Jelas, pengembang yang dibayar untuk waktu mereka bisa melakukan lebih dari mereka yang tidak dibayar, tetapi hal itu sangatlah lumrah: pembayaran hanyalah satu dari banyak faktor yang bisa mempengaruhi seseorang. Pastikan diskusi proyek Anda berfokus pada kontribusi, bukan pada faktor eksternal yang memungkinkan orang untuk membuat kontribusi tersebut. Apakah saya perlu entitas legal untuk mendukung proyek Saya? Anda tidak perlu entitas legal untuk mendukung proyek open source Anda kecuali Anda mengurusi uang. Sebagai contoh, jika Anda hendak membuat bisnis komersial, Anda perlu membuat C Corp atau LLC (jika Anda berada di AS). Jika Anda hanya melakukan pekerjaan kontrak berkaitan dengan proyek open source Anda, Anda bisa menerima uang sebagai pemilik tunggal, atau membuat LLC (jika Anda berbasiskan di AS). Jika Anda hendak menerima donasi untuk proyek open source Anda, Anda bisa membuat tombol donasi (menggunakan PayPal atau Stripe misalnya), tetapi uang tersebut akan dikurangi pajak kecuali Anda adalah nirlaba (501c3 jika Anda berada di AS). Banyak proyek tidak ingin kerepotan untuk membuat nirlaba, sehingga mereka mencari sponsor fiskal nonprofit. Sponsor fiskal menerima donasi untuk Anda, biasanya dengan imbalan beberapa pesen dari donasi. 
Software Freedom Conservancy , Apache Foundation , Eclipse Foundation , Linux Foundation dan Open Collective adalah contoh organisasi yang melayani sebagai sponsor fiskal untuk proyek open source. Tujuan kami adalah menyediakan infrastruktur yang bisa digunakan oleh komunitas untuk pengelolaan mandiri, sehingga menciptakan sebuah lingkungan dimana setiap orang - kontributor, pendukung, sponsor - bisa menerima keuntungan yang jelas. — @piamancini, “Moving beyond the charity framework” Jika proyek Anda sangat erat hubungannya dengan bahasa atau ekosistem tertentu, seringkali terdapat yayasan yang bisa Anda ajak kerjasama. Sebagai contoh, Python Software Foundation membantu PyPI , Python package manager, dan Node.js Foundation membantu Express.js , framework berbasis Node. Kembali ke semua panduan Panduan yang relevan Kiat Baik untuk Pengelola Mempermudah hidup Anda sebagai pengelola open source, mulai dari mendokumentasikan proses hingga memberdayakan komunitas Anda. Metrik Open Source Buat keputusan yang tepat untuk membantu proyek open source Anda berkembang dengan mengukur dan melacak kesuksesannya. Scroll to Top Kontribusi Ingin memberikan masukan? Konten halaman ini bersifat terbuka. Bantu kami untuk melakukan penyempurnaan. Kontribusi Terus berhubungan Jadilah yang pertama untuk mendengar tentang tips dan sumber daya open source terbaru GitHub. Alamat Email fine print dengan oleh dan teman-teman | 2026-01-13T09:29:15 |
https://opensource.guide/fr/leadership-and-governance/ | Leadership and Governance | Open Source Guides. Growing open source projects can benefit from formal rules for making decisions. Table of contents: Understanding governance for your growing project; What are examples of formal roles used in open source projects; How do I formalize these leadership roles; When should I give someone commit access; What are some common governance structures for open source projects; Do I need governance docs when I launch my project; What happens if company employees start submitting contributions. Understanding governance for your growing project: Your project is growing, people are engaged, and you are committed to keeping this going. At this point, you may be wondering how to incorporate regular project contributors into your workflow, whether it's giving someone commit access or resolving community debates. If you have questions, we have answers. What are examples of formal roles used in open source projects? Many projects follow a similar structure for contributor roles and recognition. What these roles actually mean, though, is entirely up to you. Here are a few types of roles you may recognize: Maintainer; Contributor; Committer. For some projects, "maintainers" are the only people in a project with commit access. In other projects, they are simply the people listed in the README as maintainers. A maintainer doesn't have to be someone who writes code for your project. It could be someone who has done a lot of work evangelizing your project, or written documentation that made the project more accessible to others. Regardless of what they do day-to-day, a maintainer is probably someone who feels responsible for the direction of the project and is committed to improving it. A contributor could be anyone who comments on an issue or pull request, people who add value to the project (whether by triaging issues, writing code, or organizing events), or anybody with a merged pull request (perhaps the narrowest definition of a contributor). [For Node.js,] every person who shows up to comment on an issue or submit code is a member of a project's community. Just being able to see them means they have crossed the line from being a user to being a contributor. — @mikeal, "Healthy Open Source" The term "committer" might be used to distinguish commit access, which is a specific type of responsibility, from other forms of contribution. While you can define your project roles any way you like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You may know me as the "inventor" of Django… but really I'm the guy who got hired to work on a thing a year after it had already been made. (…) People suspect that I succeeded because of my programming skill… but I am at best an average programmer. — @jacobian, "PyCon 2015 Keynote" (video) How do I formalize these leadership roles? Formalizing your leadership roles helps people feel ownership and tells other community members who to look to for help. For a smaller project, designating leaders can be as simple as adding their names to your README or CONTRIBUTORS file. For a bigger project, if you have a website, create a team page or list your project leaders there. For example, Postgres has a comprehensive team page with short profiles for each contributor. If your project has a very active community of contributors, you might form a team of maintainers, or even subcommittees of people who take ownership of different areas (for example, security, issue triage, or community conduct). Let people self-organize and volunteer for the roles they are most excited about, rather than assigning them. [We] supplement the core team with several "subteams". Each subteam is focused on a specific area, for example, language design or libraries. (…) To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team. — "Rust Governance RFC" Leadership teams may want to create a designated channel (such as on IRC) or meet regularly to discuss the project (such as on Gitter or Google Hangout). You can even make those meetings public so other people can listen in. Cucumber-ruby, for example, hosts office hours every week. Once you've established leadership roles, don't forget to document how people can attain them! Establish a clear process for how someone can become a maintainer or join a subcommittee in your project, and write it into your GOVERNANCE.md. Tools like Vossibility can help you track who is (or is not) contributing to the project. Documenting this information avoids the community perception that the maintainers are a clique that makes its decisions privately. Finally, if your project is on GitHub, consider moving the project from your personal account to an Organization, and adding at least one backup admin. GitHub Organizations make it easier to manage permissions and multiple repositories, and they protect your project's legacy through shared ownership. When should I give someone commit access? Some people think you should give commit access to everybody who makes a contribution. Doing so could encourage more people to feel ownership of your project. On the other hand, especially for bigger, more complex projects, you may want to give commit access only to people who have demonstrated their commitment. There is no one right way to do it - do whatever makes you most comfortable! If your project is on GitHub, you can use protected branches to manage who can push to a particular branch, and under which circumstances. Whenever somebody sends you a pull request, give them commit access to your project. While it may sound incredibly stupid at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patch might go unmerged… causing them to put much more work into it. — @felixge, "The Pull Request Hack" What are some common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL: BDFL stands for "Benevolent Dictator for Life". Under this structure, one person (usually the project's initial author) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers. A project that originated at a company could also fall into the BDFL category. Meritocracy: (Note: the term "meritocracy" carries negative connotations for some communities and has a complex social and political history.) Under a meritocracy, a project's active contributors (those who demonstrate "merit") are given formal decision-making roles. Decisions are usually made based on pure voting consensus. The meritocracy concept was pioneered by the Apache Foundation; all Apache projects are meritocracies. Contributions can only be made by individuals representing themselves, not by a company. Liberal contribution: Under a liberal contribution model, the people who do the most work are recognized as most influential, but this is based on current work and not historic contributions. Major project decisions are made based on a consensus-seeking process (discussing major grievances) rather than pure vote, and strive to include as many community perspectives as possible. Popular examples of projects that use a liberal contribution model include Node.js and Rust. Which one should you use? It's up to you! Every model has advantages and trade-offs. And although they may seem quite different at first, all three models have more in common than they appear. If you're interested in adopting one of these models, check out these templates: BDFL model template; Meritocracy model template; Node.js's liberal contribution policy. Do I need governance docs when I launch my project? There is no single right time to write down your project's governance, but it's much easier to define once you've seen your community dynamics play out. The best (and hardest) part of open source governance is that it is shaped by the community! Some early documentation will inevitably help govern your project, so start writing down what you can. For example, you can define clear expectations for behavior, or how your contributor process works, even at your project's launch. If you are part of a company launching an open source project, it's worth having an internal discussion before launch about how your company expects to maintain and make decisions about the project. You may also want to publicly explain anything particular about how your company will (or won't!) be involved with the project. We assign small teams to manage projects on GitHub that work on those projects at Facebook. For example, React is managed by a React engineer. — @caabernathy, "An inside look at open source at Facebook" What happens if company employees start submitting contributions? Successful open source projects are used by many people and companies, and some companies may eventually have revenue streams tied to the project. For example, a company might use the project's code as a component in a commercial service offering. As the project gets more widely used, people with expertise in it become more in demand - you may be one of them! - and will sometimes get paid for the work they do on the project. It is important to treat commercial activity as normal and as just another source of development energy. Paid developers should not get special treatment over unpaid ones; of course, every contribution must be evaluated on its technical merits. However, people should feel comfortable engaging in commercial activity, and feel comfortable stating their use cases when arguing for a particular enhancement or feature. "Commercial" is entirely compatible with "open source". "Commercial" simply means there is money involved somewhere - that the software is used in commerce, which is increasingly likely as a project gains adoption. (When open source software is used as part of a non-open-source product, the overall product is still "proprietary" software, although, like open source, it can be used for commercial or non-commercial purposes.) Like everyone else, commercially motivated developers gain influence in the project through the quality and quantity of their contributions. Obviously, a developer who is paid for their time may be able to do more than someone who isn't, but that's fine: payment is just one of many possible factors that can affect how much someone does. Keep your project discussions focused on the contributions, not on the external factors that enable people to make those contributions. Do I need a legal entity to support my project? You don't need a legal entity to support your open source project unless you are handling money. For example, if you want to start a commercial business, you will need to set up a C Corp or an LLC (if you are based in the US). If you are just doing contract work related to your open source project, you can accept money as a sole proprietor, or set up an LLC (if you are based in the US). If you want to accept donations for your open source project, you can set up a donation button (via PayPal or Stripe, for example), but the money will not be tax-deductible unless you are a nonprofit (a 501c3, if you are in the US). Many projects do not want the hassle of setting up a nonprofit, so instead they find a nonprofit fiscal sponsor. A fiscal sponsor accepts donations on your behalf, usually in exchange for a percentage of each donation. Software Freedom Conservancy, the Apache Foundation, the Eclipse Foundation, the Linux Foundation, and Open Collective are examples of organizations that serve as fiscal sponsors for open source projects. Our goal is to provide the infrastructure that communities can use to be self-sustaining, thereby creating an environment where everyone involved - contributors, backers, sponsors - receives concrete benefits. — @piamancini, "Moving beyond the charity framework" If your project is closely tied to a particular language or ecosystem, there may also be a related software foundation you can work with. For example, the Python Software Foundation supports PyPI, the Python package manager, and the Node.js Foundation helps support Express.js, a Node-based framework. | 2026-01-13T09:29:15
https://doc.rust-lang.org/reference/expressions/operator-expr.html#overflow | Operator expressions - The Rust Reference

[expr.operator] Operator expressions

[expr.operator.syntax] Syntax:
OperatorExpression → BorrowExpression | DereferenceExpression | TryPropagationExpression | NegationExpression | ArithmeticOrLogicalExpression | ComparisonExpression | LazyBooleanExpression | TypeCastExpression | AssignmentExpression | CompoundAssignmentExpression

[expr.operator.intro] Operators are defined for built-in types by the Rust language. [expr.operator.trait] Many of the following operators can also be overloaded using traits in std::ops or std::cmp.

[expr.operator.int-overflow] Overflow

[expr.operator.int-overflow.intro] Integer operators will panic when they overflow when compiled in debug mode. The -C debug-assertions and -C overflow-checks compiler flags can be used to control this more directly. The following things are considered to be overflow:

[expr.operator.int-overflow.binary-arith] When +, *, or binary - create a value greater than the maximum value, or less than the minimum value, that can be stored.

[expr.operator.int-overflow.unary-neg] Applying unary - to the most negative value of any signed integer type, unless the operand is a literal expression (or a literal expression standing alone inside one or more grouped expressions).

[expr.operator.int-overflow.div] Using / or %, where the left-hand argument is the smallest integer of a signed integer type and the right-hand argument is -1. These checks occur even when -C overflow-checks is disabled, for legacy reasons.

[expr.operator.int-overflow.shift] Using << or >> where the right-hand argument is greater than or equal to the number of bits in the type of the left-hand argument, or is negative.

Note: The exception for literal expressions behind unary - means that forms such as -128_i8 or let j: i8 = -(128) never cause a panic and have the expected value of -128. In these cases, the literal expression already has the most negative value for its type (for example, 128_i8 has the value -128) because integer literals are truncated to their type per the description in Integer literal expressions. Negation of these most negative values leaves the value unchanged due to two's complement overflow conventions. In rustc, these most negative expressions are also ignored by the overflowing_literals lint check.

[expr.operator.borrow] Borrow operators

[expr.operator.borrow.syntax] Syntax:
BorrowExpression → (& | &&) Expression | (& | &&) mut Expression | (& | &&) raw const Expression | (& | &&) raw mut Expression

[expr.operator.borrow.intro] The & (shared borrow) and &mut (mutable borrow) operators are unary prefix operators. [expr.operator.borrow.result] When applied to a place expression, this expression produces a reference (pointer) to the location that the value refers to. [expr.operator.borrow.lifetime] The memory location is also placed into a borrowed state for the duration of the reference. For a shared borrow (&), this implies that the place may not be mutated, but it may be read or shared again. For a mutable borrow (&mut), the place may not be accessed in any way until the borrow expires. [expr.operator.borrow.mut] &mut evaluates its operand in a mutable place expression context.
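The borrowed-state rules above can be sketched in a few lines; this is a minimal illustration (not from the Reference itself) of shared borrows coexisting for reads while a mutable borrow is exclusive:

```rust
fn main() {
    let mut x = 10;

    // Shared borrows: the place may be read and shared again,
    // but not mutated while the borrows are live.
    let r1 = &x;
    let r2 = &x; // a second shared borrow is fine
    assert_eq!(*r1 + *r2, 20);
    // x += 1; // error here: `x` is still borrowed by `r1`/`r2`

    // Mutable borrow: exclusive access until the borrow expires.
    let m = &mut x;
    *m += 5;
    assert_eq!(x, 15); // `m` is no longer used, so `x` is accessible again
}
```

Note that the mutable borrow "expires" at its last use, which is why `x` can be read directly on the final line.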
[expr .operator .borrow .temporary] If the & or &mut operators are applied to a value expression , then a temporary value is created. These operators cannot be overloaded. #![allow(unused)] fn main() { { // a temporary with value 7 is created that lasts for this scope. let shared_reference = &7; } let mut array = [-2, 3, 9]; { // Mutably borrows `array` for this scope. // `array` may only be used through `mutable_reference`. let mutable_reference = &mut array; } } [expr .borrow .and-and-syntax] Even though && is a single token ( the lazy ‘and’ operator ), when used in the context of borrow expressions it works as two borrows: #![allow(unused)] fn main() { // same meanings: let a = && 10; let a = & & 10; // same meanings: let a = &&&& mut 10; let a = && && mut 10; let a = & & & & mut 10; } [expr .borrow .raw] Raw borrow operators [expr .borrow .raw .intro] &raw const and &raw mut are the raw borrow operators . [expr .borrow .raw .place] The operand expression of these operators is evaluated in place expression context. [expr .borrow .raw .result] &raw const expr then creates a const raw pointer of type *const T to the given place, and &raw mut expr creates a mutable raw pointer of type *mut T . [expr .borrow .raw .invalid-ref] The raw borrow operators must be used instead of a borrow operator whenever the place expression could evaluate to a place that is not properly aligned or does not store a valid value as determined by its type, or whenever creating a reference would introduce incorrect aliasing assumptions. In those situations, using a borrow operator would cause undefined behavior by creating an invalid reference, but a raw pointer may still be constructed. The following is an example of creating a raw pointer to an unaligned place through a packed struct: #![allow(unused)] fn main() { #[repr(packed)] struct Packed { f1: u8, f2: u16, } let packed = Packed { f1: 1, f2: 2 }; // `&packed.f2` would create an unaligned reference, and thus be undefined behavior! 
let raw_f2 = &raw const packed.f2; assert_eq!(unsafe { raw_f2.read_unaligned() }, 2); } The following is an example of creating a raw pointer to a place that does not contain a valid value: #![allow(unused)] fn main() { use std::mem::MaybeUninit; struct Demo { field: bool, } let mut uninit = MaybeUninit::<Demo>::uninit(); // `&uninit.as_mut().field` would create a reference to an uninitialized `bool`, // and thus be undefined behavior! let f1_ptr = unsafe { &raw mut (*uninit.as_mut_ptr()).field }; unsafe { f1_ptr.write(true); } let init = unsafe { uninit.assume_init() }; } [expr .deref] The dereference operator [expr .deref .syntax] Syntax DereferenceExpression → * Expression Show Railroad DereferenceExpression * Expression [expr .deref .intro] The * (dereference) operator is also a unary prefix operator. [expr .deref .result] When applied to a pointer it denotes the pointed-to location. [expr .deref .mut] If the expression is of type &mut T or *mut T , and is either a local variable, a (nested) field of a local variable or is a mutable place expression , then the resulting memory location can be assigned to. [expr .deref .safety] Dereferencing a raw pointer requires unsafe . [expr .deref .traits] On non-pointer types *x is equivalent to *std::ops::Deref::deref(&x) in an immutable place expression context and *std::ops::DerefMut::deref_mut(&mut x) in a mutable place expression context. #![allow(unused)] fn main() { let x = &7; assert_eq!(*x, 7); let y = &mut 9; *y = 11; assert_eq!(*y, 11); } [expr .try] The try propagation expression [expr .try .syntax] Syntax TryPropagationExpression → Expression ? Show Railroad TryPropagationExpression Expression ? [expr .try .intro] The try propagation expression uses the value of the inner expression and the Try trait to decide whether to produce a value, and if so, what value to produce, or whether to return a value to the caller, and if so, what value to return. 
Example #![allow(unused)] fn main() { use std::num::ParseIntError; fn try_to_parse() -> Result<i32, ParseIntError> { let x: i32 = "123".parse()?; // `x` is `123`. let y: i32 = "24a".parse()?; // Returns an `Err()` immediately. Ok(x + y) // Doesn't run. } let res = try_to_parse(); println!("{res:?}"); assert!(res.is_err()) } #![allow(unused)] fn main() { fn try_option_some() -> Option<u8> { let val = Some(1)?; Some(val) } assert_eq!(try_option_some(), Some(1)); fn try_option_none() -> Option<u8> { let val = None?; Some(val) } assert_eq!(try_option_none(), None); } use std::ops::ControlFlow; pub struct TreeNode<T> { value: T, left: Option<Box<TreeNode<T>>>, right: Option<Box<TreeNode<T>>>, } impl<T> TreeNode<T> { pub fn traverse_inorder<B>(&self, f: &mut impl FnMut(&T) -> ControlFlow<B>) -> ControlFlow<B> { if let Some(left) = &self.left { left.traverse_inorder(f)?; } f(&self.value)?; if let Some(right) = &self.right { right.traverse_inorder(f)?; } ControlFlow::Continue(()) } } fn main() { let n = TreeNode { value: 1, left: Some(Box::new(TreeNode{value: 2, left: None, right: None})), right: None, }; let v = n.traverse_inorder(&mut |t| { if *t == 2 { ControlFlow::Break("found") } else { ControlFlow::Continue(()) } }); assert_eq!(v, ControlFlow::Break("found")); } Note The Try trait is currently unstable, and thus cannot be implemented for user types. The try propagation expression is currently roughly equivalent to: #![allow(unused)] fn main() { #![ feature(try_trait_v2) ] fn example() -> Result<(), ()> { let expr = Ok(()); match core::ops::Try::branch(expr) { core::ops::ControlFlow::Continue(val) => val, core::ops::ControlFlow::Break(residual) => return core::ops::FromResidual::from_residual(residual), } Ok(()) } } Note The try propagation operator is sometimes called the question mark operator , the ? operator , or the try operator . 
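The notes above describe how `?` desugars through `Try` and `FromResidual`; for the stable `Result` case, the observable consequence is the `From::from` conversion applied to the error. A minimal sketch of that conversion, assuming a hypothetical `CustomError` type (not part of the standard library):

```rust
use std::num::ParseIntError;

// Hypothetical error type, for illustration only.
#[derive(Debug)]
struct CustomError(String);

impl From<ParseIntError> for CustomError {
    fn from(e: ParseIntError) -> Self {
        CustomError(format!("parse failed: {e}"))
    }
}

fn parse_doubled(s: &str) -> Result<i32, CustomError> {
    // `?` converts the `ParseIntError` into `CustomError` via `From::from`,
    // even though the function's error type differs from `parse`'s.
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_doubled("21").unwrap(), 42);
    assert!(parse_doubled("x").is_err());
}
```

Without the `From<ParseIntError>` impl, the `?` in `parse_doubled` would not compile, since no conversion to the function's error type would exist.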
[expr .try .restricted-types] The try propagation operator can be applied to expressions of the following types: Result<T, E> Result::Ok(val) evaluates to val . Result::Err(e) returns Result::Err(From::from(e)) . Option<T> Option::Some(val) evaluates to val . Option::None returns Option::None . ControlFlow<B, C> ControlFlow::Continue(c) evaluates to c . ControlFlow::Break(b) returns ControlFlow::Break(b) . Poll<Result<T, E>> Poll::Ready(Ok(val)) evaluates to Poll::Ready(val) . Poll::Ready(Err(e)) returns Poll::Ready(Err(From::from(e))) . Poll::Pending evaluates to Poll::Pending . Poll<Option<Result<T, E>>> Poll::Ready(Some(Ok(val))) evaluates to Poll::Ready(Some(val)) . Poll::Ready(Some(Err(e))) returns Poll::Ready(Some(Err(From::from(e)))) . Poll::Ready(None) evaluates to Poll::Ready(None) . Poll::Pending evaluates to Poll::Pending . [expr .negate] Negation operators [expr .negate .syntax] Syntax NegationExpression → - Expression | ! Expression Show Railroad NegationExpression - Expression ! Expression [expr .negate .intro] These are the last two unary operators. [expr .negate .results] This table summarizes their behavior on primitive types and which traits are used to overload these operators for other types. Remember that signed integers are always represented using two’s complement. The operands of all of these operators are evaluated in value expression context so are moved or copied. Symbol Integer bool Floating Point Overloading Trait - Negation* Negation std::ops::Neg ! Bitwise NOT Logical NOT std::ops::Not * Only for signed integer types.
Here are some examples of these operators #![allow(unused)] fn main() { let x = 6; assert_eq!(-x, -6); assert_eq!(!x, -7); assert_eq!(true, !false); } [expr .arith-logic] Arithmetic and logical binary operators [expr .arith-logic .syntax] Syntax ArithmeticOrLogicalExpression → Expression + Expression | Expression - Expression | Expression * Expression | Expression / Expression | Expression % Expression | Expression & Expression | Expression | Expression | Expression ^ Expression | Expression << Expression | Expression >> Expression Show Railroad ArithmeticOrLogicalExpression Expression + Expression Expression - Expression Expression * Expression Expression / Expression Expression % Expression Expression & Expression Expression | Expression Expression ^ Expression Expression << Expression Expression >> Expression [expr .arith-logic .intro] Binary operator expressions are all written with infix notation. [expr .arith-logic .behavior] This table summarizes the behavior of arithmetic and logical binary operators on primitive types and which traits are used to overload these operators for other types. Remember that signed integers are always represented using two’s complement. The operands of all of these operators are evaluated in value expression context so are moved or copied.
Symbol Integer bool Floating Point Overloading Trait Overloading Compound Assignment Trait + Addition Addition std::ops::Add std::ops::AddAssign - Subtraction Subtraction std::ops::Sub std::ops::SubAssign * Multiplication Multiplication std::ops::Mul std::ops::MulAssign / Division*† Division std::ops::Div std::ops::DivAssign % Remainder**† Remainder std::ops::Rem std::ops::RemAssign & Bitwise AND Logical AND std::ops::BitAnd std::ops::BitAndAssign | Bitwise OR Logical OR std::ops::BitOr std::ops::BitOrAssign ^ Bitwise XOR Logical XOR std::ops::BitXor std::ops::BitXorAssign << Left Shift std::ops::Shl std::ops::ShlAssign >> Right Shift*** std::ops::Shr std::ops::ShrAssign * Integer division rounds towards zero. ** Rust uses a remainder defined with truncating division . Given remainder = dividend % divisor , the remainder will have the same sign as the dividend. *** Arithmetic right shift on signed integer types, logical right shift on unsigned integer types. † For integer types, division by zero panics. Here are examples of these operators being used. #![allow(unused)] fn main() { assert_eq!(3 + 6, 9); assert_eq!(5.5 - 1.25, 4.25); assert_eq!(-5 * 14, -70); assert_eq!(14 / 3, 4); assert_eq!(100 % 7, 2); assert_eq!(0b1010 & 0b1100, 0b1000); assert_eq!(0b1010 | 0b1100, 0b1110); assert_eq!(0b1010 ^ 0b1100, 0b110); assert_eq!(13 << 3, 104); assert_eq!(-10 >> 2, -3); } [expr .cmp] Comparison operators [expr .cmp .syntax] Syntax ComparisonExpression → Expression == Expression | Expression != Expression | Expression > Expression | Expression < Expression | Expression >= Expression | Expression <= Expression Show Railroad ComparisonExpression Expression == Expression Expression != Expression Expression > Expression Expression < Expression Expression >= Expression Expression <= Expression [expr .cmp .intro] Comparison operators are also defined both for primitive types and many types in the standard library. 
[expr .cmp .paren-chaining] Parentheses are required when chaining comparison operators. For example, the expression a == b == c is invalid and may be written as (a == b) == c . [expr .cmp .trait] Unlike arithmetic and logical operators, the traits for overloading these operators are used more generally to show how a type may be compared and will likely be assumed to define actual comparisons by functions that use these traits as bounds. Many functions and macros in the standard library can then use that assumption (although not to ensure safety). [expr .cmp .place] Unlike the arithmetic and logical operators above, these operators implicitly take shared borrows of their operands, evaluating them in place expression context : #![allow(unused)] fn main() { let a = 1; let b = 1; a == b; // is equivalent to ::std::cmp::PartialEq::eq(&a, &b); } This means that the operands don’t have to be moved out of. [expr .cmp .behavior] Symbol Meaning Overloading method == Equal std::cmp::PartialEq::eq != Not equal std::cmp::PartialEq::ne > Greater than std::cmp::PartialOrd::gt < Less than std::cmp::PartialOrd::lt >= Greater than or equal to std::cmp::PartialOrd::ge <= Less than or equal to std::cmp::PartialOrd::le Here are examples of the comparison operators being used. #![allow(unused)] fn main() { assert!(123 == 123); assert!(23 != -12); assert!(12.5 > 12.2); assert!([1, 2, 3] < [1, 3, 4]); assert!('A' <= 'B'); assert!("World" >= "Hello"); } [expr .bool-logic] Lazy boolean operators [expr .bool-logic .syntax] Syntax LazyBooleanExpression → Expression || Expression | Expression && Expression Show Railroad LazyBooleanExpression Expression || Expression Expression && Expression [expr .bool-logic .intro] The operators || and && may be applied to operands of boolean type. The || operator denotes logical ‘or’, and the && operator denotes logical ‘and’. 
[expr .bool-logic .conditional-evaluation] They differ from | and & in that the right-hand operand is only evaluated when the left-hand operand does not already determine the result of the expression. That is, || only evaluates its right-hand operand when the left-hand operand evaluates to false , and && only when it evaluates to true . #![allow(unused)] fn main() { let x = false || true; // true let y = false && panic!(); // false, doesn't evaluate `panic!()` } [expr .as] Type cast expressions [expr .as .syntax] Syntax TypeCastExpression → Expression as TypeNoBounds Show Railroad TypeCastExpression Expression as TypeNoBounds [expr .as .intro] A type cast expression is denoted with the binary operator as . [expr .as .result] Executing an as expression casts the value on the left-hand side to the type on the right-hand side. An example of an as expression: #![allow(unused)] fn main() { fn sum(values: &[f64]) -> f64 { 0.0 } fn len(values: &[f64]) -> i32 { 0 } fn average(values: &[f64]) -> f64 { let sum: f64 = sum(values); let size: f64 = len(values) as f64; sum / size } } [expr .as .coercions] as can be used to explicitly perform coercions , as well as the following additional casts. Any cast that does not fit either a coercion rule or an entry in the table is a compiler error. Here *T means either *const T or *mut T . m stands for optional mut in reference types and mut or const in pointer types. 
Type of e U Cast performed by e as U Integer or Float type Integer or Float type Numeric cast Enumeration Integer type Enum cast bool or char Integer type Primitive to integer cast u8 char u8 to char cast *T *V 1 Pointer to pointer cast *T where T: Sized Integer type Pointer to address cast Integer type *V where V: Sized Address to pointer cast &m₁ [T; n] *m₂ T 2 Array to pointer cast *m₁ [T; n] *m₂ T 2 Array to pointer cast Function item Function pointer Function item to function pointer cast Function item *V where V: Sized Function item to pointer cast Function item Integer Function item to address cast Function pointer *V where V: Sized Function pointer to pointer cast Function pointer Integer Function pointer to address cast Closure 3 Function pointer Closure to function pointer cast Semantics [expr .as .numeric] Numeric cast [expr .as .numeric .int-same-size] Casting between two integers of the same size (e.g. i32 -> u32) is a no-op (Rust uses 2’s complement for negative values of fixed integers) #![allow(unused)] fn main() { assert_eq!(42i8 as u8, 42u8); assert_eq!(-1i8 as u8, 255u8); assert_eq!(255u8 as i8, -1i8); assert_eq!(-1i16 as u16, 65535u16); } [expr .as .numeric .int-truncation] Casting from a larger integer to a smaller integer (e.g. u32 -> u8) will truncate #![allow(unused)] fn main() { assert_eq!(42u16 as u8, 42u8); assert_eq!(1234u16 as u8, 210u8); assert_eq!(0xabcdu16 as u8, 0xcdu8); assert_eq!(-42i16 as i8, -42i8); assert_eq!(1234u16 as i8, -46i8); assert_eq!(0xabcdi32 as i8, -51i8); } [expr .as .numeric .int-extension] Casting from a smaller integer to a larger integer (e.g. 
u8 -> u32) will zero-extend if the source is unsigned, and sign-extend if the source is signed #![allow(unused)] fn main() { assert_eq!(42i8 as i16, 42i16); assert_eq!(-17i8 as i16, -17i16); assert_eq!(0b1000_1010u8 as u16, 0b0000_0000_1000_1010u16, "Zero-extend"); assert_eq!(0b0000_1010i8 as i16, 0b0000_0000_0000_1010i16, "Sign-extend 0"); assert_eq!(0b1000_1010u8 as i8 as i16, 0b1111_1111_1000_1010u16 as i16, "Sign-extend 1"); } [expr .as .numeric .float-as-int] Casting from a float to an integer will round the float towards zero. NaN will return 0. Values larger than the maximum integer value, including INFINITY , will saturate to the maximum value of the integer type. Values smaller than the minimum integer value, including NEG_INFINITY , will saturate to the minimum value of the integer type. #![allow(unused)] fn main() { assert_eq!(42.9f32 as i32, 42); assert_eq!(-42.9f32 as i32, -42); assert_eq!(42_000_000f32 as i32, 42_000_000); assert_eq!(std::f32::NAN as i32, 0); assert_eq!(1_000_000_000_000_000f32 as i32, 0x7fffffffi32); assert_eq!(std::f32::NEG_INFINITY as i32, -0x80000000i32); } [expr .as .numeric .int-as-float] Casting from an integer to a float will produce the closest possible float *; if necessary, rounding is according to roundTiesToEven mode ***; on overflow, infinity (of the same sign as the input) is produced. Note: with the current set of numeric types, overflow can only happen on u128 as f32 for values greater than or equal to f32::MAX + (0.5 ULP) #![allow(unused)] fn main() { assert_eq!(1337i32 as f32, 1337f32); assert_eq!(123_456_789i32 as f32, 123_456_790f32, "Rounded"); assert_eq!(0xffffffff_ffffffff_ffffffff_ffffffff_u128 as f32, std::f32::INFINITY); } [expr .as .numeric .float-widening] Casting from an f32 to an f64 is perfect and lossless. #![allow(unused)] fn main() { assert_eq!(1_234.5f32 as f64, 1_234.5f64); assert_eq!(std::f32::INFINITY as f64, std::f64::INFINITY); assert!((std::f32::NAN as f64).is_nan()); } [expr .as .numeric .float-narrowing]
Casting from an f64 to an f32 will produce the closest possible f32 **; if necessary, rounding is according to roundTiesToEven mode ***; on overflow, infinity (of the same sign as the input) is produced. #![allow(unused)] fn main() { assert_eq!(1_234.5f64 as f32, 1_234.5f32); assert_eq!(1_234_567_891.123f64 as f32, 1_234_567_890f32, "Rounded"); assert_eq!(std::f64::INFINITY as f32, std::f32::INFINITY); assert!((std::f64::NAN as f32).is_nan()); } * if integer-to-float casts with this rounding mode and overflow behavior are not supported natively by the hardware, these casts will likely be slower than expected. ** if f64-to-f32 casts with this rounding mode and overflow behavior are not supported natively by the hardware, these casts will likely be slower than expected. *** as defined in IEEE 754-2008 §4.3.1: pick the nearest floating point number, preferring the one with an even least significant digit if exactly halfway between two floating point numbers. [expr .as .enum] Enum cast [expr .as .enum .discriminant] Casts an enum to its discriminant, then uses a numeric cast if needed. Casting is limited to the following kinds of enumerations: unit-only enums, and field-less enums without explicit discriminants, or where only unit-variants have explicit discriminants. #![allow(unused)] fn main() { enum Enum { A, B, C } assert_eq!(Enum::A as i32, 0); assert_eq!(Enum::B as i32, 1); assert_eq!(Enum::C as i32, 2); } [expr .as .enum .no-drop] Casting is not allowed if the enum implements Drop . [expr .as .bool-char-as-int] Primitive to integer cast false casts to 0 , and true casts to 1 . char casts to the value of the code point, then uses a numeric cast if needed. #![allow(unused)] fn main() { assert_eq!(false as i32, 0); assert_eq!(true as i32, 1); assert_eq!('A' as i32, 65); assert_eq!('Ö' as i32, 214); } [expr .as .u8-as-char] u8 to char cast Casts to the char with the corresponding code point.
#![allow(unused)] fn main() { assert_eq!(65u8 as char, 'A'); assert_eq!(214u8 as char, 'Ö'); } [expr .as .pointer-as-int] Pointer to address cast Casting from a raw pointer to an integer produces the machine address of the referenced memory. If the integer type is smaller than the pointer type, the address may be truncated; using usize avoids this. [expr .as .int-as-pointer] Address to pointer cast Casting from an integer to a raw pointer interprets the integer as a memory address and produces a pointer referencing that memory. Warning This interacts with the Rust memory model, which is still under development. A pointer obtained from this cast may suffer additional restrictions even if it is bitwise equal to a valid pointer. Dereferencing such a pointer may be undefined behavior if aliasing rules are not followed. A trivial example of sound address arithmetic: #![allow(unused)] fn main() { let mut values: [i32; 2] = [1, 2]; let p1: *mut i32 = values.as_mut_ptr(); let first_address = p1 as usize; let second_address = first_address + 4; // 4 == size_of::<i32>() let p2 = second_address as *mut i32; unsafe { *p2 += 1; } assert_eq!(values[1], 3); } [expr .as .pointer] Pointer-to-pointer cast [expr .as .pointer .behavior] *const T / *mut T can be cast to *const U / *mut U with the following behavior: [expr .as .pointer .sized] If T and U are both sized, the pointer is returned unchanged. [expr .as .pointer .unsized] If T and U are both unsized, the pointer is also returned unchanged. In particular, the metadata is preserved exactly. For instance, a cast from *const [T] to *const [U] preserves the number of elements. Note that, as a consequence, such casts do not necessarily preserve the size of the pointer’s referent (e.g., casting *const [u16] to *const [u8] will result in a raw pointer which refers to an object of half the size of the original). 
The same holds for str and any compound type whose unsized tail is a slice type, such as struct Foo(i32, [u8]) or (u64, Foo) . [expr .as .pointer .discard-metadata] If T is unsized and U is sized, the cast discards all metadata that completes the wide pointer T and produces a thin pointer U consisting of the data part of the unsized pointer. [expr .assign] Assignment expressions [expr .assign .syntax] Syntax AssignmentExpression → Expression = Expression Show Railroad AssignmentExpression Expression = Expression [expr .assign .intro] An assignment expression moves a value into a specified place. [expr .assign .assignee] An assignment expression consists of a mutable assignee expression , the assignee operand , followed by an equals sign ( = ) and a value expression , the assigned value operand . [expr .assign .behavior-basic] In its most basic form, an assignee expression is a place expression , and we discuss this case first. [expr .assign .behavior-destructuring] The more general case of destructuring assignment is discussed below, but this case always decomposes into sequential assignments to place expressions, which may be considered the more fundamental case. [expr .assign .basic] Basic assignments [expr .assign .evaluation-order] Evaluating assignment expressions begins by evaluating its operands. The assigned value operand is evaluated first, followed by the assignee expression. [expr .assign .destructuring-order] For destructuring assignment, subexpressions of the assignee expression are evaluated left-to-right. Note This is different than other expressions in that the right operand is evaluated before the left one. [expr .assign .drop-target] It then has the effect of first dropping the value at the assigned place, unless the place is an uninitialized local variable or an uninitialized field of a local variable. [expr .assign .behavior] Next it either copies or moves the assigned value to the assigned place. 
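The evaluation order described above (assigned value operand first, then the assignee expression) can be observed with side effects. A minimal sketch, not taken from the Reference, using a hypothetical `assignment_order` helper:

```rust
// Returns the order in which the two operands' side effects ran.
fn assignment_order() -> Vec<&'static str> {
    let mut order = vec![];
    let mut x = [0, 0];
    // The assigned value operand (RHS) is evaluated before the assignee
    // expression (LHS), so "rhs" is pushed before "lhs".
    x[{ order.push("lhs"); 0 }] = { order.push("rhs"); 1 };
    assert_eq!(x, [1, 0]);
    order
}

fn main() {
    assert_eq!(assignment_order(), ["rhs", "lhs"]);
}
```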
[expr .assign .result] An assignment expression always produces the unit value . Example: #![allow(unused)] fn main() { let mut x = 0; let y = 0; x = y; } [expr .assign .destructure] Destructuring assignments [expr .assign .destructure .intro] Destructuring assignment is a counterpart to destructuring pattern matches for variable declaration, permitting assignment to complex values, such as tuples or structs. For instance, we may swap two mutable variables: #![allow(unused)] fn main() { let (mut a, mut b) = (0, 1); // Swap `a` and `b` using destructuring assignment. (b, a) = (a, b); } [expr .assign .destructure .assignee] In contrast to destructuring declarations using let , patterns may not appear on the left-hand side of an assignment due to syntactic ambiguities. Instead, a group of expressions that correspond to patterns are designated to be assignee expressions , and permitted on the left-hand side of an assignment. Assignee expressions are then desugared to pattern matches followed by sequential assignment. [expr .assign .destructure .irrefutable] The desugared patterns must be irrefutable: in particular, this means that only slice patterns whose length is known at compile-time, and the trivial slice [..] , are permitted for destructuring assignment. The desugaring method is straightforward, and is illustrated best by example. #![allow(unused)] fn main() { struct Struct { x: u32, y: u32 } let (mut a, mut b) = (0, 0); (a, b) = (3, 4); [a, b] = [3, 4]; Struct { x: a, y: b } = Struct { x: 3, y: 4}; // desugars to: { let (_a, _b) = (3, 4); a = _a; b = _b; } { let [_a, _b] = [3, 4]; a = _a; b = _b; } { let Struct { x: _a, y: _b } = Struct { x: 3, y: 4}; a = _a; b = _b; } } [expr .assign .destructure .repeat-ident] Identifiers are not forbidden from being used multiple times in a single assignee expression. [expr .assign .destructure .discard-value] Underscore expressions and empty range expressions may be used to ignore certain values, without binding them. 
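The points above about repeated identifiers and underscore expressions can be sketched as follows (`destructure_examples` is a hypothetical helper, not from the Reference):

```rust
fn destructure_examples() -> (i32, i32) {
    // An underscore expression discards a value without binding it.
    let mut a = 0;
    (a, _) = (1, 2);

    // The same identifier may appear more than once in an assignee
    // expression; the desugared assignments run in sequence, so the
    // last one wins.
    let mut b = 0;
    (b, b) = (3, 4);

    (a, b)
}

fn main() {
    assert_eq!(destructure_examples(), (1, 4));
}
```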
[expr .assign .destructure .default-binding] Note that default binding modes do not apply for the desugared expression. [expr .assign .destructure .tmp-scopes] Note The desugaring restricts the temporary scope of the assigned value operand (the RHS) of a destructuring assignment. In a basic assignment, the temporary is dropped at the end of the enclosing temporary scope. Below, that’s the statement. Therefore, the assignment and use is allowed. #![allow(unused)] fn main() { fn temp() {} fn f<T>(x: T) -> T { x } let x; (x = f(&temp()), x); // OK } Conversely, in a destructuring assignment, the temporary is dropped at the end of the let statement in the desugaring. As that happens before we try to assign to x , below, it fails. #![allow(unused)] fn main() { fn temp() {} fn f<T>(x: T) -> T { x } let x; [x] = [f(&temp())]; // ERROR } This desugars to: #![allow(unused)] fn main() { fn temp() {} fn f<T>(x: T) -> T { x } let x; { let [_x] = [f(&temp())]; // ^ // The temporary is dropped here. x = _x; // ERROR } } [expr .assign .destructure .tmp-ext] Note Due to the desugaring, the assigned value operand (the RHS) of a destructuring assignment is an extending expression within a newly-introduced block. Below, because the temporary scope is extended to the end of this introduced block, the assignment is allowed. #![allow(unused)] fn main() { fn temp() {} let x; [x] = [&temp()]; // OK } This desugars to: #![allow(unused)] fn main() { fn temp() {} let x; { let [_x] = [&temp()]; x = _x; } // OK } However, if we try to use x , even within the same statement, we’ll get an error because the temporary is dropped at the end of this introduced block. #![allow(unused)] fn main() { fn temp() {} let x; ([x] = [&temp()], x); // ERROR } This desugars to: #![allow(unused)] fn main() { fn temp() {} let x; ( { let [_x] = [&temp()]; x = _x; }, // <-- The temporary is dropped here. 
x, // ERROR ); } [expr .compound-assign] Compound assignment expressions [expr .compound-assign .syntax] Syntax CompoundAssignmentExpression → Expression += Expression | Expression -= Expression | Expression *= Expression | Expression /= Expression | Expression %= Expression | Expression &= Expression | Expression |= Expression | Expression ^= Expression | Expression <<= Expression | Expression >>= Expression Show Railroad CompoundAssignmentExpression Expression += Expression Expression -= Expression Expression *= Expression Expression /= Expression Expression %= Expression Expression &= Expression Expression |= Expression Expression ^= Expression Expression <<= Expression Expression >>= Expression [expr .compound-assign .intro] Compound assignment expressions combine arithmetic and logical binary operators with assignment expressions. For example: #![allow(unused)] fn main() { let mut x = 5; x += 1; assert!(x == 6); } The syntax of compound assignment is a mutable place expression , the assigned operand , then one of the operators followed by an = as a single token (no whitespace), and then a value expression , the modifying operand . [expr .compound-assign .place] Unlike other place operands, the assigned place operand must be a place expression. [expr .compound-assign .no-value] Attempting to use a value expression is a compiler error rather than promoting it to a temporary. [expr .compound-assign .operand-order] Evaluation of compound assignment expressions depends on the types of the operands. [expr .compound-assign .primitives] If the types of both operands are known, prior to monomorphization, to be primitive, the right hand side is evaluated first, the left hand side is evaluated next, and the place given by the evaluation of the left hand side is mutated by applying the operator to the values of both sides. 
use core::{num::Wrapping, ops::AddAssign}; trait Equate {} impl<T> Equate for (T, T) {} fn f1(x: (u8,)) { let mut order = vec![]; // The RHS is evaluated first as both operands are of primitive // type. { order.push(2); x }.0 += { order.push(1); x }.0; assert!(order.is_sorted()); } fn f2(x: (Wrapping<u8>,)) { let mut order = vec![]; // The LHS is evaluated first as `Wrapping<_>` is not a primitive // type. { order.push(1); x }.0 += { order.push(2); (0u8,) }.0; assert!(order.is_sorted()); } fn f3<T: AddAssign<u8> + Copy>(x: (T,)) where (T, u8): Equate { let mut order = vec![]; // The LHS is evaluated first as one of the operands is a generic // parameter, even though that generic parameter can be unified // with a primitive type due to the where clause bound. { order.push(1); x }.0 += { order.push(2); (0u8,) }.0; assert!(order.is_sorted()); } fn main() { f1((0u8,)); f2((Wrapping(0u8),)); // We supply a primitive type as the generic argument, but this // does not affect the evaluation order in `f3` when // monomorphized. f3::<u8>((0u8,)); } Note This is unusual. Elsewhere left to right evaluation is the norm. See the eval order test for more examples. [expr .compound-assign .trait] Otherwise, this expression is syntactic sugar for using the corresponding trait for the operator (see expr.arith-logic.behavior ) and calling its method with the left hand side as the receiver and the right hand side as the next argument. For example, the following two statements are equivalent: #![allow(unused)] fn main() { use std::ops::AddAssign; fn f<T: AddAssign + Copy>(mut x: T, y: T) { x += y; // Statement 1. x.add_assign(y); // Statement 2. } } Note Surprisingly, desugaring this further to a fully qualified method call is not equivalent, as there is special borrow checker behavior when the mutable reference to the first operand is taken via autoref . 
#![allow(unused)] fn main() { use std::ops::AddAssign; fn f<T: AddAssign + Copy>(mut x: T) { // Here we used `x` as both the LHS and the RHS. Because the // mutable borrow of the LHS needed to call the trait method // is taken implicitly by autoref, this is OK. x += x; //~ OK x.add_assign(x); //~ OK } } #![allow(unused)] fn main() { use std::ops::AddAssign; fn f<T: AddAssign + Copy>(mut x: T) { // We can't desugar the above to the below, as once we take the // mutable borrow of `x` to pass the first argument, we can't // pass `x` by value in the second argument because the mutable // reference is still live. <T as AddAssign>::add_assign(&mut x, x); //~^ ERROR cannot use `x` because it was mutably borrowed } } #![allow(unused)] fn main() { use std::ops::AddAssign; fn f<T: AddAssign + Copy>(mut x: T) { // As above. (&mut x).add_assign(x); //~^ ERROR cannot use `x` because it was mutably borrowed } } [expr .compound-assign .result] As with normal assignment expressions, compound assignment expressions always produce the unit value . Warning Avoid writing code that depends on the evaluation order of operands in compound assignments as it can be unusual and surprising. where T and V have compatible metadata: V: Sized , or Both slice metadata ( *[u16] -> *[u8] , *str -> *(u8, [u32]) ), or Both the same trait object metadata, modulo dropping auto traits ( *dyn Debug -> *(u16, dyn Debug) , *dyn Debug + Send -> *dyn Debug ) Note : adding auto traits is only allowed if the principal trait has the auto trait as a super trait (given trait T: Send {} , *dyn T -> *dyn T + Send is valid, but *dyn Debug -> *dyn Debug + Send is not) Note : Generics (including lifetimes) must match ( *dyn T<'a, A> -> *dyn T<'b, B> requires 'a = 'b and A = B ) ↩ only when m₁ is mut or m₂ is const . Casting a mut reference/pointer to a const pointer is allowed. ↩ ↩2 Only closures that do not capture (close over) any local variables can be cast to function pointers. ↩
Concise Software Identification Tags Internet-Draft CoSWID March 2022 Birkholz, et al. Expires 8 September 2022 Workgroup: SACM Working Group Internet-Draft: draft-ietf-sacm-coswid-21 Published: 7 March 2022 Intended Status: Standards Track Expires: 8 September 2022 Authors: H. Birkholz Fraunhofer SIT J. Fitzgerald-McKay National Security Agency C. Schmidt The MITRE Corporation D. Waltermire NIST Concise Software Identification Tags Abstract ISO/IEC 19770-2:2015 Software Identification (SWID) tags provide an extensible XML-based structure to identify and describe individual software components, patches, and installation bundles. SWID tag representations can be too large for devices with network and storage constraints. This document defines a concise representation of SWID tags: Concise SWID (CoSWID) tags. CoSWID supports a similar set of semantics and features as SWID tags, as well as new semantics that allow CoSWIDs to describe additional types of information, all in a more memory efficient format. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/ . Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on 8 September 2022. Copyright Notice Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents ( https://trustee.ietf.org/license-info ) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License. Table of Contents 1. Introduction 1.1. The SWID and CoSWID Tag Lifecycle 1.2. Concise SWID Format 1.3. Requirements Notation 2. Concise SWID Data Definition 2.1. Character Encoding 2.2. Concise SWID Extensions 2.3. The concise-swid-tag Map 2.4. concise-swid-tag Co-Constraints 2.5. The global-attributes Group 2.6. The entity-entry Map 2.7. The link-entry Map 2.8. The software-meta-entry Map 2.9. The Resource Collection Definition 2.9.1. The hash-entry Array 2.9.2. The resource-collection Group 2.9.3. The payload-entry Map 2.9.4. The evidence-entry Map 2.10. Full CDDL Specification 3. Determining the Type of CoSWID 4. CoSWID Indexed Label Values 4.1. Version Scheme 4.2. Entity Role Values 4.3. Link Ownership Values 4.4. Link Rel Values 4.5. Link Use Values 5. URI Schemes 5.1. "swid" URI Scheme 5.2. "swidpath" URI Scheme 6. IANA Considerations 6.1. CoSWID Items Registry 6.2. Software Tag Values Registries 6.2.1. Registration Procedures 6.2.2. Private Use of Index and Name Values 6.2.3. Expert Review Criteria 6.2.4. Software Tag Version Scheme Values Registry 6.2.5. Software Tag Entity Role Values Registry 6.2.6. Software Tag Link Ownership Values Registry 6.2.7. Software Tag Link Relationship Values Registry 6.2.8. Software Tag Link Use Values Registry 6.3. swid+cbor Media Type Registration 6.4. CoAP Content-Format Registration 6.5. CBOR Tag Registration 6.6. URI Scheme Registrations 6.6.1.
URI-scheme swid 6.6.2 . URI-scheme swidpath 6.7 . CoSWID Model for use in SWIMA Registration 7 . Signed CoSWID Tags 8 . CBOR-Tagged CoSWID Tags 9 . Security Considerations 10 . Privacy Consideration 11 . Change Log 12 . References 12.1 . Normative References 12.2 . Informative References Acknowledgments Contributors Authors' Addresses 1. Introduction SWID tags, as defined in ISO-19770-2:2015 [ SWID ] , provide a standardized XML-based record format that identifies and describes a specific release of software, a patch, or an installation bundle, which are referred to as software components in this document. Different software components, and even different releases of a particular software component, each have a different SWID tag record associated with them. SWID tags are meant to be flexible and able to express a broad set of metadata about a software component. ¶ SWID tags are used to support a number of processes including but not limited to: ¶ Software Inventory Management, a part of a Software Asset Management [ SAM ] process, which requires an accurate list of discernible deployed software components. ¶ Vulnerability Assessment, which requires a semantic link between standardized vulnerability descriptions and software components installed on IT-assets [ X.1520 ] . ¶ Remote Attestation, which requires a link between reference integrity measurements (RIM) and Attester-produced event logs that complement attestation Evidence [ I-D.ietf-rats-architecture ] . ¶ While there are very few required fields in SWID tags, there are many optional fields that support different uses. A SWID tag consisting of only required fields might be a few hundred bytes in size; however, a tag containing many of the optional fields can be many orders of magnitude larger. Thus, real-world instances of SWID tags can be fairly large, and the communication of SWID tags in usage scenarios, such as those described earlier, can cause a large amount of data to be transported. 
This can be larger than acceptable for constrained devices and networks. Concise SWID (CoSWID) tags significantly reduce the amount of data transported as compared to a typical SWID tag through the use of the Concise Binary Object Representation (CBOR) [ RFC8949 ] . ¶ Size comparisons between XML SWID and CoSWID mainly depend on domain-specific applications and the complexity of attributes used in instances. While the values stored in CoSWID are often unchanged and therefore not reduced in size compared to an XML SWID, the scaffolding that the CoSWID encoding represents is significantly smaller by taking up 10 percent or less in size. This effect is visible in representation sizes, which in early experiments benefited from a 50 percent to 85 percent reduction in generic usage scenarios. Additional size reduction is enabled with respect to the memory footprint of XML parsing/validation. ¶ In a CoSWID, the human-readable labels of SWID data items are replaced with more concise integer labels (indices). This approach allows SWID and CoSWID to share a common implicit information model, with CoSWID providing an alternate data model [ RFC3444 ] . While SWID and CoSWID are intended to share the same implicit information model, this specification does not define this information model, or a mapping between the two data formats. While an attempt to align SWID and CoSWID tags has been made here, future revisions of ISO/IEC 19770-2:2015 or this specification might cause this implicit information model to diverge, since these specifications are maintained by different standards groups. ¶ The use of CBOR to express SWID information in CoSWID tags allows both CoSWID and SWID tags to be part of an enterprise security solution for a wider range of endpoints and environments. ¶ 1.1. The SWID and CoSWID Tag Lifecycle In addition to defining the format of a SWID tag record, ISO/IEC 19770-2:2015 defines requirements concerning the SWID tag lifecycle. 
Specifically, when a software component is installed on an endpoint, that software component's SWID tag is also installed. Likewise, when the software component is uninstalled or replaced, the SWID tag is deleted or replaced, as appropriate. As a result, ISO/IEC 19770-2:2015 describes a system wherein there is a correspondence between the set of installed software components on an endpoint, and the presence of the corresponding SWID tags for these components on that endpoint. CoSWIDs share the same lifecycle requirements as a SWID tag. ¶ The SWID specification and supporting guidance provided in NIST Internal Report (NISTIR) 8060: Guidelines for the Creation of Interoperable SWID Tags [ SWID-GUIDANCE ] defines four types of SWID tags: primary, patch, corpus, and supplemental. The following text is paraphrased from these sources. ¶ Primary Tag - A SWID or CoSWID tag that identifies and describes an installed software component on an endpoint. A primary tag is intended to be installed on an endpoint along with the corresponding software component. ¶ Patch Tag - A SWID or CoSWID tag that identifies and describes an installed patch that has made incremental changes to a software component installed on an endpoint. A patch tag is intended to be installed on an endpoint along with the corresponding software component patch. ¶ Corpus Tag - A SWID or CoSWID tag that identifies and describes an installable software component in its pre-installation state. A corpus tag can be used to represent metadata about an installation package or installer for a software component, a software update, or a patch. ¶ Supplemental Tag - A SWID or CoSWID tag that allows additional information to be associated with a referenced SWID tag. This allows tools and users to record their own metadata about a software component without modifying CoSWID primary or patch tags created by a software provider. 
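The four tag types above are signaled in a CoSWID by the boolean corpus, patch, and supplemental data elements (their integer labels 8, 9, and 11 are defined in Section 2.3; the determination rules are in Section 3). As an illustrative sketch only, not part of this specification, a tag already decoded into a map keyed by those integer labels could be classified as follows (the helper name is ours):

```python
# Hypothetical helper, not defined by this draft: classify a CoSWID tag
# decoded into a dict keyed by the integer labels of Section 2.3
# (corpus = 8, patch = 9, supplemental = 11). Per the draft, each of
# these boolean items defaults to "false" when absent, and a tag with
# all three false is a primary tag.

CORPUS, PATCH, SUPPLEMENTAL = 8, 9, 11

def tag_type(tag):
    """Return the lifecycle type of a decoded CoSWID tag map."""
    if tag.get(CORPUS, False):
        return "corpus"
    if tag.get(PATCH, False):
        return "patch"
    if tag.get(SUPPLEMENTAL, False):
        return "supplemental"
    return "primary"
```

For example, a tag carrying only a tag-id (label 0) and none of the three flags would classify as "primary".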
¶ The type of a tag is determined by specific data elements, which are discussed in Section 3 along with normative language for CoSWID semantics that implement this lifecycle. The following information helps to explain how these semantics apply to use of a CoSWID tag. ¶ Corpus, primary, and patch tags have similar functions in that they describe the existence and/or presence of different types of software components (e.g., software installers, software installations, software patches), and, potentially, different states of these software components. Supplemental tags have the same structure as other tags, but are used to provide information not contained in the referenced corpus, primary, and patch tags. All four tag types come into play at various points in the software lifecycle and support software management processes that depend on the ability to accurately determine where each software component is in its lifecycle. ¶

               +------------------------------+
               v                              |
Software       Software        Software       Software        Software
Deployment ->  Installation -> Patching ->    Upgrading ->    Removal

Corpus         Primary         Primary        xPrimary        xPrimary
               Supplemental    Supplemental   xSupplemental   xSupplemental
                               Patch          xPatch
                                              Primary
                                              Supplemental

Figure 1: Use of Tag Types in the Software Lifecycle

Figure 1 illustrates the steps in the software lifecycle and the relationships among those lifecycle events supported by the four types of SWID and CoSWID tags. A detailed description of the four tag types is provided in Section 2.3 . The figure identifies the types of tags that are used in each lifecycle event. ¶ There are many ways in which software tags might be managed for the host the software is installed on. For example, software tags could be made available on the host or to an external software manager when storage is limited on the host.
¶ In these cases, the host or external software manager is responsible for management of the tags, including deployment and removal of the tags as indicated by the above lifecycle. Tags are deployed, and previously deployed tags are typically removed (indicated by an "x" prefix), at each lifecycle stage as follows: ¶ Software Deployment. Before the software component is installed (i.e., pre-installation), and while the product is being deployed, a corpus tag provides information about the installation files and distribution media (e.g., CD/DVD, distribution package). ¶ Corpus tags are not actually deployed on the target system but are intended to support deployment procedures and their dependencies at install-time, such as to verify the installation media. ¶ Software Installation. A primary tag will be installed with the software component (or subsequently created) to uniquely identify and describe the software component. Supplemental tags are created to augment primary tags with additional site-specific or extended information. While not illustrated in the figure, patch tags can also be installed during software installation to provide information about software fixes deployed along with the base software installation. ¶ Software Patching. A new patch tag is provided, when a patch is applied to the software component, supplying details about the patch and its dependencies. While not illustrated in the figure, a corpus tag can also provide information about the patch installer and patching dependencies that need to be installed before the patch. ¶ Software Upgrading. As a software component is upgraded to a new version, new primary and supplemental tags replace existing tags, enabling timely and accurate tracking of updates to software inventory. While not illustrated in the figure, a corpus tag can also provide information about the upgrade installer and dependencies that need to be installed before the upgrade.
¶ Note: In the context of software tagging, software patching and updating differ in an important way. When installing a patch, a set of file modifications is made to pre-installed software that does not alter the version number or the descriptive metadata of an installed software component. An update also makes a set of file modifications, but the version number or the descriptive metadata of the installed software component are changed. ¶ Software Removal. Upon removal of the software component, relevant SWID tags are removed. This removal event can trigger timely updates to software inventory reflecting the removal of the product and any associated patch or supplemental tags. ¶ As illustrated in the figure, supplemental tags can be associated with any corpus, primary, or patch tag to provide additional metadata about an installer, installed software, or installed patch, respectively. ¶ Understanding the use of CoSWIDs in the software lifecycle provides a basis for understanding the information provided in a CoSWID and the associated semantics of this information. Each of the different SWID and CoSWID tag types provides a different set of information. For example, a "corpus tag" is used to describe a software component's installation image on an installation media, while a "patch tag" is meant to describe a patch that modifies some other software component. ¶ 1.2. Concise SWID Format This document defines the CoSWID tag format, which is based on CBOR. CBOR-based CoSWID tags offer a more concise representation of SWID information as compared to the XML-based SWID tag representation in ISO-19770-2:2015. The structure of a CoSWID is described via the Concise Data Definition Language (CDDL) [ RFC8610 ] . The resulting CoSWID data definition is aligned to the information able to be expressed with the XML schema definition of ISO-19770-2:2015 [ SWID ] .
This alignment allows both SWID and CoSWID tags to represent a common set of software component information and allows CoSWID tags to support the same uses as a SWID tag. ¶ The vocabulary, i.e., the CDDL names of the types and members used in the CoSWID CDDL specification, is mapped to more concise labels represented as small integer values (indices). The names used in the CDDL specification and the mapping to the CBOR representation using integer indices are based on the vocabulary of the XML attribute and element names defined in ISO/IEC 19770-2:2015. ¶ 1.3. Requirements Notation The key words " MUST ", " MUST NOT ", " REQUIRED ", " SHALL ", " SHALL NOT ", " SHOULD ", " SHOULD NOT ", " RECOMMENDED ", " NOT RECOMMENDED ", " MAY ", and " OPTIONAL " in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here. ¶ 2. Concise SWID Data Definition The following describes the general rules and processes for encoding data using the CDDL representation. Prior familiarity with CBOR and CDDL concepts will be helpful in understanding this CoSWID specification. ¶ This section describes the conventions by which a CoSWID is represented in the CDDL structure. The CamelCase [ CamelCase ] notation used in the XML schema definition is changed to a hyphen-separated notation [ KebabCase ] (e.g., ResourceCollection is named resource-collection) in the CoSWID CDDL specification. This deviation from the original notation used in the XML representation reduces ambiguity when referencing certain attributes in corresponding textual descriptions. An attribute referred to by its name in CamelCase notation explicitly relates to XML SWID tags; an attribute referred to by its name in KebabCase notation explicitly relates to CBOR CoSWID tags. This approach simplifies the composition of further work that references both XML SWID and CBOR CoSWID documents.
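In the regular cases, the CamelCase-to-KebabCase renaming convention above is mechanical. As a rough sketch (the helper names are ours and not part of either specification):

```python
import re

def camel_to_kebab(name):
    """Convert a SWID CamelCase name to the CoSWID kebab-case form by
    inserting a hyphen before each interior capital letter."""
    return re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()

def kebab_to_camel(name):
    """Inverse mapping for the regular cases. Irregular renamings
    (e.g., SWID 'name' vs. CoSWID 'software-name') need a manual table."""
    return "".join(part.capitalize() for part in name.split("-"))
```

For example, `camel_to_kebab("ResourceCollection")` yields `"resource-collection"`; as noted below, a few attribute names deviate from this pattern and cannot be translated mechanically.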
¶ In most cases, mapping attribute names between SWID and CoSWID can be done automatically by converting between CamelCase and KebabCase attribute names. However, some CoSWID CDDL attribute names show greater variation relative to their corresponding SWID XML Schema attributes. This is done when the change improves clarity in the CoSWID specification. For example, the "name" and "version" SWID fields correspond to the "software-name" and "software-version" CoSWID fields, respectively. As such, it is not always possible to mechanically translate between corresponding attribute names in the two formats. In such cases, a manual mapping will need to be used. XPath expressions [ W3C.REC-xpath20-20101214 ] need to use SWID names; see Section 5.2 . ¶ The 57 human-readable text labels of the CDDL-based CoSWID vocabulary are mapped to integer indices via a block of rules at the bottom of the definition. This allows a more concise integer-based form to be stored or transported, as compared to the less efficient text-based form of the original vocabulary. ¶ Through use of CDDL-based integer labels, CoSWID allows for future expansion in subsequent revisions of this specification and through extensions (see Section 2.2 ). New constructs can be associated with a new integer index. A deprecated construct can be replaced by a new construct with a new integer index. An implementation can use these integer indexes to identify the construct to parse. The CoSWID Items registry, defined in Section 6.1 , is used to ensure that new constructs are assigned a unique index value. This approach avoids the need to have an explicit CoSWID version. ¶ In a number of places, the value encoding admits both integer values and text strings. The integer values are defined in a registry specific to the kind of value; the text values are not intended for interchange and are exclusively meant for private use as defined in Section 6.2.2 .
Encoders SHOULD NOT use string values based on the names registered in the registry, as these values are less concise than their index value equivalent; a decoder MUST however be prepared to accept text strings that are not specified in this document (and ignore the construct if that string is unknown). In the rest of the document, we call this an "integer label with text escape". ¶ The root of the CDDL specification provided by this document is the rule coswid (as defined in Section 8 ): ¶ start = coswid ¶ In CBOR, an array is encoded using bytes that identify the array, and the array's length or stop point (see [ RFC8949 ] ). To make items that support 1 or more values, the following CDDL notation is used. ¶ _name_ = (_label_ => _data_ / [ 2* _data_ ]) ¶ The CDDL rule above allows either a single data item or an array of 2 or more data values to be provided. When a singleton data value is provided, the CBOR markers for the array, array length, and stop point are not needed, saving bytes. When two or more data values are provided, these values are encoded as an array. This modeling pattern is used frequently in the CoSWID CDDL specification to allow for more efficient encoding of singleton values. ¶ Usage of this construct can be simplified using ¶ one-or-more<T> = T / [ 2* T ] ¶ simplifying the above example to ¶ _name_ = (_label_ => one-or-more<_data_>) ¶ The following subsections describe the different parts of the CoSWID model. ¶ 2.1. Character Encoding The CDDL "text" type is represented in CBOR as a major type 3, which represents "a string of Unicode characters that [are] encoded as UTF-8 [ RFC3629 ] " (see Section 3.1 of [ RFC8949 ] ). Thus both SWID and CoSWID use UTF-8 for the encoding of characters in text strings. ¶ To ensure that UTF-8 character strings are able to be encoded/decoded and exchanged interoperably, text strings in CoSWID MUST be encoded consistent with the Net-Unicode definition defined in [ RFC5198 ] . 
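The one-or-more<T> pattern defined earlier in this section can be handled symmetrically on both sides of the wire. A minimal sketch, assuming values are held in Python lists before CBOR serialization (the function names are ours, not from the specification):

```python
def to_one_or_more(values):
    """Encode side of the one-or-more<T> pattern: emit a bare value for
    a singleton, so the CBOR array header and length bytes are not
    needed; emit an array for two or more values."""
    if len(values) == 1:
        return values[0]
    return list(values)

def from_one_or_more(item):
    """Decode side: normalize a bare value or an array back to a list,
    so application code sees a uniform shape."""
    return item if isinstance(item, list) else [item]
```

This only works because the CDDL pattern is applied to item types that are not themselves arrays, so a decoder can distinguish the singleton case unambiguously.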
¶ All names registered with IANA according to requirements in Section 6.2 also MUST be valid according to the XML Schema NMTOKEN data type (see [ W3C.REC-xmlschema-2-20041028 ] Section 3.3.4) to ensure compatibility with the SWID specification where these names are used. ¶ 2.2. Concise SWID Extensions The CoSWID specification contains two features that are not included in the SWID specification on which it is based. These features are: ¶ The explicit definition of types for some attributes in the ISO-19770-2:2015 XML representation that are typically represented by the "any attribute" in the SWID model. These are covered in Section 2.4, Paragraph 2 . ¶ The inclusion of extension points in the CoSWID specification using CDDL sockets (see [ RFC8610 ] Section 3.9). The use of CDDL sockets allow for well-formed extensions to be defined in supplementary CDDL descriptions that support additional uses of CoSWID tags that go beyond the original scope of ISO-19770-2:2015 tags. This extension mechanism can also be used to update the CoSWID format as revisions to ISO-19770-2 are published. ¶ The following CDDL sockets (extension points) are defined in this document, which allow the addition of new information structures to their respective CDDL groups. 
¶ Table 1: CoSWID CDDL Group Extension Points

   Map Name            | CDDL Socket                     | Defined in
   --------------------|---------------------------------|--------------
   concise-swid-tag    | $$coswid-extension              | Section 2.3
   entity-entry        | $$entity-extension              | Section 2.6
   link-entry          | $$link-extension                | Section 2.7
   software-meta-entry | $$software-meta-extension       | Section 2.8
   resource-collection | $$resource-collection-extension | Section 2.9.2
   file-entry          | $$file-extension                | Section 2.9.2
   directory-entry     | $$directory-extension           | Section 2.9.2
   process-entry       | $$process-extension             | Section 2.9.2
   resource-entry      | $$resource-extension            | Section 2.9.2
   payload-entry       | $$payload-extension             | Section 2.9.3
   evidence-entry      | $$evidence-extension            | Section 2.9.4

The CoSWID Items Registry defined in Section 6.1 provides a registration mechanism allowing new items, and their associated index values, to be added to the CoSWID model through the use of the CDDL sockets described in the table above. This registration mechanism provides for well-known index values for data items in CoSWID extensions, allowing these index values to be recognized by implementations supporting a given extension. ¶ The following additional CDDL sockets are defined in this document to allow for adding new values to corresponding type-choices (i.e. to represent enumerations) via custom CDDL specifications. ¶

Table 2: CoSWID CDDL Enumeration Extension Points

   Enumeration Name | CDDL Socket     | Defined in
   -----------------|-----------------|------------
   version-scheme   | $version-scheme | Section 4.1
   role             | $role           | Section 4.2
   ownership        | $ownership      | Section 4.3
   rel              | $rel            | Section 4.4
   use              | $use            | Section 4.5

A number of CoSWID value registries are also defined in Section 6.2 that allow new values to be registered with IANA for the enumerations above. This registration mechanism supports the definition of new well-known index values and names for new enumeration values used by CoSWID, which can also be used by other software tagging specifications. This registration mechanism allows new standardized enumerated values to be shared between multiple tagging specifications (and associated implementations) over time.
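Because extensions add items through CDDL sockets with their own registered integer labels, a decoder that predates a given extension will encounter labels it does not know; per Section 2, such constructs are to be ignored rather than treated as errors. A hypothetical sketch of that behavior for the root map, using the integer labels defined in Sections 2.3 and 2.5 (the helper name and the idea of retaining unknown items are ours):

```python
# Integer labels of the concise-swid-tag root map known to this
# illustrative decoder (tag-id = 0 ... version-scheme = 14, lang = 15).
KNOWN_LABELS = {0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15}

def split_extensions(tag):
    """Separate a decoded concise-swid-tag map into items this
    implementation understands and extension items it does not.
    Unknown labels are retained, not rejected, so tags using a
    $$coswid-extension this decoder predates still parse."""
    known = {k: v for k, v in tag.items() if k in KNOWN_LABELS}
    unknown = {k: v for k, v in tag.items() if k not in KNOWN_LABELS}
    return known, unknown
```

A real implementation supporting a specific extension would simply include that extension's registered labels in its known set.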
¶ 2.3. The concise-swid-tag Map The CDDL specification for the root concise-swid-tag map follows; this rule and its constraints MUST be followed when creating or validating a CoSWID tag: ¶

concise-swid-tag = {
  tag-id => text / bstr .size 16,
  tag-version => integer,
  ? corpus => bool,
  ? patch => bool,
  ? supplemental => bool,
  software-name => text,
  ? software-version => text,
  ? version-scheme => $version-scheme,
  ? media => text,
  ? software-meta => one-or-more<software-meta-entry>,
  entity => one-or-more<entity-entry>,
  ? link => one-or-more<link-entry>,
  ? payload-or-evidence,
  * $$coswid-extension,
  global-attributes,
}

payload-or-evidence //= ( payload => payload-entry )
payload-or-evidence //= ( evidence => evidence-entry )

tag-id = 0
software-name = 1
entity = 2
evidence = 3
link = 4
software-meta = 5
payload = 6
corpus = 8
patch = 9
media = 10
supplemental = 11
tag-version = 12
software-version = 13
version-scheme = 14

$version-scheme /= multipartnumeric
$version-scheme /= multipartnumeric-suffix
$version-scheme /= alphanumeric
$version-scheme /= decimal
$version-scheme /= semver
$version-scheme /= int / text

multipartnumeric = 1
multipartnumeric-suffix = 2
alphanumeric = 3
decimal = 4
semver = 16384

¶ The following describes each member of the concise-swid-tag root map. ¶ global-attributes: A list of items including an optional language definition to support the processing of text-string values and an unbounded set of any-attribute items. Described in Section 2.4, Paragraph 2 . ¶ tag-id (index 0): A 16-byte binary string, or a textual identifier, uniquely referencing a software component. The tag identifier MUST be globally unique. Failure to ensure global uniqueness can create ambiguity in tag use, since the tag-id serves as the global key for matching and lookups. If represented as a 16-byte binary string, the identifier MUST be a valid universally unique identifier as defined by [ RFC4122 ] .
There are no strict guidelines on how the identifier is structured, but examples include a 16-byte GUID (e.g., class 4 UUID) [ RFC4122 ] , or a DNS domain name followed by a "/" and a text string, where the domain name serves to ensure uniqueness across organizations. A textual tag-id MUST NOT contain a sequence of two underscores ("__", see Section 6.7 ). ¶ tag-version (index 12): An integer value that indicates the specific release revision of the tag. Typically, the initial value of this field is set to 0 and the value is increased for subsequent tags produced for the same software component release. This value allows a CoSWID tag producer to correct an incorrect tag previously released without indicating a change to the underlying software component the tag represents. For example, the tag version could be changed to add new metadata, to correct a broken link, to add a missing payload entry, etc. When producing a revised tag, the new tag-version value MUST be greater than the old tag-version value. ¶ corpus (index 8): A boolean value that indicates if the tag identifies and describes an installable software component in its pre-installation state. Installable software includes an installation package or installer for a software component, a software update, or a patch. If the CoSWID tag represents installable software, the corpus item MUST be set to "true". If not provided, the default value MUST be considered "false". ¶ patch (index 9): A boolean value that indicates if the tag identifies and describes an installed patch that has made incremental changes to a software component installed on an endpoint. If a CoSWID tag is for a patch, the patch item MUST be set to "true". If not provided, the default value MUST be considered "false". A patch item's value MUST NOT be set to "true" if the installation of the associated software package changes the version of a software component.
¶ supplemental (index 11): A boolean value that indicates if the tag is providing additional information to be associated with another referenced SWID or CoSWID tag. This allows tools and users to record their own metadata about a software component without modifying SWID primary or patch tags created by a software provider. If a CoSWID tag is a supplemental tag, the supplemental item MUST be set to "true". If not provided, the default value MUST be considered "false". ¶ software-name (index 1): This textual item provides the software component's name. This name is likely the same name that would appear in a package management tool. This item maps to '/SoftwareIdentity/@name' in [ SWID ] . ¶ software-version (index 13): A textual value representing the specific release or development version of the software component. This item maps to '/SoftwareIdentity/@version' in [ SWID ] . ¶ version-scheme (index 14): An integer or textual value representing the versioning scheme used for the software-version item, expressed as an integer label with text escape (see Section 2 ) for the "Version Scheme" registry ( Section 4.1 ). If an integer value is used it MUST be an index value in the range -256 to 65535. Integer values in the range -256 to -1 are reserved for testing and use in closed environments (see Section 6.2.2 ). Integer values in the range 0 to 65535 correspond to registered entries in the IANA "Software Tag Version Scheme Values" registry (see Section 6.2.4 ). ¶ media (index 10): This text value is a hint to the tag consumer to understand what target platform this tag applies to. This item MUST be formatted as a query as defined by the W3C Media Queries Recommendation (see [ W3C.REC-css3-mediaqueries-20120619 ] ). Support for media queries is included here for interoperability with [ SWID ] , which does not provide any further requirements for media query use. Thus, this specification does not clarify how a media query is to be used for a CoSWID.
¶ software-meta (index 5): An open-ended map of key/value data pairs. A number of predefined keys can be used within this item providing for common usage and semantics across the industry. Use of this map allows any additional attribute to be included in the tag. It is expected that industry groups will use a common set of attribute names to allow for interoperability within their communities. Described in Section 2.8 . This item maps to '/SoftwareIdentity/Meta' in [ SWID ] . ¶ entity (index 2): Provides information about one or more organizations responsible for producing the CoSWID tag, and producing or releasing the software component referenced by this CoSWID tag. Described in Section 2.6 . ¶ link (index 4): Provides a means to establish relationship arcs between the tag and other items. A given link can be used to establish the relationship between tags or to reference another resource that is related to the CoSWID tag (e.g., vulnerability database association, ROLIE feed [ RFC8322 ] , MUD resource [ RFC8520 ] , software download location, etc.). This is modeled after the HTML "link" element. Described in Section 2.7 . ¶ payload (index 6): This item represents a collection of software artifacts (described by child items) that compose the target software. For example, these artifacts could be the files included with an installer for a corpus tag or installed on an endpoint when the software component is installed for a primary or patch tag. The artifacts listed in a payload may be a superset of the software artifacts that are actually installed. Based on user selections at install time, an installation might not include every artifact that could be created or executed on the endpoint when the software component is installed or run. This item is mutually exclusive to evidence, as a payload can only be provided by an external entity. Described in Section 2.9.3 .
¶ evidence (index 3): This item can be used to record the results of a software discovery process used to identify untagged software on an endpoint or to represent indicators for why software is believed to be installed on the endpoint. In either case, a CoSWID tag can be created by the tool performing an analysis of the software components installed on the endpoint. This item is mutually exclusive to payload, as evidence is always generated ad hoc on the target device. Described in Section 2.9.4 . ¶ $$coswid-extension: This CDDL socket is used to add new information structures to the concise-swid-tag root map. See Section 2.2 . ¶ 2.4. concise-swid-tag Co-Constraints The following co-constraints apply to the information provided in the concise-swid-tag group. If any of these constraints is not met, a signed tag can no longer be used as a signed statement. ¶ The patch and supplemental items MUST NOT both be set to "true". ¶ If the patch item is set to "true", the tag SHOULD contain at least one link item (see Section 2.7 ) with both the rel item value of "patches" and an href item specifying an association with the software that was patched. Without at least one such link item, the target of the patch cannot be identified and the patch tag cannot be applied without external context. ¶ If the supplemental item is set to "true", the tag SHOULD contain at least one link item with both the rel item value of "supplemental" and an href item specifying an association with the software that is supplemented. Without at least one such link item, the target of the supplemental tag cannot be identified and the supplemental tag cannot be applied without external context. ¶ If all of the corpus, patch, and supplemental items are "false", or if the corpus item is set to "true", then a software-version item MUST be included with a value set to the version of the software component. This ensures that primary and corpus tags have an identifiable software version. ¶ 2.5.
The global-attributes Group The global-attributes group provides a list of items, including an optional language definition to support the processing of text-string values, and an unbounded set of any-attribute items allowing for additional items to be provided as a general point of extension in the model. ¶ The CDDL for the global-attributes group follows: ¶

global-attributes = (
  ? lang => text,
  * any-attribute,
)

any-attribute = (
  label => one-or-more<text> / one-or-more<int>
)

label = text / int

¶ The following describes each child item of this group. ¶ lang (index 15): A textual language tag that conforms with the IANA "Language Subtag Registry" [ RFC5646 ] . The context of the specified language applies to all sibling and descendant textual values, unless a descendant object has defined a different language tag. Thus, a new context is established when a descendant object redefines a new language tag. All textual values within a given context MUST be considered expressed in the specified language. ¶ any-attribute: This sub-group provides a means to include arbitrary information via label/index ("key") value pairs. Labels can be either a single integer or a text string. Values can be a single integer, a text string, or an array of integers or text strings. ¶ 2.6. The entity-entry Map The CDDL for the entity-entry map follows: ¶

entity-entry = {
  entity-name => text,
  ? reg-id => any-uri,
  role => one-or-more<$role>,
  ? thumbprint => hash-entry,
  * $$entity-extension,
  global-attributes,
}

entity-name = 31
reg-id = 32
role = 33
thumbprint = 34

$role /= tag-creator
$role /= software-creator
$role /= aggregator
$role /= distributor
$role /= licensor
$role /= maintainer
$role /= int / text

tag-creator = 1
software-creator = 2
aggregator = 3
distributor = 4
licensor = 5
maintainer = 6

¶ The following describes each child item of this group. ¶ global-attributes: The global-attributes group described in Section 2.4, Paragraph 2 .
entity-name (index 31): The textual name of the organizational entity claiming the roles specified by the role item for the CoSWID tag. This item maps to '/SoftwareIdentity/Entity/@name' in [SWID].

reg-id (index 32): The registration id value is intended to uniquely identify a naming authority in a given scope (e.g., global, organization, vendor, customer, administrative domain, etc.) for the referenced entity. The value of a registration ID MUST be an RFC 3986 URI; it is not intended to be dereferenced. The scope will usually be the scope of an organization.

role (index 33): An integer or textual value (integer label with text escape, see Section 2) representing the relationship(s) between the entity and this tag or the referenced software component. If an integer value is used, it MUST be an index value in the range -256 to 255. Integer values in the range -256 to -1 are reserved for testing and use in closed environments (see Section 6.2.2). Integer values in the range 0 to 255 correspond to registered entries in the IANA "Software Tag Entity Role Values" registry (see Section 6.2.5).

The following additional requirements exist for the use of the "role" item:

An entity item MUST be provided with the role of "tag-creator" for every CoSWID tag. This indicates the organization that created the CoSWID tag.

An entity item SHOULD be provided with the role of "software-creator" for every CoSWID tag, if this information is known to the tag creator. This indicates the organization that created the referenced software component.

thumbprint (index 34): The value of the thumbprint item provides a hash (i.e., the thumbprint) of the signing entity's public key certificate. This provides an indicator of which entity signed the CoSWID tag, which will typically be the tag creator. See Section 2.9.1 for more details on the use of the hash-entry data structure.

$$entity-extension: This CDDL socket can be used to extend the entity-entry group model.
See Section 2.2.

2.7. The link-entry Map

The CDDL for the link-entry map follows:

link-entry = {
  ? artifact => text,
  href => any-uri,
  ? media => text,
  ? ownership => $ownership,
  rel => $rel,
  ? media-type => text,
  ? use => $use,
  * $$link-extension,
  global-attributes,
}
media = 10
artifact = 37
href = 38
ownership = 39
rel = 40
media-type = 41
use = 42
$ownership /= shared
$ownership /= private
$ownership /= abandon
$ownership /= int / text
abandon = 1
private = 2
shared = 3
$rel /= ancestor
$rel /= component
$rel /= feature
$rel /= installationmedia
$rel /= packageinstaller
$rel /= parent
$rel /= patches
$rel /= requires
$rel /= see-also
$rel /= supersedes
$rel /= supplemental
$rel /= -256..65536 / text
ancestor = 1
component = 2
feature = 3
installationmedia = 4
packageinstaller = 5
parent = 6
patches = 7
requires = 8
see-also = 9
supersedes = 10
supplemental = 11
$use /= optional
$use /= required
$use /= recommended
$use /= int / text
optional = 1
required = 2
recommended = 3

The following describes each member of this map.

global-attributes: The global-attributes group described in Section 2.4, Paragraph 2.

artifact (index 37): To be used with rel="installation-media", this item's value provides the absolute filesystem path to the installer executable or script that can be run to launch the referenced installation. Links with the same artifact name MUST be considered mirrors of each other, allowing the installation media to be acquired from any of the described sources.

href (index 38): A URI-reference [RFC3986] for the referenced resource. The "href" item's value can be, but is not limited to, the following (which is a slightly modified excerpt from [SWID]):

If no URI scheme is provided, then the URI-reference is a relative reference relative to the base URI of the CoSWID tag, i.e., the URI under which the CoSWID tag was provided. For example, "./folder/supplemental.coswid".
a physical resource location with any acceptable URI scheme (e.g., file://, http://, https://, ftp://)

a URI with "swid:" as the scheme, which refers to another SWID or CoSWID tag by the referenced tag's tag-id. This URI needs to be resolved in the context of the endpoint by software that can look up other SWID or CoSWID tags. For example, "swid:2df9de35-0aff-4a86-ace6-f7dddd1ade4c" references the tag with the tag-id value "2df9de35-0aff-4a86-ace6-f7dddd1ade4c".

a URI with "swidpath:" as the scheme, which refers to another software tag via an XPATH query [W3C.REC-xpath20-20101214] that matches items in that tag (Section 5.2). This scheme is provided for compatibility with [SWID]. This specification does not define how to resolve an XPATH query in the context of CBOR; see Section 5.2.

media (index 10): A hint to the consumer of the link as to what target platform the link is applicable to. This item represents a query as defined by the W3C Media Queries Recommendation (see [W3C.REC-css3-mediaqueries-20120619]). As with the media item defined in Section 2.3, support for media queries is included here for interoperability with [SWID], which does not provide any further requirements for media query use. Thus, this specification does not clarify how a media query is to be used for a CoSWID.

ownership (index 39): An integer or textual value (integer label with text escape, see Section 2, for the "Software Tag Link Ownership Values" registry, Section 4.3) used when the "href" item references another software component to indicate the degree of ownership between the software component referenced by the CoSWID tag and the software component referenced by the link. If an integer value is used, it MUST be an index value in the range -256 to 255. Integer values in the range -256 to -1 are reserved for testing and use in closed environments (see Section 6.2.2).
Integer values in the range 0 to 255 correspond to registered entries in the "Software Tag Link Ownership Values" registry.

rel (index 40): An integer or textual value (integer label with text escape, see Section 2, for the "Software Tag Link Relationship Values" registry, Section 4.3) that identifies the relationship between this CoSWID and the target resource identified by the "href" item. If an integer value is used, it MUST be an index value in the range -256 to 65535. Integer values in the range -256 to -1 are reserved for testing and use in closed environments (see Section 6.2.2). Integer values in the range 0 to 65535 correspond to registered entries in the IANA "Software Tag Link Relationship Values" registry (see Section 6.2.7). If a string value is used, it MUST be either a private use name as defined in Section 6.2.2 or a "Relation Name" from the IANA "Link Relation Types" registry (https://www.iana.org/assignments/link-relations/link-relations.xhtml) as defined by [RFC8288]. When a string value defined in the IANA "Software Tag Link Relationship Values" registry matches a Relation Name defined in the IANA "Link Relation Types" registry, the index value in the IANA "Software Tag Link Relationship Values" registry MUST be used instead, as this relationship has a specialized meaning in the context of a CoSWID tag. String values correspond to registered entries in the "Software Tag Link Relationship Values" registry.

media-type (index 41): A link can point to arbitrary resources on the endpoint, local network, or Internet using the href item. Use of this item supplies the resource consumer with a hint of what type of resource to expect. (This is a hint: there is no obligation for the server hosting the target of the URI to use the indicated media type when the URI is dereferenced.) Media types are identified by referencing a "Name" from the IANA "Media Types" registry (http://www.iana.org/assignments/media-types/media-types.xhtml).
This item maps to '/SoftwareIdentity/Link/@type' in [SWID].

use (index 42): An integer or textual value (integer label with text escape, see Section 2, for the "Software Tag Link Use Values" registry, Section 4.3) used to determine if the referenced software component has to be installed before installing the software component identified by the CoSWID tag. If an integer value is used, it MUST be an index value in the range -256 to 255. Integer values in the range -256 to -1 are reserved for testing and use in closed environments (see Section 6.2.2). Integer values in the range 0 to 255 correspond to registered entries in the IANA "Software Tag Link Use Values" registry (see Section 6.2.8). If a string value is used, it MUST be a private use name as defined in Section 6.2.2. String values correspond to registered entries in the "Software Tag Link Use Values" registry.

$$link-extension: This CDDL socket can be used to extend the link-entry map model. See Section 2.2.

2.8. The software-meta-entry Map

The CDDL for the software-meta-entry map follows:

software-meta-entry = {
  ? activation-status => text,
  ? channel-type => text,
  ? colloquial-version => text,
  ? description => text,
  ? edition => text,
  ? entitlement-data-required => bool,
  ? entitlement-key => text,
  ? generator => text / bstr .size 16,
  ? persistent-id => text,
  ? product => text,
  ? product-family => text,
  ? revision => text,
  ? summary => text,
  ? unspsc-code => text,
  ? unspsc-version => text,
  * $$software-meta-extension,
  global-attributes,
}
activation-status = 43
channel-type = 44
colloquial-version = 45
description = 46
edition = 47
entitlement-data-required = 48
entitlement-key = 49
generator = 50
persistent-id = 51
product = 52
product-family = 53
revision = 54
summary = 55
unspsc-code = 56
unspsc-version = 57

The following describes each child item of this group.

global-attributes: The global-attributes group described in Section 2.4, Paragraph 2.
activation-status (index 43): A textual value that identifies how the software component has been activated, which might relate to specific terms and conditions for its use (e.g., Trial, Serialized, Licensed, Unlicensed, etc.) and relate to an entitlement. This attribute is typically used in supplemental tags, as it contains information that might be selected during a specific install.

channel-type (index 44): A textual value that identifies which sales, licensing, or marketing channel the software component has been targeted for (e.g., Volume, Retail, OEM, Academic, etc.). This attribute is typically used in supplemental tags, as it contains information that might be selected during a specific install.

colloquial-version (index 45): A textual value for the software component's informal or colloquial version. Examples include a year value, a major version number, or a similar value used to identify a group of specific software component releases that are part of the same release/support cycle. This version can stay the same through multiple releases of a software component, while the software-version specified in the concise-swid-tag group is much more specific and will change for each software component release. This version is intended to be used for string comparison (byte-by-byte) only and is not intended to be used to determine if a specific value is earlier or later in a sequence.

description (index 46): A textual value that provides a detailed description of the software component. This value MAY be multiple paragraphs separated by CR LF characters as described by [RFC5198].

edition (index 47): A textual value indicating that the software component represents a functional variation of the code base used to support multiple software components. For example, this item can be used to differentiate enterprise, standard, or professional variants of a software component.
entitlement-data-required (index 48): A boolean value that can be used to determine if accompanying proof of entitlement is needed when a software license reconciliation process is performed.

entitlement-key (index 49): A vendor-specific textual key that can be used to identify and establish a relationship to an entitlement. Examples of an entitlement-key might include a serial number, product key, or license key. For values that relate to a given software component install (i.e., a license key), a supplemental tag will typically contain this information. In other cases, where a general-purpose key can be provided that applies to all possible installs of the software component on different endpoints, a primary tag will typically contain this information. Since CoSWID tags are not intended to contain confidential information, tag authors are advised not to record unprotected, private software license keys in this field.

generator (index 50): The name (or tag-id) of the software component that created the CoSWID tag. If the generating software component has a SWID or CoSWID tag, then the tag-id for the generating software component SHOULD be provided.

persistent-id (index 51): A globally unique identifier used to identify a set of software components that are related. Software components sharing the same persistent-id can be different versions. This item can be used to relate software components, released at different points in time or through different release channels, that may not be able to be related through use of the link item.

product (index 52): A basic name for the software component that can be common across multiple tagged software components (e.g., Apache HTTPD).

product-family (index 53): A textual value indicating the software
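The integer-indexed map design used throughout the CDDL above trades readability for compactness on the wire. As a rough illustration (no CoSWID library is used; the tiny encoder below handles only the small unsigned integers and short text strings this example needs), an entity-entry-style map carrying entity-name (31) "Acme" and role (33) tag-creator (1) encodes to just eleven CBOR bytes:

```python
def cbor_encode(value):
    """Encode a tiny subset of CBOR (RFC 8949): unsigned ints < 2^16,
    short text strings, and maps -- just enough for this illustration."""
    if isinstance(value, bool):
        raise TypeError("booleans not handled in this sketch")
    if isinstance(value, int):
        return _head(0, value)                   # major type 0: unsigned int
    if isinstance(value, str):
        data = value.encode("utf-8")
        return _head(3, len(data)) + data        # major type 3: text string
    if isinstance(value, dict):
        out = _head(5, len(value))               # major type 5: map
        for k, v in value.items():
            out += cbor_encode(k) + cbor_encode(v)
        return out
    raise TypeError(value)

def _head(major, arg):
    # The initial byte holds the major type plus either the value itself
    # (0..23) or a marker for the 1- or 2-byte argument that follows.
    if arg < 24:
        return bytes([major << 5 | arg])
    if arg < 256:
        return bytes([major << 5 | 24, arg])
    return bytes([major << 5 | 25]) + arg.to_bytes(2, "big")

# entity-name (31) => "Acme", role (33) => tag-creator (1)
entity = cbor_encode({31: "Acme", 33: 1})
```

The result is the 11-byte sequence a2181f6441636d65182101, which any conforming CBOR decoder can read back; the same data as a name-keyed map would be noticeably larger.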
https://huggingface.co/datasets?modality=modality%3A3d

3D Datasets – Hugging Face

Datasets: 399 · Sort: Trending · Active filters: 3d

InternRobotics/InternData-A1 · Viewer · Updated 2 days ago · 5.8M · 33.3k · 67
CaiYuanhao/DiffusionGS · Viewer · Updated 14 days ago · 119 · 2.63k · 10
AvaLovelace/StableText2Brick · Viewer · Updated Jul 2, 2025 · 47.4k · 321 · 16
edbeeching/godot_rl_FlyBy · Viewer · Updated Feb 22, 2024 · 9 · 1.3k · 3
Mialiu/nerf · Viewer · Updated Apr 2, 2023 · 1.73k · 48 · 2
perler/ppsurf · Viewer · Updated Feb 15, 2024 · 5.2k · 4.17k · 3
GaussianWorld/scannetpp_v1_scaffold · Preview · Updated Mar 24, 2025 · 1 · 2
huanngzh/3D-Front · Preview · Updated Jun 25, 2025 · 1.66k · 21
alexanderdann/CTSpine1K · Updated May 25, 2025 · 2.34k · 4
1Konny/t2m4lvo-truebones-zoo · Viewer · Updated Jul 1, 2025 · 1.14k · 1.57k · 8
mrmrx/CADS-dataset · Viewer · Updated Oct 10, 2025 · 22.1k · 48.9k · 30
EmmiAI/Emmi-Wing · Viewer · Updated 25 days ago · 29.7k · 40 · 6
webxos/3D_torus_dataset · Viewer · Updated 3 days ago · 71 · 74 · 2
edbeeching/cpp_graphics_engineer_test_datasets · Viewer · Updated Oct 7, 2022 · 18 · 151 · 2
ai-habitat/OVMM_objects · Preview · Updated Jul 16, 2024 · 936 · 6
xieyizheng/pose · Viewer · Updated Jul 14, 2023 · 67.2k · 62 · 3
YouLiXiya/nerf · Viewer · Updated Apr 15, 2024 · 3k · 97 · 4
gafoart/gafoart · Viewer · Updated Nov 14, 2025 · 5 · 662 · 1
kedardes/splats · Viewer · Updated Feb 3, 2024 · 1 · 57 · 1
twoyoung/3DGS-binary-segmentation · Viewer · Updated Mar 5, 2024 · 105 · 28 · 1
FlyCole/Dream2Real · Viewer · Updated Feb 7, 2024 · 2.6k · 31 · 1
TACOTAKEDOWN/C-17 · Viewer · Updated Feb 15, 2024 · 803 · 8 · 1
Qmh/lerf_ovs · Viewer · Updated Feb 29, 2024 · 816 · 10 · 1
kentaito321/visco-nerf · Viewer · Updated Mar 23, 2024 · 720 · 55 · 1
pablovela5620/sample-polycam-room · Viewer · Updated Apr 18, 2024 · 8.37k · 8 · 1
kafkaon/splat_examples · Viewer · Updated Mar 12, 2025 · 106 · 12 · 1
MakiAi/Climbing-gym-soleil2-WallDataset · Viewer · Updated Mar 17, 2024 · 6 · 15 · 1
UW3DGS/UW3DGS · Viewer · Updated Mar 28, 2024 · 3 · 12 · 1
tfnn/MESHY.AI_800_GLB_3D-Assets_Categorised_and_Labelled · Updated Mar 30, 2024 · 32 · 4
tfnn/MESHY.AI_363_PLY_Creatures_Labelled · Updated Apr 4, 2024 · 19 · 2
https://kabirk.com/wimmelbench#analysis

How well can LLMs see? · Kabir Khandpur

21 Nov 2024

TL;DR: I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for "teeming picture".

A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench.

Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13, and Claude 3.6 Sonnet & GPT-4o scoring -0.5.

In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, bounding boxes — but not text — as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that it would score well on Wimmelbench. So why benchmark large language models?

It tests how well the most general purpose, human-like models we have today understand images. I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs.
I didn't have easy access to the SAM-based models.
It seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children's books.

To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects.

The annotation tool

In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file.

To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
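The area-ratio statistic behind figure 1 is simple to compute. A minimal sketch; the (x1, y1, x2, y2) pixel layout and the function name are my assumptions, not the benchmark's actual annotation schema:

```python
def box_area_ratio(box, image_width, image_height):
    """Return a bounding box's area as a fraction of the image area.

    `box` is assumed to be (x1, y1, x2, y2) in pixels with x1 < x2 and
    y1 < y2 -- an illustrative layout, not necessarily the one used by
    the annotation tool.
    """
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (image_width * image_height)

# A 100x50 pixel box in a 1000x1000 image covers 0.5% of it.
ratio = box_area_ratio((10, 10, 110, 60), 1000, 1000)
```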
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.

I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of each model's object descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code, and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions.
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model's Generalized Intersection over Union, $GIoU = \frac{|A \cap B|}{|A \cup B|} - \frac{|C \setminus (A \cup B)|}{|C|}$, where $C$ is the smallest convex hull that encloses both $A$ and $B$, between the predicted and ground truth bounding boxes (figure 3). GIoU scores range from -1 to 1, where 1.0 is perfect overlap, 0.0 means no overlap, and negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren't there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways

Gemini Pro scores best on both description and bounding box accuracy.
Models can often describe objects well, but struggle to locate them precisely.
Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). $R^2$ was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) more equally at describing objects across a range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn't happen — if someone could see an object, they could draw a bounding box around it.

Maybe this is a prompt issue? I wouldn't be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn't a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results.

Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
Gemini's stronger performance also suggests that there are post-training steps that can be taken to improve accuracy. (From the Gemini docs: "For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…")

I don't understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the 'couple kissing':

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it's roughly the same size as the previous object.

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don't understand what causes the difference in ability.

Future work

If I had more time…

Expand the dataset to several hundred objects.
Test with more LLMs and also specialized vision models like SAM.
Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs' language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I'd love to hear your thoughts — feel free to email me.
Appendix: example results

First image

Claude

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
Model description: There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the 'EAT ME' café in the scene.
Grade: 0, GIoU: -0.89

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
Model description: There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that's set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters.
Grade: 2, GIoU: -0.51

Gemini

Object: Cactus
Ground truth: as above.
Model description: A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill.
Grade: 3, GIoU: 0.52

Object: Elephant
Ground truth: as above.
Model description: An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image's center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant's right, a light gray cow seems to be giving a presentation while holding a microphone.
Grade: 3, GIoU: 0.70

GPT-4o

Object: Cactus
Ground truth: as above.
Model description: The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals.
Grade: 0, GIoU: -0.98

Object: Elephant
Ground truth: as above.
Model description: The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting.
Grade: 3, GIoU: 0.31

Second image

Claude

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
Model description: In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He's wearing what appears to be blue clothing and is positioned near the green paneled wall. He's surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion.
Grade: 0, GIoU: -0.94

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.
Model description: There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene.
Grade: 0, GIoU: -0.94

Gemini

Object: Man waving a white flag
Ground truth: as above.
Model description: A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling.
Grade: 1, GIoU: 0.04

Object: Bowling ball
Ground truth: as above.
Model description: A bowling ball can be seen lying on the floor amidst the chaotic scene. It's located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It's spatially close to a scattered pile of what appear to be pins, suggesting that it's part of the chaotic activity in the depicted scene.
Grade: 0, GIoU: -0.99

GPT-4o

Object: Man waving a white flag
Ground truth: as above.
Model description: The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden.
Grade: 0, GIoU: -0.98

Object: Bowling ball
Ground truth: as above.
Model description: The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos.
Grade: 0, GIoU: -0.85
https://opensource.guide/fr/starting-a-project/ | Starting an Open Source Project | Open Source Guides

Learn more about the world of open source and get ready to launch your own project.

Table of contents: The "what" and "why" of open source · Should I launch my own open source project? · Launching your own open source project · Naming and branding your project · Your pre-launch checklist

The "what" and "why" of open source

So, you're thinking about getting started with open source? Congratulations! The world appreciates your contribution. Let's talk about what open source is and why people do it.

What does "open source" mean?

When a project is open source, it means that anybody can view, use, modify, and distribute your project for any purpose. These permissions are enforced through an open source license.

Open source is powerful because it lowers the barriers to adoption, allowing ideas to spread quickly. To understand how it works, imagine a friend is hosting a potluck, and you bring a cherry pie:

Everybody tries the pie (use)
The pie is a hit! They ask you for the recipe, which you provide (view)
One friend, Alex, who is a pastry chef, suggests reducing the sugar (modify)
Another friend, Lisa, asks to use it for a dinner party next week (distribute)

By comparison, a closed-source process would be going to a restaurant and ordering a slice of cherry pie. You have to pay to eat the pie, and the restaurant probably won't give you the recipe.
And if you copied their pie exactly and sold it under your own name, the restaurant could even take legal action against you.

Why do people open source their work?

One of the most rewarding experiences I get out of using and collaborating on open source comes from the relationships I build with other developers facing the same problems I am. — @kentcdodds, "How getting into Open Source has been awesome for me"

There are many reasons why a person or organization would want to open source a project. Among them:

Collaboration: Open source projects can accept changes from anybody in the world. Exercism, for example, is a programming exercise platform with over 350 contributors.

Adoption and remixing: Open source projects can be used by anyone for nearly any purpose. People can even use them to build other things. WordPress, for example, started as a fork (we'll cover what a fork is later; for now, think of it as a copy taken at a point in time) of an existing project called b2.

Transparency: Anyone can inspect an open source project for errors or inconsistencies. Transparency matters to governments like Bulgaria's or the United States', to regulated industries like banking or healthcare, and to security software like Let's Encrypt.

Open source isn't just for software. You can open source everything from data sets to books. Take a look at GitHub Explore for ideas on what else you can open source.

Does open source mean "free of charge"?

One of open source's biggest draws is that it costs no money. "Free of charge", however, is a byproduct of open source's overall value.
Because an open source license requires that anyone can use, modify, and share your project for virtually any purpose, the projects themselves tend to be free of charge. If the project cost money, anyone could legally make a copy and use the free version instead. As a result, most open source projects are free, but "free of charge" is not part of the open source definition. There are ways to charge for open source projects indirectly, through dual licensing or limited features, while still complying with the official definition of open source.

Should I launch my own open source project?

The short answer is yes, because whatever the outcome, launching your own project is a great way to learn how open source works.

If you've never open sourced a project before, you might be nervous about what people will think, or worried that nobody will notice it. If this sounds like you, you're not alone! Open source work is like any other creative activity, whether it's writing or painting. It can feel scary to share your work with the world, but the only way to get better is to practice, even if you don't have an audience.

If you're not yet convinced, take a moment to think about your goals.

Setting your goals

Goals can help you figure out what to work on, what to say no to, and where you need help from others. Start by asking yourself: why am I open sourcing this project?

There is no single right answer to this question. You may have multiple goals for a single project, or different projects with different goals.
If your only goal is to show off your work, you may not even want contributions, and you can say so in your README. On the other hand, if you do want contributors, you'll want to invest time in clear documentation and in making newcomers feel welcome.

At some point I created a custom UIAlertView that I was using... and I decided to make it open source. So I modified it to be more dynamic and uploaded it to GitHub. I also wrote my first documentation explaining to other developers how to use it in their projects. Probably nobody ever used it because it was a simple project, but I felt good about my contribution. — @mavris, "Self-taught Software Developers: Why Open Source is important to us"

As your project grows, your community may start to need more from you than code. Keep in mind that responding to issues, reviewing code, and promoting your project are all important tasks in an open source project. While how much time you spend on non-coding tasks will depend on the size and scope of your project, as a maintainer you should be prepared to handle them yourself or to find someone to help you.

If you're part of a company that is open sourcing a project, make sure the project has the internal resources it needs to thrive. You'll want to identify who is responsible for maintaining the project after launch, and how you'll share those tasks with your community. If you need a dedicated budget or headcount for promotion, operations, and maintenance of the project, start those conversations within your company as early as possible.
As you begin to open source the project, it's important to make sure that your management processes take the contributions and abilities of the community around your project into account. Don't be afraid to involve contributors who are not employed at your company in key aspects of the project, especially if they are frequent contributors. — @captainsafia, "So you wanna open source a project, eh?"

Contributing to other projects

If your goal is to learn how to collaborate with others or to understand how open source works, consider contributing to an existing project. Start with a project that you already use and love. Contributing to a project can be as simple as fixing typos or updating documentation. If you're not sure how to get started as a contributor, check out our guide How to Contribute to Open Source.

Launching your own open source project

There is no perfect time to open source your work. You can open source an idea, a work in progress, or a project that has been closed source for years. Generally speaking, you should open source your project when you're comfortable with others viewing, and giving feedback on, your work.

Whatever stage you decide to open source your project at, every project should include the following documentation:

Open source license
README
Contributing guidelines
Code of conduct

As a maintainer, these components will help you communicate expectations, manage contributions, and protect everyone's legal rights (including your own). They significantly increase your chances of having a positive experience. If your project is on GitHub, putting these files in your root directory with the recommended filenames will help GitHub recognize them and surface them automatically to your readers.
Choosing a license

An open source license guarantees that others can use, copy, modify, and contribute back to your project without repercussions. It also protects you from sticky legal situations. You must include a license when you launch an open source project.

Legal work is no fun. The good news is that you can copy and paste an existing license into your repository. It only takes a minute to protect your hard work. MIT, Apache 2.0, and GPLv3 are the most popular open source licenses, but there are other options to choose from. When you create a new project on GitHub, you're given the option to select a license. Including an open source license is what makes your GitHub project open source. If you have other questions or concerns about the legal aspects of managing an open source project, we've got you covered.

Writing a README

READMEs do more than explain how to use your project. They also explain why your project matters and what your users can do with it. In your README, try to answer the following questions:

What does this project do?
Why is this project useful?
How do I get started?
Where can I get more help, if I need it?

You can use your README to answer other questions, such as how you handle contributions, what the project's goals are, and information about licenses and attribution. If you don't want to accept contributions, or your project is not yet ready for production, write that down too.

Better documentation means more users, fewer support requests, and more contributors. (...) Remember that your readers aren't you.
There are people with completely different experiences who could get involved with your project. — @tracymakes, "Writing So Your Words Are Read (video)"

Sometimes people avoid writing a README because they feel the project is unfinished, or because they don't want contributions. Those are actually very good reasons to write one.

For more inspiration, try using @18F's "Making READMEs Readable" or @PurpleBooth's README template to write a complete README. When you include a README file in the root directory, GitHub automatically displays it on the repository's homepage.

Writing your contributing guidelines

A CONTRIBUTING file tells your audience how to participate in your project. For example, you might include information on:

How to file a bug report (try using issue and pull request templates)
How to suggest a new feature
How to set up your environment and run tests

In addition to technical details, a CONTRIBUTING file is an opportunity to communicate your expectations for contributions, such as:

The types of contributions you're looking for
Your roadmap or vision for the project
How contributors should (or should not) get in touch with you

Using a warm, friendly tone and offering specific suggestions for contributions (such as writing documentation or making a website) can go a long way toward making newcomers feel welcome and excited to participate. For example, Active Admin starts its contributing guide with:

First off, thank you for considering contributing to Active Admin. It's people like you that make Active Admin such a great tool.

In the earliest stages of your project, your CONTRIBUTING file can be simple.
You should always explain how to report bugs or file issues, along with any technical requirements (such as tests) for making a contribution. Over time, you can add other frequently asked questions to your CONTRIBUTING file. Writing this information down means fewer people will ask you the same questions over and over again.

For more help with writing your CONTRIBUTING file, check out @nayafia's contributing guide template or @mozilla's "How to Build a CONTRIBUTING.md". Link to your CONTRIBUTING file from your README so more people see it. If you place the CONTRIBUTING file in your project's repository, GitHub will automatically link to it when a contributor creates an issue or opens a pull request.

Establishing a code of conduct

We've all had experiences where we faced what was probably abuse, whether as a maintainer trying to explain why something had to be a certain way, or as a user... asking a simple question. (...) A code of conduct becomes an easily referenced, readable document that shows your team takes constructive discourse very seriously. — @mlynch, "Making Open Source a Happier Place"

Finally, a code of conduct helps set ground rules for the behavior of your project's participants. This is especially valuable if you're launching an open source project for a community or a company. A code of conduct empowers you to facilitate healthy, constructive community behavior, which will reduce your stress as a maintainer. For more information, check out our Code of Conduct guide.
In addition to communicating how you expect participants to behave, a code of conduct also tends to describe who these expectations apply to, when they apply, and what to do if a violation occurs. Much like open source licenses, there are emerging standards for codes of conduct, so you don't have to write your own. The Contributor Covenant is a code of conduct that is used by over 40,000 open source projects, including Kubernetes, Rails, and Swift. Whichever text you use, you should be prepared to enforce your code of conduct when necessary.

Paste the text directly into a CODE_OF_CONDUCT file in your repository. Keep the file in your project's root directory so it's easy to find, and link to it from your README.

Naming and branding your project

Branding is more than a flashy logo or a catchy project name. It's about how you talk about your project, and who your message is for.

Choosing the right name

Pick a name that is easy to remember and, ideally, gives some idea of what the project does. For example:

Sentry monitors apps to provide crash reports
Thin is a fast and simple Ruby web server

If you're building on an existing project, using its name as a prefix can help clarify what your project does (for example, node-fetch brings window.fetch to Node.js).

Consider clarity above all. Puns are fun, but remember that some jokes may not translate to other cultures, and people with different experiences from yours may not get them. Some of your potential users may be company employees: you don't want to make them uncomfortable when they have to explain your project at work!
Avoiding name conflicts

Check for open source projects with a similar name, especially if you share the same language or ecosystem. If your name is too close to that of a popular existing project, you risk confusing your audience.

If you want a website, Twitter handle, or other properties to represent your project, make sure you can get the names you want. Ideally, reserve those names now for peace of mind, even if you don't intend to use them right away.

Make sure that your project's name doesn't infringe on any trademarks. A company could ask you to take your project down later, or even take legal action against you. It's simply not worth the risk. You can check the WIPO Global Brand Database for trademark information from multiple national and international sources. If you're at a company, this is one of the things your legal team can help you with.

Finally, search for your project's name on Google. Will people be able to find your project easily? Does anything else show up in the search results that you wouldn't want them to see?

How you write (and code) affects your brand, too!

Throughout the life of your project, you'll do a lot of writing: READMEs, tutorials, community documents, responses to issues, maybe even newsletters and mailing lists. Whether it's official documentation or a casual email, your writing style is part of your project's brand. Think about how you come across to your audience and the tone you want to use when communicating with them.
I tried to be involved in every thread on the mailing list, to model exemplary behavior, to be nice to people, to take their issues seriously, and to try to be helpful overall. After a while, people stuck around not just to ask questions, but to answer them too, and to my great delight, they imitated my style. — @janl on CouchDB, "Sustainable Open Source"

Using warm, inclusive language (such as singular "they", even when referring to one person) can go a long way toward making your project welcoming to new contributors. Stick to simple language, since many of your readers may not be native speakers.

Beyond how you write, your coding style can also become part of your project's brand. Angular and jQuery are two examples of projects with rigorous coding styles and guidelines. It isn't necessary to write a style guide for your project when you're just starting out, and you may find that you enjoy incorporating different coding styles into your project anyway. But you should anticipate how your writing and coding style might attract or discourage different types of people. The earliest stages of your project are your opportunity to set the precedent you want to see.

Your pre-launch checklist

Ready to launch your open source project? Here's a checklist to help. All boxes checked? You're ready to go! Click "publish" (https://help.github.com/articles/making-a-private-repository-public/) and give yourself a pat on the back.
Documentation

Project has a LICENSE file with an open source license
Project has basic documentation (README, CONTRIBUTING, CODE_OF_CONDUCT)
The name is easy to remember, gives some idea of what the project does, and does not conflict with an existing project or infringe on trademarks
The issue queue is up to date, with issues clearly organized and labeled

Code

Project uses consistent code conventions and clear function/method/variable names
The code is clearly commented, documenting intentions and edge cases
There is no sensitive material in the revision history, issues, or pull requests (for example, passwords or other non-public information)

People

If you're an individual:
You've talked to the legal department and/or understand your company's intellectual property and open source policies (if you're employed somewhere)

If you're a company or organization:
You've talked to your legal department
You have a marketing plan to announce and promote the project
Someone is committed to managing community interactions (responding to issues, reviewing and merging pull requests)
At least two people have administrative access to the project

You did it!

Congratulations on open sourcing your first project. No matter the outcome, working in public is a gift to the community. With every commit, comment, and pull request, you're creating opportunities for yourself and others to learn and grow.

Related guides: Finding Users for Your Project (help your open source project grow by putting it in the hands of happy users) · Building Welcoming Communities (build a community that encourages people to use, contribute to, and evangelize your project)
| 2026-01-13T09:29:15
https://kabirk.com/wimmelbench#1-description-accuracy | How well can LLMs see? · Kabir Khandpur · 21 Nov 2024

TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities, reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called a wimmelbild (plural wimmelbilder), which is German for "teeming picture".

A wimmelbild. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13, while Claude 3.6 Sonnet and GPT-4o scored around -0.6.

In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes, but not text, as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that it would score well on Wimmelbench. So why benchmark large language models? It tests how well the most general-purpose, human-like models we have today understand images. I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs. I didn't have easy access to the SAM-based models. And it seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes with a diverse range of objects and activities. They tended to be cartoonish, since the style is so popular in children's books. To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects across the 50 images. I aimed for a range of object sizes and locations.

The annotation tool

In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file.

To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
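For concreteness, the area-ratio statistic behind figure 1 is just the box area divided by the image area. A minimal sketch, assuming boxes are stored as normalized [x1, y1, x2, y2] coordinates in [0, 1] (the post doesn't show the actual annotation schema, so this format is an assumption):

```python
def area_ratio(box):
    """Fraction of the image covered by a normalized [x1, y1, x2, y2] box.

    Assumes coordinates are already scaled to [0, 1], so the box area
    is directly the fraction of the total image area.
    """
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

# A box spanning 10% of the width and 10% of the height covers ~1% of the image.
print(area_ratio([0.45, 0.45, 0.55, 0.55]))
```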
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much the models would hallucinate. I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box format of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of the model descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement: looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code, and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions.
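Because Gemini uses a different coordinate convention than the other two models, comparing predictions against one ground truth requires normalizing to a single format. A small sketch (the function name is mine, not from the benchmark code) converting Gemini's [ymin, xmin, ymax, xmax] on a [0, 1000] grid to [x1, y1, x2, y2] in [0, 1]:

```python
def gemini_to_xyxy(box, scale=1000):
    """Convert a Gemini-style [ymin, xmin, ymax, xmax] box on a
    [0, scale] grid to [x1, y1, x2, y2] with values in [0, 1]."""
    ymin, xmin, ymax, xmax = box
    return [xmin / scale, ymin / scale, xmax / scale, ymax / scale]

print(gemini_to_xyxy([100, 200, 300, 400]))  # [0.2, 0.1, 0.4, 0.3]
```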
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3):

GIoU = |A ∩ B| / |A ∪ B| - |C \ (A ∪ B)| / |C|

where C is the smallest convex hull that encloses both A and B. GIoU scores range from -1 to 1: 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways

- Gemini Pro scores best on both description and bounding box accuracy.
- Models can often describe objects well, but struggle to locate them precisely.
- Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
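For axis-aligned boxes, the convex hull C in the GIoU definition above reduces to the smallest rectangle enclosing both boxes, so the metric takes only a few lines. A sketch (not the benchmark's actual grading code):

```python
# GIoU for axis-aligned boxes in [x1, y1, x2, y2] form. For rectangles,
# the smallest convex hull C enclosing boxes A and B is simply their
# smallest enclosing rectangle.

def giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area |A ∩ B|
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union area |A ∪ B|
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # Enclosing-rectangle area |C|
    hull = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (hull - union) / hull

print(giou([0, 0, 1, 1], [0, 0, 1, 1]))          # perfect overlap -> 1.0
print(giou([0, 0, 0.1, 0.1], [0.9, 0.9, 1, 1]))  # distant boxes -> negative
```

The second term is what pushes scores negative: two small boxes in opposite corners share no area but force a huge enclosing rectangle, which is exactly the behavior the benchmark penalizes.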
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models struggled (and succeeded) at describing objects roughly equally across the range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Gemini showed the strongest correlation here: when it described an object accurately, it tended to draw an accurate bounding box too.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen: if someone can see an object, they can draw a bounding box around it.

Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yields better results.

Perhaps performance could also improve with additional post-training. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
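For reference, the R² figure quoted earlier is just the squared Pearson correlation of a one-variable least-squares fit. A sketch of that calculation on made-up placeholder values (not the benchmark's data), fitting GIoU against log area ratio:

```python
# R^2 of a simple least-squares fit of GIoU against log10(area ratio).
# Both arrays are hypothetical placeholders for illustration only,
# NOT the benchmark's actual results.
import math

area_ratios = [0.001, 0.003, 0.01, 0.03, 0.1]  # hypothetical box/image ratios
gious = [-0.8, -0.4, 0.0, 0.3, 0.6]            # hypothetical GIoU scores

xs = [math.log10(r) for r in area_ratios]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(gious) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, gious))
sxx = sum((x - mean_x) ** 2 for x in xs)
syy = sum((y - mean_y) ** 2 for y in gious)

# For a one-variable linear fit, R^2 equals the squared Pearson correlation.
r2 = sxy ** 2 / (sxx * syy)
```

An R² around 0.5 on the real data means object size explains about half the variance in GIoU, as the post reports.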
Gemini’s stronger performance also suggests that there are post-training steps that can be taken to improve accuracy. (From the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”)

I don’t understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’:

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object.

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability.

Future work

If I had more time, I would:

- Expand the dataset to several hundred objects.
- Test with more LLMs, and also specialized vision models like SAM.
- Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me.
Appendix: example results

Image 1 (click to zoom in)

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
- Claude (grade 0, GIoU -0.89): "There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene."
- Gemini (grade 3, GIoU 0.52): "A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill."
- GPT-4o (grade 0, GIoU -0.98): "The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals."

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
- Claude (grade 2, GIoU -0.51): "There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters."
- Gemini (grade 3, GIoU 0.70): "An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone."
- GPT-4o (grade 3, GIoU 0.31): "The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting."

Image 2

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
- Claude (grade 0, GIoU -0.94): "In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion."
- Gemini (grade 1, GIoU 0.04): "A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling."
- GPT-4o (grade 0, GIoU -0.98): "The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden."

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand.
- Claude (grade 0, GIoU -0.94): "There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene."
- Gemini (grade 0, GIoU -0.99): "A bowling ball can be seen lying on the floor amidst the chaotic scene. It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene."
- GPT-4o (grade 0, GIoU -0.85): "The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos."

| 2026-01-13T09:29:15 |
https://www.linkedin.com/company/23andme?trk=products_details_guest_organization_name | 23andMe | LinkedIn

23andMe
Non-profit Organizations · Palo Alto, California · 81,297 followers
A nonprofit advancing health through the power of genetics and crowdsourced research.

About us
23andMe Research Institute is a nonprofit medical research organization that enables people everywhere to access their genetic information, learn about themselves and participate in the world's largest crowdsourced research initiative. The Institute aims to be the world's most significant contributor to scientific advancement, uniting people with the common goal of improving health and deepening our understanding of DNA — the code of life.

Website: https://www.23andme.com
Industry: Non-profit Organizations
Company size: 201-500 employees
Headquarters: Palo Alto, California, US
Type: Nonprofit
Specialties: genetic research, consumer genetics, direct-to-consumer genetic testing, telehealth, healthcare, wellness, nonprofit, research, and Research Institute

Updates

1mo · Please join us in welcoming John McLeod to the team as our new General Counsel & Chief Risk Officer. John's experience and leadership will be invaluable as we continue to grow and shape our nonprofit path forward. Read more here: https://lnkd.in/gsAN_CaE

1mo · A landmark expansion of our genetic ancestry reports and features now includes more than 250 high-resolution genetic groups across Africa.
This significant update provides members with African heritage the most detailed and scientifically rigorous look at their roots currently available. Made possible by the willing participation of thousands of research-consented 23andMe Research Institute members with documented ancestry in Africa and the diaspora. Their participation enabled the development of a more inclusive and detailed map of human history for everyone. The expanded groups, which span Eastern, Middle, Southern, and Western Africa, include:
- 57 Nigerian genetic groups
- 44 Angolan & Congolese genetic groups
- 40 Ethiopian & Eritrean genetic groups
- 32 Ghanaian, Liberian & Sierra Leonean genetic groups
- A total of 28 Somali, 22 Senegambian & Guinean, and 9 Sudanese genetic groups, among others.
You can learn more about this latest update and the science behind it here: https://23and.me/4pdkRiV. The majority of Black and African American 23andMe Research Institute members on the latest genotyping chip will receive results to one or more of these new groups. While we are excited about this significant step forward, we remain committed to expanding our reference panels and using our industry-leading science to bring this level of detail to all members.

1mo · November is Lung Cancer Awareness Month. On yesterday's Lost Women of Science podcast, our Founder and CEO Anne Wojcicki and her sister Janet opened up about their sister Susan and her battle with lung cancer. You can listen here: https://lnkd.in/gGi-B6xp Lung cancer is consistently one of the world's deadliest cancers, responsible for more cancer deaths globally each year than any other type. This is due in part to late detection and the aggressive nature of many lung cancers. In July 2024, we collaborated with Troper Wojcicki Philanthropies and 20+ patient and research advocacy organizations to launch the Lung Cancer Genetics Study.
Our goal is to understand more about how genetics influence lung cancer so that we can work to improve detection, risk reduction, and care. More on the study here: https://lnkd.in/gr5i2h3U

2mo · We are delighted to welcome Kael Reicin to the 23andMe Research Institute as our new CFO. Kael brings financial and operational nonprofit expertise as we continue to evolve into a nonprofit medical research organization. His financial insight and deep understanding of mission-driven stewardship will help us translate our ideals into sustainable action. Read more here: https://lnkd.in/gqAGFKqf

2mo · Welcome Peggy Bodin, Fred Cate, and Jonathan Zittrain to the 23andMe Research Institute Privacy Advisory Board. These leading experts in health data privacy, data ethics, and consumer advocacy will provide strategic guidance on the protection of member information and the organization's responsible stewardship of personal data. Read more here: https://lnkd.in/gqpfW6yX

2mo · We are proud to welcome Brad Margus and Stephen Quake, D.Phil. to the 23andMe Research Institute Board of Directors. Their deep experience in genetics, biomedical innovation, and translational science will help advance discoveries that improve human health for all. Honored to have their guidance as we shape the future of human genetics and research. 🧬 Read more here: https://lnkd.in/gjHufjHk

3mo · We are proud to share that 23andMe has been named to TIME's Best Inventions Hall of Fame! This milestone is made possible by the trust of our 13+ million customers and their commitment to advancing genetics and science.
🧬 🌎 A big thank you to TIME for this honor, and to our community for shaping the future of genetics and research. (TIME Best Inventions Hall of Fame, time.com)

3mo · One of the most depressing aspects of depression is that treatments often don't work, at least initially. A staggering 60% of patients with major depressive disorder don't find relief from their first medication, and one-third eventually face what's called Treatment-Resistant Depression (TRD). Using advanced machine learning techniques, researchers at Johnson & Johnson and the 23andMe Research Institute developed a predictive model that identifies individuals most likely to have Treatment-Resistant Depression, without relying on lab tests or genetic data. The research, published in the journal Nature, used only self-reported information with eight key predictive factors, including residual symptoms, the duration of depression episodes, stress levels, and suicidal thoughts. Using this predictive model, researchers were able to accurately predict those at the highest risk for TRD 78% of the time.
Why this matters:
- Clinicians could pivot faster to other treatments when one fails
- Care could be more personalized and effective
- Health systems could allocate resources more efficiently
For millions living with depression, this work represents a step toward precision mental health care, and a shorter path to relief. If you or someone you know is struggling, please reach out to a mental health professional. In the U.S., the Suicide & Crisis Lifeline is available by dialing 988.
https://lnkd.in/ejMWe5Px (Predicting Treatment-Resistant Depression Using the 23andMe Survey Data, blog.23andme.com)

3mo · As we start Breast Cancer Awareness Month, scientists at the 23andMe Research Institute and Dana-Farber Cancer Institute published a study with a surprising finding: breast cancer patients with higher genetic risk scores actually showed longer average survival times compared to those with lower risk scores. Why this happens isn't yet clear. One possibility is that inherited breast cancers driven by genetic risk may develop more slowly than cancers triggered by environmental factors, known as somatic mutations. This study builds on nearly 15 years of 23andMe's breast cancer research and collaborations, including:
- Discovering genetic factors linked to breast size and cancer risk
- Understanding how people respond to learning their genetic risk
- Revealing gaps in current screening criteria that miss many at-risk individuals
- Receiving the first FDA authorization for a direct-to-consumer BRCA1/BRCA2 Genetic Health Risk report, later expanded to include 44 variants and full exome sequencing across 33 cancer-related genes
Breast Cancer Awareness Month is about highlighting the impact of early detection, access, and research. Findings like this one underscore how genetic science can continue to uncover new dimensions of risk, survival, and treatment. https://lnkd.in/eXSqraw5 (Study Reveals an Unexpected Link Between Genetic Cancer Risk Scores and Survival, blog.23andme.com)

3mo · A massive new genetic study of dyslexia using data from the 23andMe Research Institute uncovered dozens of previously unknown genetic factors linked to the condition, which affects between 5-10% of school-age children. Many of the genetic associations play a role in early brain development and neuronal pathways.
The study, the largest of its kind, advances our understanding of how the brain is wired for reading. https://lnkd.in/eY5zBaQY #Dyslexia #Genetics #23andMe (New Study Expands Our Understanding of the Genetics of Dyslexia, blog.23andme.com)

Funding: 14 total rounds; last round Post IPO equity, Jul 17, 2021, US$ 250.0M (per Crunchbase). | 2026-01-13T09:29:15 |
https://lnkd.in/gZgXzpHr?trk=organization_guest_main-feed-card-text | LinkedIn external-link notice: "This link will take you to a page that's not on LinkedIn." Target: https://open.spotify.com/show/2f9v6MaVkzI5qetfJCWSKP?si=e48fc3a5826c4ca5 | 2026-01-13T09:29:15 |
https://t.me/s/FuseAnnouncements | Fuse Announcements – Telegram

Fuse Announcements (@FuseAnnouncements) · 14.8K subscribers · 1.25K photos · 70 videos · 8 files · 1.28K links
Official announcement channel for news and updates @ fuse.io
Chat: @fuseio · Support: @FuseSupportBot · Twitter: @Fuse_network · Web: fuse.io · Blog: news.fuse.io · Docs: docs.fuse.io · DEX: https://app.voltage.finance

The Scape on Freedom World is celebrating 1 year of success! Powered inside the Freedom World app, The Scape has become one of the main user-onboarding entry points into the Fuse ecosystem.
👾 50,000+ players onboarded
🎯 73k missions completed
💎 4.6M rewards claimed
🔹 363k $FDS earned
👹 467k Shard Gobblers captured
Year 2 is just beginning with new updates in development: Planet NFTs to earn $FDS passively, co-op mode for team battles, and the Cube for in-game item crafting + upgrades.
👉 Try The Scape today by downloading the Freedom World app (Source)

📊 Voltage Voting Update: Epoch Closed, Rewards Incoming
The latest voting epoch has officially ended. Here’s how 6,156,000 $VOLT rewards will be distributed across the pools you voted for:
🔸 WFUSE–CEUS = 4,106,710.46 $VOLT
🔸 soUSD–USDC.e = 1,215,952.20 $VOLT
🔸 WFUSE–VOLT = 745,663.73 $VOLT
🔸 VOLT–soUSD = 66,201.15 $VOLT
🔸 WFUSE–USDT = 12,586.17 $VOLT
🔸 WFUSE–$FOO = 4,642.56 $VOLT
🔸 WFUSE–fUSDV3 = 4,219.59 $VOLT
🔸 USDT–USDC.e = 24.13 $VOLT
🗳 New Voting Epoch Begins Soon
Voting power will reset shortly, followed by the launch of a fresh epoch. We’ll notify once voting is live so all veVOLT holders can participate. Coming next this week: $VOLT token burn and veVOLT staking rewards distribution. (Source)

🔥 29,150,690 $VOLT burned this week. We continue to strengthen $VOLT’s deflationary model by:
🔸 Burning collected swap fees.
🔸 Burning $100 of matched treasury reserves for every $1 of fees collected.
All part of the community-approved plan to remove 1.72B $VOLT from the fixed supply of 8.72B. So far, 365,516,836 $VOLT, or 21.25% of the total burn target, has been permanently removed from circulation.
1️⃣ Fees burn: https://explorer.fuse.io/tx/0x8fb5d0ee33f249ffb42e60b36627e5dbe09f53f17276f6451a80f9eefcf80188
2️⃣ Treasury burn: https://explorer.fuse.io/tx/0x5e4a348e1601ea66e28a40008ea25d9d1e349a4cfb0aaf321574ebb24d9a34fd
🔥 More burns coming up in the next 2 weeks. (Source)

Money used to sit in the bank. Now it works while you sleep, and you can spend it anywhere. We are introducing the first onchain neobank that you can actually share with your non-crypto friends and family.
🔸 Earn up to 12% automatically.
🔸 5% bonus on bank and credit card deposits.
🔸 Spend with Visa (Apple Pay & Google Pay).
🔸 3% cashback on every purchase.
🔸 Gasless, frictionless, self-custodial.
🔸 Available across US, LATAM & Africa.
Welcome to the new era of finance.
👉 Get started: solid.xyz (Source)

Forwarded from Solid.xyz Announcements:
🚀 Solid launches Solidarity, our new creator & ambassador program. If you love breaking down products, teaching others, and shaping the future of onchain finance, this is your moment. Earn weekly rewards. Grow with the ecosystem. Be early.
👉 Join here: https://taskon.xyz/Solidarity
Read more: www.solid.xyz/post/solid-launches-solidarity
Website | Twitter | Telegram | Get $50 bonus

Forwarded from Solid.xyz Announcements:
🚀 Solid Points Season 1 is LIVE. Start earning points automatically for every $ you deposit on Solid. No forms. No complexities. Just rewards.
💥 5× boost is ON. The earlier you deposit, the more you earn.
Start stacking ➡️ https://app.solid.xyz/points, or read more.

Forwarded from Solid.xyz Announcements:
🚀 Solid Points Season 1 is LIVE. Earn points automatically on every $1 you hold, and this week comes with a 5× boost.
🔵 What Are Solid Points? Points automatically accrue when you:
• hold soUSD
• earn yield
• refer friends
• stay active
• (soon) spend with the Solid Card
The key idea: you earn yield AND Points at the same time. Every hour.
🔥 Season 1: 5× Points Until Dec 24. Season 1 is where early adopters get the biggest boost. Until Dec 24, $1 earns like $5 and $1,000 earns like $5,000. The earlier you deposit, the better your position compounds over time.
🟣 Why Points Matter. Holding Points positions you for:
• future reward distributions
• governance influence
• exclusive perks in future Seasons
• leaderboard visibility
• long-term compounding benefits
Early users get the strongest upside.
👥 Referrals = The Fastest Way to Grow. Invite someone → they earn Points → you earn 10% of all Points they generate. And it stacks when your referees invite others. One strong friend can outperform your entire balance. This is how leaderboard leaders are made.
🎯 How to Start (Right Now)
> Create your Solid account
> Deposit USDT/USDC → convert to soUSD
> Earn Points automatically (5× until Dec 24)
> Invite friends
> Track your leaderboard rank in real time
Every hour counts.
🏁 A New Era for Solid Starts Today. Whether you're a passive saver, a DeFi power user, or someone who loves sharing good tools, the earlier you join, the stronger your position becomes. Season 1 is live. The 5× multiplier is live. Referrals are open. The leaderboard awaits.
👉 Start earning today: https://app.solid.xyz/points
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements (forwarded from Solid.xyz Announcements)

💸 YOUR NETWORK IS YOUR NET WORTH. Literally. Invite your friends to Solid and earn 10% of their daily points, plus 1% cashback on all their purchases once they activate their Solid Card 💳 Grow together, earn together. Start inviting: app.solid.xyz/points
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements (forwarded from Solid.xyz Announcements)

2,000 users. Solid just crossed 2,000 users: people who chose a better way to hold, grow, and spend their money.
> 2,000 accounts earning real yield by default.
> 2,000 users proving that on-chain finance can be simple, non-custodial, and actually usable.
> 2,000 early believers helping shape what an onchain bank for everyday life should look like.
If you're here early, thank you for building this with us. If you're not yet, now's a very good time.

Fuse Announcements (forwarded from Solid.xyz Announcements)

This is your friendly reminder that it's 5× Week ⚡️ Earn 5× Solid Points on every dollar you hold, but not for long. Ends soon. Don't miss the boost. 👉 Deposit now
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements (forwarded from Solid.xyz Announcements)

Day 1 of the Layer3 campaign is already showing strong momentum. As promised, we continue to deliver real opportunities for the community ⬇️
> 326 new Solid users joined in the first 24 hours
> Record-high daily deposits and TVL
> 50,000 $FUSE up for grabs
You can still join to get a share of the reward: only 43 / 500 full completions so far.
→ Join the quest now
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements (forwarded from Solid.xyz Announcements)

Solid Points has been live for over a week. People are already climbing the leaderboard. Deposits are earning 5× points. You're still early. Deposit now and start stacking points.
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements (forwarded from Solid.xyz Announcements)

5,000 strong. 💚 Thank you to everyone who chose to stay with us while Solid was still being built, tested, questioned, and refined. This milestone is about a community that believes finance should be simpler, fairer, and actually work for real life.
Website | Twitter | Telegram | Get $50 bonus

Fuse Announcements

🔥 $VOLT supply keeps shrinking. This week we burned 32,089,649 $VOLT tokens:
🔸 317,719 $VOLT burned from protocol fees.
🔸 31,771,930 $VOLT burned from treasury.
All part of the community-driven plan to remove 1.72B $VOLT from the fixed supply of 8.72B. 397,606,485 $VOLT burned so far. That's 23.12% of the 1.72B burn target already completed.
1️⃣ Fees burn: https://explorer.fuse.io/tx/0x6376f7c4ea421a1fad4b286b0081135b06dbea82a9dff5bc4570ac993a1fe513
2️⃣ Treasury burn: https://explorer.fuse.io/tx/0x3baf5db6635fd06ae643cb3c669b2e0cdb3240d0459857b70c42fd2a9f21266b
Source

Fuse Announcements

🗳 Voting Epoch Rewards Distributed & New Epoch Live

The previous voting epoch has concluded, and 6,156,000 $VOLT in rewards have now been distributed to the pools you voted for:
🔸 $wstETH – $WETH = 1,800,672 $VOLT (29.25%)
🔸 $WFUSE – $CEUS = 1,530,148 $VOLT (24.86%)
🔸 $WFUSE – $VOLT = 908,064 $VOLT (14.75%)
🔸 $VOLT – $soUSD = 690,623 $VOLT (11.22%)
🔸 $soUSD – $USDCe = 601,389 $VOLT (9.77%)
🔸 $WFUSE – $FOO = 470,360 $VOLT (7.64%)
🔸 $CEUS – $FOO = 124,558 $VOLT (2.02%)
🔸 $WFUSE – $fUSDV3 = 15,049 $VOLT (0.24%)
🔸 $WFUSE – $USDT = 15,049 $VOLT (0.24%)

A new 2-week voting epoch has started. veVOLT holders can now vote to decide where the next round of $VOLT rewards will be allocated.
📅 Voting ends: December 31, 2025
Stake $VOLT: https://app.voltage.finance/stake/vevolt
Vote here: https://app.voltage.finance/voting/gauges
Source

Fuse Announcements

🎄 Merry Christmas and a happy upcoming year 2026! Thank you to our community and partners for an incredible year. Wishing you a joyful holiday season and a strong start to the year ahead. Source

Fuse Announcements

Building a community around your brand or business? 💡 Keeping your users engaged over time is a challenge. Freedom World helps brands and creators turn engagement into action by giving them tools to build communities, launch missions, gamify participation, and reward users for real interactions that drive repeat visits. 1,000+ communities. Hundreds of thousands of users. One shared experience. Built on Fuse.
👉 Get started on Freedom World: https://info.freedom.world Source

Fuse Announcements

$VOLT supply reduction continues. 🔥 This week, 30,935,926 $VOLT was permanently burned:
🔸 306,296 $VOLT from protocol fees.
🔸 30,629,630 $VOLT from matched treasury reserves.
All part of the community-approved plan to remove 1.72B $VOLT from the fixed 8.72B total supply.
📉 Burn progress update: 428,542,411 $VOLT burned so far, or 24.92% of the total 1.72B $VOLT burn target already completed.
1️⃣ Fees burn: https://explorer.fuse.io/tx/0x297fd5ae8778740a844ab1a8c71fd65987f288bbe0d6230872cff1c810aee32a
2️⃣ Treasury burn: https://explorer.fuse.io/tx/0xcd02e87dd24aed0626eabef0a5a95032cb4d6a2583f813b22c40a5011c29b8dd
Source
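The burn figures quoted in the announcements above are internally consistent, which is easy to verify with a few lines of arithmetic (a quick sketch using only the numbers stated in the posts):

```python
# Verify this week's $VOLT burn figures from the announcement.
fees_burn = 306_296          # burned from protocol fees
treasury_burn = 30_629_630   # burned from matched treasury reserves ($100 per $1 fee)
weekly_total = fees_burn + treasury_burn

cumulative = 428_542_411     # total $VOLT burned so far
target = 1_720_000_000       # community-approved burn target (1.72B)

print(weekly_total)                         # 30935926, as announced
print(round(100 * cumulative / target, 2))  # 24.92 (% of target completed)
```

The same check reproduces the 23.12% figure from the prior week (397,606,485 / 1.72B).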
https://kabirk.com/wimmelbench

How well can LLMs see? · Kabir Khandpur

21 Nov 2024

TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called a wimmelbild (plural wimmelbilder), which is German for "teeming picture".

A wimmelbild. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13, and Claude 3.6 Sonnet & GPT-4o scoring around -0.5.

In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes — but not text — as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that LISA would score well on Wimmelbench. So why benchmark large language models?

- It tests how well the most general-purpose, human-like models we have today understand images.
- I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs.
- I didn't have easy access to the SAM-based models.
- It seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children's books.

To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects.

The annotation tool

In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
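The area-ratio statistic behind figure 1 is simple to compute from the saved annotations. A minimal sketch (the (x1, y1, x2, y2)-plus-image-size format here is an illustrative assumption, not the benchmark's actual JSON schema):

```python
# Compute a bounding box's area as a fraction of its image's area.
# Box format (x1, y1, x2, y2) in pixels is assumed for illustration.

def area_ratio(box, image_w, image_h):
    x1, y1, x2, y2 = box
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    return box_area / (image_w * image_h)

# A 100x120 px box in a 2000x1500 px image covers 0.4% of it,
# squarely in the most common 0.1%-1% band from figure 1.
ratio = area_ratio((50, 40, 150, 160), 2000, 1500)
print(f"{100 * ratio:.2f}%")  # 0.40%
```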
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.

I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box format of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions.
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model's Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3):

GIoU = |A ∩ B| / |A ∪ B| − |C \ (A ∪ B)| / |C|

where C is the smallest convex hull that encloses both A and B. GIoU scores range from -1 to 1, where 1.0 is perfect overlap, 0.0 means no overlap, and negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren't there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways:
- Gemini Pro scores best on both description and bounding box accuracy.
- Models can often describe objects well, but struggle to locate them precisely.
- Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) roughly equally at describing objects across the range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn't happen — if someone could see an object, they could draw a bounding box around it.

Maybe this is a prompt issue? I wouldn't be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn't a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
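The grid-overlay idea can be made concrete: ask the model for a cell label like "C4", then map it back to a normalized box. A sketch, assuming a simple letter-row/number-column labeling (this scheme is illustrative, not taken from the Reddit post):

```python
import string

def cell_to_box(label, rows=8, cols=8):
    """Map a grid cell label like 'C4' to a normalized (x1, y1, x2, y2) box.

    Rows are lettered A.. from the top, columns numbered 1.. from the left.
    The labeling scheme itself is an illustrative assumption.
    """
    row = string.ascii_uppercase.index(label[0].upper())
    col = int(label[1:]) - 1
    w, h = 1 / cols, 1 / rows
    return (col * w, row * h, (col + 1) * w, (row + 1) * h)

print(cell_to_box("A1"))  # (0.0, 0.0, 0.125, 0.125): the top-left cell
```

The trade-off is resolution: an 8×8 grid caps localization precision at one cell (~1.6% of the image), which is coarser than many of the benchmark's ground-truth boxes.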
Gemini's stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: "For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…").

I don't understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the 'couple kissing':

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it's roughly the same size as the previous object.

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don't understand what causes the difference in ability.

Future work

If I had more time, I would:
- Expand the dataset to several hundred objects.
- Test with more LLMs and also specialized vision models like SAM.
- Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs' language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I'd love to hear your thoughts — feel free to email me.
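For reference, the GIoU metric used throughout is straightforward to compute for axis-aligned boxes, where the enclosing convex hull C reduces to the smallest enclosing rectangle. A minimal sketch (boxes as (x1, y1, x2, y2)):

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2).

    Returns IoU minus the fraction of the smallest enclosing box not
    covered by the union; the result lies in (-1, 1].
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b

    # Intersection area
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter

    # Smallest enclosing box plays the role of the convex hull C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c = cw * ch

    return inter / union - (c - union) / c

print(giou((0, 0, 1, 1), (0, 0, 1, 1)))  # 1.0: identical boxes
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))  # negative: disjoint boxes
```

Unlike plain IoU, which is 0 for any pair of non-overlapping boxes, the enclosing-box penalty makes GIoU more negative the farther apart the boxes are, which is why the worst predictions in the results score near -1.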
Appendix: example results

Example image 1 (click to zoom in)

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.

- Claude (grade 0, GIoU -0.89): "There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the 'EAT ME' café in the scene."
- Gemini (grade 3, GIoU 0.52): "A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill."
- GPT-4o (grade 0, GIoU -0.98): "The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals."

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.

- Claude (grade 2, GIoU -0.51): "There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that's set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters."
- Gemini (grade 3, GIoU 0.70): "An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image's center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant's right, a light gray cow seems to be giving a presentation while holding a microphone."
- GPT-4o (grade 3, GIoU 0.31): "The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting."

Example image 2 (click to zoom in)

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.

- Claude (grade 0, GIoU -0.94): "In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He's wearing what appears to be blue clothing and is positioned near the green paneled wall. He's surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion."
- Gemini (grade 1, GIoU 0.04): "A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling."
- GPT-4o (grade 0, GIoU -0.98): "The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden."

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.

- Claude (grade 0, GIoU -0.94): "There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene."
- Gemini (grade 0, GIoU -0.99): "A bowling ball can be seen lying on the floor amidst the chaotic scene. It's located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It's spatially close to a scattered pile of what appear to be pins, suggesting that it's part of the chaotic activity in the depicted scene."
- GPT-4o (grade 0, GIoU -0.85): "The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos."
https://kabirk.com/wimmelbench#results | How well can LLMs see? · Kabir Khandpur HOME EMAIL --> --> How well can LLMs see? 21 Nov 2024 --> TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items. Introduction I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where’s Waldo , except that you’re not hunting for a particular character. This type of illustration is called wimmelbilder , which is German for “teeming picture”. A wimmelbilder. My favourite bit is man playing the clarinet in his balcony to an audience of rooftop cats. While I’m convinced that LLMs can really “read” language, I’m less sure that they “see” images. To measure sight quantitatively, I created a small benchmark that I’m calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack . In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it. The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time . This increases to 73% if ‘mostly correct’ descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU Generalized Intersection over Union . 
Typically a GIoU score of 0.5 and above is considered good. of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring -0.5. In this post, I’ll talk about constructing the benchmark , go into more detail on the results , and offer some analysis . Aside: why not SAM? Segment Anything Model (SAM) is Meta’s foundation model for promptable segmentation in images and videos. It accepts masks, points, bounding boxes — but not text — as prompts. SAM segmenting an image SAM-based models like LISA do allow text inputs like “Segment the red car in the image”. I expect that it would score well on Wimmelbench. So why benchmark large language models? It tests how well the most general purpose, human-like models we have today understand images. I’m interested in “easy” unsaturated benchmarks that reveal a real shortcoming in current LLMs. I didn’t have easy access to the SAM-based models. It seemed like a fun project. Benchmark construction I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children’s books. To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects. The annotation tool In each description, I included the object’s location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box’s area as a percentage of the total image area (figure 1). Figure 1: Distribution of (ground truth bounding box area / image area) The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range. 
Ideally, I’d have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation. Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o Specifically claude-3-5-sonnet-20241022 , gemini-1.5-pro-002 , and gpt-4o-2024-08-06 . to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate. I used a slightly different prompt All prompts are available on GitHub . for Gemini because it’s been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] from the range [ 0 , 1000 ] [0, 1000] [ 0 , 1000 ] . I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [ 0 , 1 ] [0, 1] [ 0 , 1 ] . I then asked Gemini 1.5 Pro to grade the accuracy of the bounding box predictions on a scale of 0-3, where: 0 : Completely incorrect or missing critical details. 1 : Partially correct but missing many important details or containing significant inaccuracies. 2 : Majorly correct with some inaccuracies or missing details. 3 : Mostly or fully correct, capturing the majority of key details and spatial relationships accurately. I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I’d quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code , grading code and results in the GitHub repository. Results 1. Description accuracy All models scored a grade of 3 for about 30% of object descriptions. 
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2). Figure 2: Grade 2. Bounding box precision The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU) G I o U = ∣ A ∩ B ∣ ∣ A ∪ B ∣ − ∣ C \ ( A ∪ B ) ∣ ∣ C ∣ GIoU = \frac{|A\cap B|}{|A\cup B|} - \frac{|C\backslash(A\cup B)|}{|C|} G I o U = ∣ A ∪ B ∣ ∣ A ∩ B ∣ − ∣ C ∣ ∣ C \ ( A ∪ B ) ∣ , where C C C is the smallest convex hull that encloses both A A A and B B B . between the predicted and ground truth bounding boxes (figure 3). GIoU scores range from -1 to 1, where 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks. Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively. Figure 3: Distribution of GIoU scores 3. Hallucination rate To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars. Figure 4: Hallucination rate Analysis Takeaways Gemini Pro scores best on both description and bounding box accuracy. Models can often describe objects well, but struggle to locate them precisely. Object size strongly influences bounding box accuracy, but not description quality. Size sometimes matters Bounding box accuracy showed a strong correlation with object size (figure 5). R 2 R^2 R 2 was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size. 
Figure 5: GIoU vs area ratio Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) roughly equally at describing objects across a range of sizes. Figure 6: Grade vs area ratio Grade vs bounding box accuracy Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box. Figure 7: Grade vs GIoU This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said: "There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur." But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen — if someone can see an object, they can draw a bounding box around it. Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yields better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
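The $R^2$ figures quoted here are coefficients of determination for a least-squares line fit: the share of variance in one variable (e.g. GIoU) explained by another (e.g. object area ratio). A minimal sketch of that computation, with a function name of my own choosing rather than anything from the benchmark code:

```python
def r_squared(xs, ys):
    """Coefficient of determination R^2 for a least-squares line fit
    of ys on xs: the fraction of variance in ys explained by xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)          # variance of xs (unscaled)
    syy = sum((y - my) ** 2 for y in ys)          # variance of ys (unscaled)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance
    # For simple linear regression, R^2 equals the squared correlation.
    return sxy ** 2 / (sxx * syy)
```

An $R^2$ of ~0.5, as reported for bounding box accuracy vs object size, means roughly half the variance is explained by size; the much lower $R^2$ for grade vs size is what the text describes as models struggling (and succeeding) across a range of sizes.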
Gemini’s stronger performance also suggests that there are post-training steps (from the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”) that can be taken to improve accuracy. I don’t understand the description accuracy variance Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’: "In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife." On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object. "A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks." The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability. Future work If I had more time… Expand the dataset to several hundred objects. Test with more LLMs and also specialized vision models like SAM. Improve prompting strategies for bounding box prediction. Conclusion Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me.
Appendix: example results Claude Gemini GPT-4o Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene. 0 -0.89 Elephant There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters. 2 -0.51 Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill. 3 0.52 Elephant There is an elephant standing on two legs near the bottom-middle of the image.
It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone. 3 0.70 Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals. 0 -0.98 Elephant There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting. 3 0.31 Claude Gemini GPT-4o Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion. 0 -0.94 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene. 0 -0.94 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling. 1 0.04 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. A bowling ball can be seen lying on the floor amidst the chaotic scene.
It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene. 0 -0.99 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden. 0 -0.98 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos. 0 -0.85 | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/cloudflare-cdn/ | Cloudflare CDN | LinkedIn Cloudflare CDN Content Delivery Network (CDN) Software by Cloudflare About A fast, agile, and secure global network Cloudflare offers ultra-fast static and dynamic content delivery over our global edge network. Exercise precise control over how your content is cached, reduce bandwidth costs and take advantage of built-in unmetered DDoS protection. What is caching? Caching is the process of storing copies of files in a temporary location for quick and easy retrieval. Caching is often used to speed up web site performance.
Featured customers of Cloudflare CDN BLOX Digital Technology, Information and Internet 3,045 followers Zerodha Financial Services 580,035 followers Chrono24 Retail Luxury Goods and Jewelry 15,639 followers Blackmores Group Wellness and Fitness Services 61,966 followers INSEAD Higher Education 739,009 followers Zendesk Software Development 648,235 followers The University of Tokyo Research Services 157,481 followers Poki Computer Games 6,449 followers Similar products Google Cloud CDN Content Delivery Network (CDN) Software Aura Managed CDN Content Delivery Network (CDN) Software Fastly CDN Content Delivery Network (CDN) Software Imperva Secure CDN Content Delivery Network (CDN) Software Fastly Streaming Media Delivery Content Delivery Network (CDN) Software CDNetworks CDN Content Delivery Network (CDN) Software Cloudflare products Argo Smart Routing Network Management Software Cloudflare DDoS Protection DDoS Protection Software Cloudflare DNS Managed DNS Services Software Cloudflare Load Balancing Load Balancing Software Cloudflare Registrar Managed DNS Services Software Cloudflare Spectrum DDoS Protection Software Cloudflare SSL / TLS SSL Certificates Software Cloudflare WAF Web Application Firewalls (WAF) Cloudflare Web Analytics Digital Analytics Software Cloudflare Workers Server Virtualization Software | 2026-01-13T09:29:15 |
https://opensource.guide/es/starting-a-project/ | Starting an Open Source Project | Open Source Guides Starting an Open Source Project Learn more about the world of open source and get ready to launch your own project. Table of contents The how and why of open source Should I launch my own open source project? Launching your own open source project Naming and branding your project Your pre-launch checklist The how and why of open source Are you thinking about starting an open source project? Congratulations! The world appreciates your contribution. Let's talk about what an open source project is and why people run them. What does "open source" mean? When a project is open source, it means that anybody can view, modify, use, or distribute your project for any purpose. These permissions are enforced through an open source license. Open source is powerful because it lowers the barriers to adoption, letting ideas spread quickly. To understand how it works, imagine a friend who hosts a potluck and invites you, and you bring a cake. Everybody tries the cake. (use it) The cake is a hit! They ask you for the recipe, which you share. (study/view it) One friend, Pedro, is a chef and suggests using less sugar. (modify it) Another friend, Juan, asks for permission to use it at a dinner he is hosting next week. (distribute it) By comparison, a closed source process would be going to a restaurant and ordering a slice of cake.
To do so, you would have to pay for the slice, and the restaurant would most likely not give you its recipe. If you decided to copy their cake and sell it under another name, the restaurant could take legal action against you. Why do people use open source? One of the most rewarding experiences I get out of using and collaborating on open source comes from the relationships I build with the developers facing the same problems I am. — @kentcdodds, "How getting into Open Source has been awesome for me" There are many reasons why a person or organization would want to get involved in an open source project. Some examples are: Collaboration: Open source projects can accept changes from anybody in the world. Exercism, for example, is a programming exercise platform with over 350 contributors. Adoption and remixing: Open source projects can be used by anyone for nearly any purpose. People can even use them to build other things. WordPress, for example, started as a fork of an existing project called b2. Transparency: Anyone can inspect an open source project for errors or inconsistencies. Transparency matters to governments like Bulgaria or the United States, to regulated industries like banking or healthcare, and to security software like Let's Encrypt. Open source isn't just for software. You can open source anything, from datasets to books. Check out GitHub Explore for other examples. Does open source mean free of charge? A common source of confusion is the idea that open source costs no money, i.e. that it is free. However, that is a by-product of open source's overall value.
This is because an open source license requires that anyone be able to use, modify, and share your project for nearly any purpose, and the projects themselves tend to be free of charge. If the project cost money to use, anyone could legally make a copy and use the free version instead. As a result, most open source projects are free, but "free of charge" is not part of the definition of open source. There are ways to charge for open source projects indirectly, through dual licensing or limited features, while still complying with the official definition of open source. Should I launch my own open source project? The short answer is yes, because no matter the outcome, launching your own project is a good way to learn how open source works. If you have never open sourced a project before, you might be worried about what people will say, or whether anyone will say anything at all. If that sounds like you, you're not alone. Open source works like any other creative activity, whether it's writing or painting. It can feel scary to share your work with the world, but the only way to get better is to practice (even if you don't have an audience). If you're not convinced yet, take a moment to think about what your goals might be. Defining your goals Goals can help you figure out what to keep working on, what to say no to, and where to turn for help. Start by asking yourself: why am I open sourcing this project? There is no one right answer to this question. You may have multiple goals for a single project, or different projects with different goals. If your only goal is to show the world your work, you may not need or even want contributions, and you can say so in your README.
On the other hand, if you do want help, you'll invest time in clear documentation and in making newcomers feel welcome. At some point I created a custom UIAlertView that I was using… and I decided to open source it. So I modified it to be more dynamic and uploaded it to GitHub. I also wrote my first documentation explaining to other developers how to use it in their projects. Probably nobody ever used it because it was a very simple project, but I felt good about my contribution. — @mavris, "Self-taught Software Developers: Why Open Source is important to us" As your project grows, your community may need more from you than just code. That is, you'll need to respond to issues, review code, and handle the other important tasks that come with a project of this kind. How much time you spend on non-coding tasks will depend on the size and scope of your projects; as a maintainer, you should be prepared to handle them yourself or to find someone who can help you. If you're part of a company open sourcing a project, make sure the project has the internal resources it needs to thrive. You'll need to identify who is responsible for maintaining the project after launch, and define how you'll share those tasks with your community. If you need a dedicated budget or staff for promoting, operating, and maintaining the project, start those conversations early. As you open source your project, it's important to make sure your management processes take into account the contributions and abilities of the community around it. Don't be afraid to involve contributors who aren't employed at your company in key aspects of the project, especially if they're frequent contributors. — @captainsafia, "So you wanna open source a project, eh?" Contributing to other projects
If your goal is to learn how to collaborate with others or to understand how open source works, consider contributing to an existing project. Start with projects you already use and love. Contributing to a project can be as simple as fixing typos or updating documentation. If you don't know how to start contributing, check out this guide on how to contribute. Launching your own open source project There is no perfect time to open source your work. You can open source an idea, a work in progress, or a project that has been closed source for years. Generally speaking, you should open source your project when you feel comfortable having others view, and give feedback on, your work. No matter which stage you decide to open source your project at, every project should include the following documentation: Open source license README Contributing guidelines Code of conduct As a maintainer, these components will help you communicate expectations, manage contributions, and protect everyone's legal rights (including your own). They significantly increase your chances of having a positive experience. If your project is on GitHub, putting these files in your root directory with the recommended filenames will help GitHub recognize them automatically and surface them to your readers. Choosing a license An open source license guarantees that others can use, copy, modify, and contribute back to your project without repercussions. It also helps protect you from sticky legal situations. You should choose a license when you launch your project! Legal work is no fun. The good news is that you can copy and paste an existing license into your repository. It will only take a minute to protect your work. MIT, Apache 2.0, and GPLv3 are the most popular licenses, but there are other options to choose from. When you create a new project on GitHub, you are given the option to select a license.
Including an open source license will make your project effectively open source. If you have other questions about the legal side of managing open source projects, we've got you covered. Writing a README READMEs do more than explain how to use your project; they also explain why your project matters and what your users can do with it. Try to answer the following questions in your README: What does this project do? Why is it useful? How do I get started? Where can I get more help, if I need it? You can use your README to answer other questions: how you handle contributions, what the project's goals are, and information about licenses and attribution. If you don't want to accept contributions, or your project isn't ready for production, write that down. Better documentation means more users, fewer support requests, and more contributors. Remember that your readers aren't you. People may come to the project with completely different experiences from yours. — @tracymakes, "Writing So Your Words Are Read (video)" Sometimes people avoid writing a README because they feel the project is unfinished, or they don't want contributions. These are all very good reasons to write one. For more inspiration, try using @18F's "Making READMEs Readable" or @PurpleBooth's README template to write your README. When you include a README file in your root directory, GitHub will automatically display it on the repository's home page.
Writing your contributing guidelines A CONTRIBUTING file tells your audience how to participate in your project, for example: How to file a bug report (try using issue and pull request templates) How to suggest a new feature How to set up your environment and run tests In addition to technical details, a CONTRIBUTING file is an opportunity to communicate your expectations, such as: The types of contributions you're looking for Your vision or roadmap for the project How contributors should (or should not) get in touch with you Using a warm, friendly tone and offering specific suggestions for contributions can go a long way in making newcomers feel welcome and excited to participate. For example, Active Admin starts its contributing guide with: First off, thank you for considering contributing to Active Admin. It's people like you that make it such a great tool. In the earliest stages of your project, your CONTRIBUTING file can be simple. You should always explain how to report bugs or issues, and any technical requirements (such as tests) for making a contribution. Over time, you might add other frequently asked questions to the file. Writing this information down means fewer people will ask you the same question again. For more help, check out @nayafia's contributing guide template or @mozilla's "How to Build a CONTRIBUTING.md". Link to your CONTRIBUTING file from your README so more people see it. If you place the file in your repository, GitHub will automatically link to it when a contributor creates an issue or opens a pull request. Establishing a code of conduct We've all experienced a certain sense of abuse, whether when having to explain why something has to be a certain way, or as users just asking a simple question.
(…) A code of conduct becomes an easy (referenceable and linkable) way of signaling that a team takes constructive criticism seriously. — @mlynch, "Making Open Source a Happier Place" Finally, a code of conduct helps establish ground rules of behavior for your project's participants. This is especially valuable if you're launching an open source project for a company or community. A code of conduct facilitates healthy, constructive community behavior, reducing your stress as a maintainer. For more information, check out the Code of Conduct guide. In addition to communicating how you expect participants to behave, a code of conduct also tends to describe who those expectations apply to, when they apply, and what to do if a violation occurs. As with open source licenses, there are emerging standards for codes of conduct, so you don't have to write your own. The Contributor Covenant is a code of conduct used by more than 40,000 open source projects, including Kubernetes, Rails, and Swift. Be prepared to enforce yours when necessary. Copy the text directly into a CODE_OF_CONDUCT file in your repository's root directory, and link to it from your README. Naming and branding your project Branding is more than picking a catchy name and a good logo. It's about how you talk about your project and who you reach with your message. Choosing the right name Pick a name that is easy to remember and, if possible, gives some idea of what the project does. Examples: Sentry monitors apps Thin is a Ruby web server If you're building on an existing project, using its name as a prefix can help clarify what the project does (for example: node-fetch brings window.fetch to Node.js). Consider clarity above all.
Jokes are fun, but remember that some jokes may not translate to other languages or cultures, or land with people whose experiences differ from yours. Some of your potential users may be company employees: you don't want to make them uncomfortable when they have to explain your project at work! Avoiding name conflicts Look for projects with the same or a similar name, especially if you share the same ecosystem or language. If your name overlaps with a popular existing project, you may confuse people. If you want a website, Twitter handle, or other properties to represent your project, make sure you can get the names you want. Ideally, reserve those names now for peace of mind, even if you don't intend to use them yet. Make sure your project's name doesn't infringe any trademarks; otherwise, the company could ask you to take down your project, or even take legal action against you. It's not worth the risk. You can check WIPO for trademark conflicts. If you're at a company, this is one of the things your legal team can help with. Finally, do a quick Google search: will people be able to find the name easily? Does anything appear in the search results that you wouldn't want them to see? How you write (and code) affects your brand Throughout your project's lifecycle, you will write a great many documents: READMEs, tutorials, issues, and more. Whether it's official documentation or a casual email, your writing style is part of your project's brand. Consider how it might come across to your audience and whether that is the tone you want to set. I tried to be involved with every thread on the mailing list, and to model exemplary behavior: being nice to people, taking their issues seriously, and trying to be helpful overall.
After a while, people came to me not just to ask questions, but to help me answer them, and, to my surprise, they imitated my style. — @janl on CouchDB, "Sustainable Open Source" Using inclusive language can go a long way toward making your project welcoming to new participants. Keep your language simple. Beyond how you write, your coding style can matter too. Angular and jQuery are examples of projects with rigorous coding styles. You don't need to write a style guide for your project when you're just starting out, and you may even find that you enjoy incorporating different coding styles into your project. But you should anticipate how your writing and coding style might attract or discourage different types of people. Define this at the start. Your pre-launch checklist You're ready to launch your own open source project. Here's a checklist to help. Once every box is checked, you're ready to go! Click "publish". Documentation The project has a LICENSE file with an open source license. The project has basic documentation (README, CONTRIBUTING, CODE_OF_CONDUCT). The name is easy to remember, gives some idea of what the project does, and doesn't conflict with an existing project or infringe on trademarks. The issue queue is up to date, organized, and labeled. Code The project uses consistent code conventions and clear function, method, and variable names. The code is properly commented and documented. There is no sensitive material in the revision history, issues, or pull requests.
(for example, passwords).

People
If you're an individual:
- You've talked to the legal department and/or understand your company's IP and open source policies (if you're employed somewhere).

If you're a company or organization:
- You've talked to your legal department.
- You have a marketing plan for announcing and promoting the project.
- Someone has committed to managing community interactions.
- At least two people have administrative roles and access to the project.

You did it! Congratulations on open sourcing your first project! Whatever the outcome, working in public is a gift to the community. With every commit, comment, and pull request, you're creating opportunities not just for yourself, but for others to learn and grow.

Related guides
Finding Users for Your Project: Help your open source project grow by getting it into the hands of happy users.
Building Welcoming Communities: Building a community that encourages people to use, contribute to, and evangelize your project. | 2026-01-13T09:29:15
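The documentation items in the checklist above can be sanity-checked mechanically before you hit "publish". A minimal sketch using only the standard library; the filenames checked are common community conventions (any extension is accepted), not requirements, and the script name is illustrative:

```python
# Minimal pre-launch check: verify the basic community docs exist in a repo.
from pathlib import Path

# Conventional community health files (extension-agnostic: README.md counts).
EXPECTED = ["LICENSE", "README", "CONTRIBUTING", "CODE_OF_CONDUCT"]

def missing_docs(repo: Path) -> list[str]:
    """Return the expected doc names with no matching file in the repo root."""
    names = {p.stem.upper() for p in repo.iterdir() if p.is_file()}
    return [doc for doc in EXPECTED if doc not in names]

if __name__ == "__main__":
    import sys
    repo = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for doc in missing_docs(repo):
        print(f"missing: {doc}")
```

This only covers the mechanical part of the checklist; whether the README is actually helpful, or the license is the right one, still needs a human reviewer.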
https://proceedings.mlr.press/v37/ | Proceedings of Machine Learning Research | Proceedings of the 32nd International Conference on Machine Learning Held in Lille, France on 07-09 July 2015 Published as Volume 37 by the Proceedings of Machine Learning Research on 01 June 2015. Volume Edited by: Francis Bach David Blei Series Editors: Neil D. Lawrence Mark Reid Stochastic Optimization with Importance Sampling for Regularized Loss Minimization Peilin Zhao, Tong Zhang ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1-9 [ abs ][ Download PDF ] Approval Voting and Incentives in Crowdsourcing Nihar Shah, Dengyong Zhou, Yuval Peres ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:10-19 [ abs ][ Download PDF ][ Supplementary Material ] A low variance consistent test of relative dependency Wacha Bounliphone, Arthur Gretton, Arthur Tenenhaus, Matthew Blaschko ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:20-29 [ abs ][ Download PDF ] An Aligned Subtree Kernel for Weighted Graphs Lu Bai, Luca Rossi, Zhihong Zhang, Edwin Hancock ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:30-39 [ abs ][ Download PDF ][ Supplementary Material ] Spectral Clustering via the Power Method - Provably Christos Boutsidis, Prabhanjan Kambadur, Alex Gittens ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:40-48 [ abs ][ Download PDF ] Information Geometry and Minimum Description Length Networks Ke Sun, Jun Wang, Alexandros Kalousis, Stephan Marchand-Maillet ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:49-58 [ abs ][ Download PDF ][
Supplementary Material ] Efficient Training of LDA on a GPU by Mean-for-Mode Estimation Jean-Baptiste Tristan, Joseph Tassarotti, Guy Steele ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:59-68 [ abs ][ Download PDF ] Adaptive Stochastic Alternating Direction Method of Multipliers Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:69-77 [ abs ][ Download PDF ] A Lower Bound for the Optimization of Finite Sums Alekh Agarwal, Leon Bottou ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:78-86 [ abs ][ Download PDF ][ Supplementary Material ] Learning Word Representations with Hierarchical Sparse Coding Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah Smith ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:87-96 [ abs ][ Download PDF ][ Supplementary Material ] Learning Transferable Features with Deep Adaptation Networks Mingsheng Long, Yue Cao, Jianmin Wang, Michael Jordan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:97-105 [ abs ][ Download PDF ] Robust partially observable Markov decision process Takayuki Osogami ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:106-115 [ abs ][ Download PDF ][ Supplementary Material ] On the Relationship between Sum-Product Networks and Bayesian Networks Han Zhao, Mazen Melibari, Pascal Poupart ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:116-124 [ abs ][ Download PDF ][ Supplementary Material ] Learning from Corrupted Binary Labels via Class-Probability Estimation Aditya Menon, Brendan Van Rooyen, Cheng Soon Ong, Bob Williamson ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:125-134 [ abs ][ Download PDF ][ Supplementary Material ] An Explicit Sampling Dependent Spectral Error Bound for Column Subset Selection Tianbao Yang, 
Lijun Zhang, Rong Jin, Shenghuo Zhu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:135-143 [ abs ][ Download PDF ] A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate Ohad Shamir ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:144-152 [ abs ][ Download PDF ][ Supplementary Material ] Attribute Efficient Linear Regression with Distribution-Dependent Sampling Doron Kukliansky, Ohad Shamir ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:153-161 [ abs ][ Download PDF ][ Supplementary Material ] Learning Local Invariant Mahalanobis Distances Ethan Fetaya, Shimon Ullman ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:162-168 [ abs ][ Download PDF ] Finding Linear Structure in Large Datasets with Scalable Canonical Correlation Analysis Zhuang Ma, Yichao Lu, Dean Foster ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:169-178 [ abs ][ Download PDF ][ Supplementary Material ] Abstraction Selection in Model-based Reinforcement Learning Nan Jiang, Alex Kulesza, Satinder Singh ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:179-188 [ abs ][ Download PDF ][ Supplementary Material ] Surrogate Functions for Maximizing Precision at the Top Purushottam Kar, Harikrishna Narasimhan, Prateek Jain ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:189-198 [ abs ][ Download PDF ][ Supplementary Material ] Optimizing Non-decomposable Performance Measures: A Tale of Two Classes Harikrishna Narasimhan, Purushottam Kar, Prateek Jain ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:199-208 [ abs ][ Download PDF ][ Supplementary Material ] Coresets for Nonparametric Estimation - the Case of DP-Means Olivier Bachem, Mario Lucic, Andreas Krause ; Proceedings of the 32nd International Conference on Machine Learning , 
PMLR 37:209-217 [ abs ][ Download PDF ][ Supplementary Material ] A Relative Exponential Weighing Algorithm for Adversarial Utility-based Dueling Bandits Pratik Gajane, Tanguy Urvoy, Fabrice Clérot ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:218-227 [ abs ][ Download PDF ][ Supplementary Material ] Functional Subspace Clustering with Application to Time Series Mohammad Taha Bahadori, David Kale, Yingying Fan, Yan Liu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:228-237 [ abs ][ Download PDF ][ Supplementary Material ] Accelerated Online Low Rank Tensor Learning for Multivariate Spatiotemporal Streams Rose Yu, Dehua Cheng, Yan Liu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:238-247 [ abs ][ Download PDF ][ Supplementary Material ] Atomic Spatial Processes Sean Jewell, Neil Spencer, Alexandre Bouchard-Côté ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:248-256 [ abs ][ Download PDF ][ Supplementary Material ] Classification with Low Rank and Missing Data Elad Hazan, Roi Livni, Yishay Mansour ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:257-266 [ abs ][ Download PDF ][ Supplementary Material ] Dynamic Sensing: Better Classification under Acquisition Constraints Oran Richman, Shie Mannor ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:267-275 [ abs ][ Download PDF ] A Modified Orthant-Wise Limited Memory Quasi-Newton Method with Convergence Analysis Pinghua Gong, Jieping Ye ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:276-284 [ abs ][ Download PDF ][ Supplementary Material ] Telling cause from effect in deterministic linear dynamical systems Naji Shajarisales, Dominik Janzing, Bernhard Schoelkopf, Michel Besserve ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:285-294 [ abs ][ Download PDF ][ 
Supplementary Material ] High Dimensional Bayesian Optimisation and Bandits via Additive Models Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:295-304 [ abs ][ Download PDF ][ Supplementary Material ] Theory of Dual-sparse Regularized Randomized Reduction Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:305-314 [ abs ][ Download PDF ] Generalization error bounds for learning to rank: Does the length of document lists matter? Ambuj Tewari, Sougata Chaudhuri ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:315-323 [ abs ][ Download PDF ][ Supplementary Material ] PeakSeg: constrained optimal segmentation and supervised penalty learning for peak detection in count data Toby Hocking, Guillem Rigaill, Guillaume Bourque ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:324-332 [ abs ][ Download PDF ] Mind the duality gap: safer rules for the Lasso Olivier Fercoq, Alexandre Gramfort, Joseph Salmon ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:333-342 [ abs ][ Download PDF ][ Supplementary Material ] A General Analysis of the Convergence of ADMM Robert Nishihara, Laurent Lessard, Ben Recht, Andrew Packard, Michael Jordan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:343-352 [ abs ][ Download PDF ] Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization Yuchen Zhang, Xiao Lin ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:353-361 [ abs ][ Download PDF ] DiSCO: Distributed Optimization for Self-Concordant Empirical Loss Yuchen Zhang, Xiao Lin ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:362-370 [ abs ][ Download PDF ] Spectral MLE: Top-K Rank Aggregation from Pairwise Comparisons 
Yuxin Chen, Changho Suh ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:371-380 [ abs ][ Download PDF ][ Supplementary Material ] Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs Stephen Bach, Bert Huang, Jordan Boyd-Graber, Lise Getoor ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:381-390 [ abs ][ Download PDF ][ Supplementary Material ] Structural Maxent Models Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Umar Syed ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:391-399 [ abs ][ Download PDF ][ Supplementary Material ] A Provable Generalized Tensor Spectral Method for Uniform Hypergraph Partitioning Debarghya Ghoshdastidar, Ambedkar Dukkipati ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:400-409 [ abs ][ Download PDF ][ Supplementary Material ] The Benefits of Learning with Strongly Convex Approximate Inference Ben London, Bert Huang, Lise Getoor ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:410-418 [ abs ][ Download PDF ][ Supplementary Material ] Pushing the Limits of Affine Rank Minimization by Adapting Probabilistic PCA Bo Xin, David Wipf ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:419-427 [ abs ][ Download PDF ] Budget Allocation Problem with Multiple Advertisers: A Game Theoretic View Takanori Maehara, Akihiro Yabe, Ken-ichi Kawarabayashi ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:428-437 [ abs ][ Download PDF ][ Supplementary Material ] Tracking Approximate Solutions of Parameterized Optimization Problems over Multi-Dimensional (Hyper-)Parameter Domains Katharina Blechschmidt, Joachim Giesen, Soeren Laue ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:438-447 [ abs ][ Download PDF ] Batch Normalization: Accelerating Deep Network Training by Reducing 
Internal Covariate Shift Sergey Ioffe, Christian Szegedy ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:448-456 [ abs ][ Download PDF ][ Supplementary Material ] Distributed Estimation of Generalized Matrix Rank: Efficient Algorithms and Lower Bounds Yuchen Zhang, Martin Wainwright, Michael Jordan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:457-465 [ abs ][ Download PDF ] Landmarking Manifolds with Gaussian Processes Dawen Liang, John Paisley ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:466-474 [ abs ][ Download PDF ] Markov Mixed Membership Models Aonan Zhang, John Paisley ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:475-483 [ abs ][ Download PDF ] A Unified Framework for Outlier-Robust PCA-like Algorithms Wenzhuo Yang, Huan Xu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:484-493 [ abs ][ Download PDF ][ Supplementary Material ] Streaming Sparse Principal Component Analysis Wenzhuo Yang, Huan Xu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:494-503 [ abs ][ Download PDF ][ Supplementary Material ] A Divide and Conquer Framework for Distributed Graph Clustering Wenzhuo Yang, Huan Xu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:504-513 [ abs ][ Download PDF ][ Supplementary Material ] How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances? Senjian An, Farid Boussaid, Mohammed Bennamoun ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:514-523 [ abs ][ Download PDF ] Improved Regret Bounds for Undiscounted Continuous Reinforcement Learning K. 
Lakshmanan, Ronald Ortner, Daniil Ryabko ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:524-532 [ abs ][ Download PDF ] The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling Michael Betancourt ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:533-540 [ abs ][ Download PDF ] Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets Dan Garber, Elad Hazan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:541-549 [ abs ][ Download PDF ][ Supplementary Material ] Ordered Stick-Breaking Prior for Sequential MCMC Inference of Bayesian Nonparametric Models Mrinal Das, Trapit Bansal, Chiranjib Bhattacharyya ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:550-559 [ abs ][ Download PDF ][ Supplementary Material ] Online Learning of Eigenvectors Dan Garber, Elad Hazan, Tengyu Ma ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:560-568 [ abs ][ Download PDF ][ Supplementary Material ] A Unifying Framework of Anytime Sparse Gaussian Process Regression Models with Stochastic Variational Inference for Big Data Trong Nghia Hoang, Quang Minh Hoang, Bryan Kian Hsiang Low ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:569-578 [ abs ][ Download PDF ][ Supplementary Material ] Yinyang K-Means: A Drop-In Replacement of the Classic K-Means with Consistent Speedup Yufei Ding, Yue Zhao, Xipeng Shen, Madanlal Musuvathi, Todd Mytkowicz ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:579-587 [ abs ][ Download PDF ] Ordinal Mixed Membership Models Seppo Virtanen, Mark Girolami ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:588-596 [ abs ][ Download PDF ] Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network Seunghoon Hong, Tackgeun You, Suha Kwak, 
Bohyung Han ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:597-606 [ abs ][ Download PDF ][ Supplementary Material ] Fast Kronecker Inference in Gaussian Processes with non-Gaussian Likelihoods Seth Flaxman, Andrew Wilson, Daniel Neill, Hannes Nickisch, Alex Smola ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:607-616 [ abs ][ Download PDF ][ Supplementary Material ] Statistical and Algorithmic Perspectives on Randomized Sketching for Ordinary Least-Squares Garvesh Raskutti, Michael Mahoney ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:617-625 [ abs ][ Download PDF ] On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence Nathaniel Korda, Prashanth La ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:626-634 [ abs ][ Download PDF ] Learning Parametric-Output HMMs with Two Aliased States Roi Weiss, Boaz Nadler ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:635-644 [ abs ][ Download PDF ][ Supplementary Material ] Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data Yarin Gal, Yutian Chen, Zoubin Ghahramani ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:645-654 [ abs ][ Download PDF ][ Supplementary Material ] Improving the Gaussian Process Sparse Spectrum Approximation by Representing Uncertainty in Frequency Inputs Yarin Gal, Richard Turner ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:655-664 [ abs ][ Download PDF ][ Supplementary Material ] Ranking from Stochastic Pairwise Preferences: Recovering Condorcet Winners and Tournament Solution Sets at the Top Arun Rajkumar, Suprovat Ghoshal, Lek-Heng Lim, Shivani Agarwal ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:665-673 [ abs ][ Download PDF ][ Supplementary 
Material ] Stochastic Dual Coordinate Ascent with Adaptive Probabilities Dominik Csiba, Zheng Qu, Peter Richtarik ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:674-683 [ abs ][ Download PDF ][ Supplementary Material ] Vector-Space Markov Random Fields via Exponential Families Wesley Tansey, Oscar Hernan Madrid Padilla, Arun Sai Suggala, Pradeep Ravikumar ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:684-692 [ abs ][ Download PDF ][ Supplementary Material ] JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes Jonathan Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash Mansinghka ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:693-701 [ abs ][ Download PDF ][ Supplementary Material ] Low Rank Approximation using Error Correcting Coding Matrices Shashanka Ubaru, Arya Mazumdar, Yousef Saad ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:702-710 [ abs ][ Download PDF ] Off-policy Model-based Learning under Unknown Factored Dynamics Assaf Hallak, Francois Schnitzler, Timothy Mann, Shie Mannor ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:711-719 [ abs ][ Download PDF ][ Supplementary Material ] Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold with Application to Image Set Classification Zhiwu Huang, Ruiping Wang, Shiguang Shan, Xianqiu Li, Xilin Chen ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:720-729 [ abs ][ Download PDF ] Asymmetric Transfer Learning with Deep Gaussian Processes Melih Kandemir ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:730-738 [ abs ][ Download PDF ] Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing Rongda Zhu, Quanquan Gu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:739-747 [ abs ][ Download PDF ] BilBOWA: Fast 
Bilingual Distributed Representations without Word Alignments Stephan Gouws, Yoshua Bengio, Greg Corrado ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:748-756 [ abs ][ Download PDF ] Multi-view Sparse Co-clustering via Proximal Alternating Linearized Minimization Jiangwen Sun, Jin Lu, Tingyang Xu, Jinbo Bi ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:757-766 [ abs ][ Download PDF ][ Supplementary Material ] Cascading Bandits: Learning to Rank in the Cascade Model Branislav Kveton, Csaba Szepesvari, Zheng Wen, Azin Ashkan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:767-776 [ abs ][ Download PDF ][ Supplementary Material ] Latent Topic Networks: A Versatile Probabilistic Programming Framework for Topic Models James Foulds, Shachi Kumar, Lise Getoor ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:777-786 [ abs ][ Download PDF ][ Supplementary Material ] Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions Alina Ene, Huy Nguyen ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:787-795 [ abs ][ Download PDF ][ Supplementary Material ] Alpha-Beta Divergences Discover Micro and Macro Structures in Data Karthik Narayan, Ali Punjani, Pieter Abbeel ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:796-804 [ abs ][ Download PDF ][ Supplementary Material ] Fictitious Self-Play in Extensive-Form Games Johannes Heinrich, Marc Lanctot, David Silver ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:805-813 [ abs ][ Download PDF ][ Supplementary Material ] Counterfactual Risk Minimization: Learning from Logged Bandit Feedback Adith Swaminathan, Thorsten Joachims ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:814-823 [ abs ][ Download PDF ] The Hedge Algorithm on a Continuum Walid Krichene, 
Maximilian Balandat, Claire Tomlin, Alexandre Bayen ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:824-832 [ abs ][ Download PDF ][ Supplementary Material ] A Linear Dynamical System Model for Text David Belanger, Sham Kakade ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:833-842 [ abs ][ Download PDF ][ Supplementary Material ] Unsupervised Learning of Video Representations using LSTMs Nitish Srivastava, Elman Mansimov, Ruslan Salakhudinov ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:843-852 [ abs ][ Download PDF ] Message Passing for Collective Graphical Models Tao Sun, Dan Sheldon, Akshat Kumar ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:853-861 [ abs ][ Download PDF ] DP-space: Bayesian Nonparametric Subspace Clustering with Small-variance Asymptotics Yining Wang, Jun Zhu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:862-870 [ abs ][ Download PDF ][ Supplementary Material ] HawkesTopic: A Joint Model for Network Inference and Topic Modeling from Text-Based Cascades Xinran He, Theodoros Rekatsinas, James Foulds, Lise Getoor, Yan Liu ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:871-880 [ abs ][ Download PDF ][ Supplementary Material ] MADE: Masked Autoencoder for Distribution Estimation Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:881-889 [ abs ][ Download PDF ][ Supplementary Material ] An Online Learning Algorithm for Bilinear Models Yuanbin Wu, Shiliang Sun ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:890-898 [ abs ][ Download PDF ][ Supplementary Material ] Adaptive Belief Propagation Georgios Papachristoudis, John Fisher ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:899-907 [ abs ][ 
Download PDF ][ Supplementary Material ] Large-scale log-determinant computation through stochastic Chebyshev expansions Insu Han, Dmitry Malioutov, Jinwoo Shin ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:908-917 [ abs ][ Download PDF ][ Supplementary Material ] Differentially Private Bayesian Optimization Matt Kusner, Jacob Gardner, Roman Garnett, Kilian Weinberger ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:918-927 [ abs ][ Download PDF ][ Supplementary Material ] A Nearly-Linear Time Framework for Graph-Structured Sparsity Chinmay Hegde, Piotr Indyk, Ludwig Schmidt ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:928-937 [ abs ][ Download PDF ][ Supplementary Material ] Support Matrix Machines Luo Luo, Yubo Xie, Zhihua Zhang, Wu-Jun Li ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:938-947 [ abs ][ Download PDF ][ Supplementary Material ] Rademacher Observations, Private Data, and Boosting Richard Nock, Giorgio Patrini, Arik Friedman ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:948-956 [ abs ][ Download PDF ] From Word Embeddings To Document Distances Matt Kusner, Yu Sun, Nicholas Kolkin, Kilian Weinberger ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:957-966 [ abs ][ Download PDF ] Bayesian and Empirical Bayesian Forests Taddy Matthew, Chun-Sheng Chen, Jun Yu, Mitch Wyle ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:967-976 [ abs ][ Download PDF ] Inferring Graphs from Cascades: A Sparse Recovery Framework Jean Pouget-Abadie, Thibaut Horel ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:977-986 [ abs ][ Download PDF ][ Supplementary Material ] Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM Ching-Pei Lee, Dan Roth ; Proceedings of the 32nd International Conference 
on Machine Learning , PMLR 37:987-996 [ abs ][ Download PDF ][ Supplementary Material ] Safe Exploration for Optimization with Gaussian Processes Yanan Sui, Alkis Gotovos, Joel Burdick, Andreas Krause ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:997-1005 [ abs ][ Download PDF ][ Supplementary Material ] The Ladder: A Reliable Leaderboard for Machine Learning Competitions Avrim Blum, Moritz Hardt ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1006-1014 [ abs ][ Download PDF ] Enabling scalable stochastic gradient-based inference for Gaussian processes by employing the Unbiased LInear System SolvEr (ULISSE) Maurizio Filippone, Raphael Engler ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1015-1024 [ abs ][ Download PDF ] Finding Galaxies in the Shadows of Quasars with Gaussian Processes Roman Garnett, Shirley Ho, Jeff Schneider ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1025-1033 [ abs ][ Download PDF ] Following the Perturbed Leader for Online Structured Learning Alon Cohen, Tamir Hazan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1034-1042 [ abs ][ Download PDF ][ Supplementary Material ] Reified Context Models Jacob Steinhardt, Percy Liang ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1043-1052 [ abs ][ Download PDF ][ Supplementary Material ] Large-Scale Markov Decision Problems with KL Control Cost and its Application to Crowdsourcing Yasin Abbasi-Yadkori, Peter Bartlett, Xi Chen, Alan Malek ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1053-1062 [ abs ][ Download PDF ][ Supplementary Material ] Learning Fast-Mixing Models for Structured Prediction Jacob Steinhardt, Percy Liang ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1063-1072 [ abs ][ Download PDF ][ Supplementary Material ] A 
Probabilistic Model for Dirty Multi-task Feature Selection Daniel Hernandez-Lobato, Jose Miguel Hernandez-Lobato, Zoubin Ghahramani ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1073-1082 [ abs ][ Download PDF ][ Supplementary Material ] On Deep Multi-View Representation Learning Weiran Wang, Raman Arora, Karen Livescu, Jeff Bilmes ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1083-1092 [ abs ][ Download PDF ][ Supplementary Material ] Learning Program Embeddings to Propagate Feedback on Student Code Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, Leonidas Guibas ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1093-1102 [ abs ][ Download PDF ] Safe Subspace Screening for Nuclear Norm Regularized Least Squares Problems Qiang Zhou, Qi Zhao ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1103-1112 [ abs ][ Download PDF ][ Supplementary Material ] Efficient Learning in Large-Scale Combinatorial Semi-Bandits Zheng Wen, Branislav Kveton, Azin Ashkan ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1113-1122 [ abs ][ Download PDF ][ Supplementary Material ] Swept Approximate Message Passing for Sparse Estimation Andre Manoel, Florent Krzakala, Eric Tramel, Lenka Zdeborovà ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1123-1132 [ abs ][ Download PDF ] Simple regret for infinitely many armed bandits Alexandra Carpentier, Michal Valko ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1133-1141 [ abs ][ Download PDF ] Exponential Integration for Hamiltonian Monte Carlo Wei-Lun Chao, Justin Solomon, Dominik Michels, Fei Sha ; Proceedings of the 32nd International Conference on Machine Learning , PMLR 37:1142-1151 [ abs ][ Download PDF ][ Supplementary Material ] Optimal Regret Analysis of Thompson Sampling in 
Proceedings of the 32nd International Conference on Machine Learning, PMLR volume 37 (continued):
- Stochastic Multi-armed Bandit Problem with Multiple Plays. Junpei Komiyama, Junya Honda, Hiroshi Nakagawa. 37:1152-1161.
- Faster cover trees. Mike Izbicki, Christian Shelton. 37:1162-1170.
- Blitz: A Principled Meta-Algorithm for Scaling Sparse Optimization. Tyler Johnson, Carlos Guestrin. 37:1171-1179.
- Unsupervised Domain Adaptation by Backpropagation. Yaroslav Ganin, Victor Lempitsky. 37:1180-1189.
- Non-Linear Cross-Domain Collaborative Filtering via Hyper-Structure Transfer. Yan-Fu Liu, Cheng-Yu Hsu, Shan-Hung Wu. 37:1190-1198.
- Manifold-valued Dirichlet Processes. Hyunwoo Kim, Jia Xu, Baba Vemuri, Vikas Singh. 37:1199-1208.
- Multi-Task Learning for Subspace Segmentation. Yu Wang, David Wipf, Qing Ling, Wei Chen, Ian Wassell. 37:1209-1217.
- Markov Chain Monte Carlo and Variational Inference: Bridging the Gap. Tim Salimans, Diederik Kingma, Max Welling. 37:1218-1226.
- Scalable Model Selection for Large-Scale Factorial Relational Models. Chunchen Liu, Lu Feng, Ryohei Fujimaki, Yusuke Muraoka. 37:1227-1235.
- The Power of Randomization: Distributed Submodular Maximization on Massive Datasets. Rafael Barbosa, Alina Ene, Huy Nguyen, Justin Ward. 37:1236-1244.
- Dealing with small data: On the generalization of context trees. Ralf Eggeling, Mikko Koivisto, Ivo Grosse. 37:1245-1253.
- Non-Gaussian Discriminative Factor Models via the Max-Margin Rank-Likelihood. Xin Yuan, Ricardo Henao, Ephraim Tsalik, Raymond Langley, Lawrence Carin. 37:1254-1263.
- A Bayesian nonparametric procedure for comparing algorithms. Alessio Benavoli, Giorgio Corani, Francesca Mangili, Marco Zaffalon. 37:1264-1272.
- Convergence rate of Bayesian tensor estimator and its minimax optimality. Taiji Suzuki. 37:1273-1282.
- On Identifying Good Options under Combinatorially Structured Feedback in Finite Noisy Environments. Yifan Wu, Andras Gyorgy, Csaba Szepesvari. 37:1283-1291.
- Nested Sequential Monte Carlo Methods. Christian Naesseth, Fredrik Lindsten, Thomas Schon. 37:1292-1301.
- Sparse Variational Inference for Generalized GP Models. Rishit Sheth, Yuyang Wang, Roni Khardon. 37:1302-1311.
- Universal Value Function Approximators. Tom Schaul, Daniel Horgan, Karol Gregor, David Silver. 37:1312-1320.
- Approximate Dynamic Programming for Two-Player Zero-Sum Markov Games. Julien Perolat, Bruno Scherrer, Bilal Piot, Olivier Pietquin. 37:1321-1329.
- On Greedy Maximization of Entropy. Dravyansh Sharma, Ashish Kapoor, Amit Deshpande. 37:1330-1338.
- Metadata Dependent Mondrian Processes. Yi Wang, Bin Li, Yang Wang, Fang Chen. 37:1339-1347.
- Complex Event Detection using Semantic Saliency and Nearly-Isotonic SVM. Xiaojun Chang, Yi Yang, Eric Xing, Yaoliang Yu. 37:1348-1357.
- Rebuilding Factorized Information Criterion: Asymptotically Accurate Marginal Likelihood. Kohei Hayashi, Shin-ichi Maeda, Ryohei Fujimaki. 37:1358-1366.
- Double Nyström Method: An Efficient and Accurate Nyström Scheme for Large-Scale Data Sets. Woosang Lim, Minhwan Kim, Haesun Park, Kyomin Jung. 37:1367-1375.
- The Composition Theorem for Differential Privacy. Peter Kairouz, Sewoong Oh, Pramod Viswanath. 37:1376-1385.
- Convex Formulation for Learning from Positive and Unlabeled Data. Marthinus Du Plessis, Gang Niu, Masashi Sugiyama. 37:1386-1394.
- Threshold Influence Model for Allocating Advertising Budgets. Atsushi Miyauchi, Yuni Iwamasa, Takuro Fukunaga, Naonori Kakimura. 37:1395-1404.
- Strongly Adaptive Online Learning. Amit Daniely, Alon Gonen, Shai Shalev-Shwartz. 37:1405-1411.
- CUR Algorithm for Partially Observed Matrices. Miao Xu, Rong Jin, Zhi-Hua Zhou. 37:1412-1421.
- A Deterministic Analysis of Noisy Sparse Subspace Clustering for Dimensionality-reduced Data. Yining Wang, Yu-Xiang Wang, Aarti Singh. 37:1422-1431.
- MRA-based Statistical Learning from Incomplete Rankings. Eric Sibony, Stéphan Clemençon, Jérémie Jakubowicz. 37:1432-1441.
- Risk and Regret of Hierarchical Bayesian Learners. Jonathan Huggins, Josh Tenenbaum. 37:1442-1451.
- Towards a Learning Theory of Cause-Effect Inference. David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Iliya Tolstikhin. 37:1452-1461.
- DRAW: A Recurrent Neural Network For Image Generation. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, Daan Wierstra. 37:1462-1471.
- Multiview Triplet Embedding: Learning Attributes in Multiple Maps. Ehsan Amid, Antti Ukkonen. 37:1472-1480.
- Distributed Gaussian Processes. Marc Deisenroth, Jun Wei Ng. 37:1481-1490.
- Guaranteed Tensor Decomposition: A Moment Approach. Gongguo Tang, Parikshit Shah. 37:1491-1500.
- ℓ1,p-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods. Zirui Zhou, Qi Zhang, Anthony Man-Cho So. 37:1501-1510.
- Consistent estimation of dynamic and multi-layer block models. Qiuyi Han, Kevin Xu, Edoardo Airoldi. 37:1511-1520.
- On the Rate of Convergence and Error Bounds for LSTD(λ). Manel Tagorti, Bruno Scherrer. 37:1521-1529.
- Variational Inference with Normalizing Flows. Danilo Rezende, Shakir Mohamed. 37:1530-1538.
- Controversy in mechanistic modelling with Gaussian processes. Benn Macdonald, Catherine Higham, Dirk Husmeier. 37:1539-1547.
- Convex Learning of Multiple Tasks and their Structure. Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco. 37:1548-1557.
- K-hyperplane Hinge-Minimax Classifier. Margarita Osadchy, Tamir Hazan, Daniel Keren. 37:1558-1566.
- Non-Stationary Approximate Modified Policy Iteration. Boris Lesner, Bruno Scherrer. 37:1567-1575.
- Entropy evaluation based on confidence intervals of frequency estimates: Application to the learning of decision trees. Mathieu Serrurier, Henri Prade. 37:1576-1584.
- Geometric Conditions for Subspace-Sparse Recovery. Chong You, Rene Vidal. 37:1585-1593.
- An Empirical Study of Stochastic Variational Inference Algorithms for the Beta Bernoulli Process. Amar Shah, David Knowles, Zoubin Ghahramani. 37:1594-1603.
- Long Short-Term Memory Over Recursive Structures. Xiaodan Zhu, Parinaz Sobihani, Hongyu Guo. 37:1604-1612.
- Weight Uncertainty in Neural Network. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra. 37:1613-1622.
- Learning Submodular Losses with the Lovasz Hinge. Jiaqian Yu, Matthew Blaschko. 37:1623-1631.
- Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection. Julie Nutini, Mark Schmidt, Issam Laradji, Michael Friedlander, Hoyt Koepke. 37:1632-1641.
- Hashing for Distributed Data. Cong Leng, Jiaxiang Wu, Jian Cheng, Xi Zhang, Hanqing Lu. 37:1642-1650.
- Large-scale Distributed Dependent Nonparametric Trees. Zhiting Hu, Ho Qirong, Avinava Dubey, Eric Xing. 37:1651-1659.
- Qualitative Multi-Armed Bandits: A Quantile-Based Approach. Balazs Szorenyi, Robert Busa-Fekete, Paul Weng, Eyke Hüllermeier. 37:1660-1668.
- Deep Edge-Aware Filters. Li Xu, Jimmy Ren, Qiong Yan, Renjie Liao, Jiaya Jia. 37:1669-1678.
- A Convex Optimization Framework for Bi-Clustering. Shiau Hong Lim, Yudong Chen, Huan Xu. 37:1679-1688.
- Is Feature Selection Secure against Training Data Poisoning? Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli. 37:1689-1698.
- Predictive Entropy Search for Bayesian Optimization with Unknown Constraints. Jose Miguel Hernandez-Lobato, Michael Gelbart, Matthew Hoffman, Ryan Adams, Zoubin Ghahramani. 37:1699-1707.
- A Theoretical Analysis of Metric Hypothesis Transfer Learning. Michaël Perrot, Amaury Habrard. 37:1708-1717.
- Generative Moment Matching Networks. Yujia Li, Kevin Swersky, Rich Zemel. 37:1718-1727.
- Stay on path: PCA along graph paths. Megasthenis Asteris, Anastasios Kyrillidis, Alex Dimakis, Han-Gyol Yi, Bharath Chandrasekaran. 37:1728-1736.
- Deep Learning with Limited Numerical Precision. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan. 37:1737-1746.
- Safe Screening for Multi-Task Feature Learning with Multiple Data Matrices. Jie Wang, Jieping Ye. 37:1747-1756.
- Harmonic Exponential Families on Manifolds. Taco Cohen, Max Welling. 37:1757-1765.
- Training Deep Convolutional Neural Networks to Play Go. Christopher Clark, Amos Storkey. 37:1766-1774.
- Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP). Andrew Wilson, Hannes Nickisch. 37:1775-1784.
- Learning Deep Structured Models. Liang-Chieh Chen, Alexander Schwing, Alan Yuille, Raquel Urtasun. 37:1785-1794.
- Community Detection Using Time-Dependent Personalized PageRank. Haim Avron, Lior Horesh. 37:1795-1803.
- Scalable Variational Inference in Log-supermodular Models. Josip Djolonga, Andreas Krause. 37:1804-1813.
- Variational Inference for Gaussian Process Modulated Poisson Processes. Chris Lloyd, Tom Gunter, Michael Osborne, Stephen Roberts. 37:1814-1822.
- Scalable Deep Poisson Factor Analysis for Topic Modeling. Zhe Gan, Changyou Chen, Ricardo Henao, David Carlson, Lawrence Carin. 37:1823-1832.
- Hidden Markov Anomaly Detection. Nico Goernitz, Mikio Braun, Marius Kloft. 37:1833-1842.
- Robust Estimation of Transition Matrices in High Dimensional Heavy-tailed Vector Autoregressive Processes. Huitong Qiu, Sheng Xu, Fang Han, Han Liu, Brian Caffo. 37:1843-1851.
- Convex Calibrated Surrogates for Hierarchical Classification. Harish Ramaswamy, Ambuj Tewari, Shivani Agarwal. 37:1852-1860.
- Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks. Jose Miguel Hernandez-Lobato, Ryan Adams. 37:1861-1869.
- Active Nearest Neighbors in Changing Environments. Christopher Berlind, Ruth Urner. 37:1870-1879. | 2026-01-13T09:29:15
https://www.linkedin.com/products/categories/managed-dns-services-software?trk=organization_guest_product_card_category_link | Best Managed DNS Services Software | Products | LinkedIn

Used by: Information Technology Manager (5), Domain Manager (3), Chief Technology Officer (2), Director of Information Technology (2), Cyber Security Engineer (2).

Managed DNS Services Software: software used to manage Domain Name System (DNS) traffic through specified protocols.
- Generate traffic analysis, usage analytics, and other DNS log data
- Authenticate visitors via SMTP Authentication and prevent downtime via DNS Failover
- Facilitate intelligent traffic routing (i.e. geolocation and policy) using GeoDNS

67 results, including:
- Cisco Umbrella (Secure Web Gateways, by Cisco): cloud-delivered enterprise network security that gives users a first line of defense against cyber security threats.
- Akamai Edge DNS (by Akamai Technologies): a robust, reliable, and scalable enhanced DNS web security solution that dependably directs end users to your applications and websites.
- Cloudflare DNS (by Cloudflare): DNS is a mission-critical component for any online business, yet it is often overlooked until something breaks. Cloudflare DNS is an enterprise-grade authoritative DNS service offering fast response times, redundancy, and advanced security with built-in DDoS mitigation and DNSSEC.
- Cloudflare Registrar (by Cloudflare): securely registers and manages your domain names with transparent, no-markup pricing that eliminates surprise renewal fees and hidden add-on charges.
- Infoblox Core Network Services (by Infoblox): automates and consolidates core network services, including DNS, DHCP, and IP address management, scaling with minimal management overhead as your network grows.
- Infoblox Advanced DNS Protection (by Infoblox): comprehensively defends your DNS server from DDoS and other external and internal DNS-based threats that can flood it with malicious requests.
- Akamai DNSi Big Data Connector: DNS resolution data providing a subscriber- and content-centric view of network activity.
- Akamai DNSi AuthServe: an authoritative DNS server that enables highly resilient, secure, always-on name services.
- EfficientIP DNS Blast: a hardened DNS security appliance combining a high-speed cache with advanced built-in security for DDoS attack protection.
- EfficientIP DNS Cloud: extends and manages public and private DNS services on AWS Route 53 and Microsoft Azure DNS in minutes, securing online business availability and maximizing performance.

LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines English Language | 2026-01-13T09:29:15
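The category description above mentions GeoDNS, i.e. answering DNS queries differently depending on the client's location or a routing policy. A minimal toy sketch of that idea follows; the zone name, regions, and IP addresses are hypothetical illustrations (drawn from documentation address ranges), not taken from any product listed here.

```python
# Toy sketch of GeoDNS-style policy routing: pick an answer for a query
# based on the requesting client's region, with an anycast-style fallback.
GEO_RECORDS = {
    "app.example.com": {
        "EU": "198.51.100.10",    # hypothetical European pool
        "NA": "203.0.113.20",     # hypothetical North American pool
        "default": "192.0.2.30",  # fallback for unmatched regions
    }
}

def resolve(name: str, client_region: str) -> str:
    """Return the A-record answer a GeoDNS server might give this client."""
    records = GEO_RECORDS[name]
    return records.get(client_region, records["default"])

print(resolve("app.example.com", "EU"))  # regional answer: 198.51.100.10
print(resolve("app.example.com", "SA"))  # no SA pool, falls back: 192.0.2.30
```

Real managed DNS services layer health checks and failover on top of this lookup, so an unhealthy regional pool is skipped the same way an unmatched region is.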
https://www.linkedin.com/products/radware-cloud-waf/?trk=products_seo_search | Radware Cloud WAF | LinkedIn

Radware Cloud WAF: Web Application Firewalls (WAF), by Radware.

About: Radware's Cloud WAF Service is a web application firewall that provides continuous adaptive web application security protection and full coverage of OWASP Top 10 threats.

Similar products: Cloudflare WAF, Akamai Web Application Protector, Imperva Web Application Firewall (WAF), Advanced API Security, Alibaba Web Application Firewall, DNSFilter.

Radware products: APSolute Vision (Network Management Software), Attack Mitigation System (DDoS Protection Software), Radware Alteon (Load Balancing Software), Radware Cloud DDoS Protection Service (DDoS Protection Software), Radware Cloud Workload Protection (Cloud Workload Protection Platforms), Radware DefensePro (DDoS Protection Software), Radware Security for Azure (DDoS Protection Software), Radware Threat Intelligence (Threat Intelligence Platforms). | 2026-01-13T09:29:15
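The WAF products above inspect HTTP requests for known attack patterns such as the OWASP Top 10. The sketch below illustrates only the simplest layer of that, signature matching; the patterns are simplistic placeholders of my own, not Radware's rules, and production WAFs combine signatures with behavioral and adaptive analysis.

```python
# Toy illustration of WAF-style signature inspection on a request path.
# These three regexes are hypothetical examples, not a real rule set.
import re

SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),           # reflected XSS probe
    re.compile(r"\.\./"),                   # path traversal probe
]

def inspect(request_path: str) -> str:
    """Return 'block' if any signature matches the request, else 'allow'."""
    if any(sig.search(request_path) for sig in SIGNATURES):
        return "block"
    return "allow"

print(inspect("/search?q=1 UNION SELECT password FROM users"))  # block
print(inspect("/search?q=waf+benchmarks"))                      # allow
```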
https://www.linkedin.com/products/tencentglobal-tencent-effect-sdk/ | Tencent Effect SDK | LinkedIn

Tencent Effect SDK: Software Development Kits (SDK), by Tencent.

About: Tencent Effect SDK leverages YouTu's accurate AI capabilities and Pitu's rich real-time effects to offer a wealth of services for many video processing use cases.

Similar products: Meta Business SDK, MantHANA, Elevate, Lyzr AI, RevenueCat, Ready Player Me.

Tencent products: Cloud Native Database TDSQL-C (Relational Database Management Systems), Cloud Object Storage (COS) (Object Storage Software), Cloud Virtual Machine (CVM) (Server Virtualization Software), Simple Email Service (SES) (Email Marketing Software), Tencent Cloud Automation Tools (Build Automation Tools), Tencent Cloud Block Storage (Block Storage Software), Tencent Cloud Instant Messaging (IM) (Cloud Communications Platforms), Tencent Cloud Lighthouse (Enterprise Cloud Platforms), Tencent Real-Time Communication (TRTC) (Live Streaming Software). | 2026-01-13T09:29:15
https://www.linkedin.com/products/amazon-web-services-amazon-aurora/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | Amazon Aurora | LinkedIn

Amazon Aurora: Relational Database Management Systems, by Amazon Web Services (AWS).

About: Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.

Similar products: Amazon RDS, Google Cloud SQL, Teradata Vantage, Cloud Native Database TDSQL-C, SQL Server, CockroachDB.

Amazon Web Services (AWS) products: Amazon DynamoDB (NoSQL Database Software), Amazon EC2 (Infrastructure as a Service (IaaS)), Amazon Lightsail (Platform as a Service (PaaS) Software), Amazon RDS (Relational Database Management Systems), Amazon SageMaker (Data Science & Machine Learning Platforms), Amazon Simple Storage Service (S3) (Object Storage Software), Amazon Virtual Private Cloud (Virtual Private Cloud (VPC) Software), AWS Lambda (Platform as a Service (PaaS) Software). | 2026-01-13T09:29:15
https://intl.cloud.tencent.com/product/cbs?trk=products_details_guest_secondary_call_to_action | Cloud Block Storage | Tencent Cloud

Cloud Block Storage: a reliable, scalable, and persistent block storage service.

Overview: Cloud Block Storage (CBS) is a persistent block storage service designed for CVM instances. CBS protects against data failures with up to 99.9999999% reliability, including replication within the availability zone. You can select the CBS disk type and specification that best fits your business, easily mount and unmount CBS disks to and from instances in the same availability zone, scale up or down as needed, and pay only for what you use.

Benefits:
- Elastic scalability: attach up to 20 cloud disks (up to 32 TB each) to a single instance for efficient processing of terabyte- and petabyte-scale data, and resize storage to meet your needs without business interruption.
- Multiple storage types: three volume options (HDD cloud disk, premium cloud disk, and SSD cloud disk) to meet different business storage requirements.
- High stability and reliability: 99.9999999% reliability with an industry-leading replication mechanism that facilitates rapid data migration and recovery.
- Easy to use: deploy and manage CBS disks through simple operations such as creation, mounting, unmounting, and deletion.
- Snapshot backup: take point-in-time snapshots of cloud disks to prevent data loss and business failures.

Features:
- Data backup: snapshots protect data availability and security in the event of faulty operations, attacks, or tampering, and can be created without impacting disk performance or interrupting your business. Roll back a snapshot when data loss or inconsistency occurs to protect data integrity, or copy snapshot data to a new server for rapid batch deployment or business migration.
- High reliability: data is stored in three cross-rack copies to protect against disk failure, and the CBS and snapshot systems are deployed in different data centers. Independent, isolated deployment protects data from malicious attacks, and storing CBS data and snapshot data in different racks eliminates business interruption caused by maintenance events.
- Flexible billing: scale up or down as needed and pay only for what you use; pay-as-you-go billing is metered per second and charged hourly.

Scenarios:
- Enterprise office applications: ensure data availability and integrity with comprehensive backup, snapshot, and recovery capabilities; migrate or resize storage on the fly without disrupting your business.
- Relational/NoSQL databases: full SSD storage media and a distributed architecture sustain high performance and low latency during peak hours, supporting high-throughput databases and the high-IO needs of NoSQL and relational databases.
- High-volume data analytics: CBS supports Hadoop MapReduce and HDFS for processing terabytes or petabytes of data offline, with strong multi-threaded access performance for high-IO storage scenarios.

Pricing: CBS provides two billing options. The monthly subscription mode is an upfront payment (unit: USD/GB x month) with a minimum one-month purchase, suitable for businesses with stable, long-term demand. The pay-as-you-go mode is settled hourly (unit: USD/GB x hour) and freezes fees for one billing cycle upon purchase. For more information, see Product Pricing. | 2026-01-13T09:29:15
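The two CBS billing modes above reduce to simple arithmetic: an upfront USD/GB x month rate with a one-month minimum, versus an hourly USD/GB x hour rate. A back-of-the-envelope comparison, using made-up placeholder unit prices (substitute real rates from the Tencent Cloud price sheet):

```python
# Compare the two CBS billing modes described above.
# Both unit prices are hypothetical placeholders, not Tencent's rates.
MONTHLY_USD_PER_GB = 0.05    # subscription: USD per GB per month
HOURLY_USD_PER_GB = 0.00009  # pay-as-you-go: USD per GB per hour

def monthly_cost(size_gb: int, months: int) -> float:
    """Upfront subscription cost; a one-month minimum purchase applies."""
    return MONTHLY_USD_PER_GB * size_gb * max(months, 1)

def payg_cost(size_gb: int, hours: int) -> float:
    """Pay-as-you-go cost; metered per second, settled hourly."""
    return HOURLY_USD_PER_GB * size_gb * hours

disk_gb = 500
print(f"1-month subscription: ${monthly_cost(disk_gb, 1):.2f}")   # $25.00
print(f"720 h pay-as-you-go:  ${payg_cost(disk_gb, 720):.2f}")    # $32.40
```

With these placeholder rates, a disk kept mounted around the clock is cheaper on subscription, while pay-as-you-go wins for short-lived disks, which matches the guidance in the pricing text.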
https://www.linkedin.com/company/maxmind?trk=products_details_guest_organization_image | MaxMind | LinkedIn Skip to main content LinkedIn Top Content People Learning Jobs Games Sign in Join for free MaxMind Technology, Information and Internet Malden, Massachusetts 2,991 followers Founded in 2002, MaxMind is an industry leader in IP intelligence, proxy detection, and online fraud prevention tools. Follow Discover all 99 employees Report this company About us Founded in 2002, MaxMind is an industry leader in IP intelligence, proxy detection, and online fraud prevention tools. We help businesses enhance their user experience, manage risk, and reduce fraud, easily and efficiently. Known as the creator of GeoIP® and the first company to offer both IP address data and fraud detection services, we are on a mission to make the Internet a smarter and safer place, with data-driven solutions. Today, over 100,000 businesses worldwide rely on MaxMind data for online fraud prevention, Internet security, content customization, advertising, analytics, digital rights management, compliance, and user experience optimization. We design our solutions to be scalable, flexible, and affordable, ensuring they are accessible to businesses of all sizes. Since the inception of MaxMind in 2002, we have offered a free version of our GeoIP data, and we continue to donate over 60% of profits to charities. 
Website http://www.maxmind.com Industry Technology, Information and Internet Company size 51-200 employees Headquarters Malden, Massachusetts Type Privately Held Founded 2002 Specialties eCommerce risk analysis, reduce manual review, decrease chargebacks, fraud detection, ad targeting, geo-targeting, geo-fencing, digital rights management, content customization, prevent fraudulent online transactions, Chargebacks, geolocation, adaptive fraud prevention technologies, account takeover prevention, traffic analysis, Hosting provider/data center, VPN, Tor, and public proxy detection, and supervised machine learning Locations Primary 51 Pleasant Street #1020 Malden, Massachusetts 02148, US Employees at MaxMind Dave Bliss Deborah Lieou McCall Kevin Phair Kristian Arndt Updates MaxMind 3w How Telesign flags more fraud using IP data: • Enriches transactions with GeoIP Enterprise and Anonymous IP signals • Uses IP risk context to spot suspicious behavior earlier in the flow • Identifies 23% more high-risk transactions without adding user friction We break down how this works in practice in our latest case study with Telesign: https://lnkd.in/gM_RAMk8 #geoip MaxMind 2mo Holiday traffic ramps up soon. We start prepping servers in late October to support you through the season. #GeoIP #minFraud #MaxMind MaxMind 4mo Some MaxMind customers limit the number of inputs sent to our minFraud service, to the detriment of fraud detection accuracy. We analyzed the contribution of various inputs to risk scoring to create a must-have list of minFraud inputs. Read our latest article on the MaxMind blog for a clear, data-forward evaluation of how (and how much) various minFraud inputs contribute to fraud detection accuracy.
These findings are especially helpful if you are required to limit the number of inputs passed to the minFraud service: https://hubs.la/Q03G2jY10 #fraud_detection Fraud detection accuracy: An analysis of data inputs that significantly improve risk scoring blog.maxmind.com MaxMind 6mo Most GeoIP databases are now updated every weekday, Monday through Friday. As a best practice, we recommend automating GeoIP database updates so that you can be sure that you always have the most accurate IP geolocation and IP intelligence data. You can automate database updates with our GeoIP Update program or other methods. You can also download databases directly through the account portal. Learn how to automate database updates: https://hubs.la/Q03xxkj50 #GeoIP #MaxMind #IPgeolocation MaxMind 9mo As a non-profit focused on user privacy, Quad9 DNS needed a non-invasive way to measure the impact of its mission. Here's how they used MaxMind #GeoIP data to accomplish just that. 👉 https://lnkd.in/gE7EP5FM MaxMind 10mo We’ve teamed up with Ateme to deliver a powerful geo-compliance and streaming solution for a Tier-1 APAC PayTV provider. Our collaboration brings MaxMind's IP geolocation and proxy detection data into Ateme’s CDN to enhance streaming performance while ensuring regional compliance. #GeoIP #IPGeolocation #Proxydetection Ateme 10mo 🚀 Exciting News! 🚀 Ateme and MaxMind have partnered to deliver an optimized streaming and compliance solution for a Tier 1 APAC PayTV provider!
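The database-update automation mentioned in the posts above is typically driven by a small GeoIP.conf file read by MaxMind's geoipupdate tool. A minimal sketch follows; the account ID, license key, and edition list are placeholder values, not real credentials:

```ini
# GeoIP.conf -- configuration read by the geoipupdate program
# AccountID and LicenseKey are placeholders; substitute your own.
AccountID 123456
LicenseKey YOUR_LICENSE_KEY
# Space-separated list of database editions to keep up to date
EditionIDs GeoLite2-ASN GeoLite2-City GeoLite2-Country
```

Running geoipupdate from a scheduled job (cron, a systemd timer, or a CI task) then keeps the local .mmdb files current, matching the weekday release cadence described above.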
🎥 With Ateme’s NEA CDN and MaxMind’s GeoIP® data, this solution ensures: ✅ Seamless, high-quality streaming ✅ Geo-compliance & piracy protection ✅ Optimized content delivery 🗣️ “Our CDN cache optimization is a game-changer, minimizing buffering and enhancing the viewing experience.” – Daniel Patton, VP of NEA Products, Ateme. Together, we’re shaping the future of secure, high-quality streaming! 🔗 Learn more: https://lnkd.in/eDixztuj #Streaming #PayTV #CDN #GeoCompliance #Ateme #MaxMind MaxMind 10mo We are excited to introduce GeoIP® Exchange, a centralized platform designed to help network operators and global enterprises proactively manage how their networks are represented online. By providing insights and facilitating secure feedback, GeoIP Exchange enables organizations to address geolocation and security issues before they impact end-users, ensuring a seamless online experience. Learn more 👉 https://lnkd.in/eEb8jPwK #GeoIP #IPgeolocation MaxMind 10mo minFraud® risk score reasons are officially out of beta 👏. What are risk score reasons? Risk score reasons provide specific and understandable reasons for why a risk score is high or low, helping to uncover risk patterns for individual transactions and the overall transaction stream, both at a point in time and over time. What prompted us to introduce this data? We wanted to offer an alternative to the black-box approach of most fraud models, where the data and methodology are concealed or poorly explained, causing serious issues when new fraud patterns emerge and performance drops. Enhance your fraud strategy.
Read on for real-life examples, use case recommendations, and more: https://hubs.la/Q038mSXj0 Enhance your fraud strategy with risk score reasons | MaxMind blog.maxmind.com | 2026-01-13T09:29:15
https://www.linkedin.com/products/ovhgroup-ovh-block-storage/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | OVHcloud Block Storage | LinkedIn OVHcloud Block Storage Block Storage Software by OVHcloud About With Block Storage, you can use secure and flexible storage volumes that are available on demand. Block Storage is based on Ceph, an open-source technology. Similar products Tencent Cloud Block Storage DigitalOcean Block Storage Vultr Vultr Bare Metal DataCore SANsymphony Vultr Block Storage OVHcloud products OVHcloud Anti-DDoS Protection OVHcloud Marketplace OVHcloud Object Storage OVHcloud Zerto DRP | 2026-01-13T09:29:15
https://www.linkedin.com/products/categories/load-balancing-software?trk=organization_guest_products_modal_category_link | Best Load Balancing Software | Products | LinkedIn Used by Information Technology Manager (3) Network Specialist (3) Chief Technology Officer (2) DevOps Engineer (2) Software Engineer (2) Find top products in the Load Balancing Software category Software used to monitor and distribute resources and traffic on websites or applications across server pools. - Provide automation via balancing algorithms that identify fluctuations - Make processing more efficient and optimize infrastructure workloads - Use integrations with failover and backup systems 25 results Cloudflare Load Balancing Load Balancing Software by Cloudflare Overutilized or geographically distant servers add latency and degrade the user experience. Such mistakes can prove costly, resulting in lost customers, missed revenue, and reputational damage. With Cloudflare Load Balancing, you can improve application performance and availability by steering traffic away from unhealthy origin servers and dynamically distributing it to the most available and responsive server pools.
View product Progress Kemp LoadMaster Load Balancing Software by Progress Software Kemp Load Balancers are designed to optimize a wide array of applications with templates and support for: • Web Application servers including SSL and HTTP/2 • Microsoft Applications including Exchange, Skype, RDS, Always-on-VPN • SharePoint and ADFS • Dell EMC ECS • Citrix XenApp/XenDesktop • Oracle E-Business Suite, PeopleSoft and WebLogic • E-Commerce, Healthcare, Education and Government Applications View product Radware Alteon Load Balancing Software by Radware Alteon is Radware’s next-generation application delivery controller (ADC) and the only network load balancer that guarantees application SLA. View product F5 Load Balancing Software by CLM Security, Performance, and Availability. F5 is a global leader in application security and delivery, ensuring protection, scalability, and high performance in any environment: on-premises, cloud, and hybrid. ✅ Security and Protection • Defense against DDoS, bots, and digital fraud. • WAF for protection against web threats. • Zero Trust security for applications and APIs. ✅ Optimization and Availability • Intelligent load balancing. • Acceleration to reduce latency and improve UX. • High availability with automated failover. ✅ Integration with CLM Solutions • Complements Netskope, Proofpoint, and SentinelOne. • Hardened security for hybrid environments. 🔗 CLM and F5: • Strategic protection • Specialized technical support and consulting. • Training and PoCs to validate the technology. 🚀 Strengthen your applications with F5 and CLM! #CyberSecurity #CloudSecurity #AppProtection #F5 #CLMTech View product Fastly Load Balancer Load Balancing Software by Fastly Our Layer 7 Load Balancer lets you define content-aware routing decisions while ensuring instant convergence and failover. Unlike DNS-based solutions, our Load Balancer gives you granular control, immediate scalability, and real-time visibility.
View product ADC Management Load Balancing Software by TCPWave Themis ADC integrates GSLB, SLB, and WAF capabilities to offer a comprehensive solution for intelligent network management. Explore TCPWave's ADC solutions: Our SLB ensures optimal application performance by intelligently distributing traffic across multiple servers. Our GSLB enhances resilience by geographically routing users based on server health. And with our WAF, safeguard web applications by filtering and monitoring HTTP traffic. Together, they provide a seamless, secure application delivery experience. Business Challenges Discovering Challenges with TCPWave: Charting cloud terrains, fortifying against threats, optimizing resources, and illuminating network blind spots. Our Solution Know how TCPWave's ADC acts as a digital compass for intelligent routing, real-time analytics, and robust security in the digital realm. Our Difference Discover TCPWave's ADC solutions: unparalleled innovation, AI-driven tech, dedicated support, and global recognition, all craft View product Kemp Load Balancing Software by Fakhroo Information Technology Services Load Balancers, Exchange Load Balancing, Application Delivery Controllers, Reverse Proxy, SSL Acceleration and Off-loading, AD FS Load Balancing, Azure Load Balancer, AWS Load Balancer, SDN Adaptive, Internet of Things, Skype for Business, Hybrid Cloud, Azure Stack, Multi-Cloud, CMP, AMP, CSP, MSP, and Multicloud. View product NGINX Ingress Controller Load Balancing Software by NGINX News and updates (UnOfficial) Streamline and simplify Kubernetes (north-south) network traffic management, delivering consistent, predictable performance at scale without slowing down your apps.
Production-Grade Kubernetes NGINX Ingress Controller, the most widely used Ingress technology with more than 50 million DockerHub pulls, is synonymous with high-performing, scalable, and secure modern apps in production View product Load Balancer Load Balancing Software by GoCache Deliver high availability to your audience and optimize application performance by distributing traffic efficiently across servers, ensuring a smooth and reliable online experience for your audience. View product Haltdos Link Balancer Load Balancing Software by Haltdos Haltdos Enterprise Link Balancing Solution - High performance WAN link balancer and QoS solution Features: Built-in DDoS Protection DNS Firewall Context-Aware QoS With Haltdos Build Reliable Enterprise Network Improve Network Utilization Optimize WAN Links Eliminate Downtime Improve Productivity Quality of Service Secure DNS View product
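As a neutral illustration of what software in this category does at the server level, here is a minimal sketch in stock NGINX configuration; the upstream name and addresses are invented placeholders, and it is not tied to any vendor listed above:

```nginx
# Distribute incoming HTTP traffic across three backend servers.
upstream app_backend {
    least_conn;                     # prefer the server with fewest active connections
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;    # used only if the primaries are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # forward requests to the pool above
    }
}
```

A server that fails three connection attempts within 30 seconds is temporarily removed from rotation, which is the same steer-traffic-away-from-unhealthy-origins idea the managed products above describe.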
https://opensource.guide/fa/leadership-and-governance/ | Leadership and Governance | Open Source Guides About Contribute Open Source Guides Leadership and Governance Growing open source projects can benefit from formal rules for making decisions. Table of contents Understanding governance for your growing project What are examples of formal roles used in open source projects? How do I formalize these leadership roles? When should I give someone commit access? What are some of the common governance structures for open source projects? Do I need governance docs when I launch my project? What happens if corporate employees start submitting contributions? Understanding governance for your growing project Your project is growing, people are getting involved, and you're committed to keeping this thing going. At this stage, you may be wondering how to incorporate regular project contributors into your workflow, whether that's giving someone commit access or resolving community disputes. If you have questions, we've got answers. What are examples of formal roles used in open source projects? Many projects follow a similar structure for contributor roles and recognition. What these roles actually mean, though, is entirely up to you. Here are a few types of roles you may recognize: Maintainer Contributor Committer In some projects, maintainers are the only people with commit access. In other projects, they're simply the people listed as maintainers in the README. A maintainer doesn't necessarily have to be somebody who writes code for your project. It could be somebody who's done a lot of work promoting the project, or who wrote documentation that made the project more accessible to others.
Regardless of what they do day to day, a maintainer is somebody who feels responsible for the direction of the project and is committed to improving it. A contributor could be anyone: somebody who opens an issue or pull request, people who add value to the project (whether that's triaging issues, writing code, or organizing events), or anybody who gets a pull request merged (the narrowest definition of a contributor). [In the Node.js project,] every person who shows up to comment on an issue or submit code is a member of the project. Merely commenting or submitting code means they have crossed the line from being just a user to being a contributor. — @mikeal, “Healthy Open Source” The term “committer” might be used to distinguish commit access, which is a specific type of responsibility, from other forms of contribution. While you can define roles in your project any way you like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You may know me as the creator of Django… but really I'm the guy who was hired to work on it a year after it was made. (…) People suspect my success comes from programming skill… but at best I'm an average programmer. — @jacobian, “PyCon 2015 Keynote” (video) How do I formalize these leadership roles? Formalizing leadership roles helps people feel ownership and tells other community members who to turn to for help. For smaller projects, designating leaders can be as simple as adding their names to the README or to a CONTRIBUTORS text file. For bigger projects, if you have a website, create a team page or list the project leaders there. For example, Postgres has a comprehensive team page with a short profile for each contributor.
If your project has a very active contributor community, you might form a “core team” of maintainers, or even subcommittees of people who take ownership of different issue areas (for example, security, issue triaging, or community conduct). Rather than assigning tasks to anyone, let people self-organize and volunteer for the roles they're most excited about. [We] supplement the core team with several “subteams”. Each subteam is focused on a specific area, for example, language design or libraries. (…) To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team. — “Rust Governance RFC” Leadership teams may want to create a designated channel (like IRC) or meet regularly to discuss the project (like Gitter or Google Hangout). You can even make those meetings public so other people can listen in. Cucumber-ruby, for example, does just that. Once you've established leadership roles, don't forget to document how people can attain them! Establish a clear process for how somebody can become a maintainer or join a subcommittee in your project, and write it into your GOVERNANCE.md. Tools like Vossibility can help you publicly track who is (or isn't) making contributions to the project. Documenting this information avoids the community perception that maintainers are a clique that makes its decisions privately. Finally, if your project is on GitHub, consider moving it from your personal account to an organization and adding at least one backup administrator. GitHub organizations make it easier to manage permissions across multiple repositories, and protect your project's legacy through shared ownership. When should I give someone commit access? Some people think you should give commit access to everybody who contributes to the project. Doing so encourages more people to feel ownership over your project.
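To make the documentation step described above concrete, here is a minimal sketch of what a GOVERNANCE.md could contain; the role names and thresholds are illustrative choices, not a standard:

```markdown
# Governance

## Roles
- **Contributor**: anyone who opens an issue or pull request, triages,
  writes docs, or organizes community events.
- **Committer**: a contributor who has been granted commit access.
- **Maintainer**: sets the project direction; listed in the README.

## Becoming a maintainer
1. Make sustained contributions over several months.
2. Be nominated by an existing maintainer.
3. Be confirmed by consensus of the current maintainers.

## Decision making
Routine changes are merged by any committer. Substantial changes are
discussed publicly in a tracking issue and decided by maintainer consensus.
```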
On the other hand, especially for bigger and more complicated projects, you may want to give commit access only to people who have demonstrated and proven their commitment. There's no one right way to do this; do whatever you're most comfortable with! If your project is on GitHub, you can use protected branches to manage who can act on a particular branch, and under which conditions. Whenever somebody sends you a pull request, give them commit access to the project. While it may sound incredibly stupid at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patches will go unmerged, causing them a lot of extra work. — @felixge, “The Pull Request Hack” What are some of the common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL stands for “Benevolent Dictator for Life”. Under this structure, one person (usually the project's initial author) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers. A project that originated at a company might also fall into the BDFL category. Meritocracy: (Note that the term “meritocracy” carries negative connotations for some communities, and the structure has a complex social and political history.) Under a meritocracy, active project contributors (those who demonstrate “merit”) are given a formal decision-making role. Decisions are usually made based on voting consensus. The meritocracy concept was pioneered by the Apache Foundation; all Apache projects are meritocracies. Contributions can only be made by individuals representing themselves, not by a company.
Liberal contribution: Under a liberal contribution model, the people who do the most work are recognized as the most influential, but this is based on the work they are doing now, not on past contributions. Major project decisions are made through a community consensus-seeking process (discussing major concerns) rather than pure voting, striving to include as many community perspectives as possible. Popular examples of projects that use a liberal contribution model include Node.js and Rust. Which model should you use? It's up to you! Every model has advantages and drawbacks. And although they may seem quite different at first, all three models have more in common than they appear to: BDFL model template Meritocracy model template Node.js’s liberal contribution policy Do I need governance docs when I launch my project? There is no exact right time to write down your project's governance documents, but it's much easier to define them once you've seen your community's dynamics play out. The best (and hardest) part of open source governance is that it is shaped by the community itself! Some early documentation will inevitably contribute to your project's governance, so start writing down what you can. For example, you can define clear expectations for behavior, or how your contributor process works, even at your project's launch. If you're part of a company launching an open source project, it's worth having internal conversations before launch about how your company expects to maintain the project and make decisions about its future. You may also want to publicly explain how your company will (or won't) be involved with the project. We assemble small teams to manage the projects on GitHub that are also worked on at Facebook. For example, React is managed by a React engineer.
— @caabernathy, “An inside look at open source at Facebook” What happens if corporate employees start submitting contributions? Successful open source projects get used by many people and companies, and eventually some companies may start building revenue streams around the project. For example, a company might use the project's code as a component of a commercial product or service. As the project becomes more widely used, demand grows for people with expertise in it; you may be one of them! Sometimes you may even be paid for the work you do on the project. It's important to treat commercial activity as normal, just like any other development activity in the project. Paid developers shouldn't receive special treatment over those who contribute without pay; each contribution should be evaluated on its technical merits. However, people engaging in commercial activity should feel comfortable doing so, and should feel comfortable stating their use cases and prior work when arguing for a particular enhancement or feature. “Commercial” is fully compatible with “open source”. “Commercial” just means there is money involved somewhere: the software is used in commerce. (When open source software is used as a component of a non-open-source product, the resulting product is still proprietary software, though, like open source, it may be used for commercial or non-commercial purposes.) Like everyone else, commercially motivated developers gain influence in the project through the quality and quantity of their contributions. Obviously, a developer who is paid for their work can probably get more done than one who isn't; but that's fine, since payment is just one of many possible factors affecting how much work somebody does.
Keep discussions focused on the contributions themselves, not on factors outside the project. Will I need a legal entity to support my project? You won't need a legal entity to support your open source project unless you're handling money. For example, if you want to start a business, you'll want to set up a C Corp or an LLC (if you're based in the US). If you're just doing contract work related to your open source project, you can accept money as a sole proprietor, or set up an LLC (if you're based in the US). If you want to accept donations for your open source project, you can set up a donation button (using PayPal or Stripe, for example), but the money won't be tax-deductible unless you qualify as a nonprofit organization (if you're based in the US). Many projects don't wish to deal with the hassle of setting up a nonprofit, so they find a fiscal sponsor instead. A fiscal sponsor accepts donations on your behalf, usually in exchange for a percentage of the donation. Software Freedom Conservancy, Apache Foundation, Eclipse Foundation, Linux Foundation, and Open Collective are examples of organizations that serve as fiscal sponsors for open source projects. Our goal is to provide the infrastructure that communities can use to sustain themselves, therefore creating an environment where everyone (contributors, backers, sponsors) gets something back. — @piamancini, “Moving beyond the charity framework” If your project is closely tied to a particular language or ecosystem, there may be a related software foundation you can work with. For example, the Python Software Foundation helps support PyPI, the Python package manager, and the Node.js Foundation helps support Express.js, which is built on Node.
Back to the main guide

Related guides:

- Best Practices for Maintainers: Making your life easier as an open source maintainer, from documenting processes to leveraging your community.
- Open Source Metrics: Make informed decisions to help your open source project thrive by measuring and tracking its success.
https://rust-lang.github.io/rfcs/print.html | The Rust RFC Book

# Rust RFCs - RFC Book - Active RFC List

The "RFC" (request for comments) process is intended to provide a consistent and controlled path for changes to Rust (such as new features) so that all stakeholders can be confident about the direction of the project.

Many changes, including bug fixes and documentation improvements, can be implemented and reviewed via the normal GitHub pull request workflow. Some changes, though, are "substantial", and we ask that these be put through a bit of a design process and produce a consensus among the Rust community and the sub-teams.

## Table of Contents

- When you need to follow this process
- Sub-team specific guidelines
- Before creating an RFC
- What the process is
- The RFC life-cycle
- Reviewing RFCs
- Implementing an RFC
- RFC Postponement
- Help this is all too informal!
- License
- Contributions

## When you need to follow this process

You need to follow this process if you intend to make "substantial" changes to Rust, Cargo, Crates.io, or the RFC process itself. What constitutes a "substantial" change is evolving based on community norms and varies depending on what part of the ecosystem you are proposing to change, but may include the following:

- Any semantic or syntactic change to the language that is not a bugfix.
- Removing language features, including those that are feature-gated.
- Large additions to std.

Some changes do not require an RFC:

- Rephrasing, reorganizing, refactoring, or otherwise "changing shape does not change meaning".
- Additions that strictly improve objective, numerical quality criteria (warning removal, speedup, better platform coverage, more parallelism, trap more errors, etc.)
- Additions only likely to be noticed by other developers-of-rust, invisible to users-of-rust.
- Minor additions to std: these only require an ACP.

If you submit a pull request to implement a new feature without going through the RFC process, it may be closed with a polite request to submit an RFC first.

## Sub-team specific guidelines

For more details on when an RFC is required for the following areas, please see the Rust community's sub-team specific guidelines for: language changes, library changes, compiler changes.

## Before creating an RFC

A hastily-proposed RFC can hurt its chances of acceptance. Low-quality proposals, proposals for previously-rejected features, or those that don't fit into the near-term roadmap may be quickly rejected, which can be demotivating for the unprepared contributor. Laying some groundwork ahead of the RFC can make the process smoother.

Although there is no single way to prepare for submitting an RFC, it is generally a good idea to pursue feedback from other project developers beforehand, to ascertain that the RFC may be desirable; having a consistent impact on the project requires concerted effort toward consensus-building.

The most common preparations for writing and submitting an RFC include talking the idea over on our official Zulip server, discussing the topic on our developer discussion forum, and occasionally posting "pre-RFCs" on the developer forum. You may file issues on this repo for discussion, but these are not actively looked at by the teams.

As a rule of thumb, receiving encouraging feedback from long-standing project developers, and particularly members of the relevant sub-team, is a good indication that the RFC is worth pursuing.

## What the process is

In short, to get a major feature added to Rust, one must first get the RFC merged into the RFC repository as a markdown file. At that point the RFC is "active" and may be implemented with the goal of eventual inclusion into Rust.
- Fork the RFC repository.
- Copy 0000-template.md to text/0000-my-feature.md (where "my-feature" is descriptive). Don't assign an RFC number yet; this is going to be the PR number, and we'll rename the file accordingly if the RFC is accepted.
- Fill in the RFC. Put care into the details: RFCs that do not present convincing motivation, demonstrate lack of understanding of the design's impact, or are disingenuous about the drawbacks or alternatives tend to be poorly received.
- Submit a pull request. As a pull request, the RFC will receive design feedback from the larger community, and the author should be prepared to revise it in response.
- Now that your RFC has an open pull request, use the issue number of the PR to rename the file: update your 0000- prefix to that number. Also update the "RFC PR" link at the top of the file.
- Each pull request will be labeled with the most relevant sub-team, which will lead to its being triaged by that team in a future meeting and assigned to a member of the sub-team.
- Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don't receive any comments. Feel free to reach out to the RFC assignee in particular to get help identifying stakeholders and obstacles.
- The sub-team will discuss the RFC pull request, as much as possible in the comment thread of the pull request itself. Offline discussion will be summarized on the pull request comment thread.
- RFCs rarely go through this process unchanged, especially as alternatives and drawbacks are shown. You can make edits, big and small, to the RFC to clarify or change the design, but make changes as new commits to the pull request, and leave a comment on the pull request explaining your changes. Specifically, do not squash or rebase commits after they are visible on the pull request.
- At some point, a member of the sub-team will propose a "motion for final comment period" (FCP), along with a disposition for the RFC (merge, close, or postpone). This step is taken when enough of the tradeoffs have been discussed that the sub-team is in a position to make a decision. That does not require consensus amongst all participants in the RFC thread (which is usually impossible). However, the argument supporting the disposition on the RFC needs to have already been clearly articulated, and there should not be a strong consensus against that position outside of the sub-team. Sub-team members use their best judgment in taking this step, and the FCP itself ensures there is ample time and notification for stakeholders to push back if it is made prematurely. For RFCs with lengthy discussion, the motion to FCP is usually preceded by a summary comment trying to lay out the current state of the discussion and major tradeoffs/points of disagreement. Before actually entering FCP, all members of the sub-team must sign off; this is often the point at which many sub-team members first review the RFC in full depth.
- The FCP lasts ten calendar days, so that it is open for at least 5 business days. It is also advertised widely, e.g. in This Week in Rust. This way all stakeholders have a chance to lodge any final objections before a decision is reached.
- In most cases, the FCP period is quiet, and the RFC is either merged or closed. However, sometimes substantial new arguments or ideas are raised, the FCP is canceled, and the RFC goes back into development mode.

## The RFC life-cycle

Once an RFC becomes "active", authors may implement it and submit the feature as a pull request to the Rust repo. Being "active" is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it does mean that in principle all the major stakeholders have agreed to the feature and are amenable to merging it.
Furthermore, the fact that a given RFC has been accepted and is "active" implies nothing about what priority is assigned to its implementation, nor does it imply anything about whether a Rust developer has been assigned the task of implementing the feature. While it is not necessary that the author of the RFC also write the implementation, it is by far the most effective way to see an RFC through to completion: authors should not expect that other project developers will take on responsibility for implementing their accepted feature.

Modifications to "active" RFCs can be done in follow-up pull requests. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually reflect what the end result will be at the time of the next major release.

In general, once accepted, RFCs should not be substantially changed. Only very minor changes should be submitted as amendments. More substantial changes should be new RFCs, with a note added to the original RFC. Exactly what counts as a "very minor change" is up to the sub-team to decide; check Sub-team specific guidelines for more details.

## Reviewing RFCs

While the RFC pull request is up, the sub-team may schedule meetings with the author and/or relevant stakeholders to discuss the issues in greater detail, and in some cases the topic may be discussed at a sub-team meeting. In either case a summary from the meeting will be posted back to the RFC pull request.

A sub-team makes final decisions about RFCs after the benefits and drawbacks are well understood. These decisions can be made at any time, but the sub-team will regularly issue decisions. When a decision is made, the RFC pull request will either be merged or closed. In either case, if the reasoning is not clear from the discussion in thread, the sub-team will add a comment describing the rationale for the decision.
## Implementing an RFC

Some accepted RFCs represent vital features that need to be implemented right away. Other accepted RFCs can represent features that can wait until some arbitrary developer feels like doing the work. Every accepted RFC has an associated issue tracking its implementation in the Rust repository; thus that associated issue can be assigned a priority via the triage process that the team uses for all issues in the Rust repository.

The author of an RFC is not obligated to implement it. Of course, the RFC author (like any other developer) is welcome to post an implementation for review after the RFC has been accepted.

If you are interested in working on the implementation for an "active" RFC, but cannot determine if someone else is already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).

## RFC Postponement

Some RFC pull requests are tagged with the "postponed" label when they are closed (as part of the rejection process). An RFC closed with "postponed" is marked as such because we want neither to think about evaluating the proposal nor about implementing the described feature until some time in the future, and we believe that we can afford to wait until then to do so. Historically, "postponed" was used to postpone features until after 1.0. Postponed pull requests may be re-opened when the time is right. We don't have any formal process for that; you should ask members of the relevant sub-team.

Usually an RFC pull request marked as "postponed" has already passed an informal first round of evaluation, namely the round of "do we think we would ever possibly consider making this change, as outlined in the RFC pull request, or some semi-obvious variation of it." (When the answer to the latter question is "no", then the appropriate response is to close the RFC, not postpone it.)

## Help this is all too informal!

The process is intended to be as lightweight as reasonable for the present circumstances.
As usual, we are trying to let the process be driven by consensus and community norms, not impose more structure than necessary.

## License

This repository is currently in the process of being licensed under either of:

- Apache License, Version 2.0 (LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)

at your option. Some parts of the repository are already licensed according to those terms. For more see RFC 2044 and its tracking issue.

## Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

# RFC policy - the compiler

Compiler RFCs will be managed by the compiler sub-team, and tagged T-compiler. The compiler sub-team will do an initial triage of new PRs within a week of submission. The result of triage will either be that the PR is assigned to a member of the sub-team for shepherding, the PR is closed because the sub-team believes it should be done without an RFC, or the PR is closed because the sub-team feels it should clearly not be done and further discussion is not necessary. We'll follow the standard procedure for shepherding, final comment period, etc.

Most compiler decisions that go beyond the scope of a simple PR are made using MCPs, not RFCs. It is therefore likely that you should file an MCP instead of an RFC for your problem.

## Changes which need an RFC

- Significant user-facing changes to the compiler with a complex design space, especially if they involve other teams as well (for example, path sanitization).
- Any other change which causes significant backwards incompatible changes to stable behaviour of the compiler, language, or libraries.

## Changes which don't need an RFC

- Bug fixes, improved error messages, etc.
- Minor refactoring/tidying up
- Large internal refactorings or redesigns of the compiler (needs an MCP)
- Implementing language features which have an accepted RFC
- New lints (these fall under the lang team). Lints are best first tried out in clippy and then uplifted later.
- Changing the API presented to syntax extensions or other compiler plugins in non-trivial ways
- Adding, removing, or changing a stable compiler flag (needs an FCP somewhere, like on an MCP or just on a PR)
- Adding unstable API for tools (note that all compiler API is currently unstable)
- Adding, removing, or changing an unstable compiler flag (if the compiler flag is widely used there should be at least some discussion on discuss, or an RFC in some cases)

If in doubt, it is probably best to just announce the change you want to make to the compiler sub-team on Zulip, and see if anyone feels it needs an RFC.

# RFC policy - language design

Pretty much every change to the language needs an RFC. Note that new lints (or major changes to an existing lint) are considered changes to the language.

Language RFCs are managed by the language sub-team, and tagged T-lang. The language sub-team will do an initial triage of new PRs within a week of submission. The result of triage will either be that the PR is assigned to a member of the sub-team for shepherding, the PR is closed as postponed because the sub-team believes it might be a good idea but is not currently aligned with Rust's priorities, or the PR is closed because the sub-team feels it should clearly not be done and further discussion is not necessary. In the latter two cases, the sub-team will give a detailed explanation. We'll follow the standard procedure for shepherding, final comment period, etc.

## Amendments

Sometimes in the implementation of an RFC, changes are required. In general these don't require an RFC as long as they are very minor and in the spirit of the accepted RFC (essentially bug fixes).
In this case implementers should submit an RFC PR which amends the accepted RFC with the new details. Although the RFC repository is not intended as a reference manual, it is preferred that RFCs do reflect what was actually implemented. Amendment RFCs will go through the same process as regular RFCs, but should be less controversial and thus should move more quickly.

When a change is more dramatic, it is better to create a new RFC. The RFC should be standalone and reference the original, rather than modifying the existing RFC. You should add a comment to the original RFC referencing the new RFC as part of the PR.

Obviously there is some scope for judgment here. As a guideline, if a change affects more than one part of the RFC (i.e., is a non-local change), affects the applicability of the RFC to its motivating use cases, or there are multiple possible new solutions, then the feature is probably not "minor" and should get a new RFC.

# RFC guidelines - libraries sub-team

## Motivation

RFCs are heavyweight:

- RFCs generally take at minimum 2 weeks from posting to land. In practice it can be more on the order of months for particularly controversial changes.
- RFCs are a lot of effort to write, especially for non-native speakers or for members of the community whose strengths are more technical than literary. RFCs may involve pre-RFCs and several rewrites to accommodate feedback.
- RFCs require a dedicated shepherd to herd the community and author towards consensus.
- RFCs require review from a majority of the sub-team, as well as an official vote.
- RFCs can't be downgraded based on their complexity. The full process always applies. Easy RFCs may certainly land faster, though.
- RFCs can be very abstract and hard to grok the consequences of (no implementation).

PRs are low overhead but potentially expensive nonetheless:

- Easy PRs can get insta-merged by any rust-lang contributor.
- Harder PRs can be easily escalated. You can ping subject-matter experts for second opinions.
- You can ping the whole team!
- It is easier to grok the full consequences of a concrete change.
- There are lots of tests, and Crater, to save the day.
- PRs can be accepted optimistically with bors, buildbot, and the trains to guard us from major mistakes making it into stable. The size of the nightly community at this point in time can still mean major community breakage regardless of trains, however.

HOWEVER: big PRs can be a lot of work to make, only to have that work rejected for details that could have been hashed out first.

RFCs are only meaningful if a significant and diverse portion of the community actively participates in them. The official teams are not sufficiently diverse to establish meaningful community consensus by agreeing amongst themselves. If there are tons of RFCs, especially trivial ones, people are less likely to engage with them. Official team members are super busy. Domain experts and industry professionals are super busy and have no responsibility to engage in RFCs. Since these are exactly the most important people to get involved in the RFC process, it is important that we be maximally friendly towards their needs.

## Is an RFC required?

The overarching philosophy is: do whatever is easiest. If an RFC would be less work than an implementation, that's a good sign that an RFC is necessary. That said, if you anticipate controversy, you might want to short-circuit straight to an RFC. For instance, new APIs almost certainly merit an RFC, especially as std has become more conservative in favour of the much more agile cargoverse.

Submit a PR if the change is a:

- Bugfix
- Docfix
- Obvious API hole patch, such as adding an API from one type to a symmetric type. e.g. Vec<T> -> Box<[T]> clearly motivates adding String -> Box<str>
- Minor tweak to an unstable API (renaming, generalizing)
- Implementing an "obvious" trait like Clone/Debug/etc

Submit an RFC if the change is a:

- New API
- Semantic change to a stable API
- Generalization of a stable API (e.g.
how we added Pattern or Borrow)
- Deprecation of a stable API
- Nontrivial trait impl (because all trait impls are insta-stable)

Do the easier thing if uncertain. (Choosing a path is not final.)

## Non-RFC process

A (non-RFC) PR is likely to be closed if clearly not acceptable:

- Disproportionate breaking change (small inference breakage may be acceptable)
- Unsound
- Doesn't fit our general design philosophy around the problem
- Better as a crate
- Too marginal for std
- Significant implementation problems

A PR may also be closed because an RFC is appropriate.

A (non-RFC) PR may be merged as unstable. In this case, the feature should have a fresh feature gate and an associated tracking issue for stabilisation. Note that trait impls and docs are insta-stable and thus have no tracking issue. This may imply requiring a higher level of scrutiny for such changes.

However, an accepted RFC is not a rubber-stamp for merging an implementation PR. Nor must an implementation PR perfectly match the RFC text. Implementation details may merit deviations, though obviously they should be justified. The RFC may be amended if deviations are substantial, but this is not generally necessary. RFCs should favour immutability. The RFC + Issue + PR should form a total explanation of the current implementation.

Once something has been merged as unstable, a shepherd should be assigned to promote and obtain feedback on the design.

Every time a release cycle ends, the libs team assesses the current unstable APIs and selects some number of them for potential stabilization during the next cycle. These are announced for FCP at the beginning of the cycle, and (possibly) stabilized just before the beta is cut.
After the final comment period, an API should ideally take one of the following paths:

- Stabilize if the change is desired, and consensus is reached
- Deprecate if the change is undesired, and consensus is reached
- Extend the FCP if the change cannot meet consensus

If consensus still can't be reached, consider requiring a new RFC or just deprecating the API as "too controversial for std".

If any problems are found with a newly stabilized API during its beta period, strongly favour reverting stability in order to prevent stabilizing a bad API. Due to the speed of the trains, this is not a serious delay (~2-3 months if it's not a major problem).

# 0001-private-fields

- Start Date: 2014-03-11
- RFC PR: rust-lang/rfcs#1
- Rust Issue: rust-lang/rust#8122

## Summary

This is an RFC to make all struct fields private by default. This includes both tuple structs and structural structs.

## Motivation

### Reasons for default private visibility

- Visibility is often how soundness is achieved for many types in Rust. These types are normally wrapping unsafe behavior of an FFI type or some other Rust-specific behavior under the hood (such as the standard Vec type). Requiring these types to opt in to being sound is unfortunate.
- Forcing tuple struct fields to have non-overridable public visibility greatly reduces the utility of such types. Tuple structs cannot be used to create abstraction barriers as they can always be easily destructed.
- Private-by-default is more consistent with the rest of the Rust language. All other aspects of privacy are private-by-default except for enum variants. Enum variants, however, are a special case in that they are inserted into the parent namespace, and hence naturally inherit privacy.
- Public fields of a struct must be considered as part of the API of the type. This means that the exact definition of all structs is by default the API of the type. Structs must opt out of this behavior if the priv keyword is required.
- By requiring the pub keyword, structs must opt in to exposing more surface area to their API.

### Reasons for inherited visibility (today's design)

- Public definitions like `pub struct Point { x: int, y: int }` are concise and easy to read.
- Private definitions certainly want private fields (to hide implementation details).

## Detailed design

Currently, rustc has two policies for dealing with the privacy of struct fields:

- Tuple structs have public fields by default (including "newtype structs").
- Fields of structural structs (`struct Foo { ... }`) inherit the same privacy as the enclosing struct.

This RFC is a proposal to unify the privacy of struct fields with the rest of the language by making them private by default. This means that both tuple variants and structural variants of structs would have private fields by default. For example, the program below is accepted today, but would be rejected with this RFC.

```rust
mod inner {
    pub struct Foo(u64);
    pub struct Bar { field: u64 }
}

fn main() {
    inner::Foo(10);
    inner::Bar { field: 10 };
}
```

### Refinements to structural structs

Public fields are quite a useful feature of the language, so syntax is required to opt out of the private-by-default semantics. Structural structs already allow visibility qualifiers on fields, and the pub qualifier would make the field public instead of private.

Additionally, the priv visibility will no longer be allowed to modify struct fields. Similarly to how a `priv fn` is a compiler error, a priv field will become a compiler error.

### Refinements on tuple structs

As with their structural cousins, it's useful to have tuple structs with public fields.
This RFC will modify the tuple struct grammar to:

```
tuple_struct := 'struct' ident '(' fields ')' ';'
fields := field | field ',' fields
field := type | visibility type
```

For example, these definitions will be added to the language:

```rust
// a "newtype wrapper" struct with a private field
struct Foo(u64);

// a "newtype wrapper" struct with a public field
struct Bar(pub u64);

// a tuple struct with many fields, only the first and last of which are public
struct Baz(pub u64, u32, f32, pub int);
```

Public fields on tuple structs will maintain the semantics that they currently have today. The structs can be constructed, destructed, and participate in pattern matches.

Private fields on tuple structs will prevent the following behaviors:

- Private fields cannot be bound in patterns (both in irrefutable and refutable contexts, i.e. `let` and `match` statements).
- Private fields cannot be specified outside of the defining module when constructing a tuple struct.

These semantics are intended to closely mirror the behavior of private fields for structural structs.

### Statistics gathered

A brief survey was performed over the entire mozilla/rust repository to gather these statistics. While not representative of all projects, this repository should give a good indication of what most structs look like in the real world. The repository has both libraries (libstd) as well as libraries which don't care much about privacy (librustc).

These numbers tally up all structs from all locations, and only take into account structural structs, not tuple structs.

|                       | Inherited privacy | Private-by-default |
|-----------------------|-------------------|--------------------|
| Private fields        | 1418              | 1852               |
| Public fields         | 2036              | 1602               |
| All-private structs   | 551 (52.23%)      | 671 (63.60%)       |
| All-public structs    | 468 (44.36%)      | 352 (33.36%)       |
| Mixed privacy structs | 36 (3.41%)        | 32 (3.03%)         |

The numbers clearly show that the predominant pattern is to have all-private structs, and that there are many public fields today which can be private (and perhaps should be!).
Additionally, there are on the order of 1418 instances of the word priv today, when in theory there should be around 1852. With this RFC, there would need to be 1602 instances of the word pub. A very large portion of structs requiring pub fields are FFI structs defined in the libc module.

### Impact on enums

This RFC does not impact enum variants in any way. All enum variants will continue to inherit privacy from the outer enum type. This includes both the fields of tuple variants as well as fields of struct variants in enums.

## Alternatives

The main alternative to this design is what is currently implemented today, where fields inherit the privacy of the outer structure. The pros and cons of this strategy are discussed above.

## Unresolved questions

As the above statistics show, almost all structures are either all public or all private. This RFC provides an easy method to make struct fields all private, but it explicitly does not provide a method to make struct fields all public. The statistics show that pub will be written less often than priv is today, and it's always possible to add a method to specify a struct as all-public in the future in a backwards-compatible fashion. That being said, it's an open question whether syntax for an "all public struct" is necessary at this time.

# 0002-rfc-process

- Start Date: 2014-03-11
- RFC PR: rust-lang/rfcs#2, rust-lang/rfcs#6
- Rust Issue: N/A

## Summary

The "RFC" (request for comments) process is intended to provide a consistent and controlled path for new features to enter the language and standard libraries, so that all stakeholders can be confident about the direction the language is evolving in.

## Motivation

The freewheeling way that we add new features to Rust has been good for early development, but for Rust to become a mature platform we need to develop some more self-discipline when it comes to changing the system.
This is a proposal for a more principled RFC process to make it a more integral part of the overall development process, and one that is followed consistently to introduce features to Rust.

## Detailed design

Many changes, including bug fixes and documentation improvements, can be implemented and reviewed via the normal GitHub pull request workflow. Some changes, though, are "substantial", and we ask that these be put through a bit of a design process and produce a consensus among the Rust community and the core team.

### When you need to follow this process

You need to follow this process if you intend to make "substantial" changes to the Rust distribution. What constitutes a "substantial" change is evolving based on community norms, but may include the following:

- Any semantic or syntactic change to the language that is not a bugfix.
- Removing language features, including those that are feature-gated.
- Changes to the interface between the compiler and libraries, including lang items and intrinsics.
- Additions to std.

Some changes do not require an RFC:

- Rephrasing, reorganizing, refactoring, or otherwise "changing shape does not change meaning".
- Additions that strictly improve objective, numerical quality criteria (warning removal, speedup, better platform coverage, more parallelism, trap more errors, etc.)
- Additions only likely to be noticed by other developers-of-rust, invisible to users-of-rust.

If you submit a pull request to implement a new feature without going through the RFC process, it may be closed with a polite request to submit an RFC first.

### What the process is

In short, to get a major feature added to Rust, one must first get the RFC merged into the RFC repo as a markdown file. At that point the RFC is "active" and may be implemented with the goal of eventual inclusion into Rust.

- Fork the RFC repo https://github.com/rust-lang/rfcs
- Copy 0000-template.md to text/0000-my-feature.md (where "my-feature" is descriptive; don't assign an RFC number yet).
- Fill in the RFC.
- Submit a pull request. The pull request is the time to get review of the design from the larger community.
- Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don't receive any comments.

Eventually, somebody on the core team will either accept the RFC by merging the pull request, at which point the RFC is "active", or reject it by closing the pull request.

Whoever merges the RFC should do the following:

- Assign an id, using the PR number of the RFC pull request. (If the RFC has multiple pull requests associated with it, choose one PR number, preferably the minimal one.)
- Add the file in the text/ directory.
- Create a corresponding issue on the Rust repo.
- Fill in the remaining metadata in the RFC header, including links for the original pull request(s) and the newly created Rust issue.
- Add an entry in the Active RFC List of the root README.md.
- Commit everything.

Once an RFC becomes active, authors may implement it and submit the feature as a pull request to the Rust repo. An "active" RFC is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it does mean that in principle all the major stakeholders have agreed to the feature and are amenable to merging it.

Modifications to active RFCs can be done in follow-up PRs. An RFC that makes it through the entire process to implementation is considered "complete" and is removed from the Active RFC List; an RFC that fails after becoming active is "inactive" and moves to the "inactive" folder.

## Alternatives

Retain the current informal RFC process.
The newly proposed RFC process is designed to improve over the informal process in the following ways:

- Discourage unactionable or vague RFCs
- Ensure that all serious RFCs are considered equally
- Give confidence to those with a stake in Rust's development that they understand why new features are being merged

As an alternative, we could adopt an even stricter RFC process than the one proposed here. If desired, we should likely look to Python's PEP process for inspiration.

## Unresolved questions

- Does this RFC strike a favorable balance between formality and agility?
- Does this RFC successfully address the aforementioned issues with the current informal RFC process?
- Should we retain rejected RFCs in the archive?

# 0003-attribute-usage

- Start Date: 2014-03-20
- RFC PR: rust-lang/rfcs#3
- Rust Issue: rust-lang/rust#14373

## Summary

Rust currently has an attribute usage lint, but it does not work particularly well. This RFC proposes a new implementation strategy that should make it significantly more useful.

## Motivation

The current implementation has two major issues:

1. There are very limited warnings for valid attributes that end up in the wrong place. Something like this will be silently ignored:

```rust
#[deriving(Clone)]; // Shouldn't have put a ; here
struct Foo;

#[ignore(attribute-usage)] // Should have used #[allow(attribute-usage)] instead!
mod bar {
    //...
}
```

2. `ItemDecorator`s can now be defined outside of the compiler, and there's no way to tag them and associated attributes as valid. Something like this requires an `#[allow(attribute-usage)]`:

```rust
#[feature(phase)];

#[phase(syntax, link)]
extern crate some_orm;

#[ormify]
pub struct Foo {
    #[column(foo_)]
    #[primary_key]
    foo: int
}
```

## Detailed design

The current implementation is a simple fold over the AST, comparing attributes against a whitelist. Crate-level attributes use a separate whitelist, but no other distinctions are made.
This RFC would change the implementation to actually track which attributes are used during the compilation process. `syntax::ast::Attribute_` would be modified to add an ID field:

```rust
pub struct AttrId(uint);

pub struct Attribute_ {
    id: AttrId,
    style: AttrStyle,
    value: @MetaItem,
    is_sugared_doc: bool,
}
```

`syntax::ast::parse::ParseSess` will generate new `AttrId`s on demand. I believe that attributes will only be created during parsing and expansion, and the `ParseSess` is accessible in both.

The `AttrId`s will be used to create a side table of used attributes. This will most likely be a thread local to make it easily accessible during all stages of compilation by calling a function in `syntax::attr`:

```rust
fn mark_used(attr: &Attribute) { }
```

The attribute-usage lint would run at the end of compilation and warn on all attributes whose ID does not appear in the side table.

One interesting edge case is attributes like `doc` that are used, but not in the normal compilation process. There could either be a separate fold pass to mark all `doc` attributes as used, or `doc` could simply be whitelisted in the attribute-usage lint.

Attributes in code that has been eliminated with `#[cfg()]` will not be linted, but I feel that this is consistent with the way `#[cfg()]` works in general (e.g. the code won't be type-checked either).

## Alternatives

An alternative would be to rewrite `rustc::middle::lint` to robustly check that attributes are used where they're supposed to be. This would be fairly complex and prone to failure if/when more nodes are added to the AST. This also doesn't solve motivation #2, which would require externally loaded lint support.

## Unresolved questions

This implementation doesn't allow for a distinction between "unused" and "unknown" attributes. The `#[phase(syntax)]` crate loading infrastructure could be extended to pull a list of attributes from crates to use in the lint pass, but I'm not sure if the extra complexity is worth it.
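The side-table design above can be sketched in modern, standalone Rust. This is a simplified illustration, not rustc's actual code: `AttrId`, `mark_used`, and `is_used` are hypothetical stand-ins, and `mark_used` takes an `AttrId` directly rather than an `&Attribute`.

```rust
// Sketch of a thread-local side table of used attribute IDs.
use std::cell::RefCell;
use std::collections::HashSet;

#[derive(Clone, Copy)]
struct AttrId(usize);

thread_local! {
    static USED_ATTRS: RefCell<HashSet<usize>> = RefCell::new(HashSet::new());
}

// Record that an attribute was consumed during compilation.
fn mark_used(attr: AttrId) {
    USED_ATTRS.with(|t| {
        t.borrow_mut().insert(attr.0);
    });
}

// The lint pass would query this at the end of compilation and
// warn on every attribute whose ID was never marked.
fn is_used(attr: AttrId) -> bool {
    USED_ATTRS.with(|t| t.borrow().contains(&attr.0))
}

fn main() {
    let a = AttrId(0);
    let b = AttrId(1);
    mark_used(a);
    println!("a used: {}, b used: {}", is_used(a), is_used(b));
}
```

Because each thread owns its own table, this structure also matches the note below about parallel compilation: per-thread tables can be merged before the lint pass runs.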
The side table could be threaded through all of the compilation stages that need to use it instead of being a thread local. This would probably require significantly more work than the thread-local approach, however. The thread-local approach should not negatively impact any future parallelization work, as each thread can keep its own side table, which can be merged into one for the lint pass.

# 0008-new-intrinsics

- Start Date: 2014-03-14
- RFC PR: rust-lang/rfcs#8
- Rust Issue:

Note: this RFC was never implemented and has been retired. The design may still be useful in the future, but before implementing we would prefer to revisit it so as to be sure it is up to date.

## Summary

The way our intrinsics work forces them to be wrapped in order to behave like normal functions. As a result, rustc is forced to inline a great number of tiny intrinsic wrappers, which is bad for both compile-time performance and runtime performance without optimizations. This proposal changes the way intrinsics are surfaced in the language so that they behave the same as normal Rust functions by removing the "rust-intrinsic" foreign ABI and reusing the "Rust" ABI.

## Motivation

A number of commonly-used intrinsics, including `transmute`, `forget`, `init`, `uninit`, and `move_val_init`, are accessed through wrappers whose only purpose is to present the intrinsics as normal functions. As a result, rustc is forced to inline a great number of tiny intrinsic wrappers, which is bad for both compile-time performance and runtime performance without optimizations.

Intrinsics have a differently-named ABI from Rust functions ("rust-intrinsic" vs. "Rust") though the actual ABI implementation is identical. As a result one can't take the value of an intrinsic as a function:

```rust
// error: the type of transmute is `extern "rust-intrinsic" fn ...`
let transmute: fn(int) -> uint = intrinsics::transmute;
```

This incongruity means that we can't just expose the intrinsics directly as part of the public API.
## Detailed design

`extern "Rust" fn` is already equivalent to `fn`, so if intrinsics have the "Rust" ABI then the problem is solved. Under this scheme intrinsics will be declared as `extern "Rust"` functions and identified as intrinsics with the `#[lang = "..."]` attribute:

```rust
extern "Rust" {
    #[lang = "transmute"]
    fn transmute<T, U>(T) -> U;
}
```

The compiler will type check and translate intrinsics the same as today. Additionally, when trans sees a "Rust" extern tagged as an intrinsic, it will not emit a function declaration to LLVM bitcode.

Because intrinsics will be lang items, they can no longer be redeclared an arbitrary number of times. This will require a small amount of existing library code to be refactored, and all intrinsics to be exposed through public abstractions.

Currently, "Rust" foreign functions may not be generic; this change will require a special case that allows intrinsics to be generic.

## Alternatives

- Instead of making intrinsics lang items, we could create a slightly different mechanism, like an `#[intrinsic]` attribute, that would continue letting intrinsics be redeclared.
- While using lang items to identify intrinsics, intrinsic lang items could be allowed to be redeclared.
- We could also make "rust-intrinsic" coerce to, or otherwise be the same as, "Rust" externs and normal Rust functions.

## Unresolved questions

None.

# 0016-more-attributes

- Start Date: 2014-03-20
- RFC PR: rust-lang/rfcs#16
- Rust Issue: rust-lang/rust#15701

## Summary

Allow attributes in more places inside functions, such as statements, blocks and expressions.

## Motivation

One sometimes wishes to annotate things inside functions with, for example, lint `#[allow]`s, conditional compilation `#[cfg]`s, and even extra semantic (or otherwise) annotations for external tools. For the lints, one can currently only activate lints at the level of the function, which is possibly larger than one needs, and so may allow other "bad" things to sneak through accidentally. E.g.
```rust
#[allow(uppercase_variable)]
let L = List::new(); // lowercase looks like one or capital i
```

For conditional compilation, the work-around is duplicating the whole containing function with a `#[cfg]`, or breaking the conditional code into its own function. This does mean that any variables need to be explicitly passed as arguments.

The sort of things one could do with other arbitrary annotations are

```rust
#[allowed_unsafe_actions(ffi)]
#[audited="2014-04-22"]
unsafe { ... }
```

and then have an external tool that checks that that `unsafe` block's only unsafe actions are FFI, or a tool that lists blocks that have been changed since the last audit or haven't been audited ever.

The minimum useful functionality would be supporting attributes on blocks and `let` statements, since these are flexible enough to allow for relatively precise attribute handling.

## Detailed design

Normal attribute syntax on `let` statements, blocks and expressions:

```rust
fn foo() {
    #[attr1]
    let x = 1;

    #[attr2]
    {
        // code
    }

    #[attr3]
    unsafe {
        // code
    }

    #[attr4] foo();

    let x = #[attr5] 1;

    qux(3 + #[attr6] 2);

    foo(x, #[attr7] y, z);
}
```

Attributes bind tighter than any operator, that is `#[attr] x op y` is always parsed as `(#[attr] x) op y`.

### cfg

It is definitely an error to place a `#[cfg]` attribute on a non-statement expression; that is, `attr1`–`attr4` can possibly be `#[cfg(foo)]`, but `attr5`–`attr7` cannot, since it makes little sense to strip code down to `let x = ;`.

However, like `#ifdef` in C/C++, widespread use of `#[cfg]` may be an antipattern that makes code harder to read. This RFC is just adding the ability for attributes to be placed in specific places; it is not mandating that `#[cfg]` actually be stripped in those places (although it should be an error if it is ignored).

### Inner attributes

Inner attributes can be placed at the top of blocks (and other structures incorporating a block) and apply to that block.
```rust
{
    #![attr11]
    foo()
}
match bar {
    #![attr12]
    _ => {}
}

// are the same as
#[attr11]
{
    foo()
}
#[attr12]
match bar {
    _ => {}
}
```

### if

Attributes would be disallowed on `if` for now, because the interaction with `if`/`else` chains is funky, and can be simulated in other ways.

```rust
#[cfg(not(foo))]
if cond1 {
} else #[cfg(not(bar))] if cond2 {
} else #[cfg(not(baz))] {
}
```

There are two possible interpretations of such a piece of code, depending on whether one regards the attributes as attaching to the whole `if`...`else` chain ("exterior") or just to the branch on which they are placed ("interior"):

- `--cfg foo`: could be either removing the whole chain (exterior) or equivalent to `if cond2 {} else {}` (interior).
- `--cfg bar`: could be either `if cond1 {}` (exterior) or `if cond1 {} else {}` (interior).
- `--cfg baz`: equivalent to `if cond1 {} else if cond2 {}` (no subtlety).
- `--cfg foo --cfg bar`: could be removing the whole chain (exterior) or the two `if` branches, leaving only the `else` branch (interior).

(This applies to any attribute that has some sense of scoping, not just `#[cfg]`, e.g. `#[allow]` and `#[warn]` for lints.)

As such, to avoid confusion, attributes would not be supported on `if`. Alternatives include using blocks:

```rust
#[attr] if cond { ... } else ...
// becomes, for an exterior attribute,
#[attr] { if cond { ... } else ... }
// and, for an interior attribute,
if cond { #[attr] { ... } } else ...
```

And, if the attributes are meant to be associated with the actual branching (e.g. a hypothetical `#[cold]` attribute that indicates a branch is unlikely), one can annotate `match` arms:

```rust
match cond {
    #[attr] true => { ... }
    #[attr] false => { ... }
}
```

## Drawbacks

This starts mixing attributes with nearly arbitrary code, possibly dramatically restricting syntactic changes related to them. For example, there was some consideration for using `@` for attributes; this change may make that impossible (especially if `@` gets reused for something else, e.g. Python is using it for matrix multiplication).
It may also make it impossible to use `#` for other things. As stated above, allowing `#[cfg]`s everywhere can make code harder to reason about, but (as also stated) this RFC is not for making such `#[cfg]`s be obeyed; it just opens the language syntax to possibly allow it.

## Alternatives

These instances could possibly be approximated with macros and helper functions, but only to a low degree (e.g. how would one annotate a general `unsafe` block?).

Only allowing attributes on "statement expressions", that is, expressions at the top level of a block, is slightly limiting; but we can expand to support other contexts backwards-compatibly in the future.

The `if`/`else` issue may be able to be resolved by introducing explicit "interior" and "exterior" attributes on `if`: by having `#[attr] if cond { ...` be an exterior attribute (applying to the whole `if`/`else` chain) and `if cond #[attr] { ...` be an interior attribute (applying to only the current `if` branch). There is no difference between interior and exterior for an `else {` branch, and so `else #[attr] {` is sufficient.

## Unresolved questions

Are the complications of allowing attributes on arbitrary expressions worth the benefits?

# 0019-opt-in-builtin-traits

- Start Date: 2014-09-18
- RFC PR #: rust-lang/rfcs#19, rust-lang/rfcs#127
- Rust Issue #: rust-lang/rust#13231

Note: The `Share` trait described in this RFC was later renamed to `Sync`.

## Summary

The high-level idea is to add language features that simultaneously achieve three goals:

- move `Send` and `Share` out of the language entirely and into the standard library, providing mechanisms for end users to easily implement and use similar "marker" traits of their own devising;
- make "normal" Rust types sendable and sharable by default, without the need for explicit opt-in; and,
- continue to require "unsafe" Rust types (those that manipulate unsafe pointers or implement special abstractions) to "opt in" to sendability and sharability with an unsafe declaration.
These goals are achieved by two changes:

1. **Unsafe traits:** An unsafe trait is a trait that is unsafe to implement, because it represents some kind of trusted assertion. Note that unsafe traits are perfectly safe to *use*. `Send` and `Share` are examples of unsafe traits: implementing these traits is effectively an assertion that your type is safe for threading.

2. **Default and negative impls:** A default impl is one that applies to all types, except for those types that explicitly *opt out*. For example, there would be a default impl for `Send`, indicating that all types are `Send` "by default". To counteract a default impl, one uses a *negative impl* that explicitly opts out for a given type `T` and any type that contains `T`. For example, this RFC proposes that unsafe pointers `*T` will opt out of `Send` and `Share`. This implies that unsafe pointers cannot be sent or shared between threads by default. It also implies that any structs which contain an unsafe pointer cannot be sent. In all examples encountered thus far, the set of negative impls is fixed and can easily be declared along with the trait itself.

Safe wrappers like `Arc`, `Atomic`, or `Mutex` can opt to implement `Send` and `Share` explicitly. This will then make them be considered sendable (or sharable) even though they contain unsafe pointers etc.

Based on these two mechanisms, we can remove the notion of `Send` and `Share` as builtin concepts. Instead, these would become unsafe traits with default impls (defined purely in the library). The library would explicitly opt out of `Send`/`Share` for certain types, like unsafe pointers (`*T`) or interior mutability (`Unsafe<T>`). Any type, therefore, which contains an unsafe pointer would be confined (by default) to a single thread. Safe wrappers around those types, like `Arc`, `Atomic`, or `Mutex`, can then opt back in by explicitly implementing `Send` (these impls would have to be designated as unsafe).
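Today's Rust implements this design as "auto traits". A minimal sketch of the default/opt-out behavior, using the modern `Send` trait from `std` (the `assert_send` helper is an illustrative name, not a std API):

```rust
use std::rc::Rc;

// `Send` is an unsafe auto trait in today's std: every type is Send
// "by default", unless it contains a type that has opted out
// (raw pointers, Rc, etc.).
fn assert_send<T: Send>(_: &T) {}

fn main() {
    let v = vec![1, 2, 3];
    assert_send(&v); // Vec<i32> is Send: all of its components are Send

    let rc = Rc::new(0);
    // Rc<i32> opts out of Send (its reference count is not atomic),
    // so this line would fail to compile if uncommented:
    // assert_send(&rc);
    drop(rc);

    println!("Vec<i32> is Send by default");
}
```

The compile-time failure on `Rc` is exactly the "opt out propagates to containers" rule described above: a struct holding an `Rc` (or a raw pointer) is not `Send` unless it explicitly, and unsafely, opts back in.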
## Motivation

Since proposing opt-in builtin traits, I have become increasingly concerned about the notion of having `Send` and `Share` be strictly opt-in. There are two main reasons for my concern:

1. Rust is very close to being a language where computations can be parallelized by default. Making `Send`, and especially `Share`, opt-in makes that harder to achieve.
2. The model followed by `Send`/`Share` cannot easily be extended to other traits in the future, nor can it be extended by end users with their own similar traits. It is worrisome that I have come across several use cases already which might require such extension (described below).

To elaborate on those two points:

With respect to parallelization: for the most part, Rust types are threadsafe "by default". To make something non-threadsafe, you must employ unsynchronized interior mutability (e.g., `Cell`, `RefCell`) or unsynchronized shared ownership (`Rc`). In both cases, there are also synchronized variants available (`Mutex`, `Arc`, etc.). This implies that we can make APIs to enable intra-task parallelism and they will work ubiquitously, so long as people avoid `Cell` and `Rc` when not needed. Explicit opt-in threatens that future, however, because fewer types will implement `Share`, even if they are in fact threadsafe.

With respect to extensibility, it is particularly worrisome that if a library forgets to implement `Send` or `Share`, downstream clients are stuck. They cannot, for example, use a newtype wrapper, because it would be illegal to implement `Send` on the newtype. This implies that all libraries must be vigilant about implementing `Send` and `Share` (even more so than with other pervasive traits like `Eq` or `Ord`). The current plan is to address this via lints and perhaps some convenient deriving syntax, which may be adequate for `Send` and `Share`. But if we wish to add new "classification" traits in the future, these new traits won't have been around from the start, and hence won't be implemented by all existing code.
Another concern of mine is that end users cannot define classification traits of their own. For example, one might like to define a trait for "tainted" data,
# Starting an Open Source Project

Learn more about the world of open source and get ready to launch your own project.

Contents:

- The "what" and "why" of open source
- Should I launch my own open source project?
- Launching your own open source project
- Naming and branding your project
- Your pre-launch checklist

## The "what" and "why" of open source

So you're thinking about getting started with open source? Congratulations! The world appreciates your contribution. Let's talk about what open source is and why people do it.

### What does "open source" mean?

When a project is open source, it means that anybody is free to use, study, modify, and distribute your project for any purpose. These permissions are enforced through an open source license.

Open source is powerful because it lowers the barriers to adoption and collaboration, allowing people to spread and improve projects quickly. It also gives users the ability to control their own computing, relative to closed source. For example, a business using open source software has the option to hire someone to make custom improvements to the software, instead of relying exclusively on a closed source vendor's product decisions.

*Free software* refers to the same set of projects as *open source*. Sometimes you'll also see these terms combined as "free and open source software" (FOSS) or "free, libre, and open source software" (FLOSS). *Free* and *libre* refer to freedom, not price.

### Why do people open source their work?
> One of the most rewarding experiences I get out of using and collaborating on open source comes from the relationships I build with other developers facing many of the same problems I am.
>
> — @kentcdodds, "How getting into Open Source has been awesome for me"

There are many reasons why a person or organization would want to open source a project. Some examples include:

- **Collaboration:** Open source projects can accept changes from anybody in the world. Exercism, for example, is a programming exercise platform with over 350 contributors.
- **Adoption and remixing:** Open source projects can be used by anyone for nearly any purpose. People can even use them to build other things. WordPress, for example, started as a fork of an existing project called b2.
- **Transparency:** Anyone can inspect an open source project for bugs or inconsistencies. Transparency matters to governments like Bulgaria or the United States, regulated industries like banking or healthcare, and security software like Let's Encrypt.

Open source isn't just about software. You can open source everything from data sets to books. Check out GitHub Explore for ideas on what else you can open source.

### Does open source mean "free of charge"?

One of open source's biggest draws is that it does not cost money. However, "free of charge" is a byproduct of open source's overall value. Because an open source license requires that anyone be able to use, modify, and share your project for nearly any purpose, projects themselves tend to be free of charge. If using the project cost money, anyone could legally make a copy and use the free version instead. As a result, most open source projects are free of charge, but "free of charge" is not part of the definition of open source.
There are ways to charge for open source projects indirectly, through dual licensing or limited features, while staying compliant with the official definition of open source.

## Should I launch my own open source project?

The short answer is yes, because no matter the outcome, launching your own project is a great way to learn how open source works.

If you've never open sourced a project before, you might be worried about what people will say, or whether anyone will notice at all. If this sounds like you, you're not alone!

Open source work is like any other creative activity, whether it's writing or painting. You might be afraid to share your work with the world, but the only way to get better is to practice, even if you don't have an audience.

If you're not yet convinced, take a moment to think about what your goals might be.

### Setting your goals

Goals can help you figure out what to work on, what to say no to, and where you need help from others. Start by asking yourself: why am I open sourcing this project?

There is no single right answer to this question. You may have multiple goals for one project, or different projects with different goals.

If your only goal is to show off your work, you may not even want contributions, and you can even say so in your README. On the other hand, if you do want contributors, you'll invest time into clear documentation and making newcomers feel welcome.

> At some point I created a custom UIAlertView that I was using... and I decided to make it open source. So I modified it to be more dynamic and uploaded it to GitHub. I also wrote my first documentation explaining to other developers how to use it in their projects. Probably nobody ever used it because it was a simple project, but I felt good about my contribution.
> — @mavris, "Self-taught software developers: Why Open Source is important to us"

As your project grows, your community may need more than just code from you. Responding to issues, reviewing code, and evangelizing your project are all important tasks in an open source project. While the amount of time you spend on non-coding tasks will depend on the size and scope of your project, you should be prepared as a maintainer to address them yourself, or to find someone to help you.

If you're part of a company open sourcing a project, make sure your project has the internal resources it needs to thrive. You'll want to determine who is responsible for maintaining the project after launch, and how you'll share those tasks with your community. If you need a dedicated budget or headcount for promoting, operating, and maintaining the project, start those conversations early.

> When you begin to open source the project, it's important to make sure that your governance processes take into account the contributions and abilities of the community around your project. Don't be afraid to involve contributors from outside your company in key aspects of the project, especially if they contribute frequently.
>
> — @captainsafia, "So you wanna open source a project, eh?"

### Contributing to other projects

If your goal is to learn how to collaborate with others, or to understand how open source works, consider contributing to an existing project. Start with a project that you already use and love. Contributing to a project can be as simple as fixing typos or updating documentation. If you're not sure how to get started as a contributor, check out our How to Contribute to Open Source guide.

## Launching your own open source project

There is never a perfect time to open source your work.
You can open source an idea, a work in progress, or after years of being closed source. Generally speaking, you should open source your project when you feel comfortable having others view, and give feedback on, your work.

No matter which stage you decide to open source your project at, every project should include the following documentation:

- Open source license
- README
- Contributing guidelines
- Code of conduct

As a maintainer, these components will help you communicate expectations, manage contributions, and protect everyone's legal rights (including your own). They significantly increase your chances of having a positive experience.

If your project is on GitHub, putting these files in your root directory with the recommended filenames will help GitHub recognize them and automatically surface them to your readers.

### Choosing a license

An open source license guarantees that others can use, copy, modify, and contribute back to your project without repercussions. It also protects you from sticky legal situations. You must include a license when you launch an open source project.

Legal work is no fun. The good news is that you can copy and paste an existing license into your repository. It will only take a minute to protect your hard work.

MIT, Apache 2.0, and GPLv3 are the most popular open source licenses, but there are other options to choose from. When you create a new project on GitHub, you're given the option to select a license. Including an open source license will make your GitHub project open source.

If you have other questions or concerns about the legal aspects of managing an open source project, we've got you covered.

### Writing a README

READMEs do more than explain how to use your project. They also explain why your project matters, and what your users can do with it.

In your README, try to answer the following questions:

- What does this project do?
- Why is this project useful?
- How do I get started?
- Where can I get more help, if I need it?

You can use your README to answer other questions, like how you handle contributions, what the goals of the project are, and information about licensing and attribution. If you don't want to accept contributions, or your project is not yet ready for production, write this information down.

> Better documentation means more users, fewer support requests, and more contributors. (...) Remember that your readers aren't you. There are people who might come to a project with completely different backgrounds.
>
> — @tracymakes, "Writing So Your Words Are Read (video)"

Sometimes people avoid writing a README because they feel the project is unfinished, or they don't want contributions. These are all very good reasons to write one.

For more inspiration, try using @dguo's "Make a README" guide or @PurpleBooth's README template to write a complete README.

When you include a README file in the root directory, GitHub will automatically display it on the repository homepage.

### Writing your contributing guidelines

A CONTRIBUTING file tells your audience how to participate in your project. For example, you might include information on:

- How to file a bug report (try using issue and pull request templates)
- How to suggest a new feature
- How to set up your environment and run tests

In addition to technical details, a CONTRIBUTING file is an opportunity to communicate your expectations for contributions, such as:

- The types of contributions you're looking for
- Your roadmap or vision for the project
- How contributors should (or should not) get in touch with you

Using a warm, friendly tone and offering specific suggestions for contributions (such as writing documentation, or making a website) can go a long way in making newcomers feel welcomed and excited to participate.
For example, Active Admin starts its contributing guide with:

> First off, thank you for considering contributing to Active Admin. It's people like you that make Active Admin such a great tool.

In the earliest stages of your project, your CONTRIBUTING file can be simple. You should always explain how to report bugs or file issues, and any technical requirements (like tests) to make a contribution.

Over time, you might add other frequently asked questions to your CONTRIBUTING file. Writing this information down means fewer people will keep asking you the same questions.

For more help with writing your CONTRIBUTING file, check out @nayafia's contributing guide template or @mozilla's "How to Build a CONTRIBUTING.md".

Link to your CONTRIBUTING file from your README, so more people see it. If you place the CONTRIBUTING file in your project's repository, GitHub will automatically link to your file when a contributor creates an issue or opens a pull request.

### Establishing a code of conduct

> We've all had experiences where we've run into what was probably abuse, whether as a maintainer trying to explain why something had to be a certain way, or as a user... asking a simple question. (...) A Code of Conduct becomes an easily referenced and linkable document that indicates that your team takes constructive discourse very seriously.
>
> — @mlynch, "Making Open Source a Happier Place"

Finally, a code of conduct helps set ground rules for behavior among your project's participants. This is especially valuable if you're launching an open source project for a community or company. A code of conduct empowers you to facilitate healthy, constructive community behavior, which will reduce your stress as a maintainer.

For more information, check out our Code of Conduct guide.
In addition to communicating how you expect participants to behave, a code of conduct also tends to describe who these expectations apply to, when they apply, and what to do if a violation occurs.

Much like open source licenses, there are also emerging standards for codes of conduct, so you don't have to write your own. The Contributor Covenant is a code of conduct used by over 40,000 open source projects, including Kubernetes, Rails, and Swift. No matter which text you use, you should be prepared to enforce your code of conduct when necessary.

Paste the text directly into a CODE_OF_CONDUCT file in your repository. Keep the file in your project's root directory so it's easy to find, and link to it from your README.

## Naming and branding your project

Branding is more than a flashy logo or a catchy project name. It's about how you talk about your project, and who you reach with your message.

### Choosing the right name

Pick a name that is easy to remember and, ideally, gives some idea of what the project does. For example:

- Sentry monitors apps for crash reporting
- Thin is a fast and simple Ruby web server

If you're building upon an existing project, using its name as a prefix can help clarify what your project does (for example, node-fetch brings window.fetch to Node.js).

Consider clarity above all. Puns are fun, but remember that some jokes might not translate to other cultures or to people with different experiences from yours. Some of your potential users might be company employees: you don't want to make them uncomfortable when they have to explain your project at work!

### Avoiding name conflicts

Check for open source projects with similar names, especially if you share the same language or ecosystem.
If your name overlaps with a popular existing project, you may confuse your audience. If you want a website, Twitter account, or other properties to represent your project, make sure you can get the names you want. Ideally, reserve those names now for peace of mind, even if you don't intend to use them yet.
Make sure your project's name doesn't infringe on any trademarks. A company could ask you to take your project down later, or even take legal action against you. It's not worth the risk. You can check the WIPO Global Brand Database for trademark conflicts. If you work at a company, this is one of the things your legal team can help you with.
Finally, do a quick Google search for your project's name. Will people be able to find your project easily? Does anything else show up in the search results that you wouldn't want them to see?
How you write (and code) affects your brand, too!
Over the life of your project, you'll do a lot of writing: READMEs, tutorials, community documents, responses to issues, maybe even newsletters and mailing lists. Whether it's formal documentation or a casual email, your writing style is part of your project's brand. Think about how you might come across to your audience and whether that's the tone you want to convey.
"I tried to be involved in every thread on the mailing list and to model exemplary behavior: being nice to people, taking their issues seriously, and trying to be helpful overall. After a while, people stuck around not just to ask questions, but to help with the answers as well, and to my great delight, they mimicked my style."
— @janl of CouchDB, "Sustainable Open Source"
Using warm, inclusive language (such as "they", even when referring to a single person) can go a long way toward making your project feel welcoming to new contributors. Stick to simple language, since many of your readers may not be native English speakers.
Beyond how you write words, your coding style can also become part of your project's brand. Angular and jQuery are two examples of projects with rigorous coding styles and guidelines. You don't need to write a style guide for your project when you're just starting out, and you may find that you enjoy incorporating different coding styles into your project anyway. But you should anticipate how your writing and coding styles might attract or discourage different kinds of people. The earliest stages of your project are your opportunity to set the precedent you want to see.
Your pre-launch checklist
Ready to open up your project? Here's a checklist to help. Check all the boxes? You're ready to go! Click "publish" and pat yourself on the back.
Documentation
- Project has a LICENSE file with an open source license
- Project has basic documentation (README, CONTRIBUTING, CODE_OF_CONDUCT)
- The name is easy to remember, gives some idea of what the project does, and doesn't conflict with an existing project or infringe on trademarks
- Issue queue is up to date, with issues clearly organized and labeled
Code
- Project uses consistent code conventions and clear function/method/variable names
- Code is clearly commented, documenting intentions and edge cases
- No sensitive material in the revision history, issues, or pull requests (for example, passwords or other non-public information)
People
- If you're an individual: you've talked to the legal department and/or understand your company's IP and open source policies (if you're an employee somewhere)
- If you're a company or organization: you've talked to your legal department; you have a marketing plan to announce and promote the project; someone is committed to managing community interactions (responding to issues, reviewing and merging pull requests); at least two people have administrative access to the project
Do it!
Congratulations on open sourcing your first project. Whatever the outcome, working in public is a gift to the community. With every commit, comment, and pull request, you create opportunities for yourself and others to learn and grow.
Back to all guides. Related guides: "Finding Users for Your Project" — help your open source project grow by getting it into the hands of happy users; "Building Welcoming Communities" — build a community that encourages people to use, contribute to, and evangelize your project.
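The documentation items in the checklist above can be spot-checked mechanically before you click "publish". A minimal sketch in Python, assuming the conventional file names at the repository root (`missing_docs` and `REQUIRED` are illustrative names, not a real tool; GitHub also recognizes these files under `.github/` and `docs/`, which this sketch does not cover):

```python
from pathlib import Path

# Community health files from the pre-launch checklist, expected at the
# repository root. Only the root is checked here -- GitHub also looks in
# .github/ and docs/ for these files.
REQUIRED = ["LICENSE", "README", "CONTRIBUTING", "CODE_OF_CONDUCT"]

def missing_docs(repo_root: str) -> list[str]:
    """Return the checklist files not found, matching any extension
    (e.g. README.md or README.rst both count as README)."""
    root = Path(repo_root)
    # Compare on the base name before the first dot, case-insensitively.
    present = {p.name.split(".")[0].upper() for p in root.iterdir() if p.is_file()}
    return [name for name in REQUIRED if name not in present]

# Example: missing_docs(".") might return ["CONTRIBUTING", "CODE_OF_CONDUCT"]
# for a repo that has a README and LICENSE but no contributing docs yet.
```

This only verifies that the files exist, not that their contents are useful; the checklist's judgment calls (clear naming, no sensitive material in history) still need a human.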
| 2026-01-13T09:29:15
https://www.linkedin.com/company/becordial?trk=products_details_guest_organization_name | Cordial | LinkedIn
Cordial — Software Development, San Diego, CA. 48,215 followers. Cordial automates billions of emails, SMS, and mobile app messages using all of your data.
About us: Cordial automates billions of data-driven, cross-channel messages to create lifetime customer connections for leading companies. Website: https://www.cordial.com/ Industry: Software Development. Company size: 51-200 employees. Headquarters: San Diego, CA. Type: Privately Held. Founded: 2014. Specialties: Email Marketing, Data Driven Marketing, Personalization, Marketing Automation, Cross Channel Marketing, Marketing Technology, Messaging Platform, Mobile Engagement, Customer Engagement, SMS marketing, CDP, Mobile App Engagement, Mobile Marketing, Data Platform, and Loyalty Marketing.
Products: Cordial — Marketing Automation Software. Cordial automates billions of data-driven, cross-channel messages to create lifetime customer connections for leading companies.
Locations: 402 W. Broadway, Suite #700, San Diego, CA 92101, US.
Employees at Cordial: Rob Garf, Jeremy Swift, Rachel Frederick, Julien Sauvage.
Updates:
Julien Sauvage (repost, 1d): Look who's RIGHT THERE right in front of Javits for National Retail Federation ☕️ and 🥐 on us.
Julien Sauvage (repost, 19h): Good timing gets noticed. Good location, too, in front of National Retail Federation. See you there?
☕️ and croissant 🥐 on us.
Julien Sauvage (repost, 14h): There's no such thing as "too many croissants" 🥐 Last 30 min of our Café de l'Attention here in front of Javits in NYC, let's goooooo ☕️
Cordial (14h): Coming to you live from #NRF2026 with Julien Sauvage and Rob Garf... and this year it's all about AI agents. We heard a great piece of advice for retailers from our keynote speaker, Jason Goldberg, at #CordialConnect last night: make sure you can be easily discoverable where consumers are, as the gap between discovery and the buy button continues to shrink.
Cordial (17h): Le Café de l'Attention at #NRF is still open for a few more hours… stop by and grab a complimentary coffee ☕, pastry 🥐, and baguette sandwich 🥖 📍 Right in front of Javits Center 🕘 Open until 2 PM 👋 See you there! #NRF2026
Cordial (19h): AI that does the work. Not just suggests it. Today, we're introducing two new AI agents as part of Cordial Agents - a system built to ground, govern, execute, and coordinate real marketing work inside the platform. 📧 The Email Production Agent handles execution-heavy email work: personalization, audience logic, message orchestration, and measurement. Outputs are validated against real customer profiles before they run. 📊 The Data Intelligence Agent monitors campaign and audience performance as it happens - identifying trends, surfacing issues early, and recommending next actions while campaigns are still running. Both operate within a governed framework with built-in quality checks and enforceable guardrails.
Read the full press release → https://lnkd.in/eBBJu5en
Julien Sauvage (repost, 1d): Le Café de l'Attention is still open for an hour or so… Good vibes. Good coffee. Good QRoissants 🥐
Cordial (1d, edited): At #NRF? Come find us at Le Café de l'Attention today and tomorrow. We're in front of Javits Center with complimentary coffee ☕, pastries 🥐, and baguette sandwiches 🥖 for #NRF attendees. While you're there, grab the latest edition of the Cordial Times 📰, get a first look at what we're launching, and learn how leading brands are winning the battle for attention. 📍 Corner of 37th and 11th 📅 Sunday, January 11 & Monday, January 12 🕘 9 AM – 2 PM 👋 See you there! #NRF2026
Cordial (1d): NRF kicks off today! Need a coffee before hitting the show floor? We've got you covered. Stop by either of these coffee shops near Javits, show them your NRF badge, and grab a drink on us. ☕ Paradis in the Park, Hudson Yards, 524 W 36th St., 9 AM – 3 PM, Sunday & Monday. ☕ Enoch's, 480 10th Ave, Sunday 8 AM – 5 PM, Monday 7 AM – 6 PM. See you out there!
Cordial (3d): Heading to #NRF? Come find us at Le Café de l'Attention. We're pulling up in front of Javits Center with complimentary coffee ☕, pastries 🥐, and baguette sandwiches 🥖 for NRF attendees. While you're there, grab the latest edition of the Cordial Times 📰, get a first look at what we're launching, and learn how leading brands are winning the battle for attention. 📍 In front of Javits Center 📅 Sunday, January 11 & Monday, January 12 🕘 9 AM – 2 PM 👋 See you there!
| 2026-01-13T09:29:15
https://www.linkedin.com/company/raspberrypifoundation?trk=products_details_guest_organization_image#main-content | Raspberry Pi Foundation | LinkedIn
Raspberry Pi Foundation — Non-profit Organizations, Cambridge, England. 188,449 followers. A charity that enables young people to realise their full potential through the power of computing.
About us: The Raspberry Pi Foundation is a UK-based charity with the mission to enable young people to realise their full potential through the power of computing and digital technologies.
Our vision is that every young person develops:
• The knowledge, skills, and confidence to use computers and digital technologies effectively in their work, community, and personal life; to solve problems and to express themselves creatively
• Sufficient understanding of societal and ethical issues to be able to critically evaluate digital technologies and their application, and to design and use technology for good
• The mindsets that enable them to confidently engage with technological change and to continue learning about new and emerging technologies
Our long-term goals:
1. Education: To enable any school to teach students about computing and how to create with digital technologies, through providing the best possible curriculum, resources, and training for teachers.
2. Non-formal learning: To engage millions of young people in learning about computing and how to create with digital technologies outside of school, through online resources and apps, clubs, competitions, and partnerships with youth organisations.
3. Research: To deepen our understanding of how young people learn about computing and how to create with digital technologies, and to use that knowledge to increase the impact of our work and advance the field of computing education.
Website: http://www.raspberrypi.org Industry: Non-profit Organizations. Company size: 51-200 employees. Headquarters: Cambridge, England. Type: Nonprofit. Founded: 2008. Specialties: education. Locations: 37 Hills Road, Cambridge, England CB2 1NT, GB.
Employees at Raspberry Pi Foundation: Stephen Greene CBE, Laura Turkington, Fraser Speirs, Jonathan Wells.
Updates:
Raspberry Pi Foundation (2d): 👽 Alien Language. What if you could talk to aliens using your own made-up words? 🛸 Kids can train a model to recognise their new alien language and use it to control a character in Scratch! It's a fun way to explore how computers learn from sound. 👉 rpf.io/alien-language #LearnAI #CodingForKids #Scratch
Raspberry Pi Foundation (3d): We're excited to welcome Suradnici u učenju as our newest Experience AI global partner! 🎉 Suradnici u učenju is joining the worldwide #ExperienceAI partner network, supported by funding from Google.org. By working together, we're building AI literacy across Croatia, equipping educators with the skills, resources, and confidence to introduce AI learning in their classrooms. 👉 Learn more about the Experience AI initiative: rpf.io/experienceai #AIEducation #AILiteracy
Code Club (repost, 4d): 📍 We're exhibiting at ASE 2026! The UK & IE team will be at the National ASE 2026 Conference at Nottingham University from Thursday 8 January to Saturday 10 January. We're looking forward to chatting with teachers and spreading the Code Club love! You'll find us in the Pope Building, A13 exhibition block (stand #27). Come and say hello!
Raspberry Pi Foundation (4d): In past years, our research seminar series has focused on AI technology in the computing classroom. In 2026, we shift focus to showcasing how education research in other disciplines is starting to include teaching about AI. Our new seminar series, Applied AI, will explore how AI is applied across contexts, careers, and disciplines, and how what we teach across school subjects will change accordingly. Read our latest blog to find out more and sign up: 🔗 https://lnkd.in/erTuUxCb #AIEducation #EducationResearch
Raspberry Pi Foundation (5d): What shaped computing education in 2025? 🤔 Explore the newest episode of the Hello World podcast as we look back at the biggest moments in computer science education from the last year, and look ahead to 2026. Hear from educators across the world as they discuss topics such as AI literacy, data science, and improving learners' digital skills. Read our blog to find out more: 🔗 https://lnkd.in/e7A-kuU2 #ComputerScience #DataScience #AILiteracy
Raspberry Pi Foundation (5d): 2025 was a busy year in computing education 📚 We saw a huge acceleration in AI literacy, and new questions about what we should actually be teaching in the classroom. Join us for a special episode of the Hello World podcast as we reflect on 2025 and, more importantly, discuss what's to come in 2026. In this episode we explore: • How AI is changing the way we teach programming • The potential of a new Data Science qualification • Real stories of progress and challenges from Kenya, South Africa, and Greece • Our thoughts on what's to come in the year ahead. Whether you're commuting to school or looking for some CPD inspiration, this is the perfect listen to get you ready for the year ahead.
Listen now: rpf.io/hwp-future-of-csed #HelloWorldPodcast #CSed #AILiteracy #DataScience
Code Club (repost, 1w): This month, we're running a mix of workshops and community sessions designed to help you get started, build confidence, and connect with others. From learning how to launch a Code Club, to exploring our resources, to relaxed coffee-and-conversation meetups, there are plenty of ways to get involved. Whether you're new to Code Club or already part of the community, we'd love to see you there. 💻👇 https://lnkd.in/evkR-eNn
Raspberry Pi Foundation (6d): New year, new research study! 💡 Find out about our recent research into data science education and how to teach learners to use and create with data-driven technologies such as AI and machine learning. We share our new data paradigms framework and invite teachers to register interest in our next collaborative study. 🔗 Read our latest blog to find out more: rpf.io/blog-data-paradigms
Raspberry Pi Foundation (1w): Read an interview with our CEO Philip Colligan CBE in the Guardian newspaper about why AI literacy must become a universal part of education, on a par with reading and writing.
The journalist visited one of our Code Clubs in Cambridge to meet young people learning about artificial intelligence 👇 https://lnkd.in/e7Ypenip #AIEducation #Computing #Education #CSForAll #AIEdu #ComputerScience #AI ("Generation AI: fears of 'social divide' unless all children learn computing skills", theguardian.com)
Raspberry Pi Foundation (1w): Start the new year right with Hello World, the official magazine and podcast of the Raspberry Pi Foundation 🌟 Throughout the year we'll be releasing free computer science resources created specifically for educators like you, helping you to teach computing confidently. Subscribe to Hello World today to never miss an update and to be notified on each day of release. By subscribing you are joining a global community of educators who share their ideas, stories, and classroom best practices. Hello World is completely free and has everything you need to feel confident teaching computing this year.
Subscribe now: helloworld.cc/subscribe #HelloWorld #ComputerScience #CSEd
Funding: Raspberry Pi Foundation, 3 total rounds; last round: corporate round, Dec 2, 2023. Investors: Arm Holdings.
| 2026-01-13T09:29:15
https://www.linkedin.com/legal/privacy-policy?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fposts%2Fmakerxau_were-incredibly-proud-to-have-been-nominated-activity-7401472884281692161-_WTa&trk=registration-frontend_join-form-privacy-policy | LinkedIn Privacy Policy
Privacy Policy. Effective November 3, 2025.
Your Privacy Matters
LinkedIn's mission is to connect the world's professionals to allow them to be more productive and successful. Central to this mission is our commitment to be transparent about the data we collect about you, how it is used and with whom it is shared. This Privacy Policy applies when you use our Services (described below). We offer our users choices about the data we collect, use and share as described in this Privacy Policy, Cookie Policy, Settings and our Help Center.
Key Terms. Choices: Settings are available to Members of LinkedIn, and Visitors are provided separate controls.
Table of Contents: Data We Collect; How We Use Your Data; How We Share Information; Your Choices and Obligations; Other Important Information.
Introduction
We are a social network and online platform for professionals. People use our Services to find and be found for business opportunities, to connect with others and find information. Our Privacy Policy applies to any Member or Visitor to our Services. Our registered users ("Members") share their professional identities, engage with their network, exchange knowledge and professional insights, post and view relevant content, learn and develop skills, and find business and career opportunities.
Content and data on some of our Services is viewable to non-Members ("Visitors"). We use the term "Designated Countries" to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. Members and Visitors located in the Designated Countries or the UK can review additional information in our European Regional Privacy Notice.
Services
This Privacy Policy, including our Cookie Policy, applies to your use of our Services. It applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-branded sites, apps, communications and services offered by LinkedIn ("Services"), including off-site Services, such as our ad services and the "Apply with LinkedIn" and "Share with LinkedIn" plugins, but excluding services that state that they are offered under a different privacy policy. For California residents, additional disclosures required by California law may be found in our California Privacy Disclosure.
Data Controllers and Contracting Parties
If you are in the "Designated Countries", LinkedIn Ireland Unlimited Company ("LinkedIn Ireland") will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you are outside of the Designated Countries, LinkedIn Corporation will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services. As a Visitor or Member of our Services, the collection, use and sharing of your personal data is subject to this Privacy Policy and other documents referenced in this Privacy Policy, as well as updates.
Change
Changes to the Privacy Policy apply to your use of our Services after the "effective date." LinkedIn ("we" or "us") can modify this Privacy Policy, and if we make material changes to it, we will provide notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective.
If you object to any changes, you may close your account. You acknowledge that your continued use of our Services after we publish or send a notice about our changes to this Privacy Policy means that the collection, use and sharing of your personal data is subject to the updated Privacy Policy, as of its effective date.
1. Data We Collect
1.1 Data You Provide To Us
You provide data to create an account with us.
Registration
To create an account you need to provide data including your name, email address and/or mobile number, general location (e.g., city), and a password. If you register for a premium Service, you will need to provide payment (e.g., credit card) and billing information.
You create your LinkedIn profile (a complete profile helps you get the most from our Services).
Profile
You have choices about the information on your profile, such as your education, work experience, skills, photo, city or area, endorsements, and optional verifications of information on your profile (such as verifications of your identity or workplace). You don't have to provide additional information on your profile; however, profile information helps you to get more from our Services, including helping recruiters and business opportunities find you. It's your choice whether to include sensitive information on your profile and to make that sensitive information public. Please do not post or add personal data to your profile that you would not want to be publicly available.
You may give other data to us, such as by syncing your calendar.
Posting and Uploading
We collect personal data from you when you provide, post or upload it to our Services, such as when you fill out a form (e.g., with demographic data or salary), respond to a survey, or submit a resume or fill out a job application on our Services.
If you sync your calendars with our Services, we will collect your calendar meeting information to keep growing your network by suggesting connections for you and others, and by providing information about events (e.g., times, places, attendees and contacts). You don’t have to post or upload personal data; though if you don’t, it may limit your ability to grow and engage with your network over our Services.

1.2 Data From Others

Others may post or write about you.

Content and News

You and others may post content that includes information about you (as part of articles, posts, comments, videos) on our Services. We also may collect public information about you, such as professional-related news and accomplishments, and make it available as part of our Services, including, as permitted by your settings, in notifications to others of mentions in the news.

Others may sync their calendar with our Services.

Contact and Calendar Information

We receive personal data (including contact information) about you when others import or sync their calendar with our Services, associate their contacts with Member profiles, scan and upload business cards, or send messages using our Services (including invites or connection requests). If you or others opt in to sync email accounts with our Services, we will also collect “email header” information that we can associate with Member profiles.

Customers and partners may provide data to us.

Partners

We receive personal data (e.g., your job title and work email address) about you when you use the services of our customers and partners, such as employers or prospective employers and applicant tracking systems providing us job application data.

Related Companies and Other Services

We receive data about you when you use some of the other services provided by us or our Affiliates, including Microsoft.
For example, you may choose to send us information about your contacts in Microsoft apps and services, such as Outlook, for improved professional networking activities on our Services, or we may receive information from Microsoft about your engagement with their sites and services.

1.3 Service Use

We log your visits and use of our Services, including mobile apps.

We log usage data when you visit or otherwise use our Services, including our sites, apps and platform technology, such as when you view or click on content (e.g., a learning video) or ads (on or off our sites and apps), perform a search, install or update one of our mobile apps, share articles or apply for jobs. We use log-ins, cookies, device information and internet protocol (“IP”) addresses to identify you and log your use.

1.4 Cookies and Similar Technologies

We collect data through cookies and similar technologies.

As further described in our Cookie Policy, we use cookies and similar technologies (e.g., pixels and ad tags) to collect data (e.g., device IDs) to recognize you and your device(s) on, off and across different services and devices where you have engaged with our Services. We also allow some others to use cookies as described in our Cookie Policy. If you are outside the Designated Countries, we also collect (or rely on others, including Microsoft, who collect) information about your device where you have not engaged with our Services (e.g., ad ID, IP address, operating system and browser information) so we can provide our Members with relevant ads and better understand their effectiveness. Learn more.

You can opt out from our use of data from cookies and similar technologies that track your behavior on the sites of others for ad targeting and other ad-related purposes. For Visitors, the controls are here.
1.5 Your Device and Location

We receive data through cookies and similar technologies.

When you visit or leave our Services (including some plugins and our cookies or similar technology on the sites of others), we receive the URL of both the site you came from and the one you go to, and the time of your visit. We also get information about your network and device (e.g., IP address, proxy server, operating system, web browser and add-ons, device identifier and features, cookie IDs and/or ISP, or your mobile carrier). If you use our Services from a mobile device, that device will send us data about your location based on your phone settings. We will ask you to opt in before we use GPS or other tools to identify your precise location.

1.6 Communications

If you communicate through our Services, we learn about that.

We collect information about you when you communicate with others through our Services (e.g., when you send, receive, or engage with messages, events, or connection requests, including our marketing communications). This may include information that indicates who you are communicating with and when. We also use automated systems to support and protect our site. For example, we use such systems to suggest possible responses to messages and to manage or block content that violates our User Agreement or Professional Community Policies.

1.7 Workplace and School Provided Information

When your organization (e.g., employer or school) buys a premium Service for you to use, they give us data about you.

Others buying our Services for your use, such as your employer or your school, provide us with personal data about you and your eligibility to use the Services that they purchase for use by their workers, students or alumni. For example, we will get contact information for “LinkedIn Page” (formerly Company Page) administrators and for authorizing users of our premium Services, such as our recruiting, sales or learning products.
1.8 Sites and Services of Others

We get data when you visit sites that include our ads, cookies or plugins, or when you log in to others’ services with your LinkedIn account.

We receive information about your visits and interaction with services provided by others when you log in with LinkedIn or visit others’ services that include some of our plugins (such as “Apply with LinkedIn”) or our ads, cookies or similar technologies.

1.9 Other

We are improving our Services, which means we get new data and create new ways to use data.

Our Services are dynamic, and we often introduce new features, which may require the collection of new information. If we collect materially different personal data or materially change how we collect, use or share your data, we will notify you and may also modify this Privacy Policy.

Key Terms

Affiliates: companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., GitHub, Inc.).

2. How We Use Your Data

We use your data to provide, support, personalize and develop our Services.

How we use your personal data will depend on which Services you use, how you use those Services and the choices you make in your settings. We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others. You can review LinkedIn's Responsible AI principles here and learn more about our approach to generative AI here. Learn more about the inferences we may make, including as to your age and gender, and how we use them.
2.1 Services

Our Services help you connect with others, find and be found for work and business opportunities, stay informed, get training and be more productive. We use your data to authorize access to our Services and honor your settings.

Stay Connected

Our Services allow you to stay in touch and up to date with colleagues, partners, clients, and other professional contacts. To do so, you can “connect” with the professionals who you choose, and who also wish to “connect” with you. Subject to your and their settings, when you connect with other Members, you will be able to search each other's connections in order to exchange professional opportunities.

We use data about you (such as your profile, profiles you have viewed or data provided through address book uploads or partner integrations) to help others find your profile, suggest connections for you and others (e.g., Members who share your contacts or job experiences) and enable you to invite others to become a Member and connect with you. You can also opt in to allow us to use your precise location or proximity to others for certain tasks (e.g., to suggest other nearby Members for you to connect with, calculate the commute to a new job, or notify your connections that you are at a professional event).

It is your choice whether to invite someone to our Services, send a connection request, or allow another Member to become your connection. When you invite someone to connect with you, your invitation will include your network and basic profile information (e.g., name, profile photo, job title, region). We will send invitation reminders to the person you invited. You can choose whether or not to share your own list of connections with your connections.

Visitors have choices about how we use their data.

Stay Informed

Our Services allow you to stay informed about news, events and ideas regarding professional topics you care about, and from professionals you respect.
Our Services also allow you to improve your professional skills, or learn new ones. We use the data we have about you (e.g., data you provide, data we collect from your engagement with our Services and inferences we make from the data we have about you) to personalize our Services for you, such as by recommending or ranking relevant content and conversations on our Services. We also use the data we have about you to suggest skills you could add to your profile and skills that you might need to pursue your next opportunity. So, if you let us know that you are interested in a new skill (e.g., by watching a learning video), we will use this information to personalize content in your feed, suggest that you follow certain Members on our site, or suggest related learning content to help you towards that new skill.

We use your content, activity and other data, including your name and photo, to provide notices to your network and others. For example, subject to your settings, we may notify others that you have updated your profile, posted content, taken a social action, used a feature, made new connections or were mentioned in the news.

Career

Our Services allow you to explore careers, evaluate educational opportunities, and seek out, and be found for, career opportunities. Your profile can be found by those looking to hire (for a job or a specific task) or be hired by you. We will use your data to recommend jobs and show you and others relevant professional contacts (e.g., those who work at a company, in an industry, function or location, or who have certain skills and connections). You can signal that you are interested in changing jobs and share information with recruiters. We will use your data to recommend jobs to you and you to recruiters. We may use automated systems to provide content and recommendations to help make our Services more relevant to our Members, Visitors and customers.
Keeping your profile accurate and up to date may help you better connect to others and to opportunities through our Services.

Productivity

Our Services allow you to collaborate with colleagues, and to search for potential clients, customers, partners and others to do business with. Our Services allow you to communicate with other Members and schedule and prepare meetings with them. If your settings allow, we scan messages to provide “bots” or similar tools that facilitate tasks such as scheduling meetings, drafting responses, summarizing messages or recommending next steps. Learn more.

2.2 Premium Services

Our premium Services help paying users search for and contact Members through our Services, such as searching for and contacting job candidates, sales leads and co-workers, managing talent and promoting content.

We sell premium Services that provide our customers and subscribers with customized search functionality and tools (including messaging and activity alerts) as part of our talent, marketing and sales solutions. Customers can export limited information from your profile, such as name, headline, current company, current title, and general location (e.g., Dublin), such as to manage sales leads or talent, unless you opt out. We do not provide contact information to customers as part of these premium Services without your consent. Premium Services customers can store information they have about you in our premium Services, such as a resume or contact information or sales history. The data stored about you by these customers is subject to the policies of those customers. Other enterprise Services and features that use your data include TeamLink and LinkedIn Pages (e.g., content analytics and followers).

2.3 Communications

We contact you and enable communications between Members. We offer settings to control what messages you receive and how often you receive some types of messages.
We will contact you through email, mobile phone, notices posted on our websites or apps, messages to your LinkedIn inbox, and other ways through our Services, including text messages and push notifications. We will send you messages about the availability of our Services, security, or other service-related issues. We also send messages about how to use our Services, network updates, reminders, job suggestions and promotional messages from us and our partners. You may change your communication preferences at any time. Please be aware that you cannot opt out of receiving service messages from us, including security and legal notices. We also enable communications between you and others through our Services, including, for example, invitations, InMail, groups and messages between connections.

2.4 Advertising

We serve you tailored ads both on and off our Services. We offer you choices regarding personalized ads, but you cannot opt out of seeing non-personalized ads.

We target (and measure the performance of) ads to Members, Visitors and others both on and off our Services, directly or through a variety of partners, using the following data, whether separately or combined:

- Data collected by advertising technologies on and off our Services using pixels, ad tags (e.g., when an advertiser installs a LinkedIn tag on their website), cookies, and other device identifiers;
- Member-provided information (e.g., profile, contact information, title and industry);
- Data from your use of our Services (e.g., search history, feed, content you read, who you follow or is following you, connections, group participation, page visits, videos you watch, clicking on an ad, etc.), including as described in Section 1.3;
- Information from advertising partners, vendors and publishers; and
- Information inferred from data described above (e.g., using job titles from a profile to infer industry, seniority, and compensation bracket; using graduation dates to infer age, or using first names or pronoun usage to infer gender; using your feed activity to infer your interests; or using device data to recognize you as a Member).

Learn more about the inferences we make and how they may be used for advertising. Learn more about the ad technologies we use and our advertising services and partners. You can learn more about our compliance with laws in the Designated Countries or the UK in our European Regional Privacy Notice.

We will show you ads called sponsored content, which look similar to non-sponsored content except that they are labeled as advertising (e.g., as “ad” or “sponsored”). If you take a social action (such as a like, comment or share) on these ads, your action is associated with your name and viewable by others, including the advertiser. Subject to your settings, if you take a social action on the LinkedIn Services, that action may be mentioned with related ads. For example, when you like a company, we may include your name and photo when their sponsored content is shown.

Ad Choices

You have choices regarding our uses of certain categories of data to show you more relevant ads. Member settings can be found here. For Visitors, the setting is here.

Info to Ad Providers

We do not share your personal data with any non-Affiliated third-party advertisers or ad networks except for: (i) hashed IDs or device identifiers (to the extent they are personal data in some countries); (ii) with your separate permission (e.g., in a lead generation form); or (iii) data already visible to any users of the Services (e.g., profile). However, if you view or click on an ad on or off our Services, the ad provider will get a signal that someone visited the page that displayed the ad, and they may, through the use of mechanisms such as cookies, determine it is you. Advertising partners can associate personal data collected by the advertiser directly from you with hashed IDs or device identifiers received from us.
We seek to contractually require such advertising partners to obtain your explicit, opt-in consent before doing so where legally required, and in such instances, we take steps to ensure that consent has been provided before processing data from them.

2.5 Marketing

We promote our Services to you and others.

In addition to advertising our Services, we use Members’ data and content for invitations and communications promoting membership and network growth, engagement and our Services, such as by showing your connections that you have used a feature on our Services.

2.6 Developing Services and Research

We develop our Services and conduct research.

Service Development

We use data, including public feedback, to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity.

Other Research

We seek to create economic opportunity for Members of the global workforce and to help them be more productive and successful. We use the personal data available to us to research social, economic and workplace trends, such as job availability and the skills needed for these jobs, and policies that help bridge the gap in various industries and geographic areas. In some cases, we work with trusted third parties to perform this research, under controls that are designed to protect your privacy. We may also make public data available to researchers to enable assessment of the safety and legal compliance of our Services. We publish or allow others to publish economic insights, presented as aggregated data rather than personal data.

Surveys

Polls and surveys are conducted by us and others through our Services. You are not obligated to respond to polls or surveys, and you have choices about the information you provide. You may opt out of survey invitations.
2.7 Customer Support

We use data to help you and fix problems.

We use data (which can include your communications) to investigate, respond to and resolve complaints and Service issues (e.g., bugs).

2.8 Insights That Do Not Identify You

We use data to generate insights that do not identify you.

We use your data to perform analytics to produce and share insights that do not identify you. For example, we may use your data to generate statistics about our Members, their profession or industry; to calculate ad impressions served or clicked on (e.g., for basic business reporting to support billing and budget management or, subject to your settings, for reports to advertisers who may use them to inform their advertising campaigns); to show Members information about engagement with a post or LinkedIn Page; to publish visitor demographics for a Service or create demographic workforce insights; or to understand usage of our Services.

2.9 Security and Investigations

We use data for security, fraud prevention and investigations.

We and our Affiliates, including Microsoft, may use your data (including your communications) for security purposes or to prevent or investigate possible fraud or other violations of the law, our User Agreement and/or attempts to harm our Members, Visitors, company, Affiliates, or others.

Key Terms

Social Action: e.g., like, comment, follow, share.
Partners: Partners include ad networks, exchanges and others.

3. How We Share Information

3.1 Our Services

Any data that you include on your profile and any content you post or social action (e.g., likes, follows, comments, shares) you take on our Services will be seen by others, consistent with your settings.

Profile

Your profile is fully visible to all Members and customers of our Services. Subject to your settings, it can also be visible to others on or off of our Services (e.g., Visitors to our Services or users of third-party search tools).
As detailed in our Help Center, your settings, degree of connection with the viewing Member, the subscriptions they may have, their usage of our Services, access channels and search types (e.g., by name or by keyword) impact the availability of your profile and whether they can view certain fields in your profile.

Posts, Likes, Follows, Comments, Messages

Our Services allow viewing and sharing information, including through posts, likes, follows and comments.

When you share an article or a post (e.g., an update, image, video or article) publicly, it can be viewed by everyone and re-shared anywhere (subject to your settings). Members, Visitors and others will be able to find and see your publicly shared content, including your name (and photo if you have provided one).

In a group, posts are visible to others according to the group type. For example, posts in private groups are visible to others in the group, and posts in public groups are visible publicly. Your membership in groups is public and part of your profile, but you can change visibility in your settings.

Any information you share through companies’ or other organizations’ pages on our Services will be viewable by those organizations and others who view those pages’ content. When you follow a person or organization, you are visible to others and that “page owner” as a follower.

We let senders know when you act on their message, subject to your settings where applicable. Subject to your settings, we let a Member know when you view their profile. We also give you choices about letting organizations know when you’ve viewed their Page.

When you like, re-share or comment on another’s content (including ads), others will be able to view these “social actions” and associate them with you (e.g., your name, profile and photo if you provided it).

Your employer can see how you use Services they provided for your work (e.g., as a recruiter or sales agent) and related information.
We will not show them your job searches or personal messages.

Enterprise Accounts

Your employer may offer you access to our enterprise Services such as Recruiter, Sales Navigator, LinkedIn Learning or our advertising Campaign Manager. Your employer can review and manage your use of such enterprise Services.

Depending on the enterprise Service, before you use such a Service, we will ask for permission to share with your employer relevant data from your profile or use of our non-enterprise Services. For example, users of Sales Navigator will be asked to share their “social selling index”, a score calculated in part based on their personal account activity. We understand that certain activities such as job hunting and personal messages are sensitive, and so we do not share those with your employer unless you choose to share that information with them through our Services (for example, by applying for a new position in the same company or mentioning your job hunting in a message to a co-worker through our Services).

Subject to your settings, when you use workplace tools and services (e.g., interactive employee directory tools), certain of your data may also be made available to your employer or be connected with information we receive from your employer to enable these tools and services.

3.2 Communication Archival

Regulated Members may need to store communications outside of our Service.

Some Members (or their employers) need, for legal or professional compliance, to archive their communications and social media activity, and will use the services of others to provide these archival services. We enable archiving of messages by and to those Members outside of our Services. For example, a financial advisor needs to archive communications with her clients through our Services in order to maintain her professional financial advisor license.
3.3 Others’ Services

You may link your account with others’ services so that they can look up your contacts’ profiles, post your shares on such platforms, or enable you to start conversations with your connections on such platforms. Excerpts from your profile will also appear on the services of others.

Subject to your settings, other services may look up your profile. When you opt to link your account with other services, personal data (e.g., your name, title, and company) will become available to them. The sharing and use of that personal data will be described in, or linked to, a consent screen when you opt to link the accounts. For example, you may link your Twitter or WeChat account to share content from our Services into these other services, or your email provider may give you the option to upload your LinkedIn contacts into its own service. Third-party services have their own privacy policies, and you may be giving them permission to use your data in ways we would not. You may revoke the link with such accounts.

The information you make available to others in our Services (e.g., information from your profile, your posts, your engagement with the posts, or messages to Pages) may be available to them on other services. For example, search tools, mail and calendar applications, or talent and lead managers may show a user limited profile data (subject to your settings), and social media management tools or other platforms may display your posts. The information retained on these services may not reflect updates you make on LinkedIn.

3.4 Related Services

We share your data across our different Services and LinkedIn-affiliated entities.

We will share your personal data with our Affiliates to provide and develop our Services. For example, we may refer a query to Bing in some instances, such as where you'd benefit from a more up-to-date response in a chat experience.
Subject to our European Regional Privacy Notice, we may also share with our Affiliates, including Microsoft, your (1) publicly shared content (such as your public LinkedIn posts) to provide or develop their services and (2) personal data to improve, provide or develop their advertising services. Where allowed, we may combine information internally across the different Services covered by this Privacy Policy to help our Services be more relevant and useful to you and others. For example, we may personalize your feed or job recommendations based on your learning history.

3.5 Service Providers

We may use others to help us with our Services.

We use others to help us provide our Services (e.g., maintenance, analysis, audit, payments, fraud detection, customer support, marketing and development). They will have access to your information (e.g., the contents of a customer support request) as reasonably necessary to perform these tasks on our behalf and are obligated not to disclose or use it for other purposes. If you purchase a Service from us, we may use a payments service provider who may separately collect information about you (e.g., for fraud prevention or to comply with legal obligations).

3.6 Legal Disclosures

We may need to share your data when we believe it’s required by law or to help protect the rights and safety of you, us or others.
It is possible that we will need to disclose information about you when required by law, subpoena, or other legal process, or if we have a good-faith belief that disclosure is reasonably necessary to (1) investigate, prevent or take action regarding suspected or actual illegal activities or to assist government enforcement agencies; (2) enforce our agreements with you; (3) investigate and defend ourselves against any third-party claims or allegations; (4) protect the security or integrity of our Services or the products or services of our Affiliates (such as by sharing with companies facing similar threats); or (5) exercise or protect the rights and safety of LinkedIn, our Members, personnel or others. We attempt to notify Members about legal demands for their personal data when appropriate in our judgment, unless prohibited by law or court order or when the request is an emergency. We may dispute such demands when we believe, in our discretion, that the requests are overbroad, vague or lack proper authority, but we do not promise to challenge every demand. To learn more, see our Data Request Guidelines and Transparency Report.

3.7 Change in Control or Sale

We may share your data when our business is sold to others, but it must continue to be used in accordance with this Privacy Policy.

We can also share your personal data as part of a sale, merger or change in control, or in preparation for any of these events. Any other entity which buys us or part of our business will have the right to continue to use your data, but only in the manner set out in this Privacy Policy unless you agree otherwise.

4. Your Choices & Obligations

4.1 Data Retention

We keep most of your personal data for as long as your account is open.

We generally retain your personal data as long as you keep your account open or as needed to provide you Services. This includes data you or others provided to us and data generated or inferred from your use of our Services.
Even if you only use our Services when looking for a new job every few years, we will retain your information and keep your profile open unless you close your account. In some cases, we choose to retain certain information (e.g., insights about Services use) in a depersonalized or aggregated form.

4.2 Rights to Access and Control Your Personal Data

You can access or delete your personal data. You have many choices about how your data is collected, used and shared.

We provide many choices about the collection, use and sharing of your data, from deleting or correcting data you include in your profile and controlling the visibility of your posts to advertising opt-outs and communication controls. We offer you settings to control and manage the personal data we have about you.

For personal data that we have about you, you can:

- Delete Data: You can ask us to erase or delete all or some of your personal data (e.g., if it is no longer necessary to provide Services to you).
- Change or Correct Data: You can edit some of your personal data through your account. You can also ask us to change, update or fix your data in certain cases, particularly if it’s inaccurate.
- Object to, or Limit or Restrict, Use of Data: You can ask us to stop using all or some of your personal data (e.g., if we have no legal right to keep using it) or to limit our use of it (e.g., if your personal data is inaccurate or unlawfully held).
- Right to Access and/or Take Your Data: You can ask us for a copy of your personal data and can ask for a copy of personal data you provided in machine-readable form.

Visitors can learn more about how to make these requests here. You may also contact us using the contact information below, and we will consider your request in accordance with applicable laws. Residents in the Designated Countries and the UK, and other regions, may have additional rights under their laws.

4.3 Account Closure

We keep some of your data even after you close your account.
If you choose to close your LinkedIn account, your personal data will generally stop being visible to others on our Services within 24 hours. We generally delete closed account information within 30 days of account closure, except as noted below. We retain your personal data even after you have closed your account if reasonably necessary to comply with our legal obligations (including law enforcement requests), meet regulatory requirements, resolve disputes, maintain security, prevent fraud and abuse (e.g., if we have restricted your account for breach of our Professional Community Policies ), enforce our User Agreement, or fulfill your request to "unsubscribe" from further messages from us. We will retain de-personalized information after your account has been closed. Information you have shared with others (e.g., through InMail, updates or group posts) will remain visible after you close your account or delete the information from your own profile or mailbox, and we do not control data that other Members have copied out of our Services. Groups content and ratings or review content associated with closed accounts will show an unknown user as the source. Your profile may continue to be displayed in the services of others (e.g., search tools) until they refresh their cache. 5. Other Important Information 5.1. Security We monitor for and try to prevent security breaches. Please use the security features available through our Services. We implement security safeguards designed to protect your data, such as HTTPS. We regularly monitor our systems for possible vulnerabilities and attacks. However, we cannot warrant the security of any information that you send us. There is no guarantee that data may not be accessed, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. 5.2. Cross-Border Data Transfers We store and use your data outside your country. 
We process data both inside and outside of the United States and rely on legally-provided mechanisms to lawfully transfer data across borders. Learn more. Countries where we process data may have laws which are different from, and potentially not as protective as, the laws of your own country. 5.3 Lawful Bases for Processing We have lawful bases to collect, use and share data about you. You have choices about our use of your data. At any time, you can withdraw consent you have provided by going to settings. We will only collect and process personal data about you where we have lawful bases. Lawful bases include consent (where you have given consent), contract (where processing is necessary for the performance of a contract with you (e.g., to deliver the LinkedIn Services you have requested)) and “legitimate interests.” Learn more. Where we rely on your consent to process personal data, you have the right to withdraw or decline your consent at any time, and where we rely on legitimate interests, you have the right to object. Learn more. If you have any questions about the lawful bases upon which we collect and use your personal data, please contact our Data Protection Officer here. If you're located in one of the Designated Countries or the UK, you can learn more about our lawful bases for processing in our European Regional Privacy Notice. 5.4. Direct Marketing and Do Not Track Signals Our statements regarding direct marketing and “do not track” signals. We currently do not share personal data with third parties for their direct marketing purposes without your permission. Learn more about this and about our response to “do not track” signals. 5.5. Contact Information You can contact us or use other options to resolve any complaints. If you have questions or complaints regarding this Policy, please first contact LinkedIn online. You can also reach us by physical mail. If contacting us does not resolve your complaint, you have more options.
Residents in the Designated Countries and other regions may also have the right to contact our Data Protection Officer here . If this does not resolve your complaint, Residents in the Designated Countries and other regions may have more options under their laws. Key Terms Consent Where we process data based on consent, we will ask for your explicit consent. You may withdraw your consent at any time, but that will not affect the lawfulness of the processing of your personal data prior to such withdrawal. Where we rely on contract, we will ask that you agree to the processing of personal data that is necessary for entering into or performance of your contract with us. We will rely on legitimate interests as a basis for data processing where the processing of your data is not overridden by your interests or fundamental rights and freedoms. LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines العربية (Arabic) বাংলা (Bangla) Čeština (Czech) Dansk (Danish) Deutsch (German) Ελληνικά (Greek) English (English) Español (Spanish) فارسی (Persian) Suomi (Finnish) Français (French) हिंदी (Hindi) Magyar (Hungarian) Bahasa Indonesia (Indonesian) Italiano (Italian) עברית (Hebrew) 日本語 (Japanese) 한국어 (Korean) मराठी (Marathi) Bahasa Malaysia (Malay) Nederlands (Dutch) Norsk (Norwegian) ਪੰਜਾਬੀ (Punjabi) Polski (Polish) Português (Portuguese) Română (Romanian) Русский (Russian) Svenska (Swedish) తెలుగు (Telugu) ภาษาไทย (Thai) Tagalog (Tagalog) Türkçe (Turkish) Українська (Ukrainian) Tiếng Việt (Vietnamese) 简体中文 (Chinese (Simplified)) 正體中文 (Chinese (Traditional)) Language | 2026-01-13T09:29:15 |
https://kabirk.com/wimmelbench#aside-why-not-sam | How well can LLMs see? · Kabir Khandpur 21 Nov 2024 TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items. Introduction I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where’s Waldo, except that you’re not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for “teeming picture”. A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats. While I’m convinced that LLMs can really “read” language, I’m less sure that they “see” images. To measure sight quantitatively, I created a small benchmark that I’m calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it. The best multimodal language model I tested was Gemini Pro, which can accurately describe small objects 31% of the time. This increases to 73% if ‘mostly correct’ descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring around -0.5. In this post, I’ll talk about constructing the benchmark, go into more detail on the results, and offer some analysis. Aside: why not SAM? Segment Anything Model (SAM) is Meta’s foundation model for promptable segmentation in images and videos. It accepts masks, points, bounding boxes — but not text — as prompts. SAM segmenting an image SAM-based models like LISA do allow text inputs like “Segment the red car in the image”. I expect that LISA would score well on Wimmelbench. So why benchmark large language models? It tests how well the most general purpose, human-like models we have today understand images. I’m interested in “easy” unsaturated benchmarks that reveal a real shortcoming in current LLMs. I didn’t have easy access to the SAM-based models. It seemed like a fun project. Benchmark construction I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children’s books. To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects. The annotation tool In each description, I included the object’s location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box’s area as a percentage of the total image area (figure 1). Figure 1: Distribution of (ground truth bounding box area / image area) The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
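Each annotation's size can be computed directly from its box. A minimal sketch of the area-ratio calculation behind figure 1, assuming boxes are stored as normalized [x1, y1, x2, y2] values (the storage format is my assumption, not stated in the post):

```python
def box_area_ratio(box):
    """Fraction of the image covered by a normalized [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = box
    # Clamp to zero so a degenerate box never yields a negative area
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

# A box spanning 10% of the width and 5% of the height covers 0.5% of the image
assert abs(box_area_ratio([0.2, 0.3, 0.3, 0.35]) - 0.005) < 1e-9
```

Plotting these ratios on a log scale would reproduce the kind of skewed distribution shown in figure 1.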
Ideally, I’d have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation. Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate. I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it’s been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] from the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1]. I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where: 0: Completely incorrect or missing critical details. 1: Partially correct but missing many important details or containing significant inaccuracies. 2: Majorly correct with some inaccuracies or missing details. 3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately. I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I’d quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository. Results 1. Description accuracy All models scored a grade of 3 for about 30% of object descriptions.
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2). Figure 2: Grades 2. Bounding box precision The models are much worse at predicting bounding box coordinates. For each object, I plotted the model’s Generalized Intersection over Union (GIoU = |A∩B| / |A∪B| − |C \ (A∪B)| / |C|, where C is the smallest convex hull that encloses both A and B) between the predicted and ground truth bounding boxes (figure 3). GIoU scores range from -1 to 1, where 1.0 is perfect overlap, 0.0 means no overlap, and negative values indicate the predicted box is far from the target; 0.5+ is typically considered good in computer vision tasks. Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively. Figure 3: Distribution of GIoU scores 3. Hallucination rate To test how much the models would pretend to see objects that weren’t there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars. Figure 4: Hallucination rate Analysis Takeaways Gemini Pro scores best on both description and bounding box accuracy. Models can often describe objects well, but struggle to locate them precisely. Object size strongly influences bounding box accuracy, but not description quality. Size sometimes matters Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
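The GIoU metric used above is straightforward to implement for axis-aligned boxes, where the enclosing convex hull C reduces to the smallest rectangle containing both boxes. A sketch (assuming [x1, y1, x2, y2] boxes; this is not the benchmark's actual grading code):

```python
def giou(a, b):
    """Generalized IoU between two axis-aligned boxes [x1, y1, x2, y2].

    Returns a value in (-1, 1]: 1.0 for perfect overlap, <= 0 when the
    boxes are disjoint, approaching -1 as they move far apart.
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b

    # Intersection area (zero if the boxes are disjoint)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter

    # Smallest axis-aligned box enclosing both (the convex hull C)
    hull = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))

    return inter / union - (hull - union) / hull

# Identical boxes score 1.0; distant disjoint boxes go negative
assert giou([0, 0, 1, 1], [0, 0, 1, 1]) == 1.0
assert giou([0, 0, 0.1, 0.1], [0.9, 0.9, 1.0, 1.0]) < 0
```

The hull penalty term is what lets GIoU distinguish a near-miss from a wildly wrong prediction, which plain IoU (always 0 for disjoint boxes) cannot.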
Figure 5: GIoU vs area ratio Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) about equally at describing objects across the range of sizes. Figure 6: Grade vs area ratio Grade vs bounding box accuracy Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box. Figure 7: Grade vs GIoU This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said: "There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur." But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn’t happen — if someone could see an object, they could draw a bounding box around it. Maybe this is a prompt issue? I wouldn’t be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn’t a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yields better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
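The grid idea only needs a small mapping step on top of the existing pipeline: ask the model for a cell index instead of raw ratios, then convert the cell back into a normalized box. A hypothetical sketch (the grid size and row/column indexing convention are my assumptions, not from the Reddit post):

```python
def grid_cell_to_box(row, col, n=10):
    """Convert a (row, col) cell on an n x n grid overlaid on the image
    into a normalized [x1, y1, x2, y2] box covering that cell.

    Rows count down from the top, columns count right from the left.
    """
    cell = 1.0 / n
    return [col * cell, row * cell, (col + 1) * cell, (row + 1) * cell]

# Cell (0, 0) is the top-left tenth of the image on a 10 x 10 grid
assert grid_cell_to_box(0, 0) == [0.0, 0.0, 0.1, 0.1]
```

The resulting box is coarse (one cell wide), but it can still be scored with GIoU against the ground truth, so the two prompting strategies are directly comparable.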
Gemini’s stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: “For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…”). I don’t understand the description accuracy variance Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the ‘couple kissing’: "In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife." On the other hand, it completely fails to find the lifebuoy in the picture below, even though it’s roughly the same size as the previous object. "A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks." The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don’t understand what causes the difference in ability. Future work If I had more time… Expand the dataset to several hundred objects. Test with more LLMs and also specialized vision models like SAM. Improve prompting strategies for bounding box prediction. Conclusion Wimmelbench convinced me that the big labs’ language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I’d love to hear your thoughts — feel free to email me.
Appendix: example results Claude Gemini GPT-4o Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene. 0 -0.89 Elephant There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters. 2 -0.51 Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill. 3 0.52 Elephant There is an elephant standing on two legs near the bottom-middle of the image.
It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone. 3 0.70 Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals. 0 -0.98 Elephant There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting. 3 0.31 Claude Gemini GPT-4o Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion. 0 -0.94 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left who is reeling backwards. The bowling ball looks like it came off the left man’s hand. There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene. 0 -0.94 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling. 1 0.04 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left who is reeling backwards. The bowling ball looks like it came off the left man’s hand. A bowling ball can be seen lying on the floor amidst the chaotic scene.
It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene. 0 -0.99 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden. 0 -0.98 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left who is reeling backwards. The bowling ball looks like it came off the left man’s hand. The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos. 0 -0.85 | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/categories/network-management-software?trk=organization_guest_products_modal_category_link | Best Network Management Software | Products | LinkedIn Used by Network Engineer (32) Network Specialist (23) Network Administrator (21) Information Technology Manager (15) Chief Technology Officer (10) See all products Find top products in Network Management Software category Software used to automate administrative control of large computer networks. - Provision new devices and control configuration and updates - Detect performance bottlenecks, compliance issues, and misconfigured nodes - Map and discover network features to improve administrative remediation, maintenance, and reporting 132 results LG Electronics Private Network Solutions Network Management Software by LG Electronics LG Private Network Solutions offers a secure, scalable, and cost-effective on-premise Private 5G network designed to meet the connectivity demands of modern enterprises and system integrators. Our comprehensive approach ensures that businesses can harness the full potential of 5G technology without compromising on security or efficiency. For system integrators, our fully virtualized solution significantly reduces the Total Cost of Ownership (TCO). Use our innovative Front Haul Multiplexer to simplify RF design and flexible Cell-to-RU mapping to streamline deployment, accelerating timelines and boosting efficiency. Enterprise IT managers can gain unparalleled control over their private networks through our intuitive Private 5G Total Management System (PTMS).
With a single, unified interface, you can effortlessly manage devices, diverse use cases, and network resources. Leverage LG's top-tier R&D and design for robust support, optimized performance, and reduced costs. View product Argo Smart Routing Network Management Software by Cloudflare The public Internet does its best to deliver your content — but it can’t account for network congestion, leading to slow load times and a degraded end-user experience. The Cloudflare network is different. It routes over 10 trillion global requests per month — providing Argo Smart Routing with a unique vantage point to detect real-time congestion and route web traffic across the fastest and most reliable network paths. On average, web assets perform 30% faster. View product Juniper Sky Enterprise Network Management Software by Juniper Networks Learn how Juniper Sky Enterprise simplifies cloud network management. Eliminate software maintenance cycles and dedicated hardware, and quickly manage your network through our intuitive UI. View product A2P SMS Network Management Software by Monty Mobile Reach customers directly, securely, and at scale. Our A2P messaging routes are built for reliability and deliverability. View product Network 360 Network Management Software by İnnova Bilişim With Network 360, you can monitor and manage all equipment connected to your network in real time. Network 360 tracks the real-time performance of connected routers, switches, access points, and Linux and Windows servers, and reports equipment throughput, memory usage, disk space, and all data that can be detected via the ICMP, SNMP, or WMI protocols.
View product Progress WhatsUp Gold Network Management Software by Progress Software WhatsUp Gold provides complete visibility into the status and performance of applications, network devices and servers in the cloud or on-premises. Discover Your Network: WhatsUp Gold’s powerful layer 2/3 discovery results in a detailed interactive map of your entire networked infrastructure. Monitor and map everything from the edge to the cloud including devices, wireless controllers, servers, virtual machines, applications, traffic flows and configurations across Windows, LAMP and Java environments. Get Real-time Alerts: Ensure optimal performance and availability to meet or beat SLAs. Manage networks, traffic, physical servers, VMs and applications with easy-to-use and customizable maps, dashboards and alerts. Click on any device in WhatsUp Gold to get immediate access to a wealth of related network monitoring settings and reports. Resolve Issues Quickly: Intuitive workflows and easy customization help you reduce MTTRs. WhatsUp Gold streamlines network monitoring workflows by let View product Entuity Network Management Software by Park Place Technologies With Entuity™ Software you can discover, monitor, manage, and optimize your entire network across countless devices supplied by a whole host of different vendors. Entuity™ supports thousands of devices out of the box across hundreds of vendors. The Entuity network monitoring tool automates network discovery and uses intuitive workflows that make it easy to see when something has gone wrong. Responsive dashboards allow you to take a high-level view to gauge your network health or drill down to the component level to quickly and efficiently fix network problems.
View product NetworkAccess Network Management Software by Lepton Software NetworkAccess by Lepton offers fiber network management and engineering services to support informed business decisions throughout the organization. A true digital twin for your Fiber Network, NetworkAccess lets you plan, design, and roll out your network in a flash by: - Proactively planning the network to avoid any customer complaints - Delighting the customers and shortening your sales cycle - Encouraging maintenance teams to collaborate–plan, provision, and deploy on time - Optimizing operations using automation to remove inefficiencies and lower network downtimes Lepton’s NetworkAccess Fiber GeoSuite includes: - SmartLocator for enhanced customer experience - SmartSQ for client acquisition - SmartOps for operational efficiency - SmartInventory for network modeling and inventory management NetworkAccess is a unified GIS-based platform that enables telecom businesses to plan and design end-to-end networks. View product RUCKUS AI Network Management Software by RUCKUS Networks RUCKUS AI brings AI-driven service assurance, predictive analytics, and AIOps automation to enterprise networks. It continuously learns and optimizes performance, turning complex data into actionable insights for faster resolution and smarter operations. Powered by Machine Learning, Graph AI, and Digital Twin modeling, it delivers: - Proactive health monitoring - Predictive issue detection - Automated remediation - Explainable AI insights Seamlessly integrated across RUCKUS wired and wireless solutions, RUCKUS AI ensures exceptional performance, visibility, and reliability for any environment, from campuses and arenas to hotels and enterprises. View product nGeniusPULSE Network Management Software by NETSCOUT nGeniusPULSE provides 24x7 monitoring of critical applications and services from anywhere in the enterprise; in the Data Center, at remote branches, with remote workers, for IoT devices, and more. 
View product See more How it works Explore Discover the best product for your need from a growing catalog of 25,000 products and categories trusted by LinkedIn professionals Learn Evaluate new tools, explore trending products in your industry and see who in your network is skilled in the product Grow Join communities of product users to learn best practices, celebrate your progress and accelerate your career LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines English Language | 2026-01-13T09:29:15 |
https://intl.cloud.tencent.com/products/im?trk=products_details_guest_secondary_call_to_action | In-App chat messaging API and SDK | Tencent Cloud Chat Chat Build real-time social messaging capabilities, with all the features, into your applications and websites with powerful, feature-rich chat APIs, SDKs, and UIKit components. 1 Billion+ monthly active users. 550 Billion+ daily peak messages. 100,000+ customers worldwide. Free Demo | Documentation | Get Started FEATURES A Variety of Products for All Communication Needs Chat: Take conversations to the next level with our chat solution and deliver a pleasant customer experience on your platform. What we offer: one-to-one chat and group chat; group management; notifications (online/offline push); conversation groups; relationship chain; super large community groups with 1 million users. Scenarios: UI integration; non-UI integration; client APIs; server APIs. Live: Integrate live streaming for a customized, engaging, interactive experience in minutes. What we offer: audience join as interactive guest; audience list; regular gifts (barrage messages) and grand gifts (full-screen pop-ups); audience tags based on engagement and contribution; face beautification; anchor go-live capability; background music and sound effect setup; cloud recording for future viewing and sharing. More information: Live Overview | Live Documentation | Pricing Plan | Contact Us Desk: Deliver world-class customer support with our desk solution, aiming to resolve customers' issues quickly and increase satisfaction on your platform.
What we offer: AI chatbot; FAQ chatbot; agent dashboard and management; automatic routing; satisfaction surveys; dashboard data analytics. More information: Contact Us | Documentation
SCENARIOS How It Works in Various Business Scenarios
- Social: Build real-time engagement and interactivity into your social and community platforms.
- Live streaming: Facilitate a highly engaging, immersive experience integrated into your platform.
- On-demand service: Support smooth coordination between drivers and consumers and guarantee order completion.
- Marketplace: Unlock more sales and revenue through effective communication between merchants and buyers.
- Gaming: Engage players with a dynamic, immersive in-game experience that keeps them playing longer.
- Web3: Drive interactions within web3 platforms with our encrypted solution.
View All
PRICING Choose the Right Plan for Your Business

| Plan | Price | Highlights | Free MAU tier | Maximum users | Max non-AVChatRoom groups | Max AVChatRoom | Message retention | Max contacts per user | Max groups per user | Max members per group (non-AVChatRoom) |
|---|---|---|---|---|---|---|---|---|---|---|
| Free Trial Edition | Free | All features, no credit card required | 100/month | Unlimited | 100 | 50 | 7 days | 20 | 50 | 20 |
| Professional Standard Edition | $399.00/month | Global connection; basic chat, messaging, and group capabilities; complete UI | 10,000/month | Unlimited | Unlimited | 50 | 7 days | 3,000 | 500 | 200 |
| Pro Edition | $699.00/month | Standard features; unlimited AVChatRoom groups; super community; user online status; message read receipts | 10,000/month | Unlimited | Unlimited | Unlimited | 30 days | 3,000 | 1,000 | 2,000 |
| Pro-plus Edition | $1,299.00/month | Pro features; larger user, group, and storage capacity; message translation and cloud message search | 25,000/month | Unlimited | Unlimited | Unlimited | 90 days | 6,000 | 3,000 | 5,000 |
| Enterprise Edition | Contact Us | Pro-plus features; dedicated servers; customized functions and capacity; priority support | 50,000/month | Unlimited | Unlimited | Unlimited | 90 days | 10,000 (custom) | 5,000 (custom) | 6,000 (custom) |

View All
CUSTOMER STORIES Trusted by Customers Worldwide
A popular music app in Southeast Asia with over 150 million downloads worldwide.
Requirements: In-app chat, global messaging, and music-based social networking. JOOX has live audio/video chat rooms with an unlimited number of users and allows them to send on-screen comments, likes, and gifts for high-quality interactions. APIs for Android and iOS can be quickly integrated into apps to enable users to socialize and improve user retention. Chat has independent IDCs in Singapore, Germany, India, Korea, and the United States and deploys over 2,800 cache and access nodes across six continents to ensure reliable global messaging. A popular music app in Southeast Asia with over 150 million downloads worldwide. Requirements: In-app chat, global messaging, and music-based social networking. WeSing uses Chat with TRTC to provide a powerful karaoke experience, including duets and room member management. WeSing provides live audio/video chat rooms with an unlimited number of users. Users can send on-screen comments, likes, and gifts for interactions. APIs for Android and iOS can be quickly integrated into apps to enable users to socialize and improve user retention. A world-famous shooting game launched by Habby, ranked in the top 10 of App Store download charts in 46 countries/regions. Requirements: In-game chat, in-game group chat, and global messaging. With Chat, Archero players enjoy powerful chat features with diverse message types, including text, emojis, and custom messages. Chat brings together Archero players who share interests and lets them team up for battles. With the group member profile feature, players' levels and badges can be easily displayed. Chat has independent IDCs in Singapore, Germany, India, Korea, and the United States and deploys over 2,800 cache and access nodes across six continents to ensure reliable messaging. A one-stop digital finance service platform launched by Futu for global investors. It integrates features such as trading, quotation, news, and social networking.
Requirements: Stock market trend monitoring, live streaming, in-app chat, reliable messaging, and security and compliance. With Chat server APIs, moomoo can update stock prices in real time. moomoo users can chat with each other and with fund managers through Chat's one-to-one chat and live streaming features, and moomoo uses customer service chat to engage users effectively. Chat is backed by 13 international compliance certificates and achieves over 99.99% service availability and message deliverability for moomoo. A video conferencing solution for users in over 100 countries/regions. Requirements: In-meeting chat, member permission management, and global messaging. VooV Meeting uses Chat APIs to let meeting attendees send various types of messages, including text, images, and custom messages. VooV Meeting uses Chat's signaling management feature to implement meeting controls such as managing and muting attendees and inviting users to join meetings. Chat has independent IDCs in Singapore, Germany, India, Korea, and the United States and deploys over 2,800 cache and access nodes across six continents to ensure reliable global messaging. COMPLIANCE Global Compliance Certifications: SOC, ISO 9001, ISO 27001, ISO 27017, ISO 27018, CSA STAR, NIST CSF, ISO 27701, ISO 29151, BS 10012. Are you ready to get started with a free trial? Tencent Cloud is designed to create a seamless experience for you. Get Started | Contact Us | 2026-01-13T09:29:15 |
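As a worked example of reading the Chat pricing tiers, here is a minimal sketch that picks the cheapest edition whose free MAU tier covers an expected monthly-active-user count. The figures are transcribed from the pricing section; the helper itself is illustrative (not a Tencent API) and considers MAU only, so feature requirements (for example, unlimited AVChatRoom groups start at the Pro Edition) would override its answer.

```python
# Cheapest Tencent Cloud Chat edition by free MAU tier.
# Prices and tiers transcribed from the pricing table; the Enterprise
# tier is quote-based ("Contact Us"), so its price is None.
PLANS = [
    ("Free Trial Edition", 0, 100),
    ("Professional Standard Edition", 399, 10_000),
    ("Pro Edition", 699, 10_000),
    ("Pro-plus Edition", 1299, 25_000),
    ("Enterprise Edition", None, 50_000),
]


def cheapest_plan(expected_mau):
    """Return the first (cheapest) edition whose free MAU tier covers
    expected_mau; fall back to Enterprise beyond the published tiers."""
    for name, _price, mau_tier in PLANS:
        if expected_mau <= mau_tier:
            return name
    return "Enterprise Edition"
```

For instance, an app expecting 20,000 MAU exceeds both 10,000-MAU tiers and lands on the Pro-plus Edition.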
https://www.linkedin.com/products/categories/ddos-protection-software?trk=organization_guest_product_card_category_link | Best DDoS Protection Software | Products | LinkedIn Used by: Chief Information Security Officer (11), Chief Information Officer (7), Head of Information Technology (6), Cyber Security Specialist (5), Chief Technology Officer (3) Find top products in the DDoS Protection Software category Software used to stop or prevent distributed denial-of-service (DDoS) attacks, which use a flood of hijacked nodes to interrupt a website's services.
- Identify DDoS attacks with IP monitoring and comparison against other signatures
- Redirect illegitimate traffic directed at websites and resources
- Use traffic load benchmarks to detect bandwidth abnormalities and maintain baselines
- Prevent downtime, sudden service failure, and delayed performance with filtering, scrubbing, and other traffic management techniques
64 results Cloudflare DDoS Protection DDoS Protection Software by Cloudflare Comprehensive DDoS protection built for anything connected to the Internet. Cloudflare DDoS protection secures websites, applications, and entire networks while ensuring the performance of legitimate traffic is not compromised. Cloudflare's 51 Tbps network blocks an average of 72 billion threats per day, including some of the largest DDoS attacks in history. View product Cloudflare Spectrum DDoS Protection Software by Cloudflare Cloudflare Spectrum increases TCP and UDP security and prevents DDoS attacks for gaming, mail, SSH, and other services.
View product Akamai Prolexic Routed DDoS Protection Software by Akamai Technologies Prolexic solutions provide fully managed DDoS protection for your applications, data centers, and network infrastructure. View product OVHcloud Anti-DDoS Protection DDoS Protection Software by OVHcloud Mitigate your risk of DDoS attacks and keep your services available at all times. Free with every OVH service. View product Kona DDoS Defender DDoS Protection Software by Akamai Technologies Always-on, managed DDoS protection service that stops DDoS attacks against critical websites and web applications, over HTTP and HTTPS. View product Kaspersky DDoS Protection DDoS Protection Software by Kaspersky Distributed denial-of-service (DDoS) attacks can impose financial costs on your business. Keep your service protected with Kaspersky DDoS Protection! View product Arbor Cloud DDoS Protection DDoS Protection Software by NETSCOUT Arbor Cloud combines on-premises DDoS defense with cloud-based traffic scrubbing services that are tightly integrated via an automated cloud signal. This multi-layered, adaptive DDoS protection approach is a proven industry best practice and is the only way to mitigate today's full spectrum of DDoS threats for both service providers and enterprises, all from a single cloud DDoS protection provider. View product Attack Mitigation System DDoS Protection Software by Radware Continuously adaptive, real-time DDoS services for the most sophisticated web security threats, through best-in-class cloud WAF and DDoS protection technologies.
View product A10 Defend - Intelligent & Automated DDoS Protection DDoS Protection Software by A10 Networks, Inc A10 Defend provides a holistic DDoS protection solution that is scalable, economical, precise, and intelligent to help customers ensure optimal user and subscriber experiences. View product Arbor Edge Defense DDoS Protection Software by NETSCOUT Arbor Edge Defense is an inline security appliance deployed at the network perimeter that can automatically detect and block inbound threats and outbound malicious communication using highly scalable, stateless technology and unique, global threat intelligence. View product
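The DDoS category description above mentions using traffic load benchmarks to detect bandwidth abnormalities against a maintained baseline. A minimal sketch of that idea, assuming a simple rolling mean and standard deviation as the baseline (the window size and threshold are illustrative choices, not any vendor's defaults):

```python
from collections import deque


def make_baseline_detector(window=60, threshold=3.0):
    """Flag samples that deviate from a rolling traffic baseline.

    Returns a function that takes a bandwidth sample (e.g. in Mbps) and
    returns True once the sample exceeds the rolling mean by `threshold`
    standard deviations. Purely illustrative of baseline-based detection.
    """
    samples = deque(maxlen=window)

    def check(sample):
        anomalous = False
        if len(samples) >= 10:  # require a minimal baseline first
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            std = var ** 0.5
            anomalous = std > 0 and (sample - mean) > threshold * std
        samples.append(sample)
        return anomalous

    return check
```

Real products layer this kind of volumetric baselining with signature matching, traffic redirection, and scrubbing, as the listings above describe.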
https://opensource.guide/bg/leadership-and-governance/ | Leadership and Governance | Open Source Guides Open Source Guides Leadership and Governance Growing open source projects can benefit from formal rules for making decisions. Contents: Understanding governance for your growing project; What are examples of formal roles used in open source projects?; How do I formalize these leadership roles?; When should I give someone commit access?; What are some common governance structures for open source projects?; Do I need governance docs when I launch my project?; What happens if corporate employees start submitting contributions? Understanding governance for your growing project Your project is growing, people are engaged, and you're committed to keeping it going. At this point, you may be wondering how to incorporate regular project contributors into your workflow, whether that means giving out commit access or resolving community debates. If you have questions, we have answers. What are examples of formal roles used in open source projects? Many projects follow a similar structure for contributor roles and recognition. What these roles actually mean, however, is entirely up to you. Here are a few types of roles you may recognize: Maintainer; Contributor; Committer. For some projects, "maintainers" are the only people in the project with commit access. In other projects, they're simply the people listed in the README as maintainers. A maintainer doesn't necessarily have to be someone who writes code for your project.
It may be someone who has done a lot of work evangelizing your project, or written documentation that made the project more accessible to others. Regardless of what they do day to day, a maintainer is probably someone who feels responsible for the direction of the project and is committed to improving it. A "contributor" could be anyone who comments on an issue or pull request, anyone who adds value to the project (whether by triaging issues, writing code, or organizing events), or anyone with a merged pull request (perhaps the narrowest definition of a contributor). [For Node.js,] every person who shows up to comment on an issue or submit code is a member of the project's community. Just being able to see them means they have crossed the line from user to contributor. (@mikeal, "Healthy Open Source") The term "committer" might be used to distinguish commit access, which is a specific type of responsibility, from other forms of contribution. Although you can define your project's roles however you'd like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You may know me as the "inventor" of Django…but really I'm the guy who got hired to work on a thing a year after it was already made. (…) People suspect I'm successful because of my programming skill…but at best I'm an average programmer. (@jacobian, "PyCon 2015 Keynote" (video)) How do I formalize these leadership roles? Formalizing your leadership roles helps people feel ownership and tells other community members who to look to for help.
For a smaller project, designating leaders can be as simple as adding their names to your README or a CONTRIBUTORS text file. For a bigger project, if you have a website, create a team page or list your project's leaders there. For example, Postgres has a comprehensive team page with short profiles for each contributor. If your project has a very active community of contributors, you might form a "core team" of maintainers, or even subcommittees of people who take ownership of different issue areas (for example, security, issue triaging, or community conduct). Let people self-organize and volunteer for the roles that excite them most, rather than assigning roles to them. [We] supplement the core team with several "subteams". Each subteam is focused on a specific area, for example, language design or libraries. (…) To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team. ("Rust Governance RFC") Leadership teams may want to create a designated channel (such as on IRC) or meet regularly to discuss the project (such as on Gitter or a Google Hangout). You can even make those meetings public so other people can listen in. Cucumber-ruby, for example, holds office hours every week. Once you've established leadership roles, don't forget to document how people can attain them! Establish a clear process for how someone can become a maintainer or join a subcommittee in your project, and write it down in your GOVERNANCE.md. Tools like Vossibility can help you publicly track who is (or isn't) contributing to the project. Documenting this information avoids the community perceiving maintainers as a clique that makes its decisions privately.
Finally, if your project is on GitHub, consider moving it from your personal account to an organization and adding at least one backup administrator. GitHub organizations make it easier to manage permissions and multiple repositories, and they protect your project's legacy through shared ownership. When should I give someone commit access? Some people think you should give commit access to everyone who makes a contribution. Doing so can encourage more people to feel ownership of your project. On the other hand, especially for bigger, more complex projects, you may want to give commit access only to people who have demonstrated their commitment. There's no one right way to do it - do whatever makes you most comfortable! If your project is on GitHub, you can use protected branches to manage who can push to a particular branch and under what circumstances. Whenever somebody sends you a pull request, give them commit access to your project. While it may sound incredibly stupid at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patch might go unmerged…which causes them to put much more work into it. (@felixge, "The Pull Request Hack") What are some common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL: BDFL stands for "Benevolent Dictator for Life". Under this structure, one person (usually the project's initial author) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers. A project that originated at a company might also fall into the BDFL category.
Meritocracy: (Note: the term "meritocracy" carries negative connotations for some communities and has a complex social and political history.) Under a meritocracy, active project contributors (those who demonstrate "merit") are given a formal decision-making role. Decisions are usually made by pure voting consensus. The meritocracy concept was pioneered by the Apache Foundation; all Apache projects are meritocracies. Contributions can only be made by individuals representing themselves, not by a company. Liberal contribution: Under a liberal contribution model, the people who do the most work are recognized as the most influential, but this is based on current work rather than historic contributions. Major project decisions are made through a consensus-seeking process (discussing major grievances) rather than a pure vote, and the aim is to include as many community perspectives as possible. Popular examples of projects that use a liberal contribution model include Node.js and Rust. Which one should you use? It's up to you! Every model has advantages and trade-offs. And although they may seem quite different at first, all three models have more in common than they appear to. If you're interested in adopting one of these models, check out these templates: BDFL model template; Meritocracy model template; Node.js's liberal contribution policy. Do I need governance docs when I launch my project? There's no right time to write down your project's governance, but it's much easier to define once you've seen your community's dynamics play out. The best (and hardest) part of open source governance is that it's shaped by the community! Some early documents will inevitably contribute to your project's governance, though, so start writing down what you can. For example, you can define clear expectations for behavior, or how your contributor process works, even at your project's launch.
If you're part of a company launching an open source project, it's worth having an internal discussion before launch about how your company expects to maintain the project and make decisions about its direction. You may also want to publicly explain anything specific about how your company will (or won't!) be involved in the project. We assign small teams to manage projects on GitHub that are actually working on them at Facebook. For example, React is managed by a React engineer. (@caabernathy, "An inside look at open source at Facebook") What happens if corporate employees start submitting contributions? Successful open source projects are used by many people and companies, and some companies may eventually have revenue streams tied to the project. For example, a company might use the project's code as one component of a commercial service offering. As the project becomes more widely used, people with expertise in it become more in demand - you may be one of them! - and will sometimes be paid for the work they do on the project. It's important to treat commercial activity as normal and as just another source of development energy. Of course, paid developers shouldn't get special treatment over unpaid ones; every contribution must be evaluated on its technical merits. However, people should feel comfortable engaging in commercial activity, and comfortable stating their use cases when arguing in favor of a particular enhancement or feature. "Commercial" is entirely compatible with "open source". "Commercial" just means there is money involved somewhere - that the software is used in commerce, which is increasingly likely as a project gains adoption.
(When open source software is used as part of a non-open-source product, the overall product is still "proprietary" software, although, like open source, it may be used for commercial or non-commercial purposes.) Like anyone else, commercially motivated developers gain influence in a project through the quality and quantity of their contributions. Obviously, a programmer who is paid for their time may be able to do more than someone who isn't, but that's fine: payment is just one of many possible factors that can affect how much someone does. Keep your project discussions focused on the contributions themselves, not on the external factors that enable people to make them. Do I need a legal entity to support my project? You don't need a legal entity to support your open source project unless you're handling money. For example, if you want to create a commercial business, you'll want to set up a C corp or an LLC (if you're based in the US). If you're just doing contract work related to your open source project, you can accept money as a sole proprietor or set up an LLC (if you're based in the US). If you want to accept donations for your open source project, you can set up a donation button (using PayPal or Stripe, for example), but the money won't be tax-deductible unless you're a qualifying nonprofit (a 501(c)(3), if you're in the US). Many projects don't want to go through the trouble of setting up a nonprofit, so they find a fiscal sponsor instead. A fiscal sponsor accepts donations on your behalf, usually in exchange for a percentage of each donation. Software Freedom Conservancy, the Apache Foundation, the Eclipse Foundation, the Linux Foundation, and Open Collective are examples of organizations that serve as fiscal sponsors for open source projects.
Our goal is to provide infrastructure that communities can use to be self-sustaining, thereby creating an environment where everyone - contributors, backers, sponsors - gets concrete benefits out of it. (@piamancini, "Moving beyond the charity framework") If your project is closely tied to a particular language or ecosystem, there may also be a related software foundation you can work with. For example, the Python Software Foundation helps support PyPI, the Python package manager, and the Node.js Foundation helps support Express.js, a Node-based framework. Back to all guides Related guides: Best Practices for Maintainers - making your life easier as an open source maintainer, from documenting processes to getting the most out of your community. Open Source Metrics - make informed decisions to help your open source project thrive by measuring and tracking its success. | 2026-01-13T09:29:15 |
https://www.linkedin.com/company/becordial?trk=products_details_guest_organization_image | Cordial | LinkedIn Cordial - Software Development - San Diego, CA - 48,215 followers Cordial automates billions of emails, SMS, and mobile app messages using all of your data. About us Cordial automates billions of data-driven, cross-channel messages to create lifetime customer connections for leading companies. Website: https://www.cordial.com/ Industry: Software Development Company size: 51-200 employees Headquarters: San Diego, CA Type: Privately Held Founded: 2014 Specialties: Email Marketing, Data Driven Marketing, Personalization, Marketing Automation, Cross Channel Marketing, Marketing Technology, Messaging Platform, Mobile Engagement, Customer Engagement, SMS Marketing, CDP, Mobile App Engagement, Mobile Marketing, Data Platform, and Loyalty Marketing Products: Cordial (Marketing Automation Software): Cordial automates billions of data-driven, cross-channel messages to create lifetime customer connections for leading companies. Location: 402 W. Broadway, Suite #700, San Diego, CA 92101, US Employees at Cordial: Rob Garf, Jeremy Swift, Rachel Frederick, Julien Sauvage Updates Cordial reposted Julien Sauvage (1d): Look who's RIGHT THERE, right in front of Javits for National Retail Federation. ☕️ and 🥐 on us. Cordial reposted Julien Sauvage (19h): Good timing gets noticed. Good location, too, in front of National Retail Federation. See you there?
☕️ and croissant 🥐 on us …more 24 3 Comments Like Comment Share Cordial reposted this Julien Sauvage 🥖 14h Report this post There’s no such thing as “too many croissants” 🥐 Last 30 min of our Café de l’Attention here in front of Javits in NYC let’s goooooo ☕️ 30 13 Comments Like Comment Share Cordial 48,215 followers 14h Report this post Coming to you live from #NRF2026 with Julien Sauvage 🥖 and Rob Garf ... and this year it's all about AI agents. We heard a great piece of advice for retailers from our keynote speaker, Jason Goldberg , at #CordialConnect last night: make sure you can be easily discoverable where consumers are, as the gap between discovery and the buy button continues to shrink. …more 18 2 Comments Like Comment Share Cordial 48,215 followers 17h Report this post Le Café de l'Attention at #NRF is still open for a few more hours… stop by and grab a complimentary coffee ☕, pastry 🥐, and baguette sandwich 🥖 📍 Right in front of Javits Center 🕘 Open until 2 PM 👋 See you there! National Retail Federation #NRF2026 Rob Garf Julien Sauvage 🥖 …more 12 1 Comment Like Comment Share Cordial 48,215 followers 19h Report this post AI that does the work. Not just suggests it. Today, we're introducing two new AI agents as part of Cordial Agents - a system built to ground, govern, execute, and coordinate real marketing work inside the platform. 📧 The Email Production Agent handles execution-heavy email work: personalization, audience logic, message orchestration, and measurement. Outputs are validated against real customer profiles before they run. 📊 The Data Intelligence Agent monitors campaign and audience performance as it happens - identifying trends, surfacing issues early, and recommending next actions while campaigns are still running. Both operate within a governed framework with built-in quality checks and enforceable guardrails. 
Read the full press release → https://lnkd.in/eBBJu5en 3 1 Comment Like Comment Share Cordial reposted this Julien Sauvage 🥖 1d Report this post Le Café de l’Attention is still open for an hour or so… Good vibes. Good coffee. Good QRoissants 🥐 60 2 Comments Like Comment Share Cordial 48,215 followers 1d Edited Report this post At #NRF ? Come find us at Le Café de l'Attention today and tomorrow. We’re in front of Javits Center with complimentary coffee ☕, pastries 🥐, and baguette sandwiches 🥖 for #NRF attendees. While you're there, grab the latest edition of the Cordial Times 📰 , get a first look at what we're launching, and learn how leading brands are winning the battle for attention. 📍 Corner of 37th and 11th 📅 Sunday, January 11 & Monday, January 12 🕘 9 AM – 2 PM 👋 See you there! National Retail Federation #NRF2026 22 1 Comment Like Comment Share Cordial 48,215 followers 1d Report this post NRF kicks off today! Need a coffee before hitting the show floor? We've got you covered. Stop by either of these coffee shops near Javits, show them your NRF badge, and grab a drink on us. ☕ Paradis in the Park Hudson Yards, 524 W 36th St. 9 AM – 3 PM, Sunday & Monday ☕ Enoch's 480 10th Ave Sunday 8 AM – 5 PM, Monday 7 AM – 6 PM See you out there! 7 1 Comment Like Comment Share Cordial 48,215 followers 3d Report this post Heading to #NRF ? Come find us at Le Café de l'Attention We're pulling up in front of Javits Center with complimentary coffee ☕, pastries 🥐, and baguette sandwiches 🥖 for NRF attendees. While you're there, grab the latest edition of the Cordial Times 📰 , get a first look at what we're launching, and learn how leading brands are winning the battle for attention. 📍 In front of Javits Center 📅 Sunday, January 11 & Monday, January 12 🕘 9 AM – 2 PM 👋 See you there! 
15 2 Comments Like Comment Share Join now to see what you are missing Find people you know at Cordial Browse recommended jobs for you View all updates, news, and articles Join now Similar pages Iterable Software Development San Francisco, California Braze Software Development New York, New York Attentive Advertising Services Cordial AB Business Consulting and Services Klaviyo Marketing Services Boston, Massachusetts Clari Software Development Sunnyvale, CA Listrak Advertising Services Lititz, PA Gong Software Development San Francisco, California Bloomreach Software Development Mountain View, California Hightouch Software Development San Francisco, CA Show more similar pages Show fewer similar pages Browse jobs Manager jobs 1,880,925 open jobs Director jobs 1,220,357 open jobs Analyst jobs 694,057 open jobs Account Executive jobs 71,457 open jobs Financial Planning and Analysis Manager jobs 18,291 open jobs Developer jobs 258,935 open jobs Vice President jobs 235,270 open jobs Senior Manager jobs 181,836 open jobs Engineer jobs 555,845 open jobs Senior Sales Director jobs 45,162 open jobs Solutions Consultant jobs 62,819 open jobs Controller jobs 74,523 open jobs Cordial jobs 6,030 open jobs Director of Operations jobs 61,881 open jobs Accountant jobs 91,739 open jobs Sales Manager jobs 310,050 open jobs Consultant jobs 760,907 open jobs Financial Analyst jobs 102,246 open jobs Sales Director jobs 83,949 open jobs Solutions Engineer jobs 92,218 open jobs Show more jobs like this Show fewer jobs like this More searches More searches Cordial jobs Manager jobs Analyst jobs Solutions Consultant jobs Financial Planning and Analysis Manager jobs Engineer jobs Account Executive jobs Director jobs Associate jobs Marketing Manager jobs Sales Director jobs Machine Learning Engineer jobs Quality Engineer jobs Vice President Operations jobs Accountant jobs Senior Software Engineer jobs Senior Sales Engineer jobs Marketing Director jobs President jobs Senior Vice President jobs 
Senior Manager jobs Chief Legal Officer jobs Controller jobs Writer jobs Implementation Manager jobs Financial Planning Specialist jobs Quality Assurance Analyst jobs Account Manager jobs Vice President jobs Vice President of Customer Support jobs Strategist jobs Director of Engineering jobs Specialist jobs Solutions Engineer jobs Cyber Security Specialist jobs Benefits Specialist jobs Veterinarian jobs Corporate Development Specialist jobs Software Engineer jobs Strategic Planning Manager jobs Finance Business Partner jobs Senior Vice President Marketing jobs Builder jobs Accounting Officer jobs Senior Product Designer jobs Marketing Specialist jobs Chief Administrative Officer jobs Chief Marketing Officer jobs Enterprise Account Executive jobs Senior Director jobs Designer jobs Consultant jobs Sales Development Representative jobs Vice President of Services jobs Medical Specialist jobs Senior Product Marketing Manager jobs Retention Marketing Manager jobs Senior Finance Manager jobs Information Technology System Engineer jobs Design Manager jobs LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines العربية (Arabic) বাংলা (Bangla) Čeština (Czech) Dansk (Danish) Deutsch (German) Ελληνικά (Greek) English (English) Español (Spanish) فارسی (Persian) Suomi (Finnish) Français (French) हिंदी (Hindi) Magyar (Hungarian) Bahasa Indonesia (Indonesian) Italiano (Italian) עברית (Hebrew) 日本語 (Japanese) 한국어 (Korean) मराठी (Marathi) Bahasa Malaysia (Malay) Nederlands (Dutch) Norsk (Norwegian) ਪੰਜਾਬੀ (Punjabi) Polski (Polish) Português (Portuguese) Română (Romanian) Русский (Russian) Svenska (Swedish) తెలుగు (Telugu) ภาษาไทย (Thai) Tagalog (Tagalog) Türkçe (Turkish) Українська (Ukrainian) Tiếng Việt (Vietnamese) 简体中文 (Chinese (Simplified)) 正體中文 (Chinese (Traditional)) Language Agree & Join LinkedIn By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement , Privacy 
Policy , and Cookie Policy . Sign in to see who you already know at Cordial Sign in Welcome back Email or phone Password Show Forgot password? Sign in or By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement , Privacy Policy , and Cookie Policy . New to LinkedIn? Join now or New to LinkedIn? Join now By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement , Privacy Policy , and Cookie Policy . | 2026-01-13T09:29:15 |
https://www.atlassian.com/licensing/bamboo | Bamboo Licensing | Atlassian
Bamboo Pricing & Licensing
Our FAQ contains the most frequently asked questions relating to our licensing and pricing. Of course, if you would like to talk to us directly, please contact our Customer Advocate Team.
What is software maintenance?
Software maintenance, included in your first year of use, gives you the following benefits:
- Security patches
- Critical bug fixes
- New features & enhancements
- Access to Atlassian's world-class Support team for technical troubleshooting
- Developer license keys (lets you test the upgrades before committing)
- Potential to enroll in Beta Programs
In order for your software maintenance to remain active year-round, you must purchase it annually.
Learn more about our renewal terms for software maintenance above. We also suggest making sure your business processes allow for frequent upgrades so your team can continue reaping the benefits enjoyed by other world-class teams.
How do I request a quote?
Requesting an obligation-free quote is easy! You will just need to submit your request through the Atlassian Store. In the final step, check Get a Quote and click Submit. Only quotes submitted through our online form are considered valid; screenshots of the shopping cart are not the same as a quote.
How is Bamboo Data Center licensed?
A Bamboo commercial, academic, or starter license entitles you to:
- Deploy Bamboo in a single-node or a clustered multi-node setup with an unlimited number of secondary nodes
- Use remote, elastic, or ephemeral agents as per edition
- Unlimited projects and users
- Software maintenance for 12 months, including all updates and online support
- Application source code (under a developer source license which allows for broad customization)
An academic license is available for qualified educational and academic institutions (learn more).
What are agents?
A Bamboo agent is a service that provides capabilities to run job builds. Bamboo includes the following types of agents:
- Remote agents, which run in their own separate processes (that is, have their own JVM).
- Elastic agents, which run in the Amazon Elastic Compute Cloud (EC2).
- Ephemeral agents, which are short-lived remote agents that start on demand inside a Kubernetes cluster to carry out a single build or deployment before being shut down.
Each agent has a defined set of capabilities. An agent's capabilities enable it to run build and deployment jobs with matching requirements. Agents communicate with Bamboo using the Java Message Service (JMS).
What is the difference between online/offline/disabled agents?
A Bamboo agent is a service that provides capabilities to run job builds.
- Offline agents are disconnected from Bamboo and do not affect billing or licensing.
- Online agents are connected to Bamboo and consume a license.
- Disabled is a status set on the agent by an administrator, meaning the agent cannot be utilized by builds; however, if such agents are connected to the server, they are still considered online and will consume a license.
Bamboo Data Center
What is Data Center?
Data Center is Atlassian's self-managed offering, available for Jira Software, Jira Service Management, Confluence, Bitbucket, Bamboo, and Crowd. The end of life for impacted Data Center products will take place on March 28, 2029 at 23:59 PST. Data Center subscriptions and any associated Marketplace apps will expire on this date, making Data Center products and apps read-only. You can find a full timeline, details on this transition, and links to access support here.
What does Bamboo Data Center cost?
Please refer to Bamboo Pricing for complete pricing details.
What level of support is provided with Bamboo Data Center?
Priority Support is included with your Bamboo Data Center subscription for customers with 100 or more agents.
What will replace Bamboo in cloud?
During the 3-year transition until Data Center EOL, Bamboo will remain available for your CI/CD needs.
| 2026-01-13T09:29:15 |
https://kabirk.com/wimmelbench#2-bounding-box-precision | How well can LLMs see? · Kabir Khandpur
How well can LLMs see?
21 Nov 2024
TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.
Introduction
I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities, reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for "teeming picture".
A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.
While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench.
Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.
The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13, and Claude 3.6 Sonnet and GPT-4o scoring -0.60 and -0.67 respectively.
In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.
Aside: why not SAM?
Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, and bounding boxes, but not text, as prompts.
SAM segmenting an image
SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that LISA would score well on Wimmelbench. So why benchmark large language models?
- It tests how well the most general-purpose, human-like models we have today understand images.
- I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs.
- I didn't have easy access to the SAM-based models.
- It seemed like a fun project.
Benchmark construction
I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish, since the style is so popular in children's books.
To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects.
The annotation tool
In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file.
To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1).
Figure 1: Distribution of (ground truth bounding box area / image area)
The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
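The area ratio plotted in figure 1 is simply bounding-box area divided by image area. A minimal sketch (the helper name and the pixel-coordinate (x1, y1, x2, y2) box format are my assumptions, not taken from the repo):

```python
def area_ratio(box, img_w, img_h):
    """Bounding-box area as a fraction of the total image area.

    `box` is (x1, y1, x2, y2) in pixels (hypothetical format).
    """
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (img_w * img_h)

# A 100x100 box in a 1000x1000 image covers 1% of it:
print(area_ratio((0, 0, 100, 100), 1000, 1000))  # 0.01
```

Most ground-truth boxes in the dataset would fall between 0.001 and 0.01 by this measure.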
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.
Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.
I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1].
I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where:
0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Largely correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.
I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement: looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code, and results in the GitHub repository.
Results
1. Description accuracy
All models scored a grade of 3 for about 30% of object descriptions.
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).
Figure 2: Grade distribution
2. Bounding box precision
The models are much worse at predicting bounding box coordinates. For each object, I plotted the model's Generalized Intersection over Union (GIoU) between the predicted and ground truth bounding boxes (figure 3), where GIoU = |A ∩ B| / |A ∪ B| - |C \ (A ∪ B)| / |C|, and C is the smallest convex hull that encloses both A and B. GIoU scores range from -1 to 1, where 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.
Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.
Figure 3: Distribution of GIoU scores
3. Hallucination rate
To test how much the models would pretend to see objects that weren't there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.
Figure 4: Hallucination rate
Analysis
Takeaways
- Gemini Pro scores best on both description and bounding box accuracy.
- Models can often describe objects well, but struggle to locate them precisely.
- Object size strongly influences bounding box accuracy, but not description quality.
Size sometimes matters
Bounding box accuracy showed a strong correlation with object size (figure 5). R^2 was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
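The GIoU metric is straightforward to compute for axis-aligned boxes, where the smallest enclosing convex hull C is just the smallest enclosing box. A minimal sketch, not the benchmark's actual code (the (x1, y1, x2, y2) box format is an assumption):

```python
def giou(a, b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area (zero if the boxes don't overlap)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union area
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # Smallest box enclosing both (the convex hull C for axis-aligned boxes)
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    # IoU minus the fraction of C not covered by the union
    return inter / union - (c - union) / c

print(giou((0, 0, 1, 1), (0, 0, 1, 1)))  # 1.0: perfect overlap
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))  # negative: disjoint boxes
```

Unlike plain IoU, disjoint predictions are penalized by how far apart they are, which is how scores like -0.94 in the appendix arise.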
Figure 5: GIoU vs area ratio
Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) more equally at describing objects across a range of sizes.
Figure 6: Grade vs area ratio
Grade vs bounding box accuracy
Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.
Figure 7: Grade vs GIoU
This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:
"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."
But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn't happen: if someone could see an object, they could draw a bounding box around it.
Maybe this is a prompt issue? I wouldn't be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn't a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results.
Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
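As a concrete aside on the two coordinate conventions mentioned earlier, converting Gemini's [ymin, xmin, ymax, xmax] output on the [0, 1000] scale into the [x1, y1, x2, y2], [0, 1] format used for the other models is just a reorder and rescale (the function name is mine; the two formats are as described in this post):

```python
def gemini_to_standard(box):
    """Map [ymin, xmin, ymax, xmax] in [0, 1000] to [x1, y1, x2, y2] in [0, 1]."""
    ymin, xmin, ymax, xmax = box
    return [xmin / 1000, ymin / 1000, xmax / 1000, ymax / 1000]

print(gemini_to_standard([100, 200, 300, 400]))  # [0.2, 0.1, 0.4, 0.3]
```

Getting the axis order wrong silently produces a plausible-looking but incorrect box, so the conversion is worth unit-testing before comparing GIoU across models.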
Gemini's stronger performance also suggests that there are post-training steps that can be taken to improve accuracy; the Gemini docs note that "for object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights".
I don't understand the description accuracy variance
Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the 'couple kissing':
"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."
On the other hand, it completely fails to find the lifebuoy in the picture below, even though it's roughly the same size as the previous object.
"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."
The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don't understand what causes the difference in ability.
Future work
If I had more time...
- Expand the dataset to several hundred objects.
- Test with more LLMs and also specialized vision models like SAM.
- Improve prompting strategies for bounding box prediction.
Conclusion
Wimmelbench convinced me that the big labs' language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I'd love to hear your thoughts; feel free to email me.
Appendix: example results Claude Gemini GPT-4o Click to zoom in. Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above teal awning. There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the ‘EAT ME’ café in the scene. 0 -0.89 Elephant There is an elephant standing on two legs near the bottom-middle of the image. Its is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that’s set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters. 2 -0.51 Click to zoom in. Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above teal awning. A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill. 3 0.52 Elephant There is an elephant standing on two legs near the bottom-middle of the image. 
It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image’s center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant’s right, a light gray cow seems to be giving a presentation while holding a microphone. 3 0.70 Object Ground truth description Model description Grade GIoU Cactus A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning. The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals. 0 -0.98 Elephant There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby. The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting. 3 0.31 Claude Gemini GPT-4o Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He’s wearing what appears to be blue clothing and is positioned near the green paneled wall. He’s surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion. 0 -0.94 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene. 0 -0.94 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling. 1 0.04 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. A bowling ball can be seen lying on the floor amidst the chaotic scene.
It’s located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It’s spatially close to a scattered pile of what appear to be pins, suggesting that it’s part of the chaotic activity in the depicted scene. 0 -0.99 Object Ground truth description Model description Grade GIoU Man waving a white flag A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other. The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden. 0 -0.98 Bowling ball A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man’s hand. The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos. 0 -0.85 | 2026-01-13T09:29:15 |
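The GIoU scores in the tables above can be reproduced from pixel-space boxes. This is a minimal generic sketch of Generalized IoU, not the benchmark's actual scoring code; the function name and the (x1, y1, x2, y2) box format are my own choices:

```python
def giou(box_a, box_b):
    """Generalized IoU between two axis-aligned boxes (x1, y1, x2, y2).

    Returns a value in (-1, 1]: 1.0 for identical boxes, values near -1
    for distant non-overlapping boxes. Assumes non-degenerate boxes.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area (zero if the boxes do not overlap).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest box enclosing both; GIoU penalizes the empty space in it.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area
```

The enclosing-box penalty is what makes the large negative scores above informative: plain IoU would report an uninformative 0.0 for every completely missed prediction.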
https://www.linkedin.com/products/datacore-software-datacore-sansymphony/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | DataCore SANsymphony | LinkedIn DataCore SANsymphony Block Storage Software by DataCore Software About DataCore SANsymphony is a software-defined storage platform that unifies and safeguards diverse storage systems across SAN and HCI environments. It delivers continuous data availability and cyber resilience through automated business continuity and disaster recovery workflows and continuous data protection. SANsymphony keeps your infrastructure secure and recoverable from outages, hardware failures, or ransomware—ensuring mission-critical workloads stay online and your data always remains in control. Built for flexibility, performance, and efficiency, SANsymphony lets you modernize without disruption—integrating new storage technologies seamlessly while maximizing the value of existing assets. Its intelligent caching, auto-tiering, and parallel I/O acceleration drive consistent high-speed performance, while centralized management and automation reduce operational overhead and lower TCO. DataCore SANsymphony gives enterprises true storage freedom—resilient, optimized, and future-ready.
This product is intended for: Information Technology Manager, Storage Engineer, Storage Administrator, Information Technology Technician, Information Technology Specialist, Information Communication Technology Manager, Information Technology Consultant, Director of Information Technology, Chief Information Officer, Chief Technology Officer. Media: SANsymphony Management Console: Automatically tier data across storage devices based on data access temperatures. Store the most accessed (hot) data on the fastest tier, such as flash/NVMe, and warm/cold data on slower storage systems. Synchronous Mirroring for High Availability: SANsymphony mirrors data synchronously across local and metro clusters. Automatic and transparent failover, resynchronization, and fallback ensure zero RPO and RTO while eliminating downtime and guaranteeing 100% application and data availability and access. Three Lines of Defense with SANsymphony: Enhanced Cybersecurity to Protect Your Data. Similar products (Block Storage Software): Tencent Cloud Block Storage, OVHcloud Block Storage, DigitalOcean Block Storage, Vultr, Vultr Bare Metal, Vultr Block Storage. DataCore Software products: DataCore Nexus (Cloud File Storage Software), DataCore Puls8 (Container Management Software), DataCore Swarm (Object Storage Software), StarWind HCI (Hyper-Converged Infrastructure (HCI)). | 2026-01-13T09:29:15 |
https://opensource.guide/es/leadership-and-governance/ | Leadership and Governance | Open Source Guides Open Source Guides Leadership and Governance Growing open source projects can benefit from formal rules for making decisions. Table of contents Understanding governance for your growing project What are examples of formal roles used in open source projects? How do I formalize leadership roles? When should I give someone commit access? What are some common governance structures for open source projects? Do I need governance docs when I launch my project? What happens when corporate employees start submitting contributions? Understanding governance for your growing project Your project is growing, people are engaged, and you're committed to keeping this going. At this stage, you may be wondering how to incorporate regular project contributors into your workflow, whether that's giving someone commit access or resolving community debates. If you have questions, we have answers. What are examples of formal roles used in open source projects? Many projects follow similar structures for recognizing and assigning contributor roles. What these roles mean is up to you. Here are a few types of roles you may recognize: Maintainer Contributor Committer For some projects, "maintainers" are the only people in the project with commit access. In other projects, they are simply the people who are listed in the README.md file as maintainers.
A maintainer doesn't necessarily have to be someone who writes code for your project. It could be someone who has done a lot of work evangelizing your project, or someone who wrote documentation that made the project more accessible to others. Regardless of what they do day-to-day, a maintainer is probably someone who feels responsible for the direction of the project and is committed to improving it. A "contributor" could be anyone who comments on an issue or pull request, anyone who adds value to the project (whether it's triaging issues, writing code, or organizing events), or anybody with a merged pull request (this is the narrowest definition of a contributor). [For Node.js,] every person who shows up to comment on an issue or submit code is a member of a project's community. Just being able to see them means they have crossed the line from being a user to being a contributor. — @mikeal, "Healthy Open Source" The term "committer" might be used to distinguish commit access, which is a specific type of responsibility, from other forms of contribution. While you can define your project roles any way you'd like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You may know me as the "inventor" of Django… but really I'm the guy who got hired to work on something a year after it was already built. (…) People suspect that I'm successful because of my programming ability… but I'm at best an average programmer. — @jacobian, "PyCon 2015 Keynote" (video) How do I formalize leadership roles?
Formalizing your leadership roles helps people feel ownership and tells other community members who to look to for help. For a smaller project, designating leaders can be as simple as adding their names to your README or a CONTRIBUTORS text file. For a bigger project, if you have a website, create a team page or list your project leaders there. For example, PostgreSQL has a comprehensive team page with short profiles for each contributor. If your project has a very active contributor community, you may want to form a "core team" of maintainers, or even subcommittees of people who take ownership of different issue areas (for example, security, issue triaging, or community conduct). Let people self-organize and volunteer for the roles they're most excited about, rather than assigning them. We supplement the core team with several "subteams". Each subteam is focused on a specific area, for example, language design or libraries. (…) To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team. — "Rust Governance RFC" Leadership teams may want to create a designated channel (like on IRC) or meet regularly to discuss the project (like on Gitter or Google Hangout). You can even make those meetings public so other people can listen. Cucumber-ruby, for example, hosts office hours every week. Once you've established leadership roles, don't forget to document how people can attain them! Establish a clear process for how someone can become a maintainer or join a subcommittee in your project, and write it into your GOVERNANCE.md.
Tools like Vossibility can help you publicly track who is (or isn't) making contributions to the project. Documenting this information avoids the community perception that maintainers are a clique that makes its decisions privately. Finally, if your project is on GitHub, consider moving the project from your personal account to an organization, and adding at least one backup administrator. GitHub organizations make it easier to manage permissions and multiple repositories, and they protect your project's legacy through shared ownership. When should I give someone commit access? Some people think you should give commit access to everybody who makes a contribution. Doing so could encourage more people to feel ownership over your project. On the other hand, especially for bigger, more complex projects, you may want to give commit access only to people who have demonstrated their commitment. There's no one right way of doing this: do what makes you most comfortable! If your project is on GitHub, you can use protected branches to manage who can push to a particular branch, and under which circumstances. Whenever someone sends you a pull request, give them commit access to your project. While it may sound incredibly stupid at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patch might go unmerged… causing them to put much more work into it. — @felixge, "The Pull Request Hack" What are some common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL: BDFL stands for "Benevolent Dictator for Life".
Under this structure, one person (usually the project's initial author) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers. A project that originated at a company might also fall into the BDFL category. Meritocracy: (Note: the term "meritocracy" carries negative connotations for some communities and has a complex social and political history.) Under a meritocracy, active project contributors (those who demonstrate "merit") are given a formal decision-making role. Decisions are usually made based on pure voting consensus. The meritocracy concept was pioneered by the Apache Foundation; all Apache projects are meritocracies. Contributions can only be made by individuals representing themselves, not by a company. Liberal contribution: Under a liberal contribution model, the people who do the most work are recognized as most influential, but this is based on current work and not historic contributions. Major project decisions are made based on a consensus-seeking process (discussing major grievances) rather than pure vote, and they aim to include as many community perspectives as possible. Popular examples of projects that use a liberal contribution model include Node.js and Rust. Which one should you use? It's up to you! Every model comes with advantages and trade-offs. And although they may seem quite different at first, all three models have more in common than they appear to. If you're interested in adopting one of these models, check out these templates: BDFL model template Meritocracy model template Node.js's liberal contribution policy Do I need governance docs when I launch my project?
There is no right time to document your project's governance, but it's much easier to define once you've seen your community dynamics play out. The best (and hardest) part about open source governance is that it is shaped by the community! Some early documentation will inevitably contribute to your project's governance, however, so start writing down what you can. For example, you can define clear expectations of behavior, or how your contributor process works, even at your project's launch. If you're part of a company launching an open source project, it's worth having an internal discussion before launch about how your company expects to maintain and make decisions about the project moving forward. You may also want to publicly explain anything particular to how your company will (or won't!) be involved with the project. We assign small teams to manage the projects on GitHub that they are currently working on at Facebook. For example, React is run by a React engineer. — @caabernathy, "An Inside Look at Open Source at Facebook" What happens when corporate employees start submitting contributions? Successful open source projects get used by many people and companies, and some companies may eventually have revenue streams ultimately tied to the project. For example, a company may use the project's code as a component in a commercial service offering. As the project gets more widely used, people with expertise in it become more in demand — you may be one of them! — and they are sometimes paid for the work they do on the project. It's important to treat commercial activity as normal, and as just another source of development energy.
Of course, paid developers shouldn't receive special treatment over unpaid ones. Each contribution must be evaluated on its technical merits. However, people should feel comfortable engaging in commercial activity, and feel comfortable stating their use cases when arguing for a particular enhancement or feature. "Commercial" is fully compatible with "open source". "Commercial" just means there is money involved somewhere — that the software is used in commerce, which is increasingly likely as a project gains adoption. (When open source software is used as part of a non-open-source product, the overall product is still "proprietary" software, although, like open source, it might be used for commercial or non-commercial purposes.) Like anyone else, commercially motivated developers gain influence in the project through the quality and quantity of their contributions. Obviously, a developer who is paid for their time may be able to do more than someone who isn't paid, but that's fine: payment is just one of many possible factors that could affect how much a person does. Keep project discussions focused on the contributions, not on the external factors that enable people to make those contributions. Do I need a legal entity to support my project? You don't need a legal entity to support your open source project unless you're handling money. For example, if you want to start a commercial business, you'll want to set up a C Corp or LLC (if you're based in the US). If you're just doing contract work related to your open source project, you can accept money as a sole proprietor, or set up an LLC (if you're based in the US).
If you want to accept donations for your open source project, you can set up a donate button (using PayPal or Stripe, for example), but the money won't be tax-deductible unless you are a qualifying nonprofit (a 501c3, if you're in the US). Many projects don't wish to go through the trouble of setting up a nonprofit, so they find a nonprofit fiscal sponsor instead. A fiscal sponsor accepts donations on your behalf, usually in exchange for a percentage of the donation. Software Freedom Conservancy, Apache Foundation, Eclipse Foundation, Linux Foundation, and Open Collective are examples of organizations that serve as fiscal sponsors for open source projects. Our goal is to provide an infrastructure that communities can use to be self-sustaining, thus creating an environment in which everyone — contributors, backers, sponsors — gets concrete benefits. — @piamancini, "Moving beyond the charity framework" If your project is closely associated with a certain language or ecosystem, there may also be a related software foundation you can work with. For example, the Python Software Foundation helps support PyPI, the Python package manager, and the Node.js Foundation helps support Express.js, a Node-based framework. Related guides: Best Practices for Maintainers. Making your life easier as an open source maintainer, from documenting processes to leveraging your community. Open Source Metrics. Make informed decisions to help your open source project thrive by measuring and tracking its success. Have an idea? This content is open source. Help us improve it.
| 2026-01-13T09:29:15 |
https://www.linkedin.com/products/cisco--hybrid-cloud/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | Cisco+ Hybrid Cloud | LinkedIn Cisco+ Hybrid Cloud Enterprise Cloud Platforms by Cisco About Cisco+ Hybrid Cloud invites you to experience a new way to manage hybrid cloud with a simplified, consumption-based solution. Similar products (Enterprise Cloud Platforms): Workday, Oracle Cloud Infrastructure (OCI), Cloud transformation, Nutanix Central, Nutanix Kubernetes Platform, Nutanix Enterprise AI. Cisco products: Cisco Anyconnect Secure Mobility Client (Virtual Private Network (VPN) Software), Cisco Cloudlock (Cloud Access Security Broker (CASB) Software), Cisco Meraki Systems Manager (Mobile Device Management (MDM) Software), Cisco Secure Malware Analytics (Threat Intelligence Platforms), Cisco Security Cloud (Cloud Workload Protection Platforms), Cisco ThousandEyes (Application Performance Monitoring (APM) Software), Cisco Umbrella (Secure Web Gateways), Cisco Unified IP Interactive Voice Response (Interactive Voice Response (IVR) Software), Cisco Webex Meetings (Video Conferencing Software). | 2026-01-13T09:29:15 |
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.insert | Vec in std::vec - Rust std 1.92.0 (ded5c06cf 2025-12-08) std::vec Struct Vec 1.0.0 · Source pub struct Vec<T, A = Global> where A:
Allocator, { /* private fields */ } A contiguous growable array type, written as Vec<T>, short for ‘vector’. § Examples let mut vec = Vec::new(); vec.push(1); vec.push(2); assert_eq!(vec.len(), 2); assert_eq!(vec[0], 1); assert_eq!(vec.pop(), Some(2)); assert_eq!(vec.len(), 1); vec[0] = 7; assert_eq!(vec[0], 7); vec.extend([1, 2, 3]); for x in &vec { println!("{x}"); } assert_eq!(vec, [7, 1, 2, 3]); The vec! macro is provided for convenient initialization: let mut vec1 = vec![1, 2, 3]; vec1.push(4); let vec2 = Vec::from([1, 2, 3, 4]); assert_eq!(vec1, vec2); It can also initialize each element of a Vec<T> with a given value. This may be more efficient than performing allocation and initialization in separate steps, especially when initializing a vector of zeros: let vec = vec![0; 5]; assert_eq!(vec, [0, 0, 0, 0, 0]); // The following is equivalent, but potentially slower: let mut vec = Vec::with_capacity(5); vec.resize(5, 0); assert_eq!(vec, [0, 0, 0, 0, 0]); For more information, see Capacity and Reallocation. Use a Vec<T> as an efficient stack: let mut stack = Vec::new(); stack.push(1); stack.push(2); stack.push(3); while let Some(top) = stack.pop() { // Prints 3, 2, 1 println!("{top}"); } § Indexing The Vec type allows access to values by index, because it implements the Index trait. An example will be more explicit: let v = vec![0, 2, 4, 6]; println!("{}", v[1]); // it will display '2' However be careful: if you try to access an index which isn’t in the Vec, your software will panic! You cannot do this: let v = vec![0, 2, 4, 6]; println!("{}", v[6]); // it will panic! Use get and get_mut if you want to check whether the index is in the Vec. § Slicing A Vec can be mutable. On the other hand, slices are read-only objects. To get a slice, use &. Example: fn read_slice(slice: &[usize]) { // ... } let v = vec!
[ 0 , 1 ]; read_slice( & v); // ... and that's all! // you can also do it like this: let u: & [usize] = & v; // or like this: let u: & [ _ ] = & v; In Rust, it’s more common to pass slices as arguments rather than vectors when you just want to provide read access. The same goes for String and &str . § Capacity and reallocation The capacity of a vector is the amount of space allocated for any future elements that will be added onto the vector. This is not to be confused with the length of a vector, which specifies the number of actual elements within the vector. If a vector’s length exceeds its capacity, its capacity will automatically be increased, but its elements will have to be reallocated. For example, a vector with capacity 10 and length 0 would be an empty vector with space for 10 more elements. Pushing 10 or fewer elements onto the vector will not change its capacity or cause reallocation to occur. However, if the vector’s length is increased to 11, it will have to reallocate, which can be slow. For this reason, it is recommended to use Vec::with_capacity whenever possible to specify how big the vector is expected to get. § Guarantees Due to its incredibly fundamental nature, Vec makes a lot of guarantees about its design. This ensures that it’s as low-overhead as possible in the general case, and can be correctly manipulated in primitive ways by unsafe code. Note that these guarantees refer to an unqualified Vec<T> . If additional type parameters are added (e.g., to support custom allocators), overriding their defaults may change the behavior. Most fundamentally, Vec is and always will be a (pointer, capacity, length) triplet. No more, no less. The order of these fields is completely unspecified, and you should use the appropriate methods to modify these. The pointer will never be null, so this type is null-pointer-optimized. However, the pointer might not actually point to allocated memory. 
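The null-pointer guarantee is what enables the niche optimization for Option<Vec<T>>. A minimal sketch (not part of the original reference text; the Option layout holds on current rustc but is not a formal language guarantee for Vec):

```rust
use std::mem::size_of;

fn main() {
    // Because a Vec's pointer is never null, `None` can be encoded as
    // the null pointer: Option<Vec<T>> takes no extra space in practice.
    assert_eq!(size_of::<Vec<u8>>(), size_of::<Option<Vec<u8>>>());

    // The triplet itself is three pointer-sized fields
    // (pointer, capacity, length).
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
}
```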
In particular, if you construct a Vec with capacity 0 via Vec::new, vec![], Vec::with_capacity(0), or by calling shrink_to_fit on an empty Vec, it will not allocate memory. Similarly, if you store zero-sized types inside a Vec, it will not allocate space for them. Note that in this case the Vec might not report a capacity of 0. Vec will allocate if and only if size_of::<T>() * capacity() > 0. In general, Vec's allocation details are very subtle: if you intend to allocate memory using a Vec and use it for something else (either to pass to unsafe code, or to build your own memory-backed collection), be sure to deallocate this memory by using from_raw_parts to recover the Vec and then dropping it.

If a Vec has allocated memory, then the memory it points to is on the heap (as defined by the allocator Rust is configured to use by default), and its pointer points to len initialized, contiguous elements in order (what you would see if you coerced it to a slice), followed by capacity - len logically uninitialized, contiguous elements.

A vector containing the elements 'a' and 'b' with capacity 4 can be visualized as below. The top part is the Vec struct; it contains a pointer to the head of the allocation in the heap, the length, and the capacity. The bottom part is the allocation on the heap, a contiguous memory block.

            ptr      len  capacity
       +--------+--------+--------+
       | 0x0123 |      2 |      4 |
       +--------+--------+--------+
            |
            v
Heap   +--------+--------+--------+--------+
       |    'a' |    'b' | uninit | uninit |
       +--------+--------+--------+--------+

uninit represents memory that is not initialized; see MaybeUninit. Note: the ABI is not stable and Vec makes no guarantees about its memory layout (including the order of fields).

Vec will never perform a "small optimization" where elements are actually stored on the stack, for two reasons:

- It would make it more difficult for unsafe code to correctly manipulate a Vec. The contents of a Vec wouldn't have a stable address if it were only moved, and it would be more difficult to determine if a Vec had actually allocated memory.
- It would penalize the general case, incurring an additional branch on every access.

Vec will never automatically shrink itself, even if completely empty. This ensures no unnecessary allocations or deallocations occur. Emptying a Vec and then filling it back up to the same len should incur no calls to the allocator. If you wish to free up unused memory, use shrink_to_fit or shrink_to.

push and insert will never (re)allocate if the reported capacity is sufficient; they will only (re)allocate if len == capacity. That is, the reported capacity is completely accurate, and can be relied on. It can even be used to manually free the memory allocated by a Vec if desired. Bulk insertion methods may reallocate, even when not necessary.

Vec does not guarantee any particular growth strategy when reallocating when full, nor when reserve is called. The current strategy is basic and it may prove desirable to use a non-constant growth factor. Whatever strategy is used will of course guarantee O(1) amortized push.

It is guaranteed, in order to respect the intentions of the programmer, that all of vec![e_1, e_2, ..., e_n], vec![x; n], and Vec::with_capacity(n) produce a Vec that requests an allocation of the exact size needed for precisely n elements from the allocator, and no other size (such as, for example, a size rounded up to the nearest power of 2). The allocator will return an allocation that is at least as large as requested, but it may be larger. It is guaranteed that the Vec::capacity method returns a value that is at least the requested capacity and not more than the allocated capacity. The method Vec::shrink_to_fit will attempt to discard excess capacity an allocator has given to a Vec.
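The no-automatic-shrinking and accurate-capacity guarantees above can be observed directly. A short sketch (not part of the original reference text) that clears and refills a Vec and checks the allocation is reused:

```rust
fn main() {
    let mut v: Vec<u32> = (0..8).collect();
    let ptr_before = v.as_ptr();
    let cap_before = v.capacity();

    // Vec never shrinks automatically: clear() drops the elements
    // but keeps the allocation and its capacity.
    v.clear();
    assert_eq!(v.len(), 0);
    assert_eq!(v.capacity(), cap_before);

    // Refilling back up to the old len incurs no allocator calls,
    // so the buffer address is unchanged.
    v.extend(0..8);
    assert_eq!(v.as_ptr(), ptr_before);

    // Freeing excess capacity must be requested explicitly.
    v.truncate(2);
    v.shrink_to_fit();
    assert!(v.capacity() >= 2);
}
```

This is also why repeatedly clearing and reusing a scratch Vec in a loop is a common allocation-avoidance pattern.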
If len == capacity, then a Vec<T> can be converted to and from a Box<[T]> without reallocating or moving the elements. Vec exploits this fact as much as reasonable when implementing common conversions such as into_boxed_slice.

Vec will not specifically overwrite any data that is removed from it, but also won't specifically preserve it. Its uninitialized memory is scratch space that it may use however it wants. It will generally just do whatever is most efficient or otherwise easy to implement. Do not rely on removed data to be erased for security purposes. Even if you drop a Vec, its buffer may simply be reused by another allocation. Even if you zero a Vec's memory first, that might not actually happen because the optimizer does not consider this a side-effect that must be preserved. There is one case which we will not break, however: using unsafe code to write to the excess capacity, and then increasing the length to match, is always valid.

Currently, Vec does not guarantee the order in which elements are dropped. The order has changed in the past and may change again.

§ Implementations

impl<T> Vec<T>

pub const fn new() -> Vec<T> (1.0.0; const since 1.39.0)

Constructs a new, empty Vec<T>. The vector will not allocate until elements are pushed onto it.

§ Examples

let mut vec: Vec<i32> = Vec::new();

pub fn with_capacity(capacity: usize) -> Vec<T> (1.0.0)

Constructs a new, empty Vec<T> with at least the specified capacity. The vector will be able to hold at least capacity elements without reallocating. This method is allowed to allocate for more elements than capacity. If capacity is zero, the vector will not allocate.

It is important to note that although the returned vector has the minimum capacity specified, the vector will have a zero length. For an explanation of the difference between length and capacity, see Capacity and reallocation.
If it is important to know the exact allocated capacity of a Vec , always use the capacity method after construction. For Vec<T> where T is a zero-sized type, there will be no allocation and the capacity will always be usize::MAX . § Panics Panics if the new capacity exceeds isize::MAX bytes . § Examples let mut vec = Vec::with_capacity( 10 ); // The vector contains no items, even though it has capacity for more assert_eq! (vec.len(), 0 ); assert! (vec.capacity() >= 10 ); // These are all done without reallocating... for i in 0 .. 10 { vec.push(i); } assert_eq! (vec.len(), 10 ); assert! (vec.capacity() >= 10 ); // ...but this may make the vector reallocate vec.push( 11 ); assert_eq! (vec.len(), 11 ); assert! (vec.capacity() >= 11 ); // A vector of a zero-sized type will always over-allocate, since no // allocation is necessary let vec_units = Vec::<()>::with_capacity( 10 ); assert_eq! (vec_units.capacity(), usize::MAX); Source pub fn try_with_capacity (capacity: usize ) -> Result < Vec <T>, TryReserveError > 🔬 This is a nightly-only experimental API. ( try_with_capacity #91913 ) Constructs a new, empty Vec<T> with at least the specified capacity. The vector will be able to hold at least capacity elements without reallocating. This method is allowed to allocate for more elements than capacity . If capacity is zero, the vector will not allocate. § Errors Returns an error if the capacity exceeds isize::MAX bytes , or if the allocator reports allocation failure. 1.0.0 · Source pub unsafe fn from_raw_parts ( ptr: *mut T , length: usize , capacity: usize , ) -> Vec <T> Creates a Vec<T> directly from a pointer, a length, and a capacity. § Safety This is highly unsafe, due to the number of invariants that aren’t checked: If T is not a zero-sized type and the capacity is nonzero, ptr must have been allocated using the global allocator, such as via the alloc::alloc function. If T is a zero-sized type or the capacity is zero, ptr need only be non-null and aligned. 
T needs to have the same alignment as what ptr was allocated with, if the pointer is required to be allocated. ( T having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the dealloc requirement that memory must be allocated and deallocated with the same layout.) The size of T times the capacity (ie. the allocated size in bytes), if nonzero, needs to be the same size as the pointer was allocated with. (Because similar to alignment, dealloc must be called with the same layout size .) length needs to be less than or equal to capacity . The first length values must be properly initialized values of type T . capacity needs to be the capacity that the pointer was allocated with, if the pointer is required to be allocated. The allocated size in bytes must be no larger than isize::MAX . See the safety documentation of pointer::offset . These requirements are always upheld by any ptr that has been allocated via Vec<T> . Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator’s internal data structures. For example it is normally not safe to build a Vec<u8> from a pointer to a C char array with length size_t , doing so is only safe if the array was initially allocated by a Vec or String . It’s also not safe to build one from a Vec<u16> and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for u16 ), but after turning it into a Vec<u8> it’ll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using slice::from_raw_parts instead. The ownership of ptr is effectively transferred to the Vec<T> which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. 
§ Examples use std::ptr; use std::mem; let v = vec! [ 1 , 2 , 3 ]; // Prevent running `v`'s destructor so we are in complete control // of the allocation. let mut v = mem::ManuallyDrop::new(v); // Pull out the various important pieces of information about `v` let p = v.as_mut_ptr(); let len = v.len(); let cap = v.capacity(); unsafe { // Overwrite memory with 4, 5, 6 for i in 0 ..len { ptr::write(p.add(i), 4 + i); } // Put everything back together into a Vec let rebuilt = Vec::from_raw_parts(p, len, cap); assert_eq! (rebuilt, [ 4 , 5 , 6 ]); } Using memory that was allocated elsewhere: use std::alloc::{alloc, Layout}; fn main() { let layout = Layout::array::<u32>( 16 ).expect( "overflow cannot happen" ); let vec = unsafe { let mem = alloc(layout).cast::<u32>(); if mem.is_null() { return ; } mem.write( 1_000_000 ); Vec::from_raw_parts(mem, 1 , 16 ) }; assert_eq! (vec, & [ 1_000_000 ]); assert_eq! (vec.capacity(), 16 ); } Source pub unsafe fn from_parts ( ptr: NonNull <T>, length: usize , capacity: usize , ) -> Vec <T> 🔬 This is a nightly-only experimental API. ( box_vec_non_null #130364 ) Creates a Vec<T> directly from a NonNull pointer, a length, and a capacity. § Safety This is highly unsafe, due to the number of invariants that aren’t checked: ptr must have been allocated using the global allocator, such as via the alloc::alloc function. T needs to have the same alignment as what ptr was allocated with. ( T having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the dealloc requirement that memory must be allocated and deallocated with the same layout.) The size of T times the capacity (ie. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, dealloc must be called with the same layout size .) length needs to be less than or equal to capacity . The first length values must be properly initialized values of type T . 
capacity needs to be the capacity that the pointer was allocated with. The allocated size in bytes must be no larger than isize::MAX . See the safety documentation of pointer::offset . These requirements are always upheld by any ptr that has been allocated via Vec<T> . Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator’s internal data structures. For example it is normally not safe to build a Vec<u8> from a pointer to a C char array with length size_t , doing so is only safe if the array was initially allocated by a Vec or String . It’s also not safe to build one from a Vec<u16> and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for u16 ), but after turning it into a Vec<u8> it’ll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using NonNull::slice_from_raw_parts instead. The ownership of ptr is effectively transferred to the Vec<T> which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. § Examples #![feature(box_vec_non_null)] use std::ptr::NonNull; use std::mem; let v = vec! [ 1 , 2 , 3 ]; // Prevent running `v`'s destructor so we are in complete control // of the allocation. let mut v = mem::ManuallyDrop::new(v); // Pull out the various important pieces of information about `v` let p = unsafe { NonNull::new_unchecked(v.as_mut_ptr()) }; let len = v.len(); let cap = v.capacity(); unsafe { // Overwrite memory with 4, 5, 6 for i in 0 ..len { p.add(i).write( 4 + i); } // Put everything back together into a Vec let rebuilt = Vec::from_parts(p, len, cap); assert_eq! 
(rebuilt, [ 4 , 5 , 6 ]); } Using memory that was allocated elsewhere: #![feature(box_vec_non_null)] use std::alloc::{alloc, Layout}; use std::ptr::NonNull; fn main() { let layout = Layout::array::<u32>( 16 ).expect( "overflow cannot happen" ); let vec = unsafe { let Some (mem) = NonNull::new(alloc(layout).cast::<u32>()) else { return ; }; mem.write( 1_000_000 ); Vec::from_parts(mem, 1 , 16 ) }; assert_eq! (vec, & [ 1_000_000 ]); assert_eq! (vec.capacity(), 16 ); } Source pub fn into_raw_parts (self) -> ( *mut T , usize , usize ) 🔬 This is a nightly-only experimental API. ( vec_into_raw_parts #65816 ) Decomposes a Vec<T> into its raw components: (pointer, length, capacity) . Returns the raw pointer to the underlying data, the length of the vector (in elements), and the allocated capacity of the data (in elements). These are the same arguments in the same order as the arguments to from_raw_parts . After calling this function, the caller is responsible for the memory previously managed by the Vec . Most often, one does this by converting the raw pointer, length, and capacity back into a Vec with the from_raw_parts function; more generally, if T is non-zero-sized and the capacity is nonzero, one may use any method that calls dealloc with a layout of Layout::array::<T>(capacity) ; if T is zero-sized or the capacity is zero, nothing needs to be done. § Examples #![feature(vec_into_raw_parts)] let v: Vec<i32> = vec! [- 1 , 0 , 1 ]; let (ptr, len, cap) = v.into_raw_parts(); let rebuilt = unsafe { // We can now make changes to the components, such as // transmuting the raw pointer to a compatible type. let ptr = ptr as *mut u32; Vec::from_raw_parts(ptr, len, cap) }; assert_eq! (rebuilt, [ 4294967295 , 0 , 1 ]); Source pub fn into_parts (self) -> ( NonNull <T>, usize , usize ) 🔬 This is a nightly-only experimental API. ( box_vec_non_null #130364 ) Decomposes a Vec<T> into its raw components: (NonNull pointer, length, capacity) . 
Returns the NonNull pointer to the underlying data, the length of the vector (in elements), and the allocated capacity of the data (in elements). These are the same arguments in the same order as the arguments to from_parts . After calling this function, the caller is responsible for the memory previously managed by the Vec . The only way to do this is to convert the NonNull pointer, length, and capacity back into a Vec with the from_parts function, allowing the destructor to perform the cleanup. § Examples #![feature(vec_into_raw_parts, box_vec_non_null)] let v: Vec<i32> = vec! [- 1 , 0 , 1 ]; let (ptr, len, cap) = v.into_parts(); let rebuilt = unsafe { // We can now make changes to the components, such as // transmuting the raw pointer to a compatible type. let ptr = ptr.cast::<u32>(); Vec::from_parts(ptr, len, cap) }; assert_eq! (rebuilt, [ 4294967295 , 0 , 1 ]); Source § impl<T, A> Vec <T, A> where A: Allocator , Source pub const fn new_in (alloc: A) -> Vec <T, A> 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Constructs a new, empty Vec<T, A> . The vector will not allocate until elements are pushed onto it. § Examples #![feature(allocator_api)] use std::alloc::System; let mut vec: Vec<i32, _ > = Vec::new_in(System); Source pub fn with_capacity_in (capacity: usize , alloc: A) -> Vec <T, A> 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Constructs a new, empty Vec<T, A> with at least the specified capacity with the provided allocator. The vector will be able to hold at least capacity elements without reallocating. This method is allowed to allocate for more elements than capacity . If capacity is zero, the vector will not allocate. It is important to note that although the returned vector has the minimum capacity specified, the vector will have a zero length . For an explanation of the difference between length and capacity, see Capacity and reallocation . 
If it is important to know the exact allocated capacity of a Vec , always use the capacity method after construction. For Vec<T, A> where T is a zero-sized type, there will be no allocation and the capacity will always be usize::MAX . § Panics Panics if the new capacity exceeds isize::MAX bytes . § Examples #![feature(allocator_api)] use std::alloc::System; let mut vec = Vec::with_capacity_in( 10 , System); // The vector contains no items, even though it has capacity for more assert_eq! (vec.len(), 0 ); assert! (vec.capacity() >= 10 ); // These are all done without reallocating... for i in 0 .. 10 { vec.push(i); } assert_eq! (vec.len(), 10 ); assert! (vec.capacity() >= 10 ); // ...but this may make the vector reallocate vec.push( 11 ); assert_eq! (vec.len(), 11 ); assert! (vec.capacity() >= 11 ); // A vector of a zero-sized type will always over-allocate, since no // allocation is necessary let vec_units = Vec::<(), System>::with_capacity_in( 10 , System); assert_eq! (vec_units.capacity(), usize::MAX); Source pub fn try_with_capacity_in ( capacity: usize , alloc: A, ) -> Result < Vec <T, A>, TryReserveError > 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Constructs a new, empty Vec<T, A> with at least the specified capacity with the provided allocator. The vector will be able to hold at least capacity elements without reallocating. This method is allowed to allocate for more elements than capacity . If capacity is zero, the vector will not allocate. § Errors Returns an error if the capacity exceeds isize::MAX bytes , or if the allocator reports allocation failure. Source pub unsafe fn from_raw_parts_in ( ptr: *mut T , length: usize , capacity: usize , alloc: A, ) -> Vec <T, A> 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Creates a Vec<T, A> directly from a pointer, a length, a capacity, and an allocator. 
§ Safety This is highly unsafe, due to the number of invariants that aren’t checked: ptr must be currently allocated via the given allocator alloc . T needs to have the same alignment as what ptr was allocated with. ( T having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the dealloc requirement that memory must be allocated and deallocated with the same layout.) The size of T times the capacity (ie. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, dealloc must be called with the same layout size .) length needs to be less than or equal to capacity . The first length values must be properly initialized values of type T . capacity needs to fit the layout size that the pointer was allocated with. The allocated size in bytes must be no larger than isize::MAX . See the safety documentation of pointer::offset . These requirements are always upheld by any ptr that has been allocated via Vec<T, A> . Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator’s internal data structures. For example it is not safe to build a Vec<u8> from a pointer to a C char array with length size_t . It’s also not safe to build one from a Vec<u16> and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for u16 ), but after turning it into a Vec<u8> it’ll be deallocated with alignment 1. The ownership of ptr is effectively transferred to the Vec<T> which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. 
§ Examples #![feature(allocator_api)] use std::alloc::System; use std::ptr; use std::mem; let mut v = Vec::with_capacity_in( 3 , System); v.push( 1 ); v.push( 2 ); v.push( 3 ); // Prevent running `v`'s destructor so we are in complete control // of the allocation. let mut v = mem::ManuallyDrop::new(v); // Pull out the various important pieces of information about `v` let p = v.as_mut_ptr(); let len = v.len(); let cap = v.capacity(); let alloc = v.allocator(); unsafe { // Overwrite memory with 4, 5, 6 for i in 0 ..len { ptr::write(p.add(i), 4 + i); } // Put everything back together into a Vec let rebuilt = Vec::from_raw_parts_in(p, len, cap, alloc.clone()); assert_eq! (rebuilt, [ 4 , 5 , 6 ]); } Using memory that was allocated elsewhere: #![feature(allocator_api)] use std::alloc::{AllocError, Allocator, Global, Layout}; fn main() { let layout = Layout::array::<u32>( 16 ).expect( "overflow cannot happen" ); let vec = unsafe { let mem = match Global.allocate(layout) { Ok (mem) => mem.cast::<u32>().as_ptr(), Err (AllocError) => return , }; mem.write( 1_000_000 ); Vec::from_raw_parts_in(mem, 1 , 16 , Global) }; assert_eq! (vec, & [ 1_000_000 ]); assert_eq! (vec.capacity(), 16 ); } Source pub unsafe fn from_parts_in ( ptr: NonNull <T>, length: usize , capacity: usize , alloc: A, ) -> Vec <T, A> 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Creates a Vec<T, A> directly from a NonNull pointer, a length, a capacity, and an allocator. § Safety This is highly unsafe, due to the number of invariants that aren’t checked: ptr must be currently allocated via the given allocator alloc . T needs to have the same alignment as what ptr was allocated with. ( T having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the dealloc requirement that memory must be allocated and deallocated with the same layout.) The size of T times the capacity (ie. 
the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, dealloc must be called with the same layout size .) length needs to be less than or equal to capacity . The first length values must be properly initialized values of type T . capacity needs to fit the layout size that the pointer was allocated with. The allocated size in bytes must be no larger than isize::MAX . See the safety documentation of pointer::offset . These requirements are always upheld by any ptr that has been allocated via Vec<T, A> . Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator’s internal data structures. For example it is not safe to build a Vec<u8> from a pointer to a C char array with length size_t . It’s also not safe to build one from a Vec<u16> and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for u16 ), but after turning it into a Vec<u8> it’ll be deallocated with alignment 1. The ownership of ptr is effectively transferred to the Vec<T> which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. § Examples #![feature(allocator_api, box_vec_non_null)] use std::alloc::System; use std::ptr::NonNull; use std::mem; let mut v = Vec::with_capacity_in( 3 , System); v.push( 1 ); v.push( 2 ); v.push( 3 ); // Prevent running `v`'s destructor so we are in complete control // of the allocation. 
let mut v = mem::ManuallyDrop::new(v); // Pull out the various important pieces of information about `v` let p = unsafe { NonNull::new_unchecked(v.as_mut_ptr()) }; let len = v.len(); let cap = v.capacity(); let alloc = v.allocator(); unsafe { // Overwrite memory with 4, 5, 6 for i in 0 ..len { p.add(i).write( 4 + i); } // Put everything back together into a Vec let rebuilt = Vec::from_parts_in(p, len, cap, alloc.clone()); assert_eq! (rebuilt, [ 4 , 5 , 6 ]); } Using memory that was allocated elsewhere: #![feature(allocator_api, box_vec_non_null)] use std::alloc::{AllocError, Allocator, Global, Layout}; fn main() { let layout = Layout::array::<u32>( 16 ).expect( "overflow cannot happen" ); let vec = unsafe { let mem = match Global.allocate(layout) { Ok (mem) => mem.cast::<u32>(), Err (AllocError) => return , }; mem.write( 1_000_000 ); Vec::from_parts_in(mem, 1 , 16 , Global) }; assert_eq! (vec, & [ 1_000_000 ]); assert_eq! (vec.capacity(), 16 ); } Source pub fn into_raw_parts_with_alloc (self) -> ( *mut T , usize , usize , A) 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Decomposes a Vec<T> into its raw components: (pointer, length, capacity, allocator) . Returns the raw pointer to the underlying data, the length of the vector (in elements), the allocated capacity of the data (in elements), and the allocator. These are the same arguments in the same order as the arguments to from_raw_parts_in . After calling this function, the caller is responsible for the memory previously managed by the Vec . The only way to do this is to convert the raw pointer, length, and capacity back into a Vec with the from_raw_parts_in function, allowing the destructor to perform the cleanup. 
§ Examples #![feature(allocator_api, vec_into_raw_parts)] use std::alloc::System; let mut v: Vec<i32, System> = Vec::new_in(System); v.push(- 1 ); v.push( 0 ); v.push( 1 ); let (ptr, len, cap, alloc) = v.into_raw_parts_with_alloc(); let rebuilt = unsafe { // We can now make changes to the components, such as // transmuting the raw pointer to a compatible type. let ptr = ptr as *mut u32; Vec::from_raw_parts_in(ptr, len, cap, alloc) }; assert_eq! (rebuilt, [ 4294967295 , 0 , 1 ]); Source pub fn into_parts_with_alloc (self) -> ( NonNull <T>, usize , usize , A) 🔬 This is a nightly-only experimental API. ( allocator_api #32838 ) Decomposes a Vec<T> into its raw components: (NonNull pointer, length, capacity, allocator) . Returns the NonNull pointer to the underlying data, the length of the vector (in elements), the allocated capacity of the data (in elements), and the allocator. These are the same arguments in the same order as the arguments to from_parts_in . After calling this function, the caller is responsible for the memory previously managed by the Vec . The only way to do this is to convert the NonNull pointer, length, and capacity back into a Vec with the from_parts_in function, allowing the destructor to perform the cleanup. § Examples #![feature(allocator_api, vec_into_raw_parts, box_vec_non_null)] use std::alloc::System; let mut v: Vec<i32, System> = Vec::new_in(System); v.push(- 1 ); v.push( 0 ); v.push( 1 ); let (ptr, len, cap, alloc) = v.into_parts_with_alloc(); let rebuilt = unsafe { // We can now make changes to the components, such as // transmuting the raw pointer to a compatible type. let ptr = ptr.cast::<u32>(); Vec::from_parts_in(ptr, len, cap, alloc) }; assert_eq! (rebuilt, [ 4294967295 , 0 , 1 ]); 1.0.0 (const: 1.87.0) · Source pub const fn capacity (&self) -> usize Returns the total number of elements the vector can hold without reallocating. § Examples let mut vec: Vec<i32> = Vec::with_capacity( 10 ); vec.push( 42 ); assert! 
(vec.capacity() >= 10 ); A vector with zero-sized elements will always have a capacity of usize::MAX: #[derive(Clone)] struct ZeroSized; fn main() { assert_eq! (std::mem::size_of::<ZeroSized>(), 0 ); let v = vec! [ZeroSized; 0 ]; assert_eq! (v.capacity(), usize::MAX); } 1.0.0 · Source pub fn reserve (&mut self, additional: usize ) Reserves capacity for at least additional more elements to be inserted in the given Vec<T> . The collection may reserve more space to speculatively avoid frequent reallocations. After calling reserve , capacity will be greater than or equal to self.len() + additional . Does nothing if capacity is already sufficient. § Panics Panics if the new capacity exceeds isize::MAX bytes . § Examples let mut vec = vec! [ 1 ]; vec.reserve( 10 ); assert! (vec.capacity() >= 11 ); 1.0.0 · Source pub fn reserve_exact (&mut self, additional: usize ) Reserves the minimum capacity for at least additional more elements to be inserted in the given Vec<T> . Unlike reserve , this will not deliberately over-allocate to speculatively avoid frequent allocations. After calling reserve_exact , capacity will be greater than or equal to self.len() + additional . Does nothing if the capacity is already sufficient. Note that the allocator may give the collection more space than it requests. Therefore, capacity can not be relied upon to be precisely minimal. Prefer reserve if future insertions are expected. § Panics Panics if the new capacity exceeds isize::MAX bytes . § Examples let mut vec = vec! [ 1 ]; vec.reserve_exact( 10 ); assert! (vec.capacity() >= 11 ); 1.57.0 · Source pub fn try_reserve (&mut self, additional: usize ) -> Result < () , TryReserveError > Tries to reserve capacity for at least additional more elements to be inserted in the given Vec<T> . The collection may reserve more space to speculatively avoid frequent reallocations. After calling try_reserve , capacity will be greater than or equal to self.len() + additional if it returns Ok(()) . 
Does nothing if capacity is already sufficient. This method preserves the contents even if an error occurs.

§ Errors

If the capacity overflows, or the allocator reports a failure, then an error is returned.

§ Examples

use std::collections::TryReserveError;

fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> {
    let mut output = Vec::new();

    // Pre-reserve the memory, exiting if we can't
    output.try_reserve(data.len())?;

    // Now we know this can't OOM in the middle of our complex work
    output.extend(data.iter().map(|&val| {
        val * 2 + 5 // very complicated
    }));

    Ok(output)
}

1.57.0 · Source pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveError>

Tries to reserve the minimum capacity for at least additional elements to be inserted in the given Vec<T>. Unlike try_reserve, this will not deliberately over-allocate to speculatively avoid frequent allocations. After calling try_reserve_exact, capacity will be greater than or equal to self.len() + additional if it returns Ok(()). Does nothing if the capacity is already sufficient.

Note that the allocator may give the collection more space than it requests. Therefore, capacity can not be relied upon to be precisely minimal. Prefer try_reserve if future insertions are expected.

§ Errors

If the capacity overflows, or the allocator reports a failure, then an error is returned.

§ Examples

use std::collections::TryReserveError;

fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> {
    let mut output = Vec::new();

    // Pre-reserve the memory, exiting if we can't
    output.try_reserve_exact(data.len())?;

    // Now we know this can't OOM in the middle of our complex work
    output.extend(data.iter().map(|&val| {
        val * 2 + 5 // very complicated
    }));

    Ok(output)
}

1.0.0 · Source pub fn shrink_to_fit(&mut self)

Shrinks the capacity of the vector as much as possible.
The behavior of this method depends on the allocator, which may either shrink the vector in-place or reallocate. The resulting vector might still have some excess capacity, just as is the case for with_capacity. See Allocator::shrink for more details.

§ Examples

let mut vec = Vec::with_capacity(10);
vec.extend([1, 2, 3]);
assert!(vec.capacity() >= 10);
vec.shrink_to_fit();
assert!(vec.capacity() >= 3);

1.56.0 · Source pub fn shrink_to(&mut self, min_capacity: usize)

Shrinks the capacity of the vector with a lower bound. The capacity will remain at least as large as both the length and the supplied value. If the current capacity is less than the lower limit, this is a no-op.

§ Examples

let mut vec = Vec::with_capacity(10);
vec.extend([1, 2, 3]);
assert!(vec.capacity() >= 10);
vec.shrink_to(4);
assert!(vec.capacity() >= 4);
vec.shrink_to(0);
assert!(vec.capacity() >= 3);

1.0.0 · Source pub fn into_boxed_slice(self) -> Box<[T], A>

Converts the vector into Box<[T]>. Before doing the conversion, this method discards excess capacity like shrink_to_fit.

§ Examples

let v = vec![1, 2, 3];
let slice = v.into_boxed_slice();

Any excess capacity is removed:

let mut vec = Vec::with_capacity(10);
vec.extend([1, 2, 3]);
assert!(vec.capacity() >= 10);
let slice = vec.into_boxed_slice();
assert_eq!(slice.into_vec().capacity(), 3);

1.0.0 · Source pub fn truncate(&mut self, len: usize)

Shortens the vector, keeping the first len elements and dropping the rest. If len is greater than or equal to the vector’s current length, this has no effect. The drain method can emulate truncate, but causes the excess elements to be returned instead of dropped. Note that this method has no effect on the allocated capacity of the vector.

§ Examples

Truncating a five element vector to two elements:

let mut vec = vec![1, 2, 3, 4, 5];
vec.truncate(2);
assert_eq!(vec, [1, 2]);

No truncation occurs when len is greater than the vector’s current length:

let mut vec = vec![1, 2, 3];
vec.truncate(8);
assert_eq!(vec, [1, 2, 3]);

Truncating when len == 0 is equivalent to calling the clear method.

let mut vec = vec![1, 2, 3];
vec.truncate(0);
assert_eq!(vec, []);

1.7.0 (const: 1.87.0) · Source pub const fn as_slice(&self) -> &[T]

Extracts a slice containing the entire vector. Equivalent to &s[..].

§ Examples

use std::io::{self, Write};
let buffer = vec![1, 2, 3, 5, 8];
io::sink().write(buffer.as_slice()).unwrap();

1.7.0 (const: 1.87.0) · Source pub const fn as_mut_slice(&mut self) -> &mut [T]

Extracts a mutable slice of the entire vector. Equivalent to &mut s[..].

§ Examples

use std::io::{self, Read};
let mut buffer = vec![0; 3];
io::repeat(0b101).read_exact(buffer.as_mut_slice()).unwrap();

1.37.0 (const: 1.87.0) · Source pub const fn as_ptr(&self) -> *const T

Returns a raw pointer to the vector’s buffer, or a dangling raw pointer valid for zero sized reads if the vector didn’t allocate.

The caller must ensure that the vector outlives the pointer this function returns, or else it will end up dangling. Modifying the vector may cause its buffer to be reallocated, which would also make any pointers to it invalid.

The caller must also ensure that the memory the pointer (non-transitively) points to is never written to (except inside an UnsafeCell) using this pointer or any pointer derived from it. If you need to mutate the contents of the slice, use as_mut_ptr.

This method guarantees that for the purpose of the aliasing model, this method does not materialize a reference to the underlying slice, and thus the returned pointer will remain valid when mixed with other calls to as_ptr, as_mut_ptr, and as_non_null.
Note that calling other methods that materialize mutable references to the slice, or mutable references to specific elements you are planning on accessing through this pointer, as well as writing to those elements, may still invalidate this pointer. See the second example below for how this guarantee can be used.

§ Examples

let x = vec![1, 2, 4];
let x_ptr = x.as_ptr();

unsafe {
    for i in 0..x.len() {
        assert_eq!(*x_ptr.add(i), 1 << i);
    }
}

Due to the aliasing guarantee, the following code is legal:

unsafe {
    let mut v = vec![0, 1, 2];
    let ptr1 = v.as_ptr();
    let _ = ptr1.read();
    let ptr2 = v.as_mut_ptr().offset(2);
    ptr2.write(2);
    // Notably, the write to `ptr2` did *not* invalidate `ptr1`
    // because it mutated a different element:
    let _ = ptr1.read();
}

1.37.0 (const: 1.87.0) · Source pub const fn as_mut_ptr(&mut self) -> *mut T

Returns a raw mutable pointer to the vector’s buffer, or a dangling raw pointer valid for zero sized reads if the vector didn’t allocate.

The caller must ensure that the vector outlives the pointer this function returns, or else it will end up dangling. Modifying the vector may cause its buffer to be reallocated, which would also make any pointers to it invalid.

This method guarantees that for the purpose of the aliasing model, this method does not materialize a reference to the underlying slice, and thus the returned pointer will remain valid when mixed with other calls to as_ptr, as_mut_ptr, and as_non_null. Note that calling other methods that materialize references to the slice, or references to specific elements you are planning on accessing through this pointer, may still invalidate this pointer. See the second example below for how this guarantee can be used.

The method also guarantees that, as long as T is not zero-sized and the capacity is nonzero, the pointer may be passed into dealloc with a layout of Layout::array::<T>(capacity) in order to deallocate the backing memory.
If this is done, be careful not to run the destructor of the Vec, as dropping it will result in double-frees. Wrapping the Vec in a ManuallyDrop is the typical way to achieve this.

§ Examples

// Allocate vector big enough for 4 elements.
let size = 4;
let mut x: Vec<i32> = Vec::with_capacity(size);
let x_ptr = x.as_mut_ptr();

// Initialize elements via raw pointer writes, then set length.
unsafe {
    for i in 0..size {
        *x_ptr.add(i) = i as i32;
    }
    x.set_len(size);
}
assert_eq!(&*x, &[0, 1, 2, 3]);

Due to the aliasing guarantee, the following code is legal:

unsafe {
    let mut v = vec![0];
    let ptr1 = v.as_mut_ptr();
    ptr1.write(1);
    let ptr2 = v.as_mut_ptr();
    ptr2.write(2);
    // Notably, the write to `ptr2` did *not* invalidate `ptr1`:
    ptr1.write(3);
}

Deallocating a vector using Box (which uses dealloc internally):

use std::mem::{ManuallyDrop, MaybeUninit};

let mut v = ManuallyDrop::new(vec![0, 1, 2]);
let ptr = v.as_mut_ptr();
let capacity = v.capacity();

let slice_ptr: *mut [MaybeUninit<i32>] =
    std::ptr::slice_from_raw_parts_mut(ptr.cast(), capacity);
drop(unsafe { Box::from_raw(slice_ptr) });

Source pub const fn as_non_null(&mut self) -> NonNull<T>
🔬 This is a nightly-only experimental API. (box_vec_non_null #130364)

Returns a NonNull pointer to the vector’s buffer, or a dangling NonNull pointer valid for zero sized reads if the vector didn’t allocate.

The caller must ensure that the vector outlives the pointer this function returns, or else it will end up dangling. Modifying the vector may cause its buffer to be reallocated, which would also make any pointers to it invalid.

This method guarantees that for the purpose of the aliasing model, this method does not materialize a reference to the underlying slice, and thus the returned pointer will remain valid when mixed with other calls to as_ptr, as_mut_ptr, and as_non_null.
Note that calling other methods that materialize references to the slice, or references to specific elements you are planning on accessing through this pointer, may still invalidate this pointer. See the second example below for how this guarantee can be used.

§ Examples

#![feature(box_vec_non_null)]

// Allocate vector big enough for 4 elements.
let size = 4;
let mut x: Vec<i32> = Vec::with_capacity(size);
let x_ptr = x.as_non_null();

// Initialize elements via raw pointer writes, then set length | 2026-01-13T09:29:15
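The truncate entry above notes that drain can emulate truncate while returning the excess elements instead of dropping them. A minimal sketch of that difference on stable Rust (the variable names here are illustrative, not from the original docs):

```rust
fn main() {
    // truncate: drops the tail in place, keeping the first `len` elements.
    let mut kept = vec![1, 2, 3, 4, 5];
    kept.truncate(2);
    assert_eq!(kept, [1, 2]);

    // drain: removes the same tail, but yields it as an iterator,
    // so the excess elements can be collected rather than dropped.
    let mut source = vec![1, 2, 3, 4, 5];
    let tail: Vec<i32> = source.drain(2..).collect();
    assert_eq!(source, [1, 2]);
    assert_eq!(tail, [3, 4, 5]);
}
```

Consistent with the truncate documentation above, neither call changes the vector's allocated capacity.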
https://huggingface.co/datasets?modality=modality%3Atabular | Tabular Datasets – Hugging Face Datasets 110,695 · Sort: Trending · Active filters: tabular HuggingFaceFW/finetranslations Viewer • Updated 4 days ago • 3.33B • 7.34k • 143 123olp/binance-futures-ohlcv-2018-2026 Viewer • Updated 2 days ago • 425M • 478 • 96 OpenDataArena/ODA-Math-460k Viewer • Updated 13 days ago • 460k • 4.93k • 96 HuggingFaceFW/finepdfs Viewer • Updated 4 days ago • 476M • 23.2k • 711 PleIAs/common_corpus Viewer • Updated Jun 10, 2025 • 470M • 42.6k • 330 HuggingFaceFW/fineweb Viewer • Updated Jul 11, 2025 • 52.5B • 184k • 2.6k HuggingFaceFW/finetranslations-edu Viewer • Updated 4 days ago • 109M • 213 • 9 HuggingFaceFW/fineweb-edu Viewer • Updated Jul 11, 2025 • 3.5B • 326k • 901 bigcode/the-stack
Viewer • Updated Apr 13, 2023 • 546M • 18.5k • 907 Idavidrein/gpqa Viewer • Updated Mar 28, 2024 • 1.25k • 68.4k • 340 facebook/jepa-wms Viewer • Updated 11 days ago • 524 • 159 • 7 youliangtan/so101-table-cleanup Viewer • Updated Jun 6, 2025 • 47k • 1.4k • 23 nyu-mll/glue Viewer • Updated Jan 30, 2024 • 1.49M • 367k • 466 jtatman/stable-diffusion-prompts-stats-full-uncensored Viewer • Updated Nov 8, 2024 • 897k • 768 • 109 nvidia/Granary Viewer • Updated Aug 14, 2025 • 116M • 3.42k • 169 Codatta/MM-Food-100K Viewer • Updated Aug 18, 2025 • 100k • 690 • 53 HuggingFaceFW/finepdfs-edu Viewer • Updated Nov 11, 2025 • 49.5M • 4.14k • 63 deepmind/code_contests Viewer • Updated Jun 11, 2023 • 4.04k • 1.65M • 206 azhx/counterfact Viewer • Updated Apr 7, 2023 • 21.9k • 509 • 5 hackaprompt/hackaprompt-dataset Viewer • Updated Jan 24, 2024 • 602k • 375 • 80 BoKelvin/SLAKE Viewer • Updated Feb 28, 2024 • 14k • 2.13k • 45 openfoodfacts/product-database Viewer • Updated about 15 hours ago • 4.31M • 2.39k • 61 zai-org/CC-Bench-trajectories Viewer • Updated Sep 30, 2025 • 260 • 784 • 88 HuggingFaceFW/finewiki Viewer • Updated Oct 22, 2025 • 61.6M • 8.68k • 273 NoeFlandre/fineweb-legal-pilot Viewer • Updated 9 days ago • 101k • 68 • 6 kmfoda/booksum Viewer • Updated Nov 30, 2022 • 12.5k • 1.15k • 65 laion/aesthetics_v2_4.5 Viewer • Updated Sep 1, 2024 • 1.33B • 258 • 28 allenai/real-toxicity-prompts Viewer • Updated Sep 30, 2022 • 99.4k • 5.22k • 109 kakaobrain/coyo-700m Viewer • Updated Aug 30, 2022 • 747M • 1.87k • 154 bigcode/the-stack-dedup Viewer • Updated Aug 17, 2023 • 237M • 14.4k • 378 | 2026-01-13T09:29:15
https://www.atlassian.com/legal/customer-agreement | Atlassian Customer Agreement | Atlassian Legal information Atlassian Customer Agreement

Atlassian provides a wide range of products, services, and deployment options. As part of our dedication to building with heart and balance, we have developed the Atlassian Customer Agreement to encompass all these offerings in a comprehensive and customized manner. Our products adhere to an enterprise-level set of controls and policies that align with various international standards and market expectations. The agreement is crafted to be customer-friendly, undergoing regular updates informed by market practices and customer input. For further insight into our approach, please visit this page.

Atlassian Customer Agreement Download PDF

Effective starting: October 7, 2025

This Agreement is between Customer and Atlassian. “Customer” means the entity on behalf of which this Agreement is accepted or, if that does not apply, the individual accepting this Agreement. “Atlassian” means the Atlassian entity that owns or operates the Products that Customer uses or accesses listed at https://www.atlassian.com/legal/product-terms. If you (the person accepting this Agreement) are accepting this Agreement on behalf of your employer or another entity, you agree that: (i) you have full legal authority to bind your employer or such entity to this Agreement, and (ii) you agree to this Agreement on behalf of your employer or such entity.
If you are accepting this Agreement using an email address from your employer or another entity, then: (i) you will be deemed to represent that party, (ii) your acceptance of this Agreement will bind your employer or that entity to these terms, and (iii) the word “you” or “Customer” in this Agreement will refer to your employer or that entity. By clicking on the “Agree” (or similar button or checkbox) that is presented to you at the time of placing an Order, downloading Products, or by using or accessing the Products, you confirm you are bound by this Agreement. If you do not wish to be bound by this Agreement, do not click “Agree” (or similar button or checkbox), download the Products, or use or access the Products. 1. Overview This Agreement applies to Customer’s Orders for Products and related Support and Advisory Services. The terms of this Agreement apply to both Cloud Products and Software Products, although certain terms apply only to Cloud Products or Software Products, as specified below. In addition, some Products are subject to additional Product-Specific Terms, and Support and Advisory Services are subject to the applicable Policies. 2. Use of Products 2.1. Permitted Use . Subject to this Agreement and during the applicable Subscription Term, Atlassian grants Customer a non-exclusive, worldwide right to use the Products and related Support and Advisory Services for its and its Affiliates’ internal business purposes, in accordance with the Documentation and subject to Customer’s Scope of Use, as specified in the Order and the Product-Specific Terms. 2.2. Restrictions . 
Except to the extent otherwise expressly permitted by this Agreement, Customer must not (and must not permit anyone else to): (a) rent, lease, sell, distribute or sublicense the Products or (except for Affiliates) include them in a service bureau or outsourcing offering, (b) provide access to the Products to a third party, other than to Users, (c) charge its customers a specific fee for use of the Products, but Customer may charge an overall fee for its own offerings (of which the Products are ancillary), (d) use the Products to develop a similar or competing product or service, (e) reverse engineer, decompile, disassemble or seek to access the source code or non-public APIs to the Products, (f) modify or create derivative works of the Products, (g) interfere with or circumvent Product usage limits or Scope of Use restrictions, (h) remove, obscure or modify in any way any proprietary or other notices or attributions in the Products, or (i) violate the Acceptable Use Policy. 2.3. DPA . The DPA applies to Customer’s use of Products and related Support and Advisory Services and forms part of this Agreement. 3. Users 3.1. Responsibility . Customer may authorize Users to access and use the Products, in accordance with the Documentation and Customer’s Scope of Use. Customer is responsible for its Users’ compliance with this Agreement and all activities of its Users, including Orders they may place, apps and Third Party-Products enabled, and how Users access and use Customer Data. 3.2. Login Credentials . Customer must ensure that each User keeps its login credentials confidential and must promptly notify Atlassian if it becomes aware of any unauthorized access to any User login credentials or other unauthorized access to or use of the Products. 3.3. Domain Ownership . 
Where a Cloud Product requires Customer to specify a domain (such as www.example.com) for the Cloud Product’s or a feature’s operation, Atlassian may verify that Customer or an Affiliate owns or controls that domain. Atlassian has no obligation to provide that Cloud Product or feature if Atlassian cannot verify that Customer or an Affiliate owns or controls the domain. Product administrators appointed by Customer may also take over management of accounts previously registered using an email address belonging to Customer’s domain, which become “managed accounts” (or similar term), as described in the Documentation. 3.4. Age Requirements . The Products are not intended for use by anyone under the age of 16. Customer is responsible for ensuring that all Users are at least 16 years old. 4. Cloud Products This Section 4 only applies to Cloud Products. 4.1. Customer Data . Atlassian may process Customer Data to provide the Cloud Products and related Support or Advisory Services in accordance with this Agreement. 4.2. Security Program . Atlassian has implemented and will maintain an information security program that uses appropriate physical, technical and organizational measures designed to protect Customer Data and Customer Materials from unauthorized access, destruction, use, modification or disclosure, as described in its Security Measures. Atlassian will also maintain a compliance program that includes independent third-party audits and certifications, as described in its Security Measures. Further information about Atlassian’s security program is available on the Atlassian Trust Center at https://www.atlassian.com/trust, as updated from time to time. 4.3. Service Levels . Where applicable, the service level commitment for a Cloud Product is specified in the Service Level Agreement. 4.4. Data Retrieval . The Documentation describes how Customer may retrieve its Customer Data from the Cloud Products. 4.5. Removals and Suspension .
Atlassian has no obligation to monitor Customer Data. Nonetheless, if Atlassian becomes aware that: (a) Customer Data may violate Law, Section 2.2 (Restrictions), or the rights of others (including relating to a takedown request received following the guidelines for Reporting Copyright and Trademark Violations at https://www.atlassian.com/legal/copyright-and-trademark-violations ), or (b) Customer’s use of the Cloud Products threatens the security or operation of the Cloud Products, then Atlassian may: (i) limit access to, or remove, the relevant Customer Data, or (ii) suspend Customer’s or any User’s access to the relevant Cloud Products. Atlassian may also take any such measures where required by Law, or at the request of a governmental authority. When practicable, Atlassian will give Customer the opportunity to remedy the issue before taking any such measures. 4.6. AI Offerings . Cloud Products may be, or include access to, AI Offerings. Atlassian provides AI Offerings under the AI Terms . 5. Software Products This Section 5 only applies to Software Products. 5.1. Modifications . Atlassian may provide some portions of the Software Products in source code form for Customer to use internally to create bug fixes, configurations or other modifications of the Software Products, as permitted in the Documentation (“ Modifications ”). Customer must keep such source code secure (on computer devices and online repositories controlled by Customer), confidential, and only make it available to Customer’s employees who have a legitimate need to access and use the source code to create and maintain Modifications. Customer may only use Modifications with the Software Products, and only in accordance with this Agreement, including the Third-Party Code Policy, the Documentation, and Customer’s Scope of Use. Customer must not distribute source code or Modifications to third parties. 
Customer must securely destroy the source code at the earliest of: (a) Customer no longer needing to use source code to create or maintain Modifications, (b) termination or non-renewal of a relevant Subscription Term, or (c) Atlassian’s request for any reason. Notwithstanding anything else in this Agreement, Atlassian has no support, warranty, indemnity or other responsibility for Modifications. 5.2. License Verification . Upon Atlassian’s written request, Customer will promptly confirm in writing whether its use of the Software Products is in compliance with the applicable Scope of Use. Atlassian or its authorized agents may audit Customer’s use of the Software Products no more than once every twelve (12) months to confirm compliance with Customer’s Scope of Use, provided Atlassian gives Customer reasonable advance notice and uses reasonable efforts to minimize disruption to Customer. If Customer exceeds its Scope of Use, Atlassian may invoice for that excess use, and Customer will pay Atlassian promptly after invoice receipt. 5.3. Number of Instances . Unless otherwise specified in the Order or the Product-Specific Terms, Customer may install up to one (1) production instance of each Software Product included in an Order on systems owned or operated by Customer or its Users. 6. Customer Obligations 6.1. Disclosures and Rights . Customer must ensure it has made all disclosures and obtained all rights and consents necessary for Atlassian to use Customer Data and Customer Materials to provide the Cloud Products, Support or Advisory Services. 6.2. Product Assessment . Customer is responsible for determining whether the Products meet Customer’s requirements and any regulatory obligations related to its intended use. 6.3. Sensitive Health Information and HIPAA . 
Unless the parties have entered into a ‘Business Associate Agreement,’ Customer must not (and must not permit anyone else to) upload to the Cloud Products (or use the Cloud Products to process) any patient, medical or other protected health information regulated by the Health Insurance Portability and Accountability Act. 7. Third-Party Code and Third-Party Products 7.1. Third-Party Code . This Agreement and the Third-Party Code Policy apply to open source software and commercial third-party software Atlassian includes in the Products. 7.2. Third-Party Products . Customer may choose to use the Products with third-party platforms, apps, add-ons, services or products, including offerings made available through the Atlassian Marketplace (“ Third-Party Products ”). Use of such Third-Party Products with the Products may require access to Customer Data and other data by the third-party provider, which, for Cloud Products Atlassian will permit on Customer’s behalf if Customer has enabled that Third-Party Product. Likewise, such use of Third-Party Products may entail access to Third-Party Product data by Atlassian. Customer’s use of Third-Party Products is subject to the relevant provider’s terms of use, not this Agreement, but data sent from the Third-Party Product to Customer’s Atlassian account is subject to this Agreement. Atlassian does not control and has no liability for Third-Party Products. Atlassian makes no warranty that the Products will, or will continue to, connect or interoperate with any Third-Party Product. 8. Support and Advisory Services Atlassian will provide Support and Advisory Services as described in the Order and applicable Policies. Atlassian’s provision of Support or Advisory Services is subject to Customer providing timely access to Customer Materials and personnel reasonably requested by Atlassian. 9. 
Ordering Process and Delivery No Order is binding until Atlassian provides its acceptance, including by sending a confirmation email, providing access to the Products, or making license or access keys available to Customer. No terms of any purchase order or other business form used by Customer will supersede, supplement, or otherwise apply to this Agreement or Atlassian. Atlassian will deliver login instructions or license keys for Products electronically, to Customer’s account (or through other reasonable means) promptly upon receiving payment of the fees. Customer is responsible for the installation of Software Products, and Atlassian has no further delivery obligations with respect to the Software Products after delivery of license keys. 10. Billing and Payment 10.1. Fees . (a) Direct Purchases . If Customer purchases directly from Atlassian, fees and any payment terms are specified in Customer’s Order with Atlassian. (b) Resellers . If Customer purchases through a Reseller, Customer must pay all applicable amounts directly to the Reseller, and Customer’s order details (e.g., Products and Scope of Use) will be specified in the Order placed by the Reseller with Atlassian on Customer’s behalf. (c) Renewals . Unless otherwise specified in an Order and subject to the Product, Support or Advisory Services continuing to be generally available, a Subscription Term will automatically renew at Atlassian’s then current rates for: (i) if Customer’s prior Subscription Term was for a period less than twelve (12) months, another Subscription Term of a period equal to Customer’s prior Subscription Term, or (ii) if Customer’s prior Subscription Term was for twelve (12) months or more, twelve (12) months. Either party may elect not to renew a Subscription Term by giving notice to the other party before the end of the current Subscription Term. 
Customer must provide any notice of non-renewal through account settings in the Products, by contacting Atlassian’s support team or by otherwise providing Atlassian notice.

(d) Increased Scope of Use. If Customer exceeds the Scope of Use purchased, unless otherwise agreed with Atlassian in writing, Customer must upgrade its subscription or pay for the increased Scope of Use. Unless otherwise specified in an applicable Order, Atlassian will charge Customer for any increased Scope of Use at Atlassian’s then-current rates, which may be prorated for the remainder of the then-current Subscription Term.

(e) Refunds. All fees and expenses are non-refundable, except as otherwise provided in this Agreement. For any purchases Customer makes through a Reseller, any refunds from Atlassian payable to Customer relating to that purchase will be remitted by that Reseller, unless Atlassian specifically notifies Customer otherwise at the time of refund.

(f) Credit Cards. If Customer uses a credit card or similar online payment method for its initial Order, then Atlassian may bill that payment method for renewals, additional Orders, overages to scopes of use, expenses, and unpaid fees, as applicable.

10.2. Taxes.

(a) Taxes Generally. Fees and expenses are exclusive of any sales, use, GST, value-added, withholding or similar taxes or levies that apply to Customer’s Orders. Other than taxes on Atlassian’s net income, Customer is responsible for any such taxes or levies and must pay those taxes or levies, which Atlassian will itemize separately, in accordance with an applicable invoice.

(b) Withholding Taxes. To the extent Customer is required to withhold tax from payment to Atlassian in certain jurisdictions, Customer must provide valid documentation it receives from the taxing authority in such jurisdictions confirming remittance of withholding. This documentation must be provided at the time of payment of the applicable invoice to Atlassian.

(c) Exemptions.
If Customer claims exemption from any sales tax, VAT, GST or similar taxes under this Agreement, Customer must provide Atlassian a valid tax exemption certificate or tax ID at the time of Order, and after receipt of valid evidence of exemption, Atlassian will not include applicable taxes on the relevant Customer invoice.

10.3. Return Policy. Within thirty (30) days of its initial Order for a Product, Customer may terminate the Subscription Term for that Product, for any or no reason, by providing notice to Atlassian. Following such termination, upon request (which may be made through Customer’s Atlassian account), Atlassian will refund Customer the amount paid for that Product and any associated Support under the applicable Order. Unless otherwise specified in the Policies or Product-Specific Terms, this return policy does not apply to Advisory Services.

10.4. Suspension for Non-payment. Atlassian may suspend Customer’s rights to use Products or receive Support or Advisory Services if payment is overdue and Atlassian has given Customer no fewer than ten (10) days’ written notice.

11. Atlassian Warranties

11.1. Performance Warranties. Atlassian warrants to Customer that: (a) the Products will operate in substantial conformity with the applicable Documentation during the applicable Subscription Term, (b) Atlassian will not materially decrease the functionality or overall security of the Products during the applicable Subscription Term, and (c) Atlassian will use reasonable efforts designed to ensure that the Products, when and as provided by Atlassian, are free of any viruses, malware or similar malicious code (each, a “Performance Warranty”).

11.2. Performance Warranty Remedy. If Atlassian breaches a Performance Warranty and Customer makes a reasonably detailed warranty claim within 30 days of discovering the issue, Atlassian will use reasonable efforts to correct the non-conformity.
If Atlassian determines such remedy to be impracticable, either party may terminate the affected Subscription Term. Atlassian will then refund to Customer any pre-paid, unused fees for the terminated portion of the Subscription Term. These procedures are Customer’s exclusive remedy and Atlassian’s entire liability for breach of a Performance Warranty.

11.3. Exclusions. The warranties in this Section 11 (Atlassian Warranties) do not apply to: (a) the extent the issue or non-conformity is caused by Customer’s unauthorized use or modification of the Products, (b) unsupported releases of Software Products or Cloud Clients, or (c) Third-Party Products.

11.4. Disclaimers. Except as expressly provided in this Section 11 (Atlassian Warranties), the Products, Support and Advisory Services and all related Atlassian services and deliverables are provided “AS IS.” Atlassian makes no other warranties, whether express, implied, statutory or otherwise, including warranties of merchantability, fitness for a particular purpose, title or non-infringement. Atlassian does not warrant that Customer’s use of the Products will be uninterrupted or error-free. Atlassian is not liable for delays, failures or problems inherent in use of the internet and electronic communications or other systems outside Atlassian’s control.

12. Term and Termination

12.1. Term. This Agreement commences on the date Customer accepts it and expires when all Subscription Terms have ended.

12.2. Termination for Convenience. Customer may terminate this Agreement or a Subscription Term upon notice for any reason. Subject to Section 10.3 (Return Policy), Customer will not be entitled to any refunds as a result of exercising its rights under this Section 12.2, and any unpaid amounts for the then-current Subscription Terms and any related service periods will become due and payable immediately upon such termination.

12.3. Termination for Cause.
Either party may terminate this Agreement or a Subscription Term if the other party: (a) fails to cure a material breach of this Agreement (including a failure to pay fees) within 30 days after notice, (b) ceases operation without a successor, or (c) seeks protection under a bankruptcy, receivership, trust deed, creditors’ arrangement, composition or comparable proceeding, or if such a proceeding is instituted against that party and not dismissed within 60 days. If Customer terminates this Agreement or a Subscription Term in accordance with this Section 12.3, Atlassian will refund to Customer any pre-paid, unused fees for the terminated portion of the Agreement or applicable Subscription Term.

12.4. Effect of Termination. Upon expiration or termination of this Agreement or a Subscription Term: (a) Customer’s rights to use the applicable Products, Support or Advisory Services will cease, (b) Customer must immediately cease accessing the Cloud Products and using the applicable Software Products and Cloud Clients, and (c) Customer must delete (or, on request, return) all license keys, access keys and any Product copies. Following expiration or termination, unless prohibited by Law, Atlassian will delete Customer Data in accordance with the Documentation.

12.5. Survival. These Sections survive expiration or termination of this Agreement: 2.2 (Restrictions), 4.2 (Security Program), 10.1 (Fees), 10.2 (Taxes), 11.4 (Disclaimers), 12.4 (Effect of Termination), 12.5 (Survival), 13 (Ownership), 14 (Limitations of Liability), 15 (Indemnification by Atlassian), 16 (Confidentiality), 17.4 (Disclaimer), 18 (Feedback), 20 (General Terms) and 21 (Definitions).

13. Ownership

Except as expressly specified in this Agreement, neither party grants the other any rights or licenses to its intellectual property under this Agreement.
As between the parties, Customer owns all intellectual property and other rights in Customer Data and Customer Materials provided to Atlassian or used with the Products. Atlassian and its licensors retain all intellectual property and other rights in the Products, any Support and Advisory Services deliverables and related source code, Atlassian technology, templates, formats and dashboards, including any modifications or improvements.

14. Limitations of Liability

14.1. Damages Waiver. Except for Excluded Claims or Special Claims, to the maximum extent permitted by Law, neither party will have any liability arising out of or related to this Agreement for any loss of use, lost data, lost profits, interruption of business or any indirect, special, incidental, reliance or consequential damages of any kind, even if informed of their possibility in advance.

14.2. General Liability Cap. Except for Excluded Claims or Special Claims, to the maximum extent permitted by Law, each party’s entire liability arising out of or related to this Agreement will not exceed in aggregate the amounts paid to Atlassian for the Products, Support and Advisory Services giving rise to the liability during the twelve (12) months preceding the first event out of which the liability arose. Customer’s payment obligations under Sections 10.1 (Fees) and 10.2 (Taxes) are not limited by this Section 14.2.

14.3. Excluded Claims. “Excluded Claims” means: (a) Customer’s breach of Section 2.2 (Restrictions) or Section 6 (Customer Obligations), (b) either party’s breach of Section 16 (Confidentiality) but excluding claims relating to Customer Data or Customer Materials, or (c) amounts payable to third parties under Atlassian’s obligations in Section 15 (Indemnification by Atlassian).

14.4. Special Claims.
For Special Claims, Atlassian’s aggregate liability under this Agreement will be the lesser of: (a) two times (2x) the amounts paid to Atlassian for the Products, Support and Advisory Services giving rise to the Special Claim during the twelve (12) months preceding the first event out of which the Special Claim arose, and (b) US$5,000,000. “Special Claims” means any unauthorized disclosure of Customer Data or Customer Materials caused by a breach by Atlassian of its obligations in Section 4.2 (Security Program).

14.5. Nature of Claims and Failure of Essential Purpose. The exclusions and limitations in this Section 14 (Limitations of Liability) apply regardless of the form of action, whether in contract, tort (including negligence), strict liability or otherwise, and will survive and apply even if any limited remedy in this Agreement fails of its essential purpose.

15. Indemnification by Atlassian

15.1. IP Indemnification. Atlassian must: (a) defend Customer from and against any third-party claim to the extent alleging that the Products, when used by Customer as authorized by this Agreement, infringe any intellectual property right of a third party (an “Infringement Claim”), and (b) indemnify and hold harmless Customer against any damages, fines or costs finally awarded by a court of competent jurisdiction (including reasonable attorneys’ fees) or agreed in settlement by Atlassian resulting from an Infringement Claim.

15.2. Procedures. Atlassian’s obligations in Section 15.1 (IP Indemnification) are subject to Customer providing Atlassian: (a) sufficient notice of the Infringement Claim so as to not prejudice Atlassian’s defense of the Infringement Claim, (b) the exclusive right to control and direct the investigation, defense and settlement of the Infringement Claim, and (c) all reasonably requested cooperation, at Atlassian’s expense for reasonable out-of-pocket expenses.
Customer may participate in the defense of an Infringement Claim with its own counsel at its own expense.

15.3. Settlement. Customer may not settle an Infringement Claim without Atlassian’s prior written consent. Atlassian may not settle an Infringement Claim without Customer’s prior written consent if settlement would require Customer to admit fault or take or refrain from taking any action (other than relating to use of the Products).

15.4. Mitigation. In response to an actual or potential Infringement Claim, Atlassian may, at its option: (a) procure rights for Customer’s continued use of the Products, (b) replace or modify the alleged infringing portion of the Products without reducing the overall functionality of the Products, or (c) terminate the affected Subscription Term and refund to Customer any pre-paid, unused fees for the terminated portion of the Subscription Term.

15.5. Exceptions. Atlassian’s obligations in this Section 15 (Indemnification by Atlassian) do not apply to the extent an Infringement Claim arises from: (a) Customer’s modification or unauthorized use of the Products, (b) use of the Products in combination with items not provided by Atlassian (including Third-Party Products), (c) any unsupported release of the Software Products or Cloud Clients, or (d) Third-Party Products, Customer Data or Customer Materials.

15.6. Exclusive Remedy. This Section 15 (Indemnification by Atlassian) sets out Customer’s exclusive remedy and Atlassian’s entire liability regarding infringement of third-party intellectual property rights.

16. Confidentiality

16.1. Definition. “Confidential Information” means information disclosed by one party to the other under or in connection with this Agreement that: (a) is designated by the disclosing party as proprietary or confidential, or (b) should be reasonably understood to be proprietary or confidential due to its nature and the circumstances of its disclosure.
Atlassian’s Confidential Information includes any source code and technical or performance information about the Products. Customer’s Confidential Information includes Customer Data and Customer Materials.

16.2. Obligations. Unless expressly permitted by the disclosing party in writing, the receiving party must: (a) hold the disclosing party’s Confidential Information in confidence and not disclose it to third parties except as permitted in this Agreement, and (b) only use such Confidential Information to fulfill its obligations and exercise its rights in this Agreement. The receiving party may disclose such Confidential Information to its employees, agents, contractors and other representatives having a legitimate need to know (including, for Atlassian, the subcontractors referenced in Section 20.11 (Subcontractors and Affiliates)), provided the receiving party remains responsible for their compliance with this Section 16 (Confidentiality) and they are bound to confidentiality obligations no less protective than this Section 16 (Confidentiality).

16.3. Exclusions. These confidentiality obligations do not apply to information that the receiving party can demonstrate: (a) is or becomes publicly available through no fault of the receiving party, (b) it knew or possessed prior to receipt under this Agreement without breach of confidentiality obligations, (c) it received from a third party without breach of confidentiality obligations, or (d) it independently developed without using the disclosing party’s Confidential Information. The receiving party may disclose Confidential Information if required by Law, subpoena or court order, provided (if permitted by Law) it notifies the disclosing party in advance and cooperates, at the disclosing party’s cost, in any reasonable effort to obtain confidential treatment.

16.4. Remedies. Unauthorized use or disclosure of Confidential Information may cause substantial harm for which damages alone are an insufficient remedy.
Each party may seek appropriate equitable relief, in addition to other available remedies, for breach or anticipated breach of this Section 16 (Confidentiality).

17. Free or Beta Products

17.1. Access. Customer may receive access to certain Products or Product features on a free, fully discounted or trial basis, or as an alpha, beta or early access offering (“Free or Beta Products”). Use of Free or Beta Products is subject to this Agreement and any additional terms specified by Atlassian, such as the applicable scope and term of use.

17.2. Termination or Modification. At any time, Atlassian may terminate or modify Customer’s use of Free or Beta Products (including the applicable terms) or modify the Free or Beta Products themselves, without any liability to Customer. For modifications to Free or Beta Products or Customer’s use, Customer must accept those modifications to continue accessing or using the Free or Beta Products.

17.3. Pre GA. Free or Beta Products may be inoperable, incomplete or include errors and bugs or features that Atlassian may never release, and their features and performance information are Atlassian’s Confidential Information.

17.4. Disclaimer. Notwithstanding anything else in this Agreement, to the maximum extent permitted by Law, Atlassian provides no warranty, indemnity, service level agreement or support for Free or Beta Products, and its aggregate liability for Free or Beta Products is limited to US$100.

18. Feedback

If Customer provides Atlassian with feedback or suggestions regarding the Products or other Atlassian offerings, Atlassian may use the feedback or suggestions without restriction or obligation.

19. Publicity

Atlassian may identify Customer as a customer of Atlassian in its promotional materials. Atlassian will promptly stop doing so upon Customer request submitted at https://www.atlassian.com/company/contact/general-inquiries#/ .

20. General Terms

20.1. Compliance with Laws.
Each party must comply with all Laws applicable to its business in its performance of obligations or exercise of rights under this Agreement.

20.2. Code of Conduct. Atlassian must comply with its Code of Conduct in its performance of obligations or exercise of rights under this Agreement.

20.3. Assignment.

(a) Customer may not assign or transfer any of its rights or obligations under this Agreement or an Order without Atlassian’s prior written consent. However, Customer may assign this Agreement in its entirety (including all Orders) to its successor resulting from a merger, acquisition, or sale of all or substantially all of Customer’s assets or voting securities, provided that Customer provides Atlassian with prompt written notice of the assignment and the assignee agrees in writing to assume all of Customer’s obligations under this Agreement and complies with Atlassian’s procedural and documentation requirements to give effect to the assignment.

(b) Any attempt by Customer to transfer or assign this Agreement or an Order, except as expressly authorized above, will be null and void.

(c) Atlassian may assign its rights and obligations under this Agreement (in whole or in part) without Customer’s consent.

20.4. Governing Law, Jurisdiction and Venue.

(a) If Customer is domiciled: (i) in Europe, the Middle East, or Africa, this Agreement is governed by the laws of the Republic of Ireland, with the jurisdiction and venue for actions related to this Agreement in the courts of the Republic of Ireland, or (ii) elsewhere, this Agreement is governed by the laws of the State of California, with the jurisdiction and venue for actions related to this Agreement in the state and United States federal courts located in San Francisco, California.

(b) This Agreement will be governed by such laws without regard to conflicts of laws provisions, and both parties submit to the personal jurisdiction of the applicable courts.
The United Nations Convention on the International Sale of Goods does not apply to this Agreement.

20.5. Notices.

(a) Except as specified elsewhere in this Agreement, notices under this Agreement must be in writing and are deemed given on: (i) personal delivery, (ii) when received by the addressee if sent by a recognized overnight courier with receipt request, (iii) the third business day after mailing, or (iv) the first business day after sending by email, except that email will not be sufficient for notices regarding Infringement Claims, alleging breach of this Agreement by Atlassian, or of Customer’s termination of this Agreement in accordance with Section 12.3 (Termination for Cause).

(b) Notices to Atlassian must be provided according to the details provided at https://www.atlassian.com/legal#how-do-i-provide-legal-notices-to-atlassian , as may be updated from time to time.

(c) Notices to Customer must be provided to the billing or technical contact provided to Atlassian, which may be updated by Customer from time to time in Customer’s account portal. However, Atlassian may provide general or operational notices via email, on its website or through the Products. Customer may subscribe to receive email notice of updates to this Agreement, as described at https://www.atlassian.com/legal#notification-of-updates-in-terms-and-policies .

20.6. Entire Agreement. This Agreement is the parties’ entire agreement regarding its subject matter and supersedes any prior or contemporaneous agreements regarding its subject matter. In the event of a conflict among the documents making up this Agreement, the main body of this Agreement (i.e., Sections 1 through 21, inclusive) will control, except that the Policies, Product-Specific Terms and DPA will control for their specific subject matter.

20.7. Other Atlassian Offerings.
Atlassian makes available other offerings that can be used with the Products which, in some cases, are subject to separate terms and conditions, available at https://www.atlassian.com/legal . These other offerings include training services, developer tools and the Atlassian Marketplace. For clarity, this Agreement controls over any such terms and conditions with respect to Customer’s use of the Products (including any Atlassian Apps).

20.8. Interpretation, Waivers and Severability. In this Agreement, headings are for convenience only and “including” and similar terms are to be construed without limitation. Waivers must be granted in writing and signed by the waiving party’s authorized representative. If any provision of this Agreement is held invalid, illegal or unenforceable, it will be limited to the minimum extent necessary so the rest of this Agreement remains in effect.

20.9. Changes to this Agreement.

(a) Atlassian may modify this Agreement (which includes the Policies, Product-Specific Terms and DPA) from time to time, by posting the modified portion(s) of this Agreement on Atlassian’s website. Atlassian must use commercially reasonable efforts to post any such modification at least thirty (30) days prior to its effective date.

(b) For free subscriptions, modifications become effective during the then-current Subscription Term, in accordance with Atlassian’s notice.

(c) For paid subscriptions: (i) except as specified below, modifications to this Agreement will take effect at the next Order or renewal unless either party elects not to renew pursuant to Section 10.1(c) (Renewals), and (ii) Atlassian may specify that modifications will become effective during a then-current Subscription Term if: (A) required to address compliance with Law, or (B) required to reflect updates to Product functionality or introduction of new Product features.
If Customer objects, Customer may terminate the remainder of the then-current Subscription Term for the affected Products as its exclusive remedy. To exercise this right, Customer must notify Atlassian of its termination under this Section 20.9(c) within thirty (30) days of the modification notice, and Atlassian will refund any pre-paid fees for the terminated portion of the applicable Subscription Term.

20.10. Force Majeure. Neither party is liable for any delay or failure to perform any obligation under this Agreement (except for a failure to pay fees) due to events beyond its reasonable control and occurring without that party’s fault or negligence.

20.11. Subcontractors and Affiliates. Atlassian may use subcontractors or its Affiliates in the performance of its obligations under this Agreement, but Atlassian remains responsible for its overall performance under this Agreement and for having appropriate written agreements in place with its subcontractors to enable Atlassian to meet its obligations under this Agreement.

20.12. Independent Contractors. The parties are independent contractors, not agents, partners or joint venturers.

20.13. Export Restrictions. The Products may be subject to U.S. export restrictions and import restrictions of other jurisdictions. Customer must comply with all applicable export and import Laws in its access to, use of, and download of the Products or any content or records entered into the Products. Customer must not (and must not allow anyone else to) export, re-export, transfer or disclose the Products or any direct product of the Products: (a) to (or to a national or resident of) any U.S. embargoed jurisdiction, (b) to anyone on any U.S. or applicable non-U.S. restricted- or denied-party list, or (c) to any party that Customer has reason to know will use the Products in violation of U.S. export Law, or for any restricted end user under U.S. export Law.

20.14. Government End-Users.
If Customer is a United States federal, state or local government customer, this Agreement is subject to, and is varied by, the Government Amendment available at https://www.atlassian.com/legal/government-amendment .

20.15. No Contingencies. The Products, Support and Advisory Services in each Order are purchased separately and not contingent on purchase or use of other Atlassian products and services, even if listed in the same Order. Customer’s purchases are not contingent on delivery of any future functionality or features.

21. Definitions

“Acceptable Use Policy” means Atlassian’s acceptable use policy available at https://www.atlassian.com/legal/acceptable-use-policy .

“Advisory Services” means advisory services as described in the Advisory Services Policy.

“Advisory Services Policy” means Atlassian’s advisory services policy available at https://www.atlassian.com/legal/advisory-services-policy .

“Affiliate” means an entity that, directly or indirectly, owns or controls, is owned or is controlled by or is under common ownership or control with a party, where “ownership” means the beneficial ownership of more than fifty percent (50%) of an entity’s voting equity securities or other equivalent voting interests and “control” means the power to direct the management or affairs of an entity.

“Agreement” means this Atlassian Customer Agreement, as well as the Product-Specific Terms, the DPA and the Policies.

“AI Offerings” means AI Offerings as described in the AI Terms.

“AI Terms” means the Atlassian AI terms available at https://www.atlassian.com/legal/ai-terms .

“Atlassian Apps” means apps developed by Atlassian for use with Cloud Products or Software Products, as designated by Atlassian in the Atlassian Marketplace.

“Atlassian Marketplace” means the online platform to purchase apps for Atlassian products, currently branded the Atlassian Marketplace and accessible at https://marketplace.atlassian.com/ .
“Cloud Products” means Atlassian’s cloud products, including client software for its cloud products (“Cloud Clients”).

“Code of Conduct” means the Atlassian Code of Business Conduct & Ethics, available at https://investors.atlassian.com/governance/governance-documents/default.aspx .

“Customer Data” means any data, content or materials provided to Atlassian by or at the direction of Customer or its Users via the Cloud Products, including from Third-Party Products.

“Customer Materials” means materials and other resources that Customer provides to Atlassian in connection with Support or Advisory Services.

“Documentation” means Atlassian’s usage guidelines and standard technical documentation for the applicable Product, available at https://support.atlassian.com/ , unless otherwise specified in the Product-Specific Terms.

“DPA” means the Atlassian data processing addendum available at https://www.atlassian.com/legal/data-processing-addendum .

“Laws” means all applicable laws, regulations, conventions, decrees, decisions, orders, judgments, codes and requirements of any government authority (federal, state, local or international) having jurisdiction.

“Order” means Atlassian’s ordering document, online sign-up or other ordering process that Atlassian enables specifying the Products, Support or Advisory Services to be provided under this Agreement, accepted by Atlassian in accordance with Section 9 (Ordering Process and Delivery).

“Policies” means the Acceptable Use Policy, Advisory Services Policy, guidelines for Reporting Copyright and Trademark Violations, Privacy Policy, Security Measures, Service Level Agreement, Support Policy, Third-Party Code Policy and any additional Atlassian policies specified in Product-Specific Terms.

“Privacy Policy” means Atlassian’s privacy policy available at https://www.atlassian.com/legal/privacy-policy .
“Products” means the applicable Cloud Products or Software Products made available by Atlassian in connection with an Order. Products also include Atlassian Apps.

“Product-Specific Terms” means terms that apply only to certain Products or features, available at https://www.atlassian.com/legal/product-terms , including the AI Terms.

“Reseller” means a partner authorized by Atlassian to resell Atlassian’s Products, Support and Advisory Services to customers.

“Scope of Use” means Customer’s entitlements to the Products. Such entitlements may be based on: (a) number of licenses, copies or instances, (b) entity, division, business unit, website, or field of use, (c) number and type of Users, (d) number of queries, requests or other usage-based subscription units, or (e) other restrictions or billable units.

“Security Measures” means Atlassian’s security practices available at https://www.atlassian.com/legal/security-measures .

“Service Level Agreement” means the service level commitments, if any, for a Cloud Product as described at https://www.atlassian.com/legal/sla .

“Software Products” means Atlassian’s installed software products and any generally-available bug fixes, updates and upgrades it provides to Customer, including through Support.

“Subscription Term” means the term for Customer’s use of or access to the Products and related Support and Advisory Services as identified in an Order.

“Support” means the level of support for the Products corresponding to Customer’s Scope of Use, as identified in the Support Policy.

“Support Policy” means the Atlassian support offerings documentation available at https://confluence.atlassian.com/support/atlassian-support-offerings-193299636.html .

“Third-Party Code Policy” means Atlassian’s third-party code policy available at https://www.atlassian.com/legal/third-party-code-policy .

“User” means any individual that Customer authorizes to use the Products.
Users may include: (i) Customer’s and its Affiliates’ employees, consultants, contractors and agents, (ii) third parties with which Customer or its Affiliates transact business, (iii) individuals invited by Customer’s users, (iv) individuals under managed accounts, or (v) individuals interacting with a Product as Customer’s customer.
https://downloads.buildernet.org/buildernet-images/v1.2.0/ | Buildernet Public Artifacts | v1.2.0 Buildernet Public Artifacts / buildernet-images / v1.2.0 / Name Description Size Modified Go up — — — buildernet-v1.2.0-azure-tdx-7797d66c2e056c4b562082463a3e0bed7d14057ca6bbfd15e65b208af765cb14.wic.vhd.minisig — 395 B 2025-06-25T09:26:02.066Z buildernet-v1.2.0-azure-tdx-7797d66c2e056c4b562082463a3e0bed7d14057ca6bbfd15e65b208af765cb14.wic.vhd.sha256 — 169 B 2025-06-25T09:26:03.244Z buildernet-v1.2.0-azure-tdx-7797d66c2e056c4b562082463a3e0bed7d14057ca6bbfd15e65b208af765cb14.wic.vhd — 301.0 MiB 2025-06-25T09:26:01.015Z BuilderNet | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/categories/email-marketing-software | Best Email Marketing Software | Products | LinkedIn Skip to main content LinkedIn Expand search This button displays the currently selected search type. When expanded it provides a list of search options that will switch the search inputs to match the current selection. Jobs People Learning Clear text Clear text Clear text Clear text Clear text Join now Sign in Clear text Used by Used by Email Marketing Manager (48) Email Marketing Specialist (44) Marketing Specialist (41) Marketing Manager (37) Sales And Marketing Specialist (20) See all products Find top products in Email Marketing Software category Software used to send commercial electronic mail to potential or current customers. - Deliver and optimize marketing messages (e.g. newsletters, promotions, advertisements, personalized communications) - Design and build customized emails using HTML and CSS or drag-and-drop editors - Create and store opt-in email lists, and manage unsubscribe/subscribe responses - Use reporting and analytics tools to target audiences 277 results Square Marketing Email Marketing Software by Square Use Square email marketing to send automated email campaigns, specialized offers, newsletters, track campaigns, and much more. View product Simple Email Service (SES) Email Marketing Software by Tencent A secure, stable and simple email push service View product Email Email Marketing Software by Netcore Cloud Email Marketing- From engagement to experience, build beautiful email campaigns, choose the right people, and send at the right time. Email API- An email API that supports blazing fast email delivery powered by AI for your critical emails. View product lemlist Email Marketing Software by lemlist Send Cold Emails That Get Replies With lemlist View product Everest Email Marketing Software by Validity Inc. 
Validity Everest is the email deliverability platform that provides crucial insights and guidance so you can reach more people, increase engagement, and protect your email performance. View product
Litmus (Email Marketing Software by Litmus): Litmus is how marketers make every send count. It’s a core component of your tech stack that allows you to create, personalize, test, review, and analyze every email you send. Too many companies are sending out broken emails without even knowing it, missing the deeper insights needed to really improve results, and struggling to personalize emails at scale. With Litmus, you get a better way to build, test, and trust every outgoing message. That’s why over 700,000 marketing professionals, including 80% of the Fortune 100, have turned to us. And it’s how we’ve achieved a customer satisfaction rating of 98%. Backed by Spectrum Equity, Litmus is used by major global brands across every industry that trust its platform to make every send count.™ View product
Kasplo Email Marketing (Email Marketing Software by Kasplo): Kasplo's email marketing makes it simple to design eye-catching emails with its drag-and-drop editor or HTML support, including interactive AMP emails. Send personalized campaigns using advanced segmentation, automate workflows for any industry, and monitor performance with in-depth analytics to keep improving results. View product
SHOPLINE SmartPush (Email Marketing Software by SHOPLINE): SmartPush is an effective and user-friendly marketing tool for EDM and SMS. Leverage customer traffic, reduce costs, improve repurchase rates, drive GMV growth, and strengthen customer-brand connections. SmartPush can help you:
1. Unified data activation for maximum engagement
2. Consistent customer experiences across channels
3. Smart and stress-free marketing automation
4. AI-driven personalization with smart insights
View product
MailerLite (Email Marketing Software by MailerLite): MailerLite is an email marketing platform that helps creators, small businesses, and ecommerce stores grow their audience and boost revenue—effortlessly and on their terms.
- Capture leads in style: turn visitors into subscribers with landing pages, forms, and pop-ups. 150+ templates, AI-powered magic content generation, and full customization options make creating a joy.
- Monetize with 0% commission: sell paid newsletters and digital products through MailerLite, with 0% commissions. Integrate with your ecommerce store to showcase products, track sales, and drive conversions.
- Personalize like never before: use multi-trigger automations, machine learning, unlimited segmentation, and dynamic content to create tailored experiences for every subscriber to maximize engagement.
- Insights to grow smarter: total visibility into campaign performance so you know what works. Reports on clicks, open rates, and much more.
Start your 30-day free trial. Our support team is ready to help! View product
Guest Connect - Email Marketing (Email Marketing Software by Simplotel): Hotel-focused email marketing solution that converts website offers to email campaigns with the single click of a button. View product
See more
| 2026-01-13T09:29:15
https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1 | miromind-ai/MiroVerse-v0.1 · Datasets at Hugging Face
Datasets: miromind-ai/MiroVerse-v0.1 · likes 177 · MiroMind AI (425 followers)
Tasks: Question Answering · Modalities: Text · Languages: English · Size: 100K - 1M · ArXiv: 2511.11793 · Tags: deep research, agent, miromind · License: cc-by-nc-4.0
This repository is publicly accessible, but you have to accept the conditions to access its files and content.
Contents: 🔥 News & Updates · 🔥 First Batch of MiroVerse · 📦 Dataset Overview · 🆚 Why We're Different · 📈 Benchmark Performance (GAIA Benchmark, More Benchmarks) · 🧩 Examples (Case 1, Case 2) · 🛠️ Free Trace Rollout: Let Us Help You Train (How It Works) · 🤝 License · Citation · Contact Us
MiroVerse: A Reproducible, Full-Trajectory, Ever-Growing Deep Research Dataset
🔥 News & Updates
MiroVerse v0.1 has been released. This dataset can be used with our training framework, MiroTrain. In MiroVerse v0.1, we provide both SFT and DPO data, making it easy to reproduce MiroThinker-v0.1’s benchmark performance on Qwen3. Give it a try!
The initial release of MiroVerse (v0.1) is coming this Friday—stay tuned!
🔥 First Batch of MiroVerse
✨ What makes this release special:
- 📚 Diverse Verified Open Source Data: carefully curated and validated community datasets
- 🧠 Fresh Large-Scale Deep Research Data: generated by our proprietary data engine
- 🔄 Complete Trajectory Coverage: every single sample includes full rollout trajectories
- ✅ Quality Assurance: each trajectory has been verified, ensuring high-quality training data for your models
- 🌱 Always Growing, Always Open: regular updates, powered by collaboration with the community
📦 Dataset Overview
MiroVerse-v0.1 is a large-scale agent dataset with 147K+ samples featuring full rollout trajectories across diverse AI agent tasks including multi-hop QA, web navigation, and scientific reasoning.
Every single sample includes complete execution traces with 1.9B+ tokens and 602K+ tool interactions, providing comprehensive training data for tool-using and web-browsing AI agents.
Split | #Sample | #Main Trace | #Browse Trace | #Token | #Turns | #Tools | License
MiroVerse-Voyager1.0 | 59097 | 19115 | 39982 | 1129113893 | 444723 | 325537 | CC-BY-NC-4.0
MiroVerse-MuSiQue | 29572 | 10422 | 19150 | 294351053 | 143080 | 90486 | CC-BY-4.0
MiroVerse-HotpotQA | 12942 | 6553 | 6389 | 67352039 | 46320 | 20524 | CC-BY-SA-4.0
MiroVerse-WebWalkerQA-Silver | 10817 | 4961 | 5856 | 107650324 | 67846 | 46215 | Apache 2.0
MiroVerse-MegaScience | 10615 | 8270 | 2345 | 111120264 | 63594 | 42443 | CC-BY-NC-SA-4.0
MiroVerse-TaskCraft | 8890 | 4277 | 4613 | 95518109 | 35013 | 17236 | MIT
MiroVerse-QA-Expert-Multi-Hop-V1.0 | 6187 | 2091 | 4096 | 63983151 | 31957 | 19585 | Apache 2.0
MiroVerse-OneGen-TrainDataset-MultiHopQA | 3289 | 1347 | 1942 | 33214386 | 17187 | 11449 | MIT
MiroVerse-2WikiMultihopQA | 3001 | 1410 | 1591 | 28977451 | 13982 | 7981 | Apache 2.0
MiroVerse-WikiTables | 1606 | 1288 | 318 | 16461870 | 12089 | 8877 | MIT
MiroVerse-WebShaper | 1514 | 486 | 1028 | 31240265 | 12126 | 9578 | MIT
MiroVerse-WebDancer | 455 | 192 | 263 | 7817689 | 3170 | 2268 | MIT
MiroVerse-v0.1 (total) | 147985 | 60412 | 87573 | 1993099086 | 891087 | 602179 | /
Every sample includes successful MiroFlow rollout trajectories that reached the verified answer—one JSON line, zero secrets. The MiroVerse-v0.1 dataset follows a hybrid licensing model: query and answer data retain their original source licenses, while all trace data is licensed under CC-BY-NC-4.0; for commercial use, please contact us to request a commercial license.
🆚 Why We're Different
While high-quality data is essential for training advanced models and often kept private, we believe that the path to truly general-purpose agents is still long. That’s why we’re committed to open-sourcing as much of our data as possible—including raw samples and exploration traces—to support and accelerate progress across the community.
Org | Work | Samples | Trace Data | Reproducible?
OpenAI | Deep Research | — | ❌ | ❌
Gemini | Gemini Deep Research | — | ❌ | ❌
Tencent | Cognitive Kernel-Pro | 7k | ❌ | ❌
Tongyi | WebShaper | 500 | ❌ | ❌
MiroMind (ours) | this repo | 147k+ | ✅ | ✅
📈 Benchmark Performance
MiroVerse-v0.1 is used in the training of our MiroThinker-v0.1 models. By using this dataset, we achieved the following benchmark performance.
GAIA Benchmark
Method | Text-103 Best Pass@1 | Text-103 Pass@1 (Avg@8) | Val-165 Best Pass@1 | Val-165 Pass@1 (Avg@8)
Search-o1-7B | 17.5 | - | - | -
R1-Searcher-7B | 20.4 | - | - | -
WebDancer-7B | 31.0 | - | - | -
WebSailor-7B | 37.9 | - | - | -
CK-Pro-8B | 43.7 | - | 35.2 | -
MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8
+ Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9
MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4
+ Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9
MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4
+ Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8
MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2
+ Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0
Qwen3-32B | 31.1 | 26.7 | 29.7 | 26.4
Search-o1-32B | 28.2 | - | - | -
WebThinker-32B-RL | 48.5 | - | - | -
WebDancer-QwQ-32B | 51.5 | - | - | -
WebSailor-32B | 53.2 | - | - | -
WebShaper-QwQ-32B | 53.3 | - | - | -
MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7
+ Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8
MiroThinker-32B-DPO-v0.1 | 57.3 | 54.1 | 48.5 | 45.9
+ Commercial Tools | 60.2 | 57.9 | 50.9 | 48.9
Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script. By default, we use open-source tools wherever possible, except for the code tool E2B and the Google search tool Serper. We use Whisper, Qwen2.5-VL-72B-Instruct, and Qwen3-235B-A22B-Thinking-2507 in our implementation.
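The two reporting conventions described here (Best Pass@1 as the maximum over three runs, Pass@1 (Avg@8) as the mean over eight runs) can be sketched as follows; the run scores are made-up illustrations, not numbers from the card:

```python
# Sketch of the two GAIA reporting conventions described above:
# Best Pass@1 = highest accuracy across runs (the card uses 3 runs),
# Pass@1 (Avg@8) = mean accuracy over 8 runs. Scores are invented.

def best_pass_at_1(run_scores):
    """Highest single-run accuracy."""
    return max(run_scores)

def avg_pass_at_1(run_scores):
    """Mean accuracy over all runs."""
    return sum(run_scores) / len(run_scores)

three_runs = [44.1, 46.6, 45.2]  # hypothetical 3 evaluation runs
eight_runs = [44.8, 45.1, 43.9, 44.5, 45.0, 44.2, 44.7, 44.6]

print(best_pass_at_1(three_runs))           # → 46.6
print(round(avg_pass_at_1(eight_runs), 2))  # → 44.6
```

The max-of-runs figure is optimistic and noisier; the 8-run average trades a slightly lower number for a more stable estimate, which is exactly the trade-off the card notes.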
The framework can be easily extended to other open-source tools of your choice. Replacing these open-source tools with commercial alternatives can yield performance gains. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
More Benchmarks
Method | HLE Pass@1 | Frames Pass@1 | BrowseComp Pass@1 | BrowseComp-ZH Pass@1 | WebWalkerQA Pass@1
OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | -
Gemini Deep Research | 26.9 | - | - | - | -
Kimi-Researcher | 26.9 | 78.8 | - | - | -
WebDancer-7B | - | - | - | - | 36.0
WebSailor-7B | - | - | 6.7 | 14.2 | -
MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3
MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.5 | 45.7
WebThinker-32B-RL | - | - | - | - | 46.5
WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9
WebSailor-32B | - | - | 10.5 | 25.5 | -
WebShaper-32B | - | - | - | - | 51.4
MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7
MiroThinker-32B-DPO-v0.1 | 11.8 | 71.7 | 13.0 | 17.0 | 49.3
MiroThinker’s performance was tested with this repository and open-source tools; other models’ results are from their papers and official sites. As MiroVerse-v0.1 mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data to improve performance in the next version.
🧩 Examples
Below are two QA examples synthesized by our data engine (MiroVerse-Voyager1.0).
Case 1
Q: A female lead actress received her first major annual Hindi film performance award for best actress for her role in a late-2000s comedy-drama, directed by the filmmaker who later created a sports-themed drama released in 2023 starring an actress known for completing an athletic triathlon event in Berlin. What is the title of the film for which this actress first won that award?
A: Paa
Case 2
Q: Identify the agricultural practice, unique to a mountain range that forms a border including an independent principality and known for spectacular geologic landforms, that was one of the key reasons for part of the range's inscription as a UNESCO World Heritage Site in the decade before the 21st century. This region's history features a brief early-1800s reorganization of provincial boundaries after a liberal revolution in the southern country, and the northern country is globally recognized as the leading tourist destination with the fourth-largest number of heritage sites. What is this traditional agricultural system called?
A: transhumance
🛠️ Free Trace Rollout: Let Us Help You Train
Generating high-quality training trajectories is expensive: on average, $1.50 per sample using top-tier commercial models. To empower the community, we’re offering free rollout services for qualifying seed data.
How It Works:
1. Submit a Request: open a ticket via this template and provide the basic info, rollout requirements, and up to 100 sample rows in one go.
2. Review & Rollout: we’ll review your submission within 48 hours. Once approved, we’ll reach out to you for the full dataset and then launch the complete trace rollout using top-tier commercial models.
3. Delivery & Recognition: upon completion, we’ll send the augmented dataset to you via email. With your explicit consent, we’ll also publish it publicly and credit you as a Community Contributor, with a permanent badge in this README.
🤝 License
This project is released under the CC BY-NC 4.0 license. Parts of this project contain code and models from other sources, which are subject to their respective licenses. For commercial use cases, please contact us at: service@miromind.ai.
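The dataset card states that each sample is stored as one JSON line. A minimal sketch of reading such a JSONL file; the field names used here ("question", "answer", "trace") are hypothetical illustrations, not the dataset's actual schema:

```python
# Hedged sketch: iterating over JSON-lines samples of the kind the card
# describes ("one JSON line" per sample). Field names are invented.
import json

sample_line = json.dumps({
    "question": "Example multi-hop question?",
    "answer": "Example answer",
    "trace": [{"tool": "search", "args": {"q": "example"}}],
})

def iter_samples(lines):
    """Yield one parsed sample per JSONL line, skipping blank lines."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

for sample in iter_samples([sample_line]):
    print(sample["answer"])        # → Example answer
    print(len(sample["trace"]))    # → 1
```

In practice the gated files would be fetched from the Hugging Face repo after accepting the access conditions, then streamed line by line in the same way.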
Citation
If you find this project useful in your research, please consider citing:
@article{miromind2025mirothinker,
  title={MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling},
  author={MiroMind Team and Bai, Song and Bing, Lidong and Chen, Carson and Chen, Guanzheng and Chen, Yuntao and Chen, Zhe and Chen, Ziyi and Dai, Jifeng and Dong, Xuan and others},
  journal={arXiv preprint arXiv:2511.11793},
  year={2025}
}
Contact Us
MiroVerse is developed by the MiroMind Data Team. If you would like to leave us a message, feel free to get in touch. In addition to GitHub, Discord, WeChat, and RedNote, you can also reach us via email at service@miromind.ai.
Downloads last month: 767
Space using miromind-ai/MiroVerse-v0.1: miromind-ai/MiroMind-Open-Source-Deep-Research
Collection including miromind-ai/MiroVerse-v0.1: MiroThinker-v0.1 (high performance in deep research and tool use; 7 items; updated Sep 8, 2025)
Paper for miromind-ai/MiroVerse-v0.1: MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling (arXiv 2511.11793, published Nov 14, 2025)
| 2026-01-13T09:29:15
https://www.linkedin.com/legal/user-agreement?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fposts%2Fmakerxau_were-incredibly-proud-to-have-been-nominated-activity-7401472884281692161-_WTa&trk=registration-frontend_join-form-user-agreement | User Agreement | LinkedIn
User Agreement · Summary of User Agreement · Privacy Policy · Professional Community Policies · Cookie Policy · Copyright Policy · Regional Info · EU Notice · California Privacy Disclosure · U.S. State Privacy Laws
User Agreement
Effective on November 3, 2025
Our mission is to connect the world’s professionals to allow them to be more productive and successful. Our services are designed to promote economic opportunity for our members by enabling you and millions of other professionals to meet, exchange ideas, learn, and find opportunities or employees, work, and make decisions in a network of trusted relationships.
Table of Contents: Introduction · Obligations · Rights and Limits · Disclaimer and Limit of Liability · Termination · Governing Law and Dispute Resolution · General Terms · LinkedIn “Dos and Don’ts” · Complaints Regarding Content · How To Contact Us
Introduction
1.1 Contract
When you use our Services you agree to all of these terms. Your use of our Services is also subject to our Cookie Policy and our Privacy Policy, which covers how we collect, use, share, and store your personal information.
By creating a LinkedIn account or accessing or using our Services (described below), you are agreeing to enter into a legally binding contract with LinkedIn (even if you are using third party credentials or using our Services on behalf of a company). If you do not agree to this contract (“Contract” or “User Agreement”), do not create an account or access or otherwise use any of our Services.
If you wish to terminate this Contract at any time, you can do so by closing your account and no longer accessing or using our Services. As a Visitor or Member of our Services, the collection, use, and sharing of your personal data is subject to our Privacy Policy, our Cookie Policy and other documents referenced in our Privacy Policy, and updates. You acknowledge and have read our Privacy Policy.
Services
This Contract applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-related sites, apps, communications, and other services that state that they are offered under this Contract (“Services”), including the offsite collection of data for those Services, such as via our ads and the “Apply with LinkedIn” and “Share with LinkedIn” plugins.
LinkedIn and other Key Terms
You are entering into this Contract with LinkedIn (also referred to as “we” and “us”).
Designated Countries. We use the term “Designated Countries” to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. If you reside in the “Designated Countries”, you are entering into this Contract with LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) and LinkedIn Ireland will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you reside outside of the “Designated Countries”, you are entering into this Contract with LinkedIn Corporation (“LinkedIn Corp.”) and LinkedIn Corp. will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services.
Affiliates. Affiliates are companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., Github, Inc.).
Social Action.
Actions that members take on our services such as likes, comments, follows, sharing content. Content . Content includes, for example, feed posts, feedback, comments, profiles, articles (and contributions), group posts, job postings, messages (including InMails), videos, photos, audio, and/or PDFs. 1.2 Members and Visitors This Contract applies to Members and Visitors. When you register and join the LinkedIn Services, you become a “Member”. If you have chosen not to register for our Services, you may access certain features as a “Visitor.” 1.3 Changes We may make changes to this Contract. We may modify this Contract, our Privacy Policy and our Cookie Policy from time to time. If we materially change these terms or if we are legally required to provide notice, we will provide you notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective. However, we may not always provide prior notice of changes to these terms (1) when those changes are legally required to be implemented with immediate effect, or (2) when those changes relate to a newly launched service or feature. We agree that changes cannot be retroactive. If you object to any of these changes, you may close your account . Your continued use of our Services after we publish or send a notice about our changes to these terms means that you are consenting to the updated terms as of their effective date. 2. Obligations 2.1 Service Eligibility Here are some promises that you make to us in this Contract: You’re eligible to enter into this Contract and you are at least our “Minimum Age.” The Services are not for use by anyone under the age of 16. To use the Services, you agree that: (1) you must be the "Minimum Age" (described below) or older; (2) you will only have one LinkedIn account, which must be in your real name; and (3) you are not already restricted by LinkedIn from using the Services. 
Creating an account with false information is a violation of our terms, including accounts registered on behalf of others or persons under the age of 16. “Minimum Age” means 16 years old. However, if law requires that you must be older in order for LinkedIn to lawfully provide the Services to you without parental consent (including using your personal data) then the Minimum Age is such older age. Learn More 2.2 Your Account You will keep your password a secret You will not share your account with anyone else and will follow our policies and the law. Members are account holders. You agree to: (1) protect against wrongful access to your account (e.g., use a strong password and keep it confidential); (2) not share or transfer your account or any part of it (e.g., sell or transfer the personal data of others by transferring your connections); and (3) follow the law, our list of Dos and Don’ts (below), and our Professional Community Policies . Learn More You are responsible for anything that happens through your account unless you close it or report misuse. As between you and others (including your employer), your account belongs to you. However, if the Services were purchased by another party for you to use in connection with your work for them (e.g., Recruiter seat or LinkedIn Learning subscription bought by your employer), the party paying for such Service has the right to control access to and get reports on your use of such paid Service; however, they do not have rights to your personal account. 2.3 Payment You’ll honor your payment obligations and you are okay with us storing your payment information. You understand that there may be fees and taxes that are added to our prices. Refunds are subject to our policy, and we may modify our prices and those modified prices will apply prospectively. If you buy any of our paid Services, you agree to pay us the applicable fees and taxes and you agree to the additional terms specific to the paid Services. 
Failure to pay these fees will result in the termination of your paid Services. Also, you agree that: Your purchase may be subject to foreign exchange fees or differences in prices based on location (e.g., exchange rates). We may store and continue billing your payment method (e.g., credit card), even after it has expired, to avoid interruptions in your paid Services and to use it to pay for other Services you may buy. If your primary payment method fails, we may automatically charge a secondary payment method, if you have provided one. You may update or change your payment method. Learn more If you purchase a subscription, your payment method automatically will be charged at the start of each subscription period for the fees and taxes applicable to that period. To avoid future charges, cancel before the renewal date. Learn how to cancel or suspend your paid subscription Services. We may modify our prices effective prospectively upon reasonable notice to the extent allowed under the law. All of your paid Services are subject to LinkedIn’s refund policy . We may calculate taxes payable by you based on the billing information that you provide us. You can get a copy of your invoice through your LinkedIn account settings under “ Purchase History ”. 2.4 Notices and Messages You’re okay with us providing notices and messages to you through our websites, apps, and contact information. If your contact information is out of date, you may miss out on important notices. You agree that we will provide notices and messages to you in the following ways: (1) within the Services or (2) sent to the contact information you provided us (e.g., email, mobile number, physical address). You agree to keep your contact information up to date. Please review your settings to control and limit the types of messages you receive from us. 2.5 Sharing When you share information on our Services, others can see, copy and use that information. 
Our Services allow sharing of information (including content) in many ways, such as through your profile, posts, articles, group posts, links to news articles, job postings, messages, and InMails. Depending on the feature and choices you make, information that you share may be seen by other Members, Visitors, or others (on or off of the Services). Where we have made settings available, we will honor the choices you make about who can see content or other information (e.g., message content to your addressees, sharing content only to LinkedIn connections, restricting your profile visibility from search tools, or opting not to notify others of your LinkedIn profile update). For job searching activities, we default to not notifying your connections or the public. So, if you apply for a job through our Services or opt to signal that you are interested in a job, our default is to share it only with the job poster. To the extent that laws allow this, we are not obligated to publish any content or other information on our Services and can remove it with or without notice. 3. Rights and Limits 3.1. Your License to LinkedIn You own all of your original content that you provide to us, but you also grant us a non-exclusive license to it. We’ll honor the choices you make about who gets to see your content, including how it can be used for ads. As between you and LinkedIn, you own your original content that you submit or post to the Services. You grant LinkedIn and our Affiliates the following non-exclusive license to the content and other information you provide (e.g., share, post, upload, and/or otherwise submit) to our Services: A worldwide, transferable and sublicensable right to use, copy, modify, distribute, publicly perform and display, host, and process your content and other information without any further consent, notice and/or compensation to you or others. 
These rights are limited in the following ways: You can end this license for specific content by deleting such content from the Services, or generally by closing your account, except (a) to the extent you (1) shared it with others as part of the Services and they copied, re-shared it or stored it, (2) we had already sublicensed others prior to your content removal or closing of your account, or (3) we are required by law to retain or share it with others, and (b) for the reasonable time it takes to remove the content you delete from backup and other systems. We will not include your content in advertisements for the products and services of third parties to others without your separate consent (including sponsored content). However, without compensation to you or others, ads may be served near your content and other information, and your social actions may be visible and included with ads, as noted in the Privacy Policy. If you use a Service feature, we may mention that with your name or photo to promote that feature within our Services, subject to your settings. We will honor the audience choices for shared content (e.g., “Connections only”). For example, if you choose to share your post to "Anyone on or off LinkedIn” (or similar): (a) we may make it available off LinkedIn; (b) we may enable others to publicly share onto third-party services (e.g., a Member embedding your post on a third party service); and/or (c) we may enable search tools to make that public content findable though their services. Learn More While we may edit and make format changes to your content (such as translating or transcribing it, modifying the size, layout or file type, and removing or adding labels or metadata), we will take steps to avoid materially modifying the meaning of your expression in content you share with others. 
Because you own your original content and we only have non-exclusive rights to it, you may choose to make it available to others, including under the terms of a Creative Commons license . You and LinkedIn agree that if content includes personal data, it is subject to our Privacy Policy. You and LinkedIn agree that we may access, store, process, and use any information (including content and/or personal data) that you provide in accordance with the terms of the Privacy Policy and your choices (including settings). By submitting suggestions or other feedback regarding our Services to LinkedIn, you agree that LinkedIn can use and share (but does not have to) such feedback for any purpose without compensation to you. You promise to only provide content and other information that you have the right to share and that your LinkedIn profile will be truthful. You agree to only provide content and other information that does not violate the law or anyone’s rights (including intellectual property rights). You have choices about how much information to provide on your profile but also agree that the profile information you provide will be truthful. LinkedIn may be required by law to remove certain content and other information in certain countries. 3.2 Service Availability We may change or limit the availability of some features, or end any Service. We may change, suspend or discontinue any of our Services. We may also limit the availability of features, content and other information so that they are not available to all Visitors or Members (e.g., by country or by subscription access). We don’t promise to store or show (or keep showing) any information (including content) that you’ve shared. LinkedIn is not a storage service. You agree that we have no obligation to store, maintain or provide you a copy of any content or other information that you or others provide, except to the extent required by applicable law and as noted in our Privacy Policy. 
3.3 Other Content, Sites and Apps Your use of others’ content and information posted on our Services, is at your own risk. Others may offer their own products and services through our Services, and we aren’t responsible for those third-party activities. Others’ Content: By using the Services, you may encounter content or other information that might be inaccurate, incomplete, delayed, misleading, illegal, offensive, or otherwise harmful. You agree that we are not responsible for content or other information made available through or within the Services by others, including Members. While we apply automated tools to review much of the content and other information presented in the Services, we cannot always prevent misuse of our Services, and you agree that we are not responsible for any such misuse. You also acknowledge the risk that others may share inaccurate or misleading information about you or your organization, and that you or your organization may be mistakenly associated with content about others, for example, when we let connections and followers know you or your organization were mentioned in the news. Members have choices about this feature . Others’ Products and Services: LinkedIn may help connect you to other Members (e.g., Members using Services Marketplace or our enterprise recruiting, jobs, sales, or marketing products) who offer you opportunities (on behalf of themselves, their organizations, or others) such as offers to become a candidate for employment or other work or offers to purchase products or services. You acknowledge that LinkedIn does not perform these offered services, employ those who perform these services, or provide these offered products. 
You further acknowledge that LinkedIn does not supervise, direct, control, or monitor Members in the making of these offers, or in their providing you with work, delivering products or performing services, and you agree that (1) LinkedIn is not responsible for these offers, or performance or procurement of them, (2) LinkedIn does not endorse any particular Member's offers, and (3) LinkedIn is not an agent or employment agency on behalf of any Member offering employment or other work, products or services. With respect to employment or other work, LinkedIn does not make employment or hiring decisions on behalf of Members offering opportunities and does not have such authority from Members or organizations using our products. For Services Marketplace, (a) you must be at least 18 years of age to procure, offer, or perform services, and (b) you represent and warrant that you have all the required licenses and will provide services consistent with the relevant industry standards and our Professional Community Policies.

Others' Events: Similarly, LinkedIn may help you register for and/or attend events organized by Members and connect with other Members who are attendees at such events. You agree that (1) LinkedIn is not responsible for the conduct of any of the Members or other attendees at such events, (2) LinkedIn does not endorse any particular event listed on our Services, (3) LinkedIn does not review and/or vet any of these events or speakers, and (4) you will adhere to the terms and conditions that apply to such events.

3.4 Limits

We have the right to limit how you connect and interact on our Services.

LinkedIn reserves the right to limit your use of the Services, including the number of your connections and your ability to contact other Members.
LinkedIn reserves the right to restrict, suspend, or terminate your account if you breach this Contract or the law or are misusing the Services (e.g., violating any of the Dos and Don'ts or Professional Community Policies). We can also remove any content or other information you shared if we believe it violates our Professional Community Policies or Dos and Don'ts or otherwise violates this Contract. Learn more about how we moderate content.

3.5 Intellectual Property Rights

We're providing you notice about our intellectual property rights.

LinkedIn reserves all of its intellectual property rights in the Services. Trademarks and logos used in connection with the Services are the trademarks of their respective owners. LinkedIn, and "in" logos and other LinkedIn trademarks, service marks, graphics and logos used for our Services are trademarks or registered trademarks of LinkedIn.

3.6 Recommendations and Automated Processing

We use data and other information about you to make and order relevant suggestions and to generate content for you and others.

Recommendations: We use the data and other information that you provide and that we have about Members and content on the Services to make recommendations for connections, content, ads, and features that may be useful to you. We use that data and other information to recommend and to present information to you in an order that may be more relevant for you. For example, that data and information may be used to recommend jobs to you and you to recruiters and to organize content in your feed in order to optimize your experience and use of the Services. Keeping your profile accurate and up to date helps us to make these recommendations more accurate and relevant. Learn More

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.
Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information. The Services may include content automatically generated and shared using tools offered by LinkedIn or others off LinkedIn. Like all content and other information on our Services, regardless of whether it's labeled as created by "AI", be sure to carefully review it before relying on it.

4. Disclaimer and Limit of Liability

4.1 No Warranty

This is our disclaimer of legal liability for the quality, safety, or reliability of our Services.

LINKEDIN AND ITS AFFILIATES MAKE NO REPRESENTATION OR WARRANTY ABOUT THE SERVICES, INCLUDING ANY REPRESENTATION THAT THE SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, AND PROVIDE THE SERVICES (INCLUDING CONTENT, OUTPUT AND INFORMATION) ON AN "AS IS" AND "AS AVAILABLE" BASIS. TO THE FULLEST EXTENT PERMITTED UNDER APPLICABLE LAW, LINKEDIN AND ITS AFFILIATES DISCLAIM ANY IMPLIED OR STATUTORY WARRANTY, INCLUDING ANY IMPLIED WARRANTY OF TITLE, ACCURACY, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. If you plan to use content, output and information for any reason, it is your responsibility to verify its accuracy and fitness for your purposes, because any content, output and information on the service may not reflect accurate, complete, or current information.

4.2 Exclusion of Liability

These are the limits of legal liability we may have to you.
TO THE FULLEST EXTENT PERMITTED BY LAW (AND UNLESS LINKEDIN HAS ENTERED INTO A SEPARATE WRITTEN AGREEMENT THAT OVERRIDES THIS CONTRACT), LINKEDIN AND ITS AFFILIATES WILL NOT BE LIABLE IN CONNECTION WITH THIS CONTRACT FOR LOST PROFITS OR LOST BUSINESS OPPORTUNITIES, REPUTATION (E.G., OFFENSIVE OR DEFAMATORY STATEMENTS), LOSS OF DATA (E.G., DOWN TIME OR LOSS, USE OF, OR CHANGES TO, YOUR INFORMATION OR CONTENT) OR ANY INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES. LINKEDIN AND ITS AFFILIATES WILL NOT BE LIABLE TO YOU IN CONNECTION WITH THIS CONTRACT FOR ANY AMOUNT THAT EXCEEDS (A) THE TOTAL FEES PAID OR PAYABLE BY YOU TO LINKEDIN FOR THE SERVICES DURING THE TERM OF THIS CONTRACT, IF ANY, OR (B) US $1000.

4.3 Basis of the Bargain; Exclusions

The limitations of liability in this Section 4 are part of the basis of the bargain between you and LinkedIn and shall apply to all claims of liability (e.g., warranty, tort, negligence, contract and law) even if LinkedIn or its affiliates has been told of the possibility of any such damage, and even if these remedies fail their essential purpose.

THESE LIMITATIONS OF LIABILITY DO NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY OR FOR FRAUD, GROSS NEGLIGENCE OR INTENTIONAL MISCONDUCT, OR IN CASES OF NEGLIGENCE WHERE A MATERIAL OBLIGATION HAS BEEN BREACHED, A MATERIAL OBLIGATION BEING AN OBLIGATION WHICH FORMS A PREREQUISITE TO OUR DELIVERY OF SERVICES AND ON WHICH YOU MAY REASONABLY RELY, BUT ONLY TO THE EXTENT THAT THE DAMAGES WERE DIRECTLY CAUSED BY THE BREACH AND WERE FORESEEABLE UPON CONCLUSION OF THIS CONTRACT AND TO THE EXTENT THAT THEY ARE TYPICAL IN THE CONTEXT OF THIS CONTRACT.

5. Termination

We can each end this Contract, but some rights and obligations survive.

Both you and LinkedIn may terminate this Contract at any time with notice to the other. On termination, you lose the right to access or use the Services.
The following shall survive termination:

Our rights to use and disclose your feedback;
Section 3 (subject to 3.1.1);
Sections 4, 6, 7, and 8.2 of this Contract; and
Any amounts owed by either party prior to termination remain owed after termination.

You can visit our Help Center to learn about how to close your account.

6. Governing Law and Dispute Resolution

In the unlikely event we end up in a legal dispute, depending on where you live, you and LinkedIn agree to resolve it in California courts using California law, Dublin, Ireland courts using Irish law, or as otherwise provided in this section.

If you live in the Designated Countries, the laws of Ireland govern all claims related to LinkedIn's provision of the Services, but this shall not deprive you of the mandatory consumer protections under the law of the country to which we direct your Services where you have habitual residence. With respect to jurisdiction, you and LinkedIn agree to choose the courts of the country to which we direct your Services where you have habitual residence for all disputes arising out of or relating to this User Agreement, or in the alternative, you may choose the responsible court in Ireland. If you are a business user within the scope of Article 6(12) of the EU Digital Markets Act ("DMA") and have a dispute arising out of or in connection with Article 6(12) of the DMA, you may also utilize the alternative dispute resolution mechanism available in the Help Center.

For others outside of Designated Countries, including those who live outside of the United States: You and LinkedIn agree that the laws of the State of California, U.S.A., excluding its conflict of laws rules, shall exclusively govern any dispute relating to this Contract and/or the Services. You and LinkedIn both agree that all claims and disputes can be litigated only in the federal or state courts in Santa Clara County, California, USA, and you and LinkedIn each agree to personal jurisdiction in those courts.
You may have additional rights of redress and appeal for some decisions made by LinkedIn that impact you.

7. General Terms

Here are some important details about the Contract.

If a court with authority over this Contract finds any part of it unenforceable, you and we agree that the court should modify the terms to make that part enforceable while still achieving its intent. If the court cannot do that, you and we agree to ask the court to remove that unenforceable part and still enforce the rest of this Contract. This Contract (including additional terms that may be provided by us when you engage with a feature of the Services) is the only agreement between us regarding the Services and supersedes all prior agreements for the Services. If we don't act to enforce a breach of this Contract, that does not mean that LinkedIn has waived its right to enforce this Contract. You may not assign or transfer this Contract (or your membership or use of Services) to anyone without our consent. However, you agree that LinkedIn may assign this Contract to its affiliates or a party that buys it without your consent. There are no third-party beneficiaries to this Contract. You agree that the only way to provide us legal notice is at the addresses provided in Section 10.

8. LinkedIn "Dos and Don'ts"

LinkedIn is a community of professionals. This list of "Dos and Don'ts" along with our Professional Community Policies limits what you can and cannot do on our Services, unless otherwise explicitly permitted by LinkedIn in a separate writing (e.g., through a research agreement).

8.1.
Dos

You agree that you will:

Comply with all applicable laws, including, without limitation, privacy laws, intellectual property laws, anti-spam laws, export control laws, laws governing the content shared, and other applicable laws and regulatory requirements;
Provide accurate contact and identity information to us and keep it updated;
Use your real name on your profile; and
Use the Services in a professional manner.

8.2. Don'ts

You agree that you will not:

Create a false identity on LinkedIn, misrepresent your identity, create a Member profile for anyone other than yourself (a real person), or use or attempt to use another's account (such as sharing log-in credentials or copying cookies);
Develop, support or use software, devices, scripts, robots or any other means or processes (such as crawlers, browser plugins and add-ons or any other technology) to scrape or copy the Services, including profiles and other data from the Services;
Override any security feature or bypass or circumvent any access controls or use limits of the Services (such as search results, profiles, or videos);
Copy, use, display or distribute any information (including content) obtained from the Services, whether directly or through third parties (such as search tools or data aggregators or brokers), without the consent of the content owner (such as LinkedIn for content it owns);
Disclose information that you do not have the consent to disclose (such as confidential information of others, including your employer);
Violate the intellectual property rights of others, including copyrights, patents, trademarks, trade secrets or other proprietary rights.
For example, do not copy or distribute (except through the available sharing functionality) the posts or other content of others without their permission, which they may give by posting under a Creative Commons license;
Violate the intellectual property or other rights of LinkedIn, including, without limitation, (i) copying or distributing our learning videos or other materials, (ii) copying or distributing our technology, unless it is released under open source licenses, or (iii) using the word "LinkedIn" or our logos in any business name, email, or URL except as provided in the Brand Guidelines;
Post (or otherwise share) anything that contains software viruses, worms, or any other harmful code;
Reverse engineer, decompile, disassemble, decipher or otherwise attempt to derive the source code for the Services or any related technology that is not open source;
Imply or state that you are affiliated with or endorsed by LinkedIn without our express consent (e.g., representing yourself as an accredited LinkedIn trainer);
Rent, lease, loan, trade, sell/re-sell or otherwise monetize the Services or related data or access to the same, without LinkedIn's consent;
Deep-link to our Services for any purpose other than to promote your profile or a Group on our Services, without LinkedIn's consent;
Use bots or other unauthorized automated methods to access the Services, add or download contacts, send or redirect messages, create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement;
Engage in "framing", "mirroring", or otherwise simulating the appearance or function of the Services;
Overlay or otherwise modify the Services or their appearance (such as by inserting elements into the Services or removing, covering, or obscuring an advertisement included on the Services);
Interfere with the operation of, or place an unreasonable load on, the Services (e.g., spam, denial of service attack, viruses, manipulating algorithms);
Violate the Professional Community Policies, certain third party terms where applicable, or any additional terms concerning a specific Service that are provided when you sign up for or start using such Service;
Use our Services to do anything that is unlawful, misleading, discriminatory, fraudulent or deceitful (e.g., manipulated media that wrongfully depicts a person saying or doing something they did not say or do); and/or
Misuse our reporting or appeals process, including by submitting duplicative, fraudulent or unfounded reports, complaints or appeals.

9. Complaints Regarding Content

Contact information for complaints about content provided by our Members.

We ask that you report content and other information that you believe violates your rights (including intellectual property rights), our Professional Community Policies or otherwise violates this Contract or the law. To the extent we can under law, we may remove or restrict access to content, features, services, or information, including if we believe that it's reasonably necessary to avoid harm to LinkedIn or others, violates the law or is reasonably necessary to prevent misuse of our Services. We reserve the right to take action against serious violations of this Contract, including by implementing account restrictions for significant violations. We respect the intellectual property rights of others. We require that information shared by Members be accurate and not in violation of the intellectual property rights or other rights of third parties. We provide a policy and process for complaints concerning content shared, and/or trademarks used, by our Members.

10. How To Contact Us

Our contact information. Our Help Center also provides information about our Services. For general inquiries, you may contact us online. For legal notices or service of process, you may write us at these addresses.
LinkedIn © 2026
How well can LLMs see? · Kabir Khandpur

21 Nov 2024

TL;DR

I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items.

Introduction

I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called wimmelbilder, which is German for "teeming picture".

A wimmelbilder. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats.

While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench.

Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it.

The best multimodal language model I tested was Gemini Pro, which can accurately describe small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; typically a score of 0.5 and above is considered good) of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring -0.5.

In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis.

Aside: why not SAM?

Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, bounding boxes — but not text — as prompts.

SAM segmenting an image

SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that it would score well on Wimmelbench. So why benchmark large language models?

It tests how well the most general purpose, human-like models we have today understand images.
I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs.
I didn't have easy access to the SAM-based models.
It seemed like a fun project.

Benchmark construction

I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish since the style is so popular in children's books.

To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects.

The annotation tool

In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1).

Figure 1: Distribution of (ground truth bounding box area / image area)

The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
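The area-ratio statistic behind figure 1 is simple to compute. A minimal sketch, assuming a hypothetical annotation layout (the benchmark's actual JSON schema isn't shown in the post):

```python
# Sketch of computing (box area / image area) per annotation.
# The field names "image_w", "image_h", and "box" are hypothetical,
# not necessarily the benchmark's schema.
def area_ratio(annotation):
    x1, y1, x2, y2 = annotation["box"]  # pixel coordinates
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    return box_area / (annotation["image_w"] * annotation["image_h"])

annotations = [
    {"image_w": 2000, "image_h": 1500, "box": [100, 200, 220, 320]},
    {"image_w": 1600, "image_h": 1200, "box": [40, 40, 600, 500]},
]
ratios = [area_ratio(a) for a in annotations]  # histogram these for figure 1
```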
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation.

Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate.

I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1].

I then asked Gemini 1.5 Pro to grade the accuracy of the model descriptions on a scale of 0-3, where:

0: Completely incorrect or missing critical details.
1: Partially correct but missing many important details or containing significant inaccuracies.
2: Majorly correct with some inaccuracies or missing details.
3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately.

I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository.

Results

1. Description accuracy

All models scored a grade of 3 for about 30% of object descriptions.
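The two coordinate conventions from the setup above convert mechanically between one another. A small sketch (the helper name is mine, not from the post's code):

```python
def gemini_to_xyxy(box):
    """Convert a Gemini-style [ymin, xmin, ymax, xmax] box with values in
    [0, 1000] to the [x1, y1, x2, y2] format with values in [0, 1]."""
    ymin, xmin, ymax, xmax = box
    return [xmin / 1000, ymin / 1000, xmax / 1000, ymax / 1000]

# e.g. gemini_to_xyxy([100, 250, 400, 750]) -> [0.25, 0.1, 0.75, 0.4]
```

Normalizing both formats to a single convention lets one scoring function handle every model's output.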
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2).

Figure 2: Grade

2. Bounding box precision

The models are much worse at predicting bounding box coordinates. For each object, I plotted the model's Generalized Intersection over Union (GIoU), defined as GIoU = |A∩B| / |A∪B| − |C\(A∪B)| / |C|, where C is the smallest convex hull that encloses both A and B, between the predicted and ground truth bounding boxes (figure 3). GIoU scores range from -1 to 1, where 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks.

Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively.

Figure 3: Distribution of GIoU scores

3. Hallucination rate

To test how much the models would pretend to see objects that weren't there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars.

Figure 4: Hallucination rate

Analysis

Takeaways:

Gemini Pro scores best on both description and bounding box accuracy.
Models can often describe objects well, but struggle to locate them precisely.
Object size strongly influences bounding box accuracy, but not description quality.

Size sometimes matters

Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
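For axis-aligned boxes, the GIoU metric used in the bounding-box results reduces to simple box arithmetic, since the smallest convex hull C enclosing two boxes is just the smallest enclosing box. A sketch of the metric (not necessarily the post's evaluation code):

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes given as [x1, y1, x2, y2]."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area (zero if the boxes are disjoint)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # Smallest box C enclosing both A and B (their convex hull for boxes)
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c - union) / c

print(giou([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0: perfect overlap
print(giou([0, 0, 1, 1], [2, 2, 3, 3]))  # negative: disjoint and far apart
```

Unlike plain IoU, which is 0 for all disjoint boxes, the enclosing-box penalty makes GIoU increasingly negative the further a predicted box lands from the target, which is what makes the -0.6 mean scores above interpretable.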
Figure 5: GIoU vs area ratio

Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) about equally at describing objects across a range of sizes.

Figure 6: Grade vs area ratio

Grade vs bounding box accuracy

Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box.

Figure 7: Grade vs GIoU

This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said:

"There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur."

But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn't happen — if someone could see an object, they could draw a bounding box around it.

Maybe this is a prompt issue? I wouldn't be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn't a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
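The grid idea can be made concrete: overlay an n × n grid, ask the model for a cell index, and map the cell back to normalized coordinates. A purely illustrative sketch of that mapping (this is my interpretation of the suggestion, not something the post tested):

```python
def cell_to_box(row, col, n=10):
    """Map cell (row, col) of an n x n grid to an [x1, y1, x2, y2] box
    in [0, 1]. Rows count down from the top, columns across from the left."""
    return [col / n, row / n, (col + 1) / n, (row + 1) / n]

# A model answering "row 3, col 7" on a 10 x 10 grid implies the box
# [0.7, 0.3, 0.8, 0.4]: coarse, but anchored to something visible in the image.
```

The trade-off is resolution: a 10 × 10 grid caps localization accuracy at cell size, so small objects (under ~1% of image area in this dataset) may still be poorly boxed.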
Gemini's stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: "For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…").

I don't understand the description accuracy variance

Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes score perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the 'couple kissing':

"In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife."

On the other hand, it completely fails to find the lifebuoy in the picture below, even though it's roughly the same size as the previous object.

"A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks."

The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don't understand what causes the difference in ability.

Future work

If I had more time…

Expand the dataset to several hundred objects.
Test with more LLMs and also specialized vision models like SAM.
Improve prompting strategies for bounding box prediction.

Conclusion

Wimmelbench convinced me that the big labs' language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I'd love to hear your thoughts — feel free to email me.
Appendix: example results

Claude:

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
Model description: There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the 'EAT ME' café in the scene.
Grade: 0 · GIoU: -0.89

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
Model description: There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that's set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters.
Grade: 2 · GIoU: -0.51

Gemini:

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
Model description: A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill.
Grade: 3 · GIoU: 0.52

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
Model description: An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image's center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant's right, a light gray cow seems to be giving a presentation while holding a microphone.
Grade: 3 · GIoU: 0.70

GPT-4o:

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
Model description: The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals.
Grade: 0 · GIoU: -0.98

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
Model description: The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting.
Grade: 3 · GIoU: 0.31

Claude:

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
Model description: In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He's wearing what appears to be blue clothing and is positioned near the green paneled wall. He's surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion.
Grade: 0 · GIoU: -0.94

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.
Model description: There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene.
Grade: 0 · GIoU: -0.94

Gemini:

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
Model description: A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling.
Grade: 1 · GIoU: 0.04

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.
Model description: A bowling ball can be seen lying on the floor amidst the chaotic scene. It's located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It's spatially close to a scattered pile of what appear to be pins, suggesting that it's part of the chaotic activity in the depicted scene.
Grade: 0 · GIoU: -0.99

GPT-4o:

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
Model description: The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden.
Grade: 0 · GIoU: -0.98

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.
Model description: The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos.
Grade: 0 · GIoU: -0.85
https://kabirk.com/wimmelbench#analysis | How well can LLMs see? · Kabir Khandpur · 21 Nov 2024 TL;DR I created Wimmelbench, a benchmark to test how well multimodal language models can find specific objects in complex illustrations. The best model (Gemini Pro) could accurately describe objects 31% of the time but struggled with precise location marking. Models performed better with larger objects and showed varying tendencies to hallucinate non-existent items. Introduction I was recently reminded of old picture books I used to own. The illustrations inside were completely hand-drawn, richly detailed scenes, packed with dozens of people, overlapping objects, and multiple activities — reminiscent of Where's Waldo, except that you're not hunting for a particular character. This type of illustration is called a Wimmelbild (plural: wimmelbilder), German for "teeming picture". A wimmelbild. My favourite bit is the man playing the clarinet on his balcony to an audience of rooftop cats. While I'm convinced that LLMs can really "read" language, I'm less sure that they "see" images. To measure sight quantitatively, I created a small benchmark that I'm calling Wimmelbench. Wimmelbench takes inspiration from needle in a haystack. In that benchmark, a random fact (the needle) is inserted into the middle of a large piece of text (the haystack), and an LLM is asked to retrieve the fact. Most LLMs these days score close to 100% on this task. Wimmelbench is the image analogue: a model is asked to describe a small object (the needle) in a complex scene (the haystack) and draw a bounding box around it. The best multimodal language model I tested was Gemini Pro, which can accurately locate small objects 31% of the time. This increases to 73% if 'mostly correct' descriptions are also included. The models were much poorer at drawing bounding boxes, with Gemini scoring the best at a mean GIoU (Generalized Intersection over Union; a score of 0.5 and above is typically considered good) of 0.13 and Claude 3.6 Sonnet & GPT-4o scoring -0.5. In this post, I'll talk about constructing the benchmark, go into more detail on the results, and offer some analysis. Aside: why not SAM? Segment Anything Model (SAM) is Meta's foundation model for promptable segmentation in images and videos. It accepts masks, points, bounding boxes — but not text — as prompts. SAM segmenting an image. SAM-based models like LISA do allow text inputs like "Segment the red car in the image". I expect that it would score well on Wimmelbench. So why benchmark large language models? It tests how well the most general purpose, human-like models we have today understand images. I'm interested in "easy" unsaturated benchmarks that reveal a real shortcoming in current LLMs. I didn't have easy access to the SAM-based models. It seemed like a fun project. Benchmark construction I collected 50 images from various sources on the internet, mostly from book storefronts and Behance. I searched specifically for wimmelbilder scenes that had a diverse range of objects and activities. They tended to be cartoonish, since the style is so popular in children's books. To create the ground truth dataset, I (well, mostly Claude) built a web-based annotation tool, which I used to manually write descriptions and draw bounding boxes for 84 objects in 50 images. I aimed to have a range of sizes and locations for the objects. The annotation tool. In each description, I included the object's location, distinguishing features, and relation to nearby objects. The annotations were saved to a JSON file. To visualize the range of object sizes in the dataset, I plotted the distribution of each bounding box's area as a percentage of the total image area (figure 1). Figure 1: Distribution of (ground truth bounding box area / image area). The distribution covered a range from 0.1% to 10% of the image area, with most boxes in the 0.1% - 1% range.
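The area-ratio statistic plotted in figure 1 is just box area over image area. A minimal sketch (the pixel-coordinate [x1, y1, x2, y2] box format here is an assumption, not necessarily how the repository stores annotations):

```python
def box_area_ratio(box, img_w, img_h):
    """Area of an [x1, y1, x2, y2] pixel box as a fraction of the image area."""
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (img_w * img_h)

# A 100x80 px box in a 1000x800 px image covers 1% of it.
print(box_area_ratio([50, 60, 150, 140], 1000, 800))  # 0.01
```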
Ideally, I'd have hundreds of boxes and a more uniform distribution, but this looked reasonable enough to begin evaluation. Next, I asked Claude 3.6 Sonnet, Gemini 1.5 Pro, and GPT-4o (specifically claude-3-5-sonnet-20241022, gemini-1.5-pro-002, and gpt-4o-2024-08-06) to do two things: first, describe a specific object in an image; second, return the bounding box coordinates around that object. I noted that the object may not be present in the image, so that I could later test how much models would hallucinate. I used a slightly different prompt for Gemini (all prompts are available on GitHub) because it's been trained to return bounding box coordinates in the format [ymin, xmin, ymax, xmax] with values in the range [0, 1000]. I experimented with the same format for Claude and GPT-4o, but performance was slightly worse than using the internet-standard bounding box coordinates of [x1, y1, x2, y2] with values in [0, 1]. I then asked Gemini 1.5 Pro to grade the accuracy of the object descriptions on a scale of 0-3, where: 0: Completely incorrect or missing critical details. 1: Partially correct but missing many important details or containing significant inaccuracies. 2: Majorly correct with some inaccuracies or missing details. 3: Mostly or fully correct, capturing the majority of key details and spatial relationships accurately. I tried using the smaller Gemini Flash, but Gemini Pro was closer to my own judgement — looking through its reasoning, I'd quibble with at most 10% of its ratings. It was also consistent in returning the same grades when run multiple times. See the appendix for examples of bounding box predictions and grading. You can find the full evaluation code, grading code and results in the GitHub repository. Results 1. Description accuracy All models scored a grade of 3 for about 30% of object descriptions.
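Turning per-object grades into percentages like these is a simple tally. A sketch (the grades list below is hypothetical, not actual benchmark data):

```python
from collections import Counter

def grade_shares(grades):
    """Fraction of graded objects at each grade level 0-3."""
    counts = Counter(grades)
    return {g: counts.get(g, 0) / len(grades) for g in range(4)}

shares = grade_shares([3, 2, 0, 3, 1, 2, 0, 2])  # hypothetical grades
print(shares[3])              # share of fully correct descriptions
print(shares[2] + shares[3])  # share graded 'mostly correct' or better
```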
If we also include the mostly correct descriptions (grade 2), then Gemini outperforms the other models, scoring 73%, compared to 55% for GPT-4o and 49% for Claude (figure 2). Figure 2: Grade. 2. Bounding box precision The models are much worse at predicting bounding box coordinates. For each object, I plotted the model's Generalized Intersection over Union, $GIoU = \frac{|A\cap B|}{|A\cup B|} - \frac{|C\setminus(A\cup B)|}{|C|}$, where $C$ is the smallest convex hull that encloses both $A$ and $B$, between the predicted and ground truth bounding boxes (figure 3). GIoU scores range from -1 to 1, where 1.0 is perfect overlap; 0.0 means no overlap; negative values indicate the predicted box is far from the target. 0.5+ is typically considered good in computer vision tasks. Gemini Pro scores the best, with a mean GIoU of 0.13. Claude and GPT-4o score -0.60 and -0.67 respectively. Figure 3: Distribution of GIoU scores. 3. Hallucination rate To test how much the models would pretend to see objects that weren't there, I asked them to locate a telescope in each of the 50 images (figure 4). GPT-4o was the biggest liar, finding a telescope in 96% of the images (!). Claude hallucinated 30% of the time, but again Gemini led the pack, only hallucinating in 12% of the images. When it did hallucinate, it tended to be with similar objects like cameras or binoculars. Figure 4: Hallucination rate. Analysis Takeaways Gemini Pro scores best on both description and bounding box accuracy. Models can often describe objects well, but struggle to locate them precisely. Object size strongly influences bounding box accuracy, but not description quality. Size sometimes matters Bounding box accuracy showed a strong correlation with object size (figure 5). R² was ~0.5, which means that half of the variance in bounding box accuracy is explained by object size.
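The GIoU metric can be computed in a few lines. This sketch uses the standard box form, where the enclosing region C is taken as the smallest axis-aligned box containing both inputs (the usual implementation choice for rectangular boxes):

```python
def giou(a, b):
    """GIoU of two [x1, y1, x2, y2] boxes: IoU minus an enclosing-box penalty."""
    # Intersection of A and B (zero if the boxes are disjoint)
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    # Smallest axis-aligned box C enclosing both A and B
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c - union) / c

print(giou([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0 (perfect overlap)
print(giou([0, 0, 1, 1], [2, 0, 3, 1]))  # negative: boxes are far apart
```

The second term is what pushes far-apart predictions below zero: disjoint boxes always have IoU 0, but the penalty grows with the empty space in the enclosing box.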
Figure 5: GIoU vs area ratio. Grade was much less correlated with object size (figure 6). Models seemed to struggle (and succeed) more equally at describing an object across a range of sizes. Figure 6: Grade vs area ratio. Grade vs bounding box accuracy Grade was only weakly correlated with bounding box accuracy (figure 7). Here Gemini showed the strongest correlation: if it was able to describe an object accurately, it would tend to draw an accurate bounding box. Figure 7: Grade vs GIoU. This means that a model could often describe an object in the correct place but give completely incorrect bounding box coordinates, even for large objects. When asked to describe the cartoon tiger in this image, Claude said: "There is a yellow cartoon tiger with black stripes sitting at one of the classroom tables, approximately in the middle-left portion of the image. It appears to be a stuffed animal or toy tiger sitting on a chair at a desk with some school supplies. The tiger has a typical cartoonish appearance with distinctive black stripes on yellow fur." But the bounding box prediction (drawn in red without a label) is completely incorrect. If you gave a human the same task, this wouldn't happen — if someone could see an object, they could draw a bounding box around it. Maybe this is a prompt issue? I wouldn't be able to guess bounding box coordinates as 0-1 ratios either, so maybe this isn't a fair comparison. I saw a Reddit post suggesting that overlaying a grid on the image and asking the model to return grid coordinates yielded better results. Perhaps performance could also improve with additional post-training techniques. Claude with computer use is somehow able to move the cursor to a pixel-perfect location on screen, which I assume is the result of specific post-training. Similar strategies would likely transfer well to determining bounding box coordinates.
Gemini's stronger performance also suggests that there are post-training steps that can be taken to improve accuracy (from the Gemini docs: "For object detection, the Gemini model has been trained to provide these coordinates as relative widths or heights…"). I don't understand the description accuracy variance Object size only explains half the variance in description accuracy: that is, objects of the same size would sometimes be described perfectly, and other times completely incorrectly. For example, I was surprised at how well Gemini described the 'couple kissing': "In the bottom right corner of the image, a couple is kissing behind some large, bright green leaves. The woman has long brown hair and is wearing a red headscarf and top. The man has short brown curly hair and is wearing a bright red shirt. Just below the kissing couple, a person with short yellow hair and a yellow-green shirt is holding a knife." On the other hand, it completely fails to find the lifebuoy in the picture below, even though it's roughly the same size as the previous object. "A child floats on an orange lifebuoy in a small pool near the bottom right of the image. The lifebuoy is round and appears inflated. Another child is jumping into the pool. A waiter in a black suit walks towards the pool, carrying a tray with drinks." The lifebuoy is actually in the top right, on the side of the lifeguard's tall chair. I don't understand what causes the difference in ability. Future work If I had more time… Expand the dataset to several hundred objects. Test with more LLMs and also specialized vision models like SAM. Improve prompting strategies for bounding box prediction. Conclusion Wimmelbench convinced me that the big labs' language models indeed cannot see as well as they read. I suspect part of it is due to object size, part due to post-training, and part due to tokenization and model architecture. I'd love to hear your thoughts — feel free to email me.
Appendix: example results

Image 1 (street scene). Click to zoom in.

Object: Cactus
Ground truth: A small, green cactus is potted in the window of a light gray building on the left side of the image, near the middle vertically. In the window next to it is an orange cat. Both windows are above a teal awning.
Claude (grade 0, GIoU -0.89): There appears to be a small green cactus in a pot being carried on a serving tray by one of the rabbit/bunny characters on the left side of the street. The bunny is one of two gray bunnies carrying food items on trays, likely heading towards the 'EAT ME' café in the scene.
Gemini (grade 3, GIoU 0.52): A small, green cactus is located inside the first floor window of the grey building on the left side of the image. It is positioned on the window sill.
GPT-4o (grade 0, GIoU -0.98): The cactus is located in the bottom-right corner of the image. It is held by the giraffe and is green with several small red-orange flowers or spikes protruding from it. The giraffe is standing on the sidewalk next to a colorful group of animals.

Object: Elephant
Ground truth: There is an elephant standing on two legs near the bottom-middle of the image. It is holding a paintbrush and paint palette, wearing a purple top, and has an easel in front of it. The elephant is standing on the road. Mice are playing by its feet. Other animals are nearby.
Claude (grade 2, GIoU -0.51): There is a large grey elephant character sitting in the center of the street scene. The elephant is wearing a purple velvet robe or jacket and appears to be painting on a canvas that's set up on an easel. The elephant has distinctive large ears, a long trunk, and is shown in a sitting position while engaged in artistic activity. It appears to be one of the main focal points of the busy street scene, surrounded by various other animal characters.
Gemini (grade 3, GIoU 0.70): An elephant wearing a purple robe is painting on a canvas, standing on the street, slightly left of the image's center. The elephant is facing left and holds a paintbrush in its trunk and a palette in its left hand. A wooden easel holds the canvas. Directly to the elephant's right, a light gray cow seems to be giving a presentation while holding a microphone.
GPT-4o (grade 3, GIoU 0.31): The elephant is located slightly left of center in the lower half of the image. It is wearing a purple robe and holding a paint palette and brush, standing next to an easel. The elephant appears to be engaging in painting.

Image 2 (brawl scene). Click to zoom in.

Object: Man waving a white flag
Ground truth: A man waving a white flag is in the top left of the image, by a broken window. He is hung up on a light fixture by his jacket. To his right is a person falling, visible behind a broken window. Below him are people fighting each other.
Claude (grade 0, GIoU -0.94): In the far right upper portion of the chaotic scene, there is a man waving a white flag or surrendering. He's wearing what appears to be blue clothing and is positioned near the green paneled wall. He's surrounded by the general mayhem of the room, with other figures nearby also in various states of distress or motion.
Gemini (grade 1, GIoU 0.04): A man waving a white flag from which pieces of paper are falling is visible in the top left part of the image. He is wearing a blue suit, is seated on a short wall, and appears to be looking down at the chaotic scene below him. Behind him, one person climbs in through a broken window while another climbs out. The room has pale green walls with wood panelling.
GPT-4o (grade 0, GIoU -0.98): The man waving a white flag is located in the bottom right portion of the image. He is wearing a white outfit, lying on the ground on his back, and holding a white flag up in the air. Nearby, there are people in animated poses, and the floor is wooden.

Object: Bowling ball
Ground truth: A small black bowling ball is in the middle-right of the image. It is in the air, in between two men. The man on the right has punched the man on the left, who is reeling backwards. The bowling ball looks like it came off the left man's hand.
Claude (grade 0, GIoU -0.94): There is a black bowling ball visible in the chaotic scene, located in roughly the center-right portion of the image. It appears to be rolling or has been knocked loose during the mayhem, sitting on the wooden floor among the scattered people and overturned furniture. The bowling ball stands out as a small, round, black object amid the chaos of the cartoon scene.
Gemini (grade 0, GIoU -0.99): A bowling ball can be seen lying on the floor amidst the chaotic scene. It's located slightly below the center-left portion of the image, amongst a group of people engaged in a brawl. Its color is predominantly dark gray or black. It is not in motion. It's spatially close to a scattered pile of what appear to be pins, suggesting that it's part of the chaotic activity in the depicted scene.
GPT-4o (grade 0, GIoU -0.85): The bowling ball is black and located slightly to the right of the center of the image. It is on the floor in front of a group of people who appear to be crouching or dancing. The ball is near a man who is about to step on it, creating a sense of impending comedic chaos.
| 2026-01-13T09:29:15 |
https://www.linkedin.com/products/categories/email-marketing-software?trk=products_details_guest_other_products_by_org_section_product_link_result-card_subtitle-click#main-content | Best Email Marketing Software | Products | LinkedIn Used by Email Marketing Manager (48) Email Marketing Specialist (44) Marketing Specialist (41) Marketing Manager (37) Sales And Marketing Specialist (20) See all products Find top products in Email Marketing Software category Software used to send commercial electronic mail to potential or current customers. - Deliver and optimize marketing messages (e.g. newsletters, promotions, advertisements, personalized communications) - Design and build customized emails using HTML and CSS or drag-and-drop editors - Create and store opt-in email lists, and manage unsubscribe/subscribe responses - Use reporting and analytics tools to target audiences 277 results Square Marketing Email Marketing Software by Square Use Square email marketing to send automated email campaigns, specialized offers, newsletters, track campaigns, and much more. View product Simple Email Service (SES) Email Marketing Software by Tencent A secure, stable and simple email push service View product Email Email Marketing Software by Netcore Cloud Email Marketing- From engagement to experience, build beautiful email campaigns, choose the right people, and send at the right time. Email API- An email API that supports blazing fast email delivery powered by AI for your critical emails. View product lemlist Email Marketing Software by lemlist Send Cold Emails That Get Replies With lemlist View product Everest Email Marketing Software by Validity Inc.
Validity Everest is the email deliverability platform that provides crucial insights and guidance so you can reach more people, increase engagement, and protect your email performance. View product Litmus Email Marketing Software by Litmus Litmus is how marketers make every send count. It's a core component of your tech stack that allows you to create, personalize, test, review, and analyze every email you send. Too many companies are sending out broken emails without even knowing it, missing the deeper insights needed to really improve results, and struggling to personalize emails at scale. With Litmus, you get a better way to build, test, and trust every outgoing message. That's why over 700,000 marketing professionals, including 80% of the Fortune 100, have turned to us. And it's how we've achieved a customer satisfaction rating of 98%. Backed by Spectrum Equity, Litmus is used by major global brands across every industry that trust its platform to make every send count.™ View product Kasplo Email Marketing Email Marketing Software by Kasplo Kasplo's email marketing makes it simple to design eye-catching emails with its drag-and-drop editor or HTML support, including interactive AMP emails. Send personalized campaigns using advanced segmentation, automate workflows for any industry, and monitor performance with in-depth analytics to keep improving results. View product SHOPLINE SmartPush Email Marketing Software by SHOPLINE SmartPush is an effective and user-friendly marketing tool for EDM and SMS. Leverage customer traffic, reduce costs, improve repurchase rates, drive GMV growth, and strengthen customer-brand connections. SmartPush Can Help You: 1. Unified Data Activation for Maximum Engagement 2. Consistent Customer Experiences Across Channels 3.
Smart and Stress-Free Marketing Automation 4. AI-Driven Personalization with Smart Insights View product MailerLite Email Marketing Software by MailerLite MailerLite is an email marketing platform that helps creators, small businesses, and ecommerce stores grow their audience and boost revenue—effortlessly and on their terms. Capture leads in style: Turn visitors into subscribers with landing pages, forms, and pop-ups. 150+ templates, AI-powered magic content generation, and full customization options make creating a joy. Monetize with 0% commission: Sell paid newsletters and digital products through MailerLite—with 0% commissions. Integrate with your ecommerce store to showcase products, track sales, and drive conversions. Personalize like never before: Use multi-trigger automations, machine learning, unlimited segmentation, and dynamic content to create tailored experiences for every subscriber to maximize engagement. Insights to grow smarter: Total visibility into campaign performance so you know what works. Reports on clicks, open rates, and much more. Start your 30-day free trial. Our support team is ready to help! View product Guest Connect - Email Marketing Email Marketing Software by Simplotel Hotel focussed Email Marketing solution that converts website offers to email campaigns with the single click of a button. View product See more How it works Explore Discover the best product for your need from a growing catalog of 25,000 products and categories trusted by LinkedIn professionals Learn Evaluate new tools, explore trending products in your industry and see who in your network is skilled in the product Grow Join communities of product users to learn best practices, celebrate your progress and accelerate your career LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines English Language | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/categories/relational-database-management-systems?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_subtitle-click | Best Relational Database Management Systems | Products | LinkedIn Used by Database Administrator (8) Chief Technology Officer (8) Chief Information Officer (5) Data Scientist (3) Information Technology Manager (3) See all products Find top products in Relational Database Management Systems category Software used to manage, maintain, and access data stored in structured tables. - Store and organize data in columns and rows - Manage simultaneous access of the same data by multiple users - Use Structured Query Language (SQL) to manipulate and view data - Establish ACID (Atomicity, Consistency, Isolation, Durability) compliance of stored data 70 results Amazon Aurora Relational Database Management Systems by Amazon Web Services (AWS) Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. View product Amazon RDS Relational Database Management Systems by Amazon Web Services (AWS) With Amazon RDS, set up, operate and scale a commercial or open source relational database in the cloud in just a few clicks while automating time-consuming administrative tasks.
View product Google Cloud SQL Relational Database Management Systems by Google Cloud Cloud SQL is a fully managed service that makes it easy to set up, manage, and administer relational databases: PostgreSQL, MySQL, and SQL Server. View product Teradata Vantage Relational Database Management Systems by Teradata Teradata Vantage™ is the most complete cloud analytics and data platform for AI at scale. It combines the scalability of public clouds with the control of private clouds, all on the same portable software stack. Vantage works on public clouds like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud; hybrid multi-cloud setups; on premises; or on commodity hardware with VMware. It emphasizes fit-for-purpose compute connected through a unified data foundation, ensuring optimal performance and cost efficiency while enabling seamless data flow and collaboration across the organization. With no up-front costs and pay-as-you-go pricing, Vantage adapts to emerging needs, especially in the era of generative AI. Teradata Vantage® is the only platform with the massively parallel processing and multidimensional scalability to handle complex data workloads with the ease and flexibility needed to drive faster ROI across your enterprise. View product Cloud Native Database TDSQL-C Relational Database Management Systems by Tencent A cloud native database with high performance, storage capacity, and compatibility View product SQL Server Relational Database Management Systems by Microsoft SQL Server Get the flexibility you need to use integrated solutions, apps, and innovations in technology with your data—wherever it lives—in the cloud, on-premises, or at the edge.
CockroachDB (Cockroach Labs): run business-critical applications across any environment (cloud, on-premises, or hybrid) with global scale and resilience.
- Scale to meet demand: as demand shifts, clusters automatically rebalance to maintain performance
- Deliver always-on experiences: eliminate downtime and protect data with multi-active availability and custom survival goals
- Ensure low-latency access to accurate data across the globe
- Simplify database operations
CockroachDB delivers the reliability, strong consistency, and ACID transactions of an RDBMS while offering the horizontal scaling and distributed architecture of NoSQL, all behind a familiar, compliant SQL interface, and integrates with applications, analytics, streaming, security, logging, and observability systems.

YugabyteDB (Yugabyte): a distributed SQL database with open source at its heart. Join the community and see how users are building applications on YugabyteDB.

SingleStoreDB (SingleStore): a cloud database available on leading public clouds that delivers elastic scalability, high availability, and strong price-performance without the burden of managing data infrastructure. SingleStore scales storage independently of compute and includes Jupyter notebooks, data integration services, and a compute service for AI workloads, so developers, ML engineers, and data engineers can build together quickly.
MariaDB Enterprise Platform (MariaDB): engineered to deliver best-in-class performance, data security, replication, clustering, and high availability for production workloads. It provides an end-to-end database solution with consistent performance and stability for mission-critical applications, letting teams eliminate database sprawl by consolidating workloads in a single MariaDB environment backed by open source technology and enterprise-grade reliability, security, and support. | 2026-01-13T09:29:15
https://opensource.guide/it/how-to-contribute/ | How to Contribute to Open Source | Open Source Guides

Want to contribute to open source? A guide to making open source contributions, for first-timers and for veterans.

Contents: Why contribute to open source? · What it means to contribute · Orienting yourself to a new project · Finding a project to contribute to · How to submit a contribution · What happens after you submit a contribution

Why contribute to open source?

"Working on [freenode] helped me earn many of the skills I later used for my studies in university and my actual job. I think working on open source projects helps me as much as it helps the project!" (@errietta, "Why I love contributing to open source software")

Contributing to open source can be a rewarding way to learn, teach, and build experience in just about any skill you can imagine. Why do people contribute to open source? Plenty of reasons!

Improve software you rely on. Lots of open source contributors start by being users of the software they contribute to. When you find a bug in open source software you use, you may want to look at the source to see if you can fix it yourself. If that's the case, contributing the fix back upstream is the best way to ensure that your friends (and yourself, when you update to the next release) will benefit from it.

Improve existing skills. Whether it's coding, user interface design, graphic design, writing, or organizing, if you're looking for practice, there's an open source project for you.
Meet people who are interested in similar things. Open source projects with warm, welcoming communities keep people coming back for years. Many people form lifelong friendships through their participation in open source, whether it's running into each other at conferences or late-night online chats about burritos.

Find mentors and teach others. Working with others on a shared project means you'll have to explain how you do things, as well as ask other people for help. The acts of learning and teaching can be a fulfilling activity for everyone involved.

Build public artifacts that help you grow a reputation (and a career). By definition, all of your open source work is public, which means you get free examples to take anywhere as a demonstration of what you can do.

Learn people skills. Open source offers opportunities to practice leadership and management skills, such as resolving conflicts, organizing teams of people, and prioritizing work.

It's empowering to be able to make changes, even small ones. You don't have to become a lifelong contributor to enjoy participating in open source. Have you ever seen a typo on a website and wished someone would fix it? On an open source project, you can do just that. Open source helps people feel agency over their lives and how they experience the world, and that in itself is gratifying.

What it means to contribute

If you're a new open source contributor, the process can be confusing. How do you find the right project? What if you don't know how to code? What if something goes wrong? Not to worry! There are all sorts of ways to get involved with an open source project, and a few tips will help you get the most out of your experience.
You don't have to contribute code. A common misconception about contributing to open source is that you need to contribute code. In fact, it's often the other parts of a project that are most neglected or overlooked. You'll do the project a huge favor by offering to pitch in with these types of contributions!

"I'm known for my work on CocoaPods, but most people don't know that I don't actually do any real work on the CocoaPods tool itself. My time on the project is mostly spent doing things like documentation and working on branding." (@orta, "Moving to OSS by default")

Even if you like to write code, other types of contributions are a great way to get involved with a project and meet other community members. Building those relationships will give you opportunities to work on other parts of the project.

Do you like planning events?
- Organize workshops or meetups about the project, like @fzamperin did for NodeSchool
- Organize the project's conference (if it has one)
- Help community members find the right conferences and submit proposals for speaking

Do you like to design?
- Restructure layouts to improve the project's usability
- Conduct user research to reorganize and refine the project's navigation or menus, like Drupal suggests
- Put together a style guide to help the project have a consistent visual design
- Create t-shirt art or a new logo, like hapi.js's contributors did

Do you like to write?
- Write and improve the project's documentation, like @CBID2 did for OpenSauced's docs
- Curate a folder of examples showing how the project is used
- Start a newsletter for the project, or curate highlights from the mailing list, like @opensauced did for their product
- Write tutorials for the project, like PyPA's contributors did
- Write a translation of the project's documentation, like @frontendwizard did for the instructions of freeCodeCamp's CSS Flexbox challenge

"Seriously, [documentation] is mega-important. The documentation so far has been great and has been a killer feature of Babel. There are sections that could certainly use some work, and even adding a paragraph here or there is hugely appreciated." (@kittens, "Call for contributors")

Do you like organizing?
- Link to duplicate issues, and suggest new issue labels, to keep things organized
- Go through open issues and suggest closing old ones, like @nzakas did for ESLint
- Ask clarifying questions on recently opened issues to move the discussion forward

Do you like to code?
- Find an open issue to tackle, like @dianjin did for Leaflet
- Ask if you can help write a new feature
- Automate project setup
- Improve tooling and testing

Do you like helping people?
- Answer questions about the project, for example on Stack Overflow (like this Postgres example) or Reddit
- Answer people's questions on open issues
- Help moderate the discussion boards or chat channels

Do you like helping others code?
- Review code on other people's submissions
- Write tutorials for how a project can be used
- Offer to mentor another contributor, like @ereichert did for @bronzdoc on Rust

You don't just have to work on software projects!
While "open source" often refers to software, you can collaborate on just about anything. There are books, recipes, lists, and classes that get developed as open source projects. For example:
- @sindresorhus curates a list of "awesome" lists
- @h5bp maintains a list of potential interview questions for front-end developer candidates
- @stuartlynn and @nicole-a-tesla made a collection of fun facts about puffins

Even if you're a software developer, working on a documentation project can help you get started in open source. It's often less intimidating to work on projects that don't involve code, and the process of collaboration will build your confidence and experience.

Orienting yourself to a new project

"If you go to an issue tracker and things seem confusing, it's not just you. These tools require a lot of implicit knowledge, but people can help you navigate them and you can ask them questions." (@shaunagm, "How to Contribute to Open Source")

For anything more than a typo fix, contributing to open source is like walking up to a group of strangers at a party. If you start talking about llamas while they were deep in a discussion about goldfish, they'll probably look at you a little strangely. Before jumping in blindly with your own suggestions, start by learning how to read the room. Doing so increases the chances that your ideas will be noticed and heard.

Anatomy of an open source project

Every open source community is different. Spending years on one open source project means you've gotten to know one open source project. Move to a different project, and you might find the vocabulary, norms, and communication styles are completely different. That said, many open source projects follow a similar organizational structure. Understanding the different community roles and the overall process will help you get quickly oriented to any new project.
A typical open source project has the following types of people:
- Author: the person/s or organization that created the project
- Owner: the person/s who has administrative ownership over the organization or repository (not always the same as the original author)
- Maintainers: contributors who are responsible for driving the vision and managing the organizational aspects of the project (they may also be authors or owners of the project)
- Contributors: everyone who has contributed something back to the project
- Community members: people who use the project. They might be active in conversations or express their opinion on the project's direction

Bigger projects may also have subcommittees or working groups focused on different tasks, such as tooling, triage, community moderation, and event organizing. Look on a project's website for a "team" page, or in the repository for governance documentation, to find this information.

A project also has documentation. These files are usually listed in the top level of the repository.
- LICENSE: by definition, every open source project must have an open source license. If the project does not have a license, it is not open source.
- README: the README is the instruction manual that welcomes new community members to the project. It explains why the project is useful and how to get started.
- CONTRIBUTING: whereas READMEs help people use the project, contributing docs help people contribute to it. They explain what types of contributions are needed and how the process works. While not every project has a CONTRIBUTING file, its presence signals that this is a welcoming project to contribute to. A good example of an effective contributing guide is the one from Codecademy's docs repository.
- CODE_OF_CONDUCT: the code of conduct sets ground rules for participants' behavior and helps to facilitate a friendly, welcoming environment. While not every project has a CODE_OF_CONDUCT file, its presence signals that this is a welcoming project to contribute to.
- Other documentation: there might be additional documentation, such as tutorials, walkthroughs, or governance policies, especially on bigger projects like Astro Docs.

Finally, open source projects use the following tools to organize discussion. Reading through the archives will give you a good picture of how the community thinks and works.
- Issue tracker: where people discuss issues related to the project.
- Pull requests: where people discuss and review changes that are in progress, whether it's to improve a contributor's line of code, grammar usage, use of images, etc. Some projects, such as MDN Web Docs, use certain GitHub Actions workflows to automate and speed up their code reviews.
- Discussion forums or mailing lists: some projects use these channels for conversational topics (for example, "How do I…" or "What do you think about…" instead of bug reports or feature requests). Others use the issue tracker for all conversations. A good example is the weekly CHAOSS newsletter.
- Synchronous chat channel: some projects use chat channels (such as Slack or IRC) for casual conversation, collaboration, and quick exchanges. A good example is the EddieHub Discord community.

Finding a project to contribute to

Now that you've figured out how open source projects work, it's time to find a project to contribute to! If you've never contributed to open source before, take some advice from U.S. President John F.
Kennedy, who once said: "Ask not what your country can do for you; ask what you can do for your country." (John F. Kennedy Library)

Contributing to open source happens at all levels, across all projects. You don't need to overthink what exactly your first contribution will be, or how it will look. Instead, start by thinking about the projects you already use, or want to use. The projects you'll actively contribute to are the ones you keep coming back to. Within those projects, whenever you catch yourself thinking that something could be better or different, act on your instinct. Open source isn't an exclusive club; it's made by people just like you. "Open source" is just a fancy term for treating the world's problems as fixable.

You might scan a README and find a broken link or a typo. Or you're a new user and you noticed something is broken, or there's an issue that you think should really be mentioned in the documentation. Instead of ignoring it and moving on, or asking someone else to fix it, see whether you can help out by pitching in. That's what open source is all about!

According to a study by Igor Steinmacher and other computer science researchers, 28% of casual contributions to open source are documentation, such as typo fixes, reformatting, or writing a translation.

If you're looking for existing issues you can fix, every open source project on GitHub has a "/contribute" page that highlights beginner-friendly issues you can start out with. Navigate to the main page of the repository on GitHub and add "/contribute" to the end of the URL (for example, https://github.com/facebook/react/contribute).
You can also use one of the following resources to help you discover and contribute to new projects: GitHub Explore, Open Source Friday, First Timers Only, CodeTriage, 24 Pull Requests, Up For Grabs, First Contributions, SourceSort, OpenSauced.

A checklist before you contribute

When you've found a project you'd like to contribute to, do a quick scan to make sure that the project is suitable for accepting contributions. Otherwise, your hard work may never get a response. Here's a handy checklist to evaluate whether a project is good for new contributors.

Meets the definition of open source:
- Does it have a license? Usually, this is a file called LICENSE in the root of the repository.

Actively accepts contributions. Look at the commit activity on the main branch. On GitHub, you can see this information on the Insights tab of a repository's homepage, for example Virtual-Coffee.
- When was the latest commit?
- How many contributors does the project have?
- How often do people commit? (On GitHub, you can find this by clicking "Commits" in the top bar.)

Next, look at the project's issues:
- How many open issues are there?
- Do maintainers respond quickly to issues when they are opened?
- Is there active discussion on the issues?
- Are the issues recent?
- Are issues getting closed? (On GitHub, click the "Closed" tab on the Issues page to see closed issues.)

Now do the same for the project's pull requests:
- How many open pull requests are there?
- Do maintainers respond quickly to pull requests when they are opened?
- Is there active discussion on the pull requests?
- Are the pull requests recent?
- How recently were any pull requests merged? (On GitHub, click the "Closed" tab on the Pull Requests page to see closed PRs.)
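As a minimal sketch (not part of the original guide), the commit-activity questions in the checklist above can also be answered locally from a clone. The `project` repository below is a stand-in fabricated on the spot, with a single empty commit by a placeholder `demo` author, so the inspection commands have something to report on:

```shell
set -e
# Stand-in for a real clone: a repository with one commit.
git init -q project
cd project
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# When was the latest commit, and what was it?
git log -1 --format="%ci %s"
# Which contributors, ranked by commit count?
git shortlog -sn HEAD
# How many commits in total?
git rev-list --count HEAD
```

On a real project you would run the last three commands inside your clone of that project instead of the fabricated one.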
Project is welcoming. A project that is friendly and welcoming signals that the maintainers will be receptive to new contributors.
- Do the maintainers respond helpfully to questions in issues?
- Are people friendly in the issues, discussion forums, and chat (for example, IRC or Slack)?
- Do pull requests get reviewed?
- Do maintainers thank people for their contributions?

"Whenever you see a long thread, spot check replies from core members coming late in the thread. Are they summarizing constructively, and taking steps to bring the thread to a decision while remaining polite? If you see a lot of flame wars going on, that's often a sign that energy is going into argument instead of into development." (@kfogel, Producing OSS)

How to submit a contribution

You've found a project you like, and you're ready to make a contribution. Finally! Here's how to get your contribution in, the right way.

Communicating effectively

Whether you're a one-time contributor or trying to join a community, working with others is one of the most important skills you'll develop in open source.

"[As a new contributor,] I quickly realized I had to ask questions if I wanted to be able to close the issue. I skimmed through the code base. Once I had some sense of what was going on, I asked for more direction. And voilà! I was able to solve the issue after getting all the relevant details I needed." (@shubheksha, "A Beginner's Very Bumpy Journey Through the World of Open Source")

Before you open an issue or pull request, or ask a question in chat, keep these points in mind to help your ideas come across effectively.

Give context. Help others get quickly up to speed. If you're running into an error, explain what you're trying to do and how to reproduce it.
If you're suggesting a new idea, explain why you think it would be useful to the project (not just to you!).
😇 "X doesn't happen when I do Y"
😢 "X is broken! Please fix it."

Do your homework beforehand. It's OK not to know things, but show that you tried. Before asking for help, be sure to check the project's README, documentation, issues (open and closed), and mailing list, and to search the internet for an answer. People will appreciate it when you demonstrate that you're trying to learn.
😇 "I'm not sure how to implement X. I checked the help docs and didn't find any mentions."
😢 "How do I X?"

Keep requests short and direct. Much like sending an email, every contribution, no matter how simple or helpful, requires someone else's review. Many projects have more incoming requests than people available to help. Keep it short. You'll increase the chance that someone will be able to help you.
😇 "I'd like to write an API tutorial."
😢 "I was driving down the highway the other day and stopped for gas, and then I had this amazing idea for something we should be doing, but before I explain that, let me show you…"

Keep all communication public. Although it's tempting, don't reach out to maintainers privately unless you need to share sensitive information (such as a security issue or a serious conduct violation). When you keep the conversation public, more people can learn and benefit from your exchange. Discussions can be, in themselves, contributions.
😇 (as a comment) "@-maintainer Hi there! How should we proceed on this PR?"
😢 (as an email) "Hey there, sorry to bother you over email, but I was wondering if you've had a chance to review my PR"

It's OK to ask questions (but be patient!). Everybody was new to the project at some point, and even experienced contributors need to get up to speed when they look at a new project.
By the same token, even longtime maintainers are not always familiar with every part of the project. Show them the same patience that you'd want them to show you.
😇 "Thanks for looking into this error. I followed your suggestions. Here's the output."
😢 "Why can't you fix my problem? Isn't this your project?"

Respect community decisions. Your ideas may differ from the community's priorities or vision. They may offer feedback or decide not to pursue your idea. While you should discuss and look for compromise, maintainers have to live with your decision longer than you will. If you disagree with their direction, you can always work on your own fork or start your own project.
😇 "I'm disappointed you can't support my use case, but as you've explained it only affects a minor portion of users, I understand why. Thanks for listening."
😢 "Why won't you support my use case? This is unacceptable!"

Above all, keep it classy. Open source is made up of contributors from all over the world. Context gets lost across languages, cultures, geographies, and time zones. In addition, written communication makes it harder to convey tone or mood. Assume good intentions in these conversations. It's fine to politely push back on an idea, ask for more context, or further clarify your position. Just try to leave the internet a better place than when you found it.

Gather context. Before doing anything, do a quick check to make sure your idea hasn't been discussed elsewhere. Skim the project's README, issues (open and closed), mailing list, and Stack Overflow. You don't have to spend hours going through everything, but a quick search for a few key terms goes a long way. If you can't find your idea elsewhere, you're ready to make a move.
If the project is on GitHub, you'll likely communicate in the following ways:
- Raising an issue: this is like starting a conversation or discussion
- Opening a pull request: this is for starting work on a solution
- Communication channels: if the project has a designated Discord, IRC, or Slack channel, consider starting a conversation or asking for clarification about your contribution there

Before you open an issue or pull request, check the project's contributing docs (usually a file called CONTRIBUTING, or a section of the README) to see whether you need to include anything specific. For example, they may ask that you follow a template, or require that you use tests.

If you want to make a substantial contribution, open an issue to ask before working on it. It's helpful to watch the project for a while (on GitHub, you can click "Watch" to be notified of all conversations) and get to know community members before doing work that might not be accepted.

"You'll learn a ton from taking a single project you actively use, 'watching' it on GitHub and reading every issue and PR." (@gaearon, on joining projects)

Opening an issue

You should usually open an issue in the following situations:
- Report an error you can't solve yourself
- Discuss a high-level topic or idea (for example, community, vision, or policies)
- Propose a new feature or other project idea

Tips for communicating on issues:
- If you see an open issue that you want to tackle, comment on the issue to let people know you're on it. That way, people are less likely to duplicate your work.
- If an issue was opened a while ago, it's possible that it's being addressed somewhere else, or has already been resolved, so comment to ask for confirmation before starting work.
- If you opened an issue but figured out the answer later on your own, comment on the issue to let people know, then close the issue. Even documenting that outcome is a contribution to the project.

Opening a pull request

You should usually open a pull request in the following situations:
- Submit trivial fixes such as typos, broken links, or obvious errors
- Start work on a contribution that was already asked for, or that you've already discussed, in an issue

A pull request doesn't have to represent finished work. It's usually better to open a pull request early on, so others can watch or give feedback on your progress. Just open it as a "draft", or mark it as "WIP" (work in progress) in the subject line or in a "Notes for reviewers" section if one is provided (or create your own, like this: **## Notes for reviewers**). You can always add more commits later.

If the project is on GitHub, here's how to submit a pull request:
- Fork the repository and clone it locally. Connect your local copy to the original "upstream" repository by adding it as a remote. Pull in changes from upstream often so that you stay up to date, and so that merge conflicts will be less likely when you submit your pull request. (See more detailed instructions here.)
- Create a branch for your edits.
- Reference any relevant issues or supporting documentation in your PR (for example, "Closes #37.").
- Include before-and-after screenshots if your changes include differences in HTML/CSS. Drag and drop the images into the body of your pull request.
- Test your changes! Run your changes against any existing tests, if they exist, and create new ones when needed. It's important to make sure your changes don't break the existing project.
- Contribute in the style of the project to the best of your abilities.
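The fork, remote, pull, and branch steps described above can be sketched end to end with git. This is a hedged illustration, not the guide's own example: `upstream-repo`, `my-fork`, and the `fix-typo` branch are placeholder names, and a local directory stands in for the GitHub repository you would normally fork and clone over the network:

```shell
set -e
# Stand-in for the original GitHub repository you would fork.
git init -q upstream-repo
git -C upstream-repo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# 1. Fork the repository and clone it locally.
git clone -q upstream-repo my-fork
cd my-fork

# 2. Connect your clone to the original repository by adding it as a remote.
git remote add upstream ../upstream-repo

# 3. Pull in changes from upstream often to keep merge conflicts unlikely.
git pull -q upstream HEAD

# 4. Create a branch for your edits; commit, push, and open the PR from here.
git checkout -q -b fix-typo
git branch --show-current
```

With a real project, step 1 uses the "Fork" button on GitHub plus `git clone` of your fork's URL, and the final step is `git push origin fix-typo` followed by opening the pull request in the web interface.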
This may mean using indentation, semicolons, or comments differently than you would in your own repository, but it makes it easier for the maintainer to merge, and for others to understand and maintain the change in the future.

If this is your first pull request, check out Make a Pull Request, which @kentcdodds created as a walkthrough video tutorial. You can also practice making a pull request in the First Contributions repository, created by @Roshanjossey.

What happens after you submit your contribution

Before you start celebrating: after you submit your contribution, one of the following will happen.

😭 You don't get a response. Hopefully you checked the project for signs of activity before making your contribution. Even on an active project, however, it's possible that your contribution won't get a response. If you haven't gotten a response in over a week, it's fair to politely respond in that same thread, asking someone for a review. If you know the handle of the right person to review your contribution, you can @-mention them in that thread. Don't reach out to that person privately; remember that public communication is vital to open source projects. If you make a polite bump and still nobody responds, it's possible that nobody will respond, ever. It's not a great feeling, but don't let it discourage you! 😄 There are many possible reasons why you didn't get a response, including personal circumstances that may be out of your control. Try to find another project, or another way to contribute. If anything, this is a good reason not to invest too much time in a contribution before other community members are engaged and responsive.

🚧 Somebody requests changes to your contribution. It's common that you'll be asked to make changes to your contribution, whether that's feedback on the scope of your idea or changes to your code.
When someone requests changes, be responsive. They've taken the time to review your contribution. Opening a PR and walking away is bad form. If you don't know how to make the changes, research the problem, then ask for help if you need it. A good example of this is the feedback another contributor gave @a-m-lamb on their pull request to Codecademy's docs.

If you don't have time to work on the issue anymore (for example, because the conversation has been going on for months and your circumstances have changed, or you can't find a solution), let the maintainer know so they can open the issue up to someone else, as @RitaDee did for an issue in OpenSauced's applications.

👎 Your contribution doesn't get accepted. Your contribution may or may not be accepted in the end. Hopefully you didn't put too much work into it already. If you're not sure why it wasn't accepted, it's perfectly reasonable to ask the maintainer for feedback and clarification. Ultimately, however, you'll need to respect that this is their decision. Don't argue or get hostile. You're always welcome to fork and work on your own version if you disagree!

🎉 Your contribution gets accepted. Hooray! You've successfully made an open source contribution! You did it! 🎉

Whether you just made your first open source contribution or you're looking for new ways to contribute, we hope you're inspired to take action. Even if your contribution wasn't accepted, don't forget to say thanks when a maintainer has put effort into helping you. Open source is made by people like you: one issue, pull request, comment, and high-five at a time.

Back to all guides

Related guides: Starting an Open Source Project: learn more about the world of open source and get ready to launch your own project.
Building Welcoming Communities Build a community that encourages people to use, contribute to, and evangelize your project. | 2026-01-13T09:29:15
https://www.linkedin.com/posts/digitaleconomycouncil_looking-ahead-to-2026-deca-a-has-a-full-activity-7414450454530543616-uA6N | Looking ahead to 2026 DECA has a full program of events across policy, industry and international engagement planned for 2026. The year ahead is focused on execution and delivery. Practical… | Digital Economy Council of Australia (DECA) Digital Economy Council of Australia (DECA)’s Post Digital Economy Council of Australia (DECA) 16,292 followers 6d Looking ahead to 2026 DECA has a full program of events across policy, industry and international engagement planned for 2026. The year ahead is focused on execution and delivery. Practical policy work. Industry coordination. International engagement. And outcomes that support Australia’s digital asset and digital infrastructure ecosystem. Across the year, DECA will convene government, regulators, industry and international counterparts through forums, summits, roundtables and conferences designed to inform, connect and deliver tangible outcomes. Our 2026 events calendar is now live. 🔗 View the full schedule: https://bit.ly/45FVNsR We look forward to working with members, partners and stakeholders throughout the year ahead.
| 2026-01-13T09:29:15
https://www.linkedin.com/signup/cold-join?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fcompany%2Fdigitaleconomycouncil&trk=organization_guest_main-feed-card_social-actions-comments | Sign Up | LinkedIn | 2026-01-13T09:29:15
https://www.linkedin.com/legal/privacy-policy?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2FshareArticle%3Fmini%3Dtrue%26url%3Dhttps%253A%252F%252Fdev%2Eto%252Faditya_singh_172b37651201%252F30-core-algorithm-ep-06-prefix-sum-1ii0%26title%3D30%2520Core%2520Algorithm%2520%253A%2520Ep-06%2520%253APrefix%2520Sum%26summary%3DWhy%2520Prefix%2520Sum%2520Is%2520Really%2520About%2520Making%2520Accumulated%2520Cost%2520Explicit%2520%2520%2520Many%2520performance%2E%2E%2E%26source%3DDEV%2520Community&trk=registration-frontend_join-form-privacy-policy | LinkedIn Privacy Policy User Agreement Summary of User Agreement Privacy Policy Professional Community Policies Cookie Policy Copyright Policy Regional Info EU Notice California Privacy Disclosure U.S. State Privacy Laws Privacy Policy Effective November 3, 2025 Your Privacy Matters LinkedIn’s mission is to connect the world’s professionals to allow them to be more productive and successful. Central to this mission is our commitment to be transparent about the data we collect about you, how it is used and with whom it is shared. This Privacy Policy applies when you use our Services (described below). We offer our users choices about the data we collect, use and share as described in this Privacy Policy, Cookie Policy , Settings and our Help Center. Key Terms Choices Settings are available to Members of LinkedIn and Visitors are provided separate controls. Learn More . Table of Contents Data We Collect How We Use Your Data How We Share Information Your Choices and Obligations Other Important Information Introduction We are a social network and online platform for professionals. People use our Services to find and be found for business opportunities, to connect with others and find information.
Our Privacy Policy applies to any Member or Visitor to our Services. Our registered users (“Members”) share their professional identities, engage with their network, exchange knowledge and professional insights, post and view relevant content, learn and develop skills, and find business and career opportunities. Content and data on some of our Services is viewable to non-Members (“Visitors”). We use the term “Designated Countries” to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. Members and Visitors located in the Designated Countries or the UK can review additional information in our European Regional Privacy Notice . Services This Privacy Policy, including our Cookie Policy applies to your use of our Services. This Privacy Policy applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-branded sites, apps, communications and services offered by LinkedIn (“Services”), including off-site Services, such as our ad services and the “Apply with LinkedIn” and “Share with LinkedIn” plugins, but excluding services that state that they are offered under a different privacy policy. For California residents, additional disclosures required by California law may be found in our California Privacy Disclosure . Data Controllers and Contracting Parties If you are in the “Designated Countries”, LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you are outside of the Designated Countries, LinkedIn Corporation will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services. As a Visitor or Member of our Services, the collection, use and sharing of your personal data is subject to this Privacy Policy and other documents referenced in this Privacy Policy, as well as updates. 
Change Changes to the Privacy Policy apply to your use of our Services after the “effective date.” LinkedIn (“we” or “us”) can modify this Privacy Policy, and if we make material changes to it, we will provide notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective. If you object to any changes, you may close your account. You acknowledge that your continued use of our Services after we publish or send a notice about our changes to this Privacy Policy means that the collection, use and sharing of your personal data is subject to the updated Privacy Policy, as of its effective date. 1. Data We Collect 1.1 Data You Provide To Us You provide data to create an account with us. Registration To create an account you need to provide data including your name, email address and/or mobile number, general location (e.g., city), and a password. If you register for a premium Service, you will need to provide payment (e.g., credit card) and billing information. You create your LinkedIn profile (a complete profile helps you get the most from our Services). Profile You have choices about the information on your profile, such as your education, work experience, skills, photo, city or area , endorsements, and optional verifications of information on your profile (such as verifications of your identity or workplace). You don’t have to provide additional information on your profile; however, profile information helps you to get more from our Services, including helping recruiters and business opportunities find you. It’s your choice whether to include sensitive information on your profile and to make that sensitive information public. Please do not post or add personal data to your profile that you would not want to be publicly available. You may give other data to us, such as by syncing your calendar. 
Posting and Uploading We collect personal data from you when you provide, post or upload it to our Services, such as when you fill out a form, (e.g., with demographic data or salary), respond to a survey, or submit a resume or fill out a job application on our Services. If you sync your calendars with our Services, we will collect your calendar meeting information to keep growing your network by suggesting connections for you and others, and by providing information about events, e.g. times, places, attendees and contacts. You don’t have to post or upload personal data; though if you don’t, it may limit your ability to grow and engage with your network over our Services. 1.2 Data From Others Others may post or write about you. Content and News You and others may post content that includes information about you (as part of articles, posts, comments, videos) on our Services. We also may collect public information about you, such as professional-related news and accomplishments, and make it available as part of our Services, including, as permitted by your settings, in notifications to others of mentions in the news . Others may sync their calendar with our Services Contact and Calendar Information We receive personal data (including contact information) about you when others import or sync their calendar with our Services, associate their contacts with Member profiles, scan and upload business cards, or send messages using our Services (including invites or connection requests). If you or others opt-in to sync email accounts with our Services, we will also collect “email header” information that we can associate with Member profiles. Customers and partners may provide data to us. Partners We receive personal data (e.g., your job title and work email address) about you when you use the services of our customers and partners, such as employers or prospective employers and applicant tracking systems providing us job application data. 
Related Companies and Other Services We receive data about you when you use some of the other services provided by us or our Affiliates , including Microsoft. For example, you may choose to send us information about your contacts in Microsoft apps and services, such as Outlook, for improved professional networking activities on our Services or we may receive information from Microsoft about your engagement with their sites and services. 1.3 Service Use We log your visits and use of our Services, including mobile apps. We log usage data when you visit or otherwise use our Services, including our sites, app and platform technology, such as when you view or click on content (e.g., learning video) or ads (on or off our sites and apps), perform a search, install or update one of our mobile apps, share articles or apply for jobs. We use log-ins, cookies, device information and internet protocol (“IP”) addresses to identify you and log your use. 1.4 Cookies and Similar Technologies We collect data through cookies and similar technologies. As further described in our Cookie Policy , we use cookies and similar technologies (e.g., pixels and ad tags) to collect data (e.g., device IDs) to recognize you and your device(s) on, off and across different services and devices where you have engaged with our Services. We also allow some others to use cookies as described in our Cookie Policy. If you are outside the Designated Countries, we also collect (or rely on others, including Microsoft, who collect) information about your device where you have not engaged with our Services (e.g., ad ID, IP address, operating system and browser information) so we can provide our Members with relevant ads and better understand their effectiveness. Learn more . You can opt out from our use of data from cookies and similar technologies that track your behavior on the sites of others for ad targeting and other ad-related purposes. For Visitors, the controls are here . 
1.5 Your Device and Location We receive data through cookies and similar technologies When you visit or leave our Services (including some plugins and our cookies or similar technology on the sites of others), we receive the URL of both the site you came from and the one you go to and the time of your visit. We also get information about your network and device (e.g., IP address, proxy server, operating system, web browser and add-ons, device identifier and features, cookie IDs and/or ISP, or your mobile carrier). If you use our Services from a mobile device, that device will send us data about your location based on your phone settings. We will ask you to opt-in before we use GPS or other tools to identify your precise location. 1.6 Communications If you communicate through our Services, we learn about that. We collect information about you when you communicate with others through our Services (e.g., when you send, receive, or engage with messages, events, or connection requests, including our marketing communications). This may include information that indicates who you are communicating with and when. We also use automated systems to support and protect our site. For example, we use such systems to suggest possible responses to messages and to manage or block content that violates our User Agreement or Professional Community Policies . 1.7 Workplace and School Provided Information When your organization (e.g., employer or school) buys a premium Service for you to use, they give us data about you. Others buying our Services for your use, such as your employer or your school, provide us with personal data about you and your eligibility to use the Services that they purchase for use by their workers, students or alumni. For example, we will get contact information for “ LinkedIn Page ” (formerly Company Page) administrators and for authorizing users of our premium Services, such as our recruiting, sales or learning products. 
1.8 Sites and Services of Others We get data when you visit sites that include our ads, cookies or plugins or when you log-in to others’ services with your LinkedIn account. We receive information about your visits and interaction with services provided by others when you log-in with LinkedIn or visit others’ services that include some of our plugins (such as “Apply with LinkedIn”) or our ads, cookies or similar technologies. 1.9 Other We are improving our Services, which means we get new data and create new ways to use data. Our Services are dynamic, and we often introduce new features, which may require the collection of new information. If we collect materially different personal data or materially change how we collect, use or share your data, we will notify you and may also modify this Privacy Policy. Key Terms Affiliates Affiliates are companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., GitHub, Inc.). 2. How We Use Your Data We use your data to provide, support, personalize and develop our Services. How we use your personal data will depend on which Services you use, how you use those Services and the choices you make in your settings . We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others. You can review LinkedIn's Responsible AI principles here and learn more about our approach to generative AI here . Learn more about the inferences we may make, including as to your age and gender and how we use them. 
2.1 Services Our Services help you connect with others, find and be found for work and business opportunities, stay informed, get training and be more productive. We use your data to authorize access to our Services and honor your settings. Stay Connected Our Services allow you to stay in touch and up to date with colleagues, partners, clients, and other professional contacts. To do so, you can “connect” with the professionals who you choose, and who also wish to “connect” with you. Subject to your and their settings , when you connect with other Members, you will be able to search each other’s connections in order to exchange professional opportunities. We use data about you (such as your profile, profiles you have viewed or data provided through address book uploads or partner integrations) to help others find your profile, suggest connections for you and others (e.g. Members who share your contacts or job experiences) and enable you to invite others to become a Member and connect with you. You can also opt-in to allow us to use your precise location or proximity to others for certain tasks (e.g. to suggest other nearby Members for you to connect with, calculate the commute to a new job, or notify your connections that you are at a professional event). It is your choice whether to invite someone to our Services, send a connection request, or allow another Member to become your connection. When you invite someone to connect with you, your invitation will include your network and basic profile information (e.g., name, profile photo, job title, region). We will send invitation reminders to the person you invited. You can choose whether or not to share your own list of connections with your connections. Visitors have choices about how we use their data. Stay Informed Our Services allow you to stay informed about news, events and ideas regarding professional topics you care about, and from professionals you respect.
Our Services also allow you to improve your professional skills, or learn new ones. We use the data we have about you (e.g., data you provide, data we collect from your engagement with our Services and inferences we make from the data we have about you), to personalize our Services for you, such as by recommending or ranking relevant content and conversations on our Services. We also use the data we have about you to suggest skills you could add to your profile and skills that you might need to pursue your next opportunity. So, if you let us know that you are interested in a new skill (e.g., by watching a learning video), we will use this information to personalize content in your feed, suggest that you follow certain Members on our site, or suggest related learning content to help you towards that new skill. We use your content, activity and other data, including your name and photo, to provide notices to your network and others. For example, subject to your settings , we may notify others that you have updated your profile, posted content, took a social action , used a feature, made new connections or were mentioned in the news . Career Our Services allow you to explore careers, evaluate educational opportunities, and seek out, and be found for, career opportunities. Your profile can be found by those looking to hire (for a job or a specific task ) or be hired by you. We will use your data to recommend jobs and show you and others relevant professional contacts (e.g., who work at a company, in an industry, function or location or have certain skills and connections). You can signal that you are interested in changing jobs and share information with recruiters. We will use your data to recommend jobs to you and you to recruiters. We may use automated systems to provide content and recommendations to help make our Services more relevant to our Members, Visitors and customers. 
Keeping your profile accurate and up-to-date may help you better connect to others and to opportunities through our Services. Productivity Our Services allow you to collaborate with colleagues, search for potential clients, customers, partners and others to do business with. Our Services allow you to communicate with other Members and schedule and prepare meetings with them. If your settings allow, we scan messages to provide “bots” or similar tools that facilitate tasks such as scheduling meetings, drafting responses, summarizing messages or recommending next steps. Learn more . 2.2 Premium Services Our premium Services help paying users to search for and contact Members through our Services, such as searching for and contacting job candidates, sales leads and co-workers, manage talent and promote content. We sell premium Services that provide our customers and subscribers with customized-search functionality and tools (including messaging and activity alerts) as part of our talent, marketing and sales solutions. Customers can export limited information from your profile, such as name, headline, current company, current title, and general location (e.g., Dublin), such as to manage sales leads or talent, unless you opt-out . We do not provide contact information to customers as part of these premium Services without your consent. Premium Services customers can store information they have about you in our premium Services, such as a resume or contact information or sales history. The data stored about you by these customers is subject to the policies of those customers. Other enterprise Services and features that use your data include TeamLink and LinkedIn Pages (e.g., content analytics and followers). 2.3 Communications We contact you and enable communications between Members. We offer settings to control what messages you receive and how often you receive some types of messages. 
We will contact you through email, mobile phone, notices posted on our websites or apps, messages to your LinkedIn inbox, and other ways through our Services, including text messages and push notifications. We will send you messages about the availability of our Services, security, or other service-related issues. We also send messages about how to use our Services, network updates, reminders, job suggestions and promotional messages from us and our partners. You may change your communication preferences at any time. Please be aware that you cannot opt out of receiving service messages from us, including security and legal notices. We also enable communications between you and others through our Services, including for example invitations , InMail , groups and messages between connections. 2.4 Advertising We serve you tailored ads both on and off our Services. We offer you choices regarding personalized ads, but you cannot opt-out of seeing non-personalized ads. We target (and measure the performance of) ads to Members, Visitors and others both on and off our Services directly or through a variety of partners, using the following data, whether separately or combined: Data collected by advertising technologies on and off our Services using pixels, ad tags (e.g., when an advertiser installs a LinkedIn tag on their website), cookies, and other device identifiers; Member-provided information (e.g., profile, contact information, title and industry); Data from your use of our Services (e.g., search history, feed, content you read, who you follow or is following you, connections, groups participation, page visits, videos you watch, clicking on an ad, etc.), including as described in Section 1.3; Information from advertising partners , vendors and publishers ; and Information inferred from data described above (e.g., using job titles from a profile to infer industry, seniority, and compensation bracket; using graduation dates to infer age or using first names or pronoun 
usage to infer gender; using your feed activity to infer your interests; or using device data to recognize you as a Member). Learn more about the inferences we make and how they may be used for advertising. Learn more about the ad technologies we use and our advertising services and partners. You can learn more about our compliance with laws in the Designated Countries or the UK in our European Regional Privacy Notice . We will show you ads called sponsored content which look similar to non-sponsored content, except that they are labeled as advertising (e.g., as “ad” or “sponsored”). If you take a social action (such as like, comment or share) on these ads, your action is associated with your name and viewable by others, including the advertiser. Subject to your settings , if you take a social action on the LinkedIn Services, that action may be mentioned with related ads. For example, when you like a company we may include your name and photo when their sponsored content is shown. Ad Choices You have choices regarding our uses of certain categories of data to show you more relevant ads. Member settings can be found here . For Visitors, the setting is here . Info to Ad Providers We do not share your personal data with any non-Affiliated third-party advertisers or ad networks except for: (i) hashed IDs or device identifiers (to the extent they are personal data in some countries); (ii) with your separate permission (e.g., in a lead generation form) or (iii) data already visible to any users of the Services (e.g., profile). However, if you view or click on an ad on or off our Services, the ad provider will get a signal that someone visited the page that displayed the ad, and they may, through the use of mechanisms such as cookies, determine it is you. Advertising partners can associate personal data collected by the advertiser directly from you with hashed IDs or device identifiers received from us. 
We seek to contractually require such advertising partners to obtain your explicit, opt-in consent before doing so where legally required, and in such instances, we take steps to ensure that consent has been provided before processing data from them. 2.5 Marketing We promote our Services to you and others. In addition to advertising our Services, we use Members’ data and content for invitations and communications promoting membership and network growth, engagement and our Services, such as by showing your connections that you have used a feature on our Services. 2.6 Developing Services and Research We develop our Services and conduct research Service Development We use data, including public feedback, to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity. Other Research We seek to create economic opportunity for Members of the global workforce and to help them be more productive and successful. We use the personal data available to us to research social, economic and workplace trends, such as jobs availability and skills needed for these jobs and policies that help bridge the gap in various industries and geographic areas. In some cases, we work with trusted third parties to perform this research, under controls that are designed to protect your privacy. We may also make public data available to researchers to enable assessment of the safety and legal compliance of our Services. We publish or allow others to publish economic insights, presented as aggregated data rather than personal data. Surveys Polls and surveys are conducted by us and others through our Services. You are not obligated to respond to polls or surveys, and you have choices about the information you provide. You may opt-out of survey invitations. 
2.7 Customer Support We use data to help you and fix problems. We use data (which can include your communications) to investigate, respond to and resolve complaints and for Service issues (e.g., bugs). 2.8 Insights That Do Not Identify You We use data to generate insights that do not identify you. We use your data to perform analytics to produce and share insights that do not identify you. For example, we may use your data to generate statistics about our Members, their profession or industry, to calculate ad impressions served or clicked on (e.g., for basic business reporting to support billing and budget management or, subject to your settings , for reports to advertisers who may use them to inform their advertising campaigns), to show Members' information about engagement with a post or LinkedIn Page , to publish visitor demographics for a Service or create demographic workforce insights, or to understand usage of our services. 2.9 Security and Investigations We use data for security, fraud prevention and investigations. We and our Affiliates, including Microsoft, may use your data (including your communications) for security purposes or to prevent or investigate possible fraud or other violations of the law, our User Agreement and/or attempts to harm our Members, Visitors, company, Affiliates, or others. Key Terms Social Action E.g. like, comment, follow, share Partners Partners include ad networks, exchanges and others 3. How We Share Information 3.1 Our Services Any data that you include on your profile and any content you post or social action (e.g., likes, follows, comments, shares) you take on our Services will be seen by others, consistent with your settings. Profile Your profile is fully visible to all Members and customers of our Services. Subject to your settings , it can also be visible to others on or off of our Services (e.g., Visitors to our Services or users of third-party search tools). 
As detailed in our Help Center, your settings, your degree of connection with the viewing Member, the subscriptions they may have, their usage of our Services, access channels and search types (e.g., by name or by keyword) affect the availability of your profile and whether they can view certain fields in your profile.

Posts, Likes, Follows, Comments, Messages
Our Services allow viewing and sharing information, including through posts, likes, follows and comments.
When you share an article or a post (e.g., an update, image, video or article) publicly, it can be viewed by everyone and re-shared anywhere (subject to your settings). Members, Visitors and others will be able to find and see your publicly-shared content, including your name (and photo if you have provided one).
In a group, posts are visible to others according to group type. For example, posts in private groups are visible to others in the group, and posts in public groups are visible publicly. Your membership in groups is public and part of your profile, but you can change visibility in your settings.
Any information you share through companies’ or other organizations’ pages on our Services will be viewable by those organizations and others who view those pages’ content.
When you follow a person or organization, you are visible to others and to that “page owner” as a follower.
We let senders know when you act on their message, subject to your settings where applicable. Subject to your settings, we let a Member know when you view their profile. We also give you choices about letting organizations know when you’ve viewed their Page.
When you like, re-share or comment on another’s content (including ads), others will be able to view these “social actions” and associate them with you (e.g., your name, profile and photo if you provided it).
Your employer can see how you use Services they provided for your work (e.g., as a recruiter or sales agent) and related information.
We will not show them your job searches or personal messages.

Enterprise Accounts
Your employer may offer you access to our enterprise Services such as Recruiter, Sales Navigator, LinkedIn Learning or our advertising Campaign Manager. Your employer can review and manage your use of such enterprise Services. Depending on the enterprise Service, before you use such Service, we will ask for permission to share with your employer relevant data from your profile or your use of our non-enterprise Services. For example, users of Sales Navigator will be asked to share their “social selling index”, a score calculated in part based on their personal account activity. We understand that certain activities such as job hunting and personal messages are sensitive, and so we do not share those with your employer unless you choose to share them with your employer through our Services (for example, by applying for a new position in the same company or mentioning your job hunting in a message to a co-worker through our Services). Subject to your settings, when you use workplace tools and services (e.g., interactive employee directory tools), certain of your data may also be made available to your employer or be connected with information we receive from your employer to enable these tools and services.

3.2 Communication Archival
Regulated Members may need to store communications outside of our Service.
Some Members (or their employers) need, for legal or professional compliance, to archive their communications and social media activity, and will use services of others to provide these archival services. We enable archiving of messages by and to those Members outside of our Services. For example, a financial advisor needs to archive communications with her clients through our Services in order to maintain her professional financial advisor license.
3.3 Others’ Services
You may link your account with others’ services so that they can look up your contacts’ profiles, post your shares on such platforms, or enable you to start conversations with your connections on such platforms. Excerpts from your profile will also appear on the services of others.
Subject to your settings, other services may look up your profile. When you opt to link your account with other services, personal data (e.g., your name, title, and company) will become available to them. The sharing and use of that personal data will be described in, or linked to, a consent screen when you opt to link the accounts. For example, you may link your Twitter or WeChat account to share content from our Services into these other services, or your email provider may give you the option to upload your LinkedIn contacts into its own service. Third-party services have their own privacy policies, and you may be giving them permission to use your data in ways we would not. You may revoke the link with such accounts.
The information you make available to others in our Services (e.g., information from your profile, your posts, your engagement with posts, or messages to Pages) may be available to them on other services. For example, search tools, mail and calendar applications, or talent and lead managers may show a user limited profile data (subject to your settings), and social media management tools or other platforms may display your posts. The information retained on these services may not reflect updates you make on LinkedIn.

3.4 Related Services
We share your data across our different Services and LinkedIn-affiliated entities.
We will share your personal data with our Affiliates to provide and develop our Services. For example, we may refer a query to Bing in some instances, such as where you’d benefit from a more up-to-date response in a chat experience.
Subject to our European Regional Privacy Notice, we may also share with our Affiliates, including Microsoft, your (1) publicly-shared content (such as your public LinkedIn posts) to provide or develop their services and (2) personal data to improve, provide or develop their advertising services. Where allowed, we may combine information internally across the different Services covered by this Privacy Policy to help our Services be more relevant and useful to you and others. For example, we may personalize your feed or job recommendations based on your learning history.

3.5 Service Providers
We may use others to help us with our Services.
We use others to help us provide our Services (e.g., maintenance, analysis, audit, payments, fraud detection, customer support, marketing and development). They will have access to your information (e.g., the contents of a customer support request) as reasonably necessary to perform these tasks on our behalf and are obligated not to disclose or use it for other purposes. If you purchase a Service from us, we may use a payments service provider who may separately collect information about you (e.g., for fraud prevention or to comply with legal obligations).

3.6 Legal Disclosures
We may need to share your data when we believe it’s required by law or to help protect the rights and safety of you, us or others.
It is possible that we will need to disclose information about you when required by law, subpoena, or other legal process, or if we have a good-faith belief that disclosure is reasonably necessary to (1) investigate, prevent or take action regarding suspected or actual illegal activities or to assist government enforcement agencies; (2) enforce our agreements with you; (3) investigate and defend ourselves against any third-party claims or allegations; (4) protect the security or integrity of our Services or the products or services of our Affiliates (such as by sharing with companies facing similar threats); or (5) exercise or protect the rights and safety of LinkedIn, our Members, personnel or others. We attempt to notify Members about legal demands for their personal data when appropriate in our judgment, unless prohibited by law or court order or when the request is an emergency. We may dispute such demands when we believe, in our discretion, that the requests are overbroad, vague or lack proper authority, but we do not promise to challenge every demand. To learn more, see our Data Request Guidelines and Transparency Report.

3.7 Change in Control or Sale
We may share your data when our business is sold to others, but it must continue to be used in accordance with this Privacy Policy.
We can also share your personal data as part of a sale, merger or change in control, or in preparation for any of these events. Any other entity which buys us or part of our business will have the right to continue to use your data, but only in the manner set out in this Privacy Policy unless you agree otherwise.

4. Your Choices & Obligations

4.1 Data Retention
We keep most of your personal data for as long as your account is open.
We generally retain your personal data as long as you keep your account open or as needed to provide you Services. This includes data you or others provided to us and data generated or inferred from your use of our Services.
Even if you only use our Services when looking for a new job every few years, we will retain your information and keep your profile open unless you close your account. In some cases we choose to retain certain information (e.g., insights about Services use) in a depersonalized or aggregated form.

4.2 Rights to Access and Control Your Personal Data
You can access or delete your personal data. You have many choices about how your data is collected, used and shared.
We provide many choices about the collection, use and sharing of your data, from deleting or correcting data you include in your profile and controlling the visibility of your posts to advertising opt-outs and communication controls. We offer you settings to control and manage the personal data we have about you. For personal data that we have about you, you can:
- Delete Data: You can ask us to erase or delete all or some of your personal data (e.g., if it is no longer necessary to provide Services to you).
- Change or Correct Data: You can edit some of your personal data through your account. You can also ask us to change, update or fix your data in certain cases, particularly if it’s inaccurate.
- Object to, or Limit or Restrict, Use of Data: You can ask us to stop using all or some of your personal data (e.g., if we have no legal right to keep using it) or to limit our use of it (e.g., if your personal data is inaccurate or unlawfully held).
- Right to Access and/or Take Your Data: You can ask us for a copy of your personal data and can ask for a copy of personal data you provided in machine-readable form.
Visitors can learn more about how to make these requests here. You may also contact us using the contact information below, and we will consider your request in accordance with applicable laws. Residents in the Designated Countries and the UK, and other regions, may have additional rights under their laws.

4.3 Account Closure
We keep some of your data even after you close your account.
If you choose to close your LinkedIn account, your personal data will generally stop being visible to others on our Services within 24 hours. We generally delete closed account information within 30 days of account closure, except as noted below.
We retain your personal data even after you have closed your account if reasonably necessary to comply with our legal obligations (including law enforcement requests), meet regulatory requirements, resolve disputes, maintain security, prevent fraud and abuse (e.g., if we have restricted your account for breach of our Professional Community Policies), enforce our User Agreement, or fulfill your request to “unsubscribe” from further messages from us. We will retain de-personalized information after your account has been closed.
Information you have shared with others (e.g., through InMail, updates or group posts) will remain visible after you close your account or delete the information from your own profile or mailbox, and we do not control data that other Members have copied out of our Services. Groups content and ratings or review content associated with closed accounts will show an unknown user as the source. Your profile may continue to be displayed in the services of others (e.g., search tools) until they refresh their cache.

5. Other Important Information

5.1 Security
We monitor for and try to prevent security breaches. Please use the security features available through our Services.
We implement security safeguards designed to protect your data, such as HTTPS. We regularly monitor our systems for possible vulnerabilities and attacks. However, we cannot warrant the security of any information that you send us, and there is no guarantee that data will not be accessed, disclosed, altered, or destroyed by a breach of any of our physical, technical, or managerial safeguards.

5.2 Cross-Border Data Transfers
We store and use your data outside your country.
We process data both inside and outside of the United States and rely on legally-provided mechanisms to lawfully transfer data across borders. Learn more. Countries where we process data may have laws which are different from, and potentially not as protective as, the laws of your own country.

5.3 Lawful Bases for Processing
We have lawful bases to collect, use and share data about you. You have choices about our use of your data. At any time, you can withdraw consent you have provided by going to settings.
We will only collect and process personal data about you where we have lawful bases. Lawful bases include consent (where you have given consent), contract (where processing is necessary for the performance of a contract with you, e.g., to deliver the LinkedIn Services you have requested) and “legitimate interests.” Learn more.
Where we rely on your consent to process personal data, you have the right to withdraw or decline your consent at any time, and where we rely on legitimate interests, you have the right to object. Learn more. If you have any questions about the lawful bases upon which we collect and use your personal data, please contact our Data Protection Officer here. If you’re located in one of the Designated Countries or the UK, you can learn more about our lawful bases for processing in our European Regional Privacy Notice.

5.4 Direct Marketing and Do Not Track Signals
Our statements regarding direct marketing and “do not track” signals.
We currently do not share personal data with third parties for their direct marketing purposes without your permission. Learn more about this and about our response to “do not track” signals.

5.5 Contact Information
You can contact us, or use other options to resolve any complaints.
If you have questions or complaints regarding this Policy, please first contact LinkedIn online. You can also reach us by physical mail. If contacting us does not resolve your complaint, you have more options.
Residents in the Designated Countries and other regions may also have the right to contact our Data Protection Officer here. If this does not resolve your complaint, Residents in the Designated Countries and other regions may have more options under their laws.

Key Terms
Consent: Where we process data based on consent, we will ask for your explicit consent. You may withdraw your consent at any time, but that will not affect the lawfulness of the processing of your personal data prior to such withdrawal.
Contract: Where we rely on contract, we will ask that you agree to the processing of personal data that is necessary for entering into or performance of your contract with us.
Legitimate interests: We will rely on legitimate interests as a basis for data processing where the processing of your data is not overridden by your interests or fundamental rights and freedoms.

LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines
https://www.linkedin.com/company/doordash

DoorDash | LinkedIn

DoorDash
Software Development
San Francisco, California
1,437,130 followers; 76,325 employees

About us
At DoorDash, our mission to empower local economies shapes how our team members move quickly and always learn and iterate to support merchants, Dashers and the communities we serve. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. Our leaders seek the truth and welcome big, hairy, audacious questions. We are grounded in our company values, and we make intentional decisions that are both logical and display empathy for our range of users—from Dashers to Merchants to Customers.
Website: https://careersatdoordash.com/
Industry: Software Development
Company size: 10,001+ employees
Headquarters: San Francisco, California
Type: Public Company
Specialties: Local Logistics, Restaurant Delivery, On-Demand Delivery, and eCommerce
Locations: San Francisco, California, US (primary); New York City, New York, US; Toronto, CA; Phoenix, Arizona, US; Tempe, Arizona, US; Mountain View, California, US; Calgary, CA; Los Angeles, California, US; Denver, Colorado, US; Chicago, Illinois, US; Montreal, CA; Melbourne, AU; Seattle, Washington, US; Tokyo, JP; Mexico City, MX; São Paulo, BR

Updates

DoorDash, 5d:
Discovery doesn’t just happen in stores anymore. With millions of customers shopping on DoorDash each month, emerging CPG brands are scaling faster by meeting consumers where shopping already happens. We’re excited to launch our inaugural CPG Momentum Brands list, celebrating 50 challenger brands driving discovery and growth across grocery, convenience, and retail. Explore the full list → https://lnkd.in/ewVU7rmg (The DoorDash Momentum Brands List, advertising.doordash.com)

DoorDash, 6d:
Today, we’re excited to share DoorDash’s 26 Healthy Eats for 2026, a list spotlighting small and medium-sized restaurants that make eating well easy, delicious, and convenient. One trend stands out this year: protein-forward meals are having a major moment.
Orders of protein shakes on DoorDash grew 93% year-over-year, reflecting how many customers are choosing meals that keep them fueled throughout the day. Our list celebrates restaurants across the country serving vibrant bowls, hearty salads, energizing smoothies, and flavor-forward wraps. These restaurants were selected based on customer ratings, with each having at least 500 reviews and fewer than 10 locations on DoorDash. Explore the full list here: https://lnkd.in/eSGbfjNn (From Trending Bowls to Protein-Packed Plates: DoorDash’s 26 Healthy Eats to Kick Start 2026, about.doordash.com)

DoorDash, 6d:
We’re making it easier for CPG brands to reach the right shoppers and understand what’s really driving performance on DoorDash. Today, DoorDash Ads is rolling out new tools built around real customer behavior and clearer insights:
- Interest targeting based on dish preferences — reach people based on what they actually order and eat
- Retailer targeting in Sponsored Products — connect with loyal retail shoppers right where they already shop
- Category share insights — quickly see how your brand compares in your category, across sales, units, and impressions
These updates give advertisers more visibility into the full shopper journey and more control over how they plan, launch, and improve campaigns on DoorDash. Retail media is moving fast. Our goal is to keep things simple, transparent, and built around signals that actually matter. Learn more here: https://lnkd.in/eKy5sCYF (DoorDash Ads Launches Interest Targeting, Retailer Targeting, and Category Share Insights for CPG Brands, about.doordash.com)

DoorDash, 3w:
At DoorDash, we’re committed to creating an intuitive, personalized shopping journey.
Our latest GenAI-powered homepage carousels dynamically generate unique recommendations for each user, blending deep customer insight with cutting-edge AI. https://lnkd.in/eGmAa4w2 (When GenAI Meets Personalization: Powering DoorDash’s next-generation homepage experience, careersatdoordash.com)

DoorDash, 3w:
Today we launched a new partnership with OpenAI that brings grocery shopping on DoorDash directly into ChatGPT. With the new DoorDash app in ChatGPT, consumers can turn recipe ideas into a shoppable grocery list in seconds and seamlessly check out on DoorDash. By enabling DoorDash’s leading grocery selection and on-demand logistics network in ChatGPT, we’re making grocery shopping easier, faster, and more convenient. With access to tens of thousands of grocery stores across the U.S., from national retailers to regional and local favorites, consumers can find the ingredients they want from the stores they trust – all at the point of inspiration. It makes it easier to plan meals, build a weekly grocery list, or try something new without the hassle of running to a store to hunt for ingredients. The DoorDash app in ChatGPT is another step in building the most convenient, connected and reliable local commerce platform, embedding AI where it creates the most value for consumers, merchants, and Dashers. Learn more about how we’re partnering with OpenAI here: https://lnkd.in/e3T4Bpzf (DoorDash Launches Grocery Shopping App Within ChatGPT, about.doordash.com)

DoorDash, 3w:
This holiday season, we’re making dessert planning easier (and sweeter) than ever with DoorDash’s inaugural Rising Stars: The Holiday Dessert 50 — a roundup spotlighting fan-favorite independent bakeries and dessert shops across the U.S.
🎂✨ These Rising Stars, identified by analyzing DoorDash customer ratings, celebrate the neighborhood spots that anchor their communities with creativity and sweetness. Dive into the list to discover your next dessert destination. Read more on our newsroom here: https://lnkd.in/gR2cyJd2 (Holiday Dessert Plans, On-Demand: DoorDash Reveals List of Local Spots to Sweeten the Season, about.doordash.com)

DoorDash, 3w:
This November, DoorDash stood with the millions of Americans who count on SNAP to put food on the table. Through our Project DASH partners, we hosted events from coast to coast with one clear message: no American should go hungry. Project DASH’s fight against hunger is only possible through the work of partners like San Francisco-Marin Food Bank. Not only is it the organization we announced 135 million Project DASH deliveries with this year, it was also the first partner we visited during the government shutdown. Last week, we joined them again — but for a different reason: to volunteer. DoorDash team members spent the morning packing food boxes for delivery to local seniors. It’s all part of our commitment to ensure that no American goes hungry and to give back to an organization that gives so much to the community. Organizations like San Francisco-Marin Food Bank make our communities better — full stop. DoorDash is proud to support their work and honored to bring our one-team-one-fight mentality to support their mission of fighting hunger nationwide.

DoorDash, 3w:
Knowledge at DoorDash lives everywhere: in dashboards, docs, metrics hubs, and team chats. Finding answers often means switching between tools and people. To unify this distributed knowledge, we’re building an agentic AI platform that acts as a cognitive layer over DoorDash’s data and operations.
Our latest engineering post shares how we’re evolving from isolated assistants to a collaborative multi-agent ecosystem, progressing through four stages: Workflows, Agents, Deep Agents, and Agent Swarms. https://lnkd.in/eUnBBEwv (Beyond Single Agents: How DoorDash is building a collaborative AI ecosystem, careersatdoordash.com)

DoorDash, 3w:
We’re proud to share the newly released Small Business Jam Report, highlighting how hundreds of local entrepreneurs across the country were able to use AI to strengthen and grow their businesses. Through the Small Business Jam, we partnered with OpenAI, SCORE Mentors, and community partners to work directly with small business owners — from neighborhood restaurants to boutiques and shop owners — to help them build AI tools tailored to their real operational needs. The results are powerful: owners report saving hours each week on marketing and administrative work, improving customer engagement, and unlocking new opportunities for growth. As one DoorDash merchant put it: "It helped me uncover a solution I would have never found on my own." At DoorDash, our mission has always been to empower local economies. We see AI as a meaningful extension of that work and a way to help merchants streamline their operations, reach more customers, and spend more time doing what they do best. Read the full report for more.

DoorDash, 3w:
The best way to make dashing better is to talk to the people who know it best: Dashers. We gathered top Dashers and influencers from around the country at our San Francisco headquarters for a Dasher Roundtable: an exclusive, behind-the-scenes look at upcoming changes to the Dasher app, and an opportunity for company leaders to learn first-hand from Dashers’ experiences on the road.
This wasn’t our first roundtable and it won’t be our last – but it’s our latest step toward making dashing the best it’s ever been.

Affiliated pages
Bbot: Software Development, New York, New York
DoorDash for Business: Technology, Information and Internet, San Francisco, California
DashCorps: Staffing and Recruiting, NYC, NY

Funding
DoorDash: 14 total rounds. Last round: Post-IPO debt, Jun 27, 2025.
Last round raised US$ 2.8B (see Crunchbase for more funding details).
https://www.linkedin.com/products/categories/enterprise-cloud-platforms?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_subtitle-click | Best Enterprise Cloud Platforms | Products | LinkedIn Used by Chief Technology Officer (38) Information Technology Manager (36) Chief Information Officer (33) Cloud Architect (29) Cloud Engineer (27) See all products Find top products in Enterprise Cloud Platforms category Software used to provide infrastructure and platform cloud hosting on a pay-as-you-go model. - Use cloud-based resources that provide computing, virtualization, servers, storage, and networking - Create applications with hosted database and management tools, development kits, and resources for deployment - Manage infrastructure and applications in a centralized environment behind a firewall 387 results Workday Enterprise Cloud Platforms by Workday Move beyond ERP. Leave the limitations of legacy vendors behind with Workday Enterprise Management Cloud. Workday embeds AI into the core of our system, giving you the power to adapt quickly to a changing world. The world is changing fast. And ERP can’t keep up. That’s why we designed Workday with AI at the core. So you can move with speed and intelligence. Ready to deliver extraordinary results in a changing world? Good, so are we.
View product Cisco+ Hybrid Cloud Enterprise Cloud Platforms by Cisco Cisco+ Hybrid Cloud invites you to experience a new way to manage hybrid cloud with a simplified, consumption-based solution View product Oracle Cloud Infrastructure (OCI) Enterprise Cloud Platforms by Oracle Cloud Oracle Cloud Infrastructure (OCI) has all the services you need to migrate, build, and run all your IT, from existing enterprise workloads to new cloud native applications and data platforms. With more than 100 cloud infrastructure and platform services offered, OCI can help you optimize enterprise applications without costly rearchitecture, build and run cloud native or third-party apps with familiar open standards and open source tools, and harness your data with OCI services, such as Oracle MySQL Heatwave and Oracle Autonomous Database, to uncover new business value. OCI’s distributed cloud solutions are available wherever you need them, including on-premises with OCI Dedicated Region and Exadata Cloud@Customer, multicloud with Oracle Database Service for Microsoft Azure, or public cloud. OCI offers the fastest-growing network of global data centers, with 41 cloud regions available to solve your toughest business challenges. View product Cloud transformation Enterprise Cloud Platforms by Zensar Technologies Build a hyper-scale technology backbone with Zensar's Cloud Services. View product Nutanix Central Enterprise Cloud Platforms by Nutanix Nutanix Central provides a unified management solution to view and manage all Nutanix deployments, whether on-premises or on public clouds. It streamlines governance, operations, and navigation for increased efficiency across Nutanix hybrid multicloud infrastructure.
View product Nutanix Kubernetes Platform Enterprise Cloud Platforms by Nutanix Nutanix Kubernetes Platform (NKP) is a complete, open, and enterprise-grade Kubernetes platform that brings resiliency, security, and day 2 operations to cloud native applications. NKP accelerates app delivery with an intelligent platform that standardizes management for fleets of clusters across public clouds, datacenters, and the edge. View product Nutanix Enterprise AI Enterprise Cloud Platforms by Nutanix Nutanix Enterprise AI (NAI) is a scalable, Kubernetes-native platform for private GenAI inference, whether on‑premises, edge, air‑gapped, or in public clouds like AWS EKS, Azure AKS, and GKE. Deploy LLMs and endpoints with secure APIs to run from the edge to public clouds for all your GenAI apps and agents with standardized and centralized inferencing, including Day 2 operations. View product Nutanix Cloud Clusters Enterprise Cloud Platforms by Nutanix Nutanix Cloud Clusters (NC2) provides a hybrid multicloud platform that enables customers to seamlessly migrate apps and data quickly across clouds without rearchitecting. Then modernize with Nutanix AI, Kubernetes, and database services and integrate with native cloud services. NC2 offers consistent IT management, constructs, and policies with placement of workloads where they best fit, for speed, simplicity and efficiency. View product High Performance Computing on Oracle Cloud Infrastructure Enterprise Cloud Platforms by Oracle Cloud OCI offers powerful HPC solutions to solve complex mathematical and scientific problems faster. HPC on OCI rivals the performance of on-premises solutions with the elasticity and consumption-based costs of the cloud, offering on-demand availability and capability to scale tens of thousands of cores simultaneously.
OCI offers high-frequency processors, fast and dense local storage, high throughput, microsecond-latency RDMA cluster networking, and the tools to automate and run jobs seamlessly in the cloud at a lower cost. View product OCI AI Infrastructure Enterprise Cloud Platforms by Oracle Cloud Oracle Cloud Infrastructure (OCI) AI infrastructure includes bare metal instances and virtual machines with the latest GPUs for building and deploying AI models quickly and cost effectively. Use cases for Oracle AI infrastructure include computer vision, natural language processing, recommendation systems, and much more. For large-scale AI training, OCI Supercluster provides ultrafast cluster networking, HPC storage, and bare metal instances. OCI Supercluster is ideal for training generative AI, including conversational applications and diffusion models. View product See more How it works Explore Discover the best product for your need from a growing catalog of 25,000 products and categories trusted by LinkedIn professionals Learn Evaluate new tools, explore trending products in your industry and see who in your network is skilled in the product Grow Join communities of product users to learn best practices, celebrate your progress and accelerate your career | 2026-01-13T09:29:15 |
https://huggingface.co/datasets?modality=modality%3Ageospatial | Geospatial Datasets – Hugging Face Hugging Face Models Datasets Spaces Community Docs Enterprise Pricing Log In Sign Up Edit Datasets filters Main Tasks Libraries Languages Licenses Other Modalities Reset Modalities 3D Audio Document Geospatial Image Tabular Text Time-series Video Size (rows) Reset Size < 1K > 1T Format json csv parquet optimized-parquet imagefolder soundfolder webdataset text arrow Evaluation Benchmark Apply filters Datasets 353 Full-text search Edit filters Sort: Trending Active filters: geospatial Clear all Kingdrone-Junjue/EarthVLSet Preview • Updated 5 days ago • 120 • 4 do-me/Flickr-Geo Updated Mar 1, 2025 • 2.63k • 18 Major-TOM/Core-S2L2A Viewer • Updated Nov 12, 2024 • 4.49M • 15.8k • 66 Codatta/Refined-TCGA-PRAD-Prostate-Cancer-Pathology-Dataset Viewer • Updated Dec 21, 2024 • 138 • 508 • 15 AdaptLLM/remote-sensing-visual-instructions Viewer • Updated Aug 21, 2025 • 36.4k • 73 • 7 theaidevlab/IWI-Africa-Multimodal Updated Aug 6, 2025 • 8 • 3 elliotvincent/b-FLAIR-test Updated about 20 hours ago • 93 • 1 CEBPM/Municipalities-of-Russia Viewer • Updated 25 days ago • 2.67k • 128 • 6 hvent90/lunar-albedo Viewer • Updated 13
days ago • 256 • 61 • 1 cimadure/snow_removal_transactions_in_montreal Preview • Updated May 16, 2023 • 27 • 2 kraina/airbnb Viewer • Updated Jun 3, 2023 • 103k • 218 • 11 unisaacarroyov/focus_investigaciones Preview • Updated Jun 23, 2023 • 45 psalama/NYC_sensitive_sites Viewer • Updated Jun 24, 2023 • 2.04k • 56 yachay/text_coordinates_seasons Viewer • Updated Sep 22, 2023 • 624k • 118 • 2 yachay/text_coordinates_regions Viewer • Updated Sep 21, 2023 • 615k • 210 • 8 Jerry-Master/lung-tumour-study Preview • Updated Mar 28, 2024 • 797 • 1 nasa-cisto-data-science-group/tutorial-senegal-lcluc Viewer • Updated Oct 9, 2023 • 2 • 70 satellite-image-deep-learning/SODA-A Preview • Updated Oct 22, 2023 • 19 • 14 satellite-image-deep-learning/DOTAv2 Viewer • Updated Oct 28, 2023 • 2.81k • 144 • 16 jfloresf/demo Viewer • Updated Nov 12, 2023 • 45 • 491 danaroth/cuprite Viewer • Updated Nov 10, 2023 • 1 • 91 danaroth/chikusei Viewer • Updated Nov 9, 2023 • 29 • 237 • 1 danaroth/jasper_ridge Viewer • Updated Nov 10, 2023 • 1 • 146 danaroth/urban Viewer • Updated Nov 10, 2023 • 1 • 512 • 3 danaroth/samson Viewer • Updated Nov 10, 2023 • 1 • 239 jfloresf/mlstac-demo Viewer • Updated Nov 13, 2023 • 17 • 253 joshuasundance/govgis_nov2023 Viewer • Updated Nov 17, 2023 • 510k • 50 • 4 fadingNA/CovidCases.csv Preview • Updated Nov 16, 2023 • 45 danaroth/moffett_field Viewer • Updated Nov 17, 2023 • 4 • 341 IGNF/TreeSatAI-Time-Series Preview • Updated Aug 19, 2025 • 518 • 8 Previous 1 2 3 ... 12 Next System theme Company TOS Privacy About Careers Website Models Datasets Spaces Pricing Docs | 2026-01-13T09:29:15 |
https://huggingface.co/datasets?format=format%3Aaudiofolder | Datasets in audiofolder – Hugging Face Edit Datasets filters Main Tasks Libraries Languages Licenses Other Modalities 3D Audio Document Geospatial Image Tabular Text Time-series Video Size (rows) Reset Size < 1K > 1T Format Reset Format json csv parquet optimized-parquet imagefolder soundfolder webdataset text arrow Evaluation Benchmark Apply filters Datasets 7,887 Full-text search Edit filters Sort: Trending Active filters: audiofolder Clear all ashraf-ali/quran-data Viewer • Updated Dec 10, 2022 • 9k • 2.04k • 22 jamendolyrics/jam-alt Viewer • Updated Jul 1, 2025 • 158 • 2.28k • 14 haidermasood99/quran_asr_final Viewer • Updated Jul 11, 2024 • 9.63k • 1.76k • 1 campwill/HAL-9000-Speech Viewer • Updated Apr 11, 2025 • 96 • 1.13k • 11 TraceAI/TheNOexistenceNofyouANDme-Lilith-Chinese Viewer • Updated Apr 22, 2025 • 777 • 447 • 4 moonshotai/Kimi-Audio-GenTest Viewer • Updated Apr 28, 2025 • 191 • 177 • 8 Shelton1013/SwitchLingua_audio Viewer • Updated 29 days ago • 1.79k • 99 • 6 AIxBlock/English-role-playing-call-center-convers-different-moods Viewer • Updated May 21,
2025 • 40 • 8 • 4 AIxBlock/Eng-Filipino-Accented-audio-with-human-transcription-call-center-topic Viewer • Updated May 21, 2025 • 8 • 20 • 4 ChiSER5/ChiSER5 Viewer • Updated Jun 16, 2025 • 1.49k • 638 • 2 karl-wang/SaMoyeSVC Viewer • Updated Jul 13, 2025 • 4 • 34 • 1 ChengqianMa/C3 Viewer • Updated Aug 6, 2025 • 1.59k • 1.25k • 4 ASLP-lab/WSC-Eval Viewer • Updated Dec 10, 2025 • 1.19k • 2.32k • 6 adwumatech-ai/mghana-st Viewer • Updated 5 days ago • 10.4k • 1.29k • 1 zlinao/WearVox Viewer • Updated 23 days ago • 4k • 15 • 1 FunAudioLLM/SpeechFCEval Viewer • Updated 19 days ago • 1.91k • 587 • 9 ArkAiLab-Adl/Nexora-music-pd-v1-mini Viewer • Updated 5 days ago • 6 • 17 • 1 ArkAiLab-Adl/Nexora-music-pd-v1-medium Viewer • Updated 5 days ago • 17 • 42 • 1 NbAiLab/NPSC_test Viewer • Updated Nov 7, 2022 • 112 • 401 abidlabs/crowdsourced-speech-demo Viewer • Updated Apr 28, 2022 • 6 • 74 • 1 abidlabs/crowdsourced-speech Viewer • Updated Jan 21, 2022 • 5 • 28 abidlabs/crowdsourced-speech2 Viewer • Updated Jan 21, 2022 • 2 • 59 abidlabs/crowdsourced-speech4 Viewer • Updated Jan 21, 2022 • 1 • 49 abidlabs/crowdsourced-speech5 Viewer • Updated Jan 21, 2022 • 2 • 73 abidlabs/crowdsourced-speech7 Viewer • Updated Jan 21, 2022 • 1 • 63 abidlabs/test-audio-1 Viewer • Updated Jan 19, 2022 • 2 • 54 abidlabs/test-audio-13 Viewer • Updated Jan 21, 2022 • 4 • 43 albertvillanova/dummy_libri2mix Viewer • Updated Jul 28, 2021 • 9 • 133 • 1 cahya/luganda-zindi Viewer • Updated Jan 18, 2022 • 7.07k • 61 • 1 ebrigham/asr_files Viewer • Updated Jan 3, 2022 • 1 • 49 Previous 1 2 3 ... 100 Next | 2026-01-13T09:29:15 |
https://www.linkedin.com/posts/digitaleconomycouncil_seasons-greetings-from-the-digital-economy-activity-7409002809846620160-D7S0 | Season’s Greetings from the Digital Economy Council of Australia (DECA) As we reflect on the year that has passed, we would like to sincerely thank our members, partners and broader community for… | Digital Economy Council of Australia (DECA) Digital Economy Council of Australia (DECA)’s Post Digital Economy Council of Australia (DECA) 16,292 followers 3w Season’s Greetings from the Digital Economy Council of Australia (DECA) As we reflect on the year that has passed, we would like to sincerely thank our members, partners and broader community for your trust, collaboration and continued support. The progress made this year would not have been possible without your engagement, leadership and shared commitment to advancing Australia’s digital assets and digital economy. We wish you a restful holiday season and a New Year filled with opportunity, momentum and shared success. We look forward to continuing our work together in the year ahead. Happy Holidays and best wishes for 2026 The Digital Economy Council of Australia (DECA) 8 1 Comment BFM Times 1w Hello! I’m Varun Sakhuja from BFM Times, I take care of the Media Partnerships. We collaborate with Web3 & crypto events as a media partner. Can you please share your email address, so we can share our collaboration offers. Thank You.
| 2026-01-13T09:29:15 |
https://huggingface.co/datasets?format=format%3Aimagefolder | Datasets in imagefolder – Hugging Face Edit Datasets filters Main Tasks Libraries Languages Licenses Other Modalities 3D Audio Document Geospatial Image Tabular Text Time-series Video Size (rows) Reset Size < 1K > 1T Format Reset Format json csv parquet optimized-parquet imagefolder soundfolder webdataset text arrow Evaluation Benchmark Apply filters Datasets 18,761 Full-text search Edit filters Sort: Trending Active filters: imagefolder Clear all edc505/pokemon Viewer • Updated Sep 12, 2022 • 369 • 2.35k • 60 x1101/nsfw-full Viewer • Updated Mar 26, 2023 • 126 • 483 • 32 songjhPKU/RxnCaption-11k Viewer • Updated Oct 31, 2025 • 10.5k • 35 • 8 m-Just/O3-Bench Viewer • Updated 18 days ago • 345 • 1.26k • 14 ShiroOnigami23/MyTorch-MNIST-Dataset Viewer • Updated 2 days ago • 1 • 24 • 3 edbeeching/godot_rl_FlyBy Viewer • Updated Feb 22, 2024 • 9 • 1.3k • 3 HuggingFace-CN-community/Diffusion-book-cn Viewer • Updated Apr 19, 2023 • 167 • 1.97k • 76 Mialiu/nerf Viewer • Updated Apr 2, 2023 • 1.73k • 48 • 2 jyj7913/spectro-polarimetric Viewer • Updated Dec 9, 2025 • 2.33k •
1.56k • 15 CollectorN01/PonyXL-Lora-MyAhhArchiveCN01 Viewer • Updated Oct 23, 2025 • 1 • 5.44k • 11 RussRobin/StitchBench Viewer • Updated Nov 28, 2025 • 523 • 56 • 6 nunchaku-ai/cdn Viewer • Updated about 13 hours ago • 10 • 84.9k • 3 Naveen0501/dlgenai-nppe-dataset Viewer • Updated 2 days ago • 8.68k • 68.1k • 2 yangzhou99/DrivingGen Viewer • Updated 4 days ago • 1.78k • 45 • 2 LejuRobotics/kuavo_data_challenge_icra Viewer • Updated 6 days ago • 2 • 15.4k • 2 chris1004336379/360DVO Viewer • Updated about 22 hours ago • 8.5k • 1.23k • 4 webxos/wireframe_patterns_v1 Viewer • Updated 3 days ago • 100 • 92 • 2 aisamxu/something-wrong Viewer • Updated 3 days ago • 1 • 53 • 2 webxos/3D_torus_dataset Viewer • Updated 3 days ago • 71 • 74 • 2 webxos/timelink_dataset Viewer • Updated 3 days ago • 248 • 69 • 2 shubh303/dense_object_detection_FiftyOne Viewer • Updated 3 days ago • 5.38k • 2.06k • 2 edbeeching/cpp_graphics_engineer_test_datasets Viewer • Updated Oct 7, 2022 • 18 • 151 • 2 Nerfgun3/bad_prompt Viewer • Updated Nov 19, 2022 • 1 • 3.05k • 930 Tritanium/personal-lora-images Viewer • Updated Apr 29, 2023 • 26 • 18 • 1 JingyeChen22/TextDiffuser Viewer • Updated Sep 24, 2025 • 66 • 198 • 6 xieyizheng/pose Viewer • Updated Jul 14, 2023 • 67.2k • 62 • 3 YouLiXiya/nerf Viewer • Updated Apr 15, 2024 • 3k • 97 • 4 google/dreambooth Viewer • Updated Aug 15, 2023 • 164 • 563 • 54 banned-historical-archives/banned-historical-archives Viewer • Updated Oct 19, 2025 • 1 • 818k • 11 realslimman/REFUGE-MultiRater Viewer • Updated Dec 22, 2023 • 20.4k • 69 • 1 Previous 1 2 3 ... 100 Next | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/vultr-block-storage/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | Vultr Block Storage | LinkedIn Vultr Block Storage Block Storage Software by Vultr About Create expandable HDD or NVMe SSD storage volumes and mount them directly to your compute instances. Similar products Tencent Cloud Block Storage Block Storage Software OVHcloud Block Storage Block Storage Software DigitalOcean Block Storage Block Storage Software Vultr Block Storage Software Vultr Bare Metal Block Storage Software DataCore SANsymphony Block Storage Software Vultr products Vultr Block Storage Software Vultr Bare Metal Block Storage Software | 2026-01-13T09:29:15 |
https://www.linkedin.com/signup/cold-join?session_redirect=https%3A%2F%2Fwww%2Elinkedin%2Ecom%2Fcompany%2Fdigitaleconomycouncil&trk=organization_guest_main-feed-card_share-cta | Sign Up | LinkedIn | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/amazon-web-services-amazon-aurora/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | Amazon Aurora | LinkedIn Amazon Aurora Relational Database Management Systems by Amazon Web Services (AWS) About Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Similar products Amazon RDS Relational Database Management Systems Google Cloud SQL Relational Database Management Systems Teradata Vantage Relational Database Management Systems Cloud Native Database TDSQL-C Relational Database Management Systems SQL Server Relational Database Management Systems CockroachDB Relational Database Management Systems Amazon Web Services (AWS) products Amazon DynamoDB NoSQL Database Software Amazon EC2 Infrastructure as a Service (IaaS) Amazon Lightsail Platform as a Service (PaaS) Software Amazon RDS Relational Database Management Systems Amazon SageMaker Data Science & Machine Learning Platforms Amazon Simple Storage Service (S3) Object Storage Software Amazon Virtual Private Cloud Virtual Private Cloud (VPC) Software AWS Lambda Platform as a Service (PaaS) Software | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/zensar-cloud-transformation/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | Cloud transformation | LinkedIn Cloud transformation Enterprise Cloud Platforms by Zensar Technologies About Build a hyper-scale technology backbone with Zensar's Cloud Services. Start your cloud journey right Migrate seamlessly, successfully, and securely and embrace a cloud model that meets your organization’s ever-changing needs and allows you to scale without breaking the bank. Similar products Workday Enterprise Cloud Platforms Cisco+ Hybrid Cloud Enterprise Cloud Platforms Oracle Cloud Infrastructure (OCI) Enterprise Cloud Platforms Nutanix Central Enterprise Cloud Platforms Nutanix Kubernetes Platform Enterprise Cloud Platforms Nutanix Enterprise AI Enterprise Cloud Platforms Zensar Technologies products Digital Applications Digital Adoption Platforms Ruptive Digital Experience Platforms (DXP) | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/vultr/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | Vultr | LinkedIn Vultr Block Storage Software by Vultr About View details and pricing information for the entire suite of Vultr products, including cloud compute, cloud GPU, bare metal, Kubernetes, and more. Similar products Tencent Cloud Block Storage Block Storage Software OVHcloud Block Storage Block Storage Software DigitalOcean Block Storage Block Storage Software Vultr Bare Metal Block Storage Software DataCore SANsymphony Block Storage Software Vultr Block Storage Block Storage Software Vultr products Vultr Bare Metal Block Storage Software Vultr Block Storage Block Storage Software | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/datacore-software-datacore-sansymphony/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | DataCore SANsymphony | LinkedIn DataCore SANsymphony Block Storage Software by DataCore Software About DataCore SANsymphony is a software-defined storage platform that unifies and safeguards diverse storage systems across SAN and HCI environments. It delivers continuous data availability and cyber resilience through automated business continuity and disaster recovery workflows and continuous data protection. SANsymphony keeps your infrastructure secure and recoverable from outages, hardware failures, or ransomware—ensuring mission-critical workloads stay online and your data always remains in control. Built for flexibility, performance, and efficiency, SANsymphony lets you modernize without disruption—integrating new storage technologies seamlessly while maximizing the value of existing assets. Its intelligent caching, auto-tiering, and parallel I/O acceleration drive consistent high-speed performance, while centralized management and automation reduce operational overhead and lower TCO. DataCore SANsymphony gives enterprises true storage freedom—resilient, optimized, and future-ready.
This product is intended for Information Technology Manager Storage Engineer Storage Administrator Information Technology Technician Information Technology Specialist Information Communication Technology Manager Information Technology Consultant Director of Information Technology Chief Information Officer Chief Technology Officer Media Products media viewer No more previous content SANsymphony Management Console Automatically tier data across storage devices based on data access temperatures. Storage the most accessed (hot) data on the most fastest tier such as flash/NVMe and warm/cold data on slower storage systems. Synchronous Mirroring for High Availability SANsymphony mirrors data synchronously across local and metro clusters. Automatic and transparent failover, resynchronization, and fallback ensure zero RPO and RTO while eliminating downtime and guaranteeing 100% application and data availability and access. Three Lines of Defense with SANsymphony Enhanced Cybersecurity to Protect Your Data No more next content Similar products Tencent Cloud Block Storage Tencent Cloud Block Storage Block Storage Software OVHcloud Block Storage OVHcloud Block Storage Block Storage Software DigitalOcean Block Storage DigitalOcean Block Storage Block Storage Software Vultr Vultr Block Storage Software Vultr Bare Metal Vultr Bare Metal Block Storage Software Vultr Block Storage Vultr Block Storage Block Storage Software Sign in to see more Show more Show less DataCore Software products DataCore Nexus DataCore Nexus Cloud File Storage Software DataCore Puls8 DataCore Puls8 Container Management Software DataCore Swarm DataCore Swarm Object Storage Software StarWind HCI StarWind HCI Hyper-Converged Infrastructure (HCI) LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines English (English) Language | 2026-01-13T09:29:15 |
https://www.linkedin.com/products/workday/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | Workday | LinkedIn Workday Enterprise Cloud Platforms by Workday About Move beyond ERP. Leave the limitations of legacy vendors behind with Workday Enterprise Management Cloud. Workday embeds AI into the core of our system, giving you the power to adapt quickly to a changing world. The world is changing fast. And ERP can’t keep up. That’s why we designed Workday with AI at the core. So you can move with speed and intelligence. Ready to deliver extraordinary results in a changing world? Good, so are we. | 2026-01-13T09:29:15
https://rubygems.org | RubyGems.org | your community gem host RubyGems Releases Blog Gems Guides Find, install, and publish RubyGems. 234,215,858,655 downloads & counting Install RubyGems Status Uptime Code Data Stats Contribute About Help API Policies Support Us Security RubyGems.org is the Ruby community’s gem hosting service. Instantly publish your gems and then install them . Use the API to find out more about available gems . Become a contributor and improve the site yourself. The RubyGems.org website and service are maintained and operated by Ruby Central’s Open Source Program and the RubyGems team. It is funded by the greater Ruby community through support from sponsors, members, and infrastructure donations. If you build with Ruby and believe in our mission, you can join us in keeping RubyGems.org, RubyGems, and Bundler secure and sustainable for years to come by contributing here . Operated by Ruby Central Designed by DockYard Hosted by AWS Resolved with DNSimple Monitored by Datadog Gems served by Fastly Monitored by Honeybadger Secured by Mend.io | 2026-01-13T09:29:15
http://proceedings.mlr.press/spec.html | PMLR Proceedings Specification | Proceedings of Machine Learning Research PMLR Proceedings Specification Version 1.3 – Jul 27th, 2025 Once the proceedings is accepted you'll be provided with a GitHub repository where you will upload the PDFs and the BibTeX. The proceedings repo will be derived from this template . You should fork the proceedings and add the files specified below. When the files are ready you can submit a pull request. Your forked proceedings should include: A BibTeX file (described below) with the details of each paper in the proceedings. A collection of PDF files (and, optionally, supplementary files). When you submit the pull request you will be asked to fill in this check list. The following specification is for simple conference proceedings that consist of one or more papers that are to be displayed in page order with links to associated PDF and supplementary files. If you need any clarifications, or need to publish proceedings with special sections, awards, multiple cycles, or other features, please email proceedings mlr.press The BibTeX file In titles and author names, please use LaTeX commands for representing special characters (such as \'e for an acute accent on e). For mathematics use standard LaTeX inside two $ marks, e.g. $\int_{x=1}^z x^2 \text{d}x$. Note : The contents of each field must be valid BibTeX (i.e., make sure characters are escaped correctly). All of the conference and paper details are contained in a single BibTeX file. You can name it whatever you like but we suggest you use some association with the conference name (e.g., icml13.bib ) There are two types of entry that must appear in the BibTeX file: There must be a single @Proceedings entry with the conference details. It must include the following fields: booktitle : the citation to the volume (e.g.
Proceedings of the Twenty First International Conference on Machine Learning) name : the long name of the conference (e.g. International Conference on Machine Learning) shortname : the abbreviation for the conference (without the year, e.g., ICML) year : the year of the conference (e.g., 2013) editor : the editors’ names (in “Lastname, Firstnames” format, separated by “and”) volume : the PMLR number assigned to your conference/workshop start : the first day of the conference/workshop in YYYY-MM-DD format. end : the last day of the conference/workshop in YYYY-MM-DD format. address : the location of the conference in venue, city, country format. conference_url : the URL of the conference website. conference_number : if the conference is part of a series, give the number in the series here. Other fields may be present in the @Proceedings entry but these will be ignored at present. There must be one @InProceedings entry for each paper. Each entry must have the following fields: title : The title of the paper author : The paper’s authors in “Lastname, Firstnames” format, separated by “and”. Do not use unicode characters, use the LaTeX equivalents. pages : The page numbers in “startpage–endpage” format abstract : The paper’s abstract. It can include maths in valid LaTeX. Make sure there are no special characters in the abstract, such as those arising from copying and pasting ligatures from the PDF. You can use <em> tags for emphasis. You can use <b> for bold. You can use <ul> tags for bullets. You can use <ol> for numbered lists. Other fields may also appear in the @InProceedings entries but these will be ignored at present. The identifiers should be in the form lastnameYY where lastname is the lowercase last name of the first author and YY is the last two digits of the year of the conference. If two papers share the same identifier under this scheme, they must be disambiguated by appending one of a , b , c , etc.
to the conflicting identifiers (e.g., reid12a and reid12b ). Please do not use UTF-8 characters in these names (or the file names). ASCII only. The Papers and Supplementary Material The paper PDFs must have a filename corresponding to the identifier of its BibTeX entry. For example, the PDF for the reid12a entry must have filename reid12a.pdf . These should all appear in the same directory as the BibTeX file describing all the entries. Make sure all supplementary filenames for a paper with ID lastnameYY begin with the same ID but end with the suffix -supp (e.g., the PDF supplementary should be named lastnameYY-supp.pdf ). These should appear in the same directory as the BibTeX file you provide us. Use whichever file extension is most appropriate (e.g., .pdf for PDF files). Please do not provide a single zipped file with supplementary material in it. This provides a poor experience for the user. For software and video see the sections below on Software and Videos . For example, suppose the paper with identifier reid12b had a supplementary PDF file with proofs. You should give us a file named reid12b-supp.pdf . The generated site for the reid12b entry will contain a link titled supplementary to the file reid12b-supp.pdf . Sample BibTeX file and papers Below is a sample of a BibTeX file for the 2nd International Conference on Examples (ICE’13) in 2013. It has been given volume number 57 and is edited by Jane Smith and Joe Bloggs. It contains three papers. It was provided as part of a gzipped tarball ice13.tar.gz along with the PDF and supplementary files.
The contents of the tarball is: ice13.bib (The BibTeX file below) jones13a.pdf jones13a-supp.pdf (Supplementary proofs for jones13a ) jones13b.pdf smythe13.pdf The content of the BibTeX file ice13.bib is: % if the proceedings is a reissued proceedings, please add the field % 'firstpublished' to the entry below, giving the original date of % publication in YYYY-MM-DD format. @Proceedings{ICE-2013, booktitle = {Proceedings of The 2nd International Conference on Examples}, name = {International Conference on Examples}, shortname = {ICE}, editor = {Smith, Jane and Bloggs, Joe}, volume = {57}, year = {2013}, start = {2012-12-15}, end = {2012-12-15}, published = {2021-03-29}, conference_url = {http://exemplar.cc/ice2013}, address = {Breckenridge, Colorado, USA} } @InProceedings{jones13a, title = {A Meta-Heuristic for Example Invention}, author = {Jones, Raphael and Howard, Peter}, pages = {1-9}, abstract = {This paper considers the problem of inventing examples to convey features of an abstract specification. We provide a straightforward meta-technique to develop examples by writing any old rubbish that fills up space but still reads like English. Experiments show that this approach fares better than ``lorum ipsum'' text in a variety of cases. Running time (including LaTeX mathematics) is $O(1)$.} } @InProceedings{jones13b, title = {Example Generalization via Skim Reading}, author = {K\"oller, Klaus and Frankel, David and Deutsch, Mark}, pages = {10-18}, abstract = {We show how skim reading a small number of examples can give a good sense of a more abstract specification. This is a complementary paper to (Jones et al.
2013) which shares the same first author as this paper and so must be properly disambiguated.} } @InProceedings{smythe13, title = {Are Three Examples Sufficient?}, author = {Smythe, Byron and Gordon, Trent}, pages = {19-26}, abstract = {We empirically verify that, for the purposes of conveying the details of an abstract BibTeX specification, three examples suffice. The related problem of whether three examples are necessary is left as future work.} } @InProceedings{vonHamburg13, title = {Handling Prefixes and Suffixes in Names}, author = {von Hamburg, Hans and van der Petersburg, Pieter and de Gier, Willhelm and Hirst, Jr, Peter}, pages = {27-41}, abstract = {We demonstrate the ability of scripts to parse names containing prefixes such as 'van der' or 'von'. As a further extension, we consider suffixes to names such as 'Jr' and 'III'.} } Sectioned Proceedings We also support publishing in sections. To publish your proceedings in sections, add the following field to the proceedings BibTeX entry. @Proceedings{... shortname = {ICE}, sections = {Preface|Contributed Papers|Invited Papers} } And then to each bibtex entry add a field defining the section e.g. @InProceedings{smythe13, title = ... section = {Contributed Papers} } Openreview If you reviewed your proceedings using openreview, then you might like to have the reviews linked to from the proceedings. If you require this, you need to add a field to each bib entry specifying the openreview code. e.g. @InProceedings{smythe13, title = ... openreview = {zaMwvOjsyym} } Software Availability Many papers have associated code provided. You can provide a link to a url that specifies the code. If you require this, you need to add a field to each bib entry specifying the software URL. e.g. @InProceedings{smythe13, title = ... software = {https://github.com/SheffieldML/GPy} } Video Availability Please don't send videos as supplementary material. Instead you can provide a link to a url that specifies the video location.
If you require this, you need to add a field to each bib entry specifying the video URL. e.g. @InProceedings{smythe13, title = ... video = {https://www.youtube.com/watch?v=01z6agTRvOo} } This site last compiled Wed, 31 Dec 2025 21:13:50 +0000 Github Account MLResearchPress | 2026-01-13T09:29:15
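The file-layout rules in the specification above (lowercase lastnameYY identifiers, a matching PDF per entry, the -supp suffix for supplementary files) can be checked mechanically before opening the pull request. Below is a minimal Python sketch, not an official PMLR tool; the helper name `check_entry_files` is an assumption, and the rules are paraphrased from the specification using the sample ice13 tarball contents.

```python
import re

# Identifier rule from the spec: lowercase ASCII last name plus a
# two-digit year, with an optional disambiguating letter
# (e.g. reid12a, reid12b).
ID_RE = re.compile(r"[a-z]+\d{2}[a-z]?")

def check_entry_files(identifier, filenames):
    """Return a list of problems for one @InProceedings identifier.

    This is a hypothetical helper sketched from the spec, not part of
    any PMLR tooling.
    """
    problems = []
    if not ID_RE.fullmatch(identifier):
        problems.append(f"{identifier}: not lastnameYY (lowercase ASCII only)")
    if f"{identifier}.pdf" not in filenames:
        problems.append(f"{identifier}: missing {identifier}.pdf")
    # Supplementary files are optional, but must carry the -supp suffix.
    for name in filenames:
        if name.startswith(identifier + "-") and "-supp" not in name:
            problems.append(f"{identifier}: {name} lacks the -supp suffix")
    return problems

# Contents of the sample ice13.tar.gz described in the specification.
files = {"ice13.bib", "jones13a.pdf", "jones13a-supp.pdf",
         "jones13b.pdf", "smythe13.pdf"}

# The three sample papers pass all checks.
for ident in ("jones13a", "jones13b", "smythe13"):
    assert check_entry_files(ident, files) == []

# vonHamburg13 violates the lowercase rule and has no PDF in the tarball,
# so it is flagged.
print(check_entry_files("vonHamburg13", files))
```

In a real pre-submission check one would parse the identifiers out of the .bib file and list the repository directory instead of hard-coding both, but the per-entry rules stay the same.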
https://opensource.guide/pcm/leadership-and-governance/ | Leadership and Governance | Open Source Guides Leadership and Governance Growing open source projects can benefit from formal rules for making decisions. Table of Contents Understanding governance for your growing project What are examples of formal roles used in open source projects? How do I formalize these leadership roles? When should I give someone commit access? What are some of the common governance structures for open source projects? Do I need governance docs when I launch my project? What happens if corporate employees start submitting contributions? Understanding governance for your growing project Your project is growing, people are engaged, and you’re committed to keeping this thing going. At this stage, you may be wondering how to incorporate regular project contributors into your workflow, whether it’s giving someone commit access or resolving community debates. If you have questions, we’ve got answers. What are examples of formal roles used in open source projects? Many projects follow a similar structure for contributor roles and recognition. What these roles actually mean, though, is entirely up to you. Here are a few types of roles you may recognize: Maintainer Contributor Committer For some projects, “maintainers” are the only people in a project with commit access. In other projects, they’re simply the people who are listed in the README as maintainers. A maintainer doesn’t necessarily have to be someone who writes code for your project. It could be someone who’s done a lot of work evangelizing your project, or written documentation that made the project more accessible to others.
Regardless of what they do day-to-day, a maintainer is probably someone who feels responsibility over the direction of the project and is committed to improving it. A “contributor” could be anyone who comments on an issue or pull request, people who add value to the project (whether it’s triaging issues, writing code, or organizing events), or anybody with a merged pull request (perhaps the narrowest definition of a contributor). [For Node.js,] every person who shows up to comment on an issue or submit code is a member of a project’s community. Just being able to see them means that they have crossed the line from being a user to being a contributor. — @mikeal, “Healthy Open Source” The term “committer” might be used to distinguish commit access, which is a specific type of responsibility, from other forms of contribution. While you can define your project roles any way you’d like, consider using broader definitions to encourage more forms of contribution. You can use leadership roles to formally recognize people who have made outstanding contributions to your project, regardless of their technical skill. You might know me as the “inventor” of Django…but really I’m the guy who got hired to work on a thing a year after it was already made. (…) People suspect that I’m successful because of my programming skill…but I’m at best an average programmer. — @jacobian, “PyCon 2015 Keynote” (video) How do I formalize these leadership roles? Formalizing your leadership roles helps people feel ownership and tells other community members who to look to for help. For a smaller project, designating leaders can be as simple as adding their names to your README or a CONTRIBUTORS text file. For a bigger project, if you have a website, create a team page or list your project leaders there. For example, Postgres has a comprehensive team page with short profiles for each contributor. 
If your project has a very active contributor community, you might form a “core team” of maintainers, or even subcommittees of people who take ownership of different issue areas (for example, security, issue triaging, or community conduct). Let people self-organize and volunteer for the roles they’re most excited about, rather than assigning them. [We] supplement the core team with several “subteams”. Each subteam is focused on a specific area, e.g., language design or libraries. (…) To ensure global coordination and a strong, coherent vision for the project as a whole, each subteam is led by a member of the core team. — “Rust Governance RFC” Leadership teams may want to create a designated channel (like on IRC) or meet regularly to discuss the project (like on Gitter or Google Hangout). You can even make those meetings public so other people can listen. Cucumber-ruby , for example, hosts office hours every week . Once you’ve established leadership roles, don’t forget to document how people can attain them! Establish a clear process for how someone can become a maintainer or join a subcommittee in your project, and write it into your GOVERNANCE.md. Tools like Vossibility can help you publicly track who is (or isn’t) making contributions to the project. Documenting this information avoids the community perception that maintainers are a clique that makes its decisions privately. Finally, if your project is on GitHub, consider moving your project from your personal account to an Organization and adding at least one backup admin. GitHub Organizations make it easier to manage permissions and multiple repositories and protect your project’s legacy through shared ownership . When should I give someone commit access? Some people think you should give commit access to everybody who makes a contribution. Doing so could encourage more people to feel ownership of your project. 
On the other hand, especially for bigger, more complex projects, you may want to only give commit access to people who have demonstrated their commitment. There’s no one right way of doing it - do what makes you most comfortable! If your project is on GitHub, you can use protected branches to manage who can push to a particular branch, and under which circumstances. Whenever somebody sends you a pull request, give them commit access to your project. While it may sound incredibly stupid at first, using this strategy will allow you to unleash the true power of GitHub. (…) Once people have commit access, they are no longer worried that their patch might go unmerged…causing them to put much more work into it. — @felixge, “The Pull Request Hack” What are some of the common governance structures for open source projects? There are three common governance structures associated with open source projects. BDFL: BDFL stands for “Benevolent Dictator for Life”. Under this structure, one person (usually the initial author of the project) has final say on all major project decisions. Python is a classic example. Smaller projects are probably BDFL by default, because there are only one or two maintainers. A project that originated at a company might also fall into the BDFL category. Meritocracy: (Note: the term “meritocracy” carries negative connotations for some communities and has a complex social and political history .) Under a meritocracy, active project contributors (those who demonstrate “merit”) are given a formal decision making role. Decisions are usually made based on pure voting consensus. The meritocracy concept was pioneered by the Apache Foundation ; all Apache projects are meritocracies. Contributions can only be made by individuals representing themselves, not by a company. Liberal contribution: Under a liberal contribution model, the people who do the most work are recognized as most influential, but this is based on current work and not historic contributions. 
Major project decisions are made based on a consensus seeking process (discuss major grievances) rather than pure vote, and strive to include as many community perspectives as possible. Popular examples of projects that use a liberal contribution model include Node.js and Rust . Which one should you use? It’s up to you! Every model has advantages and trade-offs. And although they may seem quite different at first, all three models have more in common than they seem. If you’re interested in adopting one of these models, check out these templates: BDFL model template Meritocracy model template Node.js’s liberal contribution policy Do I need governance docs when I launch my project? There is no right time to write down your project’s governance, but it’s much easier to define once you’ve seen your community dynamics play out. The best (and hardest) part about open source governance is that it is shaped by the community! Some early documentation will inevitably contribute to your project’s governance, however, so start writing down what you can. For example, you can define clear expectations for behavior, or how your contributor process works, even at your project’s launch. If you’re part of a company launching an open source project, it’s worth having an internal discussion before launch about how your company expects to maintain and make decisions about the project moving forward. You may also want to publicly explain anything particular to how your company will (or won’t!) be involved with the project. We assign small teams to manage projects on GitHub who are actually working on these at Facebook. For example, React is run by a React engineer. — @caabernathy, “An inside look at open source at Facebook” What happens if corporate employees start submitting contributions? Successful open source projects get used by many people and companies, and some companies may eventually have revenue streams tied to the project.
For example, a company may use the project’s code as one component in a commercial service offering. As the project gets more widely used, people who have expertise in it become more in-demand - you may be one of them! - and will sometimes get paid for work they do in the project. It’s important to treat commercial activity as normal and as just another source of development energy. Paid developers shouldn’t get special treatment over unpaid ones, of course; each contribution must be evaluated on its technical merits. However, people should feel comfortable engaging in commercial activity, and feel comfortable stating their use cases when arguing in favor of a particular enhancement or feature. “Commercial” is completely compatible with “open source”. “Commercial” just means there is money involved somewhere - that the software is used in commerce, which is increasingly likely as a project gains adoption. (When open source software is used as part of a non-open-source product, the overall product is still “proprietary” software, though, like open source, it might be used for commercial or non-commercial purposes.) Like anyone else, commercially-motivated developers gain influence in the project through the quality and quantity of their contributions. Obviously, a developer who is paid for her time may be able to do more than someone who is not paid, but that’s okay: payment is just one of many possible factors that could affect how much someone does. Keep your project discussions focused on the contributions, not on the external factors that enable people to make those contributions. Do I need a legal entity to support my project? You don’t need a legal entity to support your open source project unless you’re handling money. For example, if you want to create a commercial business, you’ll want to set up a C Corp or LLC (if you’re based in the US). 
If you’re just doing contract work related to your open source project, you can accept money as a sole proprietor, or set up an LLC (if you’re based in the US). If you want to accept donations for your open source project, you can set up a donation button (using PayPal or Stripe, for example), but the money won’t be tax-deductible unless you are a qualifying nonprofit (a 501c3, if you’re in the US). Many projects don’t wish to go through the trouble of setting up a nonprofit, so they find a nonprofit fiscal sponsor instead. A fiscal sponsor accepts donations on your behalf, usually in exchange for a percentage of the donation. Software Freedom Conservancy , Apache Foundation , Eclipse Foundation , Linux Foundation and Open Collective are examples of organizations that serve as fiscal sponsors for open source projects. Our goal is to provide an infrastructure that communities can use to be self sustainable, thus creating an environment where everyone — contributors, backers, sponsors — get concrete benefits out of it. — @piamancini, “Moving beyond the charity framework” If your project is closely associated with a certain language or ecosystem, there may also be a related software foundation you can work with. For example, the Python Software Foundation helps support PyPI , the Python package manager, and the Node.js Foundation helps support Express.js , a Node-based framework. Related Guides Best Practices for Maintainers Learn how to make your life easy as an open source maintainer, from documentation to community collaboration. Make sense of maintaining open source projects with these top-notch tips. Open Source Metrics Make informed decisions to help your open source project thrive by measuring and tracking its success. | 2026-01-13T09:29:15
http://proceedings.mlr.press/ | Proceedings of Machine Learning Research | The Proceedings of Machine Learning Research (formerly JMLR Workshop and Conference Proceedings) is a series aimed specifically at publishing machine learning research presented at workshops and conferences. Each volume is separately titled and associated with a particular workshop or conference. Volumes are published online on the PMLR web site. The Series Editors are Neil D. Lawrence and Mark Reid. Proceedings of Machine Learning Research ISSN: 2640-3498 The Proceedings of Machine Learning Research is a series that publishes machine learning research papers presented at conferences and workshops. Each volume is separately titled and associated with a particular workshop or conference and will be published online on the PMLR web site. Authors retain copyright. Editors The Series Editor is Neil Lawrence . Please send proposals for new volumes under this series to us via e-mail: proceedings mlr.press . Each proposal should include: A brief description of the event's scope and topics to be covered. A description of the review process for the proceedings. The names and short CVs (a few lines) of the proposed Proceedings Editors. For details on how to prepare a proceedings for publication please see the Specification . For frequently asked questions on publishing proceedings please see the FAQ . Reissue Series We have launched a "reissue series" for republishing volumes of papers that are no longer online. Our first reissues are early editions of AISTATS: AISTATS 1995 , AISTATS 1997 , AISTATS 1999 , AISTATS 2001 , AISTATS 2003 and AISTATS 2005 . To suggest volumes for the reissue series contact the editors. You will need to supply the information described in the Proceedings Specification for the volume.
Volume R6 Reissue of UAI 1998 Volume R0 Pre-proceedings of AISTATS 1995 Volume R5 Proceedings of AISTATS 2005 Volume R4 Proceedings of AISTATS 2003 Volume R3 Proceedings of AISTATS 2001 Volume R2 Proceedings of AISTATS 1999 Volume R1 Proceedings of AISTATS 1997 Proceedings Volume 311 Proceedings of MLCB 2025 Volume 310 Proceedings of Reliable AI Workshop at ACML Volume 305 Proceedings of CoRL 2025 Volume 299 Proceedings of CAMLIS 2025 Volume 298 Proceedings of MLHC 2025 Volume 268 AAAI 2024 Bridge Program on Continual Causality Volume 271 Proceedings of ProbNum 2025 Volume 266 Proceedings of COPA 2025 Volume 269 Proceedings of Learning on Graphs 2025 Volume 284 Proceedings of NeSy 2025 Volume 278 Proceedings of MLIC 2025 Volume 277 Proceedings of ML-DE 2025 Volume 267 Proceedings of ICML 2025 Volume 296 "I Can't Believe It's Not Better" ICLR Workshop 2025 Volume 295 Proceedings of UK AI Conference 2024 Volume 286 Proceedings of UAI 2025 Volume 294 Proceedings of European Workshop on Algorithmic Fairness Volume 293 Proceedings of AutoML 2025 Volume 280 Proceedings of CPAL 2025 Volume 287 Proceedings of CHIL 2025 Volume 292 TerraBytes at ICML 2025 Volume 275 Proceedings of CLeaR 2025 Volume 285 Proceedings of Unireps 2025 Volume 291 Proceedings of COLT 2025 Volume 290 Proceedings of ISIPTA 2025 Volume 288 Proceedings of Neurosymbolic Systems 2025 Volume 289 Proceedings of AABI 2025 Volume 283 Proceedings of L4DC 2025 Volume 279 Proceedings of Algorithmic Fairness Through the Lens of Metrics and Evaluation 2024 Volume 258 Proceedings of AISTATS 2025 Volume 281 Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare Volume 273 Proceedings of iRAISE 2025 Volume 274 Proceedings of CoLLAs 2024 Volume 272 Proceedings of ALT 2025 Volume 259 Proceedings of ML4H 2024 Volume 264 Proceedings of Large Foundation Models for Educational Assessment Volume 260 Proceedings of ACML 2024 Volume 265 Proceedings of the Northern Lights Deep Learning Conference 2025
Volume 270 Proceedings of CoRL 2024 Volume 250 Proceedings of MIDL 2024 Volume 251 Proceedings of GRaM at ICML 2024 Volume 262 Proceedings of Efficient Natural Language and Signal Processing Workshop at NeurIPS 2024 Volume 263 Proceedings of CLDD 2024 Volume 252 Proceedings of MLHC 2024 Volume 254 Proceedings of MICCAI COMPAYL 2024 Volume 261 Proceedings of MLCB 2024 Volume 256 Proceedings of AutoML 2024 Volume 255 Proceedings of ML Meets Differential Equations Volume 244 Proceedings of UAI 2024 Volume 230 Proceedings of COPA 2024 Volume 246 Proceedings of PGM 2024 Volume 249 Proceedings of the 1st ContinualAI Unconference Volume 257 AI for Education Workshop 2024 Volume 228 Proceedings of NeuReps 2023 Volume 245 Proceedings of MLIC 2024 Volume 253 AABI 2024 Proceedings Volume 248 Proceedings of CHIL 2024 Volume 235 Proceedings of ICML 2024 Volume 247 Proceedings of COLT 2024 Volume 242 Proceedings of L4DC 2024 Volume 241 Proceedings of LIDTA 2023 Volume 243 Proceedings of UniReps Volume 226 Proceedings of the NeurIPS 2023 Gaze Meets ML Workshop Volume 239 "I Can't Believe It's Not Better" NeurIPS Workshop 2023 Volume 238 Proceedings of AISTATS 2024 Volume 231 Proceedings of Learning on Graphs 2023 Volume 236 Proceedings of CLeaR 2024 Volume 240 Proceedings of MLCB 2024 Volume 237 Proceedings of ALT 2024 Volume 222 Proceedings of ACML 2023 Volume 227 Proceedings of Medical Imaging with Deep Learning 2023 Volume 233 Proceedings of the Northern Lights Deep Learning Conference 2024 Volume 234 Proceedings of Conference on Parsimony and Learning 2024 Volume 219 Proceedings of Machine Learning for Healthcare 2023 Volume 225 Proceedings of the 3rd Machine Learning for Health Symposium Volume 207 Proceedings of the 3rd TeachML Workshop Volume 224 Proceedings of AutoML 2023 Volume 229 Proceedings of CoRL 2023 Volume 223 Proceedings of CAWS 2023 Volume 232 Proceedings of CoLLAs 2023 Volume 221 Proceedings of the 2nd Annual Topology, Algebra, and Geometry in Machine Learning 
(TAG-ML) Volume 220 Proceedings of the NeurIPS 2022 Competitions Track Volume 204 COPA 2023 Proceedings Volume 213 Proceedings of CLeaR 2023 Volume 218 Proceedings of KDD Workshop on Causal Discovery, Prediction and Decision 2023 Volume 195 Proceedings of COLT 2023 Volume 215 Proceedings of ISIPTA 2023 Volume 217 Proceedings of ICGI 2023 Volume 202 Proceedings of ICML 2023 Volume 216 Proceedings of UAI 2023 Volume 209 Proceedings of Conference on Health, Inference, and Learning 2023 Volume 211 Proceedings of L4DC 2023 Volume 214 Proceedings of AFCP 2022 Volume 208 Proceedings of the AAAI Bridge Symposium on Continual Causality 2023 Volume 212 Proceedings of the NeurIPS Cell Segmentation Challenge 2022 Volume 189 Proceedings of ACML 2022 Volume 206 Proceedings of AISTATS 2023 Volume 210 Proceedings of the NeurIPS 2022 Gaze Meets ML Workshop Volume 192 Proceedings of the 3rd Workshop on Self-Supervised Learning Volume 205 Proceedings of CoRL 2022 Volume 187 "I Can't Believe It's Not Better" NeurIPS Workshop 2022 Volume 201 Proceedings of ALT 2023 Volume 197 Proceedings of NeurReps Workshop 2022 Volume 182 Proceedings of MLHC 2022 Volume 203 Proceedings of Transfer Learning for NLP 2022 Volume 166 Proceedings of HEAR 2021 Volume 191 Proceedings of ECML/PKDD Workshop on Meta-Knowledge Transfer 2022 Volume 198 Proceedings of Learning on Graphs 2022 Volume 200 Proceedings of MLCB 2022 Volume 172 Proceedings of Medical Imaging with Deep Learning Volume 194 Proceedings of Geometric Deep Learning in Medical Image Analysis Volume 199 Proceedings of CoLLAs 2022 Volume 193 Proceedings of 2nd Machine Learning for Health Symposium Volume 196 Proceedings of Topological, Algebraic and Geometric Learning Workshops 2022 Volume 183 Proceedings of LIDTA 2022 Volume 181 Proceedings of the NeurIPS 2021 Workshop on Pre-registration in Machine Learning Volume 188 Proceedings of AutoML-Conf 2022 Volume 186 Proceedings of PGM 2022 Volume 190 Proceedings of MSML 2022 Volume 179 Proceedings 
of the 11th Symposium on Conformal and Probabilistic Prediction with Applications Volume 180 Proceedings of UAI 2022 Volume 185 Proceedings of KDD Workshop on Causal Discovery 2022 Volume 184 Proceedings of ICML 2022 Workshop on Healthcare AI and COVID-19 Volume 176 Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track Volume 162 Proceedings of ICML 2022 Volume 177 Proceedings of CLeaR 2022 Volume 178 Proceedings of COLT 2022 Volume 168 Proceedings of Learning for Dynamics and Control 2022 Volume 151 Proceedings of AISTATS 2022 Volume 145 Proceedings of MSML 2021 Volume 159 Proceedings of IWSSL 2021 Volume 174 Proceedings of CHIL 2022 Volume 173 Proceedings of ICCV 2021 DYAD Workshop Volume 167 ALT 2022 Proceedings Volume 170 Proceedings of the 2nd Teaching Machine Learning and Artificial Intelligence Workshop Volume 171 Proceedings of AFCR 2021 Volume 163 "I Can't Believe It's Not Better" NeurIPS Workshop 2021 Volume 164 Proceedings of CoRL 2021 Volume 165 Proceedings of MLCB 2021 Volume 161 Proceedings of UAI 2021 Volume 160 Proceedings of Causal Analysis Workshop 2021 Volume 158 ML4H Symposium 2021 Volume 157 Proceedings of ACML 2021 Volume 149 Proceedings of Machine Learning for Healthcare 2021 Volume 155 Proceedings of CoRL 2020 Volume 154 Proceedings of LIDTA 2021 Volume 152 COPA 2021 Proceedings Volume 142 Proceedings of 2nd Workshop on Diversity in Artificial Intelligence Volume 156 Proceedings of COMPAY 2021 Volume 153 Proceedings of ICGI 2020 Volume 143 Proceedings of MIDL 2021 Volume 140 Proceedings of AAAI Workshop on Meta-Learning and MetaDL Challenge Volume 147 Proceedings of ISIPTA 2021 Volume 133 NeurIPS 2020 Competition and Demonstration Track Volume 150 Proceedings of KDD Workshop on Causal Discovery 2021 Volume 134 Proceedings of COLT 2021 Volume 148 Workshop on Preregistration in Machine Learning Volume 139 Proceedings of ICML 2021 Volume 144 Proceedings of Learning for Dynamics and Control 2021 Volume 146 Proceedings of AAAI 
Symposium on Survival Prediction - Algorithms, Challenges, and Applications Volume 141 Proceedings of the First Teaching Machine Learning and Artificial Intelligence Workshop Volume 130 Proceedings of AISTATS 2021 Volume 132 ALT 2021 Proceedings Volume 137 "I Can't Believe It's Not Better" NeurIPS Workshop 2020 Volume 138 Proceedings of PGM 2020 Volume 136 NeurIPS 2020 ML4H Workshop Volume 122 Proceedings of 3rd Workshop on AI in Affective Computing Volume 119 Proceedings of ICML 2020 Volume 129 Proceedings of ACML 2020 Volume 121 Proceedings of MIDL 2020 Volume 126 Proceedings of Machine Learning for Healthcare 2020 Volume 131 Proceedings of the International Workshop on Self-Supervised Learning Volume 124 Proceedings of UAI 2020 Volume 127 KDD 2020 Workshop on Causal Discovery Volume 123 NeurIPS 2019 Competition and Demonstration Track Volume 107 Proceedings of MSML 2020 Volume 128 COPA 2020 Proceedings Volume 115 Proceedings of UAI 2019 Volume 120 Proceedings of Learning for Dynamics and Control Volume 125 COLT 2020 Proceedings Volume 108 Proceedings of AISTATS 2020 Volume 116 NeurIPS 2019 ML4H Workshop Volume 86 IJCAI workshop on AI and Affective Computing Volume 118 AABI 2019 Proceedings Volume 117 ALT 2020 Proceedings Volume 106 Proceedings of Machine Learning for Healthcare 2019 Volume 101 ACML 2019 Proceedings Volume 109 Workshop on Online Recommender Systems and User Modeling 2019 Volume 105 Workshop on Conformal and Probabilistic Prediction with Applications (COPA) 2019 Volume 104 KDD Workshop on Causal Discovery 2019 Volume 103 ISIPTA 2019 Volume 102 Proceedings of MIDL 2019 Volume 97 Proceedings of ICML 2019 Volume 96 AABI 2018 Proceedings Volume 89 Proceedings of AISTATS 2019 Volume 93 ICGI 2018 Proceedings Volume 100 CoRL 2019 Volume 99 COLT 2019 Volume 98 ALT 2019 Proceedings Volume 85 Proceedings of Machine Learning for Healthcare 2018 Volume 94 LIDTA 2018 Proceedings Volume 95 ACML 2018 Proceedings Volume 87 CoRL 2018 Volume 92 2018 KDD Workshop on 
Causal Discovery Volume 82 Proceedings of PAPIs 2017 Volume 88 Proceedings of COST 2018 Volume 75 Proceedings of COLT 2018 Volume 80 Proceedings of ICML 2018 Volume 91 COPA 2018 Volume 90 ADE Identification Volume 83 ALT 2018 Proceedings Volume 84 AISTATS 2018 Proceedings Volume 81 FAT* 2018 Proceedings Volume 79 Open Algorithm Selection Challenge 2017 Volume 68 Proceedings of Machine Learning for Healthcare 2017 Volume 77 ACML 2017 Proceedings Volume 78 CoRL 2017 Proceedings Volume 74 LIDTA 2017 Proceedings Volume 76 ALT 2017 Proceedings Volume 72 Conference on Probabilistic Graphical Models 2018 Volume 73 2017 Workshop on Advanced Methodologies for Bayesian Networks Volume 71 Proceedings of the KDD 2017 Workshop on Anomaly Detection in Finance Volume 70 Proceedings of ICML 2017 Volume 69 MIH 2017 Proceedings Volume 67 3rd International Conference on Predictive Applications and APIs Volume 66 1st IJCAI Workshop on Artificial Intelligence in Affective Computing Volume 65 COLT 2017 Proceedings Volume 62 ISIPTA '17 Proceedings Volume 60 Workshop on Conformal and Probabilistic Prediction with Applications (COPA) 2017 Volume 58 NIPS 2016 Workshop on Imperfect Decision Makers Volume 54 AISTATS 2017 Proceedings Volume 64 AutoML 2016 Workshop Proceedings Volume 63 ACML 2016 Proceedings Volume 57 ICGI 2016 Proceedings Volume 56 Proceedings of Machine Learning for Healthcare 2016 Volume 55 NIPS 2016 Time Series Workshop Volume 53 BigMine 2016 Proceedings Volume 52 PGM 2016 Proceedings Volume 51 AISTATS 2016 Proceedings Volume 50 PAPIs 2015 Proceedings Volume 49 COLT 2016 Proceedings Volume 48 ICML 2016 Proceedings Volume 47 ECML/PKDD Workshop on Statistically Sound Data Mining 2014 Proceedings Volume 46 Neural Connectomics Workshop 2014 Proceedings Volume 45 ACML 2015 Proceedings Volume 44 NIPS Workshop on Feature Extraction 2015 Proceedings Volume 43 MLIS 2015 Proceedings Volume 42 HEPML 2014 Proceedings Volume 41 BigMine 2015 Proceedings Volume 40 COLT 2015 Proceedings 
Volume 39 ACML 2014 Proceedings Volume 38 AISTATS 2015 Proceedings Volume 37 ICML 2015 Proceedings Volume 36 BIGMINE 2014 Proceedings Volume 35 COLT 2014 Proceedings Volume 34 ICGI 2014 Proceedings Volume 33 AISTATS 2014 Proceedings Volume 32 ICML 2014 Proceedings Volume 31 AISTATS 2013 Proceedings Volume 30 COLT 2013 Proceedings Volume 29 ACML 2013 Proceedings Volume 28 ICML 2013 Proceedings Volume 27 Unsupervised and Transfer Learning Workshop at ICML 2011 Volume 26 On-line Trading of Exploration and Exploitation 2011 Proceedings Volume 25 ACML 2012 Proceedings Volume 24 EWRL 2012 Proceedings Volume 23 COLT 2012 Proceedings Volume 22 AISTATS 2012 Proceedings Volume 21 ICGI 2012 Proceedings Volume 20 ACML 2011 Proceedings Volume 19 COLT 2011 Proceedings Volume 18 KDD Cup 2011 Volume 17 Workshop on Applications of Pattern Analysis (WAPA) 2011 Proceedings Volume 16 Active Learning and Experimental Design Workshop Volume 15 AISTATS 2011 Proceedings Volume 14 Proceedings of the Learning to Rank Challenge Volume 13 ACML 2010 Proceedings Volume 12 NIPS Mini-Symposium on Causality in Time Series Volume 11 Workshop on Applications of Pattern Analysis (WAPA) 2010 Proceedings Volume 10 FSDM 2010 Proceedings Volume 9 AISTATS 2010 Proceedings Volume 8 Proceedings of MLSB 2009 Volume 7 Proceedings of KDD-Cup 2009 competition Volume 6 Causality: Objectives and Assessment (NIPS 2008 Workshop) Volume 5 AISTATS 2009 Proceedings Volume 4 New Challenges for Feature Selection in Data Mining and Knowledge Discovery Volume 3 Causation and Prediction Challenge (WCCI 2008) Volume 2 AISTATS 2007 Proceedings Volume 1 Gaussian Processes in Practice The proceedings were originally known as JMLR W&CP. Rebranding was finalised on 10th April 2017. The first volume to be published at the PMLR site was volume 54, the 2017 AISTATS Proceedings. The ISSN of the old proceedings was: 1938-7228. 
subscribe via RSS This site last compiled Wed, 31 Dec 2025 21:13:50 +0000 Github Account MLResearchPress | 2026-01-13T09:29:15 |
https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-beta/+index | Chromium Beta branch : Saikrishna Arcot Chromium Beta branch PPA description This PPA contains the latest Chromium Beta builds, with hardware video decoding enabled (hidden behind a flag), and support for Widevine (needed for viewing many DRM-protected videos) enabled. === Hardware Video Decoding === To enable hardware video decoding, start Chromium with the --enable-features=VaapiVideoDecoder argument. To make this persistent, create a file at /etc/chromium-browser/customizations/92-vaapi-hardware-decoding with the following contents: CHROMIUM_FLAGS="${CHROMIUM_FLAGS} --enable-features=VaapiVideoDecoder" See also https://wiki.archlinux.org/title/Chromium#Hardware_video_acceleration for more information on VAAPI video decoding support. === Widevine Support === The packages in this PPA have support for Widevine inside Chromium enabled. However, you still need to copy some files from Chrome into Chromium to use Netflix (or other websites using Encrypted Media Extensions) in Chromium. 1. Download and install Chrome (or extract the necessary files, if you know how to do that). 2. From the Chrome installation directory (probably /opt/google/chrome or something similar), copy libwidevinecdm.so into ~/.config/chromium. 3. Restart Chromium. === Preventing falling back to the standard version of Chromium === Sometimes this PPA doesn't get updated within a few days of a release, and in some cases the version of Chromium in the standard Ubuntu repos may be newer than the version provided by the PPA. If you want to prevent apt from updating to the version in the standard Ubuntu repos, add the following three lines to /etc/apt/preferences.d/chromium: Package: * Pin: release o=LP-PPA-saiarcot895-chromium-beta Pin-Priority: 700 === Help === If you need help with installing Chromium, or notice stuff not working that should normally work, click on my name below in the "Adding this PPA to your system" section. === Packaging === Packaging for this PPA is at https://github.com/saiarcot895/chromium-ubuntu-build . A separate branch is created for each upstream branch number. Adding this PPA to your system You can update your system with unsupported packages from this untrusted PPA by adding ppa:saiarcot895/chromium-beta to your system's Software Sources. sudo add-apt-repository ppa:saiarcot895/chromium-beta sudo apt update Technical details about this PPA This PPA can be added to your system manually by copying the lines below and adding them to your system's software sources. sources.list entries (choose your Ubuntu version: Kinetic (22.10), Jammy (22.04), Focal (20.04), Cosmic (18.10), Bionic (18.04), Xenial (16.04)): deb https://ppa.launchpadcontent.net/saiarcot895/chromium-beta/ubuntu YOUR_UBUNTU_VERSION_HERE main deb-src https://ppa.launchpadcontent.net/saiarcot895/chromium-beta/ubuntu YOUR_UBUNTU_VERSION_HERE main Signing key: 4096R/4B2F369E32D934F066ACB0C3F6C3EADDA7D08424 Fingerprint: 4B2F369E32D934F066ACB0C3F6C3EADDA7D08424 Dependencies: Chromium Dev branch (included on 2015-06-25) For questions and bugs with software in this PPA please contact Saikrishna Arcot. PPA statistics Activity 0 updates added during the past month.
View package details Overview of published packages Published in: Any series Kinetic Jammy Focal Cosmic Bionic Xenial 1 → 10 of 10 results Package Version Uploaded by chromium-browser 1:108.0.5359.40-0ubuntu1~ppa1~22.10.1 Saikrishna Arcot ( 2022-11-14 ) chromium-browser 1:108.0.5359.40-0ubuntu1~ppa1~22.04.1 Saikrishna Arcot ( 2022-11-14 ) chromium-browser 1:108.0.5359.40-0ubuntu1~ppa1~20.04.1 Saikrishna Arcot ( 2022-11-14 ) chromium-browser 1:101.0.4951.15-0ubuntu1~ppa1~18.04.1 Saikrishna Arcot ( 2022-04-06 ) libvdpau 1.4-2~ubuntu20.04.1 Saikrishna Arcot ( 2020-10-17 ) pipewire 0.2.3-3~ubuntu18.04.1 Saikrishna Arcot ( 2019-10-04 ) vdpau-video 0.7.4-7ubuntu1~ppa2~20.04.1 Saikrishna Arcot ( 2020-10-17 ) vdpau-video 0.7.4-7ubuntu1~ppa1~18.10.1 Saikrishna Arcot ( 2019-04-12 ) vdpau-video 0.7.4-6ubuntu2~ppa1~18.04.1 Saikrishna Arcot ( 2019-04-12 ) vdpau-video 0.7.4-5ubuntu1~ppa3~16.04.1 Saikrishna Arcot ( 2017-09-02 ) Latest updates chromium-browser 165 weeks ago Failed to build: arm64 chromium-browser 165 weeks ago Failed to build: arm64 chromium-browser 165 weeks ago Failed to build: arm64 chromium-browser 197 weeks ago Failed to build: arm64 libvdpau 273 weeks ago Successfully built | 2026-01-13T09:29:15
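The persistent-flag file and the apt pin described in the PPA notes above can be sketched as a short shell snippet. The file contents are taken from the PPA description; the DESTDIR indirection is an assumption added here purely so the sketch can run against a scratch directory instead of `/` (drop it and run as root to apply the changes for real).

```shell
#!/bin/sh
# Sketch of the two config files from the PPA description above.
# DESTDIR defaults to a scratch directory for safe experimentation.
DESTDIR="${DESTDIR:-./demo-root}"

# 1. Persistently enable VAAPI hardware video decoding for Chromium.
mkdir -p "$DESTDIR/etc/chromium-browser/customizations"
cat > "$DESTDIR/etc/chromium-browser/customizations/92-vaapi-hardware-decoding" <<'EOF'
CHROMIUM_FLAGS="${CHROMIUM_FLAGS} --enable-features=VaapiVideoDecoder"
EOF

# 2. Pin the PPA above the standard Ubuntu repos so apt keeps preferring it.
mkdir -p "$DESTDIR/etc/apt/preferences.d"
cat > "$DESTDIR/etc/apt/preferences.d/chromium" <<'EOF'
Package: *
Pin: release o=LP-PPA-saiarcot895-chromium-beta
Pin-Priority: 700
EOF
```

The quoted heredoc (`<<'EOF'`) matters: it keeps `${CHROMIUM_FLAGS}` literal so the variable is expanded by Chromium's wrapper script at launch time, not when the file is written.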
https://www.linkedin.com/products/ovhgroup-ovh-block-storage/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click | OVHcloud Block Storage | LinkedIn OVHcloud Block Storage Block Storage Software by OVHcloud About With Block Storage, you can use secure and flexible storage volumes that are available on demand. Block Storage is based on Ceph, an open-source technology. Similar products Tencent Cloud Block Storage Block Storage Software DigitalOcean Block Storage Block Storage Software Vultr Block Storage Software Vultr Bare Metal Block Storage Software DataCore SANsymphony Block Storage Software Vultr Block Storage Block Storage Software OVHcloud products OVHcloud Anti-DDoS Protection DDoS Protection Software OVHcloud Marketplace Marketplace Platforms OVHcloud Object Storage Object Storage Software OVHcloud Zerto DRP Disaster Recovery as a Service (DRaaS) Software | 2026-01-13T09:29:15
https://typedoc.org/documents/Options.html | Options | TypeDoc Options Any command line arguments that are passed without a flag will be parsed as entry points. Any options passed on the command line will override options set in a configuration file. Configuration Options Options which control what files TypeDoc reads. options tsconfig compilerOptions plugin Input Options Options which control how input is converted into a project that can be rendered to HTML or JSON. entryPoints entryPointStrategy packageOptions alwaysCreateEntryPointModule projectDocuments exclude externalPattern excludeExternals excludeNotDocumented excludeNotDocumentedKinds excludeInternal excludePrivate excludePrivateClassFields excludeProtected excludeReferences excludeCategories maxTypeConversionDepth name includeVersion disableSources sourceLinkTemplate gitRevision gitRemote disableGit readme basePath Output Options Options which control TypeDoc's HTML output. outputs out html json pretty emit theme router lightHighlightTheme darkHighlightTheme highlightLanguages ignoredHighlightLanguages typePrintWidth customCss customJs customFooterHtml customFooterHtmlDisableWrapper markdownItOptions markdownItLoader displayBasePath cname favicon sourceLinkExternal markdownLinkExternal lang locales githubPages cacheBust hideGenerator searchInComments searchInDocuments cleanOutputDir titleLink navigationLinks sidebarLinks navigation headings sluggerConfiguration navigationLeaves visibilityFilters searchCategoryBoosts searchGroupBoosts hostedBaseUrl useHostedBaseUrlForAbsoluteLinks useFirstParagraphOfCommentAsSummary includeHierarchySummary Comment Options Options which control how TypeDoc parses comments and documents.
commentStyle useTsLinkResolution preserveLinkText jsDocCompatibility suppressCommentWarningsInDeclarationFiles blockTags inlineTags modifierTags cascadedModifierTags excludeTags notRenderedTags preservedTypeAnnotationTags externalSymbolLinkMappings Organization Options Controls how TypeDoc organizes content within a converted project. groupReferencesByType categorizeByGroup defaultCategory categoryOrder groupOrder sort sortEntryPoints kindSortOrder Validation Options Configures the validation performed by TypeDoc on a converted project. validation treatWarningsAsErrors treatValidationWarningsAsErrors intentionallyNotExported requiredToBeDocumented packagesRequiringDocumentation intentionallyNotDocumented Other Options watch preserveWatchOutput help version showConfig logLevel skipErrorChecking | 2026-01-13T09:29:15
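The option names listed above can be combined in a `typedoc.json` configuration file, which the command line then overrides as noted at the top of the page. The snippet below writes a hypothetical configuration; the option names come from the page, but the entry point path, output directory, project name, and link URL are made-up examples.

```shell
# Write a hypothetical typedoc.json exercising a few options from the page
# above (entryPoints, out, name, includeVersion, excludePrivate,
# navigationLinks). All values are illustrative, not from the source.
cat > typedoc.json <<'EOF'
{
    "entryPoints": ["src/index.ts"],
    "out": "docs",
    "name": "my-project",
    "includeVersion": true,
    "excludePrivate": true,
    "navigationLinks": {
        "GitHub": "https://example.com/my-project"
    }
}
EOF
```

With a file like this in the project root, a plain `npx typedoc` invocation would pick it up, and a flag such as `--out other-dir` on the command line would take precedence over the value in the file.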
https://id.atlassian.com/login?continue=https://confluence.atlassian.com/login.action?os_destination=/bamboo/bamboo-documentation-289276551.html | Log in with Atlassian account | 2026-01-13T09:29:15
https://www.linkedin.com/products/categories/cloud-communications-platforms | Best Cloud Communications Platforms | Products | LinkedIn Used by Chief Technology Officer (9) Chief Product Officer (6) Chief Information Officer (4) Information Technology Manager (4) Chief Executive Officer (4) See all products Find top products in Cloud Communications Platforms category Software used to enable voice and text messaging features in mobile applications using APIs. - Use application programming interfaces (APIs) to enable cloud-hosted communications - Customize interface for calling, instant messaging, and verifications in applications - Integrate with VoIP services and implement enhanced security features - Use API documentation and guides to explore features and flexibility 96 results Slack Cloud Communications Platforms by Salesforce Slack is your work operating system. Slack is where humans with agents meet data, workflows, and apps to deliver big results, right where people love to work. Tencent Cloud Instant Messaging (IM) Cloud Communications Platforms by Tencent Tencent Cloud Instant Messaging (IM) provides globally interconnected chat APIs, multi-platform SDKs, and UIKit components to help you quickly bring messaging capabilities such as one-to-one chat, group chat, chat rooms, and system notifications to your applications and websites. Avaya Cloud Office Cloud Communications Platforms by Avaya Avaya Cloud Office is a cloud-based communications system with enterprise grade voice, video, IM/messaging, meetings, conferencing and collaboration.
With Avaya Cloud Office you can easily connect all your workers, whether in the office, mobile, or remote to a single system regardless of their location. MessageBird Cloud Communications Platforms by Bird APIs, tools, customer support software, and a global connectivity network to elevate business to customer relationships. Vonage Communications APIs Cloud Communications Platforms by Vonage With Vonage APIs, growing startups and agile enterprises can build best-in-class communications solutions at scale. Our global platform, comprehensive suite of communications channels, and expert support help you move quickly to offer your customers the most helpful experience possible, sooner rather than later. CPaaS Cloud Communications Platforms by Plivo A CPaaS (communications platform as a service) lets developers add real-time communications features such as SMS messaging and voice call to their applications without the hassle of dealing with the global telecom infrastructure. Plivo's SMS API and Voice API and its Premium Communications Network give businesses a way to communicate with their customers anywhere in the world. Cloud-based communications platforms provide a bridge between businesses and telecom carriers, programmatically connecting a business's devices and applications via application programming interfaces (API) that provide all the code a business needs to incorporate communication functionality into their applications. On the back end, cloud communications platforms connect to telecommunication carriers and operators to route messages and voice calls on behalf of businesses.
Channelize.io: Live Stream Shopping Platform Cloud Communications Platforms by BigStep Technologies Channelize.io is a Platform-as-a-Service (PaaS) product that is revolutionizing the way businesses (eCommerce Stores & D2C Brands) are selling online. We enable eCommerce Stores & D2C Brands to leverage the power of Live Stream Shopping and Real-time Engagement to Boost Sales and Enhance their Brand Perception by connecting better with their buyers and showcasing and marketing their products in ways that lead to informed, trusted and accelerated purchases by buyers. We make Live Video Shopping better for brands with consistent, beautiful branded experiences across Websites and Apps. There is no doubt that Live Shopping will become the new normal in retail and will go down in history as an evolution of traditional eCommerce. Our platform, therefore, enables brands to achieve the best of both worlds - ROI and Reach, making them a real Winner in the retail space. Our Platform works through 3 primary mechanisms: entertaining consumers, engaging them, and educating them on the brands. Telnyx Cloud Communications Platforms by Telnyx Real-time conversational AI with global telephony, infrastructure & full customizability, all in one platform. Structured Cabling Cloud Communications Platforms by IES Communications Backed by 40+ years of experience, we design, install, and maintain scalable cabling systems, delivering high-performance connectivity for reliable network infrastructure. CXi Switch Cloud Communications Platforms by Ozonetel | oneCXi CXi Switch, a first-of-its-kind internet communication product, powers instant voice and digital conversations across all touchpoints, as an alternative to telecom channels. It's the core routing engine of the oneCXi platform that drives instant one-click conversations between customers and businesses.
A plug-and-play widget seamlessly integrates with apps, websites, online ads, newsletters, in-store displays, digital kiosks, and other touchpoints to power hassle-free voice and digital conversations. It works across devices and form factors. Using internet communication, CXi Switch empowers businesses to take full control of conversations across marketing, sales, and service, creating truly seamless journeys throughout the customer lifecycle. It delivers measurable outcomes with improved brand engagement, 4X higher conversions through contextual conversations, and a 3X increase in lifetime value with improved retention. | 2026-01-13T09:29:15
http://proceedings.mlr.press/faq.html | Proceedings of Machine Learning Research FAQ | Proceedings of Machine Learning Research Frequently Asked Questions The Proceedings of Machine Learning Research are very much a community effort. Thanks are particularly due to Isabelle Guyon for working closely with us in preparing early volumes, Nicola Talbot for preparing the LaTeX class file, and Leslie Pack Kaelbling for supporting the launch in the first place. There's a Mistake in a Paper: Can it be Fixed? If the mistake is a clear typo (e.g. the abstract or authors not matching the PDF), you can submit a pull request to fix it. You'll need a GitHub account for this. Log into your GitHub account and click the "edit" button on the abstract page; you will then be able to edit the meta information directly. Pages are generated by Jekyll on GitHub, a static webpage generation framework. Information for each paper is included in the `_posts/` subdirectory; clicking edit from the abstract page will allow you to edit the relevant file. Each file is a Markdown file with a YAML header, and all the meta information is in the YAML header. The format is based on Martin Fenner's citeproc. Note that title and author information is stored twice: once in a format appropriate for HTML (e.g. with Unicode characters) and once in a format appropriate for LaTeX (e.g. with LaTeX accents and curly brackets to preserve case). You can find more information on the format. Do not edit the bib file; this has no effect on the generated pages or the generated bibfiles. If the change is more involved, you need the permission of the proceedings editors. Please report how you got that permission in the pull request (i.e. what the circumstances were and who the individual who gave you permission was).
For example, if there's a mistake in the PDF you will need the volume editor's permission to make changes. How do I publish my proceedings with you? Email the series editor with your planned event. You need to provide brief CVs of the organizers and an overview of the subject matter. Potential proceedings are judged on the basis of the appropriateness of the material, the perceived quality of the workshop, and the track record of the organizers. We expect papers to be peer reviewed, but we are open to innovative mechanisms for how this is done. Please provide a description of your approach. We expect proceedings requests to come from the editors of the proceedings directly, not through proxies. How do I register a reissue proceedings with you? You will need the support of the original conference organizers. When you have this, email the series editors with your planned reissue. The reissue proceedings need to be provided in the format detailed here. What Volume Number will my Proceedings be? When you are close to finalizing the proceedings, email the series editor. You'll receive a volume number allocated from those currently available. What if I want a Paper Copy? You are free to organize a hard copy of your proceedings for your workshop, although PMLR does not provide support for this. Previous editors have made use of Microtome. See the Challenges in Machine Learning series here, which is made up of volumes from PMLR. Nicola Talbot has created a LaTeX package for creating books from the proceedings. Should the Call for Papers be Before or After the Workshop? This is up to you. You can make the proceedings a pre-proceedings or a post-proceedings. Some conferences, like AISTATS 2010, do the call before the workshop and put the proceedings online in time for the event. Others will finalize their proceedings after the event.
Overall, our advice would be: for larger proceedings and more formal conferences, get them up before the event; for smaller proceedings and less formal workshops, it can make sense to produce them after the event. What is the Style File for the Proceedings? PMLR proceedings use the jmlr LaTeX class, which supports both single-column and double-column formats. As of 2010, Nicola Talbot has kindly put together and maintains this LaTeX package. Installation and Access The jmlr package is available from CTAN (the Comprehensive TeX Archive Network): Package page: https://ctan.org/pkg/jmlr Direct access: https://ctan.org/tex-archive/macros/latex/contrib/jmlr Most modern LaTeX distributions (TeX Live, MiKTeX) include the jmlr package by default. If you need to install it manually, download it from CTAN or use your distribution's package manager. Usage Instructions For double-column format (used for some PMLR proceedings, e.g. AISTATS volumes): \documentclass[pmlr,twocolumn]{jmlr} For single-column format (standard PMLR proceedings): \documentclass[pmlr]{jmlr} The pmlr option sets the correct header reading "Proceedings of Machine Learning Research"; the twocolumn option enables the two-column layout when needed. Sample Files We provide complete sample LaTeX files to help you get started: Single-column format (standard PMLR): .tex source | .pdf preview Double-column format (AISTATS-style): .tex source | .pdf preview These samples include examples of author-year citations, equations, figures, tables, algorithms, and diverse international author names. Download the .tex files and modify them for your paper. Note: we don't support preparation systems other than LaTeX. Software for Supporting Proceedings Preparation As of 2012, Nicola Talbot has prepared a Java application for assisting in preparing a PMLR proceedings; it is available here. Many thanks to Nicola. What do I Need to Provide for the Proceedings?
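Putting the usage instructions above together, a minimal paper skeleton using the jmlr class might look like the following sketch. All metadata values are placeholders, and real submissions should start from the official sample .tex files rather than this outline:

```latex
% Minimal sketch of a PMLR paper using the jmlr class; all metadata
% values are placeholders -- start from the official sample .tex files.
\documentclass[pmlr]{jmlr}   % add the twocolumn option for AISTATS-style volumes

\title{An Example Paper Title}
\author{\Name{Ada Lovelace} \Email{ada@example.org}\\
  \addr Example University, Example City}

\begin{document}
\maketitle

\begin{abstract}
A one-paragraph abstract goes here.
\end{abstract}

\section{Introduction}
Body text, with author-year citations handled by the class.

\end{document}
```

The \Name, \Email, and \addr author markup shown here follows the jmlr class's documented author format; the sample files on the FAQ page demonstrate the full set of citation and float examples.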
You should appoint a publications chair (who could also be one of the editors, but needn't be: for larger conferences it is often a separate role, while for smaller workshops it tends to be an editor). The publications chair will be responsible for compiling the proceedings. The proceedings are provided in the format detailed here. As well as the papers in the format described above, you will need to provide permission-to-publish forms from your authors. Please distribute the form found here to your authors, have them sign a copy, and have them send you a scanned PDF. The scanned PDF should be named with the convention given above plus the suffix Permission, e.g. turner10aPermission.pdf. These files should be placed in a separate directory from the other material, called vXpermissions, and sent to the series editor. I'm an editor: How do I make changes to a published proceedings? To do this you'll need a GitHub account; some familiarity with git will help a lot. For smaller changes, you just need to be logged into your GitHub account: go to the abstract page of the paper you want to change and click on the "edit" button. You'll then go straight to the relevant file and can edit it directly (note that title and author changes need to be made in two fields: one normal field and one for BibTeX/LaTeX). For larger changes, you may need to use the more traditional approach to pull requests. You should fork the repository (see here), then clone it (see here) to your local drive. Make the changes locally, commit them, push them, and then submit a pull request (see here). All of this allows us to use GitHub's version-control mechanisms to track changes to the proceedings. This site last compiled Wed, 31 Dec 2025 21:13:50 +0000 | 2026-01-13T09:29:15
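The larger-changes workflow above (fork, clone, edit, commit, push, pull request) can be sketched as the following command sequence. To keep the sketch self-contained it works against a freshly initialized throwaway repository standing in for a cloned fork, and the file name and branch name are hypothetical; real edits target the actual volume repository on GitHub.

```shell
# Sketch of the pull-request workflow for editing a published volume.
# A local throwaway repo stands in for a cloned fork; the _posts/ file
# name and the branch name below are hypothetical examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.org
git config user.name "You"

# Stand-in for the existing volume content you would get by cloning a fork.
mkdir -p _posts
echo "title: Example" > _posts/2010-03-31-example10a.md
git add _posts
git commit -qm "Initial import"

# The change itself: branch, edit the relevant _posts/ file, commit.
git checkout -qb fix-title-typo
echo "title: Example (fixed)" > _posts/2010-03-31-example10a.md
git commit -qam "Fix title typo"

# Against a real fork you would now push and open a pull request:
# git push origin fix-title-typo
```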
https://typedoc.org/example/index.html | TypeDoc Example The TypeDoc Example Welcome to the TypeDoc example! TypeDoc is a documentation generator for TypeScript. TypeDoc automatically documents every variable, function, and class that is exported by your project. You can add explanations and examples to your documentation site by annotating your code with doc comments, e.g.

```typescript
/**
 * Calculates the square root of a number.
 *
 * @param x the number to calculate the root of.
 * @returns the square root if `x` is non-negative or `NaN` if `x` is negative.
 */
export function sqrt(x: number): number {
  return Math.sqrt(x);
}
```

This project shows off some of TypeDoc's features: built-in support for various TypeScript language constructs; Markdown in doc comments; syntax highlighting in code blocks. Index of Examples Click the "Exports" link in the sidebar to see a complete list of everything in the package. Here are some examples we wanted to highlight: Rendering External Markdown: here Markdown showcase: here Syntax highlighting showcase: here Functions Simple functions: sqrt and sqrtArrowFunction A generic function: concat Functions that take an options object: makeHttpCallA and makeHttpCallB An overloaded function: overloadedFunction An external function exported under a different name: lodashSortBy Types Type aliases: SimpleTypeAlias and ComplexGenericTypeAlias Interfaces: User and AdminUser Classes A basic class: Customer A subclass: DeliveryCustomer A complex class: CancellablePromise A class that extends a built-in generic type: StringArray Enums A basic enum: SimpleEnum Using the @enum tag: EnumLikeObject Variables PI, STRING_CONSTANT, and ObjectConstant React Components Basic React components: CardA and CardB A complex React component: EasyFormDialog and EasyFormDialogProps
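As a further illustration of the doc-comment style, a generic function like the `concat` example mentioned above could be documented as follows. This is a sketch of what such a function might look like, not the example package's actual source; `@typeParam` is the standard TSDoc/TypeDoc tag for documenting type parameters.

```typescript
/**
 * Concatenates two arrays into a new array, preserving element order.
 *
 * @typeParam T the element type shared by both arrays.
 * @param first the array whose elements come first in the result.
 * @param second the array appended after `first`.
 * @returns a new array containing the elements of `first` followed by
 *          those of `second`; neither input array is modified.
 */
export function concat<T>(first: T[], second: T[]): T[] {
  return [...first, ...second];
}
```

When TypeDoc processes an exported function like this, the `@param` and `@returns` descriptions appear alongside the inferred signature on the generated page.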
Generated using TypeDoc | 2026-01-13T09:29:15