Dataset fields (all strings, with min/max lengths):
patch: 18 to 160k characters
callgraph: 4 to 179k characters
summary: 4 to 947 characters
msg: 6 to 3.42k characters
@@ -50,6 +50,7 @@ def googlenet(weights: Optional[GoogLeNetWeights] = None, progress: bool = True, if weights is not None: model.load_state_dict(weights.state_dict(progress=progress)) + # I understand this is present in the current code, just curious why this is needed? if not original_aux_logits: model.aux_logits = False model.aux1 = None # type: ignore[assignment]
[googlenet->[verify,get,pop,len,GoogLeNet,load_state_dict,warn,state_dict],GoogLeNetWeights->[partial,WeightEntry]]
Returns a model which uses the googlenet algorithm.
I believe this is necessary because the pre-trained weights contain records about the `aux_logits`, so this part of the network needs to be defined prior to loading the weights. But then, to respect the user's choice, if `aux_logits=False` we drop that part of the network. It's quite hacky but it is what it is.
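The load-then-drop pattern described above can be sketched without torch; here a plain class and dict stand in for the model and its state dict, and the names (`Model`, `aux1`, `aux2`, `build`) are illustrative, not torchvision's actual internals:

```python
# Sketch of the pattern: the pretrained checkpoint always contains
# aux-classifier weights, so the aux heads must exist before loading;
# only afterwards do we honor aux_logits=False by discarding them.

class Model:
    def __init__(self):
        # Aux heads must exist so load_state_dict finds a home
        # for every key in the pretrained checkpoint.
        self.aux_logits = True
        self.aux1 = "aux1-head"
        self.aux2 = "aux2-head"
        self.weights = {}

    def load_state_dict(self, state_dict):
        # A real loader would fail on unexpected keys, which is
        # why the aux heads cannot be dropped before this call.
        self.weights = dict(state_dict)

def build(pretrained_state, original_aux_logits=False):
    model = Model()
    model.load_state_dict(pretrained_state)
    if not original_aux_logits:
        # Respect the caller's choice only after loading succeeded.
        model.aux_logits = False
        model.aux1 = None
        model.aux2 = None
    return model

model = build({"conv.weight": 1.0, "aux1.fc.weight": 2.0})
```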
@@ -153,14 +153,8 @@ limitations under the License. <%_ if (protractorTests) { _%> "webdriver-manager": "12.1.7", <%_ } _%> - "webpack": "VERSION_MANAGED_BY_CLIENT_ANGULAR", "webpack-bundle-analyzer": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "webpack-cli": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "webpack-dev-server": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "webpack-merge": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "webpack-notifier": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "workbox-webpack-plugin": "VERSION_MANAGED_BY_CLIENT_ANGULAR", - "write-file-webpack-plugin": "VERSION_MANAGED_BY_CLIENT_ANGULAR" + "webpack-notifier": "VERSION_MANAGED_BY_CLIENT_ANGULAR" }, "engines": { "node": ">=<%= NODE_VERSION %>"
[No CFG could be retrieved]
Version management. Node version of the application.
This plugin is not used any more and can be removed.
@@ -40,12 +40,18 @@ namespace System public static object? CreateInstance(Type type) => CreateInstance(type, nonPublic: false); + [UnconditionalSuppressMessage("ReflectionAnalysis", "IL2006:UnrecognizedReflectionPattern", + Justification = "Recognized as an intrinsic - no annotation possible")] public static ObjectHandle? CreateInstanceFrom(string assemblyFile, string typeName) => CreateInstanceFrom(assemblyFile, typeName, false, ConstructorDefault, null, null, null, null); + [UnconditionalSuppressMessage("ReflectionAnalysis", "IL2006:UnrecognizedReflectionPattern", + Justification = "Recognized as an intrinsic - no annotation possible")] public static ObjectHandle? CreateInstanceFrom(string assemblyFile, string typeName, object?[]? activationAttributes) => CreateInstanceFrom(assemblyFile, typeName, false, ConstructorDefault, null, null, null, activationAttributes); + [UnconditionalSuppressMessage("ReflectionAnalysis", "IL2006:UnrecognizedReflectionPattern", + Justification = "Recognized as an intrinsic - no annotation possible")] public static ObjectHandle? CreateInstanceFrom(string assemblyFile, string typeName, bool ignoreCase, BindingFlags bindingAttr, Binder? binder, object?[]? args, CultureInfo? culture, object?[]? activationAttributes) { Assembly assembly = Assembly.LoadFrom(assemblyFile);
[Activator->[CreateInstanceFrom->[CreateInstanceFrom,CreateInstance],CreateInstance->[CreateInstance]]]
Create an object handle from a given type.
Shouldn't these 3 be `RequiresUnreferencedCode`?
@@ -94,8 +94,9 @@ class CasualtyDetailsTest { } @Test - void damageLowestMovementAirUnitsWhenOnlyOneTypeIsAvailable() { + void damageHighestMovementAirUnitsWhenOnlyOneTypeIsAvailable() { final UnitType fighter = givenUnitType("fighter"); + UnitAttachment.get(fighter).setHitPoints(2); UnitAttachment.get(fighter).setMovement(4); UnitAttachment.get(fighter).setIsAir(true);
[CasualtyDetailsTest->[killPositiveMarineBonusLastIfAmphibious->[givenUnitType],ignoreNonAirUnitsAlreadyKilled->[givenUnitType],killLowestMovementAirUnitsWhenOnlyOneTypeIsAvailable->[givenUnitType],damageLowestMovementAirUnitsInTwoTypes->[givenUnitType],killNegativeMarineBonusFirstIfAmphibious->[givenUnitType],damageLowestMovementAirUnitsWhenOnlyOneTypeIsAvailable->[givenUnitType],ignoreNonAirUnitsAlreadyDamaged->[givenUnitType],killLowestMovementAirUnitsInTwoTypes->[givenUnitType]]]
Tests if the lowest air unit has movement left.
If all things are equal, we want an air unit with lowest movement to be taken as a casualty first. I think this means we should keep the existing test to ensure that behavior continues. A new unit test would be good here though to verify that we prefer units with multiple hit points before units with less movement.
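The preference the reviewer describes — damage units with more hit points first, and among equals take the one with lower movement — can be sketched as a sort key. The tuple fields and unit names here are hypothetical, not TripleA's actual data model:

```python
# Illustrative ordering: units are (name, hit_points, movement) tuples.
# Negating hit points sorts higher-HP units first; movement then
# breaks ties in ascending order (lowest movement damaged first).
def damage_order(units):
    return sorted(units, key=lambda u: (-u[1], u[2]))

units = [("bomber", 1, 6), ("fighter", 2, 4), ("jet", 2, 6)]
```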
@@ -1229,7 +1229,7 @@ zfs_rezget(znode_t *zp) zp->z_mode = mode; - if (gen != zp->z_gen) { + if (gen != ZTOI(zp)->i_generation) { zfs_znode_dmu_fini(zp); zfs_znode_hold_exit(zsb, zh); return (SET_ERROR(EIO));
[No CFG could be retrieved]
Znode delete handler DMI fini routine for znode_t.
I'd suggest explicitly casting `i_generation` here to a uint64_t for clarity.
@@ -38,7 +38,7 @@ <%# BEGIN Feed menu bar %> <main class="articles-list crayons-layout__content" id="articles-list" role="main"> <h1 class="visually-hidden-header">Articles</h1> - <%= render(partial: "onboardings/task_card") if user_signed_in? %> + <%= render(partial: "onboardings/task_card") if user_signed_in? && current_user.saw_onboarding %> <header class="flex items-center p-2 m:p-0 m:pb-2" id="on-page-nav-controls"> <button type="button" class="crayons-btn crayons-btn--ghost crayons-btn--icon mr-2 inline-block m:hidden" id="on-page-nav-butt-left" aria-label="nav-button-left">
[No CFG could be retrieved]
Renders a single page of a list of categories. Displays a list of post IDs.
This check will be important for the creator onboarding flow; without it, every Forem creator will see that onboarding task card, which will not make sense/will not apply to them.
@@ -681,4 +681,18 @@ class AssignmentsController < ApplicationController { resource_name: @assignment.short_identifier.blank? ? @assignment.model_name.human : @assignment.short_identifier, errors: @assignment.errors.full_messages.join('; ') } end + + def switch_to_same(options) + return false if options[:controller] == 'submissions' && options[:action] == 'file_manager' + return false if %w[submissions results].include?(options[:controller]) && !options[:id].nil? + + if options[:controller] == 'assignments' + options[:id] = params[:id] + elsif options[:assignment_id] + options[:assignment_id] = params[:id] + else + return false + end + true + end end
[AssignmentsController->[start_timed_assignment->[update],batch_runs->[new],set_boolean_graders_options->[update],create->[new],new->[new]]]
Returns a Hash with the options for the missing key.
Metrics/CyclomaticComplexity: Cyclomatic complexity for switch_to_same is too high. [7/6] Metrics/PerceivedComplexity: Perceived complexity for switch_to_same is too high. [8/7]
@@ -72,7 +72,7 @@ public final class MultinodeHiveCaching { builder.configureContainer(COORDINATOR, container -> container .withCopyFileToContainer(forHostPath(dockerFiles.getDockerFilesHostPath("conf/environment/multinode/multinode-master-jvm.config")), CONTAINER_PRESTO_JVM_CONFIG) - .withCopyFileToContainer(forHostPath(dockerFiles.getDockerFilesHostPath("conf/environment/multinode/multinode-master-config.properties")), CONTAINER_PRESTO_CONFIG_PROPERTIES) + .withCopyFileToContainer(forHostPath(dockerFiles.getDockerFilesHostPath("conf/environment/standard/multinode-master-config.properties")), CONTAINER_PRESTO_CONFIG_PROPERTIES) .withCopyFileToContainer(forHostPath(dockerFiles.getDockerFilesHostPath("common/hadoop/hive.properties")), CONTAINER_PRESTO_HIVE_NON_CACHED_PROPERTIES) .withCopyFileToContainer(forHostPath(dockerFiles.getDockerFilesHostPath("conf/environment/multinode-cached/hive-coordinator.properties")), CONTAINER_PRESTO_HIVE_PROPERTIES) .withTmpFs(ImmutableMap.of("/tmp/cache", "rw")));
[MultinodeHiveCaching->[createPrestoWorker->[withTmpFs,addContainer,of],extendEnvironment->[withTmpFs,configureContainer,createPrestoWorker,of],getImagesVersion,of,requireNonNull]]
Extend the environment with the necessary files.
files should be copied to `common/trino-multinode` (directory name matches class name)
@@ -46,7 +46,11 @@ function useRelayer({ mutation, value }) { if (!contracts.config.relayer) reason = 'relayer not configured' if (!mutation) reason = 'no mutation specified' - if (mutation === 'makeOffer' && value && value !== '0') { + if ( + ['makeOffer', 'swapAndMakeOffer'].includes(mutation) && + value && + value !== '0' + ) { reason = 'makeOffer has a value' } if (mutation === 'transferToken') reason = 'transferToken is disabled'
[No CFG could be retrieved]
Check if a given mutation is allowed to use a proxy or a new proxy - wrap Send a transaction direct to a proxy after creation.
So prior to this we were using the relayer for swapAndMakeOffer ? That's odd, I don't remember my offers with DAI payment going thru the relayer...
@@ -0,0 +1,17 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. +// See the LICENSE file in the project root for more information. + +using System.Runtime.CompilerServices; + +namespace System.Windows.Forms.Design +{ + internal static class ArgumentValidation + { + internal static T OrThrowIfNull<T>(this T argument, [CallerArgumentExpression("argument")] string paramName = null) + { + ArgumentNullException.ThrowIfNull(argument, paramName); + return argument; + } + } +}
[No CFG could be retrieved]
No Summary Found.
This isn't necessary; internals from System.Windows.Forms.Primitives are accessible in this project.
@@ -713,8 +713,9 @@ public class User extends AbstractModelObject implements AccessControlled, Descr if (id == null || StringUtils.isBlank(id)) { return false; } + final String trimmedId = id.trim(); for (String invalidId : ILLEGAL_PERSISTED_USERNAMES) { - if (id.equalsIgnoreCase(invalidId)) + if (trimmedId.equalsIgnoreCase(invalidId)) return false; } return true;
[User->[getRootDir->[getRootDir],getACL->[hasPermission->[hasPermission],getACL],get->[get],rss->[getUrl,getDisplayName],getAbsoluteUrl->[getUrl],doRssAll->[getBuilds],relatedTo->[getId],current->[get],getDisplayName->[getFullName],clear->[clear],rekey->[idStrategy],canDelete->[hasPermission,idStrategy],getOrCreate->[get,User,getFullName],getDescriptorByName->[getDescriptorByName],getById->[getOrCreate],doDoDelete->[delete],hasPermission->[hasPermission],save->[getConfigFile,isIdOrFullnameAllowed],getAll->[compare->[getId,compare],getOrCreate,idStrategy],checkPermission->[checkPermission],delete->[getRootDir,idStrategy],doConfigSubmit->[save,getProperty],reload->[load],getAuthorities->[hasPermission,impersonate,getAuthorities],getBuilds->[apply->[relatedTo]],doRssLatest->[relatedTo],FullNameIdResolver->[resolveCanonicalId->[getId,getAll,getFullName]],UserIDCanonicalIdResolver->[resolveCanonicalId->[getById,getId,get]]]]
Checks if the given id is allowed.
You should specify a LOCALE here...
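The reviewer's locale point is Java-specific (`equalsIgnoreCase` and `toLowerCase` can behave surprisingly under, e.g., the Turkish locale). A minimal, locale-independent sketch of the trimmed check — with `ILLEGAL` and `is_id_allowed` as hypothetical names, and `str.casefold()` playing the role of an explicit-locale comparison:

```python
# Illegal-username check: trim, then compare case-insensitively using
# casefold(), which is locale-independent by construction.
ILLEGAL = {"anonymous", "system", "unknown"}

def is_id_allowed(user_id):
    # Reject null/blank ids up front.
    if user_id is None or not user_id.strip():
        return False
    trimmed = user_id.strip()
    # Trimmed, case-insensitive membership test.
    return trimmed.casefold() not in ILLEGAL
```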
@@ -8,15 +8,15 @@ namespace Microsoft.Xna.Framework.Content.Pipeline.Graphics [TypeConverter(typeof(CharacterRegionTypeConverter))] public struct CharacterRegion { - public char Start; - public char End; + public int Start; + public int End; // Enumerates all characters within the region. public IEnumerable<Char> Characters() { for (var c = Start; c <= End; c++) { - yield return c; + yield return (char)c; } }
[CharacterRegion->[Distinct->[ContainsKey,Default,Add],SelectMany->[selector],Any->[MoveNext]]]
Get all characters in this range.
I'm not sure about this change. I suspect the actual issue is somewhere else. Reading chars correctly is critical to the intermediate deserializer working. Let me investigate a bit with the new unit tests and see.
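The C# change stores the region bounds as `int` and casts back to `char` while enumerating; a minimal Python sketch of the same idea, with `ord()`/`chr()` standing in for the `char`/`int` casts (the function name `characters` mirrors the C# method but is otherwise an assumption):

```python
# Enumerate all characters in an inclusive code-point range.
# Keeping the loop variable as an int avoids the C# pitfall where a
# char loop counter can overflow when End is the maximum char value.
def characters(start, end):
    for c in range(start, end + 1):
        yield chr(c)
```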
@@ -75,6 +75,7 @@ namespace CoreNodeModels } } + [JsonProperty("ToConversion")] public ConversionUnit SelectedToConversion { get { return selectedToConversion; }
[DynamoConvert->[SerializeCore->[SerializeCore],DeserializeCore->[DeserializeCore],UpdateValueCore->[UpdateValueCore]]]
Enumerate all the possible values of the object property. The base class for the conversion metric.
Are these two properties used separately from MetricConversion? That sounds like it could be very confusing, and problematic.
@@ -11,6 +11,17 @@ class Organization < ApplicationRecord acts_as_followable + before_validation :downcase_slug + before_validation :check_for_slug_change + before_validation :evaluate_markdown + before_save :update_articles + before_save :remove_at_from_usernames + before_save :generate_secret + # You have to put before_destroy callback BEFORE the dependent: :nullify + # to ensure they execute before the records are updated + # https://guides.rubyonrails.org/active_record_callbacks.html#destroying-an-object + before_destroy :cache_article_ids + has_many :articles, dependent: :nullify has_many :collections, dependent: :nullify has_many :credits, dependent: :restrict_with_error
[Organization->[sync_related_elasticsearch_docs->[call],unique_slug_including_users_and_podcasts->[add,include?,exists?],update_articles->[from_object,update],enough_credits?->[size],generate_secret->[secret,blank?],approved_and_filled_out_cta?->[cta_processed_html?],destroyable?->[zero?,count],remove_at_from_usernames->[github_username,twitter_username,delete],downcase_slug->[slug,downcase],profile_image_90->[call],check_for_slug_change->[perform_async,slug_changed?,old_old_slug,old_slug],generated_random_secret->[hex],bust_cache->[perform_async],evaluate_markdown->[evaluate_limited_markdown,cta_processed_html],freeze,all,include,alias_attribute,after_save,validate,before_save,validates,where,has_many,before_validation,after_commit,mount_uploader]]
The application record that is used to create an organization. Local version of the regex that is used by the regex_matcher.
We have a rubocop that forces us to put callbacks in a certain order. Since I had to move the before_destroy callback before the `dependent: :nullify` statements, the rest of these had to come with it.
@@ -1010,13 +1010,11 @@ class Worker */ public static function spawnWorker($do_cron = false) { - $args = ["bin/worker.php"]; + $command = "bin/worker.php"; - if (!$do_cron) { - $args[] = "no_cron"; - } + $args[] = [ "cron" => $do_cron ]; - get_app()->proc_run($args); + get_app()->proc_run($command, $args); // after spawning we have to remove the flag. if (Config::get('system', 'worker_daemon_mode', false)) {
[Worker->[processQueue->[isMaxProcessesReached,isMaxLoadReached,min_memory_reached],spawnWorker->[proc_run],execute->[isMaxProcessesReached]]]
Spawns a worker process.
Code standards: Please use single quotes by default.
@@ -39,8 +39,12 @@ class PretrainedTransformerEmbedder(TokenEmbedder): want to use the encoder. train_parameters: `bool`, optional (default = `True`) If this is `True`, the transformer weights get updated during training. If this is `False`, the - transformer weights are not updated during training and the dropout and batch normalization layers - are set to evaluation mode. + transformer weights are not updated during training. + eval_mode: `bool`, optional (default = `False`) + If this is `True`, the model is always set to evaluation mode (e.g., the dropout is disabled and the + batch normalization layer statistics are not updated). If this is `False`, the dropout and batch + normalization layers are only set to evaluation model when when the model is evaluating on development + or train data. last_layer_only: `bool`, optional (default = `True`) When `True` (the default), only the final layer of the pretrained transformer is taken for the embeddings. But if set to `False`, a scalar mix of all of the layers
[PretrainedTransformerEmbedder->[forward->[_number_of_token_type_embeddings],_fold_long_sequences->[fold],train->[train]]]
Parameters model_name - Name of the model to use. Should be the same Constructor for a object.
minor changes :) If this is `True`, the model is set to evaluation mode (i.e. dropout is disabled and batch normalization layer statistics are not updated). If this is `False`, dropout and batch normalization layers are only set to evaluation mode when the model is being evaluated on development or train data.
@@ -771,9 +771,12 @@ static int pipeline_copy_from_upstream(struct comp_dev *start, buffer = container_of(clist, struct comp_buffer, sink_list); /* don't go upstream if this component is not connected */ - if (!buffer->connected || buffer->source->state != COMP_STATE_ACTIVE) + if (!buffer->connected) continue; + if (buffer->source->state != COMP_STATE_ACTIVE) + trace_pipe_error("pus"); + /* don't go upstream if this source is from another pipeline */ if (buffer->source->pipeline != current->pipeline) continue;
[No CFG could be retrieved]
This function copies the specified component from the upstream to the downstream components in a single operation. This function is used to copy the data from this component to all downstream sinks.
This will break pipelines with a mixer, where closing one source stream will probably stop the sink stream (even if other source streams are active)?
@@ -151,6 +151,7 @@ namespace System.Drawing 0xFFF5F5F5, // WhiteSmoke 0xFFFFFF00, // Yellow 0xFF9ACD32, // YellowGreen + 0xFF663399, // RebeccaPurple }; internal static Color ArgbToKnownColor(uint argb)
[KnownColorTable->[Color->[FromArgb,FromKnownColor,ARGBAlphaMask,Transparent,Assert,Length],KnownColorToArgb->[Transparent,MenuHighlight,GetSystemColorArgb,Assert,IsKnownColorSystem],GetSystemColorId->[ActiveBorder,ButtonFace,Transparent,WindowText,Assert,IsKnownColorSystem],GetSystemColorArgb->[GetSystemColorId,ActiveBorder,ButtonFace,Transparent,WindowText,Assert,IsKnownColorSystem,COLORREFToARGB,GetSysColor],ControlDarkDark,ControlLightLight,MenuHighlight,WindowText,ButtonShadow,GradientActiveCaption,Info,InactiveCaption,ActiveCaption,ActiveCaptionText,Menu,Control,HighlightText,ControlDark,ScrollBar,Window,WindowFrame,AppWorkspace,ActiveBorder,ControlLight,ControlText,InactiveCaptionText,MenuBar,ButtonHighlight,Desktop,Highlight,InfoText,GradientInactiveCaption,ButtonFace,InactiveBorder,HotTrack,MenuText,GrayText]]
Given an ARGB color return the corresponding color in the color table if it is known.
You forgot to remove the entry above that you added before, so this is now duplicate.
@@ -123,10 +123,10 @@ class YumDistributor(Distributor): msg = _("protected should be a boolean; got %s instead" % protected) _LOG.error(msg) return False, msg - if key == 'generate_metadata': - generate_metadata = config.get('generate_metadata') - if generate_metadata is not None and not isinstance(generate_metadata, bool): - msg = _("generate_metadata should be a boolean; got %s instead" % generate_metadata) + if key == 'use_createrepo': + use_createrepo = config.get('use_createrepo') + if use_createrepo is not None and not isinstance(use_createrepo, bool): + msg = _("use_createrepo should be a boolean; got %s instead" % use_createrepo) _LOG.error(msg) return False, msg if key == 'checksum_type':
[YumDistributor->[set_progress->[progress_callback],symlink_distribution_unit_files->[init_progress,set_progress],create_consumer_payload->[get_repo_relative_path],form_rel_url_lookup_table->[split_path],publish_repo->[get_repo_relative_path,get_https_publish_dir,get_http_publish_dir],distributor_removed->[get_repo_relative_path],handle_symlinks->[init_progress,set_progress]]]
Validate the configuration. Generate a new node in the tree. Check if there is a duplicate in the publish directory.
I think you can skip the `is not None` part here.
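One thing worth checking before dropping the guard: `isinstance(None, bool)` is `False`, so removing `is not None` changes behavior when the key is absent (`config.get` returns `None`) — `None` would then be reported as "not a boolean". Whether that is acceptable depends on whether an absent key should pass validation. A minimal illustration (function names are hypothetical):

```python
# With the guard: None (key absent) is treated as valid.
def invalid_with_guard(value):
    return value is not None and not isinstance(value, bool)

# Without the guard: None is flagged as invalid.
def invalid_without_guard(value):
    return not isinstance(value, bool)
```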
@@ -283,7 +283,7 @@ func (tr *TaskRun) ApplyOutput(result RunOutput) { // RunResult keeps track of the outcome of a TaskRun or JobRun. It stores the // Data and ErrorMessage. type RunResult struct { - ID uint `json:"-" gorm:"primary_key;auto_increment"` + ID uint32 `json:"-" gorm:"primary_key;auto_increment"` Data JSON `json:"data" gorm:"type:text"` ErrorMessage null.String `json:"error"` }
[ApplyOutput->[SetError,SetStatus,HasError],PreviousTaskRun->[NextTaskRunIndex],Cancel->[SetStatus,NextTaskRun],ApplyBridgeRunResult->[SetError,SetStatus,HasError],NextTaskRun->[NextTaskRunIndex],TasksRemain->[NextTaskRunIndex],SetError->[SetStatus],String->[String]]
RunResult keeps track of the outcome of a TaskRun or JobRun.
Bigserial is 64 bits, so if you want to specify width, I'd specify at least 64.
@@ -203,6 +203,8 @@ namespace Dynamo.Graph.Workspaces private DateTime lastSaved; private string author = "None provided"; private string description; + private string thumbnail; + private Uri graphDocumentationURL; private bool hasUnsavedChanges; private bool isReadOnly; private readonly List<NodeModel> nodes;
[WorkspaceModel->[LoadNodes->[Log],Clear->[ClearNodes,Clear,Dispose,OnCurrentOffsetChanged],AddAnnotation->[AddNewAnnotation],AddNode->[OnNodeAdded],AddAndRegisterNode->[AddNode,OnRequestNodeCentered],LoadAnnotations->[AddNewAnnotation],UpdateWithExtraWorkspaceViewInfo->[OnCurrentOffsetChanged],ClearAnnotations->[Clear,OnAnnotationsCleared],Save->[OnSaved,setNameBasedOnFileName,Save,OnSaving],AnnotationModel->[AddNewAnnotation],Log->[Log],NoteModel->[AddNote],AddNote->[AddNote,OnRequestNodeCentered,OnNoteAdded],DisposeNode->[Dispose],LoadNotesFromAnnotations->[AddNote],LoadLegacyNotes->[AddNote],RemoveAnnotation->[OnAnnotationRemoved],ClearNodes->[OnNodesCleared],PopulateXmlDocument->[SerializeElementResolver,Save,OnSaving],ClearNotes->[Clear,OnNotesCleared],RecordModelsForModification->[RecordModelsForModification],RemoveNote->[OnNoteRemoved],RemoveAndDisposeNode->[OnNodeRemoved],AddNewAnnotation->[OnAnnotationAdded],Dispose->[OnConnectorDeleted],RegisterConnector]]
Extended version of the base class for all of the node - related methods. is a dummy node that has not been reloaded.
before this gets merged- a quick thought, do all of these new properties make sense on both home workspaces and custom node workspaces?
@@ -1,6 +1,6 @@ <% title "DEV Connect 👩‍💻💬👨‍💻" %> <%= content_for :page_meta do %> - <link rel="canonical" href="https://dev.to/connect" /> + <link rel="canonical" href="<%= app_url %>/connect" /> <meta name="description" content="DEV Connect"> <% end %> <%= csrf_meta_tags %>
[No CFG could be retrieved]
Displays a hidden hidden input that is used to show a unique identifier. footer style and script.
You can pass the path as an argument to `app_url`, so in this case it would be: `app_url(connect_path)`
@@ -0,0 +1,15 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. +// See the LICENSE file in the project root for more information. + +using System; +using System.Runtime.InteropServices; + +internal static partial class Interop +{ + internal static partial class User32 + { + [DllImport(Libraries.User32, ExactSpelling = true, SetLastError = true)] + public static extern BOOL GetUserObjectInformationW(IntPtr hObj, UOI nIndex, ref USEROBJECTFLAGS pvBuffer, int nLength, ref int lpnLengthNeeded); + } +}
[No CFG could be retrieved]
No Summary Found.
Do we need this `SetLastError`?
@@ -132,7 +132,8 @@ export class DoubleclickA4aEligibility { 'https://github.com/ampproject/amphtml/issues/11834 ' + 'for more information'); const usdrd = 'useSameDomainRenderingUntilDeprecated'; - const hasUSDRD = usdrd in element.dataset || element.hasAttribute(usdrd); + const hasUSDRD = (tryParseJson(element.getAttribute('json')) || {})[usdrd] + || usdrd in element.dataset; if (hasUSDRD) { warnDeprecation(usdrd); }
[No CFG could be retrieved]
Determines if a specific branch is selected into an experiment and forces it if it is. Checks if the element in the array is a canonical element.
Actually, let's reverse the items in this `or`, as `usdrd in element.dataset` is faster to execute.
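The suggested reordering relies on `or` short-circuiting: if the cheap membership test succeeds, the expensive lookup never runs. An illustrative sketch, with a call counter standing in for the costly JSON-attribute parse (all names here are hypothetical):

```python
# Counter lets us observe whether the expensive branch executed.
calls = {"expensive": 0}

def expensive_lookup():
    calls["expensive"] += 1
    return True

def has_flag(dataset, flag):
    # Cheap dict membership first; expensive parse only as a fallback.
    return flag in dataset or expensive_lookup()
```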
@@ -137,7 +137,9 @@ func (s *store) Release() { } func (s *store) close() { - panic("TODO: implement me") + if err := s.persistentStore.Close(); err != nil { + s.log.Errorf("Closing registry store did report an error: %+v", err) + } } // Get returns the resource for the key.
[Release->[Release,Dec],UpdatesReleaseN->[Sub],Finished->[Load],Get->[Find],Retain->[Inc,Retain],UnpackCursor->[Lock,Unlock,Convert]]
close closes the store.
Is error displayed with `%+v` on purpose?
@@ -588,6 +588,11 @@ DataReaderImpl::remove_associations_i(const WriterIdSeq& writers, } } + while (!removed_writers.empty()) { + removed_writers.begin()->second->removed(); + removed_writers.erase(removed_writers.begin()); + } + wr_len = updated_writers.length(); // Return now if the supplied writers have been removed already.
[No CFG could be retrieved]
Mirror the SUBSCRIPTION_MATCHED_STATUS processing.
Since you are erasing in the loop, the complexity of this code is O(n * log(n)). I believe iterating over the map is O(n) and clearing it (if necessary) is O(n). Thus, I think it would be more efficient to just iterate over it as opposed to erasing from it until it is empty.
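The two traversal styles can be sketched with a dict standing in for the C++ map of removed writers (function names are illustrative). Iterating once and then clearing touches each entry O(n) total; repeatedly erasing the first element pays the per-erase lookup cost each time:

```python
# Style 1: erase-from-the-front until empty (what the patch does).
def notify_by_erasing(writers, log):
    while writers:
        key = next(iter(writers))
        log.append(writers.pop(key))

# Style 2: iterate once, then clear (what the reviewer suggests).
def notify_by_iterating(writers, log):
    for value in writers.values():
        log.append(value)
    writers.clear()
```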
@@ -21,6 +21,8 @@ const ConferenceErrors = JitsiMeetJS.errors.conference; const TrackEvents = JitsiMeetJS.events.track; const TrackErrors = JitsiMeetJS.errors.track; +const RecorderErrors = JitsiMeetJS.errors.recorder; + let room, connection, localAudio, localVideo, roomLocker; let currentAudioInputDevices, currentVideoInputDevices;
[No CFG could be retrieved]
Creates a connection to the Jitsi - MeetJS API and sends a message to Get user nickname by user id.
Are we using this variable somewhere? If not maybe we should not expose this errors from the library.
@@ -331,9 +331,16 @@ namespace Dynamo.Graph.Workspaces { base.RequestRun(); - if (RunSettings.RunEnabled && RunSettings.RunType != RunType.Manual) + if (RunSettings.RunType != RunType.Manual) { - Run(); + // TODO for Dynamo 3.0: The boolean "executingTask" that is used here is a make-do fix. + // We will be needing a separate variable(boolean) to check for RunEnabled flag from external applications and + // not confuse it with the internal flag RunEnabled which is associated with the Run button in Dynamo. + // Make this RunSettings.RunEnabled private, introduce the new flag and remove the "executingTask" variable. + if (RunSettings.RunEnabled || executingTask) + { + Run(); + } } }
[HomeWorkspaceModel->[Clear->[Clear],RequestRun->[RequestRun],ResetEngine->[LibraryLoaded],MarkNodesAsModifiedAndRequestRun->[RequestRun],StopPeriodicEvaluation->[OnRefreshCompleted],Run->[OnEvaluationCompleted,OnEvaluationStarted],GetExecutingNodes->[OnSetNodeDeltaState],OnPreviewGraphCompleted->[OnSetNodeDeltaState],RegisterNode->[RegisterNode],StartPeriodicEvaluation->[OnRefreshCompleted],OnUpdateGraphCompleted->[OnSetNodeDeltaState,OnEvaluationCompleted,OnRefreshCompleted],NodeModified->[NodeModified,RequestRun],PopulateXmlDocument->[PopulateXmlDocument],Dispose->[Dispose],OnNodeRemoved->[OnNodeRemoved],DisposeNode->[DisposeNode]]]
Override RequestRun method to handle a missing run.
should it be `!executingTask`?
@@ -495,9 +495,8 @@ func (c *clusterInfo) handleStoreHeartbeat(stats *pdpb.StoreStats) error { } store.stats.StoreStats = proto.Clone(stats).(*pdpb.StoreStats) + store.stats.LeaderCount = c.regions.getStoreLeaderCount(storeID) store.stats.LastHeartbeatTS = time.Now() - store.stats.TotalRegionCount = c.regions.getRegionCount() - store.stats.LeaderRegionCount = c.regions.getStoreLeaderCount(storeID) c.stores.setStore(store) return nil
[putRegionLocked->[setRegion],getMetaRegions->[getMetaRegions],unblockStore->[unblockStore],handleRegionHeartbeat->[putRegionLocked,setRegion,getRegion],randFollowerRegion->[randFollowerRegion],getStores->[getStores],getRegions->[getRegions],getStoreLeaderCount->[getStoreLeaderCount],searchRegion->[getRegion,searchRegion],randLeaderRegion->[randLeaderRegion],handleStoreHeartbeat->[getStore,getRegionCount,getStoreLeaderCount,setStore],getStore->[getStore],getFollowerStores->[getStore],getStoreCount->[getStoreCount],getRegionStores->[getStore],getRegion->[getRegion],getMetaStores->[getMetaStores],getStoreRegionCount->[getStoreRegionCount],putStoreLocked->[setStore],allocPeer->[allocID],getRegionCount->[getRegionCount],blockStore->[blockStore],getRegionCount,getStoreCount]
handleStoreHeartbeat handles heartbeat for a given store.
Why remove it?
@@ -117,14 +117,6 @@ public final class CreateSourceFactory { final LogicalSchema schema = buildSchema(statement.getElements(), ksqlConfig); final Optional<TimestampColumn> timestampColumn = buildTimestampColumn(ksqlConfig, props, schema); - final DataSource dataSource = metaStore.getSource(sourceName); - - if (dataSource != null && !statement.isOrReplace() && !statement.isNotExists()) { - final String sourceType = dataSource.getDataSourceType().getKsqlType(); - throw new KsqlException( - String.format("Cannot add stream '%s': A %s with the same name already exists", - sourceName.text(), sourceType.toLowerCase())); - } return new CreateStreamCommand( sourceName,
[CreateSourceFactory->[buildSchema->[isSystemColumn,text,getName,toLogicalSchema,isEmpty,forEach,KsqlException],ensureTopicExists->[getKafkaTopic,KsqlException,isTopicExists],validateSerdesCanHandleSchemas->[getKeyFeatures,close,getValueFeatures,from],getWindowInfo->[getWindowSize,of,map],createTableCommand->[getProperties,text,getOrReplace,getSchema,getKafkaTopicName,buildTimestampColumn,getSource,getKsqlType,CreateTableCommand,isSource,ensureTopicExists,of,isOrReplace,getKsqlTopic,isEmpty,getWindowInfo,isNotExists,isPresent,format,KsqlException,buildSchema,getElements,getName,from,get,lineSeparator,buildFormats,getTimestampColumn,toLowerCase],buildTimestampColumn->[getTimestampColumnName,validateTimestampColumn,TimestampColumn,getTimestampFormat,map],buildFormats->[of,validateSerdesCanHandleSchemas,build,getKeyFormat,getValueSerdeFeatures,getValueFormat],createStreamCommand->[getProperties,text,getOrReplace,getSchema,getKafkaTopicName,buildTimestampColumn,getSource,getKsqlType,isSource,ensureTopicExists,of,isOrReplace,getKsqlTopic,getWindowInfo,isNotExists,format,KsqlException,buildSchema,getElements,getName,from,get,buildFormats,getTimestampColumn,toLowerCase,CreateStreamCommand],GenericRowSerDe,GenericKeySerDe,requireNonNull,buildKeyFeatures]]
Creates a stream command.
why did we need to get rid of this check?
@@ -432,7 +432,7 @@ int MainWrappers<double,LocalOrdinal,GlobalOrdinal,Node>::main_(Teuchos::Command tm = Teuchos::null; if (solverName == "Belos") { - tm = rcp(new TimeMonitor(*TimeMonitor::getNewTimer("Maxwell: 2 - Build Belos solver etc"))); + auto tm2 = TimeMonitor::getNewTimer("Maxwell: 2 - Build Belos solver etc"); // construct preconditioner RCP<MueLu::RefMaxwell<SC,LO,GO,NO> > preconditioner
[No CFG could be retrieved]
This function initializes the Maxwell coefficient matrices. Turns a Xpetra matrix into a Belos operator and a Belos solver.
What does this actually do? Does this nest timers or not?
@@ -32,6 +32,7 @@ util.inherits(UpgradeGenerator, BaseGenerator); const GENERATOR_JHIPSTER = 'generator-jhipster'; const UPGRADE_BRANCH = 'jhipster_upgrade'; const GIT_VERSION_NOT_ALLOW_MERGE_UNRELATED_HISTORIES = '2.9.0'; +const GENERATOR_JHIPSTER_CLI_VERSION = '4.5.1'; module.exports = UpgradeGenerator.extend({ constructor: function (...args) { // eslint-disable-line object-shorthand
[No CFG could be retrieved]
External code that exports the base class of the given object. Private functions - Check out a specific branch.
this is the current JHipster version, it's already in `package.json` and should be available (I'll have a look)
@@ -51,6 +51,9 @@ class SelectInterpreter(Task): python_tgts = self.context.targets(lambda tgt: isinstance(tgt, PythonTarget)) fs = PythonInterpreterFingerprintStrategy() with self.invalidated(python_tgts, fingerprint_strategy=fs) as invalidation_check: + if PythonSetup.global_instance().interpreter_search_paths and PythonInterpreterCache.pex_python_paths: + self.context.log.warn("Detected both PEX_PYTHON_PATH and --python-setup-interpreter-search-paths. " + "Ignoring --python-setup-interpreter-search-paths.") # If there are no relevant targets, we still go through the motions of selecting # an interpreter, to prevent downstream tasks from having to check for this special case. if invalidation_check.all_vts:
[SelectInterpreter->[execute->[PythonInterpreterFingerprintStrategy]]]
Executes the interpreter.
`PythonInterpreterCache.pex_python_paths` is defined above as an instance method, here you access it as a class attribute. Need some fixup in one place or the other, but I expect you want to fix `PythonInterpreterCache.pex_python_paths` to be a memoized classmethod and call `PythonInterpreterCache.pex_python_paths()` here.
@@ -436,6 +436,11 @@ type DcosConfig struct { BootstrapProfile *BootstrapProfile `json:"bootstrapProfile,omitempty"` } +// HasPrivateRegistry returns if a private registry is specified +func (d *DcosConfig) HasPrivateRegistry() bool { + return len(d.Registry) > 0 +} + // MasterProfile represents the definition of the master cluster type MasterProfile struct { Count int `json:"count"`
[GetCustomEnvironmentJSON->[IsAzureStackCloud],HasAvailabilityZones->[HasAvailabilityZones],GetUserAssignedID->[UserAssignedIDEnabled],IsMetricsServerEnabled->[isAddonEnabled],GetFirstConsecutiveStaticIPAddress->[IsVirtualMachineScaleSets],IsUbuntu->[IsUbuntu1804,IsUbuntu1604],IsPrivateCluster->[IsKubernetes],GetUserAssignedClientID->[UserAssignedClientIDEnabled],GetCustomCloudAuthenticationMethod->[IsAzureStackCloud],GetAzureProdFQDN->[GetCustomCloudName],GetVirtualNetworkName->[K8sOrchestratorName,IsHostedMasterProfile],GetRouteTableName->[GetResourcePrefix],GetClusterMetadata->[GetVirtualNetworkName,GetRouteTableName,GetPrimaryScaleSetName,GetResourcePrefix,GetPrimaryAvailabilitySetName,GetSubnetName,GetVNetResourceGroupName,GetNSGName],GetAddonScript->[GetAddonByName],GetCustomCloudName->[IsAzureStackCloud],GetNonMasqueradeCIDR->[IsHostedMasterProfile],NeedsExecHealthz->[IsKubernetes],RequireRouteTable->[IsAzureCNI],IsNVIDIADevicePluginEnabled->[isAddonEnabled],IsIPMasqAgentEnabled->[isAddonEnabled,IsIPMasqAgentEnabled],isAddonEnabled->[IsEnabled,GetAddonByName],IsContainerMonitoringEnabled->[isAddonEnabled],GetCustomCloudIdentitySystem->[IsAzureStackCloud],IsAKSBillingEnabled->[GetCloudSpecConfig],GetVNetResourceGroupName->[IsHostedMasterProfile],GetPrimaryScaleSetName->[GetAgentVMPrefix],GetMasterFQDN->[IsHostedMasterProfile],GetCloudSpecConfig->[GetCustomCloudName],IsClusterAutoscalerEnabled->[isAddonEnabled],GetResourcePrefix->[K8sOrchestratorName],IsDashboardEnabled->[isAddonEnabled],GetSubnetName->[K8sOrchestratorName,IsHostedMasterProfile],GetLocations->[IsAzureStackCloud],IsACIConnectorEnabled->[isAddonEnabled],GetAgentVMPrefix->[K8sOrchestratorName],IsKeyVaultFlexVolumeEnabled->[isAddonEnabled],IsReschedulerEnabled->[isAddonEnabled],IsNvidiaDevicePluginCapable->[HasNSeriesSKU],IsUbuntuNonVHD->[IsUbuntu,IsVHDDistro],GetNSGName->[GetResourcePrefix],GetMasterVMPrefix->[K8sOrchestratorName],IsAzureCNIMonitoringEnabled->[isAddonEnabled],IsAADPodIdentityEnabled->[isAddonEnabled],IsBlobfuseFlexVolumeEnabled->[isAddonEnabled],IsSMBFlexVolumeEnabled->[isAddonEnabled],IsTillerEnabled->[isAddonEnabled]]
Configuration of the DC/OS bootstrap node used to deploy the cluster, and the extension array for pre-provisioning.
Do DC/OS functions need to come along for the ride? Ideally we wouldn't touch any legacy code.
@@ -39,7 +39,8 @@ install_requires = [ # we are rather picky about msgpack versions, because a good working msgpack is # very important for borg, see https://github.com/borgbackup/borg/issues/3753 # best versions seem to be 0.4.6, 0.4.7, 0.4.8 and 0.5.6: - 'msgpack-python >=0.4.6, <=0.5.6, !=0.5.0, !=0.5.1, !=0.5.2, !=0.5.3, !=0.5.4, !=0.5.5', + 'msgpack-python <0.5;python_version=="3.4"', + 'msgpack >=0.5.6;python_version >="3.5"', # if you can't satisfy the above requirement, these are versions that might # also work ok, IF you make sure to use the COMPILED version of msgpack-python, # NOT the PURE PYTHON fallback implementation: ==0.5.1, ==0.5.4
[build_usage->[write_options->[is_positional_group],write_options_group->[is_positional_group],rows_to_table->[write_row_separator],generate_level->[generate_level],write_usage->[format_metavar]],Clean->[run->[rm]],build_man->[write_see_also->[write,write_heading],run->[generate_level],write_man_header->[write,write_heading],write_options->[write_options_group,write_heading],write_examples->[write,write_heading],gen_man_page->[write],write_options_group->[is_positional_group,write],generate_level->[write_options_group,write_options,generate_level,write_usage],write_heading->[write],write_usage->[format_metavar,write]],detect_openssl]
Setup helpers that build usage docs and man pages; requires FUSE version >= 2.8.0.
the problem is just that this is not what we want: - we do not want to accept almost any early msgpack version on 3.4 - we do not want to require a very recent on >=3.5
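What the reviewer describes could be expressed with PEP 508 environment markers keeping the tested window on both branches. This is only an illustrative sketch, the exact bounds below are placeholders, not the project's final choice:

```python
# Hedged sketch: keep the known-good version window on BOTH branches, instead
# of accepting almost any early version on 3.4 or requiring only very recent
# releases on 3.5+.
install_requires = [
    # Python 3.4: accept only the tested 0.4.x line
    'msgpack-python >=0.4.6, <0.5; python_version == "3.4"',
    # Python 3.5+: same tested range up through 0.5.6, skipping the
    # known-bad 0.5.0-0.5.5 releases
    'msgpack-python >=0.4.6, <=0.5.6, !=0.5.0, !=0.5.1, !=0.5.2, '
    '!=0.5.3, !=0.5.4, !=0.5.5; python_version >= "3.5"',
]
```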
@@ -230,6 +230,9 @@ def main(args): stop = time.perf_counter() print(translation) logger.info(f"time: {stop - start} s.") + if args.output: + with open(args.output, 'a') as f: + f.write(translation + '\n') except Exception: log.error("an error occurred", exc_info=True)
[Tokenizer->[decode->[decode],encode->[encode]],main->[sentences,Translator],build_argparser,main]
This is the main function of the translator. It creates a translator and translates a sequence.
It looks like the default Windows encoding can't process Russian symbols. You should force encoding='utf8'. Also, an old version of the `args.output` file should be overwritten if one existed before the demo starts.
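The reviewer's two points could be sketched like this (function names are hypothetical, not the demo's actual helpers):

```python
import os

def reset_output(path):
    # Truncate any file left over from a previous demo run; call this once
    # before the translation loop starts.
    open(path, 'w', encoding='utf-8').close()

def write_translation(path, translation):
    # Append one translation per line. encoding='utf-8' keeps Russian text
    # from breaking on the default Windows codepage.
    with open(path, 'a', encoding='utf-8') as f:
        f.write(translation + '\n')
```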
@@ -342,6 +342,10 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser timeseriesQuery = timeseriesQuery.withDimFilter(timeseriesQuery.getDimensionsFilter().optimize()); queryPlus = queryPlus.withQuery(timeseriesQuery); } + if (timeseriesQuery.getLimit() < Integer.MAX_VALUE) { + timeseriesQuery = timeseriesQuery.withLimit(timeseriesQuery.getLimit()); + queryPlus = queryPlus.withQuery(timeseriesQuery); + } return runner.run(queryPlus, responseContext); }, this); }
[TimeseriesQueryQueryToolChest->[makeMetrics->[makeMetrics],mergeResults->[doRun->[doRun]]]]
Pre-merge query decoration.
I don't think this block of code is really doing anything? It looks like it is re-creating the timeseriesQuery with the same limit that it already has.
@@ -67,7 +67,7 @@ public class NotebookSocket extends WebSocketAdapter { } public synchronized void send(String serializeMessage) throws IOException { - connection.getRemote().sendStringByFuture(serializeMessage); + connection.getRemote().sendString(serializeMessage); } public String getUser() {
[NotebookSocket->[onWebSocketClose->[onClose],send->[sendStringByFuture],onWebSocketConnect->[onOpen],toString->[getRemotePort,getRemoteHost],onWebSocketText->[onMessage]]]
Sends a message to the remote node.
This line should not be part of this PR.
@@ -81,7 +81,9 @@ func TrimColorizedString(v string, maxLength int) string { contract.Assertf(!tagRegexp.MatchString(textOrTag), "Got a tag when we did not expect it") text := textOrTag - if currentLength+len(text) > maxLength { + textLen := utf8.RuneCountInString(text) + + if currentLength+textLen > maxLength { // adding this text chunk will cause us to go past the max length we allow. // just take whatever subportion we can and stop what we're doing. trimmed += text[0 : maxLength-currentLength]
[Colorize->[Failf,ReplaceAllString],MustCompile,FindAllStringIndex,Assertf,MatchString]
TrimColorizedString takes a string with embedded color tags and returns a new string trimmed to at most maxLength visible characters.
I suspect the interaction between counting runes but trimming on byte boundaries is going to lead to wackiness in some cases. Is there a reason we moved to `utf8.RuneCountInString`?
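The wackiness the reviewer suspects can be reproduced in a few lines. Python bytes stand in for Go's byte-indexed strings: the budget is counted in code points (Go's `utf8.RuneCountInString`), but the slice is taken at byte offsets, so a multi-byte character can be cut in half:

```python
def trim_bytes(text, max_len, current_len):
    # Mirrors the Go pattern under suspicion: budget computed from rune
    # counts, but text[0 : budget] slices bytes.
    data = text.encode("utf-8")      # Go strings index bytes, like this
    budget = max_len - current_len
    return data[:budget]

# "héllo" encodes to b'h\xc3\xa9llo'; a budget of 2 runes slices 2 bytes,
# ending inside the 2-byte 'é' and producing invalid UTF-8.
chunk = trim_bytes("héllo", max_len=2, current_len=0)
```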
@@ -51,6 +51,10 @@ class TestTargetSetup: requirements_pex: Pex args: Tuple[str, ...] input_files_digest: Digest + timeout_seconds: Optional[int] + + # Prevent this class from being detected by pytest as a test class. + __test__ = False @rule
[setup_pytest_for_target->[TestTargetSetup],run_python_test->[calculate_timeout_seconds]]
Setup a PyTest test target for a given test target.
Right now, IPRs never support timeouts, which is I think why Alex kept the timeout logic inlined in `run_python_test()`. Maybe one day they will, though? Either way, calculating the timeout logically belongs in `TestTargetSetup`. `debug_python_test()` will simply ignore the field for now.
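The `__test__ = False` marker in the patch exists because pytest collects any class whose name matches its default `Test*` pattern. A trimmed stand-in (the real class also carries `Pex` and `Digest` fields, omitted here) shows the shape:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TestTargetSetup:
    # Trimmed stand-in for the real class; Pex/Digest fields omitted.
    args: Tuple[str, ...]
    timeout_seconds: Optional[int]

    # Prevent pytest from collecting this as a test class: its name matches
    # pytest's default Test* class pattern, and this attribute opts it out.
    __test__ = False
```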
@@ -53,6 +53,13 @@ public class ModuleExceptionHandler { .name(errorDefinition.getType()) .build()); + String errorTypeDefinitionName; + if (errorDefinition instanceof SdkErrorTypeDefinitionAdapter) { + errorTypeDefinitionName = ((SdkErrorTypeDefinitionAdapter<?>) errorDefinition).getDelegate().toString(); + } else { + errorTypeDefinitionName = errorDefinition.toString(); + } + if (errorTypeLookedUp.isPresent()) { final ErrorType errorType = errorTypeLookedUp.get();
[ModuleExceptionHandler->[getExceptionCause->[MuleRuntimeException,getMessage,equals,getCause,createStaticMessage,suppressIfPresent],handleTypedException->[getExceptionCause,apply,TypedException],isAllowedError->[getNamespace,equals,anyMatch,getParentErrorType,getIdentifier],processException->[getType,handleTypedException],MuleRuntimeException,getExceptionCause,getName,getErrorModels,build,isPresent,getCause,isAllowedError,lookupErrorType,get,getExtensionsNamespace,createStaticMessage]]
Handler of ModuleExceptions which checks whether the exceptions' error types are allowed; if so, a module exception is built and a TypedException is returned.
instead of doing this, you should just overwrite the `toString()` implementation in the adapter
@@ -441,6 +441,17 @@ class DepCppInfo(object): setattr(self, "_%s_paths" % item, paths) return paths + def _aggregated_dict_paths(self, item): + paths = getattr(self, "_%s_paths" % item) + if paths is not None: + return paths + paths = getattr(self._cpp_info, "%s_paths" % item) + if self._cpp_info.components: + for component in self._get_sorted_components().values(): + paths = merge_dicts(paths, getattr(component, "%s_paths" % item)) + setattr(self, "_%s_paths" % item, paths) + return paths + @staticmethod def _filter_component_requires(requires): return [r for r in requires if COMPONENT_SCOPE not in r]
[CppInfo->[__getattr__->[_get_cpp_info->[_CppInfo],_get_cpp_info],_raise_incorrect_components_definition->[_check_components_requires_instersection],__init__->[Component,DefaultOrderedDict]],DepCppInfo->[defines->[_aggregated_values],__getattr__->[_get_cpp_info->[],__getattr__],_get_sorted_components->[_filter_component_requires],cflags->[_aggregated_values],system_libs->[_aggregated_values],libs->[_aggregated_values],include_paths->[_aggregated_paths],res_paths->[_aggregated_paths],exelinkflags->[_aggregated_values],frameworks->[_aggregated_values],build_modules_paths->[_aggregated_paths],build_paths->[_aggregated_paths],src_paths->[_aggregated_paths],_aggregated_values->[_merge_lists],_aggregated_paths->[_merge_lists],lib_paths->[_aggregated_paths],framework_paths->[_aggregated_paths],bin_paths->[_aggregated_paths],sharedlinkflags->[_aggregated_values],cxxflags->[_aggregated_values],requires->[_aggregated_values],_check_component_requires->[_filter_component_requires]],_CppInfo->[lib_paths->[_filter_paths],framework_paths->[_filter_paths],include_paths->[_filter_paths],bin_paths->[_filter_paths],res_paths->[_filter_paths],build_paths->[_filter_paths],src_paths->[_filter_paths],get_filename->[get_name]],_BaseDepsCppInfo->[update->[merge_lists]],DepsCppInfo->[__getattr__->[_get_cpp_info->[],_BaseDepsCppInfo]],DefaultOrderedDict->[__copy__->[DefaultOrderedDict]]]
Returns a list of paths aggregated by the component scope.
Again, maybe a bit too early generic at this point. Maybe it only needs to be implemented as BuildModulesPath thing, not as a general any-dictionary of paths.
@@ -60,7 +60,7 @@ var wrapArgs = function (args, visited) { ret = { type: 'object', - name: value.constructor.name, + name: value.constructor != null ? value.constructor.name : 'Object', members: [] } for (prop in value) {
[No CFG could be retrieved]
Convert the arguments object into an array of metadata. Populates an object's members from descriptors.
@chetverikov I tweaked your change a bit here to return an empty name when `constructor.name` is empty since that would seem to be the expected behavior for anonymous classes like `new (class {})`.
@@ -466,6 +466,7 @@ define([ batchIds : batchIds, styleableProperties : styleableProperties }; + this._pointsLength = pointsLength; this._isQuantized = isQuantized; this._isOctEncoded16P = isOctEncoded16P;
[No CFG could be retrieved]
Create a Cesium3DTileContent object. Get all the data from the parsed content object.
Now that this is a class member, no need to store in `parsedContent`.
@@ -45,6 +45,17 @@ type DockerBuilder struct { urlTimeout time.Duration } +// MetaInstuction represent an Docker instruction used for adding metadata +// to Dockerfile +type MetaInstruction string + +const ( + // Label represents the LABEL Docker instruction + Label MetaInstruction = "LABEL" + // Env represents the ENV Docker instruction + Env MetaInstruction = "ENV" +) + // NewDockerBuilder creates a new instance of DockerBuilder func NewDockerBuilder(dockerClient DockerClient, build *api.Build) *DockerBuilder { return &DockerBuilder{
[fetchSource->[checkSourceURI],dockerBuild->[setupPullSecret]]
NewDockerBuilder creates a new DockerBuilder instance. startDockerBuild starts a Docker build from the given BuildConfig.
Is there a reason to have this as a constant? I'm 99% sure it won't change anytime soon.
@@ -800,7 +800,8 @@ get_object_layout(struct pl_jump_map *jmap, struct pl_obj_layout *layout, add_ds_shard(&used_targets_list, target); /** If target is failed queue it for remap*/ - if (pool_target_unavail(target)) { + if (pool_target_unavail(target) && !(ignore_up == true && + target->ta_comp.co_status == PO_COMP_ST_UP)) { rc = remap_alloc_one(remap_list, k, target); if (rc) D_GOTO(out, rc);
[No CFG could be retrieved]
Get the object layout from the placement map; allocate and initialize the given object.
(style) line over 80 characters
@@ -612,6 +612,13 @@ def _radio_clicked(label, params): params['plot_fun']() +def _get_active_radiobutton(radio): + """Helper to find out active radio button.""" + # XXX: In mpl 1.5 you can do: fig.radio.value_selected + color_r = [circle.get_facecolor()[0] for circle in radio.circles] + return color_r.index(0) # where red is 0 + + def _set_radio_button(idx, params): """Helper for setting radio button.""" # XXX: New version of matplotlib has this implemented for radio buttons,
[_mouse_click->[_plot_raw_time,_handle_change_selection],_change_channel_group->[_set_radio_button],tight_layout->[tight_layout],ClickableImage->[plot_clicks->[plt_show],__init__->[plt_show]],_process_times->[_find_peaks],_plot_raw_onkey->[_channels_changed,_change_channel_group,_plot_raw_time],_handle_change_selection->[_set_radio_button],_set_radio_button->[_radio_clicked],_onclick_help->[_get_help_text,tight_layout,figure_nobar],_helper_raw_resize->[_layout_figure],_select_bads->[f,_find_channel_idx,_handle_topomap_bads],_plot_sensors->[plt_show]]
Helper for setting radio button.
you can do a `try/except` then here, no?
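A `try/except` version of the helper could look like the sketch below: prefer the newer matplotlib attribute (`radio.value_selected`, available from mpl 1.5) and fall back to inspecting the circles' face colors on older versions. The helper name mirrors the patch; the dispatch logic is an assumption about what the reviewer has in mind:

```python
def get_active_radiobutton(radio):
    """Helper to find out the active radio button, old and new mpl."""
    try:
        # mpl >= 1.5: value_selected holds the active label's text
        labels = [label.get_text() for label in radio.labels]
        return labels.index(radio.value_selected)
    except AttributeError:
        # older mpl: the active circle is the one whose red channel is 0,
        # as in the original helper
        color_r = [circle.get_facecolor()[0] for circle in radio.circles]
        return color_r.index(0)
```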
@@ -343,6 +343,10 @@ define([ this._selectionDepth = 0; this._lastFinalResolution = undefined; this._lastSelectionDepth = undefined; + this._requestedFrame = undefined; + this._lastVisitedFrame = undefined; + this._ancestorWithContent = undefined; + this._ancestorWithLoadedContent = undefined; } defineProperties(Cesium3DTile.prototype, {
[No CFG could be retrieved]
A class which holds all of the properties related to a single tile. Get the bounding sphere derived from the tile's bounding volume.
No longer used.
@@ -36,5 +36,9 @@ class PyMock(PythonPackage): version('2.0.0', '0febfafd14330c9dcaa40de2d82d40ad') version('1.3.0', '73ee8a4afb3ff4da1b4afa287f39fdeb') - depends_on('py-pbr', type=('build', 'run')) + depends_on('py-pbr@0.11:', type=('build', 'run')) + depends_on('py-six@1.7:', type=('build', 'run')) + depends_on('py-six@1.9:', type=('build', 'run'), when='@2.0.0:') + # requirements.txt references @1:, but 0.4 is newest available.. + depends_on('py-funcsigs', type=('build', 'run'), when='^python@:3.2.99') depends_on('py-setuptools@17.1:', type='build')
[PyMock->[depends_on,version]]
Spack package definition for py-mock; declares versions and build/run dependencies.
not sure why they refer to @1: but this works
@@ -764,10 +764,14 @@ class Flow(Serializable): graph = graphviz.Digraph() for t in self.tasks: - graph.node(str(id(t)), t.name) + shape = "box" if t.mapped else "ellipse" + graph.node(str(id(t)), t.name, shape=shape) for e in self.edges: - graph.edge(str(id(e.upstream_task)), str(id(e.downstream_task)), e.key) + style = "dashed" if e.mapped else None + graph.edge( + str(id(e.upstream_task)), str(id(e.downstream_task)), e.key, style=style + ) try: if get_ipython().config.get("IPKernelApp") is not None:
[Flow->[copy->[copy],upstream_tasks->[edges_to],update->[add_edge,add_task],reference_tasks->[terminal_tasks],chain->[add_edge],edges_to->[all_upstream_edges],set_dependencies->[add_edge,add_task],run->[parameters,run],edges_from->[all_downstream_edges],generate_local_task_ids->[all_upstream_edges,sorted_tasks,copy,serialize,all_downstream_edges],validate->[reference_tasks],serialize->[parameters,serialize,reference_tasks],_sorted_tasks->[copy,upstream_tasks,downstream_tasks,update],downstream_tasks->[edges_from],add_edge->[copy,add_task],visualize->[id]]]
Visualizes the sequence of nodes and edges in the graph.
I see the objective, but I don't like relying on a task to tell the flow if it's mapped, especially because there's nothing preventing a task that's mapped in one flow from being added manually to a second flow, where it is not mapped. We can infer the correct value of `t.mapped` by instead checking `any(edge.mapped for edge in self.edges_to(t))`
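The inference the reviewer proposes can be sketched with stand-ins for prefect's Edge (only the fields used here; `task_is_mapped` is a hypothetical helper name):

```python
from collections import namedtuple

# Stand-in for prefect's Edge with just the fields this check needs.
Edge = namedtuple("Edge", ["upstream_task", "downstream_task", "mapped"])

def task_is_mapped(edges, task):
    # A task counts as mapped in THIS flow iff some edge feeding it is
    # mapped -- the flow-local version of any(e.mapped for e in edges_to(t)).
    return any(e.mapped for e in edges if e.downstream_task is task)
```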
@@ -256,11 +256,13 @@ namespace Content.Server.GameObjects.Components.Interactable if (TryWeld(5, victim, silent: true)) { PlaySoundCollection(WeldSoundCollection); - chat.EntityMe(victim, Loc.GetString("welds {0:their} every orifice closed! It looks like {0:theyre} trying to commit suicide!", victim)); + PopupMessageOtherClientsInRange(victim, Loc.GetString("{0:theName} welds {0:their} every orifice closed! It looks like {0:theyre} trying to commit suicide!", victim), 15); + _notifyManager.PopupMessage(victim, victim, Loc.GetString("You weld your every orific closed!")); return SuicideKind.Heat; } - chat.EntityMe(victim, Loc.GetString("bashes {0:themselves} with the {1}!", victim, Owner.Name)); + PopupMessageOtherClientsInRange(victim, Loc.GetString("{0:theName} bashes {0:themselves} with the {0}!", victim), 15); + _notifyManager.PopupMessage(victim, victim, Loc.GetString("You bash yourself with the {0}!")); return SuicideKind.Blunt; }
[WelderComponent->[UseEntity->[ToggleWelderStatus],OnUpdate->[ToggleWelderStatus],Initialize->[Initialize],ToggleWelderStatus->[CanLitWelder],UseTool->[UseTool],SuicideKind->[TryWeld],Shutdown->[Shutdown]]]
This method is called when a victim commits suicide. It will try to weld the victim's orifices closed and returns the kind of suicide damage.
Missing parameter (I guess this is temporary because you don't have the weapon yet). On that line it should be "with the {1}".
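The C# `Loc.GetString` call uses positional placeholders much like Python's `str.format`, so the bug the reviewer flags is easy to demonstrate (names below are illustrative, not from the game code):

```python
victim = "Urist"
weapon = "welder"

# The current line: "{0}" in the weapon slot just repeats the victim.
wrong = "{0} bashes themselves with the {0}!".format(victim)

# The reviewer's fix: "{1}" plus a second argument for the weapon.
fixed = "{0} bashes themselves with the {1}!".format(victim, weapon)
```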
@@ -88,7 +88,10 @@ static volatile char endstop_hit_bits = 0; // use X_MIN, Y_MIN, Z_MIN and Z_MIN_ #endif #if PIN_EXISTS(MOTOR_CURRENT_PWM_XY) - int motor_current_setting[3] = DEFAULT_PWM_MOTOR_CURRENT; + #ifndef PWM_MOTOR_CURRENT + #define PWM_MOTOR_CURRENT DEFAULT_PWM_MOTOR_CURRENT + #endif + const int motor_current_setting[3] = PWM_MOTOR_CURRENT; #endif static bool check_endstops = true;
[No CFG could be retrieved]
The main entry point for the various functions. Function to write a value to the current block.
@thinkyhead it seems this needs to be `static constexpr int motor_current_setting[3] = PWM_MOTOR_CURRENT;`.
@@ -9,12 +9,13 @@ import io.opentelemetry.api.OpenTelemetry; import io.opentelemetry.api.trace.Span; import io.opentelemetry.api.trace.Span.Kind; import io.opentelemetry.api.trace.SpanBuilder; +import io.opentelemetry.api.trace.StatusCode; import io.opentelemetry.api.trace.Tracer; import io.opentelemetry.api.trace.attributes.SemanticAttributes; import io.opentelemetry.context.Context; import io.opentelemetry.context.propagation.TextMapPropagator; -import io.opentelemetry.context.propagation.TextMapPropagator.Setter; import io.opentelemetry.instrumentation.api.tracer.utils.NetPeerUtils; +import io.opentelemetry.instrumentation.api.tracer.utils.SpanAttributeSetter; import java.net.URI; import java.net.URISyntaxException; import java.util.concurrent.TimeUnit;
[HttpClientTracer->[onResponse->[status],setUrl->[url],internalStartSpan->[startSpan],end->[end],endExceptionally->[endExceptionally],onRequest->[method,requestHeader],setFlavor->[flavor],spanNameForRequest->[method],startSpan->[getSetter,startSpan]]]
Creates a base tracer for HTTP requests. This class is used to create an HttpClientTracer.
Probably an unrelated change (sorry): the `CARRIER` type parameter is removed without too much downside, and it definitely feels nicer just being `<REQUEST, RESPONSE>`.
@@ -222,7 +222,7 @@ const OSSL_STORE_LOADER *ossl_store_get0_loader_int(const char *scheme) template.load = NULL; template.eof = NULL; template.close = NULL; - template.open_with_libctx = NULL; + template.open_ex = NULL; if (!ossl_store_init_once()) return NULL;
[OSSL_STORE_register_loader->[ossl_store_register_loader_int],OSSL_STORE_unregister_loader->[ossl_store_unregister_loader_int]]
Get loader for given scheme.
Ditto for field names.
@@ -10,8 +10,8 @@ internal static partial class Interop { // https://msdn.microsoft.com/en-us/library/windows/hardware/ff556633.aspx // https://msdn.microsoft.com/en-us/library/windows/hardware/ff567047.aspx - [DllImport(Libraries.NtDll, CharSet = CharSet.Unicode, ExactSpelling = true)] - public static extern unsafe int NtQueryDirectoryFile( + [GeneratedDllImport(Libraries.NtDll, CharSet = CharSet.Unicode, ExactSpelling = true)] + public static unsafe partial int NtQueryDirectoryFile( IntPtr FileHandle, IntPtr Event, IntPtr ApcRoutine,
[Interop->[NtDll->[NtQueryDirectoryFile->[NtDll]]]]
This is an NtQueryDirectoryFile P/Invoke declaration.
I think we can just make this inline-able instead.
@@ -468,8 +468,8 @@ class Optimizer(object): if isinstance(self._learning_rate, float): return self._learning_rate - elif isinstance(self._learning_rate, _LearningRateEpochDecay): - step_lr = self._learning_rate() + elif isinstance(self._learning_rate, _LRScheduler): + step_lr = self._learning_rate.get_lr() return step_lr.numpy()[0] else: step_lr = self._learning_rate.step()
[Optimizer->[apply_gradients->[_create_optimization_pass],step->[_apply_optimize],state_dict->[state_dict],_create_param_lr->[_global_learning_rate],_apply_optimize->[apply_gradients,_create_optimization_pass],_create_optimization_pass->[_update_param_device_map,_append_optimize_op,_finish_update,_create_accumulators,_create_global_learning_rate,_get_device_for_param],minimize->[_apply_optimize,backward],backward->[_append_dgc_ops]]]
Get the learning rate of the current step. Returns the float value directly, or the value computed by the learning-rate scheduler.
should be self._learning_rate()
@@ -271,7 +271,9 @@ namespace System.Media _stream = webResponse.GetResponseStream(); } - if (_stream.CanSeek) + // DO NOT assert - NRE is expected for null stream + // See SoundPlayerTests.Load_NullStream_ThrowsNullReferenceException + if (_stream!.CanSeek) { // if we can get data synchronously, then get it LoadStream(true);
[SoundPlayer->[PlaySync->[LoadAndPlay],PlayLooping->[LoadAndPlay],SetupStream->[CancelLoad,CleanupStreamData],Play->[LoadAndPlay],LoadSync->[CancelLoad,CleanupStreamData],SetupSoundLocation->[CancelLoad,CleanupStreamData]]]
load the next chunk of data synchronously if necessary.
It'd be worth opening an issue for this to decide separately whether we should throw a better exception in that case.
@@ -69,6 +69,7 @@ module Idv @flow.analytics.track_event(Analytics::DOC_AUTH_ASYNC, error: 'failed to load document_capture_session', uuid: flow_session[verify_document_capture_session_uuid_key], + flow_session: flow_session, ) return timed_out end
[VerifyDocumentStatusAction->[timed_out->[timed_out]]]
load_doc_auth_async_result loads the async document-capture state, returning timed_out if it cannot be loaded.
ummmm this possibly contains PII right? we store the attributes here between steps?
@@ -2981,11 +2981,9 @@ func setupV2WithMigrationState(t *testing.T, polV2, _, err := v2.NewPoliciesServer(ctx, l, mem_v2, writer, pl, vChan) require.NoError(t, err) - eventServiceClient := &testhelpers.MockEventServiceClient{} - configMgr, err := config.NewManager("/tmp/.authz-delete-me") require.NoError(t, err) projectsSrv, err := v2.NewProjectsServer(ctx, l, mem_v2, - rulesRetriever, eventServiceClient, configMgr, testhelpers.NewMockPolicyRefresher()) + rulesRetriever, testhelpers.NewMockProjectUpdateManager(), testhelpers.NewMockPolicyRefresher()) require.NoError(t, err) vSwitch := v2.NewSwitch(vChan)
[NewWithSeed,DeepEqual,UnaryInterceptor,NewProjectsServer,MemberSliceToStringSlice,Flush,New,NewV4,Pristine,NotNil,RegisterProjectsServer,Zero,ProjectsCache,PoliciesCache,NoError,Fatalf,Register,Seed,Lorem,UpdatePolicy,AddPolicyMembers,Word,DeletePolicy,Helper,False,GetPolicy,LoadDevCerts,Add,GetPolicyVersion,NotEqual,PurgeSubjectFromPolicies,NewPoliciesServer,DefaultProjects,EngineUpdateInterceptor,ToLower,NewRole,NewType,InProgress,NewServer,NotEmpty,CreateRole,Success,NewFactory,RolesCache,Failure,ListPolicyMembers,MigrationStatus,ChainUnaryServer,Dial,RemovePolicyMembers,NewPoliciesClient,Nil,Text,DefaultRoles,Equal,GetReports,NewProjectsClient,NewMember,NewAuthzServer,Get,SuccessBeta1,RegisterAuthorizationServer,Items,Sprintf,String,Intn,NewMockPolicyRefresher,ElementsMatch,Shuffle,Run,True,NewAuthorizationClient,AssertCode,CreateProject,ItemCount,ListRoles,NewSwitch,UpdateRole,Error,IsAuthorized,CreatePolicy,Empty,ResetToV1,InputValidationInterceptor,NewManager,GetRole,NewLogger,DeleteRole,MigrateToV2,Contains,ListPolicies,Regexp,DefaultPolicies,SystemPolicies,Background,ReplacePolicyMembers,RegisterPoliciesServer]
assertRolesMatch tests that the given roles are equal. testSetup creates a testSetup for the given policy lister.
I wanted to mock the project update manager itself, greatly simplifying the necessary test framework by allowing us to focus just on the DB. That necessitated this change in a few places. All the changes herein other than rules_property_test.go are part of this adjustment.
@@ -0,0 +1,8 @@ +module Idv + class PhoneConfirmationOtpGenerator + def self.generate_otp + digits = Devise.direct_otp_length + SecureRandom.random_number(10**digits).to_s.rjust(digits, '0') + end + end +end
[No CFG could be retrieved]
No Summary Found.
What do you think about calling this method `call` for consistency with other service objects, and for less redundancy with the class name?
@@ -552,4 +552,10 @@ public class ResourceUtils { return new VersionedClause(identity, attribs); } + static <T> T requireNonNull(T obj) { + if (obj != null) { + return obj; + } + throw new NullPointerException(); + } }
[ResourceUtils->[getEffective->[get],isFragment->[get,getIdentityCapability],findProviders->[matches],getIdentityVersion->[toString,getIdentityCapability],getIdentity->[get],toVersionClause->[toString,getIdentityCapability,getIdentity,getVersion],getResolution->[get],isEffective->[get],getLocations->[osgi_content,url,getContentCapabilities],toProvideCapability->[toString],matches->[get,isEffective,matches],get->[get,convert],toRequireCapability->[toString],toVersion->[toString],getVersion->[toString],as->[invoke->[invoke]]]]
Creates a version clause for the given resource.
This doesn't have any direct relation to Resources, so maybe there is a better location for it?
@@ -2460,9 +2460,8 @@ public class BigQueryIO { private static Set<String> createdTables = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>()); - /** Tracks bytes written, exposed as "ByteCount" Counter. */ - private Aggregator<Long, Long> byteCountAggregator = - createAggregator("ByteCount", Sum.ofLongs()); + /** Tracks bytes written, exposed as "ByteCount" Metrics Counter. */ + private Counter byteCounter = Metrics.counter(StreamingWriteFn.class, "ByteCount"); /** Constructor. */ StreamingWriteFn(@Nullable ValueProvider<TableSchema> schema,
[BigQueryIO->[write->[build],ShardedKeyCoder->[decode->[decode],verifyDeterministic->[verifyDeterministic],of->[of],encode->[getKey,getShardNumber,encode],of],StreamingWriteFn->[populateDisplayData->[populateDisplayData],getOrCreateTable->[getTable,parseTableSpec],TableSchemaToJsonSchema],displayTable->[TableRefToTableSpec],PassThroughThenCleanup->[expand->[apply]],TransformingSource->[getEstimatedSizeBytes->[getEstimatedSizeBytes],TransformingReader->[getCurrentSource->[getCurrentSource],getCurrentTimestamp->[getCurrentTimestamp],getFractionConsumed->[getFractionConsumed],start->[start],getCurrent->[apply,getCurrent],close->[close],advance->[advance],splitAtFraction->[splitAtFraction]],splitIntoBundles->[splitIntoBundles],validate->[validate],createReader->[createReader]],Write->[toTableReference->[build,ensureToNotCalledYet],verifyTableNotExistOrEmpty->[getTable,toTableSpec],WritePartition->[processElement->[TableRowWriter,close,open]],withSchema->[build],withFormatFunction->[build],expand->[getFormatFunction,getTableRefFunction,getTableWithDefaultProject,apply],to->[toTableSpec,build,to,ensureToNotCalledYet],ensureToNotCalledYet->[getTable,getJsonTableRef],WriteBundles->[populateDisplayData->[populateDisplayData],finishBundle->[close],processElement->[close,write]],getSchema->[getJsonSchema],WriteTables->[populateDisplayData->[populateDisplayData],removeTemporaryFiles->[create]],withCreateDisposition->[build],TableRowWriter->[close->[close],write->[write],open->[close,create]],withoutValidation->[build],getTable->[JsonTableRefToTableRef,getJsonTableRef],getTableWithDefaultProject->[getTable,JsonTableRefToTableRef],withTableDescription->[build],TranslateTableSpecFunction->[apply->[parseTableSpec,apply]],validate->[getWriteDisposition,getJsonSchema,getValidate,getBigQueryServices,verifyTableNotExistOrEmpty,getJsonTableRef,getTableRefFunction,getCreateDisposition,getFormatFunction],populateDisplayData->[populateDisplayData,getTableRefFunction],WriteRename->[populateDisplayData->[populateDisplayData],copy->[setCreateDisposition]],withWriteDisposition->[build],withTestServices->[build]],Read->[expand->[BeamJobUuidToBigQueryJobUuid,CreatePerBeamJobUuid,getBigQueryServices,getUseLegacySql,CreateJsonTableRefFromUuid,getFlattenResults,getQuery,apply],ensureFromNotCalledYet->[getQuery,getJsonTableRef],withoutValidation->[build],withoutResultFlattening->[build],getTableProvider->[JsonTableRefToTableRef,getJsonTableRef],getTableWithDefaultProject->[JsonTableRefToTableRef],getTable->[getTableProvider],validate->[getBigQueryServices,getValidate,setUseLegacySql,getUseLegacySql,getFlattenResults,getQuery],populateDisplayData->[populateDisplayData],withTestServices->[build],from->[from,toTableSpec,build,ensureFromNotCalledYet],fromQuery->[ensureFromNotCalledYet,fromQuery,build],usingStandardSql->[build]],BigQueryQuerySource->[populateDisplayData->[populateDisplayData],createBasicQueryConfig->[setUseLegacySql],create->[BigQueryQuerySource],TableRefToJson,TableRefToProjectId],getExtractFilePaths->[build],TableRefToTableSpec->[apply->[toTableSpec]],verifyTablePresence->[getTable,toTableSpec],StreamWithDeDup->[getDefaultOutputCoder->[of],expand->[of,JsonSchemaToTableSchema,getBigQueryServices,getJsonSchema,getTableDescription,getTable,StreamingWriteFn,apply,getTableRefFunction,getCreateDisposition,getFormatFunction]],BigQueryTableSource->[populateDisplayData->[populateDisplayData],create->[BigQueryTableSource],TableRefToJson],BigQuerySourceBase->[splitIntoBundles->[getTableToExtract,cleanupTempResource],BigQueryReader->[start->[start],close->[close],advance->[advance],getCurrent->[getCurrent]],createSources->[from,getDefaultOutputCoder]],clearCreatedTables->[clearCreatedTables],verifyDatasetPresence->[toTableSpec],TableSpecToTableRef->[apply->[parseTableSpec]],TagWithUniqueIdsAndTable->[populateDisplayData->[populateDisplayData],tableSpecFromWindowedValue->[toTableSpec,apply],processElement->[TableRowInfo,of,apply],JsonTableRefToTableRef,of,TableRefToTableSpec],TableRowInfoCoder->[decode->[decode],encode->[encode],TableRowInfoCoder,of]]]
This class is used to perform a streaming BigQuery write. Processes an element, tagging it with a unique id.
Can you also delete the `@SystemDoFnInternal` above, as that was only added to support the Aggregator?
@@ -8,6 +8,9 @@ // Kratos default license: kratos/license.txt // // Main authors: Philipp Bucher +// Vicente Mataix Ferrandiz +// Riccardo Rossi +// Ruben Zorrilla // // System includes
[No CFG could be retrieved]
Provides a function to check if a given element is non-zero. GetRayleighAlpha returns the Rayleigh alpha coefficient of an object in the system.
should be something like ~~~cpp template< class TBMatrixType, class TDNMatrixType > CalculateB(const Element& rElement, TBMatrixType& rB, TDNMatrixType& DN_DX) ~~~ passing a REFERENCE to the element and templating in the other matrix types
@@ -81,7 +81,10 @@ function dailymotion_shortcode( $atts ) { $atts['id'] = $id; } else { $params = shortcode_new_to_old_params( $atts ); - parse_str( $params, $atts ); + parse_str( $params, $atts_new ); + foreach( $atts_new as $k => $v ) { + $attr[ $k ] = $v; + } } if ( isset( $atts['id'] ) )
[No CFG could be retrieved]
Dailymotion shortcode helper for videos uploaded to Dailymotion.
For this line, do you mean `$atts` instead of `$attr` ?
@@ -223,12 +223,16 @@ public class ReconContainerManager extends ContainerManagerImpl { ContainerInfo containerInfo = containerWithPipeline.getContainerInfo(); try { if (containerInfo.getState().equals(HddsProtos.LifeCycleState.OPEN)) { - PipelineID pipelineID = containerWithPipeline.getPipeline().getId(); + Pipeline pipeline = containerWithPipeline.getPipeline(); + PipelineID pipelineID = pipeline.getId(); if (pipelineManager.containsPipeline(pipelineID)) { getContainerStateManager().addContainer(containerInfo.getProtobuf()); pipelineManager.addContainerToPipeline( containerWithPipeline.getPipeline().getId(), containerInfo.containerID()); + // update open container count on all datanodes on this pipeline + pipelineToOpenContainer.put(pipelineID, + pipelineToOpenContainer.getOrDefault(pipelineID, 0) + 1); LOG.info("Successfully added container {} to Recon.", containerInfo.containerID()); } else {
[ReconContainerManager->[updateContainerReplica->[updateContainerReplica],getLatestContainerHistory->[getAllContainerHistory],removeContainerReplica->[removeContainerReplica]]]
Add a new container to the pipeline.
Use AtomicInteger for the value
@@ -272,8 +272,12 @@ namespace Kratos ; class_< LineSearchStrategy< SparseSpaceType, LocalSpaceType, LinearSolverType >, bases< ResidualBasedNewtonRaphsonStrategyType >, boost::noncopyable > - ("LineSearchStrategy", - init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, int, bool, bool, bool >()) + ("LineSearchStrategy", init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, int, bool, bool, bool >()) + .def(init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, BuilderAndSolverType::Pointer, int, bool, bool, bool >()) + .def(init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, BuilderAndSolverType::Pointer, int, bool, bool >()) + .def(init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, BuilderAndSolverType::Pointer, int, bool >()) + .def(init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, BuilderAndSolverType::Pointer, int >()) + .def(init < ModelPart&, BaseSchemeType::Pointer, LinearSolverType::Pointer, TConvergenceCriteriaType::Pointer, BuilderAndSolverType::Pointer >()) ; class_< ExplicitStrategy< SparseSpaceType, LocalSpaceType, LinearSolverType >,
[UnaliasedAdd->[UnaliasedAdd],Dot->[Dot],CreateEmptyMatrixPointer->[CreateEmptyMatrixPointer],CreateEmptyVectorPointer->[CreateEmptyVectorPointer],TransposeMult->[TransposeMult],ScaleAndAdd->[ScaleAndAdd],Mult->[Mult],TwoNorm->[TwoNorm]]
Adds strategies to Python. A list of all possible base solutions. Residual-based Newton-Raphson strategy. Initializes a sequence of objects. A base class for the residual displacement scheme.
Doesn't this init method take the same arguments as the init<> template parameter in the line above? If so, you are exposing to python the same method twice. You don't need to redefine it here.
@@ -100,7 +100,7 @@ public abstract class HttpClientTracer<REQUEST, RESPONSE> extends BaseTracer { return span; } - private Span onRequest(final Span span, final REQUEST request) { + public Span onRequest(final Span span, final REQUEST request) { assert span != null; if (request != null) { span.setAttribute(SemanticAttributes.HTTP_METHOD.key(), method(request));
[HttpClientTracer->[onResponse->[status],end->[end],endExceptionally->[endExceptionally],startScope->[getSetter],onRequest->[url,method,requestHeader],spanNameForRequest->[method],startSpan->[startSpan]]]
Creates a new span for the given request. Appends the url and fragment to the span.
this is needed by TracingExecutionInterceptor.afterMarshalling (2.2)
@@ -17,6 +17,7 @@ class DismissTopics .joins("LEFT JOIN topic_users ON topic_users.topic_id = topics.id AND topic_users.user_id = #{@user.id}") .where("topics.created_at >= ?", since_date) .where("topic_users.id IS NULL") + .where("topics.archetype <> ?", Archetype.private_message) .order("topics.created_at DESC") .limit(SiteSetting.max_new_topics).map do |topic| {
[DismissTopics->[perform!->[present?,insert_all],rows->[now,id,map],since_date->[previous_visit_at,created_at,new_topic_duration_minutes,max,ago,default_other_new_topic_duration_minutes]]]
Returns an array of rows for the new topics that have been created since the given date.
I realized that in previous commits I forgot about private messages. No big harm as DismissedTopicUsers table is regularly cleaned
@@ -335,6 +335,11 @@ namespace Dynamo.Logging public void Dispose() { + // If the Analytics Client was initialized, shut it down. + // Otherwise skip this step because it would cause an exception. + if (Service.IsInitialized) + Service.ShutDown(); + if (Session != null) { Session.Dispose();
[DynamoAnalyticsClient->[Dispose->[Dispose],TrackException->[TrackException],ShutDown->[Dispose],Start],DynamoAnalyticsSession->[Dispose->[Dispose]]]
Dispose of the managed object.
Service.ShutDown() will clean all factories and all tracker related data
@@ -343,6 +343,7 @@ public class ExpansionService extends ExpansionServiceGrpc.ExpansionServiceImplB SdkComponents sdkComponents = rehydratedComponents.getSdkComponents().withNewIdPrefix(request.getNamespace()); sdkComponents.registerEnvironment(Environments.JAVA_SDK_HARNESS_ENVIRONMENT); + pipeline.replaceAll(ImmutableList.of(JavaReadViaImpulse.boundedOverride())); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(pipeline, sdkComponents); String expandedTransformId = Iterables.getOnlyElement(
[ExpansionService->[loadRegisteredTransforms->[knownTransforms],ExternalTransformRegistrarLoader->[buildProto->[buildProto]],expand->[getTransform,apply,expand],apply->[extractOutputs,getTransform,createInput]]]
Expands a single node in the tree. Expands the transform to another transform.
This will only override bounded Reads, so unbounded Reads will still work.
@@ -416,6 +416,17 @@ define([ }; } + // converts x and y coordinates such that after updating for drawing buffer position, the correct coordinates will be used + function pickPrimitiveEqualsWrapper(actual, expected, x, y, width, height) { + width = width || 3; + height = height || width || 3; + x = x || 0; + y = y || 0; + var adjustedX = x + ((width - 1) * 0.5); + var adjustedY = -1 * (y + ((height - 1) * 0.5) - actual._context.drawingBufferHeight); + return pickPrimitiveEquals(actual, expected, adjustedX, adjustedY, width, height); + } + function renderAndReadPixels(options) { var scene;
[No CFG could be retrieved]
Creates an expectation for the pick result. Checks if the color of the node is the expected rgba.
I'm not completely following the conversion process here. Can the test be modified instead?
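To make the conversion the reviewer is asking about easier to follow, here is the same math from the diff above as a standalone sketch (pure arithmetic, detached from Cesium and its test helpers): the (x, y) coordinates are shifted to the center of a width x height pick window, and y is flipped into drawing-buffer space.

```python
def adjust_pick_coordinates(x, y, width, height, drawing_buffer_height):
    """Mirror of pickPrimitiveEqualsWrapper's coordinate math.

    Shifts (x, y) to the center of the pick window and flips y so the
    origin matches the drawing buffer's coordinate system.
    """
    adjusted_x = x + (width - 1) * 0.5
    adjusted_y = -1 * (y + (height - 1) * 0.5 - drawing_buffer_height)
    return adjusted_x, adjusted_y


# With the defaults from the diff (3x3 window at the origin) and a
# hypothetical 100-pixel-tall drawing buffer:
print(adjust_pick_coordinates(0, 0, 3, 3, 100))  # (1.0, 99.0)
```

This shows why the wrapper exists: callers can keep writing tests in top-left window coordinates while the underlying pick call receives buffer-space coordinates.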
@@ -61,6 +61,8 @@ class LaplacianSolver: self.main_model_part.AddNodalSolutionStepVariable(KratosMultiphysics.POSITIVE_FACE_PRESSURE) self.main_model_part.AddNodalSolutionStepVariable(KratosMultiphysics.NEGATIVE_FACE_PRESSURE) self.main_model_part.AddNodalSolutionStepVariable(KratosMultiphysics.DISTANCE) + self.main_model_part.AddNodalSolutionStepVariable(KratosMultiphysics.NORMAL) + self.main_model_part.AddNodalSolutionStepVariable(KratosMultiphysics.CompressiblePotentialFlowApplication.VELOCITY_INFINITY) def AddDofs(self): for node in self.main_model_part.Nodes:
[LaplacianSolver->[AddDofs->[AddDof],Clear->[],AddVariables->[AddNodalSolutionStepVariable],Solve->[],Initialize->[,ResidualCriteria,ResidualBasedIncrementalUpdateStaticScheme,ResidualBasedNewtonRaphsonStrategy,Check,settings],ImportModelPart->[AddEmptyValue,print,ModelPartIO,GetBufferSize,ReplaceElementsAndConditionsProcess,SetBufferSize,Parameters,GetMinimumBufferSize,TetrahedralMeshOrientationCheck,settings,Exception],SetEchoLevel->[],__init__->[ConstructSolver,print,ValidateAndAssignDefaults,Parameters]],CreateSolver->[LaplacianSolver],CheckForPreviousImport]
Adds variables to the main model part.
also here you should use the function from variable utils, see e.g. in the StructuralMechanicsSolver
@@ -273,4 +273,3 @@ def test_raw_file_if_modified_since(client, settings, file_attachment): assert 'Cache-Control' in response assert 'public' in response['Cache-Control'] assert 'max-age=900' in response['Cache-Control'] - assert 'Vary' not in response
[AttachmentViewTests->[test_edit_attachment->[_post_attachment]]]
Test if file is not modified since last modified time.
I found that the `Vary` header was now in the response with a value of `Accept-Encoding`. I'm not sure if it's worth checking any of the headers when the response is a 304.
@@ -1038,6 +1038,7 @@ class Trainer(Registrable): histogram_interval = params.pop_int("histogram_interval", None) should_log_parameter_statistics = params.pop_bool("should_log_parameter_statistics", True) should_log_learning_rate = params.pop_bool("should_log_learning_rate", False) + log_batch_size_period = params.pop_int("log_batch_size_period", None) params.assert_empty(cls.__name__) return cls(model, optimizer, iterator,
[Trainer->[_parameter_and_gradient_statistics_to_tensorboard->[add_train_scalar,is_sparse],rescale_gradients->[sparse_clip_norm],_enable_activation_logging->[hook->[add_train_histogram]],train->[_enable_activation_logging,_validation_loss,_should_stop_early,_metrics_to_tensorboard,_enable_gradient_clipping,_metrics_to_console,_train_epoch,_get_metrics],_restore_checkpoint->[find_latest_checkpoint,move_optimizer_to_cuda],_validation_loss->[_get_metrics,batch_loss],batch_loss->[_data_parallel],__init__->[TensorboardWriter],_metrics_to_tensorboard->[add_validation_scalar,add_train_scalar],_learning_rates_to_tensorboard->[add_train_scalar],_train_epoch->[rescale_gradients,time_to_str,add_train_scalar,batch_loss,_get_metrics],from_params->[from_params],_histograms_to_tensorboard->[add_train_histogram]],sparse_clip_norm->[is_sparse],TensorboardWriter->[add_validation_scalar->[_item],add_train_scalar->[_item]]]
Create a Trainer from a sequence of params. Get all configuration options for a single node.
Do we really need this new parameter? Can't we just log the batch size every `summary_interval` steps? It's not expensive because it's a scalar, and it's not too visually obnoxious because tensorboard compresses graphs by default in the UI.
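The reviewer's alternative can be sketched in a few lines (the names `summary_interval` and `add_train_scalar` come from the Trainer above; the free-standing function is hypothetical, not AllenNLP's actual API): instead of a separate `log_batch_size_period` parameter, reuse the existing summary interval to log the batch size as a scalar.

```python
def log_batch_sizes(batch_sizes, summary_interval, add_train_scalar):
    """Log the batch size on every summary_interval-th training step.

    batch_sizes: iterable of per-step batch sizes.
    add_train_scalar: callback (name, value, step) that writes a scalar,
    standing in for the TensorboardWriter method of the same name.
    """
    for step, batch_size in enumerate(batch_sizes, start=1):
        if step % summary_interval == 0:
            add_train_scalar("batch_size", batch_size, step)


logged = []
log_batch_sizes([32, 32, 16, 32, 32, 8], 2,
                lambda name, value, step: logged.append((step, value)))
print(logged)  # [(2, 32), (4, 32), (6, 8)]
```

Because the value is a single scalar per logged step, the cost is negligible either way; the question is purely about whether a second knob is worth the configuration surface.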
@@ -113,6 +113,7 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator { logger.debug("-Dio.netty.allocator.normalCacheSize: {}", DEFAULT_NORMAL_CACHE_SIZE); logger.debug("-Dio.netty.allocator.maxCachedBufferCapacity: {}", DEFAULT_MAX_CACHED_BUFFER_CAPACITY); logger.debug("-Dio.netty.allocator.cacheTrimInterval: {}", DEFAULT_CACHE_TRIM_INTERVAL); + logger.debug("-Dio.netty.allocator.powerOfCacheLineSize: {}", DEFAULT_POWER_OF_CACHE_LINE_SIZE); } }
[PooledByteBufAllocator->[PooledByteBufAllocator]]
This method is used to set the default pool size and heap Arena. Creates a new instance of the ByteBufAllocator which allocates the bytes of the specified array.
@normanmaurer I think the terminology is `multiple of cache line size`. power would be `cachelinesize^n` if I am not mistaken.
@@ -46,10 +46,10 @@ public class ParameterGroup implements EnrichableModel private final Class<?> type; /** - * The {@link Field} in which the generated value of - * {@link #type} is to be assigned + * The {@link Field} in which the generated value of {@link #type} is to be assigned. + * If the {@link ParameterGroup} is used as an argument of an operation this fill will be {@link Optional#empty()} */ - private final Field field; + private final Optional<Field> field; /** * A {@link Map} in which the keys are parameter names
[ParameterGroup->[addModelProperty->[checkArgument,getClass,put],getParameters->[copyOf],getModelProperty->[get,ofNullable],addParameter->[add],getModelProperties->[copyOf,values],checkArgument,setAccessible]]
Creates a new parameter group which groups a set of parameters together. Creates a new object of the specified type and field.
this is a hack. You're just using Optional.empty() as a flag. Can't you just have two implementations?
@@ -18,11 +18,13 @@ type ConfigAWS struct { ProfileName string `config:"credential_profile_name"` SharedCredentialFile string `config:"shared_credential_file"` Endpoint string `config:"endpoint"` + RoleArn string `config:"role_arn"` } // GetAWSCredentials function gets aws credentials from the config. // If access_key_id and secret_access_key are given, then use them as credentials. -// If not, then load from aws config file. If credential_profile_name is not +// If role_arn is given, assume the IAM role instead. +// If none of the above is given, then load from aws config file. If credential_profile_name is not // given, then load default profile from the aws config file. func GetAWSCredentials(config ConfigAWS) (awssdk.Config, error) { // Check if accessKeyID or secretAccessKey or sessionToken is given from configuration
[WithSharedConfigFiles,WithSharedConfigProfile,ResolveWithEndpointURL,LoadDefaultAWSConfig,Config]
GetAWSCredentials returns the AWS credentials for the given object. This function is used to add additional options to the credential configuration.
Does it sound like a new entry in the CHANGELOG file? ;)
@@ -74,7 +74,6 @@ namespace System.Net.Quic.Implementations.MsQuic // Backlog limit is managed by MsQuic so it can be unbounded here. public readonly Channel<MsQuicStream> AcceptQueue = Channel.CreateUnbounded<MsQuicStream>(new UnboundedChannelOptions() { - SingleReader = true, SingleWriter = true, });
[MsQuicConnection->[TraceId->[TraceId],Task->[Dispose],NativeCallbackHandler->[HandleEventConnected,TraceId,HandleEventStreamsAvailable,HandleEventShutdownInitiatedByTransport,HandleEventPeerCertificateReceived,HandleEventShutdownComplete,HandleEventShutdownInitiatedByPeer,HandleEventNewStream],HandleEventPeerCertificateReceived->[TraceId],Dispose->[Dispose,TraceId,SetClosing],HandleEventNewStream->[TryQueueNewStream],Dispose,TraceId]]
Remove a stream from the stream pool.
Is SingleWriter correct still?
@@ -11,13 +11,13 @@ import com.metamx.druid.realtime.firehose.Firehose; import com.metamx.druid.realtime.firehose.FirehoseFactory; import twitter4j.ConnectionLifeCycleListener; import twitter4j.HashtagEntity; +import twitter4j.StallWarning; import twitter4j.Status; import twitter4j.StatusDeletionNotice; import twitter4j.StatusListener; import twitter4j.TwitterStream; import twitter4j.TwitterStreamFactory; import twitter4j.User; -import twitter4j.StallWarning; import java.io.IOException; import java.util.Arrays;
[TwitterSpritzerFirehoseFactory->[connect->[nextRow->[maxTimeReached,maxCountReached],hasMore->[maxTimeReached,maxCountReached]]]]
Package for importing firehose objects. This is a special case for the v1.0 API.
This file also looks like whitespace only changes.
@@ -58,7 +58,7 @@ DB_CREATE_STATE_CHANGES = """ CREATE TABLE IF NOT EXISTS state_changes ( identifier ULID PRIMARY KEY NOT NULL, data JSON, - timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL + timestamp TIMESTAMP DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) NOT NULL ); """
[TimestampedEvent->[__getattr__->[getattr],namedtuple],format]
This class is used to create the necessary tables for a TimestampedEvent object. Creates the tables for the given .
This is not valid ISO8601
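The reviewer's point can be demonstrated directly against SQLite (a minimal sketch using an in-memory database and a fixed timestamp rather than `'NOW'`, so the result is deterministic): the format string in the diff separates date and time with a space, while strict ISO 8601 uses a `T` separator.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Format from the diff: space-separated, not strict ISO 8601.
space_sep = conn.execute(
    "SELECT STRFTIME('%Y-%m-%d %H:%M:%f', '2021-01-02 03:04:05.678')"
).fetchone()[0]

# ISO 8601 variant: 'T' between the date and time components.
iso_8601 = conn.execute(
    "SELECT STRFTIME('%Y-%m-%dT%H:%M:%f', '2021-01-02 03:04:05.678')"
).fetchone()[0]

print(space_sep)  # 2021-01-02 03:04:05.678
print(iso_8601)   # 2021-01-02T03:04:05.678
```

`%f` yields seconds with a fractional part, which is what the schema change is after (sub-second precision); only the separator keeps the stored value from being ISO 8601.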
@@ -22,6 +22,8 @@ export type OptionT = $ReadOnly<{ disabled?: boolean, clearableValue?: boolean, isCreatable?: boolean, + // eslint-disable-next-line flowtype/no-weak-types + [key: string]: any, }>; export type ValueT = $ReadOnlyArray<OptionT>;
[No CFG could be retrieved]
Creates a type export of the given option. Creates a UI element with properties for the selected menu item.
does replacing `any` with `mixed` essentially put it back to before this change? i'm a bit hesitant to throw out all of the type safety on this option.
@@ -2545,6 +2545,8 @@ static gboolean go_pgdown_key_accel_callback(GtkAccelGroup *accel_group, GObject } else { + // reset culling layout + _expose_destroy_slots(self); const int iir = get_zoom(); const int scroll_by_rows = 4; /* This should be the number of visible rows. */ const int offset_delta = scroll_by_rows * iir;
[No CFG could be retrieved]
private static gboolean go_pgup_key_accel_callback = 0 ; g_callback select_toggle_callback.
This is not used by culling.
@@ -1,7 +1,4 @@ # Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0
[HessianTests->[testHessian1D->[_forward_over_back_hessian]],ForwardpropTest->[testGradPureForward->[f],testHVPMemory->[_hvp],testVariableWatchedFunction->[compute_jvps,_Model],testPushPopAccumulatorState->[f],testExceptionCustomGradientRecomputeGradForward->[_jacfwd],testElementwiseNNOps->[_test_gradients],testArgumentUnused->[_f],testJVPManual->[_jvp],testVariableReadInFunction->[f],testExceptionInCustomGradientNotSwallowed->[f],testNumericHigherOrder->[_test_gradients],testFunctionGrad->[_test_gradients],testReusingJVP->[_expected],testCustomGradient->[_test_gradients],testFusedBatchNormGradsInference->[_test_gradients],testFunctionGradInFunctionPureForward->[take_gradients->[f],take_gradients],testHVPCorrectness->[_hvp,fun],testHigherOrderPureForward->[_forwardgrad->[_compute_forwardgrad->[f]],_forwardgrad,f]],ControlFlowTests->[testInFunctionWhile->[_fprop_while],testOfFunctionWhile->[_has_loop],testInFunctionCond->[_fprop_cond],testOfFunctionCond->[_has_cond]],JacobianTests->[testJVPBatchCorrectness->[_jvp_batch,_jvp_batch_matmul]],_jacfwd->[_jvp],_jvp_batch_matmul->[_jacfwd],_test_gradients->[_grad,_test_gradients,_jacfwd],_forward_over_back_hessian->[_vectorize_parameters]]
Creates a new object from a given sequence number. Unconnected gradients are not supported.
Why the license block change?
@@ -70,10 +70,15 @@ public class CoordinatorDynamicConfigsResource @Consumes(MediaType.APPLICATION_JSON) public Response setDynamicConfigs(final CoordinatorDynamicConfig dynamicConfig, @HeaderParam(AuditManager.X_DRUID_AUTHOR) @DefaultValue("") final String author, - @HeaderParam(AuditManager.X_DRUID_COMMENT) @DefaultValue("") final String comment + @HeaderParam(AuditManager.X_DRUID_COMMENT) @DefaultValue("") final String comment, + @Context HttpServletRequest req ) { - if (!manager.set(CoordinatorDynamicConfig.CONFIG_KEY, dynamicConfig, new AuditInfo(author, comment))) { + if (!manager.set( + CoordinatorDynamicConfig.CONFIG_KEY, + dynamicConfig, + new AuditInfo(author, comment, req.getRemoteAddr()) + )) { return Response.status(Response.Status.BAD_REQUEST).build(); } return Response.ok().build();
[CoordinatorDynamicConfigsResource->[getDatasourceRuleHistory->[Interval,build],setDynamicConfigs->[AuditInfo,set,build],getDynamicConfigs->[build]]]
Sets a single coordinator dynamic config.
the req here will be just the node itself if it is hosting the console
@@ -1785,7 +1785,7 @@ obj_shard_task_sched(struct obj_auxi_args *obj_auxi, uint64_t epoch) * the IO involved shards' targets not changed. No any shard task * re-scheduled for this case, can complete the obj IO task. */ - if (obj_auxi->shard_task_scheded == 0) + if (obj_auxi->shard_task_scheded == 0 && obj_auxi->obj_task) tse_task_complete(obj_auxi->obj_task, 0); }
[No CFG could be retrieved]
This function is called by the TSE daemon when a shard is found. Reads the object's auxi from the object's request queue and returns the object's aux.
IMHO, obj_task should always be there. Do you have the daos log available? We better figure out why it is NULL.
@@ -30,7 +30,6 @@ #include <json-glib/json-glib.h> #define DARKTABLE_KEYRING PACKAGE_NAME - #define GFOREACH(item, list) \ for(GList *__glist = list; __glist && (item = __glist->data, TRUE); __glist = __glist->next)
[No CFG could be retrieved]
Permission notice for obtaining a copy of the Software. DARKTABLE keyring secret schema.
Don't rely on Gnome Keyring's "Login" collection anymore.
@@ -681,10 +681,10 @@ namespace System.Net.Quic.Implementations.MsQuic NetEventSource.Error(state, $"{state.TraceId} Exception occurred during handling {connectionEvent.Type} connection callback: {ex}"); } - if (state.ConnectTcs != null) + if (state.ConnectTcs != null && !state.ConnectTcs.Task.IsCompleted) { - state.ConnectTcs.SetException(ex); - state.ConnectTcs = null; + // This is opportunistic if we get exception and have ability to propagate it to caller. + state.ConnectTcs.TrySetException(ex); state.Connection = null; } else
[MsQuicConnection->[TraceId->[TraceId],Task->[Dispose],NativeCallbackHandler->[HandleEventConnected,TraceId,HandleEventStreamsAvailable,HandleEventShutdownInitiatedByTransport,HandleEventPeerCertificateReceived,HandleEventShutdownComplete,HandleEventShutdownInitiatedByPeer,HandleEventNewStream],HandleEventPeerCertificateReceived->[TraceId],Dispose->[Dispose,TraceId,SetClosing],HandleEventNewStream->[TryQueueNewStream],Dispose,TraceId]]
This function is called when a connection event is received from the peer. Returns an error if the node is not found.
This is the change I was hinting at when asking about the race condition.
@@ -89,7 +89,7 @@ namespace System.Windows.Forms } } - dialog.SetTitle(Title); + dialog.SetTitle(Title!); dialog.SetOptions(GetOptions()); SetFileTypes(dialog);
[FileDialog->[HandleVistaFileOk->[ProcessVistaFiles],SetFileTypes->[SetFileTypes]]]
OnBeforeVistaDialog method.
This feels wrong. How can it be null?
@@ -30,4 +30,18 @@ public class DefaultInputQueryParam extends AbstractQueryParam implements InputQ { return value; } + + /** + * @return true if the InputQueryParam references a param that must be defined through a <db:in-param> element in the configuration file. + * false if the InputQueryParam is defined through a literal or a MEL expression + */ + public boolean isDbInParam() + { + return isDbParam; + } + + public void setDbParam(boolean dbParam) + { + isDbParam = dbParam; + } }
[No CFG could be retrieved]
Returns the value of the parameter.
This class was supposed to be immutable. Find a different way to create it with the right value
@@ -223,6 +223,10 @@ class MenuComposer ->isActive() ->hasAccess($user) ->groupBy('severity') + ->leftJoin('devices', 'alerts.device_id', '=', 'devices.device_id') + ->where('devices.disabled', '=', '0') + ->where('devices.ignore', '=', '0') + ->groupBy('severity') ->pluck('severity'); if ($alert_status->contains('critical')) {
[MenuComposer->[compose->[groupBy,sortBy,get,pluck,keyBy,filter,contains,with,hasGlobalRead,count]]]
Composes the menu. Returns an array with the variables sensor_menu, wireless_menu and application_menu. Returns a list of all possible routing menus for the user. Displays a menu of typeahead alert messages.
Should we include this in "isActive" ? I mean, an alert can be active if and only if those 2 where conditions on the device are met.
@@ -405,7 +405,7 @@ frappe.views.ListRenderer = Class.extend({ }, get_indicator_html: function (doc) { - var indicator = frappe.get_indicator(doc, this.doctype); + var indicator = frappe.get_indicator(doc, this.doctype, (frappe.workflow.workflows[this.doctype] && frappe.workflow.workflows[this.doctype]['override_status']) || true); if (indicator) { return `<span class='indicator ${indicator[1]} filterable' data-filter='${indicator[2]}'>
[No CFG could be retrieved]
Renders the list tags. Internal method to set data on the object.
Line too long, let's break it using a variable?
@@ -159,6 +159,12 @@ export class SystemLayer { /** @const @private {!../../../src/service/vsync-impl.Vsync} */ this.vsync_ = Services.vsyncFor(this.win_); + + /** @const @private {!../../../src/service/timer-impl.Timer} */ + this.timer_ = Services.timerFor(this.win_); + + /** @private {?(number|string)} */ + this.muteMessageTimeout_ = null; } /**
[No CFG could be retrieved]
Creates a new system layer element. System Layer buttons.
Is this used?
@@ -666,8 +666,8 @@ func (s *LoginState) loginWithPromptHelper(username string, loginUI LoginUI, sec getSecretKeyFn := func(keyrings *Keyrings, me *User) (GenericKey, error) { ska := SecretKeyArg{ - All: true, - Me: me, + Me: me, + KeyType: AllSecretKeyTypes, } key, _, err := keyrings.GetSecretKeyWithPrompt(ska, secretUI, "Login") return key, err
[RunSecretSyncer->[SecretSyncer,UID],loginWithStoredSecret->[switchUser,pubkeyLoginHelper,checkLoggedIn],tryPubkeyLoginHelper->[pubkeyLoginHelper],passphraseLogin->[computeLoginPw,postLoginToServer,saveLoginState,getSaltAndLoginSession],getEmailOrUsername->[switchUser],computeLoginPw->[getCachedSharedSecret],AssertLoggedIn->[IsLoggedIn],logout->[Logout,clearPassphrase],checkLoggedIn->[IsLoggedInLoad],GetPassphraseStream->[GetCachedPassphraseStream],pubkeyLoginHelper->[loginResult,saveLoginState,getSaltAndLoginSession],getTriplesec->[stretchPassphrase],loginWithPromptHelper->[switchUser,tryPubkeyLoginHelper,getEmailOrUsername,checkLoggedIn,tryPassphrasePromptLogin],IsLoggedIn->[IsLoggedIn],loginWithPassphrase->[passphraseLogin,switchUser,tryPubkeyLoginHelper,checkLoggedIn],LoadSKBKeyring->[IsLoggedIn,LoadSKBKeyring],AssertLoggedOut->[IsLoggedIn],GetVerifiedTriplesec->[GetCachedTriplesec]]
loginWithPromptHelper will attempt to log in the user with the given username, prompting the user as needed.
This can be changed to DeviceKeyType, since pubkey login only makes sense for it
@@ -277,12 +277,8 @@ void RemoteClient::GetNextBlocks ( // Reset usage timer, this block will be of use in the future. block->resetUsageTimer(); - // Block is dummy if data doesn't exist. - // It means it has been not found from disk and not generated - if(block->isDummy()) - { + if (env->getMap().blockNotInDatabase(p)) surely_not_found_on_disk = true; - } // Block is valid if lighting is up-to-date and data exists if(block->isValid() == false)
[event->[notifyEvent,UpdatePlayerList],UpdatePlayerList->[getClientIDs]]
This method is called by the remote client to get the next blocks. This function is used to generate a set of blocks that are not in the queue.
I'm not sure that this is correct. The block could not be in the database but also not be in the not-exists cache. Why not just continue using the dummy flag?
@@ -60,7 +60,7 @@ final class SchemaBuilder implements SchemaBuilderInterface /** @var array<string, mixed> $graphqlConfiguration */ $graphqlConfiguration = $resourceMetadata->getGraphql() ?? []; foreach ($graphqlConfiguration as $operationName => $value) { - if ('query' === $operationName) { + if ('item_query' === $operationName || 'collection_query' === $operationName) { $queryFields += $this->fieldsBuilder->getQueryFields($resourceClass, $resourceMetadata, $operationName, [], []); continue;
[SchemaBuilder->[getSchema->[getGraphql,create,getQueryFields,getNodeQueryFields,get,set,getMutationFields,getWrappedType,getTypes,getGraphqlAttribute]]]
Returns a schema with all types.
Please separate into two conditions and change the value of `$itemConfiguration` and `$collectionConfiguration` to false in the `getQueryFields` call.
@@ -191,8 +191,8 @@ class Sorbet::Private::HiddenMethodFinder ret = [] rbi.each do |rbi_entry| - # skip synthetic constants - next if rbi_entry["name"]["kind"] == "UNIQUE" + # skip duplicated constant fields + next if rbi_entry["name"]["kind"] == "UNIQUE" and rbi_entry["kind"] == "STATIC_FIELD" source_entry = source_by_name[rbi_entry["name"]["name"]]
[serialize_values->[real_name],serialize_class->[real_name,serialize_constants],main->[main],symbols_id_to_name_real->[symbols_id_to_name_real],require_everything->[require_everything],main]
serialize_constants serializes the given class constants.
Should we not also skip MANGLE_RENAME?
@@ -12,6 +12,7 @@ import ( "github.com/smartcontractkit/chainlink/core/services/keystore" "github.com/smartcontractkit/chainlink/core/services/keystore/keys/vrfkey" "github.com/smartcontractkit/chainlink/core/utils" + stringutils "github.com/smartcontractkit/chainlink/core/utils/string_utils" ) // Bridge retrieves a bridges by name.
[JobProposal->[ParseInt,GetFeedsService,Is,GetJobProposal],Features->[GetConfig],Bridge->[BridgeORM,NewTaskType,Is,FindBridge],Chains->[EVMORM,Chains],FeedsManagers->[ListManagers,GetFeedsService],Bridges->[BridgeTypes,BridgeORM],FeedsManager->[ParseInt,GetFeedsService,GetManager,Is],Node->[ParseInt,EVMORM,Is,Node],Chain->[EVMORM,Chain,Is,UnmarshalText],P2PKeys->[GetAll,GetKeyStore,P2P],VRFKeys->[GetAll,GetKeyStore,VRF],CSAKeys->[GetAll,GetKeyStore,CSA],OCRKeyBundles->[OCR,GetAll,GetKeyStore],VRFKey->[Get,GetKeyStore,Cause,VRF],Job->[ParseInt,JobORM,FindJobTx,Is],Jobs->[JobORM,FindJobs]]
Bridge returns a bridge payload for a given task type.
Remove the name declaration
@@ -309,7 +309,10 @@ class MyModulesController < ApplicationController @direct_upload = ENV['PAPERCLIP_DIRECT_UPLOAD'] == "true" @my_module = MyModule.find_by_id(params[:id]) if @my_module - @project = @my_module.experiment.project + @experiment = @my_module.experiment + if @experiment + @project = @my_module.experiment.project + end else render_404 end
[MyModulesController->[unassign_samples->[samples]]]
load_vars loads the module variables from the request.
Favor modifier `if` usage when having a single-line body. Another good alternative is the usage of control flow `&&`/`||`.
@@ -30,7 +30,7 @@ import java.util.List; */ public class InetSocketAddressResolver extends AbstractAddressResolver<InetSocketAddress> { - private final NameResolver<InetAddress> nameResolver; + protected final NameResolver<InetAddress> nameResolver; /** * @param executor the {@link EventExecutor} which is used to notify the listeners of the {@link Future} returned
[InetSocketAddressResolver->[doResolve->[operationComplete->[setFailure,getPort,getNow,InetSocketAddress,isSuccess,cause,setSuccess],addListener],doIsResolved->[isUnresolved],doResolveAll->[operationComplete->[setFailure,size,getNow,getPort,InetSocketAddress,isSuccess,cause,add,setSuccess],addListener],close->[close]]]
Provides a resolver for the given network address. If the operation completes, the promise will be set to the current address.
consider adding a protected final accessor method instead of directly exposing the member variable, or make it package private for now.
@@ -1178,6 +1178,12 @@ public class JmsOutboundGateway extends AbstractReplyProducingMessageHandler imp LinkedBlockingQueue<javax.jms.Message> queue = this.replies.get(correlationId); if (queue == null) { if (this.correlationKey != null) { + Log debugLogger = LogFactory.getLog("si.jmsgateway.debug"); + if (debugLogger.isDebugEnabled()) { + Object siMessage = this.messageConverter.fromMessage(message); + debugLogger.debug("No pending reply for " + siMessage + " with correlationId: " + + correlationId + " pending replies: " + this.replies.keySet()); + } throw new RuntimeException("No sender waiting for reply"); } synchronized (this.earlyOrLateReplies) {
[JmsOutboundGateway->[setIdleReplyContainerTimeout->[setIdleReplyContainerTimeout],createSession->[createSession],setRequiresReply->[setRequiresReply],setContainerProperties->[setDestinationResolver,setReceiveTimeout,setConnectionFactory],stop->[stop],createConnection->[createConnection],GatewayReplyListenerContainer->[resolveDestinationName->[resolveDestinationName],recoverAfterListenerSetupFailure->[recoverAfterListenerSetupFailure],getDestinationDescription->[getDestinationDescription]],handleRequestMessage->[start,isRunning],start->[start],sendAndReceiveWithoutContainer->[start,determineRequestDestination,determineReplyDestination],IdleContainerStopper->[run->[isRunning,stop]],LateReplyReaper->[run->[getTimeStamp]],sendAndReceiveWithContainer->[start,determineRequestDestination]]]
This method is called when a message is received from the JMS channel.
??? Can't we rely on the normal supplied `logger` from the super `IntegrationObjectSupport` ?