patch: stringlengths 18–160k
callgraph: stringlengths 4–179k
summary: stringlengths 4–947
msg: stringlengths 6–3.42k
@@ -90,6 +90,10 @@ namespace Content.Server.Body.Behavior continue; } + // How much reagent is available to metabolise? + // This needs to be passed to other functions that have metabolism rate information, such that they don't "overmetabolise" a reagent. + var availableReagent = bloodstream.Solution.Solution.GetReagentQuantity(reagent.ReagentId); + //TODO BODY Check if it's a Toxin. If volume < _toxinTolerance, just remove it. If greater, add damage = volume * _toxinLethality //TODO BODY Check if it has BoozePower > 0. Affect drunkenness, apply damage. Proposed formula (SS13-derived): damage = sqrt(volume) * BoozePower^_alcoholExponent * _alcoholLethality / 10 //TODO BODY Liver failure.
[LiverBehavior->[Update->[CurrentVolume,ReagentId,Metabolize,ToList,TryRemoveReagent,TryIndex,TryGetComponent,Owner,Zero,Metabolism]]]
Update metabolises reagents in the bloodstream on each update tick, using the available reagent quantity.
Rather than relying on the metabolism behavior to check that it doesn't "overmetabolize", shouldn't metabolizable/liver just check how much is returned and clamp it appropriately? I still think that the available reagent amount should be passed in for things like chemicals that scale their power based on reagent quantity, but I don't think this is something the behaviors should have to worry about handling.
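The clamping the reviewer suggests can be sketched in Python (the actual patch is C#; the names and numbers below are hypothetical): the metabolising side takes the minimum of its rate and the available quantity, so callers cannot overdraw a reagent.

```python
def metabolize(available: float, rate: float) -> tuple[float, float]:
    # Clamp the amount removed so a reagent is never "overmetabolised":
    # take min(rate, available) instead of trusting the caller's rate.
    removed = min(rate, available)
    return available - removed, removed

remaining, removed = metabolize(available=3.0, rate=5.0)
# removed == 3.0, remaining == 0.0 -- only what was available is consumed
```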
@@ -55,7 +55,7 @@ MYPYC_OPT_IN = [MYPYC_RUN, MYPYC_RUN_MULTI] # time to run. cmds = { # Self type check - 'self': python_name + ' -m mypy --config-file mypy_self_check.ini -p mypy', + 'self': '"' + python_name + '" -m mypy --config-file mypy_self_check.ini -p mypy', # Lint 'lint': 'flake8 -j0', # Fast test cases only (this is the bulk of the test suite)
[main->[start_background_cmd,run_cmd,wait_background_cmd],main]
Defines the set of commands run by the test driver, including the mypy self type check.
this should use `shlex.quote`
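The `shlex.quote` suggestion can be sketched as follows; the interpreter path here is a made-up example, not one from the mypy script:

```python
import shlex

# A hypothetical interpreter path containing a space.
python_name = "/opt/my python/bin/python3"

# Hand-rolled '"' + path + '"' quoting breaks if the path itself contains
# quotes; shlex.quote escapes the value safely for a POSIX shell.
cmd = shlex.quote(python_name) + " -m mypy --config-file mypy_self_check.ini -p mypy"
```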
@@ -109,7 +109,7 @@ public class HoodieFlinkStreamer { .transform( "bucket_assigner", TypeInformation.of(HoodieRecord.class), - new KeyedProcessOperator<>(new BucketAssignFunction<>(conf))) + new BucketAssignOperator<>(new BucketAssignFunction<>(conf))) .setParallelism(conf.getInteger(FlinkOptions.BUCKET_ASSIGN_TASKS)) .uid("uid_bucket_assigner") // shuffle by fileId(bucket id)
[HoodieFlinkStreamer->[main->[transform,enableCheckpointing,map,appendKafkaProps,of,JCommander,uid,setMaxConcurrentCheckpoints,setStateBackend,FlinkStreamerConfig,FsStateBackend,getExecutionEnvironment,getInteger,getLogicalType,toFlinkConfig,setParallelism,execute,usage,setGlobalJobParameters,getBoolean,needsAsyncCompaction,exit]]]
Main method of the Flink Streamer. Key by file id.
Nice catch, can we fix the indentation? And there is another PR that is the same as this one; can we close that?
@@ -216,6 +216,17 @@ func BuildStorageFactory(masterConfig configapi.MasterConfig, server *kapiserver // keep Deployments in extensions for backwards compatibility, we'll have to migrate at some point, eventually storageFactory.AddCohabitatingResources(extensions.Resource("deployments"), apps.Resource("deployments")) + if server.Etcd.EncryptionProviderConfigFilepath != "" { + glog.V(4).Infof("Reading encryption configuration from %q", server.Etcd.EncryptionProviderConfigFilepath) + transformerOverrides, err := encryptionconfig.GetTransformerOverrides(server.Etcd.EncryptionProviderConfigFilepath) + if err != nil { + return nil, err + } + for groupResource, transformer := range transformerOverrides { + storageFactory.SetTransformer(groupResource, transformer) + } + } + return storageFactory, nil }
[IsUnspecified,GetKubeletClientConfig,AddCohabitatingResources,Resource,NewResourceConfig,GetOAuthClientCertCAs,NewConfig,ReadFile,FromInt,GetOperationIDAndTags,Info,HasPrefix,LoadX509KeyPair,Atoi,GetInternalKubeClient,ParsePortRange,StorageGroupsToEncodingVersion,MatchString,CipherSuitesOrDie,SplitHostPort,NewSchedulerServer,Validate,NewLeaseEndpointReconciler,New,DefaultExternalHost,IPNet,NewDefaultResourceEncodingConfig,DefaultAPIResourceConfigSource,NewREST,Doc,NewServerRunOptions,Errorf,NewString,ApplyWithStorageFactoryTo,NewDefaultStorageFactory,MustCompile,ChooseHostInterface,SetTransportDefaults,SetWatchCacheSizes,Create,DisableVersions,Infof,OriginControllerManagerAddFlags,NewAggregate,IsLoopback,BasicLongRunningRequestCheck,NewLeases,V,AddCert,Fatalf,DefaultIPNet,NewCMServer,ApplyAuthorization,ParseIP,ApplyTo,Get,SetResourceEncoding,DefaultSwaggerConfig,EnableVersions,SetVersionEncoding,Resolve,GetDisabledAPIVersionsForGroup,InitCloudProvider,Insert,Parse,NewRegistry,TLSVersionOrDie,String,ResourcePrefix,GetEnabledAPIVersionsForGroup,ParseDuration,DefaultAdvertiseAddress]
BuildStorageFactory builds the storage factory for the API server, applying etcd encryption transformer overrides when an encryption provider config is set.
This log line was added by me. I think it won't hurt but it simplifies a debugging process.
@@ -61,6 +61,10 @@ const ( WorkspaceFile = "workspace.json" // CachedVersionFile is the name of the file we use to store when we last checked if the CLI was out of date CachedVersionFile = ".cachedVersionInfo" + + // PulumiBookkeepingLocationEnvVar is a path to the folder where '.pulumi' folder is stored. + // The path should not include '.pulumi' itself. It defaults to the user's home dir if not specified. + PulumiBookkeepingLocationEnvVar = "PULUMI_BOOKKEEPING_LOCATION" ) // DetectProjectPath locates the closest project from the current working directory, or an error if not found.
[Dir,Join,Save,Stat,IsDir,Sprintf,TrimSuffix,WalkUp,Current,Name,Errorf,Ext,Getwd]
DetectProjectPath locates the closest project from the current working directory and returns its path, or an error if not found.
The name "bookkeeping" feels a bit unusual here for a public facing thing. Should it be `PULUMI_HOME_DIR`? or `PULUMI_HOME`?
@@ -4,6 +4,6 @@ class WorkMailer < ActionMailer::Base def untouched_works_notification(work_ids) @works = Work.where(id: work_ids) - mail(to: 'anannict@gmail.com', subject: '【Annict::Marie】未更新の作品を更新して下さい') + mail(to: 'anannict@gmail.com', subject: '【Annict DB】未更新の作品を更新して下さい') end end
[WorkMailer->[untouched_works_notification->[where,mail],default]]
Untouched Works Notifications.
Prefer double-quoted strings unless you need single quotes to avoid extra backslashes for escaping.
@@ -22,7 +22,7 @@ You may occasionally receive notifications about optional moderation actions; please <a href="<%= app_url(user_settings_path(:notifications)) %>">unsubscribe</a> if you do not want to receive these updates. </p> <p> - If you have any questions or feedback for us, please write to <%= SiteConfig.email_addresses[:default] %> and share your thoughts. + If you have any questions or feedback for us, please write to <%= email_link %> and share your thoughts. </p> <p>
[No CFG could be retrieved]
Renders the notification email body, including a feedback contact link.
replaced with `email_link` because this is a HTML template
@@ -29,8 +29,9 @@ const ( Ubuntu1804 Distro = "ubuntu-18.04" RHEL Distro = "rhel" CoreOS Distro = "coreos" - AKS Distro = "aks" - AKSDockerEngine Distro = "aks-docker-engine" // deprecated docker-engine distro + AKS Distro = "aks" // deprecated AKS 16.04 distro. Equivalent to aks-16.04. + AKSDockerEngine Distro = "aks-docker-engine" // deprecated docker-engine distro. + AKS1604 Distro = "aks-16.04" AKS1804 Distro = "aks-18.04" ACC1604 Distro = "acc-16.04" )
[No CFG could be retrieved]
Defines the supported Distro string constants, including the deprecated aks and aks-docker-engine distros.
Rather than deprecating this, we could just call it a "mutable" reference, so that at some point aks is redirected from 16.04 to 18.04.
@@ -529,9 +529,11 @@ class Optimizer(object): if not param.trainable: continue if param._ivar._grad_ivar() is not None: + Type = param._ivar._grad_ivar().type # create gradient variable grad_var = Variable( block=loss.block, + type=Type, name=param._ivar._grad_name(), stop_gradient=True, ivar=param._ivar._grad_ivar())
[MomentumOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],PipelineOptimizer->[_extract_section_ops->[_is_opt_role_op],_find_input_output->[update],minimize->[minimize,_create_vars,_split_program],_split_program->[_extract_section_ops,_find_persistable_vars,update,_find_input_output,_is_lr_role_op,_find_section_opt],_find_section_opt->[_extract_section_opt_ops]],DecayedAdagradOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],ModelAverage->[_add_average_apply_op->[_get_accumulator],_append_average_accumulate_op->[_add_accumulator]],AdamaxOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_finish_update->[_get_accumulator],_create_accumulators->[_add_accumulator]],Optimizer->[apply_gradients->[_create_optimization_pass,_process_distribute_lookuptable],apply_optimize->[apply_gradients,_create_optimization_pass],_create_param_lr->[_global_learning_rate],_create_optimization_pass->[_append_optimize_op,_create_global_learning_rate,_finish_update,_create_accumulators],minimize->[apply_optimize,backward],_process_distribute_lookuptable->[_create_global_learning_rate,_create_param_lr],backward->[_append_dgc_ops]],AdamOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_finish_update->[_get_accumulator],_create_accumulators->[_add_accumulator]],ExponentialMovingAverage->[apply->[restore]],LookaheadOptimizer->[minimize->[minimize]],RMSPropOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],SGDOptimizer->[_append_optimize_op->[_create_param_lr]],FtrlOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],AdagradOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],LambOptimizer->[_append_optimize_op->[_get_accumulator,_create_param_lr]],LarsMomentumOptimiz
er->[_append_optimize_op->[_get_accumulator,_create_param_lr],_create_accumulators->[_add_accumulator]],RecomputeOptimizer->[apply_gradients->[apply_gradients],minimize->[apply_optimize,backward],apply_optimize->[apply_optimize]],AdadeltaOptimizer->[_append_optimize_op->[_get_accumulator],_create_accumulators->[_add_accumulator]],DGCMomentumOptimizer->[_append_optimize_op->[_is_use_dgc,_get_accumulator,_create_param_lr],_append_clip_norm->[_clip_by_norm],_append_dgc_ops->[_add_auto_increment_var,_is_use_dgc]],DpsgdOptimizer->[_append_optimize_op->[_create_param_lr]]]
Creates gradient variables for trainable parameters, preserving the type of each parameter's gradient ivar.
use lower case? `Type` -> `type`
@@ -92,6 +92,17 @@ static const OPENSSL_CTX_METHOD rand_crng_ossl_ctx_method = { rand_crng_ossl_ctx_free, }; +static int prov_crngt_compare_previous(const unsigned char *prev, + const unsigned char *cur, + size_t sz) +{ + const int res = memcmp(prev, cur, sz) != 0; + + if (!res) + ossl_set_error_state(); + return res; +} + size_t prov_crngt_get_entropy(PROV_DRBG *drbg, unsigned char **pout, int entropy, size_t min_len, size_t max_len,
[prov_crngt_cleanup_entropy->[OPENSSL_secure_clear_free],int->[rand_pool_detach,rand_pool_reattach,EVP_Digest,memcpy,EVP_MD_fetch,prov_pool_acquire_entropy,EVP_MD_free],prov_crngt_get_entropy->[rand_pool_add,rand_pool_detach,rand_pool_new,OPENSSL_cleanse,PROV_LIBRARY_CONTEXT_OF,memcpy,memcmp,crngt_get_entropy,openssl_ctx_get_data,rand_pool_bytes_needed,rand_pool_free],void->[OPENSSL_zalloc,rand_pool_new,OPENSSL_cleanse,OPENSSL_free,crngt_get_entropy,rand_pool_free]]
prov_crngt_get_entropy gathers entropy for the provider-side continuous RNG test, comparing each block against the previous one.
Should this be configurable? I.e. on by default but able to be turned off. Should it be able to be disabled at compile time? I had to do similar things for #12745.
@@ -63,9 +63,7 @@ public class TestMultiFS extends HoodieClientTestHarness { @AfterEach public void tearDown() throws Exception { - cleanupSparkContexts(); - cleanupDFS(); - cleanupTestDataGenerator(); + cleanupResources(); } protected HoodieWriteConfig getHoodieWriteConfig(String basePath) {
[TestMultiFS->[readLocalWriteHDFS->[getHoodieWriteConfig]]]
This method is called after each test to clean up resources.
@garyli1019 : Is this one of the places where we are missing the cleanup of resources ?
@@ -1693,8 +1693,15 @@ def compile_to_numba_ir(mk_func, glbls, typingctx=None, targetctx=None, # perform type inference if typingctx is available and update type # data structures typemap and calltypes if typingctx: - f_typemap, f_return_type, f_calltypes, _ = typed_passes.type_inference_stage( - typingctx, targetctx, f_ir, arg_typs, None) + f_typemap, f_return_type, f_calltypes, _ = ( + typed_passes.type_inference_stage( + typingctx, + targetctx, + f_ir, + arg_typs, + None, + ) + ) # remove argument entries like arg.a from typemap arg_names = [vname for vname in f_typemap if vname.startswith("arg.")] for a in arg_names:
[find_build_sequence->[require,get_definition],mk_loop_header->[mk_unique_var],mk_alloc->[mk_unique_var],find_global_value->[find_global_value,get_definition],rename_labels->[find_topo_order],resolve_func_from_module->[resolve_mod->[get_definition,resolve_mod],resolve_mod],gen_np_call->[get_np_ufunc_typ,mk_unique_var],set_index_var_of_get_setitem->[is_getitem,is_setitem],get_ir_of_code->[_create_function_from_code_obj,DummyPipeline],find_topo_order->[_dfs_rec->[_dfs_rec],_dfs_rec],find_callname->[require],find_const->[require,get_definition],simplify_CFG->[rename_labels],raise_on_unsupported_feature->[get_definition,guard],compile_to_numba_ir->[remove_dels,get_name_var_table,update,next,replace_var_names,mk_unique_var,add_offset_to_labels],apply_copy_propagate->[replace_vars_stmt,replace_vars_inner],visit_vars_inner->[visit_vars_inner],check_and_legalize_ir->[enforce_no_phis,enforce_no_dels],convert_size_to_var->[mk_unique_var],next_label->[next],flatten_labels->[add_offset_to_labels,find_max_label],index_var_of_get_setitem->[is_getitem,is_setitem],canonicalize_array_math->[get_np_ufunc_typ,find_topo_order,mk_unique_var],simplify->[get_name_var_table,remove_dead,apply_copy_propagate,dprint_func_ir,copy_propagate,restore_copy_var_names,simplify_CFG],get_definition->[get_definition],_mk_range_args->[mk_unique_var],mk_range_block->[mk_unique_var],restore_copy_var_names->[replace_var_names,mk_unique_var],convert_code_obj_to_function->[get_definition,_create_function_from_code_obj],get_call_table->[find_topo_order],_MaxLabel]
Compile a function or a make_function node to Numba IR.
Maybe create a temporary for the outputs and subsequently unpack? The braces around the function call seems strange.
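The reviewer's alternative, binding the call to a temporary and then unpacking, can be sketched like this; `type_inference_stage` below is a stand-in stub, not numba's real function:

```python
def type_inference_stage(typingctx, targetctx, f_ir, arg_typs, locals_):
    # Stub standing in for numba's typed_passes.type_inference_stage.
    return {}, None, {}, None

# Instead of wrapping the call in extra parentheses to satisfy line
# length, assign the result to a temporary and unpack on the next line.
inference = type_inference_stage(None, None, None, (), None)
f_typemap, f_return_type, f_calltypes, _ = inference
```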
@@ -660,8 +660,16 @@ public abstract class HoodieTable<T extends HoodieRecordPayload, I, K, O> implem public HoodieTableMetadata metadata() { if (metadata == null) { - metadata = HoodieTableMetadata.create(hadoopConfiguration.get(), config.getBasePath(), config.getSpillableMapBasePath(), - config.useFileListingMetadata(), config.getFileListingMetadataVerify(), config.isMetricsOn(), config.shouldAssumeDatePartitioning()); + HoodieEngineContext engineContext = context; + if (engineContext == null) { + // This is to handle scenarios where this is called at the executor tasks which do not have access + // to engine context, and it ends up being null (as its not serializable and marked transient here). + engineContext = new HoodieLocalEngineContext(hadoopConfiguration.get()); + } + + metadata = HoodieTableMetadata.create(engineContext, config.getBasePath(), config.getSpillableMapBasePath(), + config.useFileListingMetadata(), config.getFileListingMetadataVerify(), config.isMetricsOn(), + config.shouldAssumeDatePartitioning()); } return metadata; }
[HoodieTable->[deleteInvalidFilesByPartitions->[delete],requireSortedRecords->[getBaseFileFormat],validateSchema->[getMetaClient],getLogFileFormat->[getLogFileFormat],validateUpsertSchema->[validateSchema],validateInsertSchema->[validateSchema],getFileSystemViewInternal->[getFileSystemView],getActiveTimeline->[getActiveTimeline],reconcileAgainstMarkers->[deleteInvalidFilesByPartitions],getHadoopConf->[getHadoopConf],getLogDataBlockFormat->[getBaseFileFormat],getBaseFileFormat->[getBaseFileFormat]]]
Returns the HoodieTableMetadata instance for this table, creating it on first use (with a local engine context when the transient context is unavailable).
a more elegant way to do this would be through a `getEngineContext()` method that reinits lazily
@@ -168,7 +168,7 @@ func (korm ORM) CreateEthTransactionForUpkeep(sqlDB *sql.DB, upkeep UpkeepRegist if etx.ID == 0 { return etx, errors.New("a keeper eth_tx with insufficient eth is present, not creating a new eth_tx") } - err = korm.DB.First(&etx).Error + err = tx.First(&etx).Error if err != nil { return etx, errors.Wrap(err, "keeper find eth_tx after inserting") }
[BatchDeleteUpkeepsForJob->[WithContext,Exec],Registries->[WithContext,Find],UpsertRegistry->[WithContext,AssignmentColumns,Create,Clauses],SetLastRunHeightForUpkeepOnJob->[Exec],CreateEthTransactionForUpkeep->[Scan,First,New,DefaultQueryCtx,CheckOKToTransmit,QueryRowContext,Address,Wrap],EligibleUpkeeps->[Order,Find,WithContext,Preload,Where,Joins],RegistryForJob->[WithContext,First],UpsertUpkeep->[WithContext,AssignmentColumns,Create,Clauses],LowestUnsyncedID->[Select,WithContext,Where,Scan,Model,Row]]
CreateEthTransactionForUpkeep creates a new eth_tx in the database for the given upkeep registration.
This sort of mistake might be best avoided by encapsulating this logic in one place
@@ -104,7 +104,7 @@ const readCache = new Map(); * * @param {string} path * @param {string=} optionsHash - * @return {{contents: string, hash: string}} + * @return {Promise<{contents: string, hash: string}>} */ function batchedRead(path, optionsHash) { let read = readCache.get(path);
[No CFG could be retrieved]
Reads a file and returns its contents and hash.
This type had been incorrect.
@@ -78,7 +78,7 @@ <%= hidden_field_tag :my_module_id, @my_module.id %> <%= f.hidden_field :project_id, :value => @my_module.experiment.project.id %> <%= f.hidden_field :name, :value => t("tags.create.new_name") %> - <%= f.hidden_field :color, :value => TAG_COLORS[0] %> + <%= f.hidden_field :color, :value => <%= Constants::TAG_COLORS[0] %> %> <%= f.button class: "btn btn-primary" do %> <span class="glyphicon glyphicon-tag"></span> <span class="hidden-xs"><%=t "experiments.canvas.modal_manage_tags.create_new" %></span>
[No CFG could be retrieved]
Renders the hidden fields of the new-tag form, including the tag's default color.
here!!!! change to value: Constants::TAG_COLORS[0]
@@ -2016,7 +2016,9 @@ void WalletImpl::doRefresh() } } catch (const std::exception &e) { setStatusError(e.what()); - } + break; + }while(!rescan && (rescan=m_refreshShouldRescan.exchange(false))); // repeat if not rescanned and rescan was requested + if (m_wallet2Callback->getListener()) { m_wallet2Callback->getListener()->refreshed(); }
[No CFG could be retrieved]
doRefresh runs the wallet refresh loop, repeating while a rescan is requested; on exception it sets the error status and stops.
added: stop loop on exception
@@ -77,6 +77,8 @@ func (s *Secret) Validate() error { case "service_now": errors = requiredField(kvMap["username"], requiredServiceNowUsernameError, errors) errors = requiredField(kvMap["password"], requiredServiceNowPasswordError, errors) + case "chef-server": + errors = requiredField(kvMap["key"], requiredChefServerOrganizationKeyError, errors) } // Eventually I'd like to switch our error handling to be handle an aggregation of errors
[Merge->[WithFields,Debug],ProcessInvalid,Unmarshal]
Validate validates the Secret object, returning nil if there are no errors.
As discussed, I have added the type `chef-server`, but it is subject to change. At present, validation is only required for the admin `key`.
@@ -57,10 +57,10 @@ class VersionGuesser } /** - * @param array $packageConfig - * @param string $path Path to guess into + * @param array<string, mixed> $packageConfig + * @param string $path Path to guess into * - * @return null|array versionData, 'version', 'pretty_version' and 'commit' keys, if the version is a feature branch, 'feature_version' and 'feature_pretty_version' keys may also be returned + * @phpstan-return Version|null */ public function guessVersion(array $packageConfig, $path) {
[VersionGuesser->[versionFromGitTags->[normalize,execute],guessFeatureVersion->[normalizeBranch,isFeatureBranch,execute],guessHgVersion->[normalizeBranch,getBranches,guessFeatureVersion,execute],guessSvnVersion->[normalizeBranch,normalize,execute],guessFossilVersion->[normalizeBranch,normalize,execute],guessGitVersion->[versionFromGitTags,isFeatureBranch,guessFeatureVersion,normalizeBranch,splitLines,execute],guessVersion->[guessHgVersion,guessSvnVersion,postprocess,guessFossilVersion,guessGitVersion]]]
Guess the version of the current repository.
I would keep the `@return null|array` for other tooling than phpstan
@@ -17,10 +17,15 @@ export const GTAG_CONFIG = /** @type {!JsonObject} */ ({ 'configRewriter': { 'url': 'https://www.googletagmanager.com/gtag/amp', + 'varGroups': { + 'dns': { + 'dr': 'DOCUMENT_REFERRER', + 'dl': 'SOURCE_URL', + }, + }, }, 'vars': { 'eventValue': '0', - 'documentLocation': 'SOURCE_URL', 'clientId': 'CLIENT_ID(AMP_ECID_GOOGLE,,_ga)', 'dataSource': 'AMP', 'anonymizeIP': 'aip',
[No CFG could be retrieved]
The GTAG config object for amp-analytics, including the configRewriter URL and default vars.
Is this the group we wanted enabled by default?
@@ -126,7 +126,6 @@ ik_rec_free(struct btr_instance *tins, struct btr_record *rec, void *args) umem_id_t *rec_ret = (umem_id_t *) args; /** Provide the buffer to user */ *rec_ret = rec->rec_mmid; - rec->rec_mmid = UMMID_NULL; return 0; } utest_free(ik_utx, irec->ir_val_mmid);
[No CFG could be retrieved]
ik_rec_free frees an IK record, handing rec_mmid back to the caller via args.
Is it redundant, or is there any side effect? As I understand it, if the caller offers the @args parameter, then it needs to free the space (pointed to by rec->rec_mmid) by itself. The @rec will be freed after ik_rec_free() returns, so whether or not we reset rec_mmid will not cause a correctness issue, right?
@@ -23,6 +23,16 @@ module AuthenticationHelper Authentication::Providers.enabled_for_user(user) end + def signed_up_with(user = current_user) + providers = Authentication::Providers.enabled_for_user(user) + + # If the user did not authenticate with any provider, they signed up with an email. + auth_method = providers.presence ? providers.map(&:official_name).to_sentence : "Email & Password" + verb = providers.size > 1 ? "any of those" : "that" + + "Reminder: you used #{auth_method} to authenticate your account, so please use #{verb} to sign in if prompted." + end + def available_providers_array Authentication::Providers.available.map(&:to_s) end
[authentication_enabled_providers->[get!,map],authentication_enabled_providers_for_user->[enabled_for_user],authentication_provider->[get!],forem_creator_flow_enabled?->[waiting_on_first_user?,enabled?],available_providers_array->[map],waiting_on_first_user?->[waiting_on_first_user],authentication_provider_enabled?->[include?],authentication_available_providers->[const_get,titleize,map],invite_only_mode_or_no_enabled_auth_options->[none?,invite_only_mode,allow_email_password_registration]]
Returns an array of authentication providers that are enabled for the given user.
I don't know why we used to render `Reminder: You used`, but I took the liberty of lowercasing it because...grammar.
@@ -39,6 +39,7 @@ import org.apache.zeppelin.helium.HeliumPackage; import org.apache.zeppelin.interpreter.InterpreterGroup; import org.apache.zeppelin.interpreter.remote.RemoteAngularObjectRegistry; import org.apache.zeppelin.interpreter.thrift.InterpreterCompletion; +import org.apache.zeppelin.rest.message.InterpreterSettingListForNoteBind; import org.apache.zeppelin.user.AuthenticationInfo; import org.apache.zeppelin.interpreter.InterpreterOutput; import org.apache.zeppelin.interpreter.InterpreterResult;
[NotebookServer->[pushAngularObjectToRemoteRegistry->[broadcastExcept],checkpointNotebook->[serializeMessage],onRemove->[broadcast,notebook],onLoad->[broadcast],updateParagraph->[permissionError,getOpenNoteId,broadcast],unicastNoteList->[generateNotebooksInfo,unicast],onOutputAppend->[broadcast],broadcast->[serializeMessage],sendNote->[permissionError,serializeMessage,addConnectionToNote],broadcastExcept->[serializeMessage],insertParagraph->[permissionError,getOpenNoteId,broadcastNote,insertParagraph],generateNotebooksInfo->[notebook],clearParagraphOutput->[permissionError,getOpenNoteId,clearParagraphOutput,broadcastNote],onStatusChange->[broadcast],cancelParagraph->[permissionError,getOpenNoteId],ParagraphListenerImpl->[onOutputAppend->[broadcast],onProgressUpdate->[broadcast],onOutputUpdate->[broadcast],afterStatusChange->[broadcastNote]],removeNote->[permissionError,removeNote,broadcastNoteList],broadcastNote->[broadcast],unicastUpdateNotebookJobInfo->[serializeMessage],removeAngularFromRemoteRegistry->[broadcastExcept],onMessage->[notebook],onUpdate->[broadcast,notebook],createNote->[broadcastNoteList,createNote,serializeMessage,addConnectionToNote],sendAllConfigurations->[serializeMessage],getNoteRevision->[getNoteRevision,serializeMessage],unicastNotebookJobInfo->[serializeMessage],broadcastNoteList->[generateNotebooksInfo,broadcastAll],updateNote->[permissionError,broadcastNote,broadcastNoteList],broadcastToNoteBindedInterpreter->[notebook],onOutputUpdated->[broadcast],removeConnectionFromAllNote->[removeConnectionFromNote],getParagraphJobListener->[ParagraphListenerImpl],sendHomeNote->[permissionError,serializeMessage,addConnectionToNote,removeConnectionFromAllNote],angularObjectUpdated->[broadcastExcept],removeParagraph->[permissionError,getOpenNoteId,broadcastNote,removeParagraph],sendAllAngularObjects->[serializeMessage],broadcastReloadedNoteList->[generateNotebooksInfo,broadcastAll],moveParagraph->[permissionError,getOpenNoteId,broadcastNote,moveParagrap
h],permissionError->[serializeMessage],runParagraph->[permissionError,getOpenNoteId,broadcast],unicast->[serializeMessage],completion->[getOpenNoteId,serializeMessage,completion],importNote->[broadcastNoteList,importNote,broadcastNote],pushAngularObjectToLocalRepo->[broadcastExcept],removeAngularObjectFromLocalRepo->[broadcastExcept],broadcastAll->[serializeMessage],cloneNote->[broadcastNoteList,getOpenNoteId,serializeMessage,addConnectionToNote,cloneNote],listRevisionHistory->[listRevisionHistory,serializeMessage]]]
NotebookServer handles Zeppelin websocket messages, including importing notes and broadcasting updates.
I think you'd better move this class into a proper one. It's only used in the binding method.
@@ -278,7 +278,7 @@ void InstanceSaveManager::LoadResetTimes() continue; // the reset_delay must be at least one day - uint32 period = mapDiff->resetTime; + uint32 period = uint32(((mapDiff->resetTime * sWorld->getRate(RATE_INSTANCE_RESET_TIME))/DAY) * DAY); if (period < DAY) period = DAY;
[_ResetOrWarnAll->[_ResetSave,,GetInstanceSave,ScheduleReset],PlayerBindToInstance->[RemovePlayer,AddPlayer,ASSERT],PlayerUnbindInstanceNotExtended->[RemovePlayer],PlayerGetInstanceSave->[PlayerGetBoundInstance],ASSERT->[ASSERT],PlayerIsPermBoundToInstance->[PlayerGetBoundInstance],UnbindAllFor->[PlayerUnbindInstance],Update->[ScheduleReset],DeleteInstanceSaveIfNeeded->[GetInstanceSave,DeleteInstanceSaveIfNeeded],InsertToDB->[ASSERT],LoadCharacterBinds->[RemovePlayer,AddPlayer,GetInstanceSave],PlayerUnbindInstance->[RemovePlayer],PlayerGetDestinationInstanceId->[PlayerGetBoundInstance],CopyBinds->[PlayerGetBoundInstance,PlayerBindToInstance],RemovePlayer->[DeleteInstanceSaveIfNeeded],LoadInstanceSaves->[AddInstanceSave]]
Load the global reset times for all instances; if there is a global reset time for this map id and difficulty, a reset can be scheduled.
Whats going on here? `((X * Y)/Z) * Z` = `X * Y` is it not?
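The identity `((X * Y)/Z) * Z == X * Y` only holds for exact division; with integer arithmetic (as in the C++ patch) the inner division truncates, so the expression rounds the period down to a whole multiple of `DAY`. A small Python sketch using floor division to mirror that:

```python
DAY = 86400  # seconds per day

def rounded_period(reset_time: int, rate: float) -> int:
    # int() truncates and // floors, so the result is the largest
    # multiple of DAY not exceeding reset_time * rate -- not a no-op.
    return int(reset_time * rate) // DAY * DAY

rounded_period(90000, 1.0)   # 86400, not 90000
rounded_period(86400, 0.5)   # 0 -- which is why the code then clamps to DAY
```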
@@ -1534,13 +1534,6 @@ $config['os'][$os]['icon'] = 'hp'; $config['os'][$os]['over'][0]['graph'] = 'device_toner'; $config['os'][$os]['over'][0]['text'] = 'Toner'; -$os = 'richoh'; -$config['os'][$os]['group'] = 'printer'; -$config['os'][$os]['text'] = 'Ricoh Printer'; -$config['os'][$os]['type'] = 'printer'; -$config['os'][$os]['over'][0]['graph'] = 'device_toner'; -$config['os'][$os]['over'][0]['text'] = 'Toner'; - $os = 'okilan'; $config['os'][$os]['group'] = 'printer'; $config['os'][$os]['text'] = 'OKI Printer';
[addServer]
Defines per-OS configuration values for printer devices (group, text, type, and toner graph).
Unfortunately you've removed the richoh code here :( Please add it back in.
@@ -0,0 +1,18 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. +// See the LICENSE file in the project root for more information. + +using System; + +internal static partial class Interop +{ + internal static partial class User32 + { + public struct COPYDATASTRUCT + { + public UIntPtr dwData; + public uint cbData; + public IntPtr lpData; + } + } +}
[No CFG could be retrieved]
No Summary Found.
Why have these types been changed?
@@ -625,8 +625,9 @@ def get_sqd_params(rawfile): return sqd -def read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, stim='>', - slope='-', stimthresh=1, preload=False, verbose=None): +def read_raw_kit(input_fname, mrk=None, mrk2=None, elp=None, hsp=None, + stim='>', slope='-', stimthresh=1, preload=False, + verbose=None): """Reader function for KIT conversion to FIF Parameters
[RawKIT->[_set_dig_neuromag->[append,enumerate,ValueError,asarray],_set_stimchannels->[NotImplementedError,str,ValueError,isinstance,pick_types],__repr__->[join,basename,len],_read_segment->[int,vstack,NotImplementedError,arange,unpack,len,copy,sum,seek,read,ValueError,fromfile,reshape,info,open,range,float,array],_set_dig_kit->[_decimate_points,get_ras_to_neuromag_trans,read_hsp,read_elp,fit_matched_points,len,read_mrk,apply_trans,format,isinstance,warning,_set_dig_neuromag],__init__->[sum,apply_trans,info,enumerate,Info,_loc_to_trans,abspath,int,norm,startswith,cross,radians,range,zeros,time,list,get_sqd_params,len,_set_stimchannels,ValueError,zip,float,vstack,arange,sin,_read_segment,cos,_set_dig_kit,array],read_stim_ch->[int,empty,_read_segment,range,pick_types]],read_raw_kit->[RawKIT],get_sqd_params->[dict,append,unpack,seek,read,fromfile,zeros,ones,xrange,ValueError,open,reshape,sysname,array]]
read_raw_kit is the reader function for KIT conversion to FIF; it reads a raw SQD file and returns a RawKIT instance.
A problem with inserting new arguments is that it breaks backwards compatibility in cases where people call the function with non keyword arguments (`read_raw_kit(input_fname, mrk, elp, ...)`). Before making this change we should be sure that nobody is using this function in their scripts that way. Otherwise the safer alternative is to add new arguments at the end.
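The backward-compatibility hazard the reviewer describes can be sketched; the function below mimics the changed signature, but its body is hypothetical:

```python
# Old signature: read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, ...)
# The patch inserts mrk2 second, shifting every later positional argument.
def read_raw_kit(input_fname, mrk=None, mrk2=None, elp=None, hsp=None):
    return {"mrk": mrk, "mrk2": mrk2, "elp": elp, "hsp": hsp}

# A caller written against the old API, passing positionally:
result = read_raw_kit("data.sqd", "markers.mrk", "points.elp")
# The value intended for elp silently lands in mrk2; no error is raised.
```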
@@ -509,7 +509,7 @@ class Yoast_Notification_Center_Test extends WPSEO_UnitTestCase { $a = new Yoast_Notification( 'a' ); $this->assertFalse( Yoast_Notification_Center::maybe_dismiss_notification( $a ) ); - $b = new Yoast_Notification( 'b', array( 'id' => uniqid( 'id' ) ) ); + $b = new Yoast_Notification( 'b', [ 'id' => uniqid( 'id' ) ] ); $this->assertFalse( Yoast_Notification_Center::maybe_dismiss_notification( $b ) ); }
[Yoast_Notification_Center_Test->[test_construct->[assertTrue],test_notification_is_new->[get_new_notifications,get_notification_center,assertContains,assertInternalType,add_notification],test_get_sorted_notifications_by_type->[get_notification_center,get_sorted_notifications,assertEquals,add_notification],test_update_storage_non_persistent->[assertFalse,get_notification_center,update_storage,add_notification],test_update_storage->[to_array,get_notification_center,update_storage,assertEquals,assertInternalType,add_notification],test_dismiss_notification_is_per_site->[assertTrue,create,skipWithoutMultisite,get_dismissal_key,assertFalse],test_is_notification_dismissed_is_per_site->[assertTrue,create,skipWithoutMultisite,get_dismissal_key,assertFalse,markTestSkipped],test_retrieve_notifications_from_storage_strips_nonces->[get_sample_notifications,to_array,get_nonce,setup_current_notifications,get_id,get_notifications,assertSame],test_remove_notification_by_id_when_no_notification_is_found->[remove_notification_by_id,method,getMock],test_add_notification->[get_notifications,get_notification_center,assertEquals,add_notification],test_is_notification_dismissed->[assertTrue,get_notification_center,is_notification_dismissed],test_get_notification_count->[get_notification_center,assertEquals,add_notification,get_notification_count],test_remove_notification_by_id_when_notification_is_found->[returnValue,remove_notification_by_id,will,getMock,method],test_add_notification_twice_persistent->[get_notifications,get_notification_center,assertEquals,add_notification],test_clear_dismissal_empty_key->[get_notification_center,assertFalse,clear_dismissal],test_get_sorted_notifications->[get_notification_center,get_sorted_notifications,assertEquals,assertInternalType,add_notification],test_restore_notification_clears_user_meta->[assertTrue,get_dismissal_key],get_notification_center->[setup_current_notifications],test_update_storage_strips_nonces->[get_id,get_sample_notifications,update_storage,assertSame],test_update_nonce_on_re_add_notification->[get_nonce,get_notification_center,get_notifications,assertNotEquals,assertInternalType,add_notification],setUp->[add_cap,create],test_resolved_notifications->[get_notification_center,get_resolved_notification_count,assertEquals],test_is_notification_dismissed_falls_back_to_user_meta->[assertTrue,assertEmpty,assertSame,get_dismissal_key],test_remove_storage_without_notifications->[assertFalse,remove_storage,has_stored_notifications],test_restore_notification_is_per_site->[assertTrue,create,skipWithoutMultisite,get_dismissal_key,assertFalse],test_display_notifications->[returnValue,expectOutput,display_notifications,get_notification_center,will,getMock,add_notification],test_maybe_dismiss_notification->[assertFalse],test_remove_storage_with_notifications->[assertTrue,setup_current_notifications,update_storage,remove_storage,add_notification,has_stored_notifications],test_display_dismissed_notification->[display_notifications,get_notification_center,add_notification,expectOutput],test_has_stored_notifications->[returnValue,will,getMock,assertEquals,has_stored_notifications],test_get_sorted_notifications_by_priority->[get_notification_center,get_sorted_notifications,assertEquals,add_notification],test_clear_dismissal_as_string->[assertTrue,get_notification_center,get_dismissal_key,is_notification_dismissed,assertFalse,clear_dismissal],test_clear_dismissal->[assertTrue,get_notification_center,get_dismissal_key,is_notification_dismissed,assertFalse,clear_dismissal],test_get_sorted_notifications_empty->[get_notification_center,get_sorted_notifications,assertInternalType,assertEquals],test_display_notifications_not_for_current_user->[returnValue,expectOutput,display_notifications,get_notification_center,will,getMock,add_notification],test_add_notification_twice->[get_notifications,get_notification_center,assertEquals,add_notification],tearDown->[deactivate_hook]]]
Unit tests for Yoast_Notification_Center: adding, storing, sorting, dismissing, and displaying notifications.
Avoid variables with short names like $b. Configured minimum length is 3.
@@ -127,6 +127,9 @@ func addDefaultingFuncs(scheme *runtime.Scheme) { } }, func(obj *DNSConfig) { + if len(obj.ClusterDomain) == 0 { + obj.ClusterDomain = "cluster.local" + } if len(obj.BindNetwork) == 0 { obj.BindNetwork = "tcp4" }
[AddConversionFuncs,Has,Meta,IsNotRegisteredError,ConvertToVersion,Decode,DefaultConvert,NewCodecFactory,String,LegacyCodec,NewString,AddDefaultingFuncs,Encode]
addDefaultingFuncs registers defaulting functions for config types; DNSConfig now defaults ClusterDomain to "cluster.local" in addition to the existing BindNetwork default of "tcp4".
@smarterclayton should this inform the routing config subdomain or leave that default as is?
@@ -295,6 +295,12 @@ func NewApplication(config *orm.Config, ethClient eth.Client, advisoryLocker pos } app.HeadTracker = services.NewHeadTracker(store, headTrackables) + head, err := app.HeadTracker.HighestSeenHeadFromDB() + if err != nil { + return nil, err + } + logBroadcaster.SetLatestHeadFromStorage(head) + return app, nil }
[NewBox->[NewBox],ArchiveJob->[ArchiveJob],stop->[Stop],Start->[Start],AddServiceAgreement->[AddJob],AddJob->[AddJob]]
NewApplication constructs the application and its services; after creating the head tracker, the highest seen head from the DB is used to seed the log broadcaster.
I think a better place for this would be inside of `broadcaster#Start()`
@@ -232,13 +232,15 @@ func (a *AuthzServer) getAllowedMap( // Fetches the id of the current user PLUS the team ids for that user subjects := auth_context.FromContext(ctx).Subjects - resp, err := a.filterHandler.FilterAuthorizedPairs(ctx, subjects, mapByResourceAndAction, methodsInfo) + inputPairs := pairs.GetKeys(mapByResourceAndAction) + + filteredPairs, err := a.introspectionHandler.FilterAuthorizedPairs(ctx, subjects, inputPairs) if err != nil { log.WithError(err).Debug("Error on client.FilterAuthorizedPairs") return nil, err } endpointMap, err := pairs.GetEndpointMapFromResponse( - resp.Pairs, resp.MethodsInfo, resp.MapByResourceAndAction, true) + filteredPairs, methodsInfo, mapByResourceAndAction, true) if err != nil { log.WithError(err).Debug("Error on pairs.GetEndpointMapFromResponse") return nil, err
[IntrospectSome->[Join,Extract,InvertMapNonParameterized,getAllowedMap,GetInfoMap,Debugf],IntrospectAll->[GetInfoMap,getAllowedMap,InvertMapNonParameterized],GetVersion->[GetVersion],Introspect->[Error,Extract,getAllowedMap,InvertMapParameterized,GetInfoMap],getAllowedMap->[FromContext,GetEndpointMapFromResponse,FilterAuthorizedPairs,Extract,WithError,Debug],WithFields,Join,Warn,Sprintf,Debugf,Info,Debug]
getAllowedMap filters the given resource/action pairs to those the calling subjects are authorized for and returns them as an endpoint map.
There's only one set of `mapByResourceAndAction` and `methodsInfo`, so they're no longer passed in, but retrieved in the introspection code.
@@ -62,7 +62,7 @@ namespace Dynamo.Search /// <summary> /// Utility methods for categorizing search elements. /// </summary> - public static class SearchCategory + public static class SearchCategoryUtil { private sealed class SearchCategoryImpl<TEntry> : ISearchCategory<TEntry> {
[SearchLibrary->[OnEntryAdded->[OnEntryAdded],Add->[Add],OnEntryRemoved->[OnEntryRemoved],Update->[Add]],SearchCategory->[CategorizeSearchEntries->[Create]]]
SearchCategoryUtil provides utility methods for categorizing a sequence of search entries into search categories.
This must be renamed as soon as LibraryUI introduces class with identical name `SearchCategory`.
@@ -6,11 +6,13 @@ import os import re import socket -import OpenSSL +from OpenSSL import SSL, crypto # type: ignore # https://github.com/python/typeshed/issues/2052 import josepy as jose +from typing import Callable, Text, Union # pylint: disable=unused-import from acme import errors +from acme import str_utils logger = logging.getLogger(__name__)
[dump_pyopenssl_chain->[_dump_cert],probe_sni->[shutdown],SSLSocket->[FakeConnection->[shutdown->[shutdown]],accept->[FakeConnection,accept]]]
Crypto utilities for the ACME client: certificate chain dumping, SNI probing (probe_sni), and an SSLSocket server wrapper.
Do we get any benefit from only putting `type: ignore` for `SSL` if there is a stub for `crypto`?
@@ -8,6 +8,14 @@ class Jetpack_Sync_Module_Themes extends Jetpack_Sync_Module { public function init_listeners( $callable ) { add_action( 'switch_theme', array( $this, 'sync_theme_support' ) ); add_action( 'jetpack_sync_current_theme_support', $callable ); + + // Sidebar updates. + add_action( "update_option_sidebars_widgets", array( $this, 'sync_sidebar_widgets_actions' ), 10, 2 ); + add_action( 'jetpack_widget_added', $callable, 10, 2 ); + add_action( 'jetpack_widget_removed', $callable, 10, 2 ); + add_action( 'jetpack_widget_moved_to_inactive', $callable ); + add_action( 'jetpack_cleared_inactive_widgets', $callable ); + add_action( 'jetpack_widget_reordered', $callable ); } public function init_full_sync_listeners( $callable ) {
[Jetpack_Sync_Module_Themes->[sync_theme_support->[get_theme_support_info],expand_theme_data->[get_theme_support_info]]]
init_listeners registers the theme-support sync actions plus the new sidebar-widget add/remove/move/reorder actions.
this could use single quotes
@@ -356,10 +356,16 @@ class FixedDialogTeacher(Teacher): if loop is None: loop = self.training if self.random: - new_idx = random.randrange(num_eps) + new_idx = ( + random.randrange( + num_eps // self.dws + int(num_eps % self.dws > self.rank) + ) + * self.dws + + self.rank + ) else: with self._lock(): - self.index.value += 1 + self.index.value += self.dws if loop: self.index.value %= num_eps new_idx = self.index.value
[create_task_agent_from_taskname->[_add_task_flags_to_agent_opt,get,MultiTaskTeacher],StreamDialogData->[reset->[_load],num_examples->[load_length],num_episodes->[load_length],__init__->[get],load_length->[_read_episode],get->[build_table],_data_generator->[_read_episode]],FbDialogTeacher->[__init__->[get]],DialogData->[build_table->[get],_load->[_read_episode],__init__->[get]],MultiTaskTeacher->[report->[report,get],update_counters->[update_counters],share->[share],shutdown->[shutdown],reset->[reset],reset_metrics->[reset_metrics],num_examples->[num_examples],num_episodes->[num_episodes],__init__->[get,num_episodes],save->[save],epoch_done->[epoch_done]],FixedDialogTeacher->[next_example->[num_episodes,next_episode_idx],reset->[_lock],__init__->[DataLoader],act->[get,reset,next_example],next_episode_idx->[_lock,num_episodes],next_batch->[_lock]],DialogTeacher->[next_example->[get],share->[share],reset->[reset],num_examples->[num_examples],num_episodes->[num_episodes],__init__->[reset,get],get->[get]],ChunkTeacher->[set_datasettings->[get_fold_chunks,get_num_samples,_get_data_folder],_get_data_folder->[get],reset->[_enqueue_request,_enqueue_chunks,_drain],__init__->[get],receive_data->[_enqueue_request],_enqueue_request->[request_load],get->[create_message,get],_drain->[get],get_chunk->[load_from_chunk,get,_enqueue_chunks]],AbstractImageTeacher->[setup_image_features->[get_image_features_path,is_image_mode_buildable],_validate_image_mode_name->[get_available_image_mode_names],num_episodes->[num_examples],_build_image_features_dict->[image_id_to_image_path],__init__->[reset,get],get->[get_image_features],get_image_path->[get_data_path,get],get_image_features->[image_id_to_image_path]],ConversationTeacher->[_get_ep_from_turns->[get],_setup_data->[get],__init__->[reset,get,_setup_data]],ParlAIDialogTeacher->[_setup_data->[get],__init__->[reset,get]],Teacher->[report->[report],share->[share]],DataLoader->[__init__->[__init__]]]
next_episode_idx returns the next episode index, striding by the number of distributed workers so that each rank draws from a disjoint shard of episodes.
what is this computation exactly?
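The computation being asked about samples a random episode index restricted to the current worker's shard: indices congruent to the worker's rank modulo the number of distributed workers. A minimal sketch with hypothetical names (`dws` for distributed world size, `rank` for worker id — not ParlAI's actual API):

```python
import random

def next_episode_idx(num_eps, dws, rank, rng=random):
    """Sample an episode index i in [0, num_eps) with i % dws == rank."""
    # Shard size: floor division, plus one extra episode when the
    # remainder is large enough to cover this rank.
    shard_size = num_eps // dws + int(num_eps % dws > rank)
    return rng.randrange(shard_size) * dws + rank

# Every sampled index lands in this worker's shard and stays in range.
for _ in range(1000):
    idx = next_episode_idx(num_eps=10, dws=3, rank=1)
    assert 0 <= idx < 10 and idx % 3 == 1
```

The `num_eps % dws > rank` term is what keeps the distribution uniform when `num_eps` is not a multiple of `dws`.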
@@ -212,10 +212,7 @@ void MapgenV7Params::writeParams(Settings *settings) const int MapgenV7::getSpawnLevelAtPoint(v2s16 p) { - // Base terrain calculation - s16 y = baseTerrainLevelAtPoint(p.X, p.Y); - - // If enabled, check if inside a river + // If rivers are enabled, first check if in a river if (spflags & MGV7_RIDGES) { float width = 0.2; float uwatern = NoisePerlin2D(&noise_ridge_uwater->np, p.X, p.Y, seed) * 2;
[generateTerrain->[getMountainTerrainFromMap,floatBaseExtentFromMap,baseTerrainLevelFromMap,getFloatlandMountainFromMap],NoiseParams->[NoiseParams],generateMountainTerrain->[getMountainTerrainFromMap]]
MapgenV7::getSpawnLevelAtPoint returns a suitable spawn level at the given point, first rejecting points that fall inside a river when ridges are enabled.
seems this loop is a good candidate to be replaced with a for loop
@@ -155,7 +155,8 @@ func (a *ACME) CreateClusterConfig(leadership *cluster.Leadership, tlsConfig *tl } a.store = datastore - a.challengeProvider = &challengeProvider{store: a.store} + a.challengeTLSProvider = &challengeTLSProvider{store: a.store} + a.challengeHTTPProvider = &challengeHTTPProvider{store: a.store} ticker := time.NewTicker(24 * time.Hour) leadership.Pool.AddGoCtx(func(ctx context.Context) {
[getProvidedCertificate->[Get],storeRenewedCertificate->[renewCertificates,Get],LoadCertificateForDomains->[Get],CreateClusterConfig->[init],getCertificate->[getCertificate,Get],retrieveCertificates->[Get],CreateLocalConfig->[init],renewCertificates->[Get],loadCertificateOnDemand->[Get]]
CreateClusterConfig sets up ACME in cluster mode: the shared datastore backs both the TLS and HTTP challenge providers, and a daily ticker drives certificate renewal under leadership.
I'm not in love with instantiating `challengeTLSProvider` & `challengeHTTPProvider` here.
@@ -259,6 +259,18 @@ export class VideoContainer extends LargeContainer { this._resizeListeners = new Set(); this.$video[0].onresize = this._onResize.bind(this); + + if (isTestModeEnabled(APP.store.getState())) { + const cb = name => APP.store.dispatch(updateLastLargeVideoMediaEvent(name)); + const containerHandlers = {}; + + containerEvents.forEach(event => { + containerHandlers[event] = cb.bind(this, event); + }); + containerEvents.forEach(event => { + this.$video[0].addEventListener(event, containerHandlers[event]); + }); + } } /**
[No CFG could be retrieved]
VideoContainer wires up the large-video element; when test mode is enabled it registers listeners for the container media events and dispatches the last event name to the store.
I don't think you need a second loop here. You can add the code for adding a listener in the previous one, can't you?
@@ -120,7 +120,7 @@ void Power::check() { } void Power::power_on() { - lastPowerOn = millis(); + lastPowerOn = millis() + 1; // add 1 because sometimes millis() return 0 if (!powersupply_on) { PSU_PIN_ON(); safe_delay(PSU_POWERUP_DELAY);
[No CFG could be retrieved]
Power::check decides whether the power supply should be on; power_on records the power-on timestamp and enables the PSU after a safe delay.
The `millis()` function should only return 0 before the timer starts, and then only once every 49 days. And of course, 0xFFFFFFFF + 1 is also 0.
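The reviewer's objection is about 32-bit wraparound: adding 1 cannot keep the timestamp nonzero forever, because the counter itself rolls over. A small Python sketch of the arithmetic (emulating the unsigned counter, not Marlin's C++):

```python
UINT32_MAX = 0xFFFFFFFF

def millis_add(t, delta):
    """32-bit wraparound addition, emulating a millis()-style counter."""
    return (t + delta) & UINT32_MAX

def elapsed(now, since):
    """Wrap-safe elapsed time: modular subtraction stays correct across
    the ~49.7-day rollover the reviewer mentions."""
    return (now - since) & UINT32_MAX

# The reviewer's point: 0xFFFFFFFF + 1 wraps straight back to 0.
assert millis_add(UINT32_MAX, 1) == 0
# Modular subtraction still measures the interval correctly.
assert elapsed(5, UINT32_MAX - 4) == 10
```

This is why "compare elapsed time via modular subtraction" is the usual fix, rather than nudging the stored timestamp.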
@@ -71,6 +71,12 @@ func BinVersion() (string, error) { } func init() { + absPath, err := exec.LookPath("git") + if err != nil { + panic(fmt.Sprintf("Git not found: %v", err)) + } + GitExecutable = absPath + gitVersion, err := BinVersion() if err != nil { panic(fmt.Sprintf("Git version missing: %v", err))
[Printf,Println,RunInDirTimeout,Index,Sprintf,Compare,AddArguments,Print,Errorf,Fields,Run]
Package init resolves the absolute path of the git executable via LookPath and then checks the installed git version, panicking if either is missing.
I think this is unnecessary and we should change many other places git command invokes and add an option on app.ini.
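The Go `exec.LookPath("git")` call resolves the executable once at startup and fails loudly if it is missing; the same lookup in Python, as a hedged sketch (the function name and error wording are made up here):

```python
import shutil

def resolve_executable(name="git"):
    """Resolve the absolute path of an executable on PATH, failing
    loudly when it is absent -- analogous to the patched init()."""
    path = shutil.which(name)
    if path is None:
        raise RuntimeError(f"{name} not found in PATH")
    return path
```

Resolving once and caching the path avoids a PATH search on every invocation, at the cost of ignoring later PATH changes — which is the trade-off the reviewer is weighing against an app.ini option.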
@@ -4396,6 +4396,7 @@ bool Blockchain::check_blockchain_pruning() return m_db->check_pruning(); } //------------------------------------------------------------------ +// returns min(Mb, 2Ml) as per https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf from HF_VERSION_LONG_TERM_BLOCK_WEIGHT uint64_t Blockchain::get_next_long_term_block_weight(uint64_t block_weight) const { PERF_TIMER(get_next_long_term_block_weight);
[No CFG could be retrieved]
get_next_long_term_block_weight returns the next long-term block weight used by the dynamic block weight (penalty) algorithm.
This is only true for `hf_version >= HF_VERSION_2021_SCALING`, otherwise it is `min(Mb, 1.4*Ml)`.
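The reviewer's correction is about which multiplier clamps the long-term weight per hard-fork version. A hedged, illustrative sketch of just that clamp in integer arithmetic (not monerod's actual implementation; `Mb` is the block weight, `Ml` the long-term median):

```python
def long_term_weight_cap(block_weight, long_term_median, v2021_scaling):
    """Clamp per the review note: min(Mb, 2*Ml) only from the 2021
    scaling hard fork; earlier forks use min(Mb, 1.4*Ml)."""
    if v2021_scaling:
        cap = 2 * long_term_median
    else:
        cap = long_term_median * 7 // 5  # 1.4x without float rounding
    return min(block_weight, cap)

assert long_term_weight_cap(300_000, 100_000, True) == 200_000
assert long_term_weight_cap(300_000, 100_000, False) == 140_000
assert long_term_weight_cap(120_000, 100_000, True) == 120_000
```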
@@ -989,8 +989,8 @@ define([ } var viewExtentCVCartographic = new Cartographic(); - var viewExtentCVNorthEast = Cartesian4.clone(Cartesian4.UNIT_W); - var viewExtentCVSouthWest = Cartesian4.clone(Cartesian4.UNIT_W); + var viewExtentCVNorthEast = new Cartesian3(); + var viewExtentCVSouthWest = new Cartesian3(); var viewExtentCVTransform = new Matrix4(); function extentCameraPositionColumbusView(camera, extent, projection, result, positionOnly) { var north = extent.north;
[No CFG could be retrieved]
extentCameraPositionColumbusView computes a camera position framing the given extent in Columbus view, using scratch Cartesian3/Cartographic variables for the corner points.
You can remove these too. Below do: `var northEast = projection.project(cart, viewExtentCVNorthEast);` and remove `position`. Same thing for `southWest`.
@@ -79,4 +79,4 @@ foreach ($slas as $sla) { echo '</div>'; } -$pagetitle[] = 'SLAs'; +$pagetitle[] = 'SLAs'; \ No newline at end of file
[No CFG could be retrieved]
Echoes a div per SLA and appends 'SLAs' to the page title.
Oopsie, files should have a blank line at the end.
@@ -705,8 +705,16 @@ get_object_layout(struct pl_jump_map *jmap, struct pl_obj_layout *layout, D_DEBUG(DB_PL, "Target unavailable " DF_TARGET ". Adding to remap_list: fail cnt %d\n", DP_TARGET(target), fail_tgt_cnt); - rc = remap_alloc_one(remap_list, k, target, - false); + + if (remap_grp_used == NULL) { + remap_grp_used = remap_gpu_alloc_one(&dgu_remap_list, + dom_cur_grp_used); + if (remap_grp_used == NULL) + D_GOTO(out, rc = -DER_NOMEM); + realloc_grp_used = true; + } + + rc = remap_alloc_one(remap_list, k, target, false, remap_grp_used); if (rc) D_GOTO(out, rc);
[No CFG could be retrieved]
get_object_layout builds the object's placement layout from the jump map; unavailable targets are added to the remap list, allocating the per-group used-domain tracking on demand.
It's really hard to follow this, why don't you just simply allocate per-RDG "used domain bitmap" for all groups at the very beginning (Just like what we did for other bitmaps)?
@@ -35,7 +35,7 @@ let PropertyRulesDef; */ const GLOBAL_PROPERTY_RULES = { 'class': { - blacklistedValueRegex: '(^|\\W)i-amphtml-', + denylistedValueRegex: '(^|\\W)i-amphtml-', }, 'hidden': null, 'text': null,
[No CFG could be retrieved]
GLOBAL_PROPERTY_RULES maps attributes allowed on every AMP element; the class rule denylists values matching the internal i-amphtml- prefix.
/cc @choumx I can't tell if this value is actually exposed to publishers. If so, this isn't safe to change.
@@ -178,9 +178,13 @@ class PickDropChannelsMixin(object): inst = self.copy() if copy else self idx = [inst.ch_names.index(c) for c in ch_names if c in inst.ch_names] + if hasattr(inst, 'picks'): inst.picks = [inst.picks[k] for k in idx] + if hasattr(inst, 'cals'): + inst.cals = inst.cals[idx] + inst.info = pick_info(inst.info, idx, copy=False) my_get = lambda attr: getattr(inst, attr, None)
[ContainsMixin->[__contains__->[_contains_ch_type]]]
pick_channels keeps only the requested channels, reindexing picks and cals when present and updating the measurement info.
@agramfort I like this better since it's a separate issue from preloading. WDYT? cc @dengemann
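The `hasattr` guards exist because only some instance types carry `picks`/`cals`; a minimal sketch of the guarded-reindexing pattern with a stand-in class (hypothetical attributes, not MNE's real objects):

```python
class Inst:
    """Stand-in for an MNE-like instance; cals is optional."""
    def __init__(self, ch_names, cals=None):
        self.ch_names = ch_names
        if cals is not None:
            self.cals = cals

def pick_channels(inst, keep):
    """Restrict inst to the channels in keep, preserving keep's order."""
    idx = [inst.ch_names.index(c) for c in keep if c in inst.ch_names]
    if hasattr(inst, "cals"):  # only reindex cals when the type has it
        inst.cals = [inst.cals[k] for k in idx]
    inst.ch_names = [inst.ch_names[k] for k in idx]
    return inst

raw = pick_channels(Inst(["EEG1", "EEG2", "EEG3"], cals=[0.1, 0.2, 0.3]),
                    ["EEG3", "EEG1"])
assert raw.ch_names == ["EEG3", "EEG1"]
assert raw.cals == [0.3, 0.1]
```

The key point is that every per-channel array must be reindexed with the same `idx`, or the channels and their calibrations silently fall out of sync.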
@@ -52,9 +52,8 @@ class DataIterator: @classmethod def from_params(cls, params: Params): - from allennlp.experiments.registry import Registry # TODO(Mark): The adaptive iterator will need a bit of work here, # to retrieve the scaling function etc. - iterator_type = params.pop_choice("type", Registry.list_data_iterators()) - return Registry.get_data_iterator(iterator_type)(**params.as_dict()) # type: ignore + iterator_type = params.pop_choice("type", cls.list_available()) + return cls.by_name(iterator_type)(**params.as_dict()) # type: ignore
[DataIterator->[__call__->[range,_yield_one_epoch],from_params->[list_data_iterators,as_dict,pop_choice,get_data_iterator],_yield_one_epoch->[as_arrays,get_padding_lengths,Dataset,_create_batches]]]
from_params looks up the DataIterator subclass registered under the 'type' key and instantiates it with the remaining parameters.
Do we actually need the `# type: ignore` here? We don't have it in other places. Also, there is no listed return type for the method - maybe that's the reason there was a type failure?
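The `cls.list_available()` / `cls.by_name(...)` calls follow a name-based registry pattern on the base class; a minimal sketch of how such a registrable base can work (hypothetical, not AllenNLP's actual `Registrable` implementation):

```python
class Registrable:
    # Registry keyed by base class, so separate hierarchies don't collide.
    _registry = {}

    @classmethod
    def register(cls, name):
        def decorator(subclass):
            cls._registry.setdefault(cls, {})[name] = subclass
            return subclass
        return decorator

    @classmethod
    def list_available(cls):
        return list(cls._registry.get(cls, {}))

    @classmethod
    def by_name(cls, name):
        return cls._registry[cls][name]

class DataIterator(Registrable):
    pass

@DataIterator.register("basic")
class BasicIterator(DataIterator):
    def __init__(self, batch_size=32):
        self.batch_size = batch_size

iterator = DataIterator.by_name("basic")(batch_size=16)
assert isinstance(iterator, BasicIterator) and iterator.batch_size == 16
```

Moving the registry onto the base class is what removes the circular import that the deleted `from allennlp.experiments.registry import Registry` line worked around.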
@@ -429,6 +429,11 @@ static int def_load_bio(CONF *conf, BIO *in, long *line) if (!str_copy(conf, psection, &include, p)) goto err; + if (conf->flag_abspath && !ossl_is_absolute_path(include)) { + ERR_raise(ERR_LIB_CONF, CONF_R_RELATIVE_PATH); + goto err; + } + if (include_dir != NULL && !ossl_is_absolute_path(include)) { size_t newlen = strlen(include_dir) + strlen(include) + 2;
[int->[IS_EOF,BIO_snprintf,eat_alpha_numeric,ERR_raise_data,BUF_MEM_grow,CONF_free,get_next_file,BIO_new_file,clear_comments,OPENSSL_strlcat,OPENSSL_DIR_end,_CONF_get_string,BUF_MEM_grow_clean,sk_BIO_push,memcmp,strncmp,strlen,sk_BIO_new_null,IS_NUMBER,strchr,ERR_peek_last_error,_CONF_new_data,IS_DOLLAR,sk_BIO_pop,OPENSSL_strdup,def_destroy_data,ERR_GET_REASON,eat_ws,_CONF_get_section,BUF_MEM_free,strcmp,DECIMAL_SIZE,trim_ws,STACK_OF,ossl_is_absolute_path,OPENSSL_malloc,OPENSSL_strlcpy,_CONF_add_string,ERR_raise,ERR_add_error_data,IS_ALNUM,BIO_vfree,lh_CONF_VALUE_doall_BIO,ossl_ends_with_dirsep,process_include,OPENSSL_free,IS_ESC,memset,_CONF_new_section,sk_BIO_free,_CONF_free_data,BUF_MEM_new,ossl_safe_getenv,memmove,IS_DQUOTE,BIO_free,str_copy,IS_QUOTE,BIO_gets,def_load_bio,sk_BIO_num],CONF->[OPENSSL_malloc,OPENSSL_free],BIO->[OPENSSL_zalloc,OPENSSL_DIR_read,OPENSSL_strlcat,ERR_raise_data,strcasecmp,OPENSSL_free,OPENSSL_DIR_end,strlen,OPENSSL_strlcpy,S_ISDIR,stat,get_next_file,BIO_new_file,ERR_raise],char->[IS_EOF,scan_esc,IS_ESC,IS_DOLLAR,IS_WS,IS_ALNUM_PUNCT],void->[scan_quote,IS_EOF,IS_COMMENT,scan_esc,IS_ESC,IS_FCOMMENT,IS_DQUOTE,scan_dquote,IS_QUOTE,BIO_printf,IS_WS],IMPLEMENT_LHASH_DOALL_ARG_CONST]
def_load_bio reads a configuration from a BIO line by line, expanding .include directives; with the abspath flag set, relative include paths are rejected.
I'd recommend considering semantics that enforces the absolute path check only after prepending the include_dir. IMO it would be much more useful.
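The reviewer suggests enforcing the absolute-path check only after prepending `include_dir`; the difference is easy to see in a sketch (a Python stand-in for the C logic, with made-up names):

```python
import os

def resolve_include(include, include_dir, require_abs):
    """Prepend include_dir to relative includes first, then apply the
    absolute-path policy to the combined result."""
    if include_dir and not os.path.isabs(include):
        include = os.path.join(include_dir, include)
    if require_abs and not os.path.isabs(include):
        raise ValueError(f"relative include path rejected: {include}")
    return include

# A relative include becomes acceptable once an absolute include_dir is
# prepended -- the behaviour the reviewer argues for.
assert resolve_include("extra.cnf", "/etc/ssl", True) == "/etc/ssl/extra.cnf"
```

Under the patched ordering, the same `"extra.cnf"` would be rejected outright even though an absolute `include_dir` was configured, which is the semantic the reviewer finds less useful.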
@@ -83,9 +83,6 @@ func testAPIDeleteOAuth2Application(t *testing.T) { oldApp := models.AssertExistsAndLoadBean(t, &models.OAuth2Application{ UID: user.ID, Name: "test-app-1", - RedirectURIs: []string{ - "http://www.google.com", - }, }).(*models.OAuth2Application) urlStr := fmt.Sprintf("/api/v1/user/applications/oauth2/%d?token=%s", oldApp.ID, token)
[EqualValues,AssertExistsAndLoadBean,Sprintf,AssertNotExistsBean,Empty,NotEmpty,Len,MakeRequest]
testAPIDeleteOAuth2Application tests deleting an OAuth2 application through the API.
reason to remove this?
@@ -1,9 +1,16 @@ import logging +from dvc.exceptions import InvalidArgumentError + logger = logging.getLogger(__name__) def _update_import_on_remote(stage, remote, jobs): + if stage.is_repo_import: + raise InvalidArgumentError( + "Can't update a repo import with --to-remote" + ) + url = stage.deps[0].path_info.url stage.outs[0].hash_info = stage.repo.cloud.transfer( url, jobs=jobs, remote=remote, command="update"
[_update_import_on_remote->[transfer],sync_import->[already_cached,deps,save_deps,outs,format,info],update_import->[_update_import_on_remote,deps,reproduce],getLogger]
_update_import_on_remote transfers an imported URL's data straight to the remote; repo imports are rejected since they cannot be updated with --to-remote.
~~What's a repo import?~~ Got it.
@@ -175,12 +175,16 @@ func determineSeed(shoot *gardencorev1beta1.Shoot, seedLister gardencorelisters. return nil, err } - filteredSeeds, err := filterSeedsMatchingSeedSelector(cloudProfile, seedList) + seedsMatchingCloudProfileSelector, err := filterSeedsMatchingSeedSelector(seedList, cloudProfile.Spec.SeedSelector, "CloudProfile") + if err != nil { + return nil, err + } + seedsMatchingShootSelector, err := filterSeedsMatchingSeedSelector(seedsMatchingCloudProfileSelector, shoot.Spec.SeedSelector, "Shoot") if err != nil { return nil, err } - candidates, err := getCandidates(shoot, filteredSeeds, strategy) + candidates, err := getCandidates(shoot, seedsMatchingShootSelector, strategy) if err != nil { return nil, err }
[reconcileShootKey->[Infof,ScheduleShoot,SplitMetaNamespaceKey,Shoots,IsNotFound,Get,Debugf],ScheduleShoot->[NewAlreadyScheduledError,DeepCopy,Infof,Sprintf,NewFieldLogger,GardenCore,TryUpdateShoot,reportSuccessfulScheduling,reportFailedScheduling,WithField],reportSuccessfulScheduling->[reportEvent],reportFailedScheduling->[reportEvent],shootUpdate->[shootAdd],reportEvent->[Eventf],shootAdd->[Infof,Forget,Errorf,MetaNamespaceKeyFunc,Add],ErrorBody,TaintsHave,Error,TrimSuffix,Sprintf,LabelSelectorAsSelector,List,Matches,String,Errorf,ShootUsesUnmanagedDNS,VerifySeedReadiness,Get,NewPath,HasPrefix,Set,ValidateNetworkDisjointedness,Everything]
determineSeed selects a Seed for the Shoot: seeds are filtered by the CloudProfile's seed selector, then by the Shoot's seed selector, before candidates are ranked by the scheduling strategy.
Hmm, this will yield an unhelpful message, if there are Seeds that are matching the `seedSelector` of the Shoot, but not the one of the CloudProfile. I.e. it will say "none out of the <n> seeds has the matching labels required by seed selector of 'Shoot'" although, all Seeds might actually match the Shoot's selector. Can you evaluate both selectors individually and then combine the filtered set of Seeds here to be able to give a more helpful error message?
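Evaluating each selector separately is what makes the failure message actionable; a compact sketch of the suggested approach, with plain predicates standing in for Kubernetes label selectors (names are illustrative):

```python
def determine_candidates(seeds, cp_selector, shoot_selector):
    """Filter seeds by each selector in turn, so the error can say which
    selector eliminated everything."""
    by_cp = [s for s in seeds if cp_selector(s)]
    if not by_cp:
        raise LookupError(
            "none of the %d seeds matches the seed selector of the "
            "CloudProfile" % len(seeds))
    by_shoot = [s for s in by_cp if shoot_selector(s)]
    if not by_shoot:
        raise LookupError(
            "none of the %d seeds matching the CloudProfile selector also "
            "matches the seed selector of the Shoot" % len(by_cp))
    return by_shoot

seeds = [{"region": "eu", "ha": True}, {"region": "us", "ha": True}]
assert determine_candidates(seeds,
                            lambda s: s["ha"],
                            lambda s: s["region"] == "eu") == [seeds[0]]
```

A single combined filter could only report "no seed matched", hiding which of the two selectors was responsible.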
@@ -741,7 +741,17 @@ static int check_id(X509_STORE_CTX *ctx) if (!check_id_error(ctx, X509_V_ERR_IP_ADDRESS_MISMATCH)) return 0; } - return 1; + + /* + * When verifying SSL server certificates, require that an + * identity be set for validating the certificate subject unless + * the X509_V_FLAG_ALLOW_NO_SUBJECT_CHECK flags is set instead. + */ + if (vpm->purpose != X509_PURPOSE_SSL_SERVER + || (vpm->flags & X509_V_FLAG_ALLOW_NO_SUBJECT_CHECK)) + return 1; + + return (vpm->hosts || vpm->email || vpm->ip); } static int check_trust(X509_STORE_CTX *ctx, int num_untrusted)
[int->[x509_check_cert_time,X509_get_pubkey_parameters,X509_verify_cert,STACK_OF],X509_CRL_diff->[STACK_OF]]
check_id checks the configured host/email/IP identities against the certificate; for SSL server verification at least one identity must now be set unless X509_V_FLAG_ALLOW_NO_SUBJECT_CHECK is given.
Instead of `vpm->purpose != X509_PURPOSE_SSL_SERVER` it would have to be `vpm->purpose == X509_PURPOSE_SSL_CLIENT`. Otherwise, this breaks `X509_PURPOSE_SMIME_SIGN` and possibly others.
@@ -236,6 +236,7 @@ func (b *BleveIndexer) Search(keyword string, repoIDs []int64, limit, start int) for _, repoID := range repoIDs { repoQueriesP = append(repoQueriesP, numericEqualityQuery(repoID, "RepoID")) } + index, _ := strconv.ParseInt(keyword, 10, 64) repoQueries := make([]query.Query, len(repoQueriesP)) for i, v := range repoQueriesP { repoQueries[i] = query.Query(v)
[Delete->[Delete],Close->[Close],Index->[Index],Search->[Search]]
Search queries the bleve code index for the keyword within the given repository IDs, returning up to limit hits starting at start.
Maybe keyword is a string?
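The patched Go code discards the `strconv.ParseInt` error, so a non-numeric keyword silently becomes 0. A hedged sketch of the checked pattern the reviewer is hinting at (a Python stand-in for the Go logic, hypothetical names):

```python
def parse_keyword(keyword):
    """Return (index, is_numeric); index is only meaningful when the
    keyword really parses as a base-10 integer -- mirroring a checked
    strconv.ParseInt instead of a discarded error."""
    try:
        return int(keyword, 10), True
    except ValueError:
        return 0, False

assert parse_keyword("1234") == (1234, True)
assert parse_keyword("fix panic") == (0, False)
```

Branching on the boolean lets the caller add a numeric-ID query only when one actually exists, instead of always querying for index 0.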
@@ -329,13 +329,10 @@ func (a *apiServer) upsertWorkersForPipeline(pipelineInfo *pps.PipelineInfo) err workerRc, err := rc.Get( ppsutil.PipelineRcName(pipelineInfo.Pipeline.Name, pipelineInfo.Version), metav1.GetOptions{}) - if err == nil { - if (workerRc.Spec.Template.Spec.Containers[0].Resources.Requests == nil) && *workerRc.Spec.Replicas == 1 { - parallelism = 1 - resourceRequests = nil - resourceLimits = nil - } + if err != nil { + log.Errorf("error from rc.Get: %v", err) } + // TODO figure out why the statement below runs even if there's an error // rc was made by a previous version of pachyderm so we delete it if workerRc.ObjectMeta.Labels["version"] != version.PrettyVersion() { if err := a.deleteWorkersForPipeline(pipelineInfo); err != nil {
[checkOrDeployGithookService->[CoreV1,GithookService,Services,Create],deleteWorkersForPipeline->[CoreV1,Delete,ReplicationControllers,Services,PipelineRcName],setPipelineFailure->[FailPipeline],upsertWorkersForPipeline->[GetExpectedNumWorkers,CoreV1,NewInfiniteBackOff,createWorkerRc,GetRequestsResourceListFromPipeline,deleteWorkersForPipeline,GetLimitsResourceListFromPipeline,Errorf,RetryNotify,ReplicationControllers,PipelineRcName,Get,getWorkerOptions,PrettyVersion],master->[Unlock,CoreV1,NewInfiniteBackOff,setPipelineFailure,Close,deleteWorkersForPipeline,SetAuthToken,Stop,Error,VisitInput,Lock,Errorf,Watch,GetPipelineInfo,FormatLabelSelector,Join,Infof,UnmarshalPrev,ResultChan,WithCtx,getPachClient,SetAsLabelSelector,NewDLock,Sprintf,Background,WithCancel,Unmarshal,ReadOnly,String,RetryNotify,checkOrDeployGithookService,Pods,upsertWorkersForPipeline,WatchWithPrev,getPPSToken],SetAsLabelSelector,CoreV1,List,Errorf,Services,FormatLabelSelector]
upsertWorkersForPipeline creates or updates a pipeline's worker replication controller, recreating it when the existing RC was made by an older pachyderm version.
Shouldn't we be returning the error if there is an error here? I noticed you have a todo below to find out why the statement below is running when there is an error.
@@ -1018,8 +1018,17 @@ def while_loop(cond, body, loop_vars, is_test=False, name=None): return loop_vars while_loop_block = While(pre_cond, is_test, name) + with_mutable_vars = assert_with_mutable_vars(loop_vars) with while_loop_block.block(): - output_vars = body(*loop_vars) + # If a variable with mutable type is included in vars, like `dict/list`, + # modifying them in the body function will cause origin variable be modified + # synchronously. This will raise an assignment error out of while block. + # Here we make a copy of the mutable object to aviod this problem. + if with_mutable_vars: + new_loop_vars = copy_mutable_vars(loop_vars) + output_vars = body(*new_loop_vars) + else: + output_vars = body(*loop_vars) if not isinstance(output_vars, (list, tuple)): output_vars = [output_vars] if len(output_vars) != len(loop_vars):
[Switch->[default->[ConditionalBlock,ConditionalBlockGuard],case->[_case_check_args->[],ConditionalBlock,ConditionalBlockGuard]],IfElseBlockGuard->[__exit__->[__exit__],__enter__->[__enter__],__init__->[block]],DynamicRNN->[_parent_block_->[block],block->[array_write,block,increment,less_than,array_to_lod_tensor],static_input->[_assert_in_rnn_block_,shrink_memory],update_memory->[_assert_in_rnn_block_],__init__->[While],output->[_assert_in_rnn_block_,array_write],step_input->[_assert_in_rnn_block_,array_read],memory->[_assert_in_rnn_block_,memory,shrink_memory,array_read]],ConditionalBlock->[complete->[output,block],append_conditional_block_grad->[output,block],block->[ConditionalBlockGuard]],StaticRNN->[_complete_op->[output,_parent_block],step->[BlockGuardWithCompletion],output->[step_output],step_input->[_assert_in_rnn_block_],step_output->[_assert_in_rnn_block_],memory->[_assert_in_rnn_block_,StaticRNNMemoryLink,memory]],while_loop->[block,While],copy_var_to_parent_block->[block],While->[_complete->[output,block],block->[WhileGuard]],cond->[ConditionalBlock,block,select_input,copy_var_to_parent_block],case->[_case_check_args->[_error_message],_case_check_args],switch_case->[_check_args->[equal,_error_message],_check_args],IfElse->[_parent_block->[block],true_block->[IfElseBlockGuard],__init__->[ConditionalBlock],output->[_parent_block],__call__->[merge_lod_tensor],false_block->[IfElseBlockGuard],input->[_parent_block]]]
while_loop runs body repeatedly while cond holds; mutable loop_vars (list/dict) are copied before calling body so in-place mutation cannot escape the while block, and the body must return as many variables as it received.
Grammatical error: "cause origin variable be modified" to "cause origin variable to be modified"
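The aliasing problem the patch comment describes is plain Python semantics: a dict or list passed into the body is shared, so in-place mutation leaks back to the caller. A minimal sketch of the copy-before-call fix (illustrative names, not Paddle's API):

```python
import copy

def call_body_safely(body, loop_vars):
    """Copy mutable loop vars (list/dict) before calling the body, so an
    in-place mutation inside the body cannot modify the originals."""
    safe_vars = [copy.deepcopy(v) if isinstance(v, (list, dict)) else v
                 for v in loop_vars]
    return body(*safe_vars)

def body(state):
    state["step"] += 1  # mutates its argument in place
    return state

original = {"step": 0}
result = call_body_safely(body, [original])
assert original == {"step": 0}  # caller's dict untouched
assert result == {"step": 1}
```

Without the copy, `original` would already read `{"step": 1}` after the call — the out-of-block assignment error the comment warns about.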
@@ -39,11 +39,12 @@ KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS(COARSE_VELOCITY) KRATOS_CREATE_VARIABLE(double,FIC_BETA) // Adjoint variables -KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS(ADJOINT_FLUID_VECTOR_1) -KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS(ADJOINT_FLUID_VECTOR_2) -KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS(ADJOINT_FLUID_VECTOR_3) KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS(AUX_ADJOINT_FLUID_VECTOR_1) +KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS_WITH_TIME_DERIVATIVE(ADJOINT_FLUID_VECTOR_3, AUX_ADJOINT_FLUID_VECTOR_1); +KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS_WITH_TIME_DERIVATIVE(ADJOINT_FLUID_VECTOR_2, ADJOINT_FLUID_VECTOR_3); +KRATOS_CREATE_3D_VARIABLE_WITH_COMPONENTS_WITH_TIME_DERIVATIVE(ADJOINT_FLUID_VECTOR_1, ADJOINT_FLUID_VECTOR_2); KRATOS_CREATE_VARIABLE(double, ADJOINT_FLUID_SCALAR_1) +KRATOS_CREATE_VARIABLE(Vector, PRIMAL_RELAXED_SECOND_DERIVATIVE_VALUES) // Non-Newtonian constitutive relations KRATOS_CREATE_VARIABLE(double, REGULARIZATION_COEFFICIENT)
[No CFG could be retrieved]
Defines the Kratos fluid application variables; the adjoint fluid vectors are now created with their time derivatives, plus a new PRIMAL_RELAXED_SECOND_DERIVATIVE_VALUES variable.
mmm... are these time derivatives?
@@ -1,12 +1,15 @@ // Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. +using System; + namespace System.Configuration { public enum SettingsSerializeAs { String = 0, Xml = 1, + [Obsolete(Obsoletions.BinaryFormatterMessage + @". Consider using Xml instead.", false)] Binary = 2, ProviderSpecific = 3 }
[No CFG could be retrieved]
SettingsSerializeAs enumerates how a setting is serialized; Binary is now marked obsolete (BinaryFormatter) with Xml suggested instead.
This is a change to public surface area. I would have expected the ref to also be updated with this.
@@ -3130,7 +3130,7 @@ void ByteCodeGenerator::ProcessCapturedSym(Symbol *sym) FuncInfo *funcHome = sym->GetScope()->GetFunc(); FuncInfo *funcChild = funcHome->GetCurrentChildFunction(); - Assert(sym->NeedsSlotAlloc(funcHome) || sym->GetIsGlobal() || sym->GetIsModuleImport()); + Assert(sym->NeedsSlotAlloc(funcHome) || sym->GetIsGlobal() || sym->GetIsModuleImport() || sym->GetIsModuleExportStorage()); // If this is not a local property, or not all its references can be tracked, or // it's not scoped to the function, or we're in debug mode, disable the delayed capture optimization.
[No CFG could be retrieved]
ProcessCapturedSym handles a captured symbol, asserting it needs slot allocation or is a global, module import, or module export storage, before the delayed-capture checks.
do you need to make change to ByteCodeGenerator::InitBlockScopedNonTemps ? #Resolved
@@ -247,6 +247,7 @@ define([ } var i; + var j; var len; // Clear the render list.
[No CFG could be retrieved]
Clears the render list and enqueues the quadtree's root tiles for traversal.
At some point in this function, the quadtree should know (or could know) if we reached a leaf node, and we could use this to let the tile's billboards know that we reached the maximum level and they don't need updates.
@@ -3133,6 +3133,16 @@ class FirstPass(NodeVisitor): func._fullname = self.sem.qualified_name(func.name()) if kind == GDEF: self.sem.globals[func.name()] = SymbolTableNode(kind, func, self.sem.cur_mod_id) + if func.impl: + # Also analyze the function body (in case there are conditional imports). + sem = self.sem + sem.function_stack.append(func.impl) + sem.errors.push_function(func.name()) + sem.enter() + func.impl.body.accept(self) + sem.leave() + sem.errors.pop_function() + sem.function_stack.pop() def visit_class_def(self, cdef: ClassDef) -> None: kind = self.kind_by_scope()
[SemanticAnalyzer->[analyze_comp_for->[analyze_lvalue],build_newtype_typeinfo->[named_type],build_typeddict_typeinfo->[basic_new_typeinfo,str_type,object_type,named_type_or_none],visit_for_stmt->[anal_type,store_declared_types,visit_block,visit_block_maybe,analyze_lvalue],add_symbol->[is_func_scope],visit_import_all->[normalize_type_alias,add_submodules_to_parent_modules,process_import_over_existing_name,correct_relative_import],bind_class_type_variables_in_symbol_table->[bind_type_var],visit_index_expr->[anal_type,alias_fallback],accept->[accept],visit_func_expr->[analyze_function],normalize_type_alias->[add_module_symbol],analyze_simple_literal_type->[named_type_or_none],anal_type->[anal_type],parse_typeddict_fields_with_types->[anal_type],is_class_scope->[is_func_scope],check_newtype_args->[anal_type],visit_with_stmt->[visit_block,anal_type,store_declared_types,analyze_lvalue],analyze_types->[anal_type],process_typevar_parameters->[expr_to_analyzed_type,object_type],fail_blocker->[fail],visit_type_application->[anal_type],build_namedtuple_typeinfo->[named_type,add_field,add_method,object_type,basic_new_typeinfo,str_type,named_type_or_none],visit_assignment_stmt->[anal_type],get_tvars->[get_tvars,analyze_unbound_tvar],analyze_try_stmt->[analyze_lvalue],alias_fallback->[object_type],store_declared_types->[store_declared_types],parse_namedtuple_fields_with_types->[anal_type],analyze_function->[next_function_tvar_id],visit_block_maybe->[visit_block],find_type_variables_in_type->[find_type_variables_in_type],visit_member_expr->[normalize_type_alias],visit_while_stmt->[visit_block_maybe],analyze_tuple_or_list_lvalue->[analyze_lvalue],visit_cast_expr->[anal_type],lookup_qualified->[lookup,normalize_type_alias],visit__promote_expr->[anal_type],is_valid_del_target->[is_valid_del_target],visit_if_stmt->[visit_block_maybe,visit_block],visit_import_from->[add_submodules_to_parent_modules],analyze_typeddict_classdef->[is_typeddict],is_module_scope->[is_func_scope,is_class_sco
pe],analyze_lvalue->[analyze_lvalue]],ThirdPass->[accept->[accept],visit_file->[accept],analyze->[accept],fail_blocker->[fail],visit_decorator->[builtin_type]],FirstPass->[visit_func_def->[qualified_name,accept,set_original_def,enter,is_module_scope,check_no_global,leave],process_nested_classes->[enter_class,process_nested_classes,accept,leave_class],visit_while_stmt->[accept,is_module_scope],visit_import->[add_symbol,is_module_scope],visit_assignment_stmt->[is_module_scope,analyze_lvalue],kind_by_scope->[is_func_scope,is_class_scope,is_module_scope],visit_class_def->[check_no_global,qualified_name],visit_try_stmt->[is_module_scope,analyze_try_stmt],visit_file->[named_type,qualified_name,accept],visit_overloaded_func_def->[check_no_global,qualified_name],visit_for_stmt->[accept,is_module_scope,analyze_lvalue],visit_if_stmt->[accept],visit_import_from->[add_symbol,is_module_scope],visit_block->[accept],visit_with_stmt->[accept,is_module_scope,analyze_lvalue],visit_import_all->[is_module_scope],visit_decorator->[qualified_name,add_symbol],analyze_lvalue->[is_module_scope,analyze_lvalue]],returns_any_if_called->[returns_any_if_called],calculate_class_mro->[fail],replace_implicit_first_type->[replace_implicit_first_type],mark_block_mypy_only->[accept],mark_block_unreachable->[accept],find_fixed_callable_return->[find_fixed_callable_return]]
Visit overloaded function.
Hmm... Refactor this to merge with the nearly identical block in the previous method?
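The duplicated analyze-the-body sequence the reviewer points at (push function, push errors, enter scope, visit, then unwind) can be pulled into one helper. A minimal sketch of the pattern, with simplified stand-in names rather than mypy's real API:

```python
# Sketch of factoring the repeated "analyze a function body in its own
# scope" block into a single helper. Analyzer is a stand-in, not mypy's
# SemanticAnalyzer; the point is the symmetric push/enter ... leave/pop shape.

class Analyzer:
    def __init__(self):
        self.function_stack = []
        self.scopes = []
        self.analyzed = []

    def enter(self):
        self.scopes.append({})

    def leave(self):
        self.scopes.pop()

    def analyze_body(self, func_name, body):
        # Shared block: set up the context, visit the body, then unwind in
        # reverse order so the stacks are balanced on exit.
        self.function_stack.append(func_name)
        self.enter()
        try:
            for stmt in body:
                self.analyzed.append((func_name, stmt))
        finally:
            self.leave()
            self.function_stack.pop()

a = Analyzer()
a.analyze_body("f", ["stmt1", "stmt2"])
```

Both `visit_func_def` and the overload branch above could then call the helper instead of repeating the sequence.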
@@ -230,7 +230,7 @@ int d_fault_attr_err_code(uint32_t fault_id) { struct d_fault_attr_t *fault_attr; - uint32_t err_code; + int32_t err_code; fault_attr = d_fault_attr_lookup(fault_id); if (fault_attr == NULL) {
[No CFG could be retrieved]
d_fault_attr_err_code looks up the fault attribute for a fault id and returns its error code.
Also, you don't need the cast in the return.
@@ -382,11 +382,9 @@ static int rsa_get_ctx_params(void *vprsactx, OSSL_PARAM *params) } p = OSSL_PARAM_locate(params, OSSL_ASYM_CIPHER_PARAM_OAEP_LABEL); - if (p != NULL && !OSSL_PARAM_set_octet_ptr(p, prsactx->oaep_label, 0)) - return 0; - - p = OSSL_PARAM_locate(params, OSSL_ASYM_CIPHER_PARAM_OAEP_LABEL_LEN); - if (p != NULL && !OSSL_PARAM_set_size_t(p, prsactx->oaep_labellen)) + if (p != NULL && + !OSSL_PARAM_set_octet_ptr(p, prsactx->oaep_label, + prsactx->oaep_labellen)) return 0; p = OSSL_PARAM_locate(params, OSSL_ASYM_CIPHER_PARAM_TLS_CLIENT_VERSION);
[No CFG could be retrieved]
Gets the RSA asymmetric-cipher context parameters, including the OAEP label and the TLS client/alt version.
I wonder if this should really be an octet_string instead of an octet_ptr?
@@ -163,6 +163,14 @@ export class UrlReplacements { return removeFragment(info.sourceUrl); })); + this.setAsync_('SOURCE_URL', () => { + return getTrackImpressionPromise().then( + this.getDocInfoValue_.bind(this, info => { + return removeFragment(info.sourceUrl); + }) + ); + }); + // Returns the host of the Source URL for this AMP document. this.set_('SOURCE_HOST', this.getDocInfoValue_.bind(this, info => { return parseUrl(info.sourceUrl).host;
[UrlReplacements->[constructor->[shareTrackingForOrNull,variantForOrNull],getTimingDataAsync_->[whenDocumentComplete,resolve],buildExpr_->[length,join,sort],expand_->[then,split,resolve,user,encodeValue,apply,replace,async,rethrowAsync,catch,sync],getTimingDataSync_->[isFiniteNumber],collectVars->[create],setAsync_->[dev,indexOf],getAccessValue_->[getter,user],set_->[dev,indexOf],getDocInfoValue_->[getter,documentInfoForDoc],maybeExpandLink->[getAttribute,origin,trim,hasOwnProperty,tagName,dev,documentInfoForDoc,user,href,sourceUrl,parseUrl,canonicalUrl,isExperimentOn],getExpr_->[forEach,length,push,keys],ensureProtocolMatches_->[user,parseUrl],initialize_->[cidFor,create,outgoingFragment,language,dev,sourceUrl,getTotalEngagedTime,browserLanguage,join,charset,hostname,getSize,pageViewId,getAccessReaderId,host,characterSet,viewportForDoc,canonicalUrl,getAuthdataField,userNotificationManagerFor,user,pathname,getScrollLeft,getScrollTop,activityFor,userLanguage,now,parseQueryString,getScrollHeight,incomingFragment,push,resolve,search,get,random,removeFragment,getScrollWidth,parseUrl,viewerForDoc]],encodeURIComponent,fromClassForDoc]
Initializes the URL replacement properties: the cache buster, the host and path of the Source URL, the page view ID, and promises that resolve once the requested values are available.
I think there is a problem here: Doc info eagerly reads `location.href` and never reads it again after.
@@ -106,6 +106,16 @@ class Search_Replace_Command extends WP_CLI_Command { return $wpdb->get_col( $wpdb->prepare( "SHOW TABLES LIKE %s", like_escape( $prefix ) . '%' ) ); } + private static function fast_handle_col( $col, $table, $old, $new, $dry_run ) { + global $wpdb; + + if ( $dry_run ) { + return $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(`$col`) FROM `$table` WHERE `$col` LIKE %s;", '%' . like_escape( esc_sql( $old ) ) . '%' ) ); + } else { + return $wpdb->query( $wpdb->prepare( "UPDATE `$table` SET `$col` = REPLACE(`$col`, %s, %s);", $old, $new ) ); + } + } + private static function handle_col( $col, $primary_keys, $table, $old, $new, $dry_run, $recurse_objects ) { global $wpdb;
[Search_Replace_Command->[get_table_list->[prepare,get_col],handle_col->[update,run],__invoke->[setHeaders,setRows,display],get_columns->[get_results]]]
Gets the list of tables from the database; in a dry run, returns the count of matching items in a table.
`fast_handle_col` should be `sql_handle_col()`. Let's also rename the other method.
@@ -31,6 +31,7 @@ public class ProjectLaunchImplTest extends TestCase { public static void testParseSystemCapabilities() throws Exception { Workspace ws = Workspace.getWorkspace(IO.getFile("test/ws")); Project project = ws.getProject("p1"); + project.getTarget().mkdirs(); String systemCaps = null; try {
[ProjectLaunchImplTest->[tearDown->[reallyClean]]]
This test method parses system capabilities from the project.
Do you really need to mkdirs() here? I would think you just need `project.prepare()` which will make that target folder if necessary. Can you try that instead?
@@ -405,13 +405,8 @@ int dgst_main(int argc, char **argv) } else { const char *sig_name = NULL; if (!out_bin) { - if (sigkey != NULL) { - const EVP_PKEY_ASN1_METHOD *ameth; - ameth = EVP_PKEY_get0_asn1(sigkey); - if (ameth) - EVP_PKEY_asn1_get0_info(NULL, NULL, - NULL, NULL, &sig_name, ameth); - } + if (sigkey != NULL) + sig_name = EVP_PKEY_get0_first_alg_name(sigkey); } ret = 0; for (i = 0; i < argc; i++) {
[No CFG could be retrieved]
Reads the signature file and creates the appropriate signature buffer; show_digests shows the digests for the specified key.
This is very dodgy. One alias to the HMAC signature algorithm and the test may randomly go **BAMF**, because you might end up with, oh I don't know, "id-Hmac". This is a case where the application has more accurate knowledge than libcrypto, as it really has *all* the data it needs to know exactly what names it's asking for. It's quite a simple program after all...
@@ -165,7 +165,9 @@ define([ that.state = Cesium3DTileContentState.FAILED; that._readyPromise.reject(error); }); + return true; } + return false; }; /**
[No CFG could be retrieved]
Creates a batched version of the object. Only Batched 3D Model version 1 is supported.
Since we prefer early return, flip the above `if(defined(promise))` check and return false immediately to make the code cleaner.
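The flipped-guard shape the reviewer asks for can be sketched generically (names hypothetical; the real code is Cesium JavaScript):

```python
# Early-return version: bail out on the "nothing to do" case first, so the
# happy path needs no nesting and the function's boolean result is obvious.
def request_content(promise):
    if promise is None:
        return False        # no request could be made
    promise()               # kick off the asynchronous work
    return True             # a request is in flight
```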
@@ -350,7 +350,7 @@ def test_generalization_across_time(): reg = KernelRidge() def scorer_proba(y_true, y_pred): - roc_auc_score(y_true, y_pred[:, 0]) + return roc_auc_score(y_true, y_pred[:, 0]) # We re testing 3 scenario: default, classifier + predict_proba, regressor scorers = [None, scorer_proba, scorer_regress]
[test_decoding_time->[make_epochs],test_generalization_across_time->[make_epochs]]
Tests time-generalization decoding and prediction across the three scenarios: default, classifier + predict_proba, and regressor.
fixes missing return in previous PR
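The bug class the patch fixes is easy to reproduce: a Python function with no explicit return yields None, so the scorer silently reports no score. A minimal illustration (toy scorer, not the real roc_auc_score wrapper):

```python
def scorer_missing_return(y_true, y_pred):
    sum(abs(a - b) for a, b in zip(y_true, y_pred))  # value computed, then discarded

def scorer_fixed(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred))
```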
@@ -225,6 +225,12 @@ func resourceCloudFunctionsCreate(d *schema.ResourceData, meta interface{}) erro if err != nil { return err } + // We do this extra validation here since most regions are not valid, and the + // error message that Cloud Functions has for "wrong region" is not specific. + _, errs := validCloudFunctionRegion(region, "region") + if len(errs) > 0 { + return errs[0] + } cloudFuncId := &cloudFunctionId{ Project: project,
[cloudFunctionId->[Sprintf],terraformId->[Sprintf],locationId->[Sprintf],Patch,locationId,Delete,StringInSlice,Partial,Set,terraformId,Atoi,UpdateMask,Itoa,MatchString,GetOk,HasChange,Errorf,SetId,MustCompile,Create,Join,Do,Id,Get,cloudFunctionId,Split,Printf,DefaultTimeout,Sprintf,Replace,IntBetween]
resourceCloudFunctionsCreate creates a new Cloud Functions resource, validating the region before building the function id.
I see, the provider-level region skips the field validation. Maybe add a mention about this here?
@@ -1,4 +1,6 @@ class DeepLinksController < ApplicationController + AASA_PATHS = ["/*", "NOT /users/auth/*"].freeze + def mobile; end # Apple Application Site Association - based on Apple docs guidelines
[DeepLinksController->[aasa->[join,render,pluck,map]]]
Renders an array of AASA App ID s for a mobile device.
_I think_ this is the correct way to freeze this array, but I'm open to any suggestions to improve it (or fix it altogether if I'm not even close to how it should be).
@@ -84,7 +84,6 @@ class JavaThriftLibrary(JvmTarget): def include_paths(self): return self._include_paths - # TODO(Eric Ayers) As of 2/5/2015 this call is DEPRECATED and should be removed soon @property def is_thrift(self): return True
[JavaThriftLibrary->[__init__->[check_value_for_arg]]]
True if the paths should be included in the result.
Perhaps instead of adding a new call for is_thrift, this could move to a different pattern. Maybe it could look for the class instead via isinstance. It looks like the deprecation plan for the associated `is_*` methods involved types via marker mixins.
@@ -224,6 +224,16 @@ class Environment: self._values[k] = new_value return self + def to_dict(self): + ret = OrderedDict() + for varname, varvalues in self._values.items(): + value = self._format_value(varname, varvalues, "", os.pathsep) + ret[varname] = value + return ret + + def apply(self): + return _environment_add(self.to_dict()) + class ProfileEnvironment: def __init__(self):
[ProfileEnvironment->[loads->[Environment,compose,ProfileEnvironment,unset],compose->[compose],get_env->[compose,Environment]],Environment->[define_path->[_list_value],append_path->[_list_value],define->[_list_value,_Sep],save_ps1->[append,_format_value],prepend_path->[_list_value],append->[_list_value,_Sep],save_sh->[append,_format_value],prepend->[_list_value,_Sep],save_bat->[append,_format_value]]]
Composes a new environment object.
Maybe for the variable access we should provide access to the "structure"? Like if it is a list of items, it should be a list of items, not always a string that they will need to parse? I also understand the reason for always a string, to resemble the native env-vars, just a question.
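The structured-access idea the reviewer floats could look like this: keep the internal list of items and expose both the flattened string (to resemble native env-vars) and the list itself. Names are illustrative, not Conan's real API:

```python
import os
from collections import OrderedDict

class Env:
    def __init__(self):
        self._values = OrderedDict()

    def append_path(self, name, path):
        self._values.setdefault(name, []).append(path)

    def to_dict(self):
        # Flattened form, resembling native environment variables.
        return {k: os.pathsep.join(v) for k, v in self._values.items()}

    def structured(self):
        # Structured form: callers get the list without re-parsing.
        return {k: list(v) for k, v in self._values.items()}

e = Env()
e.append_path("PATH", "/usr/bin")
e.append_path("PATH", "/opt/bin")
```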
@@ -147,9 +147,9 @@ public class AWSJobConfigurationManager extends JobConfigurationManager { if (jobConfigDir.exists()) { LOGGER.info("Loading job configurations from " + jobConfigDir); final Properties properties = new Properties(); - properties.setProperty(ConfigurationKeys.JOB_CONFIG_FILE_DIR_KEY, jobConfigDir.getAbsolutePath()); + properties.setProperty(ConfigurationKeys.JOB_CONFIG_FILE_GENERAL_PATH_KEY, jobConfigDir.getAbsolutePath()); - final List<Properties> jobConfigs = SchedulerUtils.loadJobConfigs(properties); + final List<Properties> jobConfigs = SchedulerUtils.loadGenericJobConfigs(properties); LOGGER.info("Loaded " + jobConfigs.size() + " job configuration(s)"); for (Properties config : jobConfigs) { LOGGER.debug("Config value: " + config);
[AWSJobConfigurationManager->[fetchJobConf->[fetchJobConfSettings]]]
Fetches the job configurations and stores them in the job files; invoked when a new config arrives for a job.
Please also change the config property name in conf/aws/application.conf
@@ -26,10 +26,6 @@ namespace Microsoft.Xna.Framework.Graphics this.VertexCount = vertexCount; this.BufferUsage = bufferUsage; - // Make sure the graphics device is assigned in the vertex declaration. - if (vertexDeclaration.GraphicsDevice != graphicsDevice) - vertexDeclaration.GraphicsDevice = graphicsDevice; - _isDynamic = dynamic; PlatformConstruct();
[VertexBuffer->[GetData->[Get,Length,VertexStride,WriteOnly],GraphicsDeviceResetting->[PlatformGraphicsDeviceResetting],SetDataInternal->[Get,Length,VertexStride],SetData->[Get,None,Length],VertexDeclaration,FromType,BufferUsage,PlatformConstruct,ResourceCreationWhenDeviceIsNull,GraphicsDevice,VertexCount]]
Creates an object that represents a single vertex buffer; the vertex stride determines the number of bytes used to store each vertex.
We could live with this maybe... although I expect XNA does set it. I wonder how many people have as a hack assumed they could fetch the graphics device via `VertexDeclaration.GraphicsDevice`.
@@ -479,7 +479,7 @@ All jobs created by a pipeline will create commits in the pipeline's repo. createPipeline.Flags().StringVarP(&pipelinePath, "file", "f", "-", "The file containing the pipeline, it can be a url or local file. - reads from stdin.") createPipeline.Flags().BoolVarP(&pushImages, "push-images", "p", false, "If true, push local docker images into the cluster registry.") createPipeline.Flags().StringVarP(&registry, "registry", "r", "docker.io", "The registry to push images to.") - createPipeline.Flags().StringVarP(&username, "username", "u", "", "The username to push images as, defaults to your OS username.") + createPipeline.Flags().StringVarP(&username, "username", "u", "", "The username to push images as, defaults to your docker username.") createPipeline.Flags().StringVarP(&password, "password", "", "", "Your password for the registry being pushed to.") var reprocess bool
[ListDatumF,StringVar,InspectJob,Message,TempFile,PrintDetailedDatumInfo,PipelineReqFromInfo,Close,PrintJobInfo,ErrorAndExit,NextCreatePipelineRequest,RAMInBytes,StopPipeline,PrintDetailedPipelineInfo,RunBoundedArgs,RunFixedArgs,Flush,Error,FlushJobAll,RunIO,PrintPipelineInfo,Marshal,ReadFile,StartPipeline,TagImage,ExtractPipeline,StringVarP,VarP,NewPipelineManifestReader,Ctx,Errorf,GetLogs,ListPipeline,Int64VarP,StopJob,NewWriter,Join,Equal,DeletePipeline,NewAuthConfigurationsFromDockerCfg,BoolVarP,Current,Next,StringSliceVarP,PrintDetailedJobInfo,CreatePipeline,DeleteJob,PrintDatumInfo,ListJobF,Name,Contains,RestartDatum,InspectPipeline,Split,Err,NewWithoutDashes,InspectDatum,Int64Var,MarshalToString,Fprintf,Printf,NewOnUserMachine,Println,Sprintf,GarbageCollect,NewClientFromEnv,Background,Unmarshal,ParseRepositoryTag,PushImage,ScrubGRPC,ParseCommits,Getenv,Run,Flags,BoolVar]
Defines the create-pipeline command and its flags; the request reader returns a pipeline request that can be used to create a pipeline.
Small nit: we should probably be prompting for password interactively by default, rather than asking people to input it as part of the command
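The interactive-prompt suggestion can be sketched with the standard library (the real CLI is Go; `getpass` just illustrates the pattern of keeping the secret out of shell history and `ps` output):

```python
import getpass

def read_registry_password(cli_value=None):
    # Prefer an explicitly supplied value (e.g. for scripting), otherwise
    # prompt without echoing the input.
    if cli_value:
        return cli_value
    return getpass.getpass("Registry password: ")
```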
@@ -1975,6 +1975,18 @@ def find_bad_channels_maxwell( .. versionadded:: 0.20 """ + if h_freq is not None: + if 'lowpass' in raw.info: + msg = (f'The input data has already been low-pass filtered with a ' + f'{raw.info["lowpass"]} Hz cutoff frequency. If you wish ' + f'to avoid filtering again in find_bad_channels_maxwell(), ' + f'please pass `h_freq=None`.') + logger.warning(msg) + + logger.info(f'Applying low-pass filter with {h_freq} Hz cutoff ' + f'frequency ...') + raw = raw.copy().filter(l_freq=None, h_freq=h_freq) + limit = float(limit) onsets, ends = _annotations_starts_stops( raw, skip_by_annotation, invert=True)
[_trans_sss_basis->[_sss_basis],find_bad_channels_maxwell->[_run_maxwell_filter,_prep_maxwell_filter],_get_grad_point_coilsets->[_prep_mf_coils],_regularize_in->[_get_degrees_orders,_regularize_out],_sss_basis_basic->[_get_mag_mask,_concatenate_sph_coils,_sph_harm_norm],_compute_sphere_activation_in->[_sq],_overlap_projector->[_orth_overwrite]]
Finds bad channels using Maxwell filtering: for each time window, computes the peak-to-peak range and the standard deviation of the difference, then gets the scores for the MEG picks.
lowpass is always present but it can be None. I think you will warn all the time here. What I would do is something like this: `if raw.info.get('lowpass') and raw.info['lowpass'] < 'lowpass' `
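A hedged sketch of the guard being suggested, assuming the intended comparison is against `h_freq`: warn only when the recording already carries a finite low-pass at or below the requested cutoff, so filtering again would be redundant.

```python
def should_warn(info, h_freq):
    # info mirrors raw.info: 'lowpass' is always present but may be None,
    # so checking membership alone would warn on every run.
    lowpass = info.get('lowpass')
    return h_freq is not None and lowpass is not None and lowpass <= h_freq
```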
@@ -102,6 +102,7 @@ var mapServiceNames = []string{ "codecommit", "cognitoidentity", "cognitoidentityprovider", + "dataexchange", "dlm", "eks", "glacier",
[Strings,Write,Create,Execute,New,Close,Source,Parse,Fatalf,Funcs,Bytes]
The template data structure for all the services; main is the entry point for generating the service files from the service-name slices.
This service supports updating tags via the `TagResource` and `UntagResource` API calls, which means it can be added to `aws/internal/keyvaluetags/generators/updatetags/main.go` as well for completeness. Will add its service entry and generate the new update function on merge.
@@ -267,6 +267,16 @@ public class CommentManagerImpl implements CommentManager { return commentDocModel; } + protected DocumentModel createHiddenFolder(CoreSession session, String parentPath, String name) { + DocumentModel dm = session.createDocumentModel(parentPath, name, "HiddenFolder"); + dm.setProperty("dublincore", "title", name); + dm.setProperty("dublincore", "description", ""); + Framework.doPrivileged(() -> dm.setProperty("dublincore", "created", Calendar.getInstance())); + DocumentModel parent = session.createDocument(dm); // change variable name to be effectively final + setFolderPermissions(parent); + return parent; + } + private static void notifyEvent(CoreSession session, DocumentModel docModel, String eventType, DocumentModel parent, DocumentModel child, NuxeoPrincipal principal) {
[CommentManagerImpl->[createComment->[createComment,internalCreateComment],getCommentName->[getCurrentUser],createLocatedComment->[internalCreateComment],createCommentDocModel->[updateAuthor],getThreadForComment->[getThreadForComment,getDocumentsForComment],internalCreateComment->[updateAuthor],getComments->[getComments],deleteComment->[notifyEvent]]]
Creates the comment DocumentModel and notifies an event of the given type on the given parent and child documents.
Can you explain why this change is not present in master and 10.10?
@@ -412,9 +412,6 @@ public class FakeDatasetService implements DatasetService, Serializable { @Override public WriteStream createWriteStream(String tableUrn, Type type) throws IOException, InterruptedException { - if (type != Type.PENDING && type != Type.BUFFERED) { - throw new RuntimeException("We only support PENDING or BUFFERED streams."); - } TableReference tableReference = BigQueryHelpers.parseTableUrn(BigQueryHelpers.stripPartitionDecorator(tableUrn)); synchronized (tables) {
[FakeDatasetService->[insertAll->[insertAll,getTableContainer],commitWriteStreams->[commit],finalizeWriteStream->[finalizeStream],setUp->[setUp],createTable->[validateWholeTableReference],patchTableDescription->[getTable,validateWholeTableReference,getTableContainer],createWriteStream->[Stream,getTableContainer],getTable->[getTable],getStreamAppendClient->[appendRows->[appendRows]],flush->[flush]]]
Creates a new write stream with a random name.
We could check for COMMIT_AT_LEAST_ONCE explicitly.
@@ -69,11 +69,17 @@ def test_compute_proj(): projs_evoked = compute_proj_evoked(evoked, n_grad=1, n_mag=1, n_eeg=0) # XXX : test something + # test parallelization + projs = compute_proj_epochs(epochs, n_grad=1, n_mag=1, n_eeg=0, n_jobs=2) + projs = activate_proj(projs) + proj_par, _, _ = make_projector(projs, epochs.ch_names, bads=[]) + assert_array_equal(proj, proj_par) # Test that the raw projectors work for ii in (1, 2, 4, 8, 12, 24): raw = Raw(raw_fname) - projs = compute_proj_raw(raw, duration=ii-0.1, n_grad=1, n_mag=1, n_eeg=0) + projs = compute_proj_raw(raw, duration=ii-0.1, n_grad=1, n_mag=1, + n_eeg=0) # test that you can compute the projection matrix projs = activate_proj(projs)
[test_compute_proj->[read_events,compute_proj_epochs,read_proj,compute_proj_raw,assert_array_almost_equal,corrcoef,activate_proj,make_projector,len,sign,zip,compute_proj_evoked,average,Raw,p1,assert_true,ones,Epochs,save,pick_types],dirname,join]
Tests SSP computation of the projection matrix, and that the projectors can be saved.
Does this test work for you? It fails here.
@@ -79,8 +79,11 @@ type Config struct { ActiveSeriesMetricsIdleTimeout time.Duration `yaml:"active_series_metrics_idle_timeout"` // Use blocks storage. - BlocksStorageEnabled bool `yaml:"-"` - BlocksStorageConfig tsdb.BlocksStorageConfig `yaml:"-"` + BlocksStorageEnabled bool `yaml:"-"` + BlocksStorageConfig tsdb.BlocksStorageConfig `yaml:"-"` + StreamChunksWhenUsingBlocks bool `yaml:"-"` + // Runtime-override for type of streaming query to use (chunks or samples). + StreamTypeFn func() QueryStreamType `yaml:"-"` // Injected at runtime and read from the distributor config, required // to accurately apply global limits.
[AllUserStats->[checkRunningOrStopping],MetricsForLabelMatchers->[checkRunningOrStopping],CheckReady->[checkRunningOrStopping,CheckReady],Push->[checkRunningOrStopping],purgeUserMetricsMetadata->[getUsersWithMetadata,getUserMetadata],UserStats->[checkRunningOrStopping],MetricsMetadata->[checkRunningOrStopping,getUserMetadata],LabelNames->[checkRunningOrStopping,LabelNames],QueryStream->[checkRunningOrStopping],RegisterFlags->[RegisterFlags],LabelValues->[checkRunningOrStopping,LabelValues],Query->[checkRunningOrStopping],startingForFlusher->[startFlushLoops]]
RegisterFlags registers the flags for the given config object, including the duration flags for flushing chunks.
Why do we also need this if we have a runtime override? Can't we just use that?
@@ -108,7 +108,7 @@ public class SearchAdminResource implements ResourceHandler { throw new CacheException("NotImplemented"); } else { SearchStatistics searchStatistics = Search.getSearchStatistics(cache); - searchStatistics.getQueryStatistics().clear(); + Security.doAs(restRequest.getSubject(), () -> searchStatistics.getQueryStatistics().clear()); return completedFuture(responseBuilder.build()); } }
[SearchAdminResource->[lookupCacheWithStats->[lookupIndexedCache],lookupQueryStatistics->[lookupCacheWithStats]]]
Clears the search statistics.
why is this required? the cache already contains the subject (via `Cache.withSubject()`)
@@ -46,11 +46,4 @@ public class AssetManagerTest { HttpURLConnection httpCon = (HttpURLConnection) url.openConnection(); assertEquals(HttpURLConnection.HTTP_NOT_FOUND, httpCon.getResponseCode()); } - - @Test - public void handlebarsLoad() throws Exception { - URL url = new URL(j.getURL() + "assets/handlebars/jsmodules/handlebars3.js"); - HttpURLConnection httpCon = (HttpURLConnection) url.openConnection(); - assertEquals(HttpURLConnection.HTTP_OK, httpCon.getResponseCode()); - } }
[AssetManagerTest->[emptyAssetDoesNotThrowError->[getResponseCode,openConnection,getURL,URL,assertEquals],handlebarsLoad->[getResponseCode,openConnection,getURL,URL,assertEquals],JenkinsRule]]
Empty asset does not throw error.
Is there any change in the way the plugins consume handlebars3 now? Do plugin developers need to do any change? or are the plugins going to stop working because of that or trigger a 404 in the browser console?
@@ -173,10 +173,11 @@ void MultiTopicDataReaderBase::data_available(DDS::DataReader_ptr reader) if (rc == RETCODE_NO_DATA) { return; } else if (rc != RETCODE_OK) { - ostringstream rc_ss; - rc_ss << rc; + OPENDDS_STRING rc_ss; + rc_ss.reserve(sizeof(ReturnCode_t)); + rc_ss += rc; throw runtime_error("Incoming DataReader for " + topic + - " could not be read, error #" + rc_ss.str()); + " could not be read, error #" + rc_ss); } const MetaStruct& meta = metaStructFor(reader);
[No CFG could be retrieved]
Implementation of the MultiTopicDataReaderBase methods for the DDS interface.
sizeof() is the number of bytes in memory, not the number of characters in the string. `string += int` appends the character value of the integer.
@@ -326,6 +326,7 @@ OIDC_RP_SCOPES = config("OIDC_RP_SCOPES", default="openid profile email") OIDC_REDIRECT_ALLOWED_HOSTS = config( "OIDC_REDIRECT_ALLOWED_HOSTS", default="", cast=Csv() ) +OIDC_AUTH_REQUEST_EXTRA_PARAMS = {"access_type": "offline"} # Allow null on these because you should be able run Kuma with these set. # It'll just mean you can't use kuma to authenticate. And a warning
[_get_locales->[_Language,namedtuple,items,open,load],Path,_get_locales,dict,init,DjangoIntegration,parseaddr,sorted,Csv,strip,set,items,config,zip,join,lower]
Function that gets called when a user logs in; login_url redirects to the next URL, falling back to a default when none is given.
Interesting! I didn't know anything about this before. So this allows us to refresh an access token without the user needing to be present and authorize the refresh?
@@ -609,10 +609,7 @@ class SharePermission(Model): } _attribute_map = { - 'permission': {'key': 'permission', 'type': 'str', 'xml': {'name': 'permission'}}, - } - - _xml_map = { + 'permission': {'key': 'permission', 'type': 'str'}, } def __init__(self, **kwargs):
[ShareStats->[__init__->[super,get]],ListFilesAndDirectoriesSegmentResponse->[__init__->[super,get]],ListSharesResponse->[__init__->[super,get]],SharePermission->[__init__->[super,get]],RetentionPolicy->[__init__->[super,get]],Range->[__init__->[super,get]],FileHTTPHeaders->[__init__->[super,get]],CorsRule->[__init__->[super,get]],FilesAndDirectoriesListSegment->[__init__->[super,get]],StorageServiceProperties->[__init__->[super,get]],ShareProperties->[__init__->[super,get]],ListHandlesResponse->[__init__->[super,get]],StorageError->[__init__->[super,get]],SourceModifiedAccessConditions->[__init__->[super,get]],ShareItem->[__init__->[super,get]],StorageErrorException->[__init__->[super,deserialize]],FileProperty->[__init__->[super,get]],DirectoryItem->[__init__->[super,get]],Metrics->[__init__->[super,get]],AccessPolicy->[__init__->[super,get]],SignedIdentifier->[__init__->[super,get]],HandleItem->[__init__->[super,get]],FileItem->[__init__->[super,get]]]
Initialize a SharePermission object.
we should avoid manual edits to the generated files, right?
@@ -296,10 +296,15 @@ export function createIframeWithMessageStub(win) { element.expectMessageFromParent = msg => { return new Promise(resolve => { const listener = event => { + let expectMsg = msg; + let eventMsg = event.data.receivedMessage; + if (typeof expectMsg !== 'string') { + expectMsg = JSON.stringify(expectMsg); + eventMsg = JSON.stringify(eventMsg); + } if (event.source == element.contentWindow && event.data.testStubEcho - && JSON.stringify(msg) - == JSON.stringify(event.data.receivedMessage)) { + && expectMsg == eventMsg) { win.removeEventListener('message', listener); resolve(msg); }
[No CFG could be retrieved]
Creates an iframe fixture in the given window that echoes messages back to the parent; expectMessageFromParent waits for a post message from the source window to the target window.
nit: rename `eventMsg` to `actualMsg`
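The normalization in the diff can be expressed compactly; a Python sketch of the same idea, with `sort_keys` added so key order cannot break the structural comparison (plain `JSON.stringify` in the original is order-sensitive):

```python
import json

def messages_match(expected, actual):
    # Strings compare directly; anything structured is serialized on both
    # sides so dicts/objects compare by content rather than identity.
    if isinstance(expected, str):
        return expected == actual
    return json.dumps(expected, sort_keys=True) == json.dumps(actual, sort_keys=True)
```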
@@ -507,3 +507,5 @@ def cleanup_old_worker(*args, **kwargs): """ name = kwargs['sender'].hostname _delete_worker(name, normal_shutdown=True) + # Recreate a new working directory for worker that is starting now + common_utils.create_worker_working_directory(name)
[cleanup_old_worker->[_delete_worker],_delete_worker->[list,Criteria,error,_,filter_workers,len,objects,get_collection,cancel,from_bson,delete],_queue_reserved_task->[ReservedResource,sleep,apply_async,get_worker_for_reservation,get_unreserved_worker,tasks],register_sigterm_handler->[wrap_f->[signal,f],sigterm_handler->[handler]],TaskResult->[serialize->[to_dict],from_async_result->[cls],from_task_status_dict->[cls,get],__init__->[append,get,isinstance]],ReservedTaskMixin->[apply_async_with_reservation->[save_with_set_on_insert,uuid4,get,apply_async,str,TaskStatus,join,AsyncResult]],cancel->[_,revoke,get,objects,MissingResource,info],Task->[on_failure->[format_iso8601_datetime,debug,utc_tz,get,str,now,isinstance,to_dict,save,PulpException],__call__->[format_iso8601_datetime,debug,utc_tz,get,objects,super,now],on_success->[format_iso8601_datetime,debug,utc_tz,append,get,now,isinstance,save,to_dict],apply_async->[save_with_set_on_insert,pop,get,TaskStatus,super]],_release_resource->[get_collection],task,getLogger,Control]
Cleans up old worker state if this worker was previously running but died unexpectedly.
Who calls `cleanup_old_worker()`? I am having some trouble finding it.
@@ -4,8 +4,10 @@ # Licensed under the MIT License. # ------------------------------------ +from typing import Optional from azure.core.polling.base_polling import ( LongRunningOperation, + PipelineResponseType, _is_empty, _as_json, BadResponse,
[TranslationPolling->[_map_nonstandard_statuses->[raise_error],get_status->[_map_nonstandard_statuses,get,_as_json,_is_empty,BadResponse],set_initial_status->[OperationFailed],can_poll->[_as_json,_is_empty,get],raise_error->[ODataV4Format,body,HttpResponseError,"]]]
Creates a polling class for a specific long-running operation; can_poll checks whether the response is empty.
This can't be imported here because it's defined in a type checking block. You'll need to redefine it in this file.
@@ -27,6 +27,18 @@ public class JavaExternalSerializerProtocol extends AbstractSerializationProtoco */ @Override public void serialize(Object object, OutputStream out) throws SerializationException { + if (object instanceof CursorStreamProvider) { + try (CursorStream cursor = ((CursorStreamProvider) object).openCursor()) { + doSerialize(toByteArray(cursor), out); + } catch (IOException e) { + throw new SerializationException(createStaticMessage("Could not serialize cursor stream"), e); + } + } else { + doSerialize(object, out); + } + } + + private void doSerialize(Object object, OutputStream out) { validateForSerialization(object); SerializationUtils.serialize((Serializable) object, out); }
[JavaExternalSerializerProtocol->[serialize->[serialize],doSerialize->[serialize]]]
Serializes the given object to the given output stream.
I think that at least we should create an issue to be able to serialize Cursor without loading everything into memory at the same time.
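The concern is that `toByteArray(cursor)` materializes the whole stream before serializing it. A hedged sketch (not Mule's actual API) of the streaming alternative the reviewer has in mind, copying the cursor to the output in fixed-size chunks:

```python
import io
import shutil

# Stand-in for a cursor stream: 1 MiB of data that we would rather not
# hold in memory as a single byte array.
source = io.BytesIO(b"\x00" * (1 << 20))
sink = io.BytesIO()

# Pipe the stream through in 64 KiB chunks instead of materializing it;
# peak extra memory is one chunk, not the full payload.
shutil.copyfileobj(source, sink, length=64 * 1024)
```

A real fix would need the serialization protocol itself to accept a stream, which is presumably why an issue (rather than an inline change) is being proposed.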
@@ -12,6 +12,7 @@ describe 'two_factor_authentication/totp_verification/show.html.slim' do before do allow(view).to receive(:current_user).and_return(user) allow(view).to receive(:reauthn?).and_return(false) + allow(view).to receive(:confirmation_for_phone_change?).and_return(false) @presenter = TwoFactorAuthCode::AuthenticatorDeliveryPresenter. new(presenter_data, ApplicationController.new.view_context)
[email,it_behaves_like,to,new,let,have_content,have_xpath,describe,build_stubbed,merge,before,have_link,view_context,t,it,require,and_return,otp_send_path]
View spec for the TOTP verification page rendering a TwoFactorAuthCode presenter; it checks that a helpful tooltip is displayed to the user.
This is not needed anymore.
@@ -1,8 +1,15 @@ from django.template import Library +from ...core.utils import get_country_name_by_code + register = Library() @register.simple_tag def get_formset_form(formset, index): return formset.forms[index] + + +@register.simple_tag +def get_country_by_code(country_code): + return get_country_name_by_code(country_code)
[Library]
Get a form from a formset.
I think this can be done via the `get_language_info` tag from the `i18n` template tag library.
@@ -647,7 +647,7 @@ function killme() /** * @brief Redirect to another URL and terminate this process. */ -function goaway($path) +function goaway($path = '') { if (strstr(normalise_link($path), 'http://')) { $url = $path;
[check_url->[getHostName],get_temppath->[getHostName]]
Redirects the user to another URL and terminates the current process.
Interesting for #5907 So I will rename it from `System::redirectTo($url)` to `System::redirect($to = '');` :-)
@@ -79,6 +79,18 @@ def test_lcmv(): assert_raises(ValueError, lcmv, evoked, forward, noise_cov, data_cov, reg=0.01, pick_ori="normal") + # Test picking normal orientation + stc_normal = lcmv(evoked, forward_surf_ori, noise_cov, data_cov, reg=0.01, + pick_ori="normal") + + assert_true((np.abs(stc_normal.data) <= stc.data + 0.8).all()) + + # Test picking source orientation maximizing output source power + stc_max_power = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01, + pick_ori="max-power") + + assert_true((np.abs(stc_max_power.data) <= stc.data + 1).all()) + # Test if fixed forward operator is detected when picking normal or # max-power orientation assert_raises(ValueError, lcmv, evoked, forward_fixed, noise_cov, data_cov,
[test_lcmv_raw->[lcmv_raw,assert_array_almost_equal,read_selection,read_cov,regularize,assert_true,len,time_as_index,pick_types,compute_raw_data_covariance,intersect1d,array],test_lcmv->[lcmv,max,read_cov,sum,assert_raises,argmax,resample,assert_array_almost_equal,drop_bad_epochs,dict,regularize,compute_covariance,zeros_like,stcs,assert_array_equal,read_selection,len,lcmv_epochs,next,average,assert_true,Epochs,pick_types],Raw,read_events,read_label,read_cov,read_forward_solution,data_path,join]
Tests LCMV with evoked data and single trials, including picking the normal and max-power source orientations, and checks that a fixed forward operator (forward_fixed) is detected in those cases.
why `+0.8`? shouldn't it always be lower? i.e., `np.abs(stc_normal.data) <= stc.data` should hold
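The reviewer's intuition can be checked numerically: projecting a free-orientation source vector onto a unit normal can never exceed the vector's norm, so the normal-orientation amplitude is bounded by the free-orientation amplitude exactly; any slack like `+0.8` would only be covering differences in how the two estimates are computed (e.g. regularization or pooling), which the test itself would need to justify. A small sketch of the underlying bound:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 3))         # free-orientation source activity
normal = np.array([0.0, 0.0, 1.0])           # unit surface normal

projected = np.abs(vectors @ normal)         # "normal" orientation amplitude
free_norm = np.linalg.norm(vectors, axis=1)  # free-orientation amplitude

# |v . n| <= ||v|| for a unit normal n (Cauchy-Schwarz), with no slack needed.
bound_holds = bool((projected <= free_norm + 1e-12).all())
```

If `stc.data` in the test is the free-orientation norm of the same solution, the inequality should indeed hold without the additive constant.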
@@ -1,6 +1,6 @@ require Rails.root.join('lib', 'config_validator.rb') -Figaro.require_keys( +Figaro.require_keys([ 'attribute_encryption_key', 'database_statement_timeout', 'disallow_all_web_crawlers',
[require_keys,validate,require,join]
Rails initializer that loads config_validator.rb and requires the listed configuration keys via Figaro.
I vote to keep the splat/`*args`
@@ -1094,7 +1094,7 @@ KUMASCRIPT_URL_TEMPLATE = 'http://developer.mozilla.org:9080/docs/{path}' ES_DISABLED = True ES_LIVE_INDEX = False -LOG_LEVEL = logging.WARN +LOG_LEVEL = logging.DEBUG SYSLOG_TAG = 'http_app_kuma' LOGGING = {
[get_user_url->[reverse],JINJA_CONFIG->[MemcachedBytecodeCache,isinstance],lazy_langs->[dict,lower],lazy_language_deki_map->[dict],node,lazy,listdir,join,remove,abspath,dict,replace,%,append,dirname,sorted,tuple,items,path,_,basename,isdir,setup_loader,dumps,lower]
Settings module defining Kuma defaults, including the logging level and the formatters and handlers used by the logging configuration.
Whoops, did these `settings.py` changes sneak in? Should these be in `settings_local.py`?
@@ -88,12 +88,12 @@ abstract class WPCOM_JSON_API_Comment_Endpoint extends WPCOM_JSON_API_Endpoint { && $this->api->token_details['user']['display_name'] === $comment->comment_author ) { - $user_can_read_coment = true; + $user_can_read_comment = true; } else { - $user_can_read_coment = current_user_can( 'edit_comment', $comment->comment_ID ); + $user_can_read_comment = current_user_can( 'edit_posts' ); } - if ( !$user_can_read_coment ) { + if ( !$user_can_read_comment ) { return new WP_Error( 'unauthorized', 'User cannot read unapproved comment', 403 ); } }
[WPCOM_JSON_API_Comment_Endpoint->[get_comment->[get_site_link,format_date,comment_like_count,get_comment_link,get_blog_id_for_output,get_author,get_post_link,user_can_view_post]]]
Gets a single comment and builds the API response object for it.
Since you're editing that line, could you add a space after the `!` to make it super clean?
@@ -43,6 +43,11 @@ func NewPlan(ctx *plugin.Context, target *Target, prev *Snapshot, source Source, olds := make(map[resource.URN]*resource.State) if prev != nil { for _, oldres := range prev.Resources { + // Ignore resources that are pending deletion; these should not be recorded in the LUT. + if oldres.Delete { + continue + } + urn := oldres.URN contract.Assert(olds[urn] == nil) olds[urn] = oldres
[Provider->[Provider],Assert]
NewPlan creates a new plan, building a lookup table of old resource states from the previous snapshot.
What the heck is a LUT?
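"LUT" here is jargon for a lookup table: the `olds` map from URN to old resource state. A hedged Python sketch of what the Go change does: resources pending deletion are skipped so they cannot collide with a live resource that shares their URN (which would trip the assertion):

```python
# Illustrative old-resource states; a pending-delete copy can share a URN
# with the live resource that replaced it.
old_resources = [
    {"urn": "urn:a", "delete": False},
    {"urn": "urn:a", "delete": True},   # pending deletion: skipped
    {"urn": "urn:b", "delete": False},
]

olds = {}
for res in old_resources:
    if res["delete"]:
        continue                         # exclude pending-delete states
    assert res["urn"] not in olds        # mirrors contract.Assert in the diff
    olds[res["urn"]] = res
```

Without the `continue`, the second `urn:a` entry would violate the uniqueness assertion.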
@@ -812,10 +812,8 @@ func (mod *modContext) genNestedTypes(member interface{}, resourceType bool) []d // and if it appears in an input object and/or output object. mod.getTypes(member, tokens) - isK8s := isKubernetesPackage(mod.pkg) - var typs []docNestedType - for token, tyUsage := range tokens { + for token := range tokens { for _, t := range mod.pkg.Types { switch typ := t.(type) { case *schema.ObjectType:
[getTSLookupParams->[getLanguageModuleName],genLookupParams->[getTSLookupParams,getCSLookupParams,getPythonLookupParams,getGoLookupParams],genConstructorGo->[getLanguageModuleName],getConstructorResourceInfo->[getLanguageModuleName],genNestedTypes->[getLanguageModuleName,typeString],getGoLookupParams->[getLanguageModuleName],genConstructorCS->[getLanguageModuleName,typeString],genConstructors->[genConstructorTS,genConstructorPython,genConstructorGo,genConstructorCS],getProperties->[typeString],genConstructorTS->[getLanguageModuleName],cleanTypeString->[cleanTypeString,getLanguageModuleName],getCSLookupParams->[getLanguageModuleName],typeString->[cleanTypeString,getLanguageModuleName,typeString],genResource->[genNestedTypes,genLookupParams,genConstructors,getProperties,getConstructorResourceInfo,genResourceHeader],gen->[getModuleFileName,genResource,add],genIndex->[getModuleFileName,getLanguageLinks],getLanguageLinks->[getLanguageModuleName],getNestedTypes->[contains,getNestedTypes,add],getTypes->[getNestedTypes],gen]
genNestedTypes generates documentation for all of the nested types of a given member.
All or most of this deletion was related to generating the per-language API doc link for each object type.
@@ -146,6 +146,16 @@ public class SCMPipelineManager implements PipelineManager { pipelineFactory.setProvider(replicationType, provider); } + @VisibleForTesting + public void setAllowPipelineCreation(boolean newState) { + this.allowPipelineCreation.set(newState); + } + + @VisibleForTesting + public boolean getAllowPipelineCreation() { + return allowPipelineCreation.get(); + } + protected void initializePipelineState() throws IOException { if (pipelineStore.isEmpty()) { LOG.info("No pipeline exists in current db");
[SCMPipelineManager->[addContainerToPipeline->[addContainerToPipeline],deactivatePipeline->[deactivatePipeline],waitPipelineReady->[getPipeline],removePipeline->[removePipeline],createPipeline->[recordMetricsForPipeline],openPipeline->[openPipeline],scrubPipeline->[finalizeAndDestroyPipeline],triggerPipelineCreation->[triggerPipelineCreation],getPipeline->[getPipeline],destroyPipeline->[triggerPipelineCreation],close->[close],activatePipeline->[activatePipeline],finalizePipeline->[finalizePipeline],containsPipeline->[getPipeline],removeContainerFromPipeline->[removeContainerFromPipeline],handleSafeModeTransition->[triggerPipelineCreation,getSafeModeStatus,startPipelineCreator],getPipelines->[getPipelines],getNumberOfContainers->[getNumberOfContainers]]]
Adds a test-visible getter and setter for the allowPipelineCreation flag.
I would suggest using pipelineCreationAllowed as the internal state name; it makes the code easier to read, as I see it. I know this is a regular getter and setter pair, but... in the tests, the set method is always called with true as the parameter, so I suggest calling this method simply allowPipelineCreation() without a parameter; the getter half of the pair can then be called isPipelineCreationAllowed(). This suggestion is purely about expressiveness from the usage point of view, and I don't have a strong opinion on it; however, if we want to keep this name and name the pair as a regular getter and setter, as the books suggest, then we should use isAllowPipelineCreation as the getter name.