patch
stringlengths
18
160k
callgraph
stringlengths
4
179k
summary
stringlengths
4
947
msg
stringlengths
6
3.42k
@@ -136,14 +136,15 @@ def host_port_to_endpoint(host: str, port: int) -> str: return '{}:{}'.format(host, port) -def split_endpoint(endpoint: str) -> Tuple[str, Union[str, int]]: +def split_endpoint(endpoint: str) -> Tuple[str, Union[str, Optional[int]]]: match = re.match(r'(?:[a-z0-9]*:?//)?([^:/]+)(?::(\d+))?', endpoint, re.I) if not match: raise ValueError('Invalid endpoint', endpoint) host, port = match.groups() + returned_port = None if port: - port = int(port) - return host, port + returned_port = int(port) + return host, returned_port def privatekey_to_publickey(private_key_bin: bytes) -> bytes:
[merge_dict->[merge_dict],lpex->[pex],privatekey_to_publickey->[ishash]]
Splits an endpoint into host and port.
Please, please, simplify the type here. E.g. `Tuple[Host, Optional[Port]]`
@@ -148,7 +148,10 @@ func (d *Service) Run() (err error) { } } - // Explicitly set fork type here based on KEYBASE_LABEL + // Explicitly set fork type here based on KEYBASE_LABEL. + // This is for OSX-based Launchd implementations, which unfortunately + // don't obey the same command-line flag conventions as + // the other platforms. if len(d.G().Env.GetLabel()) > 0 { d.ForkType = keybase1.ForkType_LAUNCHD }
[GetExclusiveLock->[GetExclusiveLockWithoutAutoUnlock,ReleaseLock],writeServiceInfo->[ensureRuntimeDir],GetExclusiveLockWithoutAutoUnlock->[ensureRuntimeDir],Handle->[RegisterProtocols],ListenLoop->[Handle]]
Run starts the service. This is a helper function that creates a new updateChecker and starts it.
Unrelated change snuck in?
@@ -231,7 +231,7 @@ module View when Engine::Round::Operating h(Game::Round::Operating, game: @game) when Engine::Round::G1846::Draft - h(Game::Round::Draft, game: @game) + h(Game::Round::Draft, game: @game, round: @round) when Engine::Round::Auction h(Game::Round::Auction, game: @game) end
[GamePage->[route_anchor->[split],render_action->[any?,h,crowded_corps],render->[h,tiles,dig,unsubscribe,init_tiles,current_action_id,get,clone,layout,process_action,render_broken_game,store,lambda,subscribe],load_game->[new,size,store,take,id,map],render_round->[h,operating_rounds,round_num,finished,game_ending_description,operating?,description,title,last,turn],render_game->[round,h],render_title->[title,include?,id,dig],render_broken_game->[is_a?,h,nil?,action_id],menu->[item,color_for,h],cursor->[to_i],item->[h,first,color_for,store,lambda],needs,include],require_tree,require]
Renders action - unknown node hash.
this is unnecessary, you can get @game.round
@@ -6,10 +6,15 @@ def ConstructSolver(settings): if(type(settings) != KratosMultiphysics.Parameters): raise Exception("Input is expected to be provided as a Kratos Parameters object") + solver_type = settings["solver_type"].GetString() + + if solver_type == "eigen_sparse_eigensystem": + import KratosMultiphysics.EigenSolversApplication + eigen_solver = KratosMultiphysics.EigenSolversApplication.SparseEigensystemSolver(settings) + return eigen_solver + import new_linear_solver_factory linear_solver = new_linear_solver_factory.ConstructSolver(settings["linear_solver_settings"]) - - solver_type = settings["solver_type"].GetString() if(solver_type == "power_iteration_eigenvalue_solver"): eigen_solver = KratosMultiphysics.PowerIterationEigenvalueSolver( settings, linear_solver)
[ConstructSolver->[ConstructSolver,FEASTSolver,type,PowerIterationHighestEigenvalueSolver,PowerIterationEigenvalueSolver,settings,RayleighQuotientIterationEigenvalueSolver,Exception]]
Construct a Kratos solver.
name inconsistent? see above
@@ -171,10 +171,6 @@ public class ExternalizerTable implements ObjectTable { public Writer getObjectWriter(Object o) throws IOException { Class<?> clazz = o.getClass(); Writer writer = writers.get(clazz); - if (writer == null) { - if (Thread.currentThread().isInterrupted()) - throw new IOException(log.interruptedRetrievingObjectWriter(clazz.getName())); - } return writer; }
[ExternalizerTable->[updateExtReadersWritersWithTypes->[updateExtReadersWritersWithTypes],ForeignExternalizerAdapter->[writeObject->[writeObject]],loadForeignMarshallables->[updateExtReadersWritersWithTypes],getExternalizerId->[getExternalizerId],ExternalizerAdapter->[hashCode->[hashCode],writeObject->[writeObject],readObject->[readObject],equals->[equals]],readObject->[readObject]]]
This method is called by the reader and writer methods to retrieve a writer from the cache.
I think removing this check might cause JBoss Marshalling to throw bogus exceptions if the node really is shutting down and the writers map has been cleared. It would have been better to replace the interruption check with `if (!started)`.
@@ -38,7 +38,6 @@ public class KafkaConsumerConfigs props.put("group.id", StringUtils.format("kafka-supervisor-%s", IdUtils.getRandomId())); props.put("auto.offset.reset", "none"); props.put("enable.auto.commit", "false"); - props.put("isolation.level", "read_committed"); return props; }
[KafkaConsumerConfigs->[getConsumerProperties->[getRandomId,put,format]]]
Get consumer properties.
I think to remove `isolation.level` here and leaves the configuration to users changes the current behavior. Currently users care nothing about this kafka configuration when using a higher version of kafka such as 0.11. By removing it, they need to set this property in supervisor spec, or the default value, which is `read_uncommitted`, will be applied, which may be not what they expect. If this is the root cause that limits the Druid to use Kafka lower than 0.11, I think maybe we can introduce another property, as the same way as `pollTimeout` property does, then we can unset `isolation.level` property according to the value of this new property. Only those who want to use Druid with older kafka need to set this new property.
@@ -430,7 +430,8 @@ public abstract class AbstractHoodieWriteClient<T extends HoodieRecordPayload, I protected void postCommit(HoodieTable<T, I, K, O> table, HoodieCommitMetadata metadata, String instantTime, Option<Map<String, String>> extraMetadata) { try { // Delete the marker directory for the instant. - new MarkerFiles(table, instantTime).quietDeleteMarkerDir(context, config.getMarkersDeleteParallelism()); + MarkerFilesFactory.get(config.getMarkersIOMode(), table, instantTime) + .quietDeleteMarkerDir(context, config.getMarkersDeleteParallelism()); // We cannot have unbounded commit files. Archive commits if we have to archive HoodieTimelineArchiveLog archiveLog = new HoodieTimelineArchiveLog(config, table); archiveLog.archiveIfRequired(context);
[AbstractHoodieWriteClient->[rollbackInflightCompaction->[rollback],startCommitWithTime->[startCommitWithTime,startCommit],inlineCluster->[scheduleClustering,cluster],startCommit->[startCommit],restoreToSavepoint->[createTable],commitStats->[commit,commitStats],close->[close],scheduleTableServiceInternal->[scheduleClustering,scheduleCompaction,scheduleCleaning],savepoint->[savepoint,createTable],compact->[compact],rollback->[createTable,rollback],preWrite->[setOperationType,syncTableMetadata],scheduleTableService->[scheduleTableService],deleteSavepoint->[createTable,deleteSavepoint],rollbackInflightClustering->[rollback],rollbackFailedBootstrap->[createTable],bootstrap->[bootstrap],inlineCompact->[compact,scheduleCompaction],postCommit->[syncTableMetadata],clean->[clean],commit->[commit],restoreToInstant->[createTable],rollbackFailedWrites->[createTable,rollbackFailedWrites,rollbackFailedBootstrap,rollback],finalizeWrite->[finalizeWrite]]]
After commit the table has been fully committed.
something to remember. We might have to add an upgrade step for migration from previous version to new one. I guess it is as simple as just cleaning up the old marker files and recreate marker in new format.
@@ -1034,8 +1034,10 @@ class ProCombatMoveAi { // Set enough land and sea units in territories to have at least a chance of winning for (final Unit unit : sortedUnitAttackOptions.keySet()) { final boolean isAirUnit = UnitAttachment.get(unit.getType()).getIsAir(); - if (isAirUnit || addedUnits.contains(unit)) { - continue; // skip air units + final boolean isExpensiveLandUnit = Matches.unitIsLand().test(unit) + && ProData.unitValueMap.getInt(unit.getType()) > 2 * ProData.minCostPerHitPoint; + if (isAirUnit || isExpensiveLandUnit || addedUnits.contains(unit)) { + continue; // skip air and expensive units } final TreeMap<Double, Territory> estimatesMap = new TreeMap<>(); for (final Territory t : sortedUnitAttackOptions.get(unit)) {
[ProCombatMoveAi->[doCombatMove->[getLandOptions,determineTerritoriesThatCanBeHeld,equals,populateAttackOptions,territoryHasLocalLandSuperiority,addAll,populateEnemyAttackOptions,info,determineUnitsToAttackWith,getNeighbors,findTerritoryValues,getTerritory,logAttackMoves,prioritizeAttackOptions,checkContestedSeaTerritories,isEmpty,populateEnemyDefenseOptions,removeTerritoriesThatCantBeConquered,removeTerritoriesWhereTransportsAreExposed,debug,territoryIsWater,moveOneDefenderToLandTerritoriesBorderingEnemy,getData,getTerritoryMap,removeAttacksUntilCapitalCanBeHeld,ProTerritoryManager,removeTerritoriesThatArentWorthAttacking,determineTerritoriesToAttack,doMove,getPlayer,calculateAmphibRoutes,setStoredStrafingTerritories,getStrafingTerritories,add],determineTerritoriesThatCanBeHeld->[size,getMaxUnits,getMaxBombardUnits,getMaxEnemyDefenders,test,addAll,info,getMaxEnemyUnits,getWinPercentage,getTerritory,getTuvSwing,isWater,setMaxEnemyUnits,getEnemyAttackOptions,getMax,getMaxAmphibUnits,debug,isHasLandUnitRemaining,getTerritoryMap,getAverageAttackersRemaining,getMatches,estimateAttackBattleResults,calculateBattleResults,setCanHold,get,isStrafing,negate,unitIsNotAir,setMaxEnemyBombardUnits],removeTerritoriesWhereTransportsAreExposed->[unitIsAlliedNotOwned,size,removeAll,getMaxUnits,getUnits,getMaxEnemyDefenders,addAll,info,containsKey,put,getTerritory,getProduction,min,trace,isEmpty,getTuvSwing,isWater,values,getEnemyAttackOptions,getMax,debug,getTerritoryMap,isCapital,getMatches,populateDefenseOptions,calculateBattleResults,getName,toList,get,isStrafing,collect,add,clear,keySet],moveOneDefenderToLandTerritoriesBorderingEnemy->[unitIsOwnedBy,remove,getMatches,debug,getTerritory,anyMatch,removeAll,add,unitIsLand,getInt,getType,isEmpty,info,territoryIsEnemyNonNeutralAndHasEnemyUnitMatching,getUnitMoveMap,isWater,getNeighbors,unitCantBeMovedAndIsAlliedDefenderAndNotInfra],tryToAttackTerritories->[getValue,getTransportCasualtiesRestricted,unitIsSub,checkForOverwhelmingWin,territoryCanMoveSeaUnits,size,equals,removeAll,setBattleResult,getMaxUnits,getUnits,getType,getMaxEnemyDefenders,getInt,test,addAll,isTransporting,remove,containsKey,put,getNeighbors,estimateStrengthDifference,isUnitAllied,addUnit,getWinPercentage,getTerritory,getMovedUnits,isCanHold,sortUnitNeededOptions,putAmphibAttackMap,trace,getTransportList,isEmpty,getUnitMoveMap,isWater,values,getEnemyAttackOptions,getTransport,getMax,getIsAir,addUnits,anyMatch,noneMatch,isHasLandUnitRemaining,getTerritoryMap,getBattleResult,getUnitsToAdd,getMovementLeft,getBombardMap,sortUnitMoveOptions,getMatches,getUnitsToTransportFromTerritories,unitIsAaForAnything,estimateAttackBattleResults,firstKey,getDistance_IgnoreEndForCondition,getName,get,isCurrentlyWins,isStrafing,unitIsDestroyer,containsAll,getIsDestroyer,sortUnitNeededOptionsThenAttack,negate,getTransportMoveMap,contains,territoryCanMoveAirUnitsAndNoAa,add,clear,unitIsEnemyAndNotInfa,keySet],removeAttacksUntilCapitalCanBeHeld->[getValue,size,removeAll,setBattleResult,getMaxUnits,getUnits,getMaxBombardUnits,test,addAll,populateEnemyAttackOptions,info,remove,getNeighbors,isUnitAllied,getTerritory,trace,findMaxPurchaseDefenders,isEmpty,territoryIsLand,getTuvSwing,values,getEnemyAttackOptions,getMax,getMaxAmphibUnits,debug,isHasLandUnitRemaining,getTerritoryMap,getMatches,estimateDefendBattleResults,unitCanBeMovedAndIsOwnedLand,getName,get,contains,add,clear,keySet],removeTerritoriesThatArentWorthAttacking->[getValue,getMaxUnits,getMaxEnemyDefenders,remove,info,getNeighbors,estimateStrengthDifference,getTerritory,isCanHold,isWater,getEnemyAttackOptions,retainAll,getMax,hasNext,isNeutralLand,getMaxAmphibUnits,debug,anyMatch,iterator,next,getName,get,containsAll,territoryIsEnemyNotNeutralLand,isNeedAmphibUnits,add,enemyUnit],logAttackMoves->[getValue,toStringNoOwner,getMaxUnits,getUnits,getMaxBombardUnits,getMaxEnemyDefenders,addAll,containsKey,put,getMaxEnemyUnits,trace,getTuvSwing,getMaxAmphibUnits,debug,getTerritoryMap,getName,get,getMaxEnemyBombardUnits,keySet],prioritizeAttackOptions->[getValue,getMaxUnits,getMaxEnemyDefenders,reversed,test,info,remove,getNeighbors,estimateStrengthDifference,getTerritory,getProduction,isCanHold,isFfa,trace,setValue,isTerritoryEnemy,isEmpty,getTuvSwing,isWater,hasNext,isNeutralLand,territoryCanMoveLandUnits,debug,sort,iterator,isCapital,getDistance,territoryHasUnitsOwnedBy,getMatches,next,findTerritoryAttackValue,territoryIsEnemyNotNeutralOrAllied,getDistance_IgnoreEndForCondition,getName,get,isNeedAmphibUnits,add,unitIsEnemyAndNotInfa],canAirSafelyLandAfterAttack->[getDistance_IgnoreEndForCondition,get,getMovementLeft,test,territoryCanMoveAirUnitsAndNoAa],determineTerritoriesToAttack->[size,removeAll,setBattleResult,getUnits,tryToAttackTerritories,getMaxEnemyDefenders,info,remove,haveUsedAllAttackTransports,estimateStrengthDifference,getWinPercentage,getTerritory,min,add,trace,getStrengthEstimate,subList,getResultString,debug,isHasLandUnitRemaining,getBattleResult,setStrengthEstimate,estimateAttackBattleResults,getName,get,isStrafing,isNeedAmphibUnits,setCanAttack,keySet],doMove->[calculateMoveRoutes,calculateBombingRoutes,calculateBombardMoveRoutes,calculateAmphibRoutes,doMove,clear],checkContestedSeaTerritories->[ProTerritory,addUnits,info,getTerritoryMap,get,containsKey,test,isEmpty,territoryCanMoveSeaUnitsThrough,getMatches,unitCanBeMovedAndIsOwnedSea,put,isWater,next,getNeighbors],determineUnitsToAttackWith->[checkForOverwhelmingWin,size,equals,removeAll,setBattleResult,getMaxUnits,getUnits,tryToAttackTerritories,getType,getMaxEnemyDefenders,getInt,getMaxBombardUnits,test,addAll,info,remove,getMaxEnemyUnits,getBomberMoveMap,getWinPercentage,addUnit,of,getTerritory,getProduction,isCanHold,isFfa,trace,empty,isEmpty,getUnitMoveMap,getTuvSwing,values,getEnemyAttackOptions,isWater,getMax,getIsAir,getMaxAmphibUnits,isNeutralLand,getResultString,debug,addUnits,anyMatch,isPresent,isHasLandUnitRemaining,getTerritoryMap,getOwner,getBattleResult,getUnitsToAdd,getAverageAttackersRemaining,isCapital,getMatches,unitIsAaForAnything,unitCanProduceUnitsAndCanBeDamaged,estimateAttackBattleResults,calculateBattleResults,getIsSea,getBattleRounds,unitIsAaForBombingThisUnitOnly,getName,getPlayerProduction,get,negate,isStrafing,sortUnitNeededOptionsThenAttack,canAirSafelyLandAfterAttack,contains,add,clear,keySet],getCalc]]
Try to attack territories. Add unit to attack map. Determines if a unit is a AA and if so adds it to the attack map. find the best effort to move the units to the correct territory. Checks if there is a battle attack and if so calculates the attack result.
Want to use cheap land units to populate attacks in territories first as fodder. Most maps this doesn't matter since all land units are fairly cheap but some of the advanced maps have expensive land units.
@@ -69,7 +69,7 @@ def block_activity_log_delete(obj, *, submission_obj=None, delete_user=None): 'url': obj.url, 'reason': obj.reason, 'include_in_legacy': obj.in_legacy_blocklist, - 'comments': f'Versions {obj.min_version} - {obj.max_version} blocked.', + 'comments': f'Versions {obj.min_version} - {obj.max_version} unblocked.', } if submission_obj: details['signoff_state'] = submission_obj.SIGNOFF_STATES.get(
[save_guids_to_blocks->[disable_addon_for_block,block_activity_log_save],block_activity_log_save->[add_version_log_for_blocked_versions],block_activity_log_delete->[add_version_log_for_blocked_versions]]
Create a block activity log entry.
Drive-by fix that had confused me in the past.
@@ -19,13 +19,13 @@ class Serf(SConsPackage): variant('debug', default=False, description='Enable debugging info and strict compile warnings') - depends_on('scons@2.3.0:', type='build') - depends_on('apr') depends_on('apr-util') depends_on('openssl') - depends_on('zlib') + depends_on('python+pythoncmd', type='build') + depends_on('scons@2.3.0:', type='build') depends_on('uuid') + depends_on('zlib') patch('py3syntax.patch')
[Serf->[build_test->[scons],build_args->[,dependencies,extend,items,join],variant,depends_on,version,patch]]
Build the arguments for the object.
Should this be type build? Does it link to the Python libs/headers? Is it used at run-time?
@@ -7,6 +7,9 @@ module Notifications # * :followable_type [String] - "User" or "Organization" # * :follower_id [Integer] - user id def initialize(follow_data, is_read = false) + # we explicitly symbolize_keys because FollowData.new will fail otherwise with an error of + # ":followable_id is missing in Hash input". FollowData expects a symbol, not a string. + follow_data.symbolize_keys! follow_data = follow_data.is_a?(FollowData) ? follow_data : FollowData.new(follow_data) @followable_id = follow_data.followable_id # fetch(:followable_id) @followable_type = follow_data.followable_type # fetch(:followable_type)
[Send->[call->[notifiable_type,current,where,first,order,map,json_data,notifiable_id,find_or_initialize_by,follower_id,save!,destroy,select,read,detect,id,notified_at,call,zero?,user_data],initialize->[followable_type,followable_id,new,is_a?,follower_id],follower->[find],attr_reader,delegate]]
Initializes the follow object with the given follow data.
good call! this is because of how delayed job is different from sidekiq. I wonder if we should add a spec in the services spec to make sure this doesn't regress
@@ -42,7 +42,7 @@ public class LiveMeasureDao implements Dao { this.system2 = system2; } - public List<LiveMeasureDto> selectByComponentUuids(DbSession dbSession, Collection<String> largeComponentUuids, Collection<Integer> metricIds) { + public List<LiveMeasureDto> selectByComponentUuidsAndMetricIds(DbSession dbSession, Collection<String> largeComponentUuids, Collection<Integer> metricIds) { if (largeComponentUuids.isEmpty() || metricIds.isEmpty()) { return Collections.emptyList(); }
[LiveMeasureDao->[selectMeasure->[selectByComponentUuidsAndMetricKeys],deleteByProjectUuidExcludingMarker->[deleteByProjectUuidExcludingMarker],selectTreeByQuery->[selectTreeByQuery],selectByComponentUuidsAndMetricKeys->[selectByComponentUuidsAndMetricKeys],insert->[insert],insertOrUpdate->[insert]]]
Select live measurements by componentUuids and metricIds.
why is it called `largeComponentUuids`?
@@ -1032,7 +1032,7 @@ namespace NServiceBus.Transport TreatAsErrorFromVersion = "8", RemoveInVersion = "9")] Task Init(Func<MessageContext, Task> onMessage, - Func<ErrorContext, Task<ErrorHandleResult>> onError, + Func<ErrorContext, Task<ReceiveResult>> onError, CriticalError criticalError, PushSettings settings);
[ExpressAttribute->[Interface,Class],Build->[GetService,nameof],IBuilder->[CreateScope,nameof],BuildAll->[GetServices,nameof],T->[GetService,nameof],Never,nameof]
ObsoleteEx. Initialize IMessageReceiver. StartReceive.
Does this indicate that we need to keep `ErrorHandleResult` in obsoletes for v8 (with an error)?
@@ -54,9 +54,16 @@ foreach($vars['tags'] as $tag) { } } +if (empty($list_items)) { + return; +} + +$icon = elgg_view_icon('tag', $icon_class); + $list = <<<___HTML <div class="clearfix"> <ul class="$list_class"> + <li>$icon</li> $list_items </ul> </div>
[No CFG could be retrieved]
Egg View for the .
What's the visual change here? Maybe should go in a separate PR
@@ -263,7 +263,7 @@ func getw3wpProceses() (map[int]string, error) { func getProcessIds(counterValues map[string][]pdh.CounterValue) []WorkerProcess { var workers []WorkerProcess for key, values := range counterValues { - if strings.Contains(key, "\\ID Process") { + if strings.Contains(key, "\\ID Process") && values[0].Measurement != nil { workers = append(workers, WorkerProcess{instanceName: values[0].Instance, processId: int(values[0].Measurement.(float64))}) } }
[read->[CollectData,GetFormattedCounterValues,Close,mapEvents,Wrap],initAppPools->[Wrapf,Error,RemoveUnusedCounters,Infow,Contains,GetCounterPaths,Debugw,AddCounter,Namespace,Wrap,Info],mapEvents->[Put,PdhErrno,Namespace,Debugw],close->[Close],NewLogger,Contains,Open,Wrap,Info,initAppPools,Processes]
getProcessIds func maps the process ids from the counter values to the worker process obj and.
Just curious, do we need to check `values` length first? like `len(values) > 0`
@@ -69,6 +69,17 @@ class BaseHandler: self._auth_uri = None self._properties = create_properties(self._config.user_agent) + @classmethod + def _convert_connection_string_to_kwargs(cls, conn_str, **kwargs): + # pylint:disable=protected-access + return BaseHandlerSync._convert_connection_string_to_kwargs(conn_str, **kwargs) + + @classmethod + def _create_credential_from_connection_string_parameters(cls, token, token_expiry, policy, key): + if token and token_expiry: + return ServiceBusSASTokenCredential(token, token_expiry) + return ServiceBusSharedKeyCredential(policy, key) + async def __aenter__(self): await self._open_with_retry() return self
[BaseHandler->[_mgmt_request_response_with_retry->[_do_retryable_operation],close->[_close_handler],_do_retryable_operation->[_handle_exception,_backoff],_open_with_retry->[_do_retryable_operation]]]
Initialize a new object with a specific .
not blocking: same suggestion as the above one, if we move this method into utils, then it could be shared between sync and async
@@ -106,7 +106,8 @@ def read_from_url(url, accept_content_type=None): else: # User has explicitly indicated that they do not want SSL # verification. - context = ssl._create_unverified_context() + if not __UNABLE_TO_VERIFY_SSL: + context = ssl._create_unverified_context() req = Request(url_util.format(url)) content_type = None
[remove_url->[_debug_print_delete_results],list_url->[_iter_local_prefix,_iter_s3_prefix],url_exists->[read_from_url],get_header->[unfuzz],read_from_url->[uses_ssl],LinkParser->[__init__->[__init__]],find_versions_of_archive->[spider],spider->[_spider->[read_from_url,LinkParser]],push_to_url->[warn_no_ssl_cert_checking,uses_ssl],_iter_s3_prefix->[_list_s3_objects],_list_s3_objects->[_iter_s3_contents]]
Read a file from a URL. return response url headers response_nopagon if accept_content_type is False.
What's this condition doing, and what happens if `__UNABLE_TO_VERIFY_SSL` is true here? Seems like there will be no context in that case. Can that happen?
@@ -634,6 +634,13 @@ Any pachctl command that can take a Commit ID, can take a branch name instead.`, trigger.Branch == "" { return errors.Errorf("trigger condition specified without a branch to trigger on, specify a branch with --trigger") } + if trigger.Branch == "" { + trigger = nil + } + var headCommit *pfs.Commit + if head != "" { + headCommit = branch.Repo.NewCommit("", head) + } c, err := client.NewOnUserMachine("user") if err != nil { return err
[StringVar,GetFileURL,PrintDetailedCommitInfo,TempFile,Fd,TempDir,SubscribeCommit,HasPrefix,CreateRepo,Walk,RunFixedArgs,Flush,ReadFile,New,CreateDocsAlias,PrintDetailedBranchInfo,StartCommit,CompactPrintRepo,NewWriter,GetFile,DiffFileAll,MarkFlagCustom,Split,PutFile,Println,Disable,AddFlagSet,Stdin,Finish,IntVarP,GlobFileAll,SameFlag,BoolVar,Dir,FinishCommit,Close,ParseBranch,Wrap,ListFile,Is,PrintDetailedFileInfo,RunBoundedArgs,Marshal,ParseBool,Page,WithGZIPCompression,WithAppendPutFile,LookPath,NewInWorker,FlushCommit,CopyFile,Create,StringSliceVarP,PrintDiffFileInfo,MkdirAll,ListBranch,Printf,ParseBranches,IsDir,PrintFileInfo,PrintDetailedRepoInfo,CreateAlias,TrimPrefix,FilesystemCompletion,CreateBranch,RunPFSLoadTest,PutFileURL,InspectCommit,WithMaxConcurrentStreams,LookupEnv,FileCompletion,Clean,WithModifyFileClient,Int64VarP,WithAppendCopyFile,NewScanner,Text,Wrapf,InspectBranch,IsCygwinTerminal,ListRepoByType,PrintCommitInfo,Name,ToSlash,ParseHistory,Get,NewRepo,Scan,ParseFile,PrintBranch,PrintRepoInfo,InspectRepo,String,Parse,Open,RegisterCompletionFunc,Run,BoolVarP,Flags,DeleteFile,RemoveAll,NewCommit,Fields,ParseCommit,DeleteBranch,Ctx,StringVarP,VarP,Errorf,DeleteRepo,Wait,InspectFile,Join,ExitCode,SquashCommit,ListCommitF,InspectPipeline,NewFlagSet,Command,Int64Var,NewOnUserMachine,WithActiveTransaction,NewCommitProvenance,Fsck,IsTerminal,ScrubGRPC,ParseCommits,AndCacheFunc]
branchProvenance is the provenance for the branch to create or update. Flags for branch creation.
hahaha, I made the same fix in my branch =(
@@ -311,12 +311,10 @@ class BigQueryAvroUtils { verify(v instanceof Boolean, "Expected Boolean, got %s", v.getClass()); return v; case "TIMESTAMP": - // TIMESTAMP data types are represented as Avro LONG types. They are converted back to - // Strings with variable precision (up to six digits) to match the JSON files exported by - // BigQuery. + // TIMESTAMP data types are represented as Avro LONG types, microseconds since the epoch. + // Values may be negative since BigQuery timestamps start at 0001-01-01 00:00:00 UTC. verify(v instanceof Long, "Expected Long, got %s", v.getClass()); - double doubleValue = ((Long) v) / 1_000_000.0; - return formatTimestamp(Double.toString(doubleValue)); + return formatTimestamp((Long) v); case "RECORD": verify(v instanceof GenericRecord, "Expected GenericRecord, got %s", v.getClass()); return convertGenericRecordToTableRow((GenericRecord) v, fieldSchema.getFields());
[BigQueryAvroUtils->[convertRequiredField->[formatTimestamp,formatDate,formatTime,convertGenericRecordToTableRow],convertField->[toGenericAvroSchema],convertNullableField->[convertRequiredField],convertGenericRecordToTableRow->[convertGenericRecordToTableRow]]]
Convert the required field of a partition into the corresponding avro type. Returns a string representation of the given object if it is a reserved word. Get a base64 encoded byte sequence from bytes.
Maybe you don't need to cast it if you've checked its type?
@@ -259,8 +259,8 @@ public class FingerprinterTest { Collection<Fingerprint> fingerprints = action.getFingerprints().values(); for (Fingerprint f: fingerprints) { assertTrue(f.getOriginal().is(upstream)); - assertTrue(f.getOriginal().getName().equals(renamedProject1)); - assertFalse(f.getOriginal().getName().equals(oldUpstreamName)); + assertEquals(f.getOriginal().getName(), renamedProject1); + assertNotEquals(f.getOriginal().getName(), oldUpstreamName); } action = downstreamBuild.getAction(Fingerprinter.FingerprintAction.class);
[FingerprinterTest->[presentFingerprintActionIsReused->[FingerprintAddingBuilder]]]
Rename project. This method checks that all the fingerprints are in the right order.
Here also use hamcrest?
@@ -25,14 +25,14 @@ function archives_shortcode( $attr ) { ); extract( shortcode_atts( $default_atts, $attr, 'archives' ) ); - if ( !in_array( $type, array( 'yearly', 'monthly', 'daily', 'weekly', 'postbypost' ) ) ) + if ( ! in_array( $type, array( 'yearly', 'monthly', 'daily', 'weekly', 'postbypost' ) ) ) $type = 'postbypost'; - if ( !in_array( $format, array( 'html', 'option', 'custom' ) ) ) - $format = 'html'; + if ( ! in_array( $format, array( 'html', 'option', 'custom' ) ) ) + $format = 'html'; if ( '' != $limit ) { - $limit = (int)$limit; + $limit = ( int ) $limit; // A Limit of 0 makes no sense so revert back to the default. if ( 0 == $limit ) { $limit = '';
[No CFG could be retrieved]
Shortcode for archives This function is used to display a list of archives that are currently published.
Whitespace fixes! <3 <3 <3
@@ -46,7 +46,7 @@ define(['../../Core/buildModuleUrl', * @alias CesiumWidget * @constructor * - * @param {Element|String} container The DOM element, or ID of a page element, that will contain the widget. + * @param {Element|String} container The element, or ID of an page element, that will contain the widget. * @param {Object} [options] Configuration options for the widget. * @param {Clock} [options.clock=new Clock()] The clock to use to control current time. * @param {ImageryProvider} [options.imageryProvider=new BingMapsImageryProvider()] The imagery provider to serve as the base layer.
[No CFG could be retrieved]
Construct a URL for a base layer widget. CesiumWidget provides a widget to be rendered in a container.
If we're going to be changing these, I think "The DOM element, or ID of a DOM element" is probably best. DOM is important to qualify. I was unsure about whether node vs. element but the W3 spec uses element as a more specific term.
@@ -81,7 +81,7 @@ public abstract class AbstractLambdaPollLoop { continue; } try { - if (LambdaHotReplacementRecorder.enabled) { + if (LambdaHotReplacementRecorder.enabled && launchMode == LaunchMode.DEVELOPMENT) { try { // do not interrupt during a hot replacement // as shutdown will abort and do nasty things.
[AbstractLambdaPollLoop->[requeue->[requeue],startPollLoop->[run->[isStream]]]]
This method is called by the polling thread. It will poll the next request in the queue This method is called from the lambda thread. This method is called when the polling thread is complete. It is called by the polling thread.
Why only in dev mode are we doing the hot replacement check?
@@ -56,6 +56,14 @@ module Users redirect_to root_url(request_id: request_id) end + def bounced + issuer = sp_session[:issuer] + return if issuer.blank? + service_provider = ServiceProvider.from_issuer(issuer) + @sp_name = service_provider.friendly_name + @sp_link = service_provider.return_to_sp_url + end + private def redirect_to_signin
[SessionsController->[update_last_sign_in_at_on_email->[now],track_authentication_attempt->[new],process_locked_out_user->[new],now->[now]]]
timeout the user session if it is expired.
We may wanna move this to something like `SpHandoffBounced` controller to separate the logic from sessions
@@ -72,10 +72,6 @@ set({name}_LIBRARIES{build_type_suffix} "${{{name}_LIBRARIES{build_type_suffix}} set(CMAKE_MODULE_PATH {deps.build_paths} ${{CMAKE_MODULE_PATH}}) set(CMAKE_PREFIX_PATH {deps.build_paths} ${{CMAKE_PREFIX_PATH}}) - -foreach(_BUILD_MODULE_PATH ${{{name}_BUILD_MODULES_PATHS{build_type_suffix}}}) - include(${{_BUILD_MODULE_PATH}}) -endforeach() """
[CMakeFindPackageCommonMacros->[dedent],find_transitive_dependencies->[append,format,join,dedent]]
Find transitive dependencies.
Wasn't this a feature already of both generators, the normal and the _multi one? removing this from here won't break the _multi one? From the docs: > The <PKG-NAME>Targets-.cmake files use <PKG-NAME>_BUILD_MODULES_<BUILD-TYPE> values to include the files using the include( ) CMake directive. This makes functions or utilities exported by the package available for consumers just by setting find_package(<PKG-NAME>) in the CMakeLists.txt.
@@ -12,7 +12,12 @@ define(function() { * @see DrawCommand * @see ClearCommand */ - var PassState = function() { + var PassState = function(context) { + /** + * DOC_TBA + */ + this.context = context; + /** * The framebuffer to render to. This framebuffer is used unless a {@link DrawCommand} * or {@link ClearCommand} explicitly define a framebuffer, which is used for off-screen
[No CFG could be retrieved]
A class that defines the state of a specific which is used to supplement the.
The rest of this class has doc, and as a general rule we don't allow new DOC_TBAs in master. Is there a good reason for this?
@@ -292,10 +292,13 @@ int NCONF_get_number_e(const CONF *conf, const char *group, const char *name, if (str == NULL) return 0; - for (*result = 0; conf->meth->is_number(conf, *str);) { - *result = (*result) * 10 + conf->meth->to_int(conf, *str); - str++; - } + if (conf == NULL) + *result = strtol(str, &str, 10); + else + for (*result = 0; conf->meth->is_number(conf, *str);) { + *result = (*result) * 10 + conf->meth->to_int(conf, *str); + str++; + } return 1; }
[CONF_dump_bio->[CONF_set_nconf],NCONF_get_number_e->[NCONF_get_string],CONF_load_bio->[CONF_set_nconf],CONF_get_string->[CONF_set_nconf],CONF_get_section->[CONF_set_nconf],CONF_free->[CONF_set_nconf],CONF_get_number->[CONF_set_nconf]]
Get number of occurrences of a number in a group.
One has to recognize that this is actually not equivalent replacement. Because strtol pays attention to `-`, is saturating and sets errno. One can of course argue that it's the right thing to do. [Maybe modulo errno? Maybe it would be appropriate to save-restore it?] But then this calls for question if below code is right. One should note that it's actually prone to undefined behaviour. Because signed arithmetics that carries into sign bit is declared undefined. It might be appropriate to perform unsigned operations, pay attention to carry, then finally perform check for LONG_MAX and saturate? And maybe it should pay attention to `-`. Alternatively it might be more appropriate to use strtoul with cast as ~above~ fallback... Or right thing to do might be to have ~called~ caller passing default method...
@@ -32,7 +32,8 @@ public enum Mode { ASYNC(false), ; private final boolean sync; - private Mode(boolean sync) { + + Mode(boolean sync) { this.sync = sync; }
[toString->[name],forCacheMode->[isSynchronous],apply->[toAsync,toSync]]
Returns the cache mode for the given cache mode.
Guessing you didn't mean to change this?
@@ -40,7 +40,7 @@ export class ItemCountComponent { */ @Input() set params(params: { page?: number; totalItems?: number; itemsPerPage?: number }) { if (params.page !== undefined && params.totalItems !== undefined && params.itemsPerPage !== undefined) { - this.first = (params.page - 1) * params.itemsPerPage === 0 ? 1 : (params.page - 1) * params.itemsPerPage + 1; + this.first = (params.page - 1) * params.itemsPerPage + 1; this.second = params.page * params.itemsPerPage < params.totalItems ? params.page * params.itemsPerPage : params.totalItems; } else { this.first = undefined;
[No CFG could be retrieved]
ItemCountComponent - Class for the ItemCount component.
Is this correct for `page: 0`, `totalItems: 0`, `itemsPerPage: 0`? Is it supposed to happen?
@@ -44,3 +44,7 @@ class Freetype(AutotoolsPackage): def configure_args(self): return ['--with-harfbuzz=no'] + + def setup_dependent_environment(self, spack_env, run_env, dependent_spec): + spack_env.prepend_path('CPATH', + join_path(self.prefix, 'include', 'freetype2'))
[Freetype->[depends_on,version]]
Configure command line arguments for harfbuzz.
You should actually be able to replace this whole `join_path` stuff with `self.prefix.include.freetype2`
@@ -53,8 +53,15 @@ public class JSONFlattenerMaker implements ObjectFlatteners.FlattenerMaker<JsonN .options(EnumSet.of(Option.SUPPRESS_EXCEPTIONS)) .build(); + private final boolean keepNullValues; + private final CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder(); + public JSONFlattenerMaker(boolean keepNullValues) + { + this.keepNullValues = keepNullValues; + } + @Override public Iterable<String> discoverRootFields(final JsonNode obj) {
[JSONFlattenerMaker->[valueConversionFunction->[valueConversionFunction]]]
Discover root fields.
I'm wondering why `JSONFlattenMaker` filters nulls out even though other `FlattenMaker` implementations don't. We should probably make the behavior consistent, but it should be done in a separate PR.
@@ -161,11 +161,13 @@ class ScroogeGen(SimpleCodegenTask, NailgunTask): for lhs, rhs in partial_cmd.namespace_map: args.extend(['--namespace-map', '%s=%s' % (lhs, rhs)]) - if partial_cmd.rpc_style == 'ostrich': - args.append('--finagle') - args.append('--ostrich') - elif partial_cmd.rpc_style == 'finagle': - args.append('--finagle') + # ignore rpc_style if we think compiler_args is setting it + if not compiler_args_has_rpc_style(): + if partial_cmd.rpc_style == 'ostrich': + args.append('--finagle') + args.append('--ostrich') + elif partial_cmd.rpc_style == 'finagle': + args.append('--finagle') args.extend(['--dest', target_workdir])
[ScroogeGen->[gen->[_tempname],execute_codegen->[_validate_language,_validate_rpc_style],_thrift_dependencies_for_target->[_declares_service],synthetic_target_type->[_target_type_for_language],_resolved_dep_info->[_resolve_deps],_validate_compiler_configs->[collect->[compiler_config],compiler_config],_target_type_for_language->[_registered_language_aliases]]]
Generate a sequence of Nagle objects. Format the return code of the command.
As mentioned above, I think this arg creation should move before `PartialCmd` creation.
@@ -68,12 +68,16 @@ public class CobblerSyncTask extends RhnJavaJob { Double mtime = null; try { - mtime = (Double) invoker.invokeMethod("last_modified_time", + String mtimeStr = (String) invoker.invokeMethod("last_modified_time", new ArrayList()); + mtime = Double.parseDouble(mtimeStr); } catch (XmlRpcFault e) { log.error("Error calling cobbler.", e); } + catch (NumberFormatException e) { + log.error("Error converting cobbler response", e); + } CobblerDistroSyncCommand distSync = new CobblerDistroSyncCommand(); distSync.backsyncKernelOptions();
[CobblerSyncTask->[execute->[getMessage,getDefaultDownloadLocation,longValue,equals,getOrg,CobblerProfileSyncCommand,getTree,CobblerDistroSyncCommand,ArrayList,sendErrorEmail,invokeMethod,getTime,getClassFromConfig,lookupKickstartDataByUpdateable,set,CobblerProfileEditCommand,getRealUpdateType,error,debug,updateKickstartableTree,getCobblerId,getNewestTree,getName,syncNullDistros,KickstartEditCommand,backsyncKernelOptions,isNotEmpty,get,store,getId],AtomicLong]]
Execute the Cobbler sync. syncs distros and profiles if it is less than last change.
I cannot think that these API changes are wanted.
@@ -870,9 +870,11 @@ def prepare_and_parse_args(plugins, args): "multiple -d flags or enter a comma separated list of domains " "as a parameter.") helpful.add( - None, "--duplicate", dest="duplicate", action="store_true", - help="Allow getting a certificate that duplicates an existing one") - + None, "--keep-until-expiring", "--keep", "--reinstall", + dest="reinstall", action="store_true", + help="If the requested cert matches an existing cert, keep the " + "existing one by default until it is due for renewal (for the " + "'run' subcommand this means reinstall the existing cert)") helpful.add_group( "automation", description="Arguments for automating execution & other tweaks")
[_auth_from_domains->[_report_new_cert,_treat_as_renewal],_treat_as_renewal->[_find_duplicative_certs],install->[_init_le_client,_find_domains,choose_configurator_plugins],_paths_parser->[config_help,add,add_group,flag_default],_plugins_parsing->[add,add_group,add_plugin_args],revoke->[revoke,_determine_account],_handle_exception->[flag_default],run->[_auth_from_domains,_suggest_donate,_find_domains,choose_configurator_plugins,_init_le_client],WebrootPathProcessor->[__init__->[__init__]],main->[setup_logging,prepare_and_parse_args],_init_le_client->[_determine_account],prepare_and_parse_args->[HelpfulArgumentParser,flag_default,add_deprecated_argument,add,parse_args,config_help,add_group],rollback->[rollback],setup_logging->[setup_log_file_handler],HelpfulArgumentParser->[__init__->[usage_strings,SilentParser,flag_default],add_deprecated_argument->[add_deprecated_argument],add_plugin_args->[add_group],parse_args->[parse_args],add->[add_argument]],obtain_cert->[_auth_from_domains,_suggest_donate,_find_domains,_report_new_cert,choose_configurator_plugins,_init_le_client],SilentParser->[add_argument->[add_argument]],choose_configurator_plugins->[set_configurator,diagnose_configurator_problem],_create_subparsers->[add,add_group,flag_default],main]
Prepares and parses command line arguments for a . Add options for the missing certificate. A command line interface to add a new . Get the name of the node that is not in the list of known nodes.
What is the benefit of these aliases? I personally feel that it complicates the command line with little to no gain.
@@ -18,12 +18,7 @@ module UploadHelper elsif %w[.yml .yaml].include? filetype { type: '.yml', - contents: YAML.safe_load( - upload_file.read.encode(Encoding::UTF_8, encoding), - [Date, Time, Symbol, ActiveSupport::TimeWithZone, ActiveSupport::TimeZone], - [], - true # Allow aliases in YML file - ) + contents: parse_yaml_content(upload_file.read.encode(encoding, 'UTF-8')) } else raise StandardError, I18n.t('upload_errors.malformed_csv')
[process_file_upload->[size,original_filename,include?,raise,safe_load,require,encode,t,extname],upload_files_helper->[binmode,path,mime_type,write,new,file?,casecmp?,open,read,directory?,rewind,each,extname,name]]
Processes the file upload of a and returns a hash with the necessary information.
Why did you change the arguments to `encode` here?
@@ -252,9 +252,11 @@ static void set_allowed_options(OptionList *allowed_options) allowed_options->insert(std::make_pair("map-dir", ValueSpec(VALUETYPE_STRING, _("Same as --world (deprecated)")))); allowed_options->insert(std::make_pair("world", ValueSpec(VALUETYPE_STRING, - _("Set world path (implies local game) ('list' lists all)")))); + _("Set world path (implies local game)")))); allowed_options->insert(std::make_pair("worldname", ValueSpec(VALUETYPE_STRING, _("Set world by name (implies local game)")))); + allowed_options->insert(std::make_pair("worldlist", ValueSpec(VALUETYPE_STRING, + _("Get list of worlds (implies local game) ('path' lists paths, 'name' lists names, 'both' lists both)")))); allowed_options->insert(std::make_pair("quiet", ValueSpec(VALUETYPE_FLAG, _("Print to console errors only")))); allowed_options->insert(std::make_pair("info", ValueSpec(VALUETYPE_FLAG,
[No CFG could be retrieved]
Set allowed options Add some options that are not allowed to log.
Line is longer than 80 characters. Please split it.
@@ -229,6 +229,14 @@ if (device_permitted($vars['device']) || $check_device == $vars['device']) { </a> </li>'); + if (@dbFetchCell("SELECT COUNT(stp_id) FROM stp WHERE device_id = '".$device['device_id']."'") > '0') { + echo '<li class="'.$select['stp'].'"> + <a href="'.generate_device_url($device, array('tab' => 'stp')).'"> + <img src="images/16/chart_organisation.png" align="absmiddle" border="0" /> STP + </a> + </li>'; + } + if (@dbFetchCell("SELECT COUNT(*) FROM `packages` WHERE device_id = '".$device['device_id']."'") > '0') { echo '<li class="'.$select['packages'].'"> <a href="'.generate_device_url($device, array('tab' => 'packages')).'">
[No CFG could be retrieved]
Displays a list of all possible network interfaces for the given device. Displays a list of all possible n - hot - devices and services.
Instead of `count` just select `1` - will take some load away from the sql if the table gets too populated
@@ -37,6 +37,7 @@ class Annotation < ApplicationRecord content: annotation_text.content, annotation_category: annotation_text.annotation_category&.annotation_category_name, + category_id: annotation_text.annotation_category&.id, type: self.class.name, number: annotation_number, is_remark: is_remark,
[Annotation->[modify_mark_with_deduction->[flexible_criterion,update_deduction],get_data->[split,id,deduction,filename,last_name,content,first_name,annotation_category_name,name],belongs_to,after_destroy,include?,deduction,after_create,validates_numericality_of,validates_associated,validates_format_of,validates_presence_of,validates_inclusion_of]]
Get data for object.
Sorry I missed this one. The name is fine, but in general I'd prefer using the exact attribute names when possible: here that would be `annotation_category_id` instead of `category_id`. In fact, you can access `annotation_text.annotation_category_id` explicitly. :)
@@ -164,6 +164,17 @@ class ProxyInvocationHandler implements InvocationHandler { + Arrays.toString(args) + "]."); } + private void writeObject(java.io.ObjectOutputStream stream) + throws IOException { + throw new NotSerializableException( + "PipelineOptions objects are not serializable and should not be embedded into transforms " + + "(did you capture a PipelineOptions object in a field or in an anonymous class?). " + + "Instead, if you're using a DoFn, access PipelineOptions at runtime " + + "via ProcessContext/StartBundleContext/FinishBundleContext.pipelineOptions(), " + + "or pre-extract necessary fields from PipelineOptions " + + "at pipeline construction time."); + } + /** * Track whether options values are explicitly set, or retrieved from defaults. */
[ProxyInvocationHandler->[getValueFromJson->[toString],getDefault->[equals],buildOptionNameToSpecMap->[getValue],Deserializer->[deserialize->[as,getValue]],PipelineOptionsDisplayData->[populateDisplayData->[isDefault,getValue]],outputRuntimeOptions->[invoke,equals],BoundValue->[fromDefault->[of],fromExplicitOption->[of]],toString->[toString,getValue],Serializer->[serialize->[getValue],ensureSerializable->[getValue]]]]
This method is invoked when a method is invoked.
`ProcessContext.pipelineOptions()` -> `ProcessContext.getPipelineOptions()` Its also available on StartBundleContext and FinishBundleContext, should we mention these?
@@ -0,0 +1,12 @@ +from decimal import Decimal + + +def get_error_response(amount: Decimal, **additional_kwargs) -> dict: + """Create a place holder response for invalid/ failed requests + for generated a failed transaction object.""" + return dict(is_success=False, amount=amount, **additional_kwargs) + + +def get_amount_for_razorpay(amount: Decimal) -> int: + """Convert a decimal amount to int, by multiplying the value by 100.""" + return int(amount * 100)
[No CFG could be retrieved]
No Summary Found.
Same here, I'd add a word about the conversion, it's not about multiplying the amount, it's about converting it from Indian rupees to paisa.
@@ -1561,11 +1561,8 @@ h6 > .elgg-icon { .elgg-plugin-title { font-weight: bold; - margin-right: 5px; -} -.elgg-state-active .elgg-plugin-title { - font-style: italic; } + .elgg-state-inactive .elgg-plugin-title { color: #666; }
[No CFG could be retrieved]
Displays a list of items with options for the menu item. list of all possible contributors.
@mrclay Sorry, but I can't stand the look of it.
@@ -45,6 +45,13 @@ if (process.platform === 'darwin') { } } +if (process.platform === 'linux' && bindings.unityLauncherAvailable()) { + app.unityLauncher = { + setBadgeCount: bindings.unityLauncherSetBadgeCount, + getBadgeCount: bindings.unityLauncherGetBadgeCount + } +} + app.allowNTLMCredentialsForAllDomains = function (allow) { if (!process.noDeprecations) { deprecate.warn('app.allowNTLMCredentialsForAllDomains', 'session.allowNTLMCredentialsForDomains')
[No CFG could be retrieved]
Get the application menu.
The existence of API should not depend on current environment, if an API is available on one Linux distribution, it should be able to be called on other Linux distributions, even though it may fail. We can add a `app.unityLauncher.isAvailable` API to let user decide how to behave on different platforms.
@@ -39,7 +39,9 @@ func GetPodLogs(kubeClient kubernetes.Interface, namespace string, podName strin os.Exit(1) } - defer logStream.Close() + defer func() { + _ = logStream.Close() + }() buf := new(bytes.Buffer) _, err = buf.ReadFrom(logStream) if err != nil {
[Stream,NewSet,CoreV1,Msgf,Now,NewForConfig,Close,AdmissionregistrationV1beta1,Delete,Info,ReadFrom,Exit,Add,Done,Error,NewTime,GetLogs,NewNonInteractiveDeferredLoadingClientConfig,Since,SliceStable,MutatingWebhookConfigurations,UnixNano,ReadString,Contains,Get,Err,Namespaces,Sleep,ClientConfig,Printf,Msg,Println,NewReader,Background,Int64Ptr,List,NewDefaultClientConfigLoadingRules,String,Pods]
GetPodLogs returns the logs of a given pod in a given namespace. DeleteWebhookConfiguration deletes the mutating webhook configuration by name.
In such cases, if you are disregarding error checks anyway, you can annotate the linter to not flag such errors by using `//nolint: ...`.
@@ -16,6 +16,7 @@ def hash_all(strs, digest=None): """ digest = digest or hashlib.sha1() for s in strs: + s = ensure_binary(s) digest.update(s) return digest.hexdigest()
[stable_json_hash->[hash_all],Sharder->[is_in_shard->[compute_shard],compute_shard->[hash_all],__init__->[ensure_int->[InvalidShardSpec],InvalidShardSpec,ensure_int]]]
Hashes all the strings in strs and returns the hash in hex form.
Hasher requires byte strings. Most of the changes made below are converting to bytes.
@@ -132,6 +132,7 @@ func NewTraefikDefaultPointersConfiguration() *TraefikConfiguration { defaultKubernetes.Watch = true defaultKubernetes.Endpoint = "" defaultKubernetes.LabelSelector = "" + defaultKubernetes.IngressClass = "" defaultKubernetes.Constraints = types.Constraints{} // default Mesos
[Duration]
This is a utility function that returns a catalog of all the possible types of a node. missing - related - fields.
Hmm, interesting. Do we need to set this given that it's the zero value?
@@ -18,6 +18,7 @@ import androidx.test.ext.junit.runners.AndroidJUnit4; import static com.ichi2.libanki.Consts.MODEL_CLOZE; import static com.ichi2.libanki.Utils.stripHTML; import static org.hamcrest.MatcherAssert.assertThat; +import static org.hamcrest.Matchers.contains; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.not; import static org.junit.Assert.assertArrayEquals;
[ModelTest->[test_cloze_ordinals->[size,getJSONObject,setCurrent,current,getCol,addNote,byName,put,cards,getModels,addTemplateModChanged,newTemplate,setItem,getOrd,cardCount,assertEquals,newNote,remTemplate,get,save],test_modelCopy->[scmhash,getLong,getString,copy,current,assertNotEquals,getCol,length,assertEquals],test_fields->[scmhash,getFields,getJSONObject,flush,getItem,current,assertNotEquals,getCol,addNote,containsString,setItem,getNote,renameField,addField,assertArrayEquals,moveField,assertEquals,newNote,newField,get,getString,assertThat,remField],test_req->[assertTrue,getJSONArray,save,length,reqSize,getString,getInt,has,getCol,contains,byName,assertEquals,put,JSONArray,getModels],test_cloze_mathjax->[assertTrue,numberOfCards,not,assertThat,setCurrent,endsWith,containsString,assertNotEquals,getCol,addNote,q,byName,assertEquals,newNote,setItem],test_text->[assertThat,current,containsString,getCol,put,addNote,q,save,newNote,setItem],test_chained_mods->[remTemplate,newNote,getJSONObject,addTemplateModChanged,save,setCurrent,current,newTemplate,getCol,addNote,q,byName,assertEquals,put,setItem,getModels],test_modelDelete->[cardCount,current,getCol,addNote,setItem,assertEquals,newNote,rem],reqSize->[getInt,assertEquals,length],test_templates->[size,getJSONObject,current,getCol,addNote,q,put,cards,getModels,stripHTML,addTemplateModChanged,newTemplate,load,setItem,template,moveTemplate,getOrd,queryLongScalar,cardCount,assertEquals,newNote,remTemplate,get,save],test_cloze->[size,flush,setCurrent,assertNotEquals,getCol,addNote,q,byName,cards,numberOfCards,containsString,setItem,cardCount,assertEquals,newNote,get,getString,assertThat,a],test_typecloze->[assertThat,newNote,save,setCurrent,containsString,getCol,addNote,q,byName,put,setItem],test_modelChange->[getJSONObject,getItem,current,queryScalar,getCol,addNote,q,byName,put,getModels,numberOfCards,addTemplateModChanged,newTemplate,containsString,load,setItem,change,save,getOrd,assertEquals,newNote,remTemplate,get,getId,useCount,assertThat]]]
package com. ichi2. libanki. RobolectricTest ; Basic copy of the models.
Doesn't seem necessary
@@ -24,6 +24,10 @@ import ( templatev1 "github.com/openshift/api/template/v1" ) +func init() { + batchv1.AddToScheme(legacyscheme.Scheme) +} + type roundtripper func(*http.Request) (*http.Response, error) func (rt roundtripper) RoundTrip(r *http.Request) (*http.Response, error) {
[NewSimpleClientset,NewBuffer,Unix,PrioritizedVersionsAllGroups,Error,TestOnlyStaticRESTMapper,NopCloser,Marshal,PrependReactor,checkReadiness,NewForConfig,Fatal,AuthorizationV1,Add]
RoundTrip is a wrapper around the underlying RoundTripper. It will return an error if the.
This needs to be fixed (can be followup), this might risk infecting other tests. We should have scheme just for the test, as discussed some time ago.
@@ -300,16 +300,6 @@ public class DoFnOperator<InputT, OutputT> extends AbstractStreamOperator<Window sideInputReader = NullSideInputReader.of(sideInputs); - // maybe init by initializeState - if (nonKeyedStateInternals == null) { - if (keyCoder != null) { - nonKeyedStateInternals = - new FlinkKeyGroupStateInternals<>(keyCoder, getKeyedStateBackend()); - } else { - nonKeyedStateInternals = new FlinkSplitStateInternals<>(getOperatorStateBackend()); - } - } - if (!sideInputs.isEmpty()) { FlinkBroadcastStateInternals sideInputStateInternals =
[DoFnOperator->[processElement2->[addSideInputValue,setPushedBackWatermark],processElement1->[setPushedBackWatermark],snapshotState->[invokeFinishBundle,snapshotState],TaggedKvCoder->[decode->[decode],verifyDeterministic->[verifyDeterministic],encode->[encode]],FlinkTimerInternals->[deleteTimer->[cleanupPendingTimer],currentProcessingTime->[currentProcessingTime],currentSynchronizedProcessingTime->[currentProcessingTime],currentInputWatermarkTime->[getPushbackWatermarkHold],setTimer->[setTimer]],processElement->[processElement],emitAllPushedBackData->[setPushedBackWatermark,processElement],setup->[setup],processWatermark1->[getPushbackWatermarkHold],dispose->[dispose],initializeState->[initializeState],addSideInputValue->[addSideInputValue],processWatermark2->[processWatermark1],emitWatermark->[emitWatermark],onProcessingTime->[checkInvokeStartBundle],close->[close],open->[getDoFn,createWrappingDoFnRunner]]]
Initializes the state of the FlinkInput. get - finds a in the state - internals and timer - internals.
Why'd you remove this check?
@@ -118,7 +118,7 @@ class Jetpack_CLI extends WP_CLI_Command { } $body = wp_remote_retrieve_body( $response ); - if ( ! $body ) { + if ( ! $body || is_wp_error( $body ) ) { WP_CLI::error( __( 'Failed to test connection (empty response body)', 'jetpack' ) ); }
[Jetpack_CLI->[test_connection->[get_error_message,get_error_code],sync_queue->[peek],partner_provision->[generate_secrets,partner_provision_error,get_api_host],partner_provision_error->[get_error_message,get_error_code],partner_cancel->[partner_provision_error,get_api_host],sync->[do_full_sync,get_error_code]]]
Test the connection to the Jetpack server.
This probably should add a new response conditional, as the error message being passed back (empty response body) wouldn't be accurate.
@@ -227,7 +227,7 @@ class TokenNetwork: timeout_min = self.settlement_timeout_min() timeout_max = self.settlement_timeout_max() - invalid_timeout = settle_timeout < timeout_min or settle_timeout > timeout_max + invalid_timeout = settle_timeout <= timeout_min or settle_timeout >= timeout_max if invalid_timeout: msg = ( f"settle_timeout must be in range [{timeout_min}, "
[TokenNetwork->[update_transfer->[chain_id,_detail_participant,_detail_channel],safety_deprecation_switch->[safety_deprecation_switch],new_netting_channel->[raise_if_invalid_address_pair,token_network_deposit_limit,safety_deprecation_switch],_set_total_deposit->[channel_participant_deposit_limit,token_network_deposit_limit,safety_deprecation_switch,_detail_participant],_check_for_outdated_channel->[_detail_channel],_set_total_withdraw->[_detail_channel,_detail_participant],close->[chain_id,_detail_channel],detail_participants->[ParticipantsDetails,get_channel_identifier,_detail_participant],unlock->[_detail_participant,_detail_channel],channel_participant_deposit_limit->[channel_participant_deposit_limit],set_total_withdraw->[chain_id,_detail_participant,_detail_channel],settle->[_detail_participant,_detail_channel],settlement_timeout_max->[settlement_timeout_max],set_total_deposit->[channel_participant_deposit_limit,token_network_deposit_limit,safety_deprecation_switch,_detail_participant,_detail_channel],_settle->[_detail_participant,_detail_channel],token_network_deposit_limit->[token_network_deposit_limit],get_channel_identifier_or_none->[get_channel_identifier],_get_channel_state->[_detail_channel],detail->[token_address,ChannelDetails,chain_id,detail_participants,_detail_channel],_detail_participant->[raise_if_invalid_address_pair,ParticipantDetails],_unlock->[_detail_channel,_detail_participant],get_channel_identifier->[raise_if_invalid_address_pair],_update_transfer->[_detail_participant,_detail_channel],_detail_channel->[raise_if_invalid_address_pair,get_channel_identifier,ChannelData],can_transfer->[channel_is_opened,_detail_participant],_close->[_detail_channel,_detail_participant],settlement_timeout_min->[settlement_timeout_min],_new_netting_channel->[token_network_deposit_limit,safety_deprecation_switch]]]
Creates a new channel in the TokenNetwork. Open a new channel in the token network.
Going by the variable names, min and max should be allowed. Is there a reason against changing this in the contracts?
@@ -134,6 +134,9 @@ cont_stop_agg_ult(struct ds_cont_child *cont) { int rc; + if (!cont->sc_vos_aggregating) + return; + D_DEBUG(DF_DSMS, DF_CONT": Stopping aggregation ULT\n", DP_CONT(NULL, cont->sc_uuid));
[No CFG could be retrieved]
END of function get_last_aggregate finds the object in the child list that has the same key in the child list.
I don't know why this extra checking is required, but it could cause 'sc_agg_ult' leaking when the aggregation ULT exited before calling this cont_stop_agg_ult() function. So you'd remove this check or move it after the 'sc_agg_ult' free below.
@@ -5,7 +5,7 @@ import os import platform from cryptography.hazmat.backends import default_backend -from cryptography.hazmat.primitives.asymmetric import rsa +from cryptography.hazmat.primitives.asymmetric.rsa import generate_private_key # type: ignore import josepy as jose import OpenSSL import zope.component
[sample_user_agent->[DummyConfig,determine_user_agent],perform_registration->[perform_registration],Client->[obtain_and_enroll_certificate->[obtain_certificate],obtain_certificate->[obtain_certificate_from_csr,obtain_certificate],__init__->[acme_from_config_key]],view_config_changes->[view_config_changes],register->[acme_from_config_key]]
Certbot client API. Generate a user agent string for the certbot acme client.
Usually I try and dig into this stuff myself when reviewing to offer a concrete suggestion but for speed, is there anyway we can remove the `type: ignore` here? If not, is there a link to the issue describing the problem?
@@ -36,7 +36,7 @@ public class MutinyInfrastructure { * calling from a Vert.x event-loop context / thread. */ String threadName = Thread.currentThread().getName(); - return !threadName.contains("vertx-eventloop-thread-"); + return !threadName.startsWith("vert.x-eventloop-thread-"); } }); }
[MutinyInfrastructure->[configureDroppedExceptionHandlerAndThreadBlockingChecker->[accept->[error],getAsBoolean->[contains,getName],setCanCallerThreadBeBlockedSupplier,getLogger,setDroppedExceptionHandler,BooleanSupplier],configureMutinyInfrastructure->[setDefaultExecutor]]]
Configure dropped exception handler and thread blocking checker.
Couldn't we have a constant in Vert.x or an API to test that?
@@ -897,15 +897,12 @@ class WPSEO_Metabox extends WPSEO_Meta { wp_enqueue_style( 'featured-image', plugins_url( 'css/featured-image' . WPSEO_CSSJS_SUFFIX . '.css', WPSEO_FILE ), array(), WPSEO_VERSION ); wp_enqueue_style( 'jquery-qtip.js', plugins_url( 'css/jquery.qtip' . WPSEO_CSSJS_SUFFIX . '.css', WPSEO_FILE ), array(), '2.2.1' ); - wp_enqueue_script( 'jquery-ui-autocomplete' ); - // Always enqueue minified as it's not our code. wp_enqueue_script( 'jquery-qtip', plugins_url( 'js/jquery.qtip.min.js', WPSEO_FILE ), array( 'jquery' ), '2.2.1', true ); wp_enqueue_script( 'wp-seo-metabox', plugins_url( 'js/wp-seo-metabox' . WPSEO_CSSJS_SUFFIX . '.js', WPSEO_FILE ), array( 'jquery', 'jquery-ui-core', - 'jquery-ui-autocomplete', ), WPSEO_VERSION, true ); if ( post_type_supports( get_post_type(), 'thumbnail' ) ) {
[WPSEO_Metabox->[score_headings->[save_score_result,strip_separators_and_fold],setup_page_analysis->[is_metabox_hidden],add_custom_box->[add_meta_box],check_double_focus_keyword->[save_score_result],enqueue->[localize_script,is_metabox_hidden],script->[localize_script],do_meta_box->[get_metabox_post],calculate_results->[aasort,strtolower_utf8],score_title->[save_score_result],column_heading->[is_metabox_hidden],publish_box->[get_metabox_post,is_metabox_hidden],meta_box->[do_tab,get_metabox_post],score_keyword->[save_score_result],score_body->[save_score_result,strtolower_utf8],snippet->[get_metabox_post],score_anchor_texts->[save_score_result,strtolower_utf8],score_images_alt_text->[save_score_result,strip_separators_and_fold],strip_separators_and_fold->[strtolower_utf8],column_content->[is_metabox_hidden],get_images_alt_text->[strtolower_utf8],score_description->[save_score_result,strip_separators_and_fold],get_headings->[strtolower_utf8],add_meta_box->[is_metabox_hidden],posts_filter_dropdown->[is_metabox_hidden],column_sort->[is_metabox_hidden],localize_script->[get_metabox_post,is_metabox_hidden],score_url->[save_score_result,strip_separators_and_fold]]]
Enqueue metaboxes for admin Enqueue scripts and styles for metabox Add the script to the metabox.
If `jquery-ui-core` is required, then `jquery` is already a dependency of that, so this line is redundant.
@@ -56,12 +56,12 @@ import java.util.concurrent.Callable; public class ITKafkaIndexingServiceTest extends AbstractIndexerTest { private static final Logger LOG = new Logger(ITKafkaIndexingServiceTest.class); - private static final int DELAY_BETWEEN_EVENTS_SECS = 5; private static final String INDEXER_FILE = "/indexer/kafka_supervisor_spec.json"; private static final String QUERIES_FILE = "/indexer/kafka_index_queries.json"; private static final String DATASOURCE = "kafka_indexing_service_test"; private static final String TOPIC_NAME = "kafka_indexing_service_topic"; - private static final int MINUTES_TO_SEND = 4; + private static final int NUM_EVENTS_TO_SEND = 60; + private static final long WAIT_TIME_MILIIS = 2 * 60 * 1000L; // We'll fill in the current time and numbers for added, deleted and changed // before sending the event.
[ITKafkaIndexingServiceTest->[testKafka->[call->[areSegmentsLoaded],forPattern,testQueriesFromString,getResourceAsStream,toString,info,put,plusSeconds,replaceAll,submitSupervisor,forID,getZookeeperHosts,DateTime,replace,error,getKafkaHost,print,sleep,ZkClient,format,close,ISE,StringSerializer,propagate,createTopic,shutdownSupervisor,retryUntil,compareTo,get,Properties],afterClass->[info,deleteTopic,unloadAndKillData],forPattern,Logger]]
This test class creates a new instance of the IKafkaIndexingService. It uses the The data structure for the object.
Misspelled, should be `WAIT_TIME_MILLIS`
@@ -39,7 +39,17 @@ public class ITParallelIndexTest extends AbstractITBatchIndexTest doIndexTestTest( INDEX_DATASOURCE, INDEX_TASK, - INDEX_QUERIES_RESOURCE + INDEX_QUERIES_RESOURCE, + false + ); + + // Index again, this time only choosing the second data file, and without explicit intervals chosen. + // The second datafile covers both day segments, so this should replace them, as reflected in the queries. + doIndexTestTest( + INDEX_DATASOURCE, + REINDEX_TASK, + REINDEX_QUERIES_RESOURCE, + true ); } }
[ITParallelIndexTest->[testIndexData->[getExtraDatasourceNameSuffix,doIndexTestTest,unloader]]]
Test index data.
By any chance, is it possible that this makes the test flaky? It will block while `getSegmentVersions()` returns a single version. What happens if overshadowed segments are removed from `CoordinatorServerView` between retries?
@@ -21,7 +21,7 @@ const ( bucketName = "file.v1" // Use old namespace for data until we do some field renaming for GA. - namespace = "audit.file" + namespace = "." ) func init() {
[purgeDeleted->[Errorw,IsExcludedPath,purgeOlder,Event],purgeOlder->[Seek,Cursor,Next,Now,Delete,Debugf,With,HasPrefix,Since,Update],init->[Error,Errorw,Now,UTC,Start,Wrap,Done,OpenBucket],hasFileChangedSinceLastEvent->[Debugw,Namespace,Warnw],Close->[Close],Run->[init,purgeDeleted,Done,reportEvent],reportEvent->[Errorw,hasFileChangedSinceLastEvent,Debugw,Delete,Event],DefaultMetricSet,Wrapf,Store,Beta,WithNamespace,NewLogger,Geteuid,MustAddMetricSet,Module,WithHostParser,Wrap,UnpackConfig,Load,Debugf]
Package file_integrity requires a module name metricsetName and namespace. Construct a new metric set from a base metric set.
I assume that means it ignores the namespace?
@@ -0,0 +1,14 @@ +# frozen_string_literal: true + +class CreateTopicGroupTable < ActiveRecord::Migration[5.2] + def change + create_table :topic_groups do |t| + t.integer :group_id, null: false + t.integer :topic_id, null: false + t.integer :last_read_post_number, null: false, default: 0 + t.timestamps null: false + end + + add_index :topic_groups, %i[group_id topic_id], unique: true + end +end
[No CFG could be retrieved]
No Summary Found.
hmmm so this is starting blank? Why not populate it in the migration?
@@ -50,7 +50,7 @@ class Cuda(Package): url="http://developer.download.nvidia.com/compute/cuda/6_5/rel/installers/cuda_6.5.14_linux_64.run") def install(self, spec, prefix): - runfile = glob(join_path(self.stage.path, 'cuda*run'))[0] + runfile = glob(join_path(self.stage.path, 'cuda*_linux*'))[0] chmod = which('chmod') chmod('+x', runfile) runfile = which(runfile)
[Cuda->[install->[join_path,runfile,chmod,glob,which],version]]
Install CUDA and CUDA toolkit.
was it a bug or shall this be version dependent?
@@ -53,7 +53,7 @@ class LineSource(iobase.BoundedSource): for line in f: if not range_tracker.try_claim(current): return - yield line.rstrip('\n') + yield line.decode().rstrip('\n') current += len(line) def split(self, desired_bundle_size, start_position=None, stop_position=None):
[SourcesTest->[test_run_direct->[LineSource,_create_temp_file],test_read_from_source->[LineSource,read,get_range_tracker,_create_temp_file]]]
Read a list of source bundles.
In case you are curious, we changed this to `line.rstrip(b'\n')` to fix the test in Python 3.
@@ -72,10 +72,11 @@ for _, module_name, _ in pkgutil.walk_packages(pyz.__path__): module = importlib.import_module(pyz.__name__ + '.' + module_name) # Check if we are in the IPython shell -try: +if major == 3: import builtins -except ImportError: - import __builtin__ as builtins # Py2 +else: + import __builtin__ as builtins + _is_ipython = hasattr(builtins, '__IPYTHON__') # Configure ROOT facade module
[pythonization->[pythonization_impl->[add_pythonization,fn]],cleanup->[__dict__,ClearProxiedObjects,hasattr,EndOfProcessCleanups],get_ipython,any,hasattr,dirname,import_module,walk_packages,register,format,join,ROOTFacade]
A decorator to be used in pythonization modules for pythonizations. Keep polling for root events and clean up all the objects.
Can you explain why this change? We can't do try except because `builtins` can also exist in Python2 but, in that case, it does not contain `__IPYTHON__`?
@@ -70,6 +70,7 @@ KNOWN_TASK_TYPES = { 'optical_character_recognition', 'place_recognition', 'question_answering', + 'salient_object_detection', 'semantic_segmentation', 'sound_classification', 'speech_recognition',
[FileSourceHttp->[start_download->[http_range_headers,handle_http_response],deserialize->[validate_string]],Model->[deserialize->[DeserializationError,validate_string,deserialization_context,validate_string_enum,deserialize]],TaggedBase->[deserialize->[DeserializationError]],run_in_parallel->[start->[JobWithQueuedOutput,QueuedOutputContext],complete,cancel],PostprocUnpackArchive->[apply->[print_section_heading],deserialize->[validate_string,validate_relative_path]],load_models->[DeserializationError,deserialization_context,deserialize],JobWithQueuedOutput->[complete->[print],cancel->[cancel,interrupt]],deserialization_context->[DeserializationError],validate_nonnegative_int->[DeserializationError],DirectOutputContext->[print->[print],subprocess->[print,_signal_message]],Reporter->[log_error->[print,printf],print_section_heading->[printf],with_event_context->[Reporter],print->[printf],log_details->[print],log_warning->[print,printf],print_progress->[print],emit_event->[print],end_progress->[print],print_group_heading->[print,printf]],FileSourceGoogleDrive->[start_download->[http_range_headers,handle_http_response],deserialize->[validate_string]],ModelFile->[deserialize->[DeserializationError,validate_string,deserialization_context,validate_nonnegative_int,validate_relative_path,deserialize]],validate_string->[DeserializationError],load_models_from_args->[print,load_models_or_die],QueuedOutputContext->[subprocess->[_signal_message]],PostprocRegexReplace->[apply->[print_section_heading],deserialize->[validate_string,validate_nonnegative_int,validate_relative_path]],load_models_or_die->[print,load_models],validate_string_enum->[validate_string,DeserializationError],JobContext->[printf->[print]],validate_relative_path->[validate_string,DeserializationError]]
Load the n - ary models from the MSM file. A JobContext class for handling the signal.
please also update tools/downloader/README.md
@@ -444,7 +444,11 @@ namespace Dynamo.Core internal string GetUserDataFolder(IPathResolver pathResolver = null) { if (pathResolver != null && !string.IsNullOrEmpty(pathResolver.UserDataRootFolder)) - return GetDynamoDataFolder(pathResolver.UserDataRootFolder); + { + var versionedPath = GetDynamoDataFolder(pathResolver.UserDataRootFolder); + if (Directory.Exists(versionedPath)) return versionedPath; + return pathResolver.UserDataRootFolder; + } if (!string.IsNullOrEmpty(userDataDir)) return userDataDir; //Return the cached userDataDir if we have one.
[PathManager->[TransformPath->[GetUserDataFolder]]]
Get the user data folder.
Can you please assess the impact of this code change for packages migration?
@@ -2512,6 +2512,13 @@ RSpec.describe TopicsController do expect(response.media_type).to eq('application/rss+xml') end + it 'renders rss even if post is deleted' do + topic.posts.map(&:destroy) + get "/t/foo/#{topic.id}.rss" + expect(response.status).to eq(200) + expect(response.media_type).to eq('application/rss+xml') + end + it 'renders rss of the topic correctly with subfolder' do set_subfolder "/forum" get "/t/foo/#{topic.id}.rss"
[topic_user_post_timings_count->[count,map],extract_post_stream->[parsed_body,map],invite_group->[id,to,post,eq,name],email,create,let,duration,week,freeze_time,current,eq_time,it,set_subfolder,contain_exactly,to,external_system_avatars_enabled,max_reply_history,cooked,allow_staff_to_tag_pms,tl1_requires_read_posts,with,avatar_template,username,types,sort!,change,httpdate,each,detailed_404,match,post_number,context,tl1_requires_topics_entered,allowed_tags,uncategorized_category_id,key,enable_category_group_moderation,reload,to_date,shared_drafts_category,messageable_level,to_not,custom_fields,match_array,first_visited_at,slug,create!,where,map,group,save!,default,post,returns,let!,destroy,t,last,embed_set_canonical_url,include,url,allow_index_in_robots_txt,title,id,delete,execute,max_allowed_message_recipients,include_examples,get,not_to,min_trust_to_create_tag,username_lower,tagging_enabled,flushdb,save,enable_escaped_fragments,all,find,new,ago,embed_url,put,max_prints_per_hour_per_user,set_notification_level_for_category,login_required,updated_at,create_post,before,off,body,groups,day,tap,excerpt_for_topic,tags,to_formatted_s,to_s,from_now,trusted_users_can_edit_others,add_owner,editing_grace_period,user,fab!,min_trust_to_edit_wiki_post,times,shared_examples,now,received_postgres_readonly!,allowed_users,banner,topic_id,closed,build,base_url_no_prefix,to_f,pluck,by,eq,sign_in,post_created,after,like,raises,allowed_tag_groups,update_column,describe,topic,expects,unescapeHTML,first,name,permissions,on,min_topic_title_length,invite_group,require,category_id,count,to_i,sort,parsed_body,relative_url,parse,maximum,tl1_requires_time_spent_mins,have_tag,update!,redirect_to,allow_uncategorized_topics,set_permissions,private_message,find_by]
description of the action disallows inviting a group to a topic.
The PR description says "soft deleted" (meaning `.trash!`?) but the spec here is performing `.destroy`. Should one of them be changed?
@@ -40,7 +40,8 @@ public class JobStatus { private final long startTime; private final long endTime; @Setter - private String message; + private String metrics; + private final String message; private final long processedCount; private final String lowWatermark; private final String highWatermark;
[No CFG could be retrieved]
private class AI.
I think the Setter is needed for the message field too.
@@ -210,10 +210,18 @@ void WbImageTexture::destroyWrenTexture() { void WbImageTexture::updateUrl() { // we want to replace the windows backslash path separators (if any) with cross-platform forward slashes int n = mUrl->size(); + if (n == 0) + return; for (int i = 0; i < n; i++) { QString item = mUrl->item(i); mUrl->setItem(i, item.replace("\\", "/")); } + const QString &url = mUrl->item(0); + if (isPostFinalizedCalled() && WbUrl::isWeb(url) && mDownloader == NULL) { + // url was changed from the scene tree or supervisor + downloadAssets(); + return; + } updateWrenTexture();
[No CFG could be retrieved]
private void wr_texture_2d_new ( void ) Reset the anisotropy level if it is not in the range of the maximum supported anisotropy.
This return statement is wrong: if `mUrl` is empty because I deleted the item from the scene tree (or supervisor), the wren texture has to be updated and the `changed` signal has to be emitted. To reproduce the error you can delete the URL of the MyBot body cylinder in the `url.wbt` simulation and check that the robot appearance in the 3D view is not updated.
@@ -7,6 +7,8 @@ module Follows follow = Follow.find_by(id: follow_id, followable_type: "User") return unless follow&.followable.present? && follow.followable.receives_follower_email_notifications? + return if follow.follower.score < 25 # Restrict new follower emails to more active/established accounts + return if EmailMessage.where(user_id: follow.followable_id). where("sent_at > ?", rand(15..35).hours.ago). where("subject LIKE ?", "%#{NotifyMailer::SUBJECTS[:new_follower_email]}").exists?
[SendEmailNotificationWorker->[perform->[receives_follower_email_notifications?,deliver,exists?,find_by,present?,name],sidekiq_options,include]]
Perform a new follower email delivery if there is a lease.
Instead of having a hard-coded value, do we maybe want to add this to `SiteConfig`?
@@ -200,6 +200,7 @@ def translate(request, document_slug, document_locale, revision_id=None): parent_slug=slug_dict['parent']) rev_form.instance.document = doc # for rev_form.clean() +# import ipdb;ipdb.set_trace() if rev_form.is_valid() and not doc_form_invalid: parent_id = request.POST.get('parent_id', '')
[translate->[redirect,split_slug,get_language_mapping,get_object_or_none,urlparams,DocumentForm,copy,is_valid,bool,smart_int,parse,update,RevisionForm,document_form_initial,AttachmentRevisionForm,get_absolute_url,allows_editing_by,_,render,allows_revision_by,get,get_edit_url,current_or_latest_revision,reverse,save,lower,get_object_or_404],select_locale->[render,get_object_or_404]]
Create a new translation of a wiki document. This function is called when a specific tag is not found in the topic_translated_doc This function is called when a user is redirected to the edit_url of the document.
Remove this line
@@ -0,0 +1,9 @@ +class AddPartialIndexToUsersFeedUrl < ActiveRecord::Migration[6.0] + disable_ddl_transaction! + + def change + # adds an index only on users with a feed URL, as they are a tiny percentage of the total + # number of users, there is no need to add a full index as most entries will be empty + add_index :users, :feed_url, where: "COALESCE(feed_url, '') <> ''", algorithm: :concurrently + end +end
[No CFG could be retrieved]
No Summary Found.
TIL about `COALESCE` and `<>` in postgres!
@@ -387,6 +387,16 @@ class Device extends BaseModel // ---- Accessors/Mutators ---- + public function getStatus() + { + return $this->status; + } + + public function getLastPolled() + { + return strtotime($this->last_polled); + } + public function getIconAttribute($icon) { $this->loadOs();
[Device->[getIconAttribute->[loadOs],shortDisplayName->[displayName],name->[displayName]]]
Get icon attribute.
Can you please remove getStatus() and getLastPolled(), just use the property accessors: ->status and ->last_polled
@@ -0,0 +1,16 @@ +class CreateRoles < ActiveRecord::Migration[6.1] + def change + create_table :roles do |t| + t.references :user, null: false, foreign_key: true + t.references :course, null: false, foreign_key: true + t.references :section, null: true, foreign_key:true + t.string :type + t.boolean :hidden, null: false, default: false + t.integer :grace_credits, default: 0 + t.boolean :receives_results_emails, null: false, default: false + t.boolean :receives_invite_emails, null: false, default: false + + t.timestamps + end + end +end
[No CFG could be retrieved]
No Summary Found.
Ideally it would be best if all the migrations could be in one file
@@ -1364,6 +1364,12 @@ export default { APP.UI.addListener(UIEvents.AUDIO_MUTED, muteLocalAudio); APP.UI.addListener(UIEvents.VIDEO_MUTED, muteLocalVideo); + APP.UI.addListener(UIEvents.REQUEST_ROOM_PASSWORD, returnRoomStatus); + + APP.UI.addListener(UIEvents.UNLOCK_ROOM, unlockRoom); + + APP.UI.addListener(UIEvents.LOCK_ROOM, lockRoom); + if (!interfaceConfig.filmStripOnly) { APP.UI.addListener(UIEvents.MESSAGE_CREATED, (message) => { APP.API.notifySendingChatMessage(message);
[No CFG could be retrieved]
Add listeners to the UI to show the . This method broadcasts the data that remote participants care about.
We already have the conference event ConferenceEvents.LOCK_STATE_CHANGED and UIEvents.ROOM_LOCK_CLICKED. Do we really need 4 more events UNLOCK_ROOM, LOCK_ROOM, ROOM_LOCKED, ROOM_UNLOCKED? Also the ROOM_LOCKED and ROOM_UNLOCKED are currently triggered when the user clicks on "Add" password, whereas the right time is to change the state when the ConferenceEvent occurs.
@@ -774,7 +774,15 @@ class CommandeFournisseur extends CommonOrder if ($user->rights->fournisseur->commande->lire) { $label = '<u class="paddingrightonly">'.$langs->trans("SupplierOrder").'</u>'; if (isset($this->statut)) { - $label .= ' '.$this->getLibStatut(5); + $statusText = ' '.$this->getLibStatut(5); + $parameters = array('obj' => $this); + $reshook = $hookmanager->executeHooks('moreHtmlStatus', $parameters, $object); // Note that $action and $object may have been modified by hook + if (empty($reshook)) { + $statusText .= $hookmanager->resPrint; + } else { + $statusText = $hookmanager->resPrint; + } + $label .= $statusText; } if (!empty($this->ref)) { $label .= '<br><b>'.$langs->trans('Ref').':</b> '.$this->ref;
[CommandeFournisseur->[valid->[fetch],deleteline->[fetch],Livraison->[getDispachedLines],updateFromCommandeClient->[fetch],approve->[getNextNumRef,fetch],updateline->[fetch,update],createFromClone->[fetch,create],getMaxDeliveryTimeDay->[fetch],getNomUrl->[getLibStatut],calcAndSetStatusDispatch->[Livraison],addline->[fetch]]]
Get the URL of the order that should be displayed in the Nom. Dol - Print date and order of the card. This method is used to display a note in the Note section of the Note section of the.
Because a hooks is already inside LibStatut(), there is no need to add another one here.
@@ -16,7 +16,13 @@ */ package org.apache.dubbo.rpc; -import org.apache.dubbo.common.Constants; +import org.apache.dubbo.common.constants.ClusterConstants; +import org.apache.dubbo.common.constants.CommonConstants; +import org.apache.dubbo.common.constants.ConfigConstants; +import org.apache.dubbo.common.constants.FilterConstants; +import org.apache.dubbo.common.constants.MonitorConstants; +import org.apache.dubbo.common.constants.RegistryConstants; +import org.apache.dubbo.common.constants.RemotingConstants; /** * RpcConstants
[No CFG could be retrieved]
This method is used to register a protocol specific object with the Rpc service.
Why is this class not deleted, I see that it has not been used anymore.
@@ -8,9 +8,9 @@ LICENSE file in the root directory of this source tree. import * as React from 'react'; -import {Button} from '../button/index.js'; +import {Button, SIZE} from '../button/index.js'; import {ButtonGroup, MODE} from '../button-group/index.js'; -import {Input} from '../input/index.js'; +import {Input, SIZE as INPUT_SIZE} from '../input/index.js'; import {useStyletron} from '../styles/index.js'; import {Paragraph4} from '../typography/index.js';
[No CFG could be retrieved]
Imports a single non - negative number of components from the System. Format the value according to the options.
should we import this as `BUTTON_SIZE`, to be consistent with `INPUT_SIZE`?
@@ -64,12 +64,14 @@ import java.util.concurrent.locks.ReentrantLock; */ public class TaskLockbox { - // Datasource -> Interval -> Tasks + TaskLock - private final Map<String, NavigableMap<Interval, TaskLockPosse>> running = Maps.newHashMap(); + // Datasource -> Interval -> list of (Tasks + TaskLock) + // Multiple shared locks can be acquired for the same dataSource and interval. + // Note that revoked locks are also maintained in this map to notify that those locks are revoked to the callers when + // they acquire the same locks again. + private final Map<String, NavigableMap<Interval, List<TaskLockPosse>>> running = Maps.newHashMap(); private final TaskStorage taskStorage; private final ReentrantLock giant = new ReentrantLock(true); private final Condition lockReleaseCondition = giant.newCondition(); - protected final long lockTimeoutMillis; private static final EmittingLogger log = new EmittingLogger(TaskLockbox.class);
[TaskLockbox->[lock->[lock],unlock->[lock,unlock],tryAddTaskToLockPosse->[lock],findLockPossesForTask->[lock,unlock],remove->[lock,remove,unlock],TaskLockPosse->[hashCode->[hashCode],toString->[toString],equals->[equals]],findLockPossesForInterval->[lock,unlock],add->[lock,unlock,add],tryLock->[lock,tryLock],findLocksForTask->[lock]]]
Creates a new TaskLockbox object. This method is used to create a new instance of the class.
How long are they kept? Could we run out of memory storing them all?
@@ -175,12 +175,10 @@ def main(args): # Temporary hack to call the legacy command system if the command is not yet implemented in V2 command_argument = args[0] if args else None - if command_argument in CLI_V1_COMMANDS: - from conans.client.command import v1_main - return v1_main(args) + is_v1_command = command_argument in CLI_V1_COMMANDS try: - conan_api = Conan() + conan_api = ConanAPIV1() if is_v1_command else Conan() except ConanMigrationError: # Error migrating sys.exit(ERROR_MIGRATION) except ConanException as e:
[Cli->[run->[help_message,_print_similar,run]],main->[run,Cli]]
Entry point of the conan application.
Maybe already rename ``Conan`` to ``ConanAPI``?
@@ -2186,12 +2186,16 @@ crt_ivsync_rpc_issue(struct crt_ivns_internal *ivns_internal, uint32_t class_id, D_ASSERT(input != NULL); D_ALLOC_PTR(iv_sync_cb); - if (iv_sync_cb == NULL) + if (iv_sync_cb == NULL) { + D_INFO("NO callback supplied\n"); D_GOTO(exit, rc = -DER_NOMEM); + } iv_sync_cb->isc_sync_type = *sync_type; input->ivs_ivns_id = ivns_internal->cii_gns.gn_ivns_id.ii_nsid; input->ivs_ivns_group = ivns_internal->cii_gns.gn_ivns_id.ii_group_name; + input->ivs_grp_ver = ivns_internal->cii_grp_priv->gp_membs_ver; +/* SAB Do we need a version check here, where do we determine node to rpc? */ d_iov_set(&input->ivs_key, iv_key->iov_buf, iv_key->iov_buf_len); d_iov_set(&input->ivs_sync_type, &iv_sync_cb->isc_sync_type, sizeof(crt_iv_sync_t));
[No CFG could be retrieved]
Create the Initialize ISC key and IV key buffer.
This should be error if anything; message is misleading as well
@@ -4936,7 +4936,7 @@ public class Jenkins extends AbstractCIBase implements DirectlyModifiableTopLeve if(ver.equals(UNCOMPUTED_VERSION) || SystemProperties.getBoolean("hudson.script.noCache")) RESOURCE_PATH = ""; else - RESOURCE_PATH = "/static/"+SESSION_HASH; + RESOURCE_PATH = "static/"+SESSION_HASH; VIEW_RESOURCE_PATH = "/resources/"+ SESSION_HASH; }
[Jenkins->[getAllItems->[getAllItems],getUser->[get],_cleanUpShutdownTcpSlaveAgent->[add],setNumExecutors->[updateComputerList],getPlugin->[getPlugin],_cleanUpCloseDNSMulticast->[add],getViewActions->[getActions],getJDK->[getJDKs],setViews->[addView],getCloud->[getByName],getStaplerFallback->[getPrimaryView],getStoredVersion->[getActiveInstance],getViews->[getViews],doDoFingerprintCheck->[isUseCrumbs],deleteView->[deleteView],_cleanUpInterruptReloadThread->[add],doConfigSubmit->[save,updateComputerList],CloudList->[onModified->[onModified]],doCheckDisplayName->[isNameUnique,isDisplayNameUnique],_cleanUpPersistQueue->[save,add],reload->[loadTasks,save,reload,executeReactor],doConfigExecutorsSubmit->[all,get,updateComputerList],DescriptorImpl->[getDynamic->[getDescriptor],DescriptorImpl],_cleanUpShutdownThreadPoolForLoad->[add],isDisplayNameUnique->[getDisplayName],_cleanUpRunTerminators->[onTaskFailed->[getDisplayName],execute->[run],onTaskCompleted->[getDisplayName],onTaskStarted->[getDisplayName],add],getJobNames->[getFullName,allItems,add],doChildrenContextMenu->[add,getViews,getDisplayName],doLogout->[doLogout],getActiveInstance->[getInstance],getNode->[getNode],copy->[copy],updateNode->[updateNode],doSubmitDescription->[doSubmitDescription],doCheckURIEncoding->[doCheckURIEncoding],getItem->[getItem,get],doViewExistsCheck->[getView],getUnprotectedRootActions->[getActions,add],setAgentProtocols->[add],allItems->[allItems],disableSecurity->[setSecurityRealm],onViewRenamed->[onViewRenamed],getDescriptorByName->[getDescriptor],loadConfig->[getConfigFile],refreshExtensions->[getInstance,add,getExtensionList],getRootPath->[getRootDir],getView->[getView],putItem->[get],_cleanUpShutdownTimer->[add],_cleanUpDisconnectComputers->[run->[add]],getAllThreadDumps->[get,getComputers],createProject->[createProject,getDescriptor],MasterComputer->[doConfigSubmit->[doConfigExecutorsSubmit],hasPermission->[hasPermission],getInstance],createProjectFromXML->[createProjectFromXML],getAgentProtocols->[add],doScript->[getView,getACL],_cleanUpReleaseAllLoggers->[add],isRootUrlSecure->[getRootUrl],EnforceSlaveAgentPortAdministrativeMonitor->[doAct->[forceSetSlaveAgentPort,getExpectedPort],isActivated->[getSlaveAgentPortInitialValue,getInstance],getExpectedPort->[getSlaveAgentPortInitialValue]],setSecurityRealm->[get],getItems->[getItems,add],doCheckViewName->[getView,checkGoodName],removeNode->[removeNode],getSelfLabel->[getLabelAtom],fireBeforeShutdown->[all,add],doSimulateOutOfMemory->[add],expandVariablesForDirectory->[expandVariablesForDirectory,getFullName],_getFingerprint->[get],getManagementLinks->[all],addView->[addView],getPlugins->[getPlugin,getPlugins,add],save->[getConfigFile],getPrimaryView->[getPrimaryView],getDescriptorList->[get],makeSearchIndex->[all->[getViews],get->[getView],makeSearchIndex,add],getNodes->[getNodes],lookup->[get,getInstanceOrNull],getLegacyInstanceId->[getSecretKey],_cleanUpShutdownUDPBroadcast->[add],saveQuietly->[save],getLifecycle->[get],getInstanceOrNull->[getInstance],executeReactor->[containsLinkageError->[containsLinkageError],runTask->[runTask]],setNodes->[setNodes],loadTasks->[run->[setSecurityRealm,getExtensionList,getNodes,setNodes,remove,add,loadConfig],add],remove->[remove],getDescriptorOrDie->[getDescriptor],getLabelAtoms->[add],getItemByFullName->[getItemByFullName,getItem],doCreateView->[addView],getExtensionList->[get,getExtensionList],getLabels->[add],restart->[get],isNameUnique->[getItem],getWorkspaceFor->[all],_cleanUpShutdownPluginManager->[add],getRootDirFor->[getRootDirFor,getRootDir],canDelete->[canDelete],getInstance->[getInstance],getFingerprint->[get],getAuthentication->[getAuthentication],doScriptText->[getView,getACL],getDynamic->[getActions],_cleanUpPluginServletFilters->[cleanUp,add],_cleanUpShutdownTriggers->[add],addNode->[addNode],getTopLevelItemNames->[add],MasterRestartNotifyier->[onRestart->[all]],doQuietDown->[doQuietDown],safeRestart->[get],updateComputerList->[updateComputerList],rebuildDependencyGraphAsync->[call->[get,rebuildDependencyGraph]],_cleanUpAwaitDisconnects->[get,add],readResolve->[getSlaveAgentPortInitialValue],getName]]
Compute the version of the Jenkins application.
Should it be also changed then?
@@ -177,6 +177,7 @@ type FakeIdentifyUI struct { Outcome *keybase1.IdentifyOutcome StartCount int Token keybase1.TrackToken + BrokenTracking bool sync.Mutex }
[FinishSocialProofCheck->[Lock,Unlock],Start->[Lock,Unlock],FinishWebProofCheck->[Lock,Unlock],LaunchNetworkChecks->[Lock,Unlock],DisplayKey->[Lock,Unlock,ImportPGPFingerprintSlice],Confirm->[Lock,Unlock],GetLogUI,Cleanup,Result,DeepEqual,Export,Fatal,Errorf,Logf,NewSecretUI]
FinishWebProofCheck is a method that will be called when a proof is completed.
the `FakeIdentifyUI` keeps track of if there was a `BreaksTracking` bool or not...
@@ -141,6 +141,13 @@ export class AmpAdUIHandler { * @private */ displayNoContentUI_() { + const adContainer = getAdContainer(this.baseInstance_.element); + if (adContainer == 'AMP-STICKY-AD') { + // force collapse the sticky-ad + this.baseInstance_./*OK*/collapse(); + this.state = AdDisplayState.LOADED_NO_CONTENT; + return; + } // The order here is collapse > user provided fallback > default fallback this.baseInstance_.attemptCollapse().then(() => { this.state = AdDisplayState.LOADED_NO_CONTENT;
[No CFG could be retrieved]
Private methods for the base element. Adds a default UI component if the element is not layout - in.
> I can also use the pre-calculate container value from `baseInstance`
@@ -90,10 +90,11 @@ class Project < ActiveRecord::Base def unassigned_users User - .joins("INNER JOIN user_organizations ON users.id = user_organizations.user_id ") - .where("user_organizations.organization_id = ?", organization) - .where.not(confirmed_at: nil) - .where("users.id NOT IN (?)", UserProject.where(project: self).select(:id).distinct) + .joins('INNER JOIN user_organizations ON users.id = user_organizations.user_id') + .where('user_organizations.organization_id = ?', organization) + .where.not(confirmed_at: nil) + .where('users.id NOT IN (?)', + UserProject.where(project: self).select(:user_id).distinct) end def user_role(user)
[Project->[assigned_modules->[user_role],space_taken->[space_taken],log->[log]]]
Find the first unassigned user in the project.
Line is too long. [84/80]<br>Use 2 (not 0) spaces for indenting an expression spanning multiple lines.
@@ -248,7 +248,7 @@ class QuantizedConv2D(layers.Layer): # For FakeQuant self._fake_quant_weight = _get_fake_quant_type( weight_quantize_type, self.weight.name, moving_rate, weight_bits, - self._dtype, True) + self._dtype, True, self.weight.shape[0]) self._fake_quant_input = _get_fake_quant_type( activation_quantize_type, layer.full_name(), moving_rate, activation_bits, self._dtype, False)
[QuantizedConv2D->[__init__->[_get_fake_quant_type]],_get_fake_quant_type->[FakeQuantAbsMax,FakeQuantMovingAverage],QuantizedLinear->[__init__->[_get_fake_quant_type]]]
Initialize a QuantizedConv2D model with a single base layer. Append an activation to the model if the object is not yet connected to a network.
Please use keyword arguments to call the `_get_fake_quant_type` function.
@@ -173,4 +173,17 @@ public class SparkDatasetTestUtils { .withBulkInsertParallelism(2); } + private static InternalRow serializeRow(ExpressionEncoder encoder, Row row) + throws InvocationTargetException, IllegalAccessException, NoSuchMethodException, ClassNotFoundException { + // TODO remove reflection if Spark 2.x support is dropped + if (package$.MODULE$.SPARK_VERSION().startsWith("2.")) { + Method spark2method = encoder.getClass().getMethod("toRow", Object.class); + return (InternalRow) spark2method.invoke(encoder, row); + } else { + Class<?> serializerClass = Class.forName("org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer"); + Object serializer = encoder.getClass().getMethod("createSerializer").invoke(encoder); + Method aboveSpark2method = serializerClass.getMethod("apply", Object.class); + return (InternalRow) aboveSpark2method.invoke(serializer, row); + } + } }
[SparkDatasetTestUtils->[getEncoder->[toList,resolveAndBind,toSeq,collect],toInternalRows->[collectAsList,add,copy],getInternalRowWithError->[nextBoolean,toString,nextInt,GenericInternalRow],getConfigBuilder->[withBulkInsertParallelism],getRandomRows->[add,getRandomValue,createDataFrame],getRandomValue->[GenericRow,nextLong,toString,nextInt],StructType,empty,StructField,getEncoder]]
Get a builder for the HoodieWriteConfig.
let's file a tracking JIRA for this?
@@ -239,6 +239,11 @@ public class HttpContentCompressor extends HttpContentEncoder { new EmbeddedChannel(ctx.channel().id(), ctx.channel().metadata().hasDisconnect(), ctx.channel().config(), new BrotliEncoder(brotliOptions.parameters()))); } + if (targetContentEncoding.equals("zstd")) { + return new Result(targetContentEncoding, + new EmbeddedChannel(ctx.channel().id(), ctx.channel().metadata().hasDisconnect(), + ctx.channel().config(), new ZstdEncoder(zstdOptions.compressionLevel()))); + } throw new Error(); } else { ZlibWrapper wrapper = determineWrapper(acceptEncoding);
[HttpContentCompressor->[determineEncoding->[split,indexOf,parseFloat,isAvailable,substring,contains],beginEncode->[determineEncoding,EmbeddedChannel,windowBits,memLevel,equals,BrotliEncoder,hasDisconnect,readableBytes,Result,newZlibEncoder,get,compressionLevel,parameters,Error,config,id,determineWrapper],determineWrapper->[split,indexOf,parseFloat,substring,contains],IllegalArgumentException,brotli,gzip,getSimpleName,deflate,checkPositiveOrZero,checkInRange,deepCheckNotNull]]
This method is called to determine the best encoding for the given HTTP response. Returns a new result with a sequence of bytes encoded and encoded bytes.
Time to use a `Map` lookup instead of four `if`s?
@@ -43,6 +43,9 @@ public class BlockHandleImpl implements BlockHandle { } public BlockCapsule produce(Miner miner, long blockTime, long timeout) { + long now = System.currentTimeMillis(); + BlockChainInfo blockInfo=new BlockChainInfo(false); + BlockCapsule blockCapsule = manager.generateBlock(miner, blockTime, timeout); if (blockCapsule == null) { return null;
[BlockHandleImpl->[produce->[generateBlock,error,BlockMessage,broadcast,getString,receiveBlock,fastForward,pushBlock],getState->[equals]]]
Produce a new block from the pool.
This is the block generation time, not the block processing time. What's the meaning of a new BlockChainInfo here?
@@ -49,7 +49,11 @@ class ExpectedRiskMinimization(DecoderTrainer[Callable[[StateType], torch.Tensor # Finished model scores are log-probabilities of the predicted sequences. We convert # log probabilities into probabilities and re-normalize them to compute expected cost under # the distribution approximated by the beam search. - costs = torch.cat(finished_costs[batch_index]) + + # finished_costs[batch_index] could be a list of 0-tensors or 1-tensors. We use .view(-1) + # to make sure they're treated as 1-tensors and then concatenated appropriately. + # TODO(joelgrus): make sure this is correct + costs = torch.cat([cost.view(-1) for cost in finished_costs[batch_index]]) logprobs = torch.cat(finished_model_scores[batch_index]) # Unmasked softmax of log probabilities will convert them into probabilities and # renormalize them.
[ExpectedRiskMinimization->[_get_best_action_sequences->[_get_model_scores_by_batch]]]
Decodes a sequence of sequence sequences from the decoder.
This looks like it shouldn't be right to me. Well, it looks like it probably does the right thing, but I think fixing this where the `finished_costs` gets created is the right solution, instead of this one. My guess is that something is getting created as a scalar now, where we really want it as a 1-dim tensor.
@@ -28,6 +28,18 @@ class BazelDeps(object): save(filename, buildfile_content) return filename + def _get_build_dependency_buildfile_content(self, dependency): + filegroup = textwrap.dedent(""" + filegroup( + name = "{}_binaries", + data = glob(["**"]), + visibility = ["//visibility:public"], + ) + + """).format(dependency.ref.name) + + return filegroup + def _get_dependency_buildfile_content(self, dependency): template = textwrap.dedent(""" load("@rules_cc//cc:defs.bzl", "cc_import", "cc_library")
[BazelDeps->[_save_dependendy_buildfile->[save,format],generate->[_save_dependendy_buildfile,append,_save_main_buildfiles,_get_main_buildfile_content,_create_new_local_repository,values,_get_dependency_buildfile_content],_get_dependency_buildfile_content->[aggregate_components,dedent,append,replace,format,join,Template],_save_main_buildfiles->[save],_get_main_buildfile_content->[splitlines,format,join,dedent],_create_new_local_repository->[dedent],__init__->[check_using_build_profile]]]
Save a buildfile with a specific dependency. Renders a . txt file with context.
without having any idea about Bazel: is it possible that we want some other visibilty here? Typically the ``build_requires`` are not propagated to downstream consumers. If this was a transitive dependency, maybe it doesn't need to be visible to the downstream?
@@ -79,8 +79,10 @@ namespace Dynamo.ViewModels ShowGalleryCommand = new DelegateCommand(p => OnRequestShowHideGallery(true), o => true); CloseGalleryCommand = new DelegateCommand(p => OnRequestShowHideGallery(false), o => true); ShowNewPresetsDialogCommand = new DelegateCommand(ShowNewPresetStateDialogAndMakePreset, CanShowNewPresetStateDialog); - NodeFromSelectionCommand = new DelegateCommand(CreateNodeFromSelection, CanCreateNodeFromSelection); - } + NodeFromSelectionCommand = new DelegateCommand(CreateNodeFromSelection, CanCreateNodeFromSelection); + // TODO: To be removed in Dynamo 2.0 + TogglePreviewBubblesShowingCommand = new DelegateCommand(TogglePreviewBubblesShowing); + } public DelegateCommand OpenIfSavedCommand { get; set; } public DelegateCommand OpenCommand { get; set; } public DelegateCommand ShowOpenDialogAndOpenResultCommand { get; set; }
[DynamoViewModel->[InitializeDelegateCommands->[Copy,OnRequestShowHideGallery,PostUIActivation,CanRunExpression,PublishNewPackage,PublishSelectedNodes,CanPublishSelectedNodes,DumpLibraryToXml,PublishCurrentWorkspace,CanPublishNewPackage,PublishCustomNode,CanDumpLibraryToXml,Log,CanPublishCustomNode,ToString,AddToSelection,CanPublishCurrentWorkspace]]]
Initialize the commands that are registered in the model. Delegate commands to the model. Returns a command that can be used to publish the selected nodes and the selected connectors.
Initialize the delegate command in the old way.
@@ -116,6 +116,15 @@ MiddlewareRegistry.register(({ dispatch, getState }) => next => action => { dispatch(setEveryoneSupportE2EE(false)); } + if (isMaxModeReached(getState)) { + if (isMaxModeThresholdReached(getState)) { + dispatch(setE2EEMaxMode(MAX_MODE.THRESHOLD_EXCEEDED)); + dispatch(toggleE2EE(false)); + } else { + dispatch(setE2EEMaxMode(MAX_MODE.ENABLED)); + } + } + return result; }
[No CFG could be retrieved]
Returns the next action s . Checks if the remote participants are supported by the E2E.
Can you please try to unify the 3 blocks that look similar into one helper `updateMaxMode` (sample name).
@@ -2384,6 +2384,11 @@ namespace Dynamo.Models /// directly in this point. </param> public void Paste(Point2D targetPoint, bool useOffset = true) { + //When called from somewhere other than StateMachine and only ConnectorPins are selected. + if (!ClipBoard.Where(m => !(m is ConnectorPinModel)).Select(m => m).Any()) + { + return; + } if (useOffset) { // Provide a small offset when pasting so duplicate pastes aren't directly on top of each other
[DynamoModel->[InitializeNodeLibrary->[InitializeIncludedNodes],ForceRun->[ResetEngine],LoadNodeLibrary->[LoadNodeLibrary],Paste->[Paste,Copy],RemoveWorkspace->[Dispose],UngroupModel->[DeleteModelInternal],ResetEngine->[ResetEngine],ShutDown->[ShutDown],SetPeriodicEvaluation->[ResetEngine],ResetEngineInternal->[RegisterCustomNodeDefinitionWithEngine,Dispose],EngineController_RequestCustomNodeRegistration->[RegisterCustomNodeDefinitionWithEngine],DumpLibraryToXml->[DumpLibraryToXml],AddWorkspace->[CheckForInvalidInputSymbols],AddZeroTouchNodeToSearch->[AddZeroTouchNodeToSearch],AddHomeWorkspace->[RegisterHomeWorkspace],DeleteModelInternal->[Dispose],Dispose->[Dispose,LibraryLoaded],Report3DPreviewOutage->[OnPreview3DOutage],HostAnalyticsInfo]]
Paste a specific node from the system. Updates the node and notes in the workspace. Adds a new node model to the workspace and records the models that are created as part of if there are no models in the database this will be called.
Maybe use `All()` instead of the more complex way you are using?
@@ -1,5 +1,7 @@ <%- if controller_name != 'sessions' %> - <%= link_to t("devise.links.login"), new_session_path(resource_name) %><br /> + <% login = t("devise.links.login") %> + <% login = t("devise.links.login_with_provider") if ['new_with_provider', 'create_with_provider'].include? action_name %> + <%= link_to login, new_session_path(resource_name) %><br /> <% end -%> <%- if devise_mapping.registerable? && Rails.configuration.x.enable_user_registration && controller_name != 'registrations' %>
[No CFG could be retrieved]
Displays a list of possible devise resource link links for the current controller.
maybe is sexier to write `if action_name.in? %w(new_with_provider create_with_provider)` :smile:
@@ -33,11 +33,11 @@ namespace Dynamo.Core.Threading internal UpdateGraphAsyncTask(IScheduler scheduler, bool verboseLogging1) : base(scheduler) { - verboseLogging = verboseLogging; + this.verboseLogging = verboseLogging1; } /// <summary> - /// This method is called by codes that intent to start a graph update. + /// This method is called by code that intends to start a graph update. /// This method is called on the main thread where node collection in a /// WorkspaceModel can be safely accessed. /// </summary>
[UpdateGraphAsyncTask->[GetDownstreamNodes->[GetDownstreamNodes]]]
Creates a class which implements the GraphSyncData interface. region Public Overridable Overridable Methods.
I've just updated this in `master` and `RC0.8.0_ds` branches.
@@ -94,7 +94,7 @@ func TestAccAwsEc2ClientVpnEndpoint_withLogGroup(t *testing.T) { }) } -func TestAccAwsEc2ClientVpnEndpoint_withDNSServers(t *testing.T) { +func TestAccAwsEc2ClientVpnEndpoint_withDNervers(t *testing.T) { rStr := acctest.RandString(5) resource.ParallelTest(t, resource.TestCase{
[ParallelTest,Meta,Sprintf,TestCheckResourceAttr,RootModule,DescribeClientVpnEndpoints,ComposeTestCheckFunc,String,Errorf,RandString]
TestAccAwsEc2ClientVpnEndpoint_withLogGroup tests if the resource exists on TestAccAwsEc2ClientVpnEndpoint_tags tests the tags of an aws_ec.
Is this rename intentional?
@@ -281,6 +281,9 @@ export class AmpForm { body, method: this.method_, credentials: 'include', + headers: { + Accept: 'application/json', + }, }).then(response => { return response.json().then(json => { this.triggerAction_(/* success */ true, json);
[No CFG could be retrieved]
This method handles the AJAX and GET requests. This is the main action of the action - xhr action. It is called by the action.
Is accept OK for CORS without preflight?
@@ -1005,6 +1005,7 @@ namespace System.Net.Http.Functional.Tests public SocketsHttpHandler_SchSendAuxRecordHttpTest(ITestOutputHelper output) : base(output) { } } + [SkipOnMono("Tests hang with chrome. To be investigated", TestPlatforms.Browser)] public sealed class SocketsHttpHandler_HttpClientHandlerTest : HttpClientHandlerTest { public SocketsHttpHandler_HttpClientHandlerTest(ITestOutputHelper output) : base(output) { }
[SocketsHttpHandler_PostScenarioTest->[Task->[Task]],SocketsHttpHandler_HttpClientHandler_Asynchrony_Test->[Task->[Task],MakeHttpRequestWithTcsSetOnFinalizationInAsyncLocal->[Task]],SocketsHttpHandler_HttpClientHandler_ConnectionPooling_Test->[Task->[Task]]]
Test class for SocketsHttpHandler. The base class for SocketsHttpHandler.
do we have a tracking issue for this?
@@ -217,5 +217,6 @@ def ten_crop(img, size, vertical_flip=False): def _blend(img1, img2, ratio): - bound = 1 if img1.dtype.is_floating_point else 255 + # type: (Tensor, Tensor, float) -> Tensor + bound = 1 if img1.dtype == torch.float else 255 return (ratio * img1 + (1 - ratio) * img2).clamp(0, bound).to(img1.dtype)
[ten_crop->[five_crop,hflip,vflip],center_crop->[crop],adjust_contrast->[rgb_to_grayscale],adjust_saturation->[rgb_to_grayscale],five_crop->[center_crop,crop]]
Blend two images.
Can you do instead something like `img1.dtype in [torch.half, torch.float32, torch.float64]`?
@@ -259,8 +259,8 @@ public class BoundedOffHeapDataContainer extends OffHeapDataContainer { try { InternalCacheEntry<WrappedBytes, WrappedBytes> ice = offHeapEntryFactory.fromMemory(addressToRemove); passivator.passivate(ice); - // TODO: this reareads the object from memory again! - performRemove(addressToRemove, ice.getKey()); + // TODO: this rereads the object from memory again! + performRemove(memoryLookup.getMemoryAddress(ice.getKey()), ice.getKey()); evictionManager.onEntryEviction(Collections.singletonMap(ice.getKey(), ice)); } finally { entryWriteLock.unlock();
[BoundedOffHeapDataContainer->[entryReplaced->[entryReplaced],compute->[compute],entryRemoved->[entryRemoved],entryRetrieved->[entryRetrieved],put->[put],entryCreated->[entryCreated],performClear->[performClear]]]
This method will check if the size of the cache is not larger than the maximum size.
This is the actual big bug fix. The problem was this was passing the address pointer to remove (which would be a fine assumption from the name as Dan mentioned). However this method required the address of the first node in the memory lookup. This is required since this is a forward linked list only and such when we remove an element we need its parent to mend the linked list. By not passing this the list would be corrupted which could cause all sorts of issues (especially if this memory location was reused by another entry - which was almost always the case)
@@ -17,6 +17,7 @@ class RdmaCore(CMakePackage): version('13', sha256='e5230fd7cda610753ad1252b40a28b1e9cf836423a10d8c2525b081527760d97') depends_on('pkgconfig', type='build') + depends_on('py-docutils', type=('build', 'run')) depends_on('libnl') conflicts('platform=darwin', msg='rdma-core requires FreeBSD or Linux') conflicts('%intel', msg='rdma-core cannot be built with intel (use gcc instead)')
[RdmaCore->[conflicts,depends_on,version]]
Returns the arguments for rdma - core.
Is this actually needed at run-time?
@@ -76,7 +76,7 @@ cmake_multi = """set(CMAKE_CXX_COMPILER_WORKS 1) project(Hello CXX) cmake_minimum_required(VERSION 2.8.12) include(${{CMAKE_CURRENT_BINARY_DIR}}/conanbuildinfo_multi.cmake) -conan_basic_setup() +conan_basic_setup(NO_OUTPUT_DIRS) add_library(hello{name} hello.cpp) conan_target_link_libraries(hello{name}) """
[WorkspaceTest->[use_build_requires_editable_test->[files],build_requires_test->[files],complete_single_conf_build_test->[files],gen_subdirectories_test->[files],generators_test->[files],complete_multi_conf_build_test->[files],simple_test->[files],per_package_layout_test->[files],simple_build_test->[files]]]
Creates a C ++ file containing the C ++ code for the non - standard non - standard Create a conan specific configuration file for the given node.
This shouldn't be necessary. ``conan_basic_setup()`` is not setting up output directories, because by default that would be overwriting artifacts. Making ``conan_basic_setup()`` of ``cmake_multi`` to define OUTPUT_DIRS by default might be breaking.