patch | callgraph | summary | msg |
|---|---|---|---|
@@ -797,9 +797,9 @@ function item_store($arr,$force_parent = false, $notify = false, $dontcache = fa
put_item_in_cache($arr);
if ($notify)
- call_hooks('post_local',$arr);
+ call_hooks('post_local', $arr);
else
- call_hooks('post_remote',$arr);
+ call_hooks('post_remote', $arr);
if (x($arr,'cancel')) {
logger('item_store: post cancelled by plugin.');
| [new_follower->[get_item_tags],item_add_language_opt->[detect],fix_private_photos->[scaleImage,is_valid,getType,imageString],item_is_remote_self->[get_hostname],subscribe_to_hub->[get_curl_code],item_store->[format,get_hostname]] | Stores an item in the storage check for duplicate item Demonstrates how to log the items in the network. Item new URI generator - - - - - - - - - - - - - - - - - - notags - An array of tags that are not associated with an item This function is used to determine if a contact is a suitable contact that matches with the author This function is used to get the contact - id of the post. | Standards: Please add braces to this condition. |
@@ -159,6 +159,9 @@ check_for_uns_ep(struct dfuse_projection_info *fs_handle,
if (rc)
return rc;
+ /** should switch dfuse to use da_pool and da_cont instead of the uuids. */
+ duns_destroy_attr(&dattr);
+
if (dattr.da_type != DAOS_PROP_CO_LAYOUT_POSIX)
return ENOTSUP;
| [No CFG could be retrieved] | This function is called from the DOS standard to check for a new unified namespace entry point -DER - open - open - open - open - open - open - open - open. | Is this a TODO? (future) |
@@ -0,0 +1,12 @@
+const { contextBridge, ipcRenderer } = require('electron')
+const fs = require('fs').promises
+const path = require('path')
+
+contextBridge.exposeInMainWorld('electron', {
+ startDrag: async (fileName) => {
+ // Create a new file to copy - you can also copy existing files.
+ await fs.writeFile(fileName, '# Test drag and drop')
+
+ ipcRenderer.send('ondragstart', path.join(process.cwd(), fileName))
+ }
+})
| [No CFG could be retrieved] | No Summary Found. | nit: I don't think we should be creating the file every time we start the drag-and-drop. Would it make sense to put this line in the main process? |
@@ -21,7 +21,7 @@ import { Route } from '@angular/router';
import { RegisterComponent } from './register.component';
export const registerRoute: Route = {
- path: 'register',
+ path: 'account/register',
component: RegisterComponent,
data: {
authorities: [],
| [No CFG could be retrieved] | - > Route. | I think we can move `account` path prefix to `account.route.ts`. |
@@ -106,7 +106,7 @@ class WPSEO_Configuration_Page {
<html <?php language_attributes(); ?>>
<!--<![endif]-->
<head>
- <meta name="viewport" content="width=device-width"/>
+ <meta name="viewport" content="width=device-width", initial-scale=1/>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
<title><?php
printf(
| [WPSEO_Configuration_Page->[remove_notification->[remove_notification],add_notification->[add_notification],show_wizard->[enqueue_assets]]] | Displays a Yoast configuration wizard. This function prints the Yoast Seo configuration wizard and the JS scripts. | `<meta name="viewport" content="width=device-width, initial-scale=1.0">` The `initial-scale` needs to be within the double quotes. |
@@ -115,7 +115,7 @@ public class SafeModeHandler implements EventHandler<SafeModeStatus> {
Thread.currentThread().interrupt();
}
replicationManager.start();
- cleanupPipelines();
+ scmPipelineManager.triggerPipelineCreation();
});
safeModeExitThread.setDaemon(true);
| [SafeModeHandler->[getSafeModeStatus->[get],onMessage->[setSafeModeStatus,sleep,Thread,start,get,getSafeModeStatus,set,setDaemon,cleanupPipelines,interrupt],cleanupPipelines->[isAllocationTimeout,error,getPipelineState,toString,forEach,getPipelines,finalizeAndDestroyPipeline],getTimeDuration,AtomicBoolean,getLogger,getBoolean,requireNonNull,set]] | Called when a message is received from the client. | Thanks @bharatviswa504 for the detail explanation. When add triggerPipelineCreation() here, we'd better remove the pipelineManager.startPipelineCreator(); in SCMSafeModeManager#exitSafeMode, otherwise, the time wait actually doesn't has effect on pipeline scrubber in PipelineManager. Also, I would suggest move the scmPipelineManager.setSafeModeStatus(isInSafeMode.get()); into the safeModeExitThread run. |
@@ -86,7 +86,7 @@ public class TestConnectWebSocket extends TestListenWebSocket {
processor.consume(webSocketSession, binaryMessage, 0, binaryMessage.length);
processor.consume(webSocketSession, binaryMessage, 0, binaryMessage.length);
return null;
- }).when(service).connect(endpointId);
+ }).when(service).connect(endpointId, Collections.emptyMap());
runner.addControllerService(serviceId, service);
runner.enableControllerService(service);
| [TestConnectWebSocket->[testSuccess->[assertTrue,SharedSessionState,onTrigger,size,equals,mock,getAllTransferredFlowFiles,InetSocketAddress,getProvenanceEvents,forEach,enableControllerService,getProcessor,newTestRunner,thenReturn,AtomicLong,setProperty,allMatch,addControllerService,getEventType,getProcessContext,assertFlowFile,assertEquals,spy,MockProcessSession,connect,get,add]]] | Tests if the connection is successful. This method checks if there are no unexpected flow files in the WebSocket session. | If the interface is made backward-compatible, this change can be reverted. |
@@ -6,8 +6,7 @@ import java.util.logging.Logger;
public class OutputStreamExample {
public static void main(String[] args) {
- Logger log = Logger.getLogger(OutputStreamExample.class.getName());
- log.log(Level.INFO, Integer.toString(sum(1,2)));
+ System.out.println(sum(1,2));
}
public static int sum(int a, int b) {
| [OutputStreamExample->[main->[getName,getLogger,sum,log,toString]]] | This example shows how to sum two integers. | The test checks the output stream, which does not work when the result is logged. |
@@ -35,6 +35,7 @@
#include <spa/debug/format.h>
#include <spa/debug/types.h>
#include <spa/param/video/type-info.h>
+#include <spa/utils/result.h>
#define REQUEST_PATH "/org/freedesktop/portal/desktop/request/%s/obs%u"
#define SESSION_PATH "/org/freedesktop/portal/desktop/session/%s/obs%u"
| [No CFG could be retrieved] | This function is used to provide a list of possible objects. \ Function to convert a sequence of 16 - bit unsigned integers into a sequence of 16 -. | Sorry, I forgot to ask you to add a small paragraph to this commit as well citing that this will be used by the next commit to negotiate the stream format properly. |
@@ -259,6 +259,8 @@ class CheckoutCreate(ModelMutation, I18nMixin):
def save(cls, info, instance: models.Checkout, cleaned_input):
# Create the checkout object
instance.save()
+ country = info.context.country
+ instance.set_country(country.code, commit=True)
# Retrieve the lines to create
variants = cleaned_input.get("variants")
| [CheckoutBillingAddressUpdate->[perform_mutation->[CheckoutBillingAddressUpdate,save]],CheckoutCreate->[process_checkout_lines->[check_lines_quantity],clean_input->[retrieve_billing_address,retrieve_shipping_address,process_checkout_lines],Arguments->[CheckoutCreateInput],perform_mutation->[save,CheckoutCreate,clean_input],save->[save,save_addresses]],CheckoutShippingAddressUpdate->[perform_mutation->[update_checkout_shipping_method_if_invalid,save,CheckoutShippingAddressUpdate]],update_checkout_shipping_method_if_invalid->[clean_shipping_method],CheckoutLineDelete->[perform_mutation->[update_checkout_shipping_method_if_invalid,CheckoutLineDelete]],CheckoutShippingMethodUpdate->[perform_mutation->[CheckoutShippingMethodUpdate,save,clean_shipping_method]],CheckoutRemovePromoCode->[perform_mutation->[CheckoutRemovePromoCode]],CheckoutAddPromoCode->[perform_mutation->[CheckoutAddPromoCode]],CheckoutEmailUpdate->[perform_mutation->[save,CheckoutEmailUpdate]],CheckoutLinesAdd->[perform_mutation->[CheckoutLinesAdd,update_checkout_shipping_method_if_invalid,check_lines_quantity]],CheckoutComplete->[perform_mutation->[CheckoutComplete]],CheckoutCustomerAttach->[perform_mutation->[save,CheckoutCustomerAttach]],CheckoutCustomerDetach->[perform_mutation->[save,CheckoutCustomerDetach]]] | Save a single checkout object. | I think in the new APIs we could start passing the country code as mutation parameter since we wanted to stop relying on geolocalization (since it won't work for people using VPN). |
@@ -68,6 +68,6 @@ func LoadFiles(dir, lang string, files []string) (map[string][]byte, error) {
func ValidateFileEquality(t *testing.T, actual, expected map[string][]byte) {
for name, file := range expected {
assert.Contains(t, actual, name)
- assert.Equal(t, string(file), string(actual[name]))
+ assert.Equal(t, string(file), string(actual[name]), name)
}
}
| [Join,Equal,ReadFile,Contains,Unmarshal,ImportSpec] | ValidateFileEquality checks that the files in the expected map are equal. | Does this just spit out the file name where the error is? Genius! |
@@ -432,7 +432,7 @@ public class JournalStorageManager extends AbstractJournalStorageManager {
journalFF.releaseBuffer(buffer);
}
- public long storePendingLargeMessage(final long messageID, long recordID) throws Exception {
+ public long storePendingLargeMessage(final long messageID, long recordID, boolean wait) throws Exception {
readLock();
try {
if (recordID == LargeServerMessage.NO_PENDING_ID) {
| [JournalStorageManager->[createFileForLargeMessage->[createFileForLargeMessage],pageClosed->[pageClosed,isReplicated],addBytesToLargeMessage->[addBytesToLargeMessage,isReplicated],allocateDirectBuffer->[allocateDirectBuffer],pageWrite->[isReplicated,pageWrite],stopReplication->[stop,performCachedLargeMessageDeletes],stop->[stop],createLargeMessage->[isReplicated,storePendingLargeMessage,createLargeMessage],parseLargeMessage->[createFileForLargeMessage],pageDeleted->[isReplicated,pageDeleted],deleteLargeMessageFile->[isReplicated,storePendingLargeMessage,run],internalStop->[stop],startReplication->[sendJournalFile,isReplicated,prepareJournalForCopy,performCachedLargeMessageDeletes]]] | This method is called when a direct buffer is available. | I'm not sure this is a good idea, this is introducing a blocking operation. |
@@ -1,7 +1,7 @@
/*
Minetest
-Copyright (C) 2015-2018 paramat
-Copyright (C) 2015-2018 kwolekr, Ryan Kwolek <kwolekr@minetest.net>
+Copyright (C) 2015-2019 paramat
+Copyright (C) 2015-2016 kwolekr, Ryan Kwolek
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
| [readParams->[getFlagStrNoEx,getFloatNoEx,getS16NoEx,getU16NoEx,getV3FNoEx,getNoiseParams],makeChunk->[dustTopNodes,getBlockSeed2,placeAllDecos,updateHeightmap,updateLiquid,generateBiomes,generateCavesNoiseIntersection,v3s16,calcBiomeNoise,generateCavesRandomWalk,placeAllOres,assert,generateDungeons,generateTerrain,calcLighting],writeParams->[setU16,setV3F,setFlagStr,setS16,setNoiseParams,setFloat],getSpawnLevelAtPoint->[MYMAX,getFractalAtPoint,NoisePerlin2D],generateTerrain->[perlinMap2D,getFractalAtPoint,m_data,index]] | 2015 - 12 - 15 Missing parameters for MapgenFractal. | Why did you remove this credit? Much was changed in this file, but there's surely a concept or even part of the old code left over. |
@@ -287,8 +287,9 @@ func TestDefaultSupport_Manager_EnsurePolicy(t *testing.T) {
onHasILMPolicy(testPolicy.Name).Return(true, nil),
},
},
- "overwrite existing": {
+ "overwrite": {
overwrite: true,
+ create: true,
calls: []onCall{
onCreateILMPolicy(testPolicy).Return(nil),
},
| [Manager,MapStr,Equal,MustNewConfigFrom,Error,Policy,AssertExpectations,EnsurePolicy,Return,New,Alias,NoError,EnsureAlias,Run,CheckEnabled,Mode] | TestDefaultSupport_Manager_EnsurePolicy tests that a manager creates a new policy with the createManager creates a manager for the given client handler. | The test was wrong. When overwrite is enabled we ignore checkExists and always attempt to create the resource. The mock as configured checks that: HasPolicy will not be called, ILMPolicy will be created and indicates success. |
@@ -109,6 +109,10 @@ public class StatusHistoryEndpointMerger implements EndpointResponseMerger {
noReadPermissionsComponentDetails = nodeStatus.getComponentDetails();
}
+ if (!nodeStatus.isIncludeCounters()) {
+ includeCounters = false;
+ }
+
final NodeIdentifier nodeId = nodeResponse.getNodeId();
final NodeStatusSnapshotsDTO nodeStatusSnapshot = new NodeStatusSnapshotsDTO();
nodeStatusSnapshot.setNodeId(nodeId.getId());
| [StatusHistoryEndpointMerger->[canHandle->[getMetricDescriptors],merge->[getMetricDescriptors]]] | Merge the successful responses and problematic responses into a single response. missing status history. | I'm not sure we need to add a new field to the `nodeStatus` as the read permission is already present in the corresponding `nodeResponseEntity`. |
@@ -22,6 +22,8 @@ import {
parseUrlWithA,
removeFragment,
} from './url';
+// Source for this constant is css/amp-story-player-iframe.css
+import {cssText} from '../build/amp-story-player-iframe.css';
import {dict} from './utils/object';
import {findIndex} from './utils/array';
import {setStyle} from './style';
| [No CFG could be retrieved] | Provides a simple way to display a single unique identifier in the AMP HTML Authors. A class that exports a single AMP . | Should we still just import this as the name `CSS`? That's what we use in e.g. extension files |
@@ -434,7 +434,7 @@ export class AppStorage {
* @return {Promise.<void>}
*/
async fetchWalletTransactions(index) {
- console.log('fetchWalletTransactions for wallet#', typeof index === 'undefined' ? '(all)' : index);
+ console.log('fetchWalletTransactions for wallet#', index);
if (index || index === 0) {
let c = 0;
for (let wallet of this.wallets.filter(wallet => wallet.type !== PlaceholderWallet.type)) {
| [No CFG could be retrieved] | Fetches balance from remote endpoint all transactions for each wallet. If index is present then fetch from Get all transactions in the wallet. | you're reverting changes in master |
@@ -0,0 +1,10 @@
+'use strict';
+
+var npmInstall = require('./scripts/npm/install-dependencies');
+
+module.exports = function(grunt) {
+
+ grunt.registerTask('npm-install', function() {
+ npmInstall.installDependencies();
+ });
+};
| [No CFG could be retrieved] | No Summary Found. | We need to remove this once we are finished testing |
@@ -323,14 +323,8 @@ namespace System.Net.Security
//
// This is a class holding a Credential handle reference, used for static handles cache
//
-#if DEBUG
- internal sealed class SafeCredentialReference : DebugCriticalHandleMinusOneIsInvalid
- {
-#else
- internal sealed class SafeCredentialReference : CriticalHandleMinusOneIsInvalid
+ internal sealed class SafeCredentialReference : CriticalFinalizerObject, IDisposable
{
-#endif
-
//
// Static cache will return the target handle if found the reference in the table.
//
| [No CFG could be retrieved] | Creates a new SafeCredentialReference object from a given Credential handle. Get a handle from the that was previously obtained by GetSecurityDescriptor. | Why is this valid? There's no way user code could drop one of these on the floor and result in this being invoked? Even if the answer is "no", this assert could fire / exception thrown if the SafeFreeCredentials instance passed to the constructor is concurrently closed after target.IsClosed check in CreateReference and before the DangerousAddRef call in the ctor. If that happens, the DangerousAddRef will throw an exception, but by that point, the SafeCredentialReference instance will have already been allocated and its finalizer will eventually run. Seems like this whole if DEBUG / endif section should be deleted. |
@@ -11,7 +11,7 @@ class ModerationsController < ApplicationController
order("hotness_score DESC").limit(100)
@articles = @articles.cached_tagged_with(params[:tag]) if params[:tag].present?
- @rating_votes = RatingVote.where(article: @articles.pluck(:id), user: current_user)
+ @rating_votes = RatingVote.where(article: @articles, user: current_user)
@articles = @articles.decorate
end
| [ModerationsController->[article->[authorize,render,find_by],comment->[to_i,authorize,render,find],index->[decorate,limit,pluck,trusted,where,cached_tagged_with,present?],after_action]] | Displays a list of all articles with a that is valid for the current user. | Same here, Rails will transform this in a single select statement, `@articles` are already going to be used by the view to render the page, there's no need to extract ids pre-emptively |
@@ -437,6 +437,12 @@ public class HadoopFormatIO {
.build();
}
+ /** Transforms the keys read from the source using the given key translation function. */
+ public Read<K, V> withKeyTranslation(SimpleFunction<?, K> function, Coder<K> coder) {
+ checkArgument(coder != null, "coder can not be null");
+ return withKeyTranslation(function).toBuilder().setKeyCoder(coder).build();
+ }
+
/** Transforms the values read from the source using the given value translation function. */
public Read<K, V> withValueTranslation(SimpleFunction<?, V> function) {
checkArgument(function != null, "function can not be null");
| [HadoopFormatIO->[CommitJobFn->[cleanupJob->[getOutputCommitter,getJobId]],AssignTaskFn->[createTaskIDForKV->[getJobId],getPartitioner->[getPartitioner],getJobId->[getJobId],getReducersCount->[getReducersCount]],ConfigurationCoder->[encode->[write]],Write->[populateDisplayData->[populateDisplayData]],SerializableSplit->[writeObject->[write]],Read->[withConfiguration->[setInputFormatKeyClass,getValueTranslationFunction,getKeyTranslationFunction,setConfiguration,build,setKeyTypeDescriptor,setValueTypeDescriptor,setInputFormatValueClass,setInputFormatClass],expand->[getValueTranslationFunction,getKeyTranslationFunction,getValueTypeDescriptor,getConfiguration,getKeyTypeDescriptor],withKeyTranslation->[build],withValueTranslation->[build],validateTransform->[getinputFormatValueClass,getValueTranslationFunction,getinputFormatKeyClass,getKeyTranslationFunction,getConfiguration]],HadoopInputFormatBoundedSource->[populateDisplayData->[populateDisplayData],HadoopInputFormatReader->[close->[close],getProgress->[getProgress]],createReader->[validate,createInputFormatInstance]],WriteFn->[setupTask->[processTaskException,setupTask,getTaskAttemptContext],processElement->[write],write->[write,processTaskException,getRecordWriter],finishBundle->[close,getTaskId,getTaskAttemptContext],processTaskException->[abortTask]],SetupJobFn->[processElement->[validateConfiguration,getJobId]],TaskContext->[abortTask->[abortTask,getTaskId,getJobId],toString->[getTaskId,getJobId],initOutputCommitter->[getOutputCommitter],initRecordWriter->[getRecordWriter]]]] | Adds key translation function to read. | You need to clear the keyCoder/valueCoder in the non coder based variants otherwise we won't honor the typedescriptor when the user changes the translation function |
@@ -17,12 +17,8 @@ import (
jsonw "github.com/keybase/go-jsonw"
)
-// Substitute vars for %{name} in the string.
-// Only substitutes whitelisted variables.
-// It is an error to refer to an unknown variable or undefined numbered group.
-// Match is an optional slice which is a regex match.
-// AllowActiveString makes active_string a valid variable.
-func substitute(template string, state scriptState) (string, libkb.ProofError) {
+// Substitute register values for %{name} in the string.
+func substitute(template string, state scriptState, regexEscape bool) (string, libkb.ProofError) {
var outerr libkb.ProofError
// Regex to find %{name} occurrences.
// Match broadly here so that even %{} is sent to the default case and reported as invalid.
| [Attr,GetBool,ReplaceAllStringFunc,Each,QuoteMeta,AtIndex,MatchString,New,Errorf,Len,MustCompile,Text,IsAbs,Join,Contains,ToLower,AtKey,Get,NewProofError,ToArray,Split,ParseIP,ToDictionary,GetInt,TrimSuffix,Sprintf,Keys,GetString,Parse] | Substitute vars for unknown variables in the string. jsonUnpackArray returns the elements of an array. | Ack! Boolean parameters! Could we split this into two functions? |
@@ -695,6 +695,16 @@ class CI_Upload {
return FALSE;
}
+ if ($this->min_width > 0 AND $D['0'] < $this->min_width)
+ {
+ return FALSE;
+ }
+
+ if ($this->min_height > 0 AND $D['1'] < $this->min_height)
+ {
+ return FALSE;
+ }
+
return TRUE;
}
| [CI_Upload->[is_allowed_dimensions->[is_image],_prep_filename->[mimes_types]]] | Checks if the image is allowed dimensions. | I think you have to fix the indentation for this line. |
@@ -157,10 +157,8 @@ class EpochsVectorizer(TransformerMixin):
Attributes
----------
- n_channels : int
+ n_channels_ : int
The number of channels.
- n_times : int
- The number of time points.
"""
def __init__(self, info=None):
| [PSDEstimator->[transform->[_psd_multitaper,ValueError,isinstance,type],fit->[ValueError,isinstance,type]],EpochsVectorizer->[inverse_transform->[reshape,ValueError,isinstance,type],transform->[type,atleast_3d,ValueError,isinstance,reshape],fit->[ValueError,isinstance,type]],Scaler->[inverse_transform->[type,atleast_3d,iteritems,ValueError,isinstance],transform->[type,atleast_3d,iteritems,ValueError,isinstance],__init__->[dict],fit->[type,dict,mean,atleast_3d,ValueError,isinstance,items,pick_types]],FilterEstimator->[transform->[band_pass_filter,type,low_pass_filter,atleast_3d,band_stop_filter,ValueError,isinstance,high_pass_filter],__init__->[_check_type_picks],fit->[type,ValueError,isinstance,float,pick_types]]] | Initialize the object with the given info. | no need to initialize these var here. The sklearn convention is `self.init_param = init_param`, `self.new_attribute_ = new_attribute`. |
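The reviewer's note above refers to the scikit-learn attribute convention: `__init__` stores its parameters verbatim, and anything derived from data gets a trailing underscore and is set in `fit()`. A minimal sketch of that convention (a hypothetical stand-in class, not the MNE `EpochsVectorizer` itself):

```python
import numpy as np

class ChannelVectorizer:
    """Sketch of the sklearn convention the reviewer cites:
    __init__ only does self.init_param = init_param; anything learned
    from data is a trailing-underscore attribute set in fit()."""

    def __init__(self, info=None):
        self.info = info  # stored as-is, nothing computed here

    def fit(self, X, y=None):
        # learned from data -> trailing underscore, set in fit()
        self.n_channels_ = X.shape[1]
        return self

    def transform(self, X):
        # flatten (n_epochs, n_channels, n_times) to 2-D
        return X.reshape(len(X), -1)

epochs = np.zeros((4, 3, 5))  # (n_epochs, n_channels, n_times)
vec = ChannelVectorizer().fit(epochs)
```

Because `n_channels_` only exists after `fit()`, there is nothing to document or initialize in `__init__`, which is why the docstring edit in the patch drops the pre-fit attributes.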
@@ -218,7 +218,7 @@ class TestWriteToTFRecord(TestTFRecordSink):
with TempDir() as temp_dir:
file_path_prefix = temp_dir.create_temp_file('result')
with TestPipeline() as p:
- input_data = [b'foo', b'bar']
+ input_data = [b'bar', b'foo']
_ = p | beam.Create(input_data) | WriteToTFRecord(
file_path_prefix, compression_type=CompressionTypes.GZIP)
| [TestTFRecordSink->[test_write_record_multiple->[_write_lines],test_write_record_single->[_write_lines]],TestTFRecordUtil->[test_read_record_invalid_data_mask->[_test_error,_increment_value_at_index],test_read_record_invalid_length_mask->[_test_error,_increment_value_at_index],test_read_record_invalid_record->[_test_error],_test_error->[_as_file_handle],test_read_record->[_as_file_handle]],TestReadFromTFRecord->[test_process_single->[_write_file],test_process_gzip->[_write_file_gzip],test_process_deflate->[_write_file_deflate],test_process_gzip_auto->[_write_file_gzip],test_process_auto->[_write_file_gzip],test_process_multiple->[_write_file]],TestReadAllFromTFRecord->[test_process_single->[_write_file],test_process_gzip->[_write_file_gzip],test_process_deflate->[_write_file_deflate],_write_glob->[_write_file],test_process_auto->[_write_file_gzip],test_process_glob->[_write_glob],test_process_multiple_globs->[_write_glob],test_process_multiple->[_write_file]],TestEnd2EndWriteAndRead->[test_end2end->[create_inputs],test_end2end_read_write_read->[create_inputs],test_end2end_auto_compression_unsharded->[create_inputs],test_end2end_auto_compression->[create_inputs]]] | Write a record with gzip compression. | nit: instead of changing the order doesn't it make sense to use `sorted()` in the `assertEqual()` calls? |
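The nit above suggests making the assertion order-insensitive instead of hard-coding whatever order the sink happens to write. A small sketch of that idea (hypothetical helper, not the Beam test itself):

```python
def assert_same_records(actual, expected):
    # Compare as multisets: sort both sides so the test passes for
    # any write order the sink produces.
    assert sorted(actual) == sorted(expected), (actual, expected)

records_on_disk = [b'foo', b'bar']  # hypothetical order produced by the sink
assert_same_records(records_on_disk, [b'bar', b'foo'])
```

`unittest` users can get the same effect with `assertCountEqual`, which also reports a useful diff on failure.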
@@ -229,7 +229,7 @@ public class AvroSource<T> extends BlockBasedSource<T> {
private AvroSource(String fileNameOrPattern, long minBundleSize, String schema, Class<T> type,
String codec, byte[] syncMarker) {
- super(fileNameOrPattern, minBundleSize);
+ super(fileNameOrPattern, /* TODO */ true, minBundleSize);
this.readSchemaString = internSchemaString(schema);
this.codec = codec;
this.syncMarker = syncMarker;
| [AvroSource->[validate->[validate],createDatumReader->[getFileSchema,getReadSchema],AvroBlock->[decodeAsInputStream,getCodec,createDatumReader],AvroReader->[getCurrentSource->[getCurrentSource],getSplitPointsRemaining->[getSplitPointsRemaining],createStream->[getSyncMarker],startReading->[getSyncMarker,createStream],readNextBlock->[getCurrentSource,getSyncMarker]]]] | Returns an AvroSource that reads files containing records of the given type and uses the supplied minimum Creates a source for a subrange of a file. | I've added the option as a builder API where the pattern already exists. Most sources are fully-configured within their constructors, so I've added it there. I generally worry about bloating the parameter list for constructors, but I also don't like maintaining many constructor overloads to handle default parameters. We could convert long parameter lists to an `Options` object. I didn't do that yet because I don't think any of these tip the scale. Let me know what you think. |
@@ -46,6 +46,8 @@ public class HoodieCompactionConfig extends DefaultHoodieConfig {
public static final String INLINE_COMPACT_PROP = "hoodie.compact.inline";
// Run a compaction every N delta commits
public static final String INLINE_COMPACT_NUM_DELTA_COMMITS_PROP = "hoodie.compact.inline.max.delta.commits";
+ public static final String INLINE_COMPACT_ELAPSED_TIME_PROP = "hoodie.compact.inline.max.delta.time";
+ public static final String INLINE_COMPACT_TRIGGER_STRATEGY_PROP = "hoodie.compact.inline.trigger.strategy";
public static final String CLEANER_FILE_VERSIONS_RETAINED_PROP = "hoodie.cleaner.fileversions.retained";
public static final String CLEANER_COMMITS_RETAINED_PROP = "hoodie.cleaner.commits.retained";
public static final String CLEANER_INCREMENTAL_MODE = "hoodie.cleaner.incremental.mode";
| [HoodieCompactionConfig->[Builder->[build->[HoodieCompactionConfig]]]] | Configuration for Hoodie Compaction. nannanNode is the last node in the list of all commits that have the total record. | how about renaming it to `hoodie.compact.inline.max.delta.seconds`, it seems more readable cc @yanghua |
@@ -127,6 +127,7 @@ void _exit(int status) {
abort();
}
+int atexit(void (*func)()) __attribute__((weak));
int atexit(void (*func)()) {
(void) func;
return 0;
| [No CFG could be retrieved] | Exit the program. | I think this was mentioned before, weak definition will apply to every symbol otherwise. Placing it here we specify that libc_replacements.o version is weak |
@@ -173,7 +173,7 @@ module ApplicationHelper
end
def community_name
- @community_name ||= SiteConfig.community_name # rubocop:disable Rails/HelperInstanceVariable
+ @community_name ||= SiteConfig.community_name
end
def community_qualified_name
| [community_name->[community_name],title_with_timeframe->[title],sanitized_referer->[sanitized_referer]] | Returns the name of the community that is not in use. | Did this get smarter about setting instance variables in helpers vs accessing them so we could remove the cop? |
@@ -23,7 +23,13 @@ namespace System.Text.Json
{
private ReadOnlyMemory<byte> _utf8Json;
private MetadataDb _parsedData;
- private byte[]? _extraRentedBytes;
+
+ private byte[]? _extraRentedArrayPoolBytes;
+ private bool _hasExtraRentedArrayPoolBytes;
+
+ private PooledByteBufferWriter? _extraPooledByteBufferWriter;
+ private bool _hasExtraPooledByteBufferWriter;
+
private (int, string?) _lastIndexAndString = (-1, null);
internal bool IsDisposable { get; }
| [JsonDocument->[GetRawValue->[GetEndIndex],TextEquals->[TextEquals],GetPropertyRawValueAsString->[GetPropertyRawValue],WriteComplexElement->[GetEndIndex],WritePropertyName->[UnescapeString,ClearAndReturn,WritePropertyName],JsonElement->[GetEndIndex],WriteTo->[WriteTo],WriteString->[UnescapeString,ClearAndReturn],GetPropertyRawValue->[GetEndIndex],GetNameOfPropertyValue->[GetString],Dispose->[Dispose],GetRawValueAsString->[GetRawValue]]] | Creates a new object from a read - only object and a metadata object. _utf8Json = read - only memory buffer _extraRentedBytes = read. | Why are the extra flags needed? They seem to be equivalent to checking whether the rented references are null. |
@@ -73,7 +73,7 @@ class EventGridPublisherClient:
def __init__(
self,
endpoint: str,
- credential: Union[AzureKeyCredential, AzureSasCredential],
+ credential: Union["AsyncTokenCredential", AzureKeyCredential, AzureSasCredential],
**kwargs: Any
) -> None:
self._client = EventGridPublisherClientAsync(
| [EventGridPublisherClient->[__aexit__->[__aexit__],__aenter__->[__aenter__],close->[__aexit__]]] | Initializes EventGridPublisherClientAsync object. | That seems inconsistent, if not used those types should be in the `TYPE_CHECKING` as well, but I see now reason why some type would be string and some would be types |
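The review above is about the `TYPE_CHECKING` pattern: types used only in annotations are imported under `if TYPE_CHECKING:` and referenced as strings, so they cost nothing at runtime. A minimal sketch, using `Decimal` as a stand-in for a checker-only type like `AsyncTokenCredential`:

```python
from typing import TYPE_CHECKING, Any, Union

if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime, so this
    # import adds no runtime dependency. Decimal is a hypothetical
    # stand-in for a checker-only credential type.
    from decimal import Decimal

def make_client(endpoint: str, credential: "Union[Decimal, str]", **kwargs: Any) -> str:
    # The annotation must be a string, since Decimal is undefined at runtime here.
    return f"{endpoint}:{credential}"
```

The reviewer's point is consistency: either all checker-only types go in the `TYPE_CHECKING` block with string annotations, or none do; mixing runtime imports with string-quoted names in one `Union` is confusing.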
@@ -288,13 +288,13 @@ namespace Content.Server.GameObjects
var entity = _entityManager.GetEntity(remove.EntityUid);
if (entity != null && storage.Contains(entity))
{
- Remove(entity);
var item = entity.GetComponent<ItemComponent>();
if (item != null && playerentity.TryGetComponent(out HandsComponent hands))
{
- if (hands.PutInHand(item))
- return;
+ if (hands.CanPutInHand(item) && hands.PutInHand(item))
+ //Remove(entity);
+ return;
}
entity.GetComponent<ITransformComponent>().WorldPosition = ourtransform.WorldPosition;
| [ServerStorageComponent->[OnDestroy->[Remove],Insert->[Insert],Initialize->[Initialize],HandleNetworkMessage->[UnsubscribeSession,Remove,HandleNetworkMessage],ExposeData->[ExposeData],UpdateClientInventory->[UnsubscribeSession],Activate->[UseEntity],PlayerInsertEntity->[_ensureInitialCalculated,Insert],HandlePlayerSessionChangeEvent->[UnsubscribeSession],Remove->[Remove],UnsubscribeSession->[Remove]]] | Override this method to handle a network message that is not relevant to the player. | Indentation is a bit weird here. |
@@ -164,6 +164,7 @@ function handleClickUrl(win) {
return Promise.resolve();
}
+ ////////////////// TODO: set ampshare without this
if (win.location.hash) {
// This is typically done using replaceState inside the viewer.
// If for some reason it failed, get rid of the fragment here to
| [addParamsToUrl,all,status,href,dev,getMode,whenFirstVisible,applyResponse,isTrustedViewer,replaceUrl,isExperimentOn,location,getQueryParamUrl,hasCapability,history,document,getAttribute,indexOf,user,getParam,isTrustedReferrer,sendMessageAwaitResponse,json,invoke,parseQueryString,length,push,resolve,timerFor,search,isProxyOrigin,whenReady,xhrFor,splice,resolveImpression,handleClickUrl,getBody,parseUrl,viewerForDoc,handleReplaceUrl] | Handle a click url request. Get the from the impression proxy. | Also note that `replaceState` is called below in `applyResponse()`. I don't really get what the point of this class is. AMP ALP? Why change the window location from the result of a JSON fetch to the "click" viewer param? @zhouyx for context. |
@@ -197,7 +197,7 @@ $arrayfields = array(
'f.total_localtax2'=>array('label'=>$langs->transcountry("AmountLT2", $mysoc->country_code), 'checked'=>0, 'enabled'=>($mysoc->localtax2_assuj == "1"), 'position'=>120),
'f.total_ttc'=>array('label'=>"AmountTTC", 'checked'=>0, 'position'=>130),
'u.login'=>array('label'=>"Author", 'checked'=>1, 'position'=>135),
- 'dynamount_payed'=>array('label'=>"Received", 'checked'=>0, 'position'=>140),
+ 'dynamount_payed'=>array('label'=>"Payed", 'checked'=>0, 'position'=>140),
'rtp'=>array('label'=>"Rest", 'checked'=>0, 'position'=>150), // Not enabled by default because slow
'f.multicurrency_code'=>array('label'=>'Currency', 'checked'=>0, 'enabled'=>(empty($conf->multicurrency->enabled) ? 0 : 1), 'position'=>160),
'f.multicurrency_tx'=>array('label'=>'CurrencyRate', 'checked'=>0, 'enabled'=>(empty($conf->multicurrency->enabled) ? 0 : 1), 'position'=>170),
| [fetch,selectMassAction,form_multicurrency_rate,select_country,fetch_object,getSommePaiement,jdate,select_all_categories,transcountry,getDocumentsLink,select_conditions_paiements,select_salesrepresentatives,getNomUrl,rollback,getLoginUrl,select_dolusers,begin,initHooks,demande_prelevement,typent_array,idate,effectif_array,load,plimit,getSumCreditNotesUsed,executeHooks,loadLangs,select_categories,getLibType,fetch_lines,select_types_paiements,LibStatut,escape,sanitize,showCheckAddButtons,close,selectMultiCurrency,multiSelectArrayWithCheckbox,selectDate,query,fetch_name_optionals_label,form_modes_reglement,free,getMarginInfosArray,trans,num_rows,showFilterButtons,getAvailableDiscounts,hasDelay,form_conditions_reglement,getOptionalsFromPost,selectarray,commit,showdocuments,getSumDepositsUsed] | This is a list of all possible values for the Fusion configuration. Demonstrates how to set the multicurrency configuration. | We should not have same label for supplier and customer invoice. One is amount paid, other is amount collected. |
@@ -139,8 +139,18 @@ AmpBaseCarousel['props'] = {
'snapBy': {attr: 'snap-by', type: 'number', media: true},
'snapAlign': {attr: 'snap-align', type: 'string', media: true},
'visibleCount': {attr: 'visible-count', type: 'number', media: true},
+ 'children': {
+ props: {
+ 'thumbnailSrc': {attr: 'data-thumbnail-src'},
+ },
+ selector: '*', // This should be last as catch-all.
+ single: false,
+ },
};
+/** @override */
+AmpBaseCarousel['usesShadowDom'] = true;
+
/** @override */
AmpBaseCarousel['shadowCss'] = COMPONENT_CSS;
| [No CFG could be retrieved] | Props of the carousel. Fires a slide change event for the specified element and index. | This confuses me. What does `props` insides a prop definition do? |
@@ -543,6 +543,7 @@ class Jetpack_Carousel {
foreach ( (array) $extra_data as $data_key => $data_values ) {
$html = str_replace( '<div ', '<div ' . esc_attr( $data_key ) . "='" . json_encode( $data_values ) . "' ", $html );
$html = str_replace( '<ul class="wp-block-gallery', '<ul ' . esc_attr( $data_key ) . "='" . json_encode( $data_values ) . "' class=\"wp-block-gallery", $html );
+ $html = str_replace( '<ul class="blocks-gallery-grid', '<ul ' . esc_attr( $data_key ) . "='" . json_encode( $data_values ) . "' class=\"blocks-gallery-grid", $html );
}
}
| [Jetpack_Carousel->[carousel_display_geo_callback->[settings_checkbox],carousel_display_geo_sanitize->[sanitize_1or0_option],carousel_enable_it_callback->[settings_checkbox],carousel_enable_it_sanitize->[sanitize_1or0_option],carousel_display_exif_callback->[settings_checkbox],settings_checkbox->[test_1or0_option],add_data_img_tags_and_enqueue_assets->[enqueue_assets],carousel_background_color_callback->[settings_select],enqueue_assets->[asset_version],carousel_display_exif_sanitize->[sanitize_1or0_option]]] | Add data to the Gallery container. | I noticed a small problem with the `str_replace()` call. Your change didn't introduce the problem, but maybe it could be fixed in this PR. This `str_replace()` will add only the first `$data_key`/`$data_value` pair in `$extra_data` to the ul element. The first loop changes the element to `<ul key='value' class="blocks-gallery-grid`. So, the string `<ul class="blocks-gallery-grid` no longer exists. |
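The reviewer's observation above can be reproduced outside PHP. In this Python sketch (with hypothetical attribute names), the second loop iteration finds nothing to replace, because the first iteration already rewrote the search string:

```python
html = '<ul class="blocks-gallery-grid">'
extra_data = {"data-carousel-extra": "a", "data-another-key": "b"}

# Mimic the PHP loop: splice each key/value pair in front of the class attribute.
for key, value in extra_data.items():
    html = html.replace(
        '<ul class="blocks-gallery-grid',
        "<ul " + key + "='" + value + "' class=\"blocks-gallery-grid",
    )

# Only the first pair made it in: after iteration one, the literal search
# string '<ul class="blocks-gallery-grid' no longer exists in html.
assert "data-carousel-extra" in html
assert "data-another-key" not in html
```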
@@ -71,6 +71,11 @@ def plot_epochs_image(epochs, picks=None, sigma=0., vmin=None,
Figure instance to draw the image to. Figure must contain two axes for
drawing the single trials and evoked responses. If None a new figure is
created. Defaults to None.
+ axes : list of matplotlib axes | None
+ List of axes instances to draw the image, erp and colorbar to.
+ Must be of length three if colorbar is True (with the last list element
+ being the colorbar axes) or two if colorbar is False. If both fig and
+ axes are passed, fig is ignored. Defaults to None.
overlay_times : array-like, shape (n_epochs,) | None
If not None the parameter is interpreted as time instants in seconds
and is added to the image. It is typically useful to display reaction
| [_plot_onkey->[_plot_traces,_plot_window],_mouse_click->[plot_epochs_image,_pick_bad_epochs,_plot_window],_toggle_labels->[_plot_vert_lines],_epochs_navigation_onclick->[_draw_epochs_axes]] | Plots a series of event related potentials in an image. Creates a new is object. Plots a nanoseconds image. Plot the nanoseconds of the data. | Wouldn't an error be better than silently ignoring it? |
@@ -219,6 +219,18 @@ def should_overwrite(filepath, overwrite):
return True
+def convert_output_metrics(metrics_config, custom_objects):
+ from google3.third_party.tensorflow.python.keras import metrics as metrics_module # pylint:disable=g-import-not-at-top
+ if isinstance(metrics_config, list):
+ return [convert_output_metrics(mc, custom_objects) for mc in metrics_config]
+ elif (isinstance(metrics_config, dict) or
+ (metrics_config not in ['accuracy', 'acc', 'crossentropy', 'ce'])):
+ # Do not deserialize accuracy and cross-entropy strings as we have special
+ # case handling for these in compile, based on model output shape.
+ return metrics_module.deserialize(metrics_config, custom_objects)
+ return metrics_config
+
+
def compile_args_from_training_config(training_config, custom_objects=None):
"""Return model.compile arguments from training config."""
if custom_objects is None:
| [trace_model_call->[raise_model_input_error,model_input_signature]] | Return model. compile arguments from training config. | `google3.third_party` can be removed: `from tensorflow.python.keras import metrics as metrics_module  # pylint:disable=g-import-not-at-top` |
@@ -1247,7 +1247,7 @@ weight_norm : str | None
The Neural Activity Index :footcite:`VanVeenEtAl1997` will be computed,
which simply scales all values from ``'unit-noise-gain'`` by a fixed
value.
- - ``'unit-noise-gain-invariante'``
+ - ``'unit-noise-gain-invariant'``
Compute a rotation-invariant normalization using the matrix square
root. This differs from ``'unit-noise-gain'`` only when
``pick_ori='vector'``, creating a solution that:
| [copy_doc->[wrapper->[ValueError,len]],copy_function_doc_to_method_doc->[wrapper->[split,lstrip,len,strip,ValueError,join,enumerate]],linkcode_resolve->[relpath,split,hasattr,getattr,dirname,getsourcefile,get,len,join,normpath,getsourcelines],fill_doc->[split,indentcount_lines,append,len,str,items,join,splitlines,RuntimeError],deprecated->[_update_doc->[split,lstrip,len,strip,enumerate],_make_fun->[_update_doc,dict,repr,FunctionMaker,make],_decorate_fun->[_make_fun],__call__->[_decorate_class,isinstance,_decorate_fun],_decorate_class->[_make_fun]],copy_base_doc_to_subclass_doc->[dir,callable,mro,getattr],open_docs->[open_new_tab,keys,dict,sorted,_check_option,get_config],deprecated_alias->[split,int,deepcopy,currentframe,deprecated,str,join],dict,filterwarnings,format,docdict] | The rank of the non - zero non - zero non - zero non - zero non - Additional information about the which can be used to pick the . | this is just a random typo that I noticed while making the other fixes. |
@@ -27,11 +27,7 @@ This test can accept the following parameters:
* side_input_type (str) - Required. Specifies how the side input will be
materialized in ParDo operation. Choose from (dict, iter, list).
* window_count (int) - The number of fixed sized windows to subdivide the
- side input into. By default, no windows will be used.
- * side_input_size (int) - The size of the side input. Must be equal to or
- lower than the size of the main input. If lower, the side input will be
- created by applying a :class:`beam.combiners.Sample
- <apache_beam.transforms.combiners.Sample>` transform.
+ side input into. By default, a global window will be used.
* access_percentage (int) - Specifies the percentage of elements in the side
input to be accessed. By default, all elements will be accessed.
| [SideInputTest->[test->[SequenceSideInputTestDoFn,AddEventTimestamps,materialize_as,GetRandomKeys,MappingSideInputTestDoFn]],SideInputTest] | A basic load test that creates a new object and checks its contents. Reads a single object from a file. | This should mention that by default the side_input will be subdivided into 1000 windows. |
@@ -1107,6 +1107,14 @@ class StubGenerator(mypy.traverser.TraverserVisitor):
name = self.typing_name(name)
self.import_tracker.require_name(name)
+ def add_abc_import(self, name: str) -> None:
+ """Add a name to be imported from collections.abc, unless it's imported already.
+
+ The import will be internal to the stub.
+ """
+ name = self.typing_name(name)
+ self.import_tracker.require_name(name)
+
def add_import_line(self, line: str) -> None:
"""Add a line of text to the import section, unless it's already there."""
if line not in self._import_lines:
| [get_qualified_name->[get_qualified_name],find_module_paths_using_imports->[StubSource],ImportTracker->[reexport->[require_name]],generate_stubs->[generate_stub_from_ast,generate_asts_for_modules,collect_docs_signatures,mypy_options,collect_build_targets],find_method_names->[find_method_names,add],main->[generate_stubs,parse_options],find_self_initializers->[SelfTraverser],generate_stub_from_ast->[output,StubGenerator],generate_asts_for_modules->[parse_source_file],find_module_paths_using_search->[StubSource,is_non_library_module],parse_options->[Options],StubGenerator->[process_member_expr_decorator->[require_name],visit_class_def->[require_name,AliasPrinter,add_import],output->[import_lines],get_base_types->[AliasPrinter],get_str_type_of_node->[typing_name,add_typing_import],visit_decorator->[visit_func_def],print_annotation->[AnnotationPrinter],visit_import->[add_import],visit_overloaded_func_def->[visit_func_def],is_private_member->[is_private_name],process_namedtuple->[require_name],__init__->[ImportTracker,reexport,add_import_from],record_name->[is_top_level],add_coroutine_decorator->[require_name,add_decorator],is_alias_expression->[is_alias_expression],is_recorded_name->[is_top_level],visit_mypy_file->[find_defined_names,add_import_from,find_referenced_names],visit_import_from->[add_import_from,reexport],add_typing_import->[require_name,typing_name],process_name_expr_decorator->[require_name],process_typealias->[AliasPrinter]],collect_build_targets->[StubSource,remove_blacklisted_modules],main] | Add a name to be imported from typing unless it s already imported. | The name `typing_name` is not really correct anymore, since we use it for `collections.abc` as well. Should we rename it? |
@@ -435,6 +435,13 @@ void Certificate::load_cert_bytes(const std::string& path)
#else
std::ifstream in(path.c_str(), std::ios::binary);
+ const std::ifstream::pos_type begin = in.tellg();
+ in.seekg(0, std::ios::end);
+ const std::ifstream::pos_type end = in.tellg();
+ in.seekg(0, std::ios::beg);
+
+ original_bytes_.length(end - begin + 1);
+ in.read(reinterpret_cast<char*>(&original_bytes_[0]), end - begin);
if (!in) {
ACE_ERROR((LM_WARNING,
| [No CFG could be retrieved] | - - - - - - - - - - - - - - - - - - load_cert_data_bytes Loads a list of certificate from PEM file. | Isn't this guaranteed to be 0 for a file opened (without "append")? Does it need to check if the stream is open first? |
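The C++ `tellg`/`seekg` size probe above has a direct Python analogue, and it also answers the reviewer's first question: for a stream opened fresh (without append), the initial position is 0.

```python
import os
import tempfile

def probe_size(path):
    """Return (begin, end) stream positions, mirroring the tellg/seekg idiom."""
    with open(path, "rb") as f:
        begin = f.tell()            # 0 for a freshly opened, non-append stream
        f.seek(0, os.SEEK_END)
        end = f.tell()              # byte size of the file
        f.seek(0, os.SEEK_SET)      # rewind before the actual read
        return begin, end

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"-----BEGIN CERTIFICATE-----\n")   # stand-in for PEM bytes

begin, end = probe_size(tmp.name)
assert begin == 0 and end == 28
os.unlink(tmp.name)
```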
@@ -81,8 +81,8 @@ export default class ListingAdapterV1 extends AdapterBase {
? new Money(ipfsData.commission)
: null
listing.commissionPerUnit = ipfsData.commissionPerUnit
- ? new Money(ipfsData.commissionPerUnit)
- : null
+ ? new Money(ipfsData.commissionPerUnit)
+ : null
} else if (listing.type === 'fractional') {
listing.slots = ipfsData.slots
listing.timeIncrement = ipfsData.timeIncrement
| [No CFG could be retrieved] | Get a listing of the given type. | I'm a little confused about the use of `new Money()` as it is used in this file (not just from this new use of it). To me it looks like you'd at least have to pass it an object with a key called `amount` in order for it to return anything useful, and ideally, you'd also pass it another key called `currency`, which differs between `OGN` and `ETH` in this file (price is normally ETH and commission is normally OGN), but that's not defined in this file... How does `new Money()` return a useful object without being explicitly passed `amount` or `currency`? |
@@ -357,6 +357,7 @@ const files = {
'spec/app/shared/util/entity-utils.spec.ts',
'spec/app/shared/auth/private-route.spec.tsx',
'spec/app/shared/layout/header.spec.tsx',
+ 'spec/app/shared/layout/menus/account.spec.tsx',
'spec/app/modules/account/register/register.spec.tsx',
'spec/app/modules/account/register/register.reducer.spec.ts',
'spec/app/modules/account/activate/activate.reducer.spec.ts',
| [No CFG could be retrieved] | includes layout header footer and password - strength - bar. css Tests for missing components. | should be `spec/app/shared/layout/header/menus/account.spec.tsx` |
@@ -15,6 +15,7 @@ var (
// when this variable is created. See ChangeDefaultCfgfileFlag which should
// be called prior to flags.Parse().
configfiles = flagArgList("c", "beat.yml", "Configuration file")
+ overwrites = common.NewFlagConfig(nil, nil, "E", "Configuration overwrite")
testConfig = flag.Bool("configtest", false, "Test configuration and exit.")
)
| [SetDefault,Dir,Join,Unpack,LoadFile,LoadFiles,Errorf,Abs,Bool] | Load reads the configuration from a YAML file and imports it into the given interface. IsTestConfig returns true if the configuration is used for testing. | Can the `-E` flag be used multiple times to overwrite multiple variables? |
@@ -50,6 +50,7 @@ define([
'./Camera',
'./CullFace',
'./CullingVolume',
+ './DebugCameraPrimitive',
'./OrthographicFrustum',
'./Pass',
'./PerInstanceColorAppearance',
| [No CFG could be retrieved] | Define a uniform uniform shader. - - - - - - - - - - - - - - - - - -. | `GeometryAttributes` and perhaps other includes are no longer needed. |
@@ -1516,6 +1516,7 @@ abstract class ElggEntity extends \ElggData implements
$container_guid = $this->attributes['container_guid'];
if ($container_guid == 0) {
+ $this->attributes['container_guid'] = $owner_guid;
$container_guid = $owner_guid;
}
$container_guid = (int)$container_guid;
| [ElggEntity->[countAnnotations->[getAnnotationCalculation],canEditMetadata->[canEdit],export->[getGUID,getExportableValues],getURL->[getSubtype,getType],refresh->[load],getIconURL->[getType],get->[__get],disable->[disableMetadata,canEdit,disableAnnotations,disable],prepareObject->[getSubtype,getURL,getContainerGUID,getType,getTimeUpdated,getOwnerGUID],update->[canEdit],enable->[deleteMetadata,enableMetadata,canEdit,enable,enableAnnotations],getAnnotationsMin->[getAnnotationCalculation],getSystemLogID->[getGUID],getContainerEntity->[getContainerGUID],canDelete->[canEdit,get],getAnnotationsMax->[getAnnotationCalculation],load->[isFullyLoaded],loadAdditionalSelectValues->[setVolatileData],canEdit->[canEdit],getAnnotationsSum->[getAnnotationCalculation],deleteOwnedAccessCollections->[delete],save->[getGUID],delete->[deleteAnnotations,deleteMetadata,deleteOwnedAnnotations,deleteRelationships,deleteOwnedMetadata,delete,canDelete],create->[canWriteToContainer,setPrivateSetting,getContainerEntity,getOwnerEntity,annotate],set->[__set],getAnnotationsAvg->[getAnnotationCalculation]]] | Creates an entity if a user is not logged in and the current user can write to a base entity it Save all unsaved attributes. | Good catch. Testing this would be easy, no? |
@@ -53,4 +53,9 @@ public class UnownedOutputStream extends FilterOutputStream {
public String toString() {
return MoreObjects.toStringHelper(UnownedOutputStream.class).add("out", out).toString();
}
+
+ @Override
+ public void write(byte[] b, int off, int len) throws IOException {
+ out.write(b, off, len);
+ }
}
| [UnownedOutputStream->[hashCode->[hashCode],toString->[toString],equals->[equals]]] | Returns a string representation of this output stream. | Wondering if we should do the bounds check here too (like in `FilterOutputStream`), or do we assume that the delegate does the check? Opinions, @lukecwik? |
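For context, the check being discussed can be sketched as follows — a Python stand-in for `FilterOutputStream`'s `(off, len)` validation before delegating (names are illustrative, not Beam's API):

```python
def checked_write(delegate, b, off, length):
    """Validate (off, length) against the buffer before delegating the write."""
    if off < 0 or length < 0 or off + length > len(b):
        raise IndexError("off=%d, length=%d out of bounds for a %d-byte buffer"
                         % (off, length, len(b)))
    delegate.extend(b[off:off + length])

out = bytearray()
checked_write(out, b"abcdef", 1, 3)
assert bytes(out) == b"bcd"
```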
@@ -422,7 +422,10 @@ loop:
}
// Mark ourselved as Leaving so no more samples are send to us.
- i.changeState(context.Background(), LEAVING)
+ err := i.changeState(context.Background(), LEAVING)
+ if err != nil {
+ level.Error(util.Logger).Log("msg", "failed to set state to LEAVING", "ring", i.RingName, "err", err)
+ }
// Do the transferring / flushing on a background goroutine so we can continue
// to heartbeat to consul.
| [ClaimTokensFor->[setTokens],loop->[GetState],changeState->[updateConsul,setState,GetState],verifyTokens->[setTokens,GetState],compareTokens->[getTokens],autoJoin->[setState,getTokens,GetState,setTokens],initRing->[setTokens,setState,GetState],updateConsul->[getTokens,GetState],RegisterFlagsWithPrefix->[RegisterFlagsWithPrefix]] | loop is the main loop for the lifecycler This function is called when the ring is observing and when the ring is in the ACTIVE This loop is run in a goroutine and will process any unregisters and updates the consul instance. | `to set state` > `to change the state` |
@@ -27,10 +27,12 @@ public abstract class ScalarImplementationDependency
implements ImplementationDependency
{
private final Optional<InvocationConvention> invocationConvention;
+ private final Class<?> type;
- protected ScalarImplementationDependency(Optional<InvocationConvention> invocationConvention)
+ protected ScalarImplementationDependency(Optional<InvocationConvention> invocationConvention, Class<?> type)
{
this.invocationConvention = requireNonNull(invocationConvention, "invocationConvention is null");
+ this.type = requireNonNull(type, "type is null");
if (invocationConvention.map(InvocationConvention::supportsInstanceFactor).orElse(false)) {
throw new IllegalArgumentException(getClass().getSimpleName() + " does not support instance functions");
}
| [ScalarImplementationDependency->[resolve->[getMethodHandle],IllegalArgumentException,orElse,requireNonNull,getSimpleName]] | Produces a class implementing the ScalarImplementationDependency interface. | I can't find why this had to change in this commit. |
@@ -267,6 +267,7 @@ Rails.application.routes.draw do
:constraints => { view: /moderate/ }
get "/listings/:category/:slug/delete_confirm" => "listings#delete_confirm"
delete "/listings/:category/:slug" => "listings#destroy"
+ patch "listing_endorsement/:id" => "listing_endorsements#update", :as => :approve_endorsement
get "/notifications/:filter" => "notifications#index"
get "/notifications/:filter/:org_id" => "notifications#index"
get "/notification_subscriptions/:notifiable_type/:notifiable_id" => "notification_subscriptions#show"
| [new,authenticate,authenticated,redirect,devise_scope,mount,put,draw,freeze,has_role?,resources,member,root,use,scope,post,set,controllers,require,secrets,class_eval,use_doorkeeper,resource,patch,devise_for,each,production?,delete,namespace,collection,get,session_options,tech_admin?,app] | This is a list of all the routes that are registered with the email_authorizations module chat_channels. index. | why not simply add ` resources :listing_endorsements, only: %i[create update]` up there? |
@@ -267,6 +267,10 @@ class Resolver(object):
# requirements we have to pull the tree down and inspect to assess
# the version #, so it's handled way down.
if not req_to_install.link:
+ if req_to_install.is_pinned:
+ # No need to check the index for a better version.
+ return 'already satisfied'
+
try:
self.finder.find_requirement(req_to_install, upgrade=True)
except BestVersionAlreadyInstalled:
| [Resolver->[_resolve_one->[add_req,_get_abstract_dist_for,_check_dist_requires_python],get_installation_order->[schedule->[schedule],schedule],_check_skip_installed->[_set_req_to_reinstall,_is_upgrade_allowed],_get_abstract_dist_for->[_check_skip_installed,_set_req_to_reinstall,_is_upgrade_allowed]]] | Checks if the req_to_install should be skipped. | Should we be checking that the pinned does indeed match the installed version here? I imagine this'll return even if we're at `0.1.0` if pinned on `0.1.1` for example. |
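The reviewer's concern — that a pinned requirement could short-circuit even when the installed version differs — can be illustrated with a hypothetical helper (not pip's actual API):

```python
def skip_index_lookup(pinned_version, installed_version):
    """Only report 'already satisfied' when the pin matches what is installed."""
    return installed_version is not None and installed_version == pinned_version

assert skip_index_lookup("0.1.1", "0.1.1")        # pin matches: safe to skip
assert not skip_index_lookup("0.1.1", "0.1.0")    # pinned 0.1.1, have 0.1.0
assert not skip_index_lookup("0.1.1", None)       # nothing installed yet
```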
@@ -436,7 +436,7 @@ func (o *Operation) EnsureShootStateExists(ctx context.Context) error {
blockOwnerDeletion := false
ownerReference.BlockOwnerDeletion = &blockOwnerDeletion
- _, err := controllerutil.CreateOrUpdate(ctx, o.K8sGardenClient.Client(), shootState, func() error {
+ _, err := controllerutil.CreateOrUpdate(ctx, o.K8sGardenClient.DirectClient(), shootState, func() error {
shootState.OwnerReferences = []metav1.OwnerReference{*ownerReference}
return nil
})
| [DeleteClusterResourceFromSeed->[InitializeSeedClients],InjectSeedShootImages->[SeedVersion,ShootVersion],InjectSeedSeedImages->[SeedVersion],InjectShootShootImages->[ShootVersion]] | EnsureShootStateExists creates a new ShootState if it does not already exist. | Have you seen any issues here with the cached client? If yes, I would probably also go for a patch here instead of using the `DirectClient`. |
@@ -187,15 +187,8 @@ class GradientChecker(unittest.TestCase):
backward_op.infer_shape(scope)
backward_op.run(scope, ctx)
- if isinstance(place, core.CPUPlace):
- msg = "CPU kernel gradient is not close to numeric gradient"
- else:
- if isinstance(place, core.GPUPlace):
- msg = "GPU kernel gradient is not close to numeric gradient"
- else:
- raise ValueError("unknown place " + type(place))
- self.assertTrue(
- self.__is_close(numeric_grad, scope, max_relative_error), msg)
+ self.__is_close(numeric_grad, scope, max_relative_error,
+ "Gradient Check On %s" % str(place))
if __name__ == '__main__':
| [GetNumericGradientTest->[test_softmax_op->[label_softmax_grad,get_numeric_gradient],test_add_op->[get_numeric_gradient]],get_numeric_gradient->[get_output,product],GradientChecker->[check_grad->[__is_close,get_numeric_gradient]]] | This function checks the gradient of a . no op. outputs_missing. | self.AssertIsClose ? Assert close |
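What the suggested `assertIsClose`-style helper might look like, sketched with numpy (the relative-error formula and the zero-guard are assumptions, not Paddle's implementation):

```python
import numpy as np

def assert_is_close(numeric_grad, op_grad, max_relative_error, msg_prefix):
    """Fail with a descriptive message when the gradients diverge."""
    abs_err = np.abs(numeric_grad - op_grad)
    rel_err = abs_err / np.maximum(np.abs(numeric_grad), 1e-8)  # avoid /0
    worst = rel_err.max()
    assert worst <= max_relative_error, (
        "%s: max relative error %g exceeds %g"
        % (msg_prefix, worst, max_relative_error))

assert_is_close(np.array([1.0, 2.0]), np.array([1.001, 2.0]), 0.01,
                "Gradient Check On CPUPlace")
```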
@@ -377,6 +377,13 @@ func showOrgProfile(ctx *context.Context) {
}
org := ctx.Org.Organization
+
+ canSeeOrg := models.HasOrgVisible(org, ctx.User)
+ if !canSeeOrg {
+ ctx.NotFound("HasOrgVisible", nil)
+ return
+ }
+
ctx.Data["Title"] = org.DisplayName()
page := ctx.QueryInt("page")
| [Handle,GetAccessibleRepositories,AccessibleReposEnv,Params,LoadRepositories,RepositoryList,LoadAttributes,IsSliceContainsStr,Redirect,GetUserByEmail,GetUserIssueStats,GetMirrorRepositories,PlainText,HTML,RepoIDs,GetOrganizations,SetParams,GetRepositoryByID,GetUserRepositories,GetAccessRepoIDs,New,ListPublicKeys,Errorf,IssueList,Bytes,IsOrganization,CountRepos,HandleOrgAssignment,CountUserRepositories,Tr,Written,MirrorRepos,IsOwnedBy,MirrorRepositoryList,DisplayName,Repos,IsErrUserNotExist,GetRepositories,GetUserByName,OmitEmail,OptionalBoolOf,Query,HasAccess,GetFeeds,GetMembers,QueryInt,Issues,WriteString,RelAvatarLink,QueryInt64] | ShowSSHKeys outputs all the public keys of a user. Get the list of users in the system. | no need for variable here |
@@ -305,7 +305,8 @@ func newNewCmd() *cobra.Command {
cmd.PersistentFlags().BoolVar(
&generateOnly, "generate-only", false,
"Generate the project only; do not create a stack, save config, or install dependencies")
- cmd.PersistentFlags().StringVar(&dir, "dir", "",
+ cmd.PersistentFlags().StringVar(
+ &dir, "dir", "",
"The location to place the generated project; if not specified, the current directory is used")
return cmd
| [IsValidProjectName,CloudURL,StringVar,Colorize,SaveProjectStack,ValueOrDefaultProjectDescription,Wrap,Interactive,AskOne,CopyTemplateFilesDryRun,IsNotExist,Strings,CombinedOutput,LoadLocalTemplate,New,RunFunc,Diag,StringVarP,Errorf,Chdir,TrimSpace,Wrapf,Join,GetGlobalColorization,BoolVarP,ReadString,ListTemplates,Name,StackName,Sort,ToLower,DetectProjectStack,IsSpace,Highlight,Base,Rel,DownloadTemplate,ListLocalTemplates,MkdirAll,ValueOrSanitizedDefaultProjectName,NewValue,Printf,Command,EqualFold,Fprintf,Println,TrimSuffix,NewReader,Sprintf,CopyTemplateFiles,InstallTemplate,ParseStackReference,DefaultURL,Print,String,Getwd,Getenv,MaximumNArgs,PersistentFlags,EmojiOr,BoolVar] | getDevStackName returns the name of the dev stack that is created by the stackDeploy saveConfig saves the config for the given stack. | Unrelated: Did we end up deciding if this "generate into the current folder" thing was the right default? |
@@ -432,7 +432,7 @@ func (consensus *Consensus) onViewChange(msg *msg_pb.Message) {
priKey := consensus.priKey.PrivateKey[i]
if _, err := consensus.Decider.SubmitVote(
quorum.Commit,
- key,
+ consensus.PubKey.PublicKeyBytes[i],
priKey.SignHash(commitPayload),
common.BytesToHash(consensus.blockHash[:]),
block.NumberU64(),
| [startViewChange->[GetNextLeaderKey,SetViewID,ViewID,SetMode],onViewChange->[switchPhase,ViewID,SetMode,SetViewID,ResetViewChangeState],onNewView->[switchPhase,SetViewID,ViewID,ResetViewChangeState],ResetViewChangeState->[SetMode]] | onViewChange is called when a view change message is received This function is called when a message is prepared This function is called when a message is received from a validator. It will add self m This function checks if the sender key is valid for the message and if so checks if the This function is called when a message is received from a leader and is ready to be sent Check and add viewID message from the validator state to normal if the mask is achieved by the FBFTCommit This function is called when a new view change is received from a new leader. | Should this be `AddNewVote`? How does `SubmitVote` still work? |
@@ -128,6 +128,11 @@ void Config_StoreSettings() {
EEPROM_WRITE_VAR(i, max_e_jerk);
EEPROM_WRITE_VAR(i, add_homing);
+ #if defined(MESH_BED_LEVELING)
+ EEPROM_WRITE_VAR(i, mbl.active);
+ EEPROM_WRITE_VAR(i, mbl.z_values);
+ #endif // MESH_BED_LEVELING
+
#ifdef DELTA
EEPROM_WRITE_VAR(i, endstop_adj); // 3 floats
EEPROM_WRITE_VAR(i, delta_radius); // 1 float
| [No CFG could be retrieved] | The default values are used when the data is not in the store. region Private Methods. | When we change the EEPROM layout we must always write something. If the option is disabled then you should write reasonable default values in the same place where you would have written the real values. You should also bump the EEPROM version number up by 1. |
@@ -167,6 +167,11 @@ namespace Microsoft.WebAssembly.Diagnostics
break;
}
+ case "Target.targetDestroyed":
+ {
+ await SendMonoCommand(sessionId, MonoCommands.DetachDebugger(), token);
+ break;
+ }
}
return false;
| [MonoProxy->[RuntimeGetProperties->[SendMonoCommand],AcceptCommand->[SendMonoCommand,IsRuntimeAlreadyReadyAlready],IsRuntimeAlreadyReadyAlready->[SendMonoCommand],GetScopeProperties->[SendMonoCommand],RuntimeReady->[SendMonoCommand,Task,LoadStore],SetMonoBreakpoint->[SendMonoCommand],Step->[SendMonoCommand],Task->[SendMonoCommand,SetMonoBreakpoint,UpdateContext,IsRuntimeAlreadyReadyAlready],LoadStore->[SendMonoCommand,Task],OnPause->[SendMonoCommand]]] | Override to accept events that are not related to the current context. Check if a given token is in the chain of tokens. | Isn't this an event saying the debug target has gone away? I don't think we can actually interact with it after that? |
@@ -236,7 +236,7 @@ logger.info(
)}`
)
-setNetwork(config.network)
+setNetwork(config.network, { performanceMode: false })
main().catch(err => {
logger.error('Error occurred in listener main() process:', err)
logger.error('Exiting')
| [No CFG could be retrieved] | The main function of the listener. | Good catch! So that was causing the listener to talk to graphql.originprotocol.com in prod ? |
@@ -97,8 +97,17 @@ FXA_CONFIG = {
'redirect_url': 'http://localhost:3000/api/v3/accounts/authenticate/',
'scope': 'profile',
},
+ 'code-manager': {
+ 'client_id': env('CODE_MANAGER_FXA_CLIENT_ID', default='CHANGE_ME'),
+ 'client_secret': env('CODE_MANAGER_FXA_CLIENT_SECRET', default='CHANGE_ME'), # noqa
+ 'content_host': 'https://stable.dev.lcip.org',
+ 'oauth_host': 'https://oauth-stable.dev.lcip.org/v1',
+ 'profile_host': 'https://stable.dev.lcip.org/profile/v1',
+ 'redirect_url': 'http://localhost:3000/fxa-authenticate',
+ 'scope': 'profile',
+ },
}
-ALLOWED_FXA_CONFIGS = ['default', 'amo', 'local']
+ALLOWED_FXA_CONFIGS = ['default', 'amo', 'local', 'code-manager']
# CSP report endpoint which returns a 204 from addons-nginx in local dev.
CSP_REPORT_URI = '/csp-report'
| [env,get,format,format_exc,get_db_config,join,warn,urlparse] | Return a dictionary of all possible configuration values for a specific node. Adds the image source to the list of image sources. | Can you use the dashboard to create some credentials (like the others do)? |
@@ -35,7 +35,10 @@ def fc_layer(input,
"Y": w,
},
outputs={"Out": tmp},
- attrs={'x_num_col_dims': num_flatten_dims})
+ attrs={
+ 'x_num_col_dims': num_flatten_dims,
+ 'y_num_col_dims': len(input_shape) - num_flatten_dims
+ })
mul_results.append(tmp)
# sum
| [_create_op_func_->[func->[_convert_]],_create_op_func_] | Creates a Fourier network layer for a single node. | `x_num_col_dims` and `num_flatten_dims` refer to the same thing. Maybe we should use the same name? |
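The `*_num_col_dims` attributes describe how a higher-rank tensor is collapsed into a 2-D matrix before the `mul` runs. A numpy sketch of that rule (this reshape is my reading of the attribute, so treat it as an assumption rather than the op's exact semantics):

```python
import numpy as np

def flatten_to_2d(x, num_col_dims):
    """Collapse the first num_col_dims axes into matrix rows, the rest into columns."""
    rows = int(np.prod(x.shape[:num_col_dims]))
    return x.reshape(rows, -1)

x = np.zeros((2, 3, 4))
assert flatten_to_2d(x, 1).shape == (2, 12)   # x_num_col_dims = 1
assert flatten_to_2d(x, 2).shape == (6, 4)    # x_num_col_dims = 2
```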
@@ -15,6 +15,7 @@ define([
defineProperties,
EllipseGeometry,
Ellipsoid,
+ Rectangle,
VertexFormat) {
'use strict';
| [No CFG could be retrieved] | Constructs a geometry for a single . Circle Geometry constructor. | Is this needed anymore? |
@@ -36,11 +36,11 @@ class Libszip(AutotoolsPackage):
homepage = "https://support.hdfgroup.org/doc_resource/SZIP/"
url = "https://support.hdfgroup.org/ftp/lib-external/szip/2.1.1/src/szip-2.1.1.tar.gz"
list_url = "https://support.hdfgroup.org/ftp/lib-external/szip"
- list_depth = 2
+ list_depth = 3
provides('szip')
- version('2.1.1', 'dd579cf0f26d44afd10a0ad7291fc282')
+ version('2.1.1', '5addbf2a5b1bf928b92c47286e921f72')
version('2.1', '902f831bcefb69c6b635374424acbead')
def configure_args(self):
| [Libszip->[version,provides]] | Returns a list of arguments to configure the node. | There's really no point in even having a `list_url`. There are no other versions of `szip` available. |
@@ -553,7 +553,15 @@ public final class TransformTranslator {
EvaluationContext context) {
Iterable<? extends WindowedValue<?>> iter =
context.getWindowedValues(context.getInput(transform));
- context.putPView(context.getOutput(transform), iter);
+ PCollectionView<WriteT> output = context.getOutput(transform);
+ Coder<Iterable<WindowedValue<?>>> coderInternal = output.getCoderInternal();
+
+ @SuppressWarnings("unchecked")
+ Iterable<WindowedValue<?>> iterCast = (Iterable<WindowedValue<?>>) iter;
+
+ context.putPView(output,
+ iterCast,
+ coderInternal);
}
};
}
| [TransformTranslator->[groupByKey->[evaluate->[groupByKey]],combinePerKey->[evaluate->[combinePerKey]],combineGlobally->[evaluate->[combineGlobally]],writeHadoopFile->[getShardTemplate,getNumShards,getFilenamePrefix,getFilenameSuffix],storageLevel,groupByKey,parDo,readBounded,combinePerKey,window,create,viewAsIter,multiDo,combineGrouped,combineGlobally,writeHadoop,createPCollView,readHadoop,flattenPColl,viewAsSingleton]] | create a PCollectionView that can be used to create a PCollectionView with a single. | This can be a one-liner. |
@@ -88,6 +88,9 @@ def histogram_fixed_width_bins(values,
# map tensor values within the open interval value_range to {0,.., nbins-1},
# values outside the open interval will be zero or less, or nbins or more.
indices = math_ops.floor(nbins_float * scaled_values, name='indices')
+
+ if array_ops.where(value_range[0] == value_range[1]):
+ indices=array_ops.where(math_ops.is_nan(indices), array_ops.ones_like(indices), indices)
# Clip edge cases (e.g. value = value_range[1]) or "outliers."
indices = math_ops.cast(
| [histogram_fixed_width_bins->[clip_by_value,cast,truediv,name_scope,reshape,convert_to_tensor,floor,shape],histogram_fixed_width->[name_scope,_histogram_fixed_width],tf_export] | Bins the given values for a histogram into which each element of values would be bin Convert nbins to tensor. | Use tf.equal instead of == here to make it work in tf1 too. Also use tf.cond instead of if so it doesn't require autograph |
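The repair being reviewed — mapping NaN bin indices to ones when the range is degenerate — can be checked eagerly with numpy; in graph mode, as the reviewer notes, the comparison and the branch would need `tf.equal` and `tf.cond` instead of `==` and `if`. A plain numpy sketch:

```python
import numpy as np

value_range = (3.0, 3.0)                       # degenerate range
values = np.array([3.0, 3.0])
with np.errstate(invalid="ignore"):            # 0/0 below produces NaN
    scaled = (values - value_range[0]) / (value_range[1] - value_range[0])
indices = np.floor(2 * scaled)                 # nbins = 2; indices are all NaN

if value_range[0] == value_range[1]:           # eager ==; TF needs tf.equal/tf.cond
    indices = np.where(np.isnan(indices), np.ones_like(indices), indices)

assert not np.isnan(indices).any()
assert indices.tolist() == [1.0, 1.0]
```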
@@ -2059,8 +2059,15 @@ var requiredDirective = function() {
if (!ctrl) return;
attr.required = true; // force truthy in case we are on non input element
+ var validity = elm.prop('validity');
+ if (!isObject(validity)) {
+ validity = {
+ valid: true
+ };
+ }
+
var validator = function(value) {
- if (attr.required && ctrl.$isEmpty(value)) {
+ if (attr.required && (validity.valueMissing || ctrl.$isEmpty(value))) {
ctrl.$setValidity('required', false);
return;
} else {
| [No CFG could be retrieved] | ×ðððððððð A directive that converts between a delimited string and an array of strings. | Unless Ian Hixon agrees to alter the spec (and in turn, fixes get added to browsers), what we really need to check for here is !validity.badInput --- because if badInput is set, then we're really suffering from bad input and not a "real" value missing. |
@@ -1414,7 +1414,7 @@ def categorical_hinge(y_true, y_pred):
y_true = math_ops.cast(y_true, y_pred.dtype)
pos = math_ops.reduce_sum(y_true * y_pred, axis=-1)
neg = math_ops.reduce_max((1. - y_true) * y_pred, axis=-1)
- return math_ops.maximum(0., neg - pos + 1.)
+ return math_ops.maximum(neg - pos + 1., 0.)
@keras_export('keras.losses.huber', v1=[])
| [binary_crossentropy->[binary_crossentropy],categorical_crossentropy->[categorical_crossentropy],sparse_categorical_crossentropy->[sparse_categorical_crossentropy],hinge->[_maybe_convert_labels],log_cosh->[_logcosh],get->[deserialize],squared_hinge->[_maybe_convert_labels]] | Categorical hinge loss between y_true and y_pred. For the Huber loss, d is the point where the loss function changes from quadratic to linear. | `maximum(a, b)` should have the same behaviour as `maximum(b, a)`. I'm not sure this is the right fix.
@@ -458,7 +458,7 @@ static int dummy_dma_probe(struct dma *dma)
dma->chan[i].dma = dma;
dma->chan[i].index = i;
dma->chan[i].status = COMP_STATE_INIT;
- dma->chan[i].private = &chanp[i];
+ dma->chan[i].priv_data = &chanp[i];
}
atomic_init(&dma->num_channels_busy, 0);
| [int->[tracev_dummydma,spin_unlock_irq,dummy_dma_compute_avail_data,atomic_init,rzalloc,spin_lock_irq,timer_get,dma_chan_get_data,trace_dummydma_error,timer_get_system,dummy_dma_do_copies,notifier_event,rfree],size_t->[dummy_dma_comp_avail_data_noncyclic,dummy_dma_comp_avail_data_cyclic],dma_chan_data->[spin_lock_irq,spin_unlock_irq,atomic_add,trace_dummydma_error],void->[spin_unlock_irq,atomic_sub,notifier_unregister_all,dummy_dma_channel_put_unlocked,spin_lock_irq,dma_chan_get_data],ssize_t->[dcache_writeback_region,dummy_dma_compute_avail_data,MIN,memcpy_s,dummy_dma_copy_crt_elem,dcache_invalidate_region,assert]] | Probe function for the dummy DMA driver; initializes the dummy DMA instance and its channels. | Can you replace it with `dma_chan_set_data(&dma->chan[i], &chanp[i]);`? In a separate patch. Thank you for showing that I am accessing this directly which isn't the best thing to do.
@@ -0,0 +1,15 @@
+package games.strategy.util;
+
+import java.awt.Toolkit;
+import java.awt.datatransfer.StringSelection;
+
+/**
+ * A utility for interacting with the system clipboard, humble object pattern.
+ */
+public class SystemClipboard {
+
+ public static void setClipboardContents(final String contents) {
+ final StringSelection select = new StringSelection(contents);
+ Toolkit.getDefaultToolkit().getSystemClipboard().setContents(select, select);
+ }
+}
| [No CFG could be retrieved] | No Summary Found. | `g.s.ui` (or possibly a new subpackage, `g.s.ui.util`) might be a better location for this type since it is UI-specific. `g.s.util` should be independent of UI stuff (although it's not today, we probably shouldn't be making it worse). |
@@ -667,8 +667,7 @@ public abstract class AbstractSession implements CoreSession, Serializable {
if (lifecycleStateInfo instanceof String) {
initialLifecycleState = (String) lifecycleStateInfo;
}
- notifyEvent(DocumentEventTypes.ABOUT_TO_CREATE, docModel, options, null, null, false, true); // no lifecycle
- // yet
+ notifyEvent(DocumentEventTypes.ABOUT_TO_CREATE, docModel, options, null, null, false, true); // no lifecycle yet
// document validation
if (getValidationService().isActivated(DocumentValidationService.CTX_CREATEDOC, options)) {
| [AbstractSession->[getLastDocumentVersion->[resolveReference,readModel,checkPermission],removeDocument->[resolveReference,removeDocument,canRemoveDocument],getLastDocumentVersionRef->[resolveReference,checkPermission],setACP->[notifyEvent,setACP,resolveReference,readModel,checkPermission,getACP],createDocumentModelFromTypeName->[notifyEvent,getDocumentType],getVersionLabel->[getVersionLabel],getBinaryFulltext->[getBinaryFulltext,resolveReference,checkPermission],readModel->[readModel],createDocument->[notifyEvent,getContextMapEventInfo,readModel,createDocument,writeModel],notifyVersionChange->[notifyEvent],getDirectAccessibleParent->[getRootDocument,hasPermission,resolveReference,readModel,getDirectAccessibleParent],setRetentionActive->[notifyEvent,setRetentionActive,isRetentionActive,resolveReference,readModel,checkPermission],getDocumentWithVersion->[getDocument,resolveReference,readModel,getVersion,checkPermission],isTrashed->[isTrashed],getDataModelField->[resolveReference,checkPermission],getChild->[resolveReference,readModel,getChild,checkPermission],getDataModelsField->[getDataModelField],getDataModel->[resolveReference,checkPermission],getSuperSpace->[getSuperSpace,getRootDocument],getSourceDocument->[getSourceDocument,resolveReference,readModel,checkPermission],getParentDocuments->[hasPermission,resolveReference,readModel],removeNotifyOneDoc->[notifyEvent,getSession,readModel],updateExistingProxies->[readModel],checkOut->[notifyEvent,resolveReference,readModel,checkPermission,writeModel],hasChild->[hasChild,resolveReference,checkPermission],getBaseVersion->[getBaseVersion,resolveReference,checkPermission],notifyCheckedInVersion->[notifyEvent,getDocument],getFiles->[getChildren,hasPermission,resolveReference,readModel,checkPermission],createDocumentModel->[createDocumentModel,createDocumentModelFromTypeName],publishDocument->[notifyEvent,isCheckedOut,removeExistingProxies,updateExistingProxies,publishDocument,getBaseVersion,notifyCheckedInVersion,createPro
xyInternal,resolveReference,readModel,checkPermission,getVersionSeriesId],copy->[notifyEvent,copy,resolveReference,readModel,checkPermission,writeModel],isAdministrator->[isAdministrator],getChangeToken->[resolveReference,getChangeToken],query->[getSecurityService,hasPermission,readModel,query],queryAndFetch->[queryAndFetch,getSecurityService],removeChildren->[hasPermission,resolveReference,getChildren,checkPermission],getVersionsRefs->[resolveReference,checkPermission],getAllowedStateTransitions->[resolveReference,getAllowedStateTransitions,checkPermission],removeExistingProxies->[removeNotifyOneDoc],getCurrentLifeCycleState->[resolveReference,checkPermission],removeLock->[notifyEvent,removeLock,hasPermission,resolveReference,readModel],getVersion->[readModel,getVersion,getDocument,checkPermission],getPermissionsToCheck->[getPermissionsToCheck],refreshDocument->[resolveReference,isTrashed,checkPermission],adaptFirstMatchingDocumentWithFacet->[readModel],getDocuments->[resolveReference,readModel,checkPermission],getLifeCyclePolicy->[getLifeCyclePolicy,resolveReference,checkPermission],isRetentionActive->[isRetentionActive,resolveReference,checkPermission],applyDefaultPermissions->[isAdministrator,setACP,getRootDocument],saveDocument->[notifyEvent,move,getContextMapEventInfo,resolveReference,readModel,checkPermission,writeModel],getChildrenIterator->[getChildrenIterator],scroll->[getSecurityService,getPrincipalsToCheck,getPoliciesQueryTransformers,scroll],getParentDocument->[hasPermission,resolveReference,readModel],removeDocuments->[resolveReference,removeDocument],getDocument->[resolveReference,readModel,checkPermission],canRemoveDocument->[hasPermission,resolveReference,canRemoveDocument],getDocumentSystemProp->[resolveReference],move->[notifyEvent,move,getContextMapEventInfo,resolveReference,readModel,checkPermission],reinitLifeCycleState->[resolveReference,checkPermission],isNegativeAclAllowed->[isNegativeAclAllowed],getRootDocument->[readModel,getRootDocument],
createProxy->[resolveReference,checkPermission],getFirstParentDocumentWithFacet->[resolveReference],filterGrantedPermissions->[filterGrantedPermissions],getVersionSeriesId->[resolveReference,getVersionSeriesId,checkPermission],restoreToVersion->[notifyEvent,restoreToVersion,resolveReference,readModel,checkPermission,writeModel],importDocument->[notifyEvent,getContextMapEventInfo,importDocument,readModel,fillCreateOptions,writeModel],getParentDocumentRef->[resolveReference],save->[notifyEvent,save],getVersionsForDocument->[getVersions,resolveReference,getVersionModel,checkPermission],getPoliciesQueryTransformers->[getPoliciesQueryTransformers],getACP->[resolveReference,checkPermission],getChildren->[getChildren,hasPermission,resolveReference,readModel,checkPermission],getFolders->[getChildren,hasPermission,resolveReference,readModel,checkPermission],setDocumentSystemProp->[notifyEvent,resolveReference,readModel],getParentDocumentRefs->[resolveReference],fillCreateOptions->[resolveReference,checkPermission],hasChildren->[hasChildren,resolveReference,checkPermission],isCheckedOut->[isCheckedOut,resolveReference,checkPermission],setLock->[notifyEvent,setLock,resolveReference,readModel,checkPermission],getVersions->[getVersions,resolveReference,readModel,checkPermission],getLockInfo->[resolveReference,checkPermission],updateReadACLs->[updateReadACLs],exists->[hasPermission,resolveReference],getOrCreateDocument->[exists,getOrCreateDocument,createDocument,getDocument],getChildrenRefs->[resolveReference,checkPermission],copyProxyAsDocument->[notifyEvent,copy,copyProxyAsDocument,resolveReference,readModel,checkPermission],createProxyInternal->[notifyEvent,readModel,createProxy],saveDocuments->[saveDocument],hasPermission->[hasPermission,checkPermission],followTransition->[notifyEvent,isCheckedOut,checkOut,followTransition,resolveReference,readModel,checkPermission,writeModel],checkIn->[notifyEvent,resolveReference,readModel,checkPermission,writeModel],queryProjection->[query
Projection,computeCountUpTo],getProxies->[checkPermission,hasPermission,resolveReference,readModel,getProxies],getPrincipalsToCheck->[getPrincipalsToCheck],getDocumentType->[getDocumentType],getDataModelsFieldUp->[getParentDocumentRefs,getDataModelsField],orderBefore->[notifyEvent,readModel,resolveReference,orderBefore],replaceACE->[setACP,resolveReference,replaceACE,checkPermission,getACP],getWorkingCopy->[getWorkingCopy,resolveReference,readModel,checkPermission]]] | Creates a new document in the system. if document not yet present create it. | Can you please add the formatting changes in a separate commit? |
@@ -7,7 +7,7 @@ import numpy as np
from numpy.testing import assert_array_equal
from nose.tools import assert_raises, assert_true, assert_equal
from ...utils import requires_sklearn_0_15
-from ..search_light import _SearchLight, _GeneralizationLight
+from ..search_light import SlidingEstimator, GeneralizingEstimator
from .. import Vectorizer
| [test_GeneralizationLight->[make_data],test_SearchLight->[_LogRegTransformer,make_data]] | Generate random data with n_epochs, n_chan, and n_time. | full imports needed
@@ -389,13 +389,14 @@ public class RuntimeUpdatesProcessor implements HotReplacementContext, Closeable
}
private void checkForClassFilesChangesInModule(DevModeContext.ModuleInfo module, List<Path> moduleChangedSourceFiles,
- boolean isInitialRun, ClassScanResult classScanResult) {
- if (module.getClassesPath() == null) {
+ boolean isInitialRun, ClassScanResult classScanResult,
+ Function<DevModeContext.ModuleInfo, DevModeContext.CompilationUnit> cuf, TimestampSet timestampSet) {
+ if (cuf.apply(module).getClassesPath() == null) {
return;
}
try {
- for (String folder : module.getClassesPath().split(File.pathSeparator)) {
+ for (String folder : cuf.apply(module).getClassesPath().split(File.pathSeparator)) {
final Path moduleClassesPath = Paths.get(folder);
if (!Files.exists(moduleClassesPath)) {
continue;
| [RuntimeUpdatesProcessor->[close->[close],syncState->[getDevModeType],isTest->[isTest]]] | Checks for class files changes in module. Add changed class to cache. | Is `cuf` being applied twice to the same `module` on purpose (line 539 and 544)? |
@@ -62,7 +62,12 @@ class CheckAndPrepareModelProcess(KratosMultiphysics.Process):
fluid_computational_model_part.AddConditions(list(list_of_ids))
- #verify the orientation of the skin
+ #verify the orientation of the skin (only implemented for tris and tets)
+ geometry = self.main_model_part.GetElement(1).GetGeometry()
+ is_simplex = geometry.LocalSpaceDimension() + 1 == geometry.PointsNumber()
+ if not is_simplex:
+ return
+
tmoc = KratosMultiphysics.TetrahedralMeshOrientationCheck
throw_errors = False
flags = (tmoc.COMPUTE_NODAL_NORMALS).AsFalse() | (tmoc.COMPUTE_CONDITION_NORMALS).AsFalse()
| [CheckAndPrepareModelProcess->[__init__->[__init__]]] | Executes the KratosMultiphysics mesh orientation check; skipped when the mesh is not made of simplex elements. | I'd throw a warning in here saying that `TetrahedralMeshOrientationCheck` will not be executed.
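The guard added in the patch relies on a simple geometric fact: a geometry is a simplex (triangle or tetrahedron) exactly when it has one more point than its local space dimension. A minimal sketch of that predicate:

```python
def is_simplex(local_space_dimension, points_number):
    # Triangles: 2 local dimensions, 3 points. Tetrahedra: 3 dimensions,
    # 4 points. Anything else (quads, hexes, ...) is not a simplex.
    return local_space_dimension + 1 == points_number
```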
@@ -660,7 +660,7 @@ def test_map_over_map_and_unmapped(executor):
@pytest.mark.parametrize("x,y,out", [(1, 2, 3), ([0, 2], [1, 7], [0, 2, 1, 7])])
-def test_task_map_that_doesnt_actually_map(x, y, out):
+def test_task_map_with_all_inputs_unmapped(x, y, out):
print(out)
@prefect.task
| [test_map_over_parameters->[AddTask,run],test_map_tracks_non_mapped_upstream_tasks->[DivTask,zeros,run],test_map_can_handle_nonkeyed_nonmapped_upstreams_and_mapped_args->[ListTask,IdTask,run],test_task_map_downstreams_handle_single_failures->[append_four,run],test_map_skips_dont_leak_out->[ListTask,run],test_map_can_handle_nonkeyed_mapped_upstreams_and_mapped_args->[ListTask,IdTask,run],test_map_allows_for_retries->[ListTask,IdTask,DivTask,run],test_task_map_can_be_passed_to_upstream_with_and_without_map->[append_four,run],test_task_map_that_doesnt_actually_map->[run],test_map_can_handle_fixed_kwargs->[ListTask,AddTask,run],test_map_reduce->[reduce_sum,run,numbers],test_map_behaves_like_zip_with_differing_length_results->[AddTask,ll,run],test_map_can_handle_nonkeyed_upstreams->[ListTask,run],test_map_composition->[ListTask,AddTask,run],test_map_can_handle_nonkeyed_mapped_upstreams->[ListTask,IdTask,run],test_map_returns_a_task_copy->[ListTask,AddTask],test_map_spawns_new_tasks->[ListTask,AddTask,run],test_multiple_map_arguments->[ListTask,AddTask,run],test_map_skips_return_exception_as_result->[ListTask,run],test_task_map_doesnt_assume_purity_of_functions->[run],test_map_works_with_retries_and_cached_states->[DivTask,run],test_map_handles_non_keyed_upstream_empty->[run],test_reduce_task_honors_trigger_across_all_mapped_states->[take_sum,run],test_calling_map_with_bind_returns_self->[ListTask,AddTask],test_map_over_map_and_unmapped->[run,numbers],test_deep_map_composition->[ListTask,AddTask,run],test_map_failures_dont_leak_out->[ListTask,AddTask,DivTask,run,IdTask],test_map_handles_upstream_empty->[AddTask,run]] | Test task map that doesnt actually map. | Is this an old debug print? |
@@ -63,7 +63,7 @@ import (
svc "github.com/elastic/beats/libbeat/service"
"github.com/elastic/beats/libbeat/template"
"github.com/elastic/beats/libbeat/version"
- "github.com/elastic/go-sysinfo"
+ sysinfo "github.com/elastic/go-sysinfo"
"github.com/elastic/go-sysinfo/types"
ucfg "github.com/elastic/go-ucfg"
)
| [launch->[InitWithSettings,createBeater],TestConfig->[createBeater,Init],Setup->[createBeater,Init],createBeater->[BeatConfig],configure->[BeatConfig],Init->[InitWithSettings]] | List of all known beat components, imported from the github.com/elastic packages. | Seems like my idea / goimports did this automatically. Interesting.
@@ -67,13 +67,15 @@ describe SmsOtpSenderJob do
it 'sanitizes phone numbers embedded in error messages from Twilio' do
raw_message = "The 'To' number +1 (888) 555-5555 is not a valid phone number"
+ error_code = '21211'
+ status_code = 400
sanitized_message = "The 'To' number +# (###) ###-#### is not a valid phone number"
expect_any_instance_of(TwilioService).to receive(:send_sms).
- and_raise(Twilio::REST::RequestError.new(raw_message))
+ and_raise(Twilio::REST::RestError.new(raw_message, error_code, status_code))
expect { perform }.
- to raise_error(Twilio::REST::RequestError, sanitized_message)
+ to raise_error(Twilio::REST::RestError, sanitized_message)
end
end
end
| [direct_otp_valid_for,new,let,describe,subject,first,it,to,and_raise,before,telephony_service,t,require,to_s,include,strftime,match,messages,context,perform_now,eq,raise_error] | sanitizes phone numbers embedded in error messages from Twilio. | Should we make sure the newly re-raised error also has `error_code` and `status_code`?
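The behavior under test — masking digits embedded in provider error messages — can be sketched in a few lines. This is a hypothetical helper illustrating the sanitization, not the application's actual code:

```python
import re

def sanitize_phone_numbers(message):
    # Replace every digit with '#', so phone numbers embedded in error
    # messages from the SMS provider never reach the logs verbatim.
    return re.sub(r'\d', '#', message)

raw = "The 'To' number +1 (888) 555-5555 is not a valid phone number"
sanitized = sanitize_phone_numbers(raw)
```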
@@ -128,7 +128,7 @@ def display_path(path):
if possible."""
path = os.path.normcase(os.path.abspath(path))
if sys.version_info[0] == 2:
- path = path.decode(sys.getfilesystemencoding(), 'replace')
+ path = fs_decode(path)
path = path.encode(sys.getdefaultencoding(), 'replace')
if path.startswith(os.getcwd() + os.path.sep):
path = '.' + path[len(os.getcwd()):]
| [dist_in_site_packages->[normalize_path],has_leading_dir->[split_leading_dir],get_installed_distributions->[editables_only_test->[dist_is_editable],editable_test->[dist_is_editable],user_test,editables_only_test,local_test,editable_test],captured_output->[from_stream],dist_in_usersite->[normalize_path],unzip_file->[ensure_dir,has_leading_dir,split_leading_dir,current_umask],captured_stdout->[captured_output],splitext->[splitext],is_local->[normalize_path],dist_is_local->[is_local],get_terminal_size->[ioctl_GWINSZ],untar_file->[ensure_dir,has_leading_dir,split_leading_dir,current_umask],unpack_file->[untar_file,unzip_file,file_contents,is_svn_page],rmtree->[rmtree],dist_location->[egg_link_path]] | Gives the display value for a given path making it relative to cwd if possible. | You've lost the "replace" error handler here, so this has introduced the possibility of Unicode errors... |
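The review's concern is concrete: with the `'replace'` error handler, undecodable bytes become U+FFFD; the strict default raises instead. A self-contained demonstration:

```python
raw = b'caf\xff'                          # not valid UTF-8
lenient = raw.decode('utf-8', 'replace')  # bad byte -> U+FFFD, never raises
try:
    raw.decode('utf-8')                   # strict default raises instead
    strict_raised = False
except UnicodeDecodeError:
    strict_raised = True
```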
@@ -152,7 +152,7 @@ class MapTaskExecutorRunnerTest(unittest.TestCase):
derived = ((pcoll,) | beam.Flatten()
| beam.Map(lambda x: (x, x))
| beam.GroupByKey()
- | 'Unkey' >> beam.Map(lambda (x, _): x))
+ | 'Unkey' >> beam.Map(lambda x__: x__[0]))
assert_that(
pcoll | beam.FlatMap(cross_product, AsList(derived)),
equal_to([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]))
| [MapTaskExecutorRunnerTest->[test_pardo->[create_pipeline],test_assert_that->[create_pipeline],test_pardo_side_outputs->[create_pipeline],test_read->[create_pipeline],test_pardo_side_inputs->[create_pipeline],test_errors->[create_pipeline],test_pardo_unfusable_side_inputs->[create_pipeline],test_create->[create_pipeline],test_pardo_metrics->[MyOtherDoFn,create_pipeline,MyDoFn],test_pardo_side_and_main_outputs->[create_pipeline],test_flatten->[create_pipeline],test_windowing->[create_pipeline],test_combine_per_key->[create_pipeline],test_group_by_key->[create_pipeline]]] | Test that the pipeline can be run with unfusable side inputs. | x__ looks a bit strange. How about just do beam.Map(lambda y: y[0])) |
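The awkward `x__` name exists because Python 3 removed tuple parameter unpacking (PEP 3113), so `lambda (x, _): x` is now a SyntaxError. The two idiomatic replacements:

```python
# Python 2 accepted: lambda (x, _): x  -- a SyntaxError in Python 3 (PEP 3113).
unkey_index = lambda kv: kv[0]   # index into the pair, as the patch does

def unkey_unpack(kv):
    key, _ = kv                  # explicit unpacking, often more readable
    return key
```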
@@ -272,7 +272,11 @@ func newSaramaConfig(log *logp.Logger, config *kafkaConfig) (*sarama.Config, err
retryMax = 1000
}
k.Producer.Retry.Max = retryMax
- // TODO: k.Producer.Retry.Backoff = ?
+ k.Producer.Retry.BackoffFunc = func(_, _ int) time.Duration {
+ d := b.WaitDuration()
+ log.Infof("backing off for %v (init: %v, max: %v)", d, config.Backoff.Init, config.Backoff.Max)
+ return d
+ }
// configure per broker go channel buffering
k.ChannelBufferSize = config.ChanBufferSize
| [configureSarama->[ToUpper,SASLMechanism,Errorf],Validate->[ToLower,Validate,New,Errorf],BuildModuleConfig,Unpack,LoadTLSConfig,Beta,configureSarama,Version,Validate,IsEnabled,ToLower,GetGoMetrics,NewConfig,Errorf,Get,RequiredAcks,Rename] | Configures a new Sarama instance for the Kafka output; returns the configuration, or an error if it is invalid. | Is BackoffFunc called 'globally' in sarama, or per broker, or per partition? Is the function guaranteed to be called by one go-routine only? Is a single 'backoff' instance good enough, or do we actually need multiple separate ones? This function and reset are called from different go-routines. AFAIK the backoff instance used is not threadsafe. Given that we call this function potentially from multiple workers in sarama, this implementation might not exactly give us exponential backoff. If a subset of workers is active, the Reset will continue to reset our state, while errors in more than one worker can `skip` one or two backoff states.
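The property the review asks for — a backoff whose state survives concurrent callers — can be sketched with a lock around the attempt counter. This is a minimal illustrative design, not sarama's or libbeat's actual implementation:

```python
import random
import threading

class ExponentialBackoff:
    """Thread-safe exponential backoff with jitter (illustrative sketch)."""

    def __init__(self, init=1.0, maximum=60.0):
        self.init = init
        self.maximum = maximum
        self._lock = threading.Lock()
        self._attempts = 0

    def wait_duration(self):
        with self._lock:  # serialize concurrent producer workers
            base = min(self.init * (2 ** self._attempts), self.maximum)
            self._attempts += 1
        # jitter in [base/2, base) so workers don't retry in lockstep
        return base / 2 + random.random() * base / 2

    def reset(self):
        with self._lock:
            self._attempts = 0

backoff = ExponentialBackoff(init=1.0, maximum=8.0)
first = backoff.wait_duration()
second = backoff.wait_duration()
```

Per-worker instances would sidestep the reset race the review describes; a shared instance only needs the lock shown here.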
@@ -119,6 +119,7 @@ def to_pipeline_file(stage: "PipelineStage"):
outs, metrics, plots = _serialize_outs(stage.outs)
res = [
+ (stage.PARAM_DESC, stage.desc),
(stage.PARAM_CMD, stage.cmd),
(stage.PARAM_WDIR, wdir),
(stage.PARAM_DEPS, deps),
| [_serialize_outs->[_serialize_out],to_single_stage_lockfile->[_serialize_params_values,_dumpd],to_pipeline_file->[_serialize_outs,_serialize_params_keys],_serialize_out->[_get_flags],to_lockfile->[to_single_stage_lockfile]] | Serialize a PipelineStage object into a pipeline file. | Adding it before `cmd` since it seems to feel more natural. |
@@ -17,7 +17,7 @@
import {AmpEvents} from '../../src/amp-events';
import {createFixtureIframe, poll} from '../../testing/iframe.js';
-describe('on="..."', () => {
+describe.configure().run('on="..."', () => {
let fixture;
beforeEach(() => {
| [No CFG could be retrieved] | A series of functions that describe the type of element that is not visible on the page. Clicks the navigating button on the window. | Per my understanding, changing `describe(` to `describe.configure().run(` is a no-op, since there's nothing between the `configure()` and the `run(`. Also, since IE tests are opt-in, a plain `describe(` block will not run on IE. Is this different from what you're seeing? |
@@ -195,6 +195,9 @@
kmutex_t zfsdev_state_lock;
zfsdev_state_t *zfsdev_state_list;
+/* The bits are stored in decimal format, not octal */
+int zfsdev_umask = 77;
+
extern void zfs_init(void);
extern void zfs_fini(void);
| [No CFG could be retrieved] | A utility to provide a list of functions that are used to access the ZFS device. zfs_ioc_func_t - the function to call for each entry in. | I felt using a decimal format would be easier for those reading the setting in /sys/module/zfs/parameters/zfsdev_umask, but I am open to switching to octal if people prefer that. |
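The decimal-vs-octal trade-off the author raises can be made concrete: the digits shown in sysfs are reinterpreted as octal digits to recover the real mode bits. A sketch of that conversion (illustrative only; the actual module parameter handling is in C):

```python
stored = 77                  # what /sys/module/zfs/parameters/zfsdev_umask shows
umask = int(str(stored), 8)  # read the decimal-looking digits as octal -> 0o77
```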
@@ -129,7 +129,7 @@ public class EntryFactoryImpl implements EntryFactory {
mvccEntry = wrapMvccEntryForRemove(ctx, key, cacheEntry, true);
}
} else {
- InternalCacheEntry ice = getFromContainer(key);
+ InternalCacheEntry ice = getFromContainer(key, forInvalidation);
if (ice != null || clusterModeWriteSkewCheck) {
mvccEntry = wrapInternalCacheEntryForPut(ctx, key, ice, null, skipRead);
}
| [EntryFactoryImpl->[wrapEntryForDelta->[wrapEntryForDelta],wrapEntry->[getFromContainer,wrapInternalCacheEntryForPut,wrapMvccEntryForPut,getFromContext]]] | Wrap the entry for remove. | Just wondering, do we really need to read the value from the container when doing an invalidation? Couldn't we just wrap a null instead? |
@@ -221,10 +221,11 @@ static int ipc4_set_pipeline_state(union ipc4_message_header *ipc4)
ret = pipeline_trigger(host->cd->pipeline, host->cd, COMP_TRIGGER_STOP);
if (ret < 0) {
tr_err(&ipc_tr, "ipc: comp %d trigger 0x%x failed %d", id, cmd, ret);
- ret = IPC4_PIPELINE_STATE_NOT_SET;
+ return IPC4_PIPELINE_STATE_NOT_SET;
}
- cmd = COMP_TRIGGER_RESET;
+ /* resource is not released by triggering reset which is used by current FW */
+ return pipeline_reset(host->cd->pipeline, host->cd);
break;
case SOF_IPC4_PIPELINE_STATE_PAUSED:
if (pcm_dev->pipeline->status == COMP_STATE_INIT)
| [ipc_compact_read_msg->[ipc_to_hdr,mailbox_validate,ipc_platform_compact_read_msg],int->[ipc4_unbind_module,pipeline_prepare,ipc_get,cpu_is_me,ipc_process_on_core,ipc_comp_connect,ipc_get_comp_by_ppl_id,ipc4_init_module,ipc4_bind_module,ipc_pipeline_new,ipc4_set_pipeline_state,ipc4_set_large_config_module,IPC4_COMP_ID,tr_err,pipeline_trigger,comp_params,pipeline_reset,comp_func,ipc4_delete_pipeline,ipc4_pipeline_params,ipc_get_comp_by_id,pipeline_for_each_comp,ipc_pipeline_complete,memset,comp_new,ipc4_pcm_params,cmd,ipc4_create_pipeline,ipc4_get_comp_dev,ipc_to_hdr,ipc_pipeline_free],ipc_cmd->[ipc_from_hdr,tr_err,ipc4_process_glb_message,ipc4_process_module_message,ipc_msg_send],mailbox_validate->[ipc_get],ipc_process_msg->[ipc_to_hdr,mailbox_dspbox_write]] | Sets the pipeline state by triggering the component. | This `break` here is dead code.
@@ -944,12 +944,11 @@ func convertBlockHintsToULIDs(hints []hintspb.Block) ([]ulid.ULID, error) {
return res, nil
}
-func countSeriesBytes(series []*storepb.Series) (count uint64) {
+// countChunkBytes returns the size of the chunks making up the provided series in bytes
+func countChunkBytes(series ...*storepb.Series) (count int) {
for _, s := range series {
for _, c := range s.Chunks {
- if c.Raw != nil {
- count += uint64(len(c.Raw.Data))
- }
+ count += c.Size()
}
}
| [fetchLabelNamesFromStore->[LabelNames],fetchLabelValuesFromStore->[LabelValues]] | countSeriesBytes returns the number of bytes in series. | It's not the exact same as `len(c.Raw.Data)`. I would stick to `len(c.Raw.Data)`, which is the actual chunk size and not the whole struct size. `len(c.Raw.Data)` is also faster to run.
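The review's point is that only the chunk payload should be counted, not the size of the enclosing struct. A Python analogue with a hypothetical dict shape standing in for `storepb.Series`:

```python
def count_chunk_bytes(*series):
    # Sum the payload length of each chunk -- the equivalent of
    # len(c.Raw.Data) in the Go code, skipping struct overhead entirely.
    return sum(len(c['raw_data']) for s in series for c in s['chunks'])
```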
@@ -1693,6 +1693,9 @@ class TypeStrVisitor(SyntheticTypeVisitor[str]):
suffix = ', fallback={}'.format(t.fallback.accept(self))
return 'TypedDict({}{}{})'.format(prefix, s, suffix)
+ def visit_literal_type(self, t: LiteralType) -> str:
+ return 'Literal[{}]'.format(t.value)
+
def visit_star_type(self, t: StarType) -> str:
s = t.type.accept(self)
return '*{}'.format(s)
| [TypeStrVisitor->[visit_callable_type->[accept],visit_typeddict_type->[accept,item_str,items],visit_star_type->[accept],visit_instance->[name],visit_tuple_type->[accept],visit_type_type->[accept],visit_overloaded->[accept,items],visit_partial_type->[name],visit_forwardref_type->[accept],visit_callable_argument->[accept],visit_type_var->[accept],list_str->[accept]],TupleType->[serialize->[serialize],copy_modified->[TupleType],slice->[TupleType],deserialize->[deserialize_type,TupleType,deserialize]],TypeVarId->[__repr__->[__repr__],new->[TypeVarId]],FunctionLike->[is_concrete_type_obj->[is_type_obj]],TypeVarDef->[serialize->[serialize,is_meta_var],deserialize->[deserialize_type,TypeVarDef],new_unification_variable->[new,TypeVarDef],__init__->[TypeVarId]],true_only->[true_only,UninhabitedType,copy_type,make_simplified_union],callable_type->[CallableType,name,AnyType],UnionType->[make_simplified_union->[make_union],has_readable_member->[has_readable_member],serialize->[serialize],deserialize->[deserialize_type,UnionType],make_union->[UninhabitedType,UnionType]],set_typ_args->[copy_modified,Instance,UnionType],Instance->[has_readable_member->[has_readable_member],serialize->[serialize],copy_modified->[Instance],deserialize->[deserialize_type,Instance]],true_or_false->[make_simplified_union,can_be_false_default,copy_type,can_be_true_default,true_or_false],strip_type->[strip_type,copy_modified,items,Overloaded],remove_optional->[make_union],UnboundType->[serialize->[serialize],deserialize->[UnboundType,deserialize_type]],Type->[__repr__->[accept]],UninhabitedType->[deserialize->[UninhabitedType]],AnyType->[serialize->[serialize],copy_modified->[AnyType],deserialize->[AnyType,deserialize]],TypeVarType->[serialize->[serialize,is_meta_var],deserialize->[deserialize_type,TypeVarType,TypeVarDef]],union_items->[union_items],Overloaded->[__hash__->[items],with_name->[with_name,Overloaded],__eq__->[items],serialize->[serialize,items],name->[get_name],deserialize->[deserialize,Ov
erloaded]],CallableType->[type_object->[is_type_obj],__hash__->[is_type_obj],with_name->[copy_modified],try_synthesizing_arg_from_kwarg->[kw_arg],__eq__->[is_type_obj],serialize->[serialize],deserialize->[CallableType,deserialize,deserialize_type],copy_modified->[CallableType],try_synthesizing_arg_from_vararg->[var_arg]],NoneTyp->[deserialize->[NoneTyp]],DeletedType->[deserialize->[DeletedType]],TypeType->[serialize->[serialize],deserialize->[deserialize_type,make_normalized],make_normalized->[TypeType,make_normalized,make_union]],get_type_vars->[get_typ_args,get_type_vars],false_only->[make_simplified_union,UninhabitedType,copy_type,false_only],TypedDictType->[zipall->[items],__hash__->[items],create_anonymous_fallback->[copy_modified,as_anonymous],serialize->[serialize,items],as_anonymous->[is_anonymous,as_anonymous],deserialize->[deserialize_type,TypedDictType,deserialize],zip->[items],copy_modified->[TypedDictType]],flatten_nested_unions->[flatten_nested_unions],items] | Return string representation of TypedDict t. | Maybe use `repr()` here so that string literals that contain unusual characters won't cause trouble. It would also match the source syntax. |
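The review's `repr()` suggestion is easy to demonstrate: `repr` keeps quotes and escapes for string literals, matching source syntax, while numbers render unchanged. A sketch of the suggested formatting (standalone helper, not mypy's visitor):

```python
def format_literal(value):
    # repr() quotes strings and escapes unusual characters; plain format()
    # would emit Literal[foo] for the string 'foo', which is ambiguous.
    return 'Literal[{}]'.format(repr(value))
```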
@@ -193,4 +193,18 @@ public final class IntegrationUtils {
return runtimeException;
}
+ /**
+ * Obtain a component name from the provided {@link NamedComponent}.
+ * @param component the {@link NamedComponent} source for component name.
+ * @return the component name
+ * @since 5.3
+ */
+ public static String obtainComponentName(NamedComponent component) {
+ String name = component.getComponentName();
+ if (name.startsWith('_' + IntegrationConfigUtils.BASE_PACKAGE)) {
+ name = name.substring(('_' + IntegrationConfigUtils.BASE_PACKAGE).length() + 1);
+ }
+ return name;
+ }
+
}
| [IntegrationUtils->[stringToBytes->[getBytes,IllegalArgumentException],getConversionService->[getBeanOfType],wrapInDeliveryExceptionIfNecessary->[getFailedMessage,get,MessageDeliveryException],bytesToString->[IllegalArgumentException,String],getMessageBuilderFactory->[getMessage,DefaultMessageBuilderFactory,debug,IllegalStateException,getBean,isDebugEnabled],wrapInHandlingExceptionIfNecessary->[getFailedMessage,get,MessageHandlingException,getCause],getBeanOfType->[containsBean,getBean,notNull],getLog,valueOf,getenv]] | Wraps a message handling exception in a runtime exception if it is not already wrapped. | This would be more efficient as `if (name.charAt(0) == '_' && name.startsWith(...`. Not a big deal with JMX but we're building the graph more often than during initialization. |
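The review suggests a cheap first-character check before the full `startsWith` comparison. A Python sketch of the patched method with that guard; the `BASE_PACKAGE` value is an assumption for illustration:

```python
BASE_PACKAGE = 'org.springframework.integration'  # assumed illustrative value

def obtain_component_name(name):
    prefix = '_' + BASE_PACKAGE
    # name[0] == '_' is a cheap guard: most names fail it immediately,
    # so the full prefix comparison rarely runs (the reviewer's point).
    if name and name[0] == '_' and name.startswith(prefix):
        return name[len(prefix) + 1:]   # also skip the separating '.'
    return name
```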
@@ -296,12 +296,12 @@ public final class DataConversion {
}
public static void writeTo(ObjectOutput output, DataConversion dataConversion) throws IOException {
- if (isDefault(dataConversion)) {
- output.writeByte(1);
- } else {
- byte flags = 0;
- if (dataConversion.isKey) flags = (byte) (flags | 2);
- output.writeByte(flags);
+ byte flags = 0;
+ boolean isDefault = isDefault(dataConversion);
+ if (isDefault) flags = 1;
+ if (dataConversion.isKey) flags = (byte) (flags | 2);
+ output.writeByte(flags);
+ if (!isDefault) {
output.writeShort(dataConversion.encoder.id());
output.writeByte(dataConversion.wrapper.id());
output.writeObject(dataConversion.requestMediaType);
| [DataConversion->[Externalizer->[writeObject->[writeTo],readObject->[readFrom],isDefault->[equals]],toStorage->[toStorage],isConversionSupported->[isConversionSupported],extractIndexable->[fromStorage],isStorageFormatFilterable->[isStorageFormatFilterable],withWrapping->[DataConversion],equals->[equals],writeTo->[isDefault],newValueDataConversion->[DataConversion],withRequestMediaType->[DataConversion],isDefault->[equals],newKeyDataConversion->[DataConversion],readFrom->[DataConversion],fromStorage->[fromStorage],withEncoding->[DataConversion],injectDependencies->[getStorageMediaType],DataConversion]] | Writes the data conversion information to the given output object. | We could use the newly extracted isDefault flag instead of invoking isDefault(dataConversion) again |
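The flag layout in the patch packs two independent facts into one byte: bit 0 marks default conversions, bit 1 marks key-side conversions. Unlike the old if/else, the bitmask lets both be set at once. A sketch of the encoding:

```python
def encode_flags(is_default, is_key):
    flags = 0
    if is_default:
        flags |= 1   # bit 0: default conversion
    if is_key:
        flags |= 2   # bit 1: key-side conversion
    return flags
```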
@@ -154,7 +154,7 @@ class PropertiesXmlParser
'default-type',
'minOccurs',
'maxOccurs',
- 'colspan',
+ 'colSpan',
'cssClass',
'disabledCondition',
'visibleCondition',
| [PropertiesXmlParser->[loadTags->[validateTag,loadTag],loadParam->[loadParams,loadMeta],loadSection->[loadProperties],createSection->[createProperty],loadType->[loadProperties]]] | Load block data from a node. | Shouldn't that be left as `colspan`? Isn't it reading the value from the XML, where we've decided to keep it this way?
@@ -129,9 +129,16 @@ func (p *fileProspector) Run(ctx input.Context, s *statestore.Store, hg loginp.H
}
}
+ // TODO close_removed
+
case loginp.OpRename:
log.Debugf("File %s has been renamed to %s", fe.OldPath, fe.NewPath)
// TODO update state information in the store
+ if p.identifier.Name() == "path" {
+ s.UpdateID(fe.OldPath, fe.NewPath)
+ }
+
+ // TODO close_renamed
default:
log.Error("Unkown return value %v", fe.Op)
| [updateIdentifiersBetweenRuns->[Stat,Decode,Name,Remove,Errorf,Each,GetSource,HasPrefix,Set],Run->[Now,With,Error,Go,Errorf,updateIdentifiersBetweenRuns,Sub,Wait,Debugf,Debug,Name,cleanRemovedBetweenRuns,Remove,Err,ModTime,WrapAll,Event,GetSource,Run],cleanRemovedBetweenRuns->[Stat,Decode,Remove,Errorf,Each,HasPrefix]] | Run runs the prospector; removes the state from the statestore. | Do not change the key of a potential active source. The active harvester might not be aware that the 'key' changed. Another input might hold the lock for the new key. In case `identifier.Name() == path`, we are required to stop the old harvester and create a new one, with its own state. New file, new identity. The new file might not even be in the glob-list. In case the `path` changed, we should update the `Source` of the state as well, always. This is not really safe to do from here, because updates in here would race with events getting ACKed. Separating the `state` into `offset` and `meta-data` allows the prospector to do updates as we wish.
@@ -127,7 +127,12 @@ def test_path_to_url_unix():
@pytest.mark.skipif("sys.platform == 'win32'")
def test_url_to_path_unix():
+ assert url_to_path('file:tmp') == 'tmp'
assert url_to_path('file:///tmp/file') == '/tmp/file'
+ assert url_to_path('file:/path/to/file') == '/path/to/file'
+ assert url_to_path('file://localhost/tmp/file') == '/tmp/file'
+ with pytest.raises(ValueError):
+ url_to_path('file://somehost/tmp/file')
@pytest.mark.skipif("sys.platform != 'win32'")
| [test_unpack_http_url_bad_downloaded_checksum->[MockResponse,read],MockResponse->[__init__->[FakeStream]],Test_unpack_file_url->[test_unpack_file_url_download_already_exists->[prep],test_unpack_file_url_download_bad_hash->[prep],test_unpack_file_url_bad_hash->[prep],test_unpack_file_url_no_download->[prep],test_unpack_file_url_thats_a_dir->[prep],test_unpack_file_url_and_download->[prep]],FakeStream->[stream->[read],read->[read]]] | Test for path_to_url_unix and path_to_path_win. | I think it would help readability if these were ordered `file:`, `file:/`, `file://`, `file:///`, etc. I also think this and the other test is a good candidate for `@pytest.mark.parametrize()`. |
@@ -272,7 +272,7 @@ class GradeEntryFormsController < ApplicationController
errors.push(I18n.t('grade_entry_forms.grades.must_select_a_student'))
else
params[:students].each do |student_id|
- grade_entry_students.push(grade_entry_form.grade_entry_students.find_or_create_by_user_id(student_id))
+ grade_entry_students.push(grade_entry_form.grade_entry_students.find_or_create_by(user_id: student_id))
end
end
end
| [GradeEntryFormsController->[new->[new],create->[new]]] | Updates the list of all grade entry students based on the user s filter. This action shows the user that the user has not seen a specific item in the list of. | Line is too long. [113/80] |
@@ -422,13 +422,13 @@ func (cfg *PeriodicTableConfig) periodicTables(from, through model.Time, pCfg Pr
}
// ChunkTableFor calculates the chunk table shard for a given point in time.
-func (cfg SchemaConfig) ChunkTableFor(t model.Time) string {
+func (cfg SchemaConfig) ChunkTableFor(t model.Time) (string, error) {
for i := range cfg.Configs {
if t > cfg.Configs[i].From && (i+1 == len(cfg.Configs) || t < cfg.Configs[i+1].From) {
- return cfg.Configs[i].ChunkTables.TableFor(t)
+ return cfg.Configs[i].ChunkTables.TableFor(t), nil
}
}
- return ""
+ return "", fmt.Errorf("no chunk table found for time %v", t)
}
// TableFor calculates the table shard for a given point in time.
| [hourlyBuckets->[tableForBucket],Load->[translate],dailyBuckets->[tableForBucket],RegisterFlags->[RegisterFlags]] | ChunkTableFor returns the table name for the chunk at time t. | @bboreham This return statement is the source of the empty string table name. |
@@ -82,6 +82,16 @@ public interface LogTailer<M extends Externalizable> extends AutoCloseable {
*/
void seek(LogOffset offset);
+ /**
+ * Look up the offset for the given partition by timestamp.
+ * The position is the earliest offset whose timestamp is greater than or equal to the given timestamp.<p/>
+ * The timestamp used depends on the implementation, for Kafka this is the LogAppendTime.
+ * Returns null if no record offset is found with an appropriate timestamp.
+ *
+ * @since 10.1
+ */
+ LogOffset offsetForTimestamp(LogPartition partition, long timestamp);
+
/**
* Reset all committed positions for this group, next read will be done from beginning.
*
| [No CFG could be retrieved] | Seeks to the given offset. | Returns null if no record offset is found with appropriate timestamp |
@@ -785,7 +785,10 @@ def _compute_covariance_auto(data, method, info, method_params, cv,
scalings, n_jobs, stop_early, picks_list,
verbose):
"""docstring for _compute_covariance_auto."""
- from sklearn.grid_search import GridSearchCV
+ try:
+ from sklearn.model_selection import GridSearchCV
+ except: # XXX support sklearn < 0.18
+ from sklearn.grid_search import GridSearchCV
from sklearn.covariance import (LedoitWolf, ShrunkCovariance,
EmpiricalCovariance)
| [make_ad_hoc_cov->[Covariance],compute_raw_covariance->[Covariance,_check_n_samples],_get_covariance_classes->[_ShrunkCovariance->[fit->[fit]],_RegCovariance->[fit->[fit,Covariance]]],_undo_scaling_array->[_apply_scaling_array],write_cov->[save],Covariance->[as_diag->[copy],__iadd__->[_check_covs_algebra],__add__->[_check_covs_algebra]],_estimate_rank_meeg_signals->[_apply_scaling_array,_undo_scaling_array],_estimate_rank_meeg_cov->[_undo_scaling_cov,_apply_scaling_cov],_gaussian_loglik_scorer->[_logdet],compute_covariance->[_check_n_samples,_get_tslice,Covariance,_unpack_epochs],read_cov->[Covariance],_compute_covariance_auto->[copy],_get_whitener_data->[compute_whitener],compute_whitener->[prepare_noise_cov],_auto_low_rank_model->[_cross_val],_regularized_covariance->[fit],_undo_scaling_cov->[_apply_scaling_cov],prepare_noise_cov->[_get_ch_whitener]] | Compute the covariance of a model using a method. Fits the model and returns the covariance of the best node in the model. return a dictionary of all the missing objects for all estimators and methods. | boo @agramfort should be `except Exception`. will fix in `master` |
@@ -162,7 +162,7 @@ describe.configure().skipIfPropertiesObfuscated().run('amp-ad 3P', () => {
}).then(() => {
return poll('wait for attemptChangeSize', () => {
return iframe.contentWindow.ping.resizeSuccess != undefined;
- });
+ }, null, 5000);
}).then(() => {
lastIO = null;
iframe.contentWindow.context.observeIntersection(changes => {
| [No CFG could be retrieved] | test amp - ad will respond to render - start. | Hmmm I don't understand. If we don't want to fail the test on this assertion, it's the same as muting it. What is the difference? |
@@ -46,5 +46,8 @@ class Extends
# List of implemented core API versions
API_VERSIONS = ['20170715']
+ # Array used for injecting names of additional authentication methods for API
+ API_PLUGABLE_AUTH_METHODS = []
+
OMNIAUTH_PROVIDERS = []
end
| [No CFG could be retrieved] | List of all the API versions implemented by the OMNIAuth provider. | Freeze mutable objects assigned to constants. |
@@ -95,7 +95,7 @@ public class HoodieClientExample {
HoodieWriteConfig cfg = HoodieWriteConfig.newBuilder().withPath(tablePath)
.withSchema(HoodieTestDataGenerator.TRIP_EXAMPLE_SCHEMA).withParallelism(2, 2).forTable(tableName)
.withIndexConfig(HoodieIndexConfig.newBuilder().withIndexType(IndexType.BLOOM).build())
- .withCompactionConfig(HoodieCompactionConfig.newBuilder().archiveCommitsWith(2, 3).build()).build();
+ .withCompactionConfig(HoodieCompactionConfig.newBuilder().archiveCommitsWith(20, 30).build()).build();
HoodieWriteClient client = new HoodieWriteClient(jsc, cfg);
List<HoodieRecord> recordsSoFar = new ArrayList<>();
| [HoodieClientExample->[main->[HoodieClientExample]]] | Run the Hoodie client example. 2 - write records delete 1 - delete records. | if we set minToKeep=2, the app will exit with `java.lang.IllegalArgumentException: Increase hoodie.keep.min.commits=2 to be greater than hoodie.cleaner.commits.retained=10. Otherwise, there is risk of incremental pull missing data from few instants`. just set to DEFAULT_MIN_COMMITS_TO_KEEP and DEFAULT_MAX_COMMITS_TO_KEEP |
@@ -19,6 +19,10 @@ using Content.Server.GameObjects.Components.Interactable;
using Content.Server.GameObjects.EntitySystems;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Robust.Shared.Interfaces.Map;
+using System.Diagnostics.Tracing;
+using Content.Server.GameObjects;
+using Content.Shared.GameObjects;
+using Content.Shared.GameObjects.Components.Inventory;
namespace Content.Server.Chat
{
| [ChatManager->[EntityMe->[Channel,ServerSendToMany,Message,Uid,MessageWrap,Name,ConnectedClient,Select,SenderEntity,ToList,Emotes,_netManager,GridPosition,CanEmote],SendAdminChat->[Channel,ServerSendToMany,SessionId,Message,MessageWrap,ConnectedClient,Select,GetString,ToList,AdminChat,_netManager,CanCommand,SendOOC],SendDeadChat->[Channel,ServerSendToMany,Message,MessageWrap,Name,ConnectedClient,Select,GetString,SenderEntity,ToList,GetValueOrDefault,_netManager,Dead],SendHookOOC->[Channel,Message,MessageWrap,OOC,_netManager,ServerSendToAll],EntitySay->[Channel,ServerSendToMany,_entitySystemManager,Local,Message,Uid,MessageWrap,Name,ConnectedClient,Select,SenderEntity,ToList,PingListeners,_netManager,GridPosition,CanSpeak],Initialize->[_netManager,NAME],DispatchServerMessage->[Channel,Message,MessageWrap,ServerSendMessage,ConnectedClient,Server,_netManager],DispatchServerAnnouncement->[Channel,Message,MessageWrap,Server,_netManager,ServerSendToAll],SendOOC->[Channel,SessionId,Message,MessageWrap,SendOOCMessage,OOC,_netManager,ToString,ServerSendToAll]]] | Creates a ChatManager object that can be used to manage chat messages. Dispatch server message. | I think this is unneeded. |