Phoenix of Spain (foaled 17 February 2016) is an Irish-bred, British-trained Thoroughbred racehorse. As a two-year-old in 2018 he showed top-class form to win two races, including the Acomb Stakes, as well as finishing second in both the Champagne Stakes and the Vertem Futurity Trophy. He recorded his greatest success on his first run of 2019, when he easily defeated a strong field to take the Irish 2000 Guineas. He failed to reproduce his best form in four subsequent starts and was retired from racing at the end of the year.

Background

Phoenix of Spain is a grey colt bred in Ireland by Cherry Faeste. As a foal in December 2016 he was offered for sale at Tattersalls and was bought for 78,000 guineas by Good Will Bloodstock. He returned to the Tattersalls sale ring in October 2017 and was sold to Howson & Houldsworth Bloodstock for 220,000 guineas. The colt entered the ownership of Tony Wechsler and Ann Plummer and was sent into training with Charles Hills at Lambourn in Berkshire. He was ridden in seven of his ten races by Jamie Spencer. He is from the fifth crop of foals sired by the Prix du Jockey Club winner Lope de Vega, whose other foals have included Newspaperofrecord, Belardo, Vega Magic (Memsie Stakes), The Right Man (Al Quoz Sprint) and Santa Ana Lane (Stradbroke Handicap). Phoenix of Spain's dam Lucky Clio showed no racing ability, failing to win in five starts, but did better as a broodmare, producing six other winners. She was distantly descended from La Faisanderie, a full sister to Slieve Gallion.

Racing career

2018: two-year-old season

On 6 July 2018 Phoenix of Spain began his racing career in a novice race (for horses with no more than two previous wins) over seven furlongs at Sandown Park, for which he started a 16/1 outsider and came home fourth behind the John Gosden-trained King of Comedy.
Later that month, in a similar event on the Tapeta surface at Wolverhampton Racecourse, the colt was ridden by Callum Shepherd and started the 6/5 favourite against six opponents. After tracking the leaders he went to the front a furlong out and recorded his first success as he won "comfortably" by two and a half lengths. He was then stepped up in class for the Group 3 Acomb Stakes at York Racecourse on 22 August and started at odds of 9/2 in an eight-runner field. Phoenix of Spain was restrained towards the rear by Spencer before taking the lead a furlong out and winning by one and a half lengths from the favourite Watan. After the race Charles Hills said: "He was impressive at Wolverhampton and we felt York would suit him better. I thought Jamie gave him a great ride. He was a bit windy beforehand and he settled him well... You'd like to think a mile would be within his compass this season". James Doyle took the ride when Phoenix of Spain contested the Group 2 Champagne Stakes on 15 September at Doncaster Racecourse. He raced in third place and kept on well in the closing stages to finish second of the six runners, beaten one and a quarter lengths by the odds-on favourite Too Darn Hot. The colt returned to the same track on 27 October when he was moved up in class and distance for the Vertem Futurity Trophy over one mile and went off the 11/2 third choice in the betting. Phoenix of Spain came from well off the pace to dispute the lead inside the final furlong, but after being hampered in the closing stages he finished second in a blanket finish, beaten a head by Magna Grecia, with Western Australia, Circus Maximus and Great Scot close behind. In the official ratings of European juveniles for 2018 Phoenix of Spain was given a mark of 112, fourteen pounds inferior to the top-rated Too Darn Hot.
2019: three-year-old season

For his three-year-old debut Phoenix of Spain was sent to Ireland to contest the Irish 2000 Guineas at the Curragh on 25 May and went off at odds of 16/1. Too Darn Hot and Magna Grecia started joint favourites, while the other eleven runners included Skardu (Craven Stakes), Mohawk (Royal Lodge Stakes), Shelir (Tetrarch Stakes), Emaraaty Ana (Gimcrack Stakes) and Van Beethoven (Railway Stakes). Phoenix of Spain led from the start, opened up a clear advantage approaching the final furlong and came home three lengths clear of Too Darn Hot in second place. Charles Hills said: "The plan wasn't really to make the running, but Jamie gave him an absolute peach and he's some horse. He sustained that gallop all the way through and he just kept lengthening. He's a big horse and whatever he did last year was a bonus. He's got a hell of a future ahead of him". In June at Royal Ascot Phoenix of Spain started 5/2 second favourite for the St James's Palace Stakes, but after racing in third place for most of the way he made no impression in the closing stages and finished sixth of the nine runners behind Circus Maximus. At Goodwood Racecourse on 31 July the colt was matched against older horses for the first time in the Sussex Stakes. He led for most of the way before fading in the last quarter mile and came home sixth as Too Darn Hot won from Circus Maximus. In September he was sent to France and ran fifth to Circus Maximus in the Prix du Moulin at Longchamp Racecourse. On his final racecourse appearance Phoenix of Spain contested the Queen Elizabeth II Stakes at Ascot on 19 October. Ridden by Doyle, he raced in second place before dropping out of contention in the last quarter mile and finished tenth behind King of Change, beaten almost twelve lengths by the winner. Four days after his final race it was announced that Phoenix of Spain had been retired from racing and would begin his career as a breeding stallion at the Irish National Stud in 2020.
Pedigree

Through his sire, Phoenix of Spain was inbred 4 × 4 to Machiavellian, meaning that this stallion appears twice in the fourth generation of his pedigree.

References

External links

Career 1-2-3 Colour Chart – Phoenix of Spain
Westley Barber (born 19 January 1982) is a British racing driver who was the 2002 British Formula Ford and 1998 French Formula Renault Campus champion.

Career

Barber began racing in the Formula Renault Campus series in France. He won the title with six victories in the ten races across the year. He graduated to French Formula 3 for 1999, driving for La Filière Elf and finishing 4th in the Class B standings. He was a nominated finalist for the Autosport BRDC Award that year, but lost out to Gary Paffett. He returned to Britain in 2000, joining Alan Docking Racing for the British Formula 3 season. He ultimately finished 14th in the championship, with a best result of 6th in the final race at Silverstone. In 2001, he joined British Formula Ford with Haywood Racing and progressed to the Duckhams team for 2002, winning the championship. Of the eighteen races, Barber won eight, including the first seven races of the season. In 2003, Barber moved to the United States to compete in Formula Ford 2000 with Cape Motorsports. He finished 2nd in the championship behind American racer Jonathan Bomarito. For 2004, Barber returned to the UK and joined Comtec Racing in Formula Renault UK. He finished the season in 2nd place, behind future World Endurance Championship race winner Mike Conway. He remained with Comtec for 2005, competing in both the French Formula Renault 2.0 series and the Formula Renault Eurocup. In France he won one race, at Nogaro Circuit. In 2006, he became a full member of the British Racing Drivers' Club. Following a number of years away from racing, Barber returned to British Formula Ford in 2008, competing in the 11-race season and securing three podiums, including a pair of second places at the final round of the season at Brands Hatch.
Racing record

Career summary

References

External links

Westley Barber at Motorsport Magazine
```java
/*
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
package org.apache.beam.io.debezium;

import com.google.auto.service.AutoService;
import java.util.List;
import java.util.Map;
import org.apache.beam.sdk.expansion.ExternalTransformRegistrar;
import org.apache.beam.sdk.transforms.ExternalTransformBuilder;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.collect.ImmutableMap;
import org.checkerframework.checker.nullness.qual.Nullable;

/** Exposes {@link DebeziumIO.Read} as an external transform for cross-language usage. */
@AutoService(ExternalTransformRegistrar.class)
@SuppressWarnings({
  "nullness" // TODO(path_to_url)
})
public class DebeziumTransformRegistrar implements ExternalTransformRegistrar {
  public static final String READ_JSON_URN = "beam:transform:org.apache.beam:debezium_read:v1";

  @Override
  public Map<String, Class<? extends ExternalTransformBuilder<?, ?, ?>>> knownBuilders() {
    return ImmutableMap.of(
        READ_JSON_URN,
        (Class<? extends ExternalTransformBuilder<?, ?, ?>>) (Class<?>) ReadBuilder.class);
  }

  /** Connection settings shared by all connectors. */
  private abstract static class CrossLanguageConfiguration {
    String username;
    String password;
    String host;
    String port;
    Connectors connectorClass;

    public void setUsername(String username) {
      this.username = username;
    }

    public void setPassword(String password) {
      this.password = password;
    }

    public void setHost(String host) {
      this.host = host;
    }

    public void setPort(String port) {
      this.port = port;
    }

    public void setConnectorClass(String connectorClass) {
      this.connectorClass = Connectors.fromName(connectorClass);
    }
  }

  public static class ReadBuilder
      implements ExternalTransformBuilder<ReadBuilder.Configuration, PBegin, PCollection<String>> {

    public static class Configuration extends CrossLanguageConfiguration {
      private @Nullable List<String> connectionProperties;
      private @Nullable Long maxNumberOfRecords;

      public void setConnectionProperties(@Nullable List<String> connectionProperties) {
        this.connectionProperties = connectionProperties;
      }

      public void setMaxNumberOfRecords(@Nullable Long maxNumberOfRecords) {
        this.maxNumberOfRecords = maxNumberOfRecords;
      }
    }

    @Override
    public PTransform<PBegin, PCollection<String>> buildExternal(Configuration configuration) {
      DebeziumIO.ConnectorConfiguration connectorConfiguration =
          DebeziumIO.ConnectorConfiguration.create()
              .withUsername(configuration.username)
              .withPassword(configuration.password)
              .withHostName(configuration.host)
              .withPort(configuration.port)
              .withConnectorClass(configuration.connectorClass.getConnector());

      if (configuration.connectionProperties != null) {
        for (String connectionProperty : configuration.connectionProperties) {
          // Each property arrives as a "key=value" string.
          String[] parts = connectionProperty.split("=", -1);
          String key = parts[0];
          String value = parts[1];
          connectorConfiguration = connectorConfiguration.withConnectionProperty(key, value);
        }
      }

      DebeziumIO.Read<String> readTransform =
          DebeziumIO.readAsJson().withConnectorConfiguration(connectorConfiguration);

      if (configuration.maxNumberOfRecords != null) {
        readTransform =
            readTransform.withMaxNumberOfRecords(configuration.maxNumberOfRecords.intValue());
      }
      return readTransform;
    }
  }
}
```
```python
import re

# Literal characters and the `.` wildcard.
ma = re.match(r'a', 'a')
print(ma.group())  # a
ma = re.match(r'.', 'b')
print(ma.group())  # b
ma = re.match(r'{.}', '{a}')
print(ma.group())  # {a}
ma = re.match(r'{..}', '{ab}')
print(ma.group())  # {ab}

# Character classes: [...] matches any one of the listed characters.
ma = re.match(r'{[abc]}', '{a}')
print(ma.group())  # {a}
ma = re.match(r'{[a-z]}', '{d}')
print(ma.group())  # {d}
ma = re.match(r'{[a-zA-Z]}', '{Z}')
print(ma.group())  # {Z}
ma = re.match(r'{[\w]}', '{0}')  # \w is equivalent to [a-zA-Z0-9_]
print(ma.group())  # {0}

# Escape [ and ] to match them literally.
ma = re.match(r'\[[\w]\]', '[a]')
print(ma.group())  # [a]

# Quantifiers: * means zero or more repetitions, + means one or more.
ma = re.match(r'[A-Z][a-z]', 'Aa')
print(ma.group())  # Aa
ma = re.match(r'[A-Z][a-z]*', 'A')
print(ma.group())  # A
ma = re.match(r'[A-Z][a-z]*', 'Aa')
print(ma.group())  # Aa
ma = re.match(r'[A-Z][a-z]*', 'Aafdsfdsb')
print(ma.group())  # Aafdsfdsb
ma = re.match(r'[A-Z][a-z]*', 'Aafdsfdsbadfas154154')
print(ma.group())  # Aafdsfdsbadfas (the match stops at the first digit)

# A valid Python identifier: starts with an underscore or letter.
ma = re.match(r'[_a-zA-Z]+[_\w]*', '__init')
print(ma.group())  # __init

# ? means zero or one: match 0-99 without allowing a leading zero to count twice.
ma = re.match(r'[1-9]?[0-9]', '99')
print(ma.group())  # 99
ma = re.match(r'[1-9]?[0-9]', '09')
print(ma.group())  # 0 ([1-9]? matches nothing, [0-9] matches the '0')

# {n} matches exactly n repetitions; {m,n} matches between m and n.
ma = re.match(r'[a-zA-Z0-9]{6}', 'bac123')
print(ma.group())  # bac123
# re.match(r'[a-zA-Z0-9]{6}', 'bac12') returns None: only five characters.
ma = re.match(r'[a-zA-Z0-9]{6}', 'bac1234')
print(ma.group())  # bac123
ma = re.match(r'[a-zA-Z0-9]{6}@163.com', 'abc123@163.com')
print(ma.group())  # abc123@163.com
ma = re.match(r'[a-zA-Z0-9]{6,10}@163.com', 'abc123gd@163.com')
print(ma.group())  # abc123gd@163.com

# *? and +? are the non-greedy variants: match as little as possible.
ma = re.match(r'[0-9][a-z]*?', '1bc')
print(ma.group())  # 1
ma = re.match(r'[0-9][a-z]+?', '1bc')
print(ma.group())  # 1b

# Anchors: $ pins the match to the end of the string, ^ and \A to the start.
ma = re.match(r'[\w]{4,10}@163.com', 'abc123gd@163.comabc')
print(ma.group())  # abc123gd@163.com (without $, the trailing 'abc' is ignored)
# re.match(r'[\w]{4,10}@163.com$', 'abc123gd@163.comabc') returns None.
ma = re.match(r'^[\w]{4,10}@163.com$', 'abc123gd@163.com')
print(ma.group())  # abc123gd@163.com
ma = re.match(r'\Aabc[\w]{4,10}@163.com$', 'abc123gd@163.com')  # \A requires 'abc' at the very start
print(ma.group())  # abc123gd@163.com
```
```html <div layout="row" layout-align="start center"> <div hide-xs class="mat-subtitle-1 pad-left pad-right push-bottom-none"> Chart Theme: </div> <mat-form-field> <mat-select [(value)]="selectedTheme" (valueChange)="selectChartTheme($event)" > <mat-option *ngFor="let theme of themes" [value]="theme"> {{ theme }} </mat-option> </mat-select> </mat-form-field> </div> <td-chart [style.height.px]="800" [themeName]="themeSelector.selected$ | async"> <td-chart-tooltip [trigger]="'item'"> <ng-template let-params let-ticket="ticket" tdTooltipFormatter> <ng-container *ngIf="params"> <div layout="row" layout-align="start center"> <mat-icon class="push-right-sm"> <span class="tc-blue-300">info</span> </mat-icon> <span>{{ params.name }}</span> </div> </ng-container> </ng-template> </td-chart-tooltip> <td-chart-series td-tree [top]="'10%'" [left]="'10%'" [bottom]="'10%'" [right]="'30%'" [data]="[ { name: 'flare', children: [ { name: 'analytics', collapsed: true, children: [ { name: 'cluster', children: [ { name: 'AgglomerativeCluster', value: 3938 }, { name: 'CommunityStructure', value: 3812 }, { name: 'HierarchicalCluster', value: 6714 }, { name: 'MergeEdge', value: 743 } ] }, { name: 'graph', children: [ { name: 'BetweennessCentrality', value: 3534 }, { name: 'LinkDistance', value: 5731 }, { name: 'MaxFlowMinCut', value: 7840 }, { name: 'ShortestPaths', value: 5914 }, { name: 'SpanningTree', value: 3416 } ] }, { name: 'optimization', children: [{ name: 'AspectRatioBanker', value: 7074 }] } ] }, { name: 'animate', children: [ { name: 'Easing', value: 17010 }, { name: 'FunctionSequence', value: 5842 }, { name: 'interpolate', children: [ { name: 'ArrayInterpolator', value: 1983 }, { name: 'ColorInterpolator', value: 2047 }, { name: 'DateInterpolator', value: 1375 }, { name: 'Interpolator', value: 8746 }, { name: 'MatrixInterpolator', value: 2202 }, { name: 'NumberInterpolator', value: 1382 }, { name: 'ObjectInterpolator', value: 1629 }, { name: 'PointInterpolator', value: 1675 
}, { name: 'RectangleInterpolator', value: 2042 } ] }, { name: 'ISchedulable', value: 1041 }, { name: 'Parallel', value: 5176 }, { name: 'Pause', value: 449 }, { name: 'Scheduler', value: 5593 }, { name: 'Sequence', value: 5534 }, { name: 'Transition', value: 9201 }, { name: 'Transitioner', value: 19975 }, { name: 'TransitionEvent', value: 1116 }, { name: 'Tween', value: 6006 } ] }, { name: 'data', collapsed: true, children: [ { name: 'converters', children: [ { name: 'Converters', value: 721 }, { name: 'DelimitedTextConverter', value: 4294 }, { name: 'GraphMLConverter', value: 9800 }, { name: 'IDataConverter', value: 1314 }, { name: 'JSONConverter', value: 2220 } ] }, { name: 'DataField', value: 1759 }, { name: 'DataSchema', value: 2165 }, { name: 'DataSet', value: 586 }, { name: 'DataSource', value: 3331 }, { name: 'DataTable', value: 772 }, { name: 'DataUtil', value: 3322 } ] }, { name: 'display', children: [ { name: 'DirtySprite', value: 8833 }, { name: 'LineSprite', value: 1732 }, { name: 'RectSprite', value: 3623 }, { name: 'TextSprite', value: 10066 } ] }, { name: 'flex', collapsed: true, children: [{ name: 'FlareVis', value: 4116 }] }, { name: 'physics', children: [ { name: 'DragForce', value: 1082 }, { name: 'GravityForce', value: 1336 }, { name: 'IForce', value: 319 }, { name: 'NBodyForce', value: 10498 }, { name: 'Particle', value: 2822 }, { name: 'Simulation', value: 9983 }, { name: 'Spring', value: 2213 }, { name: 'SpringForce', value: 1681 } ] }, { name: 'query', collapsed: true, children: [ { name: 'AggregateExpression', value: 1616 }, { name: 'And', value: 1027 }, { name: 'Arithmetic', value: 3891 }, { name: 'Average', value: 891 }, { name: 'BinaryExpression', value: 2893 }, { name: 'Comparison', value: 5103 }, { name: 'CompositeExpression', value: 3677 }, { name: 'Count', value: 781 }, { name: 'DateUtil', value: 4141 }, { name: 'Distinct', value: 933 }, { name: 'Expression', value: 5130 }, { name: 'ExpressionIterator', value: 3617 }, { name: 'Fn', 
value: 3240 }, { name: 'If', value: 2732 }, { name: 'IsA', value: 2039 }, { name: 'Literal', value: 1214 }, { name: 'Match', value: 3748 }, { name: 'Maximum', value: 843 }, { name: 'methods', children: [ { name: 'add', value: 593 }, { name: 'and', value: 330 }, { name: 'average', value: 287 }, { name: 'count', value: 277 }, { name: 'distinct', value: 292 }, { name: 'div', value: 595 }, { name: 'eq', value: 594 }, { name: 'fn', value: 460 }, { name: 'gt', value: 603 }, { name: 'gte', value: 625 }, { name: 'iff', value: 748 }, { name: 'isa', value: 461 }, { name: 'lt', value: 597 }, { name: 'lte', value: 619 }, { name: 'max', value: 283 }, { name: 'min', value: 283 }, { name: 'mod', value: 591 }, { name: 'mul', value: 603 }, { name: 'neq', value: 599 }, { name: 'not', value: 386 }, { name: 'or', value: 323 }, { name: 'orderby', value: 307 }, { name: 'range', value: 772 }, { name: 'select', value: 296 }, { name: 'stddev', value: 363 }, { name: 'sub', value: 600 }, { name: 'sum', value: 280 }, { name: 'update', value: 307 }, { name: 'variance', value: 335 }, { name: 'where', value: 299 }, { name: 'xor', value: 354 }, { name: '-', value: 264 } ] }, { name: 'Minimum', value: 843 }, { name: 'Not', value: 1554 }, { name: 'Or', value: 970 }, { name: 'Query', value: 13896 }, { name: 'Range', value: 1594 }, { name: 'StringUtil', value: 4130 }, { name: 'Sum', value: 791 }, { name: 'Variable', value: 1124 }, { name: 'Variance', value: 1876 }, { name: 'Xor', value: 1101 } ] }, { name: 'scale', children: [ { name: 'IScaleMap', value: 2105 }, { name: 'LinearScale', value: 1316 }, { name: 'LogScale', value: 3151 }, { name: 'OrdinalScale', value: 3770 }, { name: 'QuantileScale', value: 2435 }, { name: 'QuantitativeScale', value: 4839 }, { name: 'RootScale', value: 1756 }, { name: 'Scale', value: 4268 }, { name: 'ScaleType', value: 1821 }, { name: 'TimeScale', value: 5833 } ] }, { name: 'util', collapsed: true, children: [ { name: 'Arrays', value: 8258 }, { name: 'Colors', value: 
10001 }, { name: 'Dates', value: 8217 }, { name: 'Displays', value: 12555 }, { name: 'Filter', value: 2324 }, { name: 'Geometry', value: 10993 }, { name: 'heap', children: [ { name: 'FibonacciHeap', value: 9354 }, { name: 'HeapNode', value: 1233 } ] }, { name: 'IEvaluable', value: 335 }, { name: 'IPredicate', value: 383 }, { name: 'IValueProxy', value: 874 }, { name: 'math', children: [ { name: 'DenseMatrix', value: 3165 }, { name: 'IMatrix', value: 2815 }, { name: 'SparseMatrix', value: 3366 } ] }, { name: 'Maths', value: 17705 }, { name: 'Orientation', value: 1486 }, { name: 'palette', children: [ { name: 'ColorPalette', value: 6367 }, { name: 'Palette', value: 1229 }, { name: 'ShapePalette', value: 2059 }, { name: 'SizePalette', value: 2291 } ] }, { name: 'Property', value: 5559 }, { name: 'Shapes', value: 19118 }, { name: 'Sort', value: 6887 }, { name: 'Stats', value: 6557 }, { name: 'Strings', value: 22026 } ] }, { name: 'vis', children: [ { name: 'axis', children: [ { name: 'Axes', value: 1302 }, { name: 'Axis', value: 24593 }, { name: 'AxisGridLine', value: 652 }, { name: 'AxisLabel', value: 636 }, { name: 'CartesianAxes', value: 6703 } ] }, { name: 'controls', children: [ { name: 'AnchorControl', value: 2138 }, { name: 'ClickControl', value: 3824 }, { name: 'Control', value: 1353 }, { name: 'ControlList', value: 4665 }, { name: 'DragControl', value: 2649 }, { name: 'ExpandControl', value: 2832 }, { name: 'HoverControl', value: 4896 }, { name: 'IControl', value: 763 }, { name: 'PanZoomControl', value: 5222 }, { name: 'SelectionControl', value: 7862 }, { name: 'TooltipControl', value: 8435 } ] }, { name: 'data', children: [ { name: 'Data', value: 20544 }, { name: 'DataList', value: 19788 }, { name: 'DataSprite', value: 10349 }, { name: 'EdgeSprite', value: 3301 }, { name: 'NodeSprite', value: 19382 }, { name: 'render', children: [ { name: 'ArrowType', value: 698 }, { name: 'EdgeRenderer', value: 5569 }, { name: 'IRenderer', value: 353 }, { name: 
'ShapeRenderer', value: 2247 } ] }, { name: 'ScaleBinding', value: 11275 }, { name: 'Tree', value: 7147 }, { name: 'TreeBuilder', value: 9930 } ] }, { name: 'events', children: [ { name: 'DataEvent', value: 2313 }, { name: 'SelectionEvent', value: 1880 }, { name: 'TooltipEvent', value: 1701 }, { name: 'VisualizationEvent', value: 1117 } ] }, { name: 'legend', children: [ { name: 'Legend', value: 20859 }, { name: 'LegendItem', value: 4614 }, { name: 'LegendRange', value: 10530 } ] }, { name: 'operator', children: [ { name: 'distortion', children: [ { name: 'BifocalDistortion', value: 4461 }, { name: 'Distortion', value: 6314 }, { name: 'FisheyeDistortion', value: 3444 } ] }, { name: 'encoder', children: [ { name: 'ColorEncoder', value: 3179 }, { name: 'Encoder', value: 4060 }, { name: 'PropertyEncoder', value: 4138 }, { name: 'ShapeEncoder', value: 1690 }, { name: 'SizeEncoder', value: 1830 } ] }, { name: 'filter', children: [ { name: 'FisheyeTreeFilter', value: 5219 }, { name: 'GraphDistanceFilter', value: 3165 }, { name: 'VisibilityFilter', value: 3509 } ] }, { name: 'IOperator', value: 1286 }, { name: 'label', children: [ { name: 'Labeler', value: 9956 }, { name: 'RadialLabeler', value: 3899 }, { name: 'StackedAreaLabeler', value: 3202 } ] }, { name: 'layout', children: [ { name: 'AxisLayout', value: 6725 }, { name: 'BundledEdgeRouter', value: 3727 }, { name: 'CircleLayout', value: 9317 }, { name: 'CirclePackingLayout', value: 12003 }, { name: 'DendrogramLayout', value: 4853 }, { name: 'ForceDirectedLayout', value: 8411 }, { name: 'IcicleTreeLayout', value: 4864 }, { name: 'IndentedTreeLayout', value: 3174 }, { name: 'Layout', value: 7881 }, { name: 'NodeLinkTreeLayout', value: 12870 }, { name: 'PieLayout', value: 2728 }, { name: 'RadialTreeLayout', value: 12348 }, { name: 'RandomLayout', value: 870 }, { name: 'StackedAreaLayout', value: 9121 }, { name: 'TreeMapLayout', value: 9191 } ] }, { name: 'Operator', value: 2490 }, { name: 'OperatorList', value: 5248 }, { 
name: 'OperatorSequence', value: 4190 }, { name: 'OperatorSwitch', value: 2581 }, { name: 'SortOperator', value: 2023 } ] }, { name: 'Visualization', value: 16540 } ] } ] } ]" [initialTreeDepth]="2" [symbolSize]="10" [leaves]="{ label: { padding: 5, fontSize: 9, distance: 5, position: 'right' } }" [label]="{ padding: 5, borderRadius: 10, fontSize: 9, distance: 5, position: 'left' }" ></td-chart-series> </td-chart> ```
James Richardson (born October 30, 1984) is a conservative American political strategist and columnist, best known as a spokesman and adviser to the Republican National Committee and former Governors Jon Huntsman and Haley Barbour. In a September 2014 opinion editorial published in The Washington Post, Richardson openly disclosed that he is gay. As of July 2015, he serves as managing director of Dentons, a global law practice, in the firm's public policy and regulatory affairs group.

Career

In the 2008 presidential election, Richardson served as Online Communications Manager for the Republican National Committee. He briefly served as Communications Director for the College Republican National Committee before accepting a position with the conservative consultancy Hynes Communications, which specializes in conservative blogger outreach. Richardson took leave from the firm in 2011 to advise then-Mississippi Governor Haley Barbour, who was openly weighing a presidential bid. After weathering criticism for his perceived proximity to racist groups, Barbour eventually announced in mid-2011 that he would forgo a campaign for the White House. Richardson was the first of Barbour's advisers to join another campaign, accepting a position as Director of Online Communications for Jon Huntsman's presidential campaign. After Huntsman's withdrawal from the race following his third-place finish in New Hampshire, Richardson returned to Hynes Communications as Vice President of Public Relations. According to media reports, some of his clients have included the National Republican Senatorial Committee and Indiana Senator Dan Coats. Richardson has written extensively on political and cultural issues and has appeared on CNN, MSNBC and Fox News.
His columns have appeared in The Atlantic, GQ, US News & World Report, National Review, The Washington Post, The Guardian, The Advocate Magazine, The Christian Science Monitor, USA Today, Politico, Roll Call, the Washington Times, Creative Loafing, Fox News, CNN, CBS News, and The Huffington Post. He edits the political news blog Georgia Tipsheet, which the Washington Post named one of the "best state-based blogs" in the country in 2013.

Personal life

In September 2014, Richardson authored an op-ed in The Washington Post in which he publicly disclosed he is gay. In the column, Richardson said he advocated for equal rights for LGBT persons throughout his career "even as I never openly disclosed my personal stake" in the debate. Richardson's coming out was covered by CNN, The Huffington Post, The Advocate, MTV, and the Atlanta Journal-Constitution, among others. Richardson lives in Atlanta, Georgia, with his partner of five years. He attended the University of Georgia.

Selected writings

"Who Were the Midwives of America's Gay Marriage Movement?," Newsweek, June 21, 2015, James Richardson
"A good Republican for gay marriage," The New York Daily News, Feb. 28, 2015, James Richardson
"The Real Reason Rob Portman Won't Run for President," The Advocate Magazine, Dec. 4, 2014, James Richardson
"How House Dems Lost Their Last Southern White Guy," The Daily Beast, Nov. 9, 2014, James Richardson
"I'm A Senior GOP Spokesman, And I'm Gay. Let Me Get Married." The Washington Post, Sept. 4, 2014, James Richardson
"Teacher Tenure Refugees Flee Public Schools," USA Today, July 29, 2014, James Richardson
"One Year After DOMA Fell, and Still No Revolution," U.S. News & World Report, June 26, 2014, James Richardson
"An Honesty Gap in the Pay Gap Debate," Roll Call, April 23, 2014, James Richardson
"Sin City A Virtuous Venue for GOP Convention," USA Today, March 21, 2014, James Richardson
"Stop Arizona-Style Anti-Gay Bill In Georgia," CNN, Feb. 27, 2014, James Richardson
"You're Not Fired, Ever," National Review, Dec. 30, 2013, James Richardson
"Congress Must Lead By Dealing With The Deficit," POLITICO, Nov. 5, 2014, James Richardson
"Why Supreme Court's Gay Marriage Ruling Won't Be Like Roe," Christian Science Monitor, June 10, 2013, James Richardson
"Gun Control Misfire To Cost Lives In Georgia," Fox News, Feb. 1, 2013, James Richardson
"How Mitt Romney's Historic Debate Confounded Political Science Convention," The Guardian, Oct. 10, 2012, James Richardson
"Kasim's Gay Problem," Creative Loafing, June 25, 2012, James Richardson
"Not Waiting Their Turn," FOX News, June 12, 2012, James Richardson
"Gingrich Refuses To Quit," GQ, March 15, 2012, James Richardson
"American Has Moved On From Romney's Mormonism," The Guardian, March 8, 2012, James Richardson
"Northern Snobbery Fuels Paula Deen Fingerpointing," FOX News, Jan. 21, 2012, James Richardson
"What Was The Huntsman Campaign's Problem?", The Atlantic, Jan. 17, 2012, James Richardson
"The Politics Of Appointment," POLITICO, April 29, 2011, James Richardson
"Haley Barbour, The GOP's Best Candidate Not To Run," The Guardian, April 26, 2011, James Richardson
"Food Or Facebook For America's Homeless?," The Huffington Post, Dec. 12, 2010, James Richardson
"What Democrats Wish For," FOX News, Oct. 28, 2010, James Richardson
"Dems Play Politics With 9/11 Workers," POLITICO, Aug. 1, 2010, James Richardson

References
```ruby
# code is released under a tri EPL/GPL/LGPL license. You can use it,
# redistribute it and/or modify it under the terms of the:

require_relative '../../ruby/spec_helper'
require_relative 'fixtures/classes'

describe "Truffle::Interop.as_pointer" do

  it "is not supported for nil" do
    -> { Truffle::Interop.as_pointer(nil) }.should raise_error(Polyglot::UnsupportedMessageError)
  end

  it "is not supported for objects which cannot be converted to a pointer" do
    -> { Truffle::Interop.as_pointer(Object.new) }.should raise_error(Polyglot::UnsupportedMessageError)
  end

  it "works on Truffle::FFI::Pointer" do
    Truffle::Interop.as_pointer(Truffle::FFI::Pointer.new(0x123)).should == 0x123
  end

  it "calls #address" do
    Truffle::Interop.as_pointer(TruffleInteropSpecs::AsPointerClass.new).should == 0x123
  end

end
```
```rust
// A useless example application using `git2`, to make sure that we link it
// correctly.
extern crate git2;

use git2::Repository;

fn main() {
    let _ = Repository::init("test-repo");
    println!("Hello, world!");
}
```
```python
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

import unittest

import numpy as np

import paddle
import paddle.nn.functional as F
from paddle import nn
from paddle.base import core, framework
from paddle.nn import BatchNorm

np.random.seed(2023)


class PrimeNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2D(2, 4, (3, 3), bias_attr=False)
        self.bn = BatchNorm(4, act="relu")

    def forward(self, x):
        y = self.conv(x)
        out = self.bn(y)
        res = F.max_pool2d(out, kernel_size=2, stride=2, padding=0)
        return res


class TestPrimAMPO1(unittest.TestCase):
    """
    Test PrimeNet with @to_static + prim v.s Dygraph in AMP O1.
    """

    def setUp(self):
        paddle.seed(2022)
        self.x = paddle.randn([4, 2, 6, 6], dtype="float32")
        self.x.stop_gradient = False

    def train(self, use_prim):
        core._set_prim_all_enabled(use_prim)
        paddle.seed(2022)
        net = PrimeNet()
        sgd = paddle.optimizer.SGD(
            learning_rate=0.1, parameters=net.parameters()
        )
        if use_prim:
            net = paddle.jit.to_static(
                net, build_strategy=False, full_graph=True
            )
        with paddle.amp.auto_cast(level='O1'):
            out = net(self.x)
            loss = paddle.mean(out)
            loss.backward()
            sgd.step()
            sgd.clear_grad()
        return loss

    def test_amp_01(self):
        if not isinstance(framework._current_expected_place(), core.CPUPlace):
            expected = self.train(False)
            actual = self.train(True)
            np.testing.assert_allclose(
                expected,
                actual,
                rtol=1e-3,
                atol=1e-3,
            )

    def test_amp_O1_infer(self):
        if not isinstance(framework._current_expected_place(), core.CPUPlace):
            net = PrimeNet()
            core._set_prim_all_enabled(False)
            net.eval()
            static_net = paddle.jit.to_static(
                net, build_strategy=False, full_graph=True
            )
            res = static_net(self.x)

            # set prim all enabled
            core._set_prim_all_enabled(True)
            net.eval()
            static_net = paddle.jit.to_static(
                net, build_strategy=False, full_graph=True
            )
            with paddle.amp.auto_cast(level='O1'):
                res_amp = static_net(self.x)

            np.testing.assert_allclose(
                res,
                res_amp,
                rtol=1e-3,
                atol=1e-3,
            )


if __name__ == '__main__':
    unittest.main()
```
```javascript
/**
* @license Apache-2.0
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/

'use strict';

// MODULES //

var resolve = require( 'path' ).resolve;
var exec = require( 'child_process' ).exec;
var tape = require( 'tape' );
var IS_BROWSER = require( '@stdlib/assert/is-browser' );
var IS_WINDOWS = require( '@stdlib/assert/is-windows' );
var EXEC_PATH = require( '@stdlib/process/exec-path' );
var RE_EOL = require( '@stdlib/regexp/eol' ).REGEXP;
var readFileSync = require( '@stdlib/fs/read-file' ).sync;
var sotu = require( './../lib' );


// VARIABLES //

var fpath = resolve( __dirname, '..', 'bin', 'cli' );
var opts = {
    'skip': IS_BROWSER || IS_WINDOWS
};


// FIXTURES //

var PKG_VERSION = require( './../package.json' ).version;


// TESTS //

tape( 'command-line interface', function test( t ) {
    t.ok( true, __filename );
    t.end();
});

tape( 'when invoked with a `--help` flag, the command-line interface prints the help text to `stderr`', opts, function test( t ) {
    var expected;
    var cmd;

    expected = readFileSync( resolve( __dirname, '..', 'docs', 'usage.txt' ), {
        'encoding': 'utf8'
    });
    cmd = [ EXEC_PATH, fpath, '--help' ];

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        if ( error ) {
            t.fail( error.message );
        } else {
            t.strictEqual( stdout.toString(), '', 'does not print to `stdout`' );
            t.strictEqual( stderr.toString(), expected+'\n', 'expected value' );
        }
        t.end();
    }
});

tape( 'when invoked with a `-h` flag, the command-line interface prints the help text to `stderr`', opts, function test( t ) {
    var expected;
    var cmd;

    expected = readFileSync( resolve( __dirname, '..', 'docs', 'usage.txt' ), {
        'encoding': 'utf8'
    });
    cmd = [ EXEC_PATH, fpath, '-h' ];

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        if ( error ) {
            t.fail( error.message );
        } else {
            t.strictEqual( stdout.toString(), '', 'does not print to `stdout`' );
            t.strictEqual( stderr.toString(), expected+'\n', 'expected value' );
        }
        t.end();
    }
});

tape( 'when invoked with a `--version` flag, the command-line interface prints the version to `stderr`', opts, function test( t ) {
    var cmd = [ EXEC_PATH, fpath, '--version' ];

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        if ( error ) {
            t.fail( error.message );
        } else {
            t.strictEqual( stdout.toString(), '', 'does not print to `stdout`' );
            t.strictEqual( stderr.toString(), PKG_VERSION+'\n', 'expected value' );
        }
        t.end();
    }
});

tape( 'when invoked with a `-V` flag, the command-line interface prints the version to `stderr`', opts, function test( t ) {
    var cmd = [ EXEC_PATH, fpath, '-V' ];

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        if ( error ) {
            t.fail( error.message );
        } else {
            t.strictEqual( stdout.toString(), '', 'does not print to `stdout`' );
            t.strictEqual( stderr.toString(), PKG_VERSION+'\n', 'expected value' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses by U.S. presidents (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var opts;
    var cmd;

    cmd = [ EXEC_PATH, fpath ];
    expected = sotu();
    opts = {
        'maxBuffer': 15000*1024
    };

    exec( cmd.join( ' ' ), opts, done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses by Republican presidents (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var opts;
    var cmd;

    cmd = [ EXEC_PATH, fpath, '--party Republican' ];
    expected = sotu({
        'party': 'Republican'
    });
    opts = {
        'maxBuffer': 5000*1024
    };

    exec( cmd.join( ' ' ), opts, done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses by Democratic presidents (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var opts;
    var cmd;

    cmd = [ EXEC_PATH, fpath, '--party Democratic' ];
    expected = sotu({
        'party': 'Democratic'
    });
    opts = {
        'maxBuffer': 5000*1024
    };

    exec( cmd.join( ' ' ), opts, done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses by a certain president (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var cmd;

    cmd = [ EXEC_PATH, fpath, '--name "Abraham Lincoln"' ];
    expected = sotu({
        'name': 'Abraham Lincoln'
    });

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses of a selected range of years (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var cmd;

    cmd = [ EXEC_PATH, fpath, '--range "2000,2005"' ];
    expected = sotu({
        'range': [ 2000, 2005 ]
    });

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});

tape( 'the command-line interface prints State of the Union addresses for the selected year(s) (newline-delimited JSON)', opts, function test( t ) {
    var expected;
    var cmd;

    cmd = [ EXEC_PATH, fpath, '--year "2008,2012,1999"' ];
    expected = sotu({
        'year': [ 2008, 2012, 1999 ]
    });

    exec( cmd.join( ' ' ), done );

    function done( error, stdout, stderr ) {
        var str;
        var i;
        if ( error ) {
            t.fail( error.message );
        } else {
            stdout = stdout.toString().split( RE_EOL );
            for ( i = 0; i < expected.length; i++ ) {
                str = JSON.stringify( expected[ i ] );
                t.strictEqual( stdout[ i ], str, 'returns expected JSON string' );
            }
            t.strictEqual( stderr.toString(), '', 'does not print to `stderr`' );
        }
        t.end();
    }
});
```
```cpp
/*
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#ifndef PingLoader_h
#define PingLoader_h

#include "core/CoreExport.h"
#include "core/fetch/ResourceLoaderOptions.h"
#include "core/page/PageLifecycleObserver.h"
#include "platform/Timer.h"
#include "platform/heap/Handle.h"
#include "public/platform/WebURLLoaderClient.h"
#include "wtf/Noncopyable.h"
#include "wtf/RefPtr.h"

namespace blink {

class FormData;
class LocalFrame;
class KURL;
class ResourceRequest;

// Issue an asynchronous, one-directional request at some resources, ignoring
// any response. The request is made independent of any LocalFrame staying
// alive, and must only stay alive until the transmission has completed
// successfully (or not -- errors are not propagated back either.) Upon
// transmission, the load is cancelled and the loader cancels itself.
//
// The ping loader is used by audit pings, beacon transmissions and image loads
// during page unloading.
class CORE_EXPORT PingLoader : public RefCountedWillBeRefCountedGarbageCollected<PingLoader>, public PageLifecycleObserver, private WebURLLoaderClient {
    WILL_BE_USING_GARBAGE_COLLECTED_MIXIN(PingLoader);
    WTF_MAKE_NONCOPYABLE(PingLoader);
    WTF_MAKE_FAST_ALLOCATED_WILL_BE_REMOVED(PingLoader);
public:
    ~PingLoader() override;

    enum ViolationReportType {
        ContentSecurityPolicyViolationReport,
        XSSAuditorViolationReport
    };

    static void loadImage(LocalFrame*, const KURL&);
    static void sendLinkAuditPing(LocalFrame*, const KURL& pingURL, const KURL& destinationURL);
    static void sendViolationReport(LocalFrame*, const KURL& reportURL, PassRefPtr<FormData> report, ViolationReportType);

    DECLARE_VIRTUAL_TRACE();

protected:
    PingLoader(LocalFrame*, ResourceRequest&, const FetchInitiatorInfo&, StoredCredentials);

    static void start(LocalFrame*, ResourceRequest&, const FetchInitiatorInfo&, StoredCredentials = AllowStoredCredentials);

    void dispose();

private:
    void didReceiveResponse(WebURLLoader*, const WebURLResponse&) override;
    void didReceiveData(WebURLLoader*, const char*, int, int) override;
    void didFinishLoading(WebURLLoader*, double, int64_t) override;
    void didFail(WebURLLoader*, const WebURLError&) override;

    void timeout(Timer<PingLoader>*);

    void didFailLoading(Page*);

    OwnPtr<WebURLLoader> m_loader;
    Timer<PingLoader> m_timeout;
    String m_url;
    unsigned long m_identifier;
};

} // namespace blink

#endif // PingLoader_h
```
```csharp
/****************************************************************************
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 ****************************************************************************/

using System;
using System.IO;

namespace QFramework
{
    public class UIPanelDesignerTemplate
    {
        public static void Write(string name, string scriptsFolder, string scriptNamespace,
            PanelCodeInfo panelCodeInfo, UIKitSettingData uiKitSettingData)
        {
            var scriptFile = string.Format(scriptsFolder + "/{0}.Designer.cs", name);
            var writer = File.CreateText(scriptFile);

            var root = new RootCode()
                .Using("System")
                .Using("UnityEngine")
                .Using("UnityEngine.UI")
                .Using("QFramework")
                .EmptyLine()
                .Namespace(string.IsNullOrWhiteSpace(scriptNamespace) ? uiKitSettingData.Namespace : scriptNamespace, ns =>
                {
                    ns.Custom(string.Format("// Generate Id:{0}", Guid.NewGuid().ToString()));
                    ns.Class(name, null, true, false, (classScope) =>
                    {
                        classScope.Custom("public const string Name = \"" + name + "\";");
                        classScope.EmptyLine();

                        foreach (var bindInfo in panelCodeInfo.BindInfos)
                        {
                            if (!string.IsNullOrEmpty(bindInfo.BindScript.Comment))
                            {
                                classScope.Custom("/// <summary>");
                                classScope.Custom("/// " + bindInfo.BindScript.Comment);
                                classScope.Custom("/// </summary>");
                            }

                            classScope.Custom("[SerializeField]");
                            classScope.Custom("public " + bindInfo.BindScript.TypeName + " " + bindInfo.TypeName + ";");
                        }

                        classScope.EmptyLine();
                        classScope.Custom("private " + name + "Data mPrivateData = null;");
                        classScope.EmptyLine();

                        classScope.CustomScope("protected override void ClearUIComponents()", false, (function) =>
                        {
                            foreach (var bindInfo in panelCodeInfo.BindInfos)
                            {
                                function.Custom(bindInfo.TypeName + " = null;");
                            }

                            function.EmptyLine();
                            function.Custom("mData = null;");
                        });

                        classScope.EmptyLine();

                        classScope.CustomScope("public " + name + "Data Data", false, (property) =>
                        {
                            property.CustomScope("get", false, (getter) =>
                            {
                                getter.Custom("return mData;");
                            });
                        });

                        classScope.EmptyLine();

                        classScope.CustomScope(name + "Data mData", false, (property) =>
                        {
                            property.CustomScope("get", false, (getter) =>
                            {
                                getter.Custom("return mPrivateData ?? (mPrivateData = new " + name + "Data());");
                            });

                            property.CustomScope("set", false, (setter) =>
                            {
                                setter.Custom("mUIData = value;");
                                setter.Custom("mPrivateData = value;");
                            });
                        });
                    });
                });

            var codeWriter = new FileCodeWriter(writer);
            root.Gen(codeWriter);
            codeWriter.Dispose();
        }
    }
}
```
Gonodactylidae is a family of mantis shrimp. It contains these genera: Gonodactylaceus Manning, 1995 Gonodactylellus Manning, 1995 Gonodactyloideus Manning, 1984 Gonodactylolus Manning, 1970 Gonodactylopsis Manning, 1969 Gonodactylus Berthold, 1827 Hoplosquilla Holthuis, 1964 Hoplosquilloides Manning, 1978c Neogonodactylus Manning, 1995 References Stomatopoda Malacostraca families
```java
/*
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * published by the Free Software Foundation. Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package org.graalvm.visualvm.heapviewer.truffle.nodes;

import org.graalvm.visualvm.heapviewer.java.StackFrameNode;
import org.graalvm.visualvm.heapviewer.model.HeapViewerNode;
import org.graalvm.visualvm.heapviewer.ui.HeapViewerRenderer;
import org.graalvm.visualvm.lib.ui.swing.renderer.LabelRenderer;
import org.graalvm.visualvm.lib.ui.swing.renderer.MultiRenderer;
import org.graalvm.visualvm.lib.ui.swing.renderer.NormalBoldGrayRenderer;
import org.graalvm.visualvm.lib.ui.swing.renderer.ProfilerRenderer;
import org.openide.util.NbBundle;

/**
 *
 * @author Jiri Sedlacek
 */
@NbBundle.Messages({
    "TruffleStackFrameNode_Unknown=<unknown>"
})
public class TruffleStackFrameNode extends StackFrameNode {

    public TruffleStackFrameNode(String name, HeapViewerNode[] children) {
        super(name, children);
    }

    // NOTE: temporary solution, should probably be implemented for each Truffle language separately
    static class Renderer extends MultiRenderer implements HeapViewerRenderer {

        private final LabelRenderer atRenderer;
        private final NormalBoldGrayRenderer frameRenderer;
        private final ProfilerRenderer[] renderers;

        private String name1;
        private String name2;
        private String detail;

        Renderer() {
            atRenderer = new LabelRenderer() {
                public String toString() {
                    return getText() + " "; // NOI18N
                }
            };
            atRenderer.setText("at"); // NOI18N
            atRenderer.setMargin(3, 3, 3, 0);

            frameRenderer = new NormalBoldGrayRenderer() {
                public void setValue(Object value, int row) {
                    if (value == null) {
                        setNormalValue(""); // NOI18N
                        setBoldValue(""); // NOI18N
                        setGrayValue(""); // NOI18N
                    } else {
                        setNormalValue(((Object[])value)[0].toString());
                        setBoldValue(((Object[])value)[1].toString());
                        setGrayValue(((Object[])value)[2].toString());
                    }
                }
            };

            renderers = new ProfilerRenderer[] { atRenderer, frameRenderer };
        }

        protected ProfilerRenderer[] valueRenderers() {
            return renderers;
        }

        public void setValue(Object value, int row) {
            if (value == null) {
                // no value - fallback to <unknown>
                name1 = ""; // NOI18N
                name2 = Bundle.TruffleStackFrameNode_Unknown();
                detail = ""; // NOI18N
            } else {
                String val = value.toString();
                int idx = val.lastIndexOf(' '); // NOI18N
                if (idx != -1) {
                    // multiple strings
                    detail = val.substring(idx);
                    if (detail.startsWith(" (")) { // NOI18N
                        val = val.substring(0, idx); // detail contains source:line
                    } else {
                        detail = ""; // no detail available // NOI18N
                    }
                    idx = val.startsWith("<") ? -1 : val.lastIndexOf(' '); // NOI18N
                    if (idx != -1) {
                        // multiple strings - last bold
                        name2 = val.substring(idx + 1);
                        name1 = val.substring(0, idx + 1);
                    } else {
                        // single string or meta value - all bold
                        name1 = ""; // NOI18N
                        name2 = val;
                    }
                    idx = name2.lastIndexOf('.'); // NOI18N
                    if (idx != -1) {
                        // class.method detected in last string - only method bold
                        if (!name1.isEmpty()) name1 += " "; // NOI18N
                        name1 = name1 + name2.substring(0, idx + 1);
                        name2 = name2.substring(idx + 1);
                    }
                } else {
                    // single string - all bold
                    name1 = ""; // NOI18N
                    name2 = val;
                    detail = ""; // NOI18N
                }
            }
            frameRenderer.setValue(new Object[] { name1, name2, detail }, row);
        }

        public String getShortName() {
            return "at " + name2 + " " + detail; // NOI18N
        }

    }

}
```
André Filipe Bernardes Santos (born 2 March 1989) is a Portuguese professional footballer who plays as a defensive midfielder for U.D. Oliveirense. Club career Sporting CP Born in Sobreiro Curvo, Torres Vedras, Santos joined Sporting CP's youth system at the age of 11. He made his senior debut while on loan, first with C.D. Fátima of the third division and then with U.D. Leiria; in the 2009–10 season, with the latter, he first appeared in the Primeira Liga, playing 30 complete games to help the club finish in ninth place, becoming in the process the first Portuguese player to achieve the feat in the league. Santos returned to the Lions for 2010–11, playing his first official match for the side on 5 August in a 2–1 home win against FC Nordsjælland in the UEFA Europa League. He scored his first goal as a professional exactly four months later, helping his team to a 3–1 away win over Portimonense SC, and finished the campaign with 42 appearances in all competitions (two goals). On 25 August 2011, also in the Europa League and against the same Danish opponent, with the same match result but now in the play-off round, Santos netted to help Sporting reach the group stage, as that was also the aggregate score. However, he eventually lost his importance in the squad under new manager Domingos Paciência. In June 2012, Santos moved alongside a host of compatriots – including Sporting teammate Diogo Salomão – to Spain's Deportivo de La Coruña on a season-long loan. He made his La Liga debut on 1 September, coming on as a late substitute in a 1–1 home draw with Getafe CF, but featured sparingly overall and the Galicians were relegated. Later career On 8 August 2014, after one season back in his homeland with Vitória de Guimarães, Santos changed teams and countries again, joining Balıkesirspor in the Süper Lig. In July 2015, he signed a two-year contract with FC Metz of the French Ligue 2. 
Santos spent the better part of the following four campaigns back in the Portuguese top tier, with F.C. Arouca and B-SAD. In between, he had a brief loan spell at Romania's CS Universitatea Craiova. On 28 August 2020, Santos agreed to a deal at Grasshopper Club Zürich. After two years, which included promotion to the Swiss Super League in his first season, his contract expired and was not renewed. International career Santos earned 30 caps for Portugal at youth level, scoring three times. He made his debut for the full side on 29 March 2011, replacing Rúben Micael for the last 15 minutes of a 2–0 friendly win over Finland in Aveiro. Career statistics References External links 1989 births Living people Sportspeople from Torres Vedras Portuguese men's footballers Footballers from Lisbon District Men's association football midfielders Primeira Liga players Liga Portugal 2 players Segunda Divisão players Sporting CP footballers C.D. Fátima players U.D. Leiria players Vitória S.C. players F.C. Arouca players B-SAD players Vitória F.C. players U.D. Oliveirense players La Liga players Deportivo de La Coruña players Süper Lig players Balıkesirspor footballers Ligue 2 players FC Metz players Liga I players CS Universitatea Craiova players Swiss Super League players Swiss Challenge League players Grasshopper Club Zürich players Portugal men's youth international footballers Portugal men's under-21 international footballers Portugal men's international footballers Portuguese expatriate men's footballers Expatriate men's footballers in Spain Expatriate men's footballers in Turkey Expatriate men's footballers in France Expatriate men's footballers in Romania Expatriate men's footballers in Switzerland Portuguese expatriate sportspeople in Spain Portuguese expatriate sportspeople in Turkey Portuguese expatriate sportspeople in France Portuguese expatriate sportspeople in Romania Portuguese expatriate sportspeople in Switzerland
```typescript
/*
* @license Apache-2.0
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/

/* eslint-disable @typescript-eslint/no-unused-expressions */

import Frechet = require( './index' );


// TESTS //

// The function returns a distribution instance...
{
    new Frechet(); // $ExpectType Frechet
    new Frechet( 1.0, 2.0, 1.5 ); // $ExpectType Frechet
}

// The compiler throws an error if the function is provided values other than three numbers...
{
    new Frechet( true, 2.0, 1.5 ); // $ExpectError
    new Frechet( false, 2.0, 1.5 ); // $ExpectError
    new Frechet( '5', 2.0, 1.5 ); // $ExpectError
    new Frechet( [], 2.0, 1.5 ); // $ExpectError
    new Frechet( {}, 2.0, 1.5 ); // $ExpectError
    new Frechet( ( x: number ): number => x, 2.0, 1.5 ); // $ExpectError

    new Frechet( 1.0, true, 1.5 ); // $ExpectError
    new Frechet( 1.0, false, 1.5 ); // $ExpectError
    new Frechet( 1.0, '5', 1.5 ); // $ExpectError
    new Frechet( 1.0, [], 1.5 ); // $ExpectError
    new Frechet( 1.0, {}, 1.5 ); // $ExpectError
    new Frechet( 1.0, ( x: number ): number => x, 1.5 ); // $ExpectError

    new Frechet( 1.0, 2.0, true ); // $ExpectError
    new Frechet( 1.0, 2.0, false ); // $ExpectError
    new Frechet( 1.0, 2.0, '5' ); // $ExpectError
    new Frechet( 1.0, 2.0, [] ); // $ExpectError
    new Frechet( 1.0, 2.0, {} ); // $ExpectError
    new Frechet( 1.0, 2.0, ( x: number ): number => x ); // $ExpectError
}

// The compiler throws an error if the function is provided an unsupported number of arguments...
{
    new Frechet( 0.0 ); // $ExpectError
    new Frechet( 0.0, 2.0 ); // $ExpectError
    new Frechet( 0.0, 2.0, 1.5, 1.5 ); // $ExpectError
}
```
Edgar Gabriel Marsden (25 March 1919 – 15 October 2010) was a Trinidad cricketer who played two matches of first-class cricket in 1949. Edgar Marsden captained Trinidad in the only two first-class matches of the 1948-49 West Indies season, when Trinidad played Barbados twice at Kensington Oval, Bridgetown, over two weeks in January and February 1949. He batted in the lower middle order and made 30 runs in three innings. He also played several times for South Trinidad in the Beaumont Cup in the 1940s, usually as opening batsman and captain. He and his wife Jean lived in the seaside suburb of Bayshore in Diego Martin, about five kilometres west of Queen's Park Oval, Port of Spain. References External links 1919 births 2010 deaths Trinidad and Tobago cricketers
```typescript
/// <reference types="mocha"/>
/// <reference types="node" />
import xs from '../../src/index';

describe('xs.empty()', function() {
  it('should create a stream with 0 events that has already completed', (done: any) => {
    const stream = xs.empty();
    stream.addListener({
      next: () => done(new Error('This should not be called')),
      error: () => done(new Error('This should not be called')),
      complete: done,
    });
  });
});
```
```java
/*
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * published by the Free Software Foundation. Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package jdk.graal.compiler.core.test;

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Field;

import org.junit.Test;

import jdk.graal.compiler.nodes.DeoptimizeNode;
import jdk.graal.compiler.nodes.StructuredGraph;
import jdk.graal.compiler.nodes.StructuredGraph.AllowAssumptions;
import jdk.graal.compiler.test.AddExports;

// Export needed to open String.value field to reflection by this test
@AddExports("java.base/java.lang")
public final class MethodHandleEagerResolution extends GraalCompilerTest {
    private static final MethodHandle FIELD_HANDLE;

    static {
        Field field;
        try {
            field = String.class.getDeclaredField("value");
        } catch (NoSuchFieldException ex) {
            throw new RuntimeException(ex.getMessage(), ex);
        }
        field.setAccessible(true);
        try {
            FIELD_HANDLE = MethodHandles.lookup().unreflectGetter(field);
        } catch (IllegalAccessException e) {
            throw new RuntimeException("unable to initialize field handle", e);
        }
    }

    public static char[] getBackingCharArray(String str) {
        try {
            return (char[]) FIELD_HANDLE.invokeExact(str);
        } catch (Throwable e) {
            throw new IllegalStateException();
        }
    }

    @Test
    public void testFieldInvokeExact() {
        StructuredGraph graph = parseEager("getBackingCharArray", AllowAssumptions.NO);
        assertTrue(graph.getNodes().filter(DeoptimizeNode.class).isEmpty());
    }
}
```
The 2017 Tercera División play-offs (promotion play-offs) were the final play-offs for promotion from the 2016–17 Tercera División to the 2017–18 Segunda División B. The top four teams in each group took part in the play-offs. Format The eighteen group winners had the opportunity to be promoted directly to Segunda División B: they were drawn into two-legged series, the nine winners of which were promoted to Segunda División B. The nine losing clubs entered the play-off round for the last nine promotion spots. The eighteen runners-up were each drawn against one of the eighteen fourth-placed clubs from outside their group, and the eighteen third-placed clubs were drawn against one another in two-legged series. The twenty-seven winners advanced, together with the nine losing clubs from the champions' series, to determine the eighteen teams that contested the final two-legged series for the last nine promotion spots. In all the play-off series, the lower-ranked club played at home first. Whenever there was a tie in position (e.g. the group winners in the champions' series or the third-placed teams in the first round), a draw determined the club to play at home first. Group Winners promotion play-off Qualified teams Matches Non-champions promotion play-off First round Qualified teams Matches Second round Qualified teams Matches Third round Qualified teams Matches See also 2017 Segunda División play-offs 2017 Segunda División B play-offs References External links Playoffs at Futbolme 2017 play-offs
Antanimasaka is a town and commune in Madagascar. It belongs to the district of Ambatolampy, which is a part of Vakinankaratra Region. The population of the commune was estimated at approximately 5,000 in the 2001 commune census. Only primary schooling is available. Some 98% of the population of the commune are farmers, while a further 1% make their living from raising livestock. The most important crop is rice; other important products are sweet potatoes and potatoes. Services provide employment for 1% of the population. References and notes Populated places in Vakinankaratra
```cpp
/*
   All rights reserved. Use is subject to license terms.

   This program is free software; you can redistribute it and/or modify
   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
*/

#ifndef CREATE_NODEGROUP_HPP
#define CREATE_NODEGROUP_HPP

#include "SignalData.hpp"

struct CreateNodegroupReq {
  /**
   * Sender(s) / Reciver(s)
   */
  friend class NdbDictInterface;
  friend class Dbdict;

  /**
   * For printing
   */
  friend bool printCREATE_NODEGROUP_REQ(FILE*, const Uint32*, Uint32, Uint16);

  STATIC_CONST( SignalLength = 10 );

  union {
    Uint32 senderData;
    Uint32 clientData;
  };
  union {
    Uint32 senderRef;
    Uint32 clientRef;
  };
  Uint32 requestInfo;
  Uint32 transId;
  Uint32 transKey;
  Uint32 nodegroupId; // RNIL == unspecified
  Uint32 nodes[4];    // 0 terminated
};

struct CreateNodegroupRef {
  /**
   * Sender(s)
   */
  friend class Dbdict;

  /**
   * Sender(s) / Reciver(s)
   */
  friend class NdbDictInterface;

  /**
   * For printing
   */
  friend bool printCREATE_NODEGROUP_REF(FILE*, const Uint32*, Uint32, Uint16);

  STATIC_CONST( SignalLength = 7 );

  enum ErrorCode {
    NoError = 0,
    Busy = 701,
    NotMaster = 702,
    NoMoreObjectRecords = 710,
    InvalidFormat = 740,
    SingleUser = 299,
    InvalidNoOfNodesInNodegroup = 320,
    InvalidNodegroupId = 321,
    NodeAlreadyInNodegroup = 322,
    NodegroupInUse = 323,
    NoNodeAlive = 324
  };

  Uint32 senderData;
  Uint32 senderRef;
  Uint32 masterNodeId;
  Uint32 errorCode;
  Uint32 errorLine;
  Uint32 errorNodeId;
  Uint32 transId;
};

struct CreateNodegroupConf {
  /**
   * Sender(s)
   */
  friend class Dbdict;

  /**
   * Sender(s) / Reciver(s)
   */
  friend class NdbDictInterface;

  /**
   * For printing
   */
  friend bool printCREATE_NODEGROUP_CONF(FILE*, const Uint32*, Uint32, Uint16);

  STATIC_CONST( SignalLength = 4 );

  Uint32 senderData;
  Uint32 senderRef;
  Uint32 nodegroupId;
  Uint32 transId;
};

#endif
```
```python
import doctest
import toolz


def test_doctests():
    # curry objects wrap the original function, whose docstring holds the
    # doctest examples, so expose the wrapped functions to doctest via the
    # module's __test__ dictionary before running testmod.
    toolz.__test__ = {}
    for name, func in vars(toolz).items():
        if isinstance(func, toolz.curry):
            toolz.__test__[name] = func.func
    assert doctest.testmod(toolz).failed == 0
    del toolz.__test__
```
Constantine "Gene" Mako (January 24, 1916 – June 14, 2013) was an American tennis player and art gallery owner. He was born in Budapest, the capital of Hungary. He won four Grand Slam doubles titles in the 1930s. Mako was inducted into the International Tennis Hall of Fame in Newport, Rhode Island, in 1973. Early life His father, Bartholomew Mako, graduated from the Budapest Academy of Fine Arts in 1914. He started work as a draftsman for his mentor Viktor Madarász and was an avid soccer player himself. He fought in World War I. After the war, he left Hungary with his wife, Georgina Elizabeth Farkas Mako, and only son, traveling first to Italy, then stopping for three years in Buenos Aires, Argentina, before settling in Los Angeles, California. There he created works for public places such as churches, libraries and post offices. Gene attended Glendale High School and the University of Southern California, and was also offered a Hungarian university scholarship in the meantime. He quit before graduation. Tennis career In 1934 he won the NCAA championships in singles and doubles (with Phillip Caslin) while playing for the University of Southern California, where he lettered for three years (1934, 1936 and 1937). He also won the boys' singles event at the U.S. National Championships in 1932 and 1934 and the boys' doubles in 1932, 1933 and 1934. Mako was especially successful as a doubles player with his partner and friend Don Budge. They competed in seven Grand Slam finals, four of which they won. In 1936 Gene Mako and Alice Marble won the final of the US Mixed Doubles Championships against Sarah Palfrey and Don Budge (6–3, 6–2). They won the Newport Casino Invitational Tournament three consecutive times from 1936 to 1938. From 1935 to 1938 Mako was a member of the United States Davis Cup team and played in eight ties.
The US team won the Davis Cup in 1937, defeating the United Kingdom in the final at Wimbledon, and again in 1938, beating Australia in the final at the Germantown Cricket Club in Philadelphia. As a Davis Cup player he compiled a record of six wins and three losses. Mako was in the U.S. top 10 in 1937 and 1938 (reaching as high as No. 3), and was ranked World No. 8 by A. Wallis Myers of The Daily Telegraph in 1938. That year he reached the U.S. final at Forest Hills against his doubles partner Don Budge, who was in pursuit of the first Grand Slam. In 1939 he was suspended and banned from amateur play: he and Don Budge had allegedly accepted a sum of A£20 for an exhibition match in Australia, in breach of the amateur rules. He continued to play tennis during the Second World War while serving in the Navy, and also played professional basketball while stationed in Norfolk, Virginia. In 1973 Mako was inducted into the International Tennis Hall of Fame. In 1999 he was elected to the University of Southern California Athletic Hall of Fame. Playing style He possessed a strong serve and powerful smash but, due to several injuries in his career, had to give up his power game. He preferred a volleying style, which he perfected with quickness, good angle selection and pacing paired with strategy. Personal life Apart from being a sportsman, Mako composed music in his early 20s, writing two songs, "Lovely as Spring" and "What Did You Dream Last Night?". He also appeared, uncredited, in the 1938 musical Happy Landing and the 1941 war comedy Caught in the Draft. Mako married actress Laura Mae Church in Manhattan in 1941. A month later, the United States entered World War II, and he joined the United States Navy. After the war, he worked in a broadcasting studio. After his retirement, he designed tennis courts; his wife worked as an interior designer.
He was involved in wrestling and was hired as a coach at the California Institute of Technology while also coaching the basketball team. He owned Gene Mako Galleries in Los Angeles, California. He also published a book about his father titled Bartholomew Mako: A Hungarian Master, 1890-1970. In the final decade of his life, he taught art. He died in 2013 at Cedars-Sinai Medical Center in Los Angeles, aged 97, of pneumonia. Grand Slam finals Singles (1 runner-up) Doubles (4 titles, 3 runners-up) References External links Gene Mako at Find a Grave American art dealers American male tennis players Hungarian expatriates in Argentina Hungarian emigrants to the United States International Tennis Hall of Fame inductees Tennis players from Los Angeles United States National champions (tennis) USC Trojans men's tennis players Caltech Beavers wrestling coaches Caltech Beavers men's basketball coaches Wimbledon champions (pre-Open Era) 1916 births 2013 deaths Deaths from pneumonia in California Tennis players from Budapest Grand Slam (tennis) champions in mixed doubles Grand Slam (tennis) champions in men's doubles Hungarian University of Fine Arts alumni Glendale High School (Glendale, California) alumni Professional tennis players before the Open Era Burials at Holy Cross Cemetery, Culver City
```ruby
# frozen_string_literal: true

InvisibleCaptcha.setup do |config|
  config.honeypots << "another_fake_attribute"
  config.visual_honeypots = false
  config.timestamp_threshold = 4
  config.timestamp_enabled = false
  config.injectable_styles = true

  # Leave these unset if you want to use I18n (see below)
  # config.sentence_for_humans = 'If you are a human, ignore this field'
  # config.timestamp_error_message = 'Sorry, that was too quick! Please resubmit.'
end
```
The Behagen House is a Neoclassical townhouse located at Strandgade 26 in the Christianshavn neighbourhood of Copenhagen, Denmark. The building was listed on the Danish registry of protected buildings and places in 1918. History Origins Two houses similar to the neighbouring Sigvart Grubbe House at No. 28 were built at the site by Sigvart Grubbe in 1626. One of the properties was listed as No. 19 in Copenhagen's first cadastre of 1689 and was at that time owned by one Dreier. The other, No. 20, was owned by soap manufacturer Peder Hansen. The old No. 19 was listed as No. 36 in the new cadastre of 1756 and was at that time owned by a widow named Hegelund. The old No. 20 was, as No. 37, owned by Frederik Holmsted, who owned the property from 1739 to 1769. Behagen family In 1759, Gysbert Behagen, a wealthy merchant, acquired one of the two houses. In 1764, he obtained a royal licence to establish a sugar refinery in the yard. In 1768, Behagen also acquired the other house, and in 1769 he undertook a comprehensive renovation of his properties, merging them into one building. Behagen lived there until his death in 1783. In 1791, the house was acquired by Jeppe Prætorius, another merchant. In 1796, it was subject to another renovation. The property was home to 22 residents at the 1787 census. Elisabeth Giertrud Behagen resided in the building with her son Joost Johan Behagen, her daughter-in-law Maria Agatha Augusta and their one-year-old daughter Elisabeth Alida Augusta. Sugar refinery master Christen Herbom and another nine employees associated with the refinery were also part of the household, and the staff consisted of a housekeeper, five maids and a coachman. The property was home to 25 residents in three households at the 1801 census. Andreas Ewald Meinert resided in the building with his wife Marie Kirstine Meinert, their five children (aged nine to 23), a housekeeper, a male servant and two maids.
Carsten Carstensen, a bookkeeper, resided in the building with his eight-year-old daughter Mariane Charlotte Carstensdatter, a cook and a maid. Rasmus Jensen, sugar master at the sugar refinery, resided in the building with his wife Christiane Frederikke Jensen, a cook and seven workmen. The property was listed as No. 45 in the new cadastre of 1806. Later history The property was home to 25 residents in two households at the 1840 census. Johannes Henrik Hedemann, a merchant, resided on the ground floor with his wife Dorothea Margrethe Hedeman and five employees. Joh. St. Brandt, another merchant (grosserer), resided on the first floor with his nine children (aged nine to 30), three employees and a housekeeper. Two male servants and three maids resided on the third floor. Later notable residents include supreme court attorney and politician Orla Lehmann, who lived there in 1847–1848 and was one of the fathers of the Danish Constitution of 1849. Carl Joakim Brandt, a priest, church historian and literary historian, lived on the ground floor from 1876 to 1879. The painter Frants Henningsen lived on the second floor from 1890 to 1894. The philosopher and professor Harald Høffding lived on the second floor from 1906 to 1908. The property was home to 32 residents in four households at the 1850 census. August Seydal, a merchant, resided on the ground floor with his wife Sira Helsted, lodger Peter Petersen (also a merchant) and one maid. Hans Peter Prior, another merchant, resided on the first floor with his wife Regine Schmidt, their six children (aged two to 18), a 27-year-old son from his first marriage, three clerks and two maids. Thora Brandt, née Plugemacher, a widow, resided on the second floor with her three children (aged three to nine), her father Gottfried Plugemacher, a lodger, two maids and a coachman. Carl August Hemeche, a workman, resided in the basement with his wife Johanne Bruyn and their three children (aged two to nine).
Prior had lived in the building since 1848; he bought the Prior House in Bredgade in 1850. Architecture The Neoclassical townhouse seen today is 10 bays wide and consists of three storeys, a cellar and a mansard roof with black-glazed tiles. The central triangular pediment above the third floor features a cartouche with the letter 'B' (for Behagen) and the year '1769'. The building contains a mural from about 1771 featuring a royal hunting scene. It depicts Christian VII accompanied by Johann Friedrich Struensee and Queen Caroline Matilda during a par force hunt in what is believed to be an imaginary setting, although it has been speculated that Selsø lake and manor house may have served as an inspiration. The artist is unknown. Christian VII revived par force hunting in August 1767, but the practice was abolished in 1777. Gallery References External links Drawings in the Danish National Art Library Dansk Forfatterforening Houses in Copenhagen Listed residential buildings in Copenhagen Listed buildings and structures in Christianshavn Neoclassical architecture in Copenhagen Houses completed in 1769 Sugar refineries in Copenhagen
```cpp
#pragma once

#include "source/extensions/router/cluster_specifiers/lua/lua_cluster_specifier.h"

namespace Envoy {
namespace Extensions {
namespace Router {
namespace Lua {

class LuaClusterSpecifierPluginFactoryConfig
    : public Envoy::Router::ClusterSpecifierPluginFactoryConfig {
public:
  LuaClusterSpecifierPluginFactoryConfig() = default;

  Envoy::Router::ClusterSpecifierPluginSharedPtr
  createClusterSpecifierPlugin(const Protobuf::Message& config,
                               Server::Configuration::CommonFactoryContext&) override;

  ProtobufTypes::MessagePtr createEmptyConfigProto() override {
    return std::make_unique<LuaClusterSpecifierConfigProto>();
  }

  std::string name() const override { return "envoy.router.cluster_specifier_plugin.lua"; }
};

} // namespace Lua
} // namespace Router
} // namespace Extensions
} // namespace Envoy
```
Paul VI was crowned as Pope on 30 June 1963 at Vatican City's St. Peter's Square, nine days after he was elected. The representatives of over 90 countries and international organizations were present at the coronation. The Pope was crowned with a jewelled, but lightweight custom-made tiara. The centuries-old practice of inaugurating a papacy with a papal coronation lapsed thereafter as his successors, beginning with John Paul I, adopted simpler ceremonies that did not include the imposition of a tiara. Ceremony Anticipating large crowds, for the first time the papal coronation took place on the square outside Saint Peter's Basilica; much of the basilica's interior was inaccessible because seating had been erected for the Second Vatican Council. The ceremony was scheduled for 6 p.m. to avoid Rome's afternoon heat. More than 90 countries and international organizations sent delegations, including the presidents of Brazil and Ireland and the king and queen of Belgium. Some 71 cardinals attended. The Pope's throne was draped in white and set in front of the main entrance to the basilica. To either side were placed crimson-covered benches for the cardinals and other high-ranking clergy. Seats for the diplomatic corps were located to the Pope's right and places were reserved for his relatives, European royalty, Roman nobility, visiting dignitaries and journalists to his left. Additional places for journalists were provided on the rooftops of the Apostolic Palace and the colonnades around the square. Some 400 journalists had attended Pope Paul's press conference on the eve of the coronation and 500 attended the coronation. An altar with golden candlesticks and a crucifix by the Renaissance artist Benvenuto Cellini was set in place. A group of Swiss Guards led the papal procession into the square, followed by members of the papal household and an attendant carrying the papal tiara on a red cushion and others carrying mitres. 
Next came a large body of clerics, curial officials and prelates vested in white with white mitres. Then came a variety of papal officials of higher rank in "the costumes of 16th-century Spanish grandees", and the prefect of pontifical ceremonies, Archbishop Enrico Dante. Finally eight men carried in the Pope on his portable throne, the sedia gestatoria, canopied in cream-colored silk and flanked by two flabelli (long-handled, semicircular ostrich-feather fans that "lent an exotic touch to the scene"), as well as by sword-bearing Swiss Guards, by a dozen mace bearers, by more members of the papal household and by senior officers of the Vatican's military forces. The Pope wore a gold mitre and white gloves, and he was covered in "a large richly-embroidered cape that enveloped him from neck to feet". His papal ring could be seen as he blessed the crowd. As the papal procession came through the square, trumpets played the Pontifical Anthem and the bells of the basilica were rung. Arriving near the altar, the Pope took his throne and received each of the cardinals in order of seniority as they offered their obedience. The Pope was vested for the Mass. During the Mass that preceded the coronation, the epistle was sung in both Latin and Greek. Pope Paul delivered a homily in nine languages, emphasizing efforts to promote Christian unity and international peace. The sun had set by the time the Mass had concluded and floodlights illuminated the papal throne. The crowd cheered as the Pope walked to the throne for the coronation ceremony. A choir intoned the hymn "Corona aurea super caput ejus". Next the Dean of the College of Cardinals, Eugène Tisserant, led the recitation of the Lord's Prayer and the cardinal deacon Alberto di Jorio removed the mitre from the Pope's head.
Finally Cardinal Alfredo Ottaviani held the papal tiara high above the Pope's head so the crowd could see it sparkle in the brilliant lighting and then placed it on the Pope's head, saying in Latin: "Receive the tiara adorned with three crowns and know that thou art the Father of Princes and Ruler of Kings, the Vicar on Earth of Our Savior Jesus Christ, to whom is honor and glory through the ages". The bells of St. Peter's Basilica rang, soon joined by the bells of Rome's 500 churches. Pope Paul then delivered his blessing to the city and the world, "urbi et orbi", and the crowd responded with an ovation. The entire liturgy lasted three hours, the coronation ceremony about four minutes. Tiara It was anticipated that Paul VI would be crowned with the gem-studded but lightweight Palatine tiara, presented to Pius IX by the Palatine Guard in 1877 on the 30th anniversary of his episcopal consecration, and used for all coronations from Leo XIII in 1878 to John XXIII in 1958. Its decoration included 540 pearls, more than a hundred gemstones, and extensive gold elements. Instead, a new papal tiara created for this occasion was used, designed to Paul's specifications. It was more modest in its decoration than previous ones, tapered and not heavily ornamented. It was made of "beaten silver with three superimposed, gold circlets encrusted with diamonds, sapphires and rubies". It was a gift to Pope Paul on the occasion of his coronation from the Catholics of the Archdiocese of Milan, where he had been archbishop for almost a decade. Paul VI later abandoned the use of a tiara entirely. On 13 November 1964, at the conclusion of a Mass in St. Peter's Basilica with two thousand bishops in attendance, he stood up from his throne, descended a few steps, removed his tiara and placed it on the altar. Reports said he meant it as a donation to the poor, and that he was moved by the Council's discussions of world poverty and the need for the Church to replace traditional finery.
He nevertheless allowed, in the apostolic constitution Romano Pontifici eligendo (1975), for his successors to be crowned, though they chose not to. John Paul II's Universi Dominici gregis (1996) did not mention a coronation, but a "Mass for the inauguration of the pontificate". Related tributes In honor of the coronation, the Spanish government granted broad clemencies to incarcerated criminals in Spain; reductions of prison terms ranged from one-half to one-sixth. The Holy See struck a commemorative coin to mark the occasion. An exemplar was presented to Queen Elizabeth II "for the honour of the despatch of a Special Mission" to the coronation. See also List of papal tiaras in existence Notes References Additional sources External links Newsreel of the coronation (British Movietone) 1963 in Vatican City Papal coronations Pope Paul VI June 1963 events in Europe
```java package com.blankj.utilcode.util; import org.greenrobot.eventbus.EventBus; import org.greenrobot.eventbus.Subscribe; import org.junit.Before; import org.junit.Test; import java.util.ArrayList; import java.util.List; /** * <pre> * author: blankj * blog : path_to_url * time : 2019/07/14 * desc : * </pre> */ public class BusUtilsVsEventBusTest extends BaseTest { @Subscribe public void eventBusFun(String param) { } @BusUtils.Bus(tag = "busUtilsFun") public void busUtilsFun(String param) { } @Before public void setUp() throws Exception { // AOP busUtilsFun ReflectUtils getInstance = ReflectUtils.reflect(BusUtils.class).method("getInstance"); getInstance.method("registerBus", "busUtilsFun", BusUtilsVsEventBusTest.class.getName(), "busUtilsFun", String.class.getName(), "param", false, "POSTING"); } /** * 10000 10 */ // @Test public void compareRegister10000Times() { final List<BusUtilsVsEventBusTest> eventBusTests = new ArrayList<>(); final List<BusUtilsVsEventBusTest> busUtilsTests = new ArrayList<>(); compareWithEventBus("Register 10000 times.", 10, 10000, new CompareCallback() { @Override public void runEventBus() { BusUtilsVsEventBusTest test = new BusUtilsVsEventBusTest(); EventBus.getDefault().register(test); eventBusTests.add(test); } @Override public void runBusUtils() { BusUtilsVsEventBusTest test = new BusUtilsVsEventBusTest(); BusUtils.register(test); busUtilsTests.add(test); } @Override public void restState() { for (BusUtilsVsEventBusTest test : eventBusTests) { EventBus.getDefault().unregister(test); } eventBusTests.clear(); for (BusUtilsVsEventBusTest test : busUtilsTests) { BusUtils.unregister(test); } busUtilsTests.clear(); } }); } /** * 1 * 1000000 10 */ // @Test public void comparePostTo1Subscriber1000000Times() { comparePostTemplate("Post to 1 subscriber 1000000 times.", 1, 1000000); } /** * 100 * 100000 10 */ // @Test public void comparePostTo100Subscribers100000Times() { comparePostTemplate("Post to 100 subscribers 100000 times.", 100, 
100000); } private void comparePostTemplate(String name, int subscribeNum, int postTimes) { final List<BusUtilsVsEventBusTest> tests = new ArrayList<>(); for (int i = 0; i < subscribeNum; i++) { BusUtilsVsEventBusTest test = new BusUtilsVsEventBusTest(); EventBus.getDefault().register(test); BusUtils.register(test); tests.add(test); } compareWithEventBus(name, 10, postTimes, new CompareCallback() { @Override public void runEventBus() { EventBus.getDefault().post("EventBus"); } @Override public void runBusUtils() { BusUtils.post("busUtilsFun", "BusUtils"); } @Override public void restState() { } }); for (BusUtilsVsEventBusTest test : tests) { EventBus.getDefault().unregister(test); BusUtils.unregister(test); } } /** * 10000 10 */ // @Test public void compareUnregister10000Times() { final List<BusUtilsVsEventBusTest> tests = new ArrayList<>(); for (int i = 0; i < 10000; i++) { BusUtilsVsEventBusTest test = new BusUtilsVsEventBusTest(); EventBus.getDefault().register(test); BusUtils.register(test); tests.add(test); } compareWithEventBus("Unregister 10000 times.", 10, 1, new CompareCallback() { @Override public void runEventBus() { for (BusUtilsVsEventBusTest test : tests) { EventBus.getDefault().unregister(test); } } @Override public void runBusUtils() { for (BusUtilsVsEventBusTest test : tests) { BusUtils.unregister(test); } } @Override public void restState() { for (BusUtilsVsEventBusTest test : tests) { EventBus.getDefault().register(test); BusUtils.register(test); } } }); for (BusUtilsVsEventBusTest test : tests) { EventBus.getDefault().unregister(test); BusUtils.unregister(test); } } /** * @param name * @param sampleSize * @param times * @param callback */ private void compareWithEventBus(String name, int sampleSize, int times, CompareCallback callback) { long[][] dur = new long[2][sampleSize]; for (int i = 0; i < sampleSize; i++) { long cur = System.currentTimeMillis(); for (int j = 0; j < times; j++) { callback.runEventBus(); } dur[0][i] = 
System.currentTimeMillis() - cur; cur = System.currentTimeMillis(); for (int j = 0; j < times; j++) { callback.runBusUtils(); } dur[1][i] = System.currentTimeMillis() - cur; callback.restState(); } long eventBusAverageTime = 0; long busUtilsAverageTime = 0; for (int i = 0; i < sampleSize; i++) { eventBusAverageTime += dur[0][i]; busUtilsAverageTime += dur[1][i]; } System.out.println( name + "\nEventBusCostTime: " + eventBusAverageTime / sampleSize + "\nBusUtilsCostTime: " + busUtilsAverageTime / sampleSize ); } public interface CompareCallback { void runEventBus(); void runBusUtils(); void restState(); } } ```
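The measurement loop in `compareWithEventBus` above follows a simple pattern: run each implementation `times` times back to back, repeat for `sampleSize` samples, and report the mean wall-clock duration per sample. A stripped-down, self-contained sketch of that pattern (the class and method names here are illustrative, not part of the test class above):

```java
// Minimal timing harness: each sample times `times` back-to-back runs,
// and the mean over `sampleSize` samples is returned in milliseconds.
public class TimingHarness {
    interface Task {
        void run();
    }

    static long averageMillis(Task task, int sampleSize, int times) {
        long total = 0;
        for (int i = 0; i < sampleSize; i++) {
            long start = System.currentTimeMillis();
            for (int j = 0; j < times; j++) {
                task.run();
            }
            total += System.currentTimeMillis() - start;
        }
        return total / sampleSize;
    }

    public static void main(String[] args) {
        long avg = averageMillis(() -> Math.sqrt(12345.0), 10, 100_000);
        System.out.println("average ms per sample: " + avg);
    }
}
```

Averaging over several samples, as the test above does, damps out JIT warm-up and GC pauses that would dominate a single measurement.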
```go
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

path_to_url

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package values

import (
	"io"
	"net/url"
	"os"
	"strings"

	"github.com/pkg/errors"
	"sigs.k8s.io/yaml"

	"helm.sh/helm/v3/pkg/getter"
	"helm.sh/helm/v3/pkg/strvals"
)

// Options captures the different ways to specify values
type Options struct {
	ValueFiles    []string // -f/--values
	StringValues  []string // --set-string
	Values        []string // --set
	FileValues    []string // --set-file
	JSONValues    []string // --set-json
	LiteralValues []string // --set-literal
}

// MergeValues merges values from files specified via -f/--values and directly
// via --set-json, --set, --set-string, or --set-file, marshaling them to YAML
func (opts *Options) MergeValues(p getter.Providers) (map[string]interface{}, error) {
	base := map[string]interface{}{}

	// User specified values files via -f/--values
	for _, filePath := range opts.ValueFiles {
		currentMap := map[string]interface{}{}

		bytes, err := readFile(filePath, p)
		if err != nil {
			return nil, err
		}

		if err := yaml.Unmarshal(bytes, &currentMap); err != nil {
			return nil, errors.Wrapf(err, "failed to parse %s", filePath)
		}
		// Merge with the previous map
		base = mergeMaps(base, currentMap)
	}

	// User specified a value via --set-json
	for _, value := range opts.JSONValues {
		if err := strvals.ParseJSON(value, base); err != nil {
			return nil, errors.Errorf("failed parsing --set-json data %s", value)
		}
	}

	// User specified a value via --set
	for _, value := range opts.Values {
		if err := strvals.ParseInto(value, base); err != nil {
			return nil, errors.Wrap(err, "failed parsing --set data")
		}
	}

	// User specified a value via --set-string
	for _, value := range opts.StringValues {
		if err := strvals.ParseIntoString(value, base); err != nil {
			return nil, errors.Wrap(err, "failed parsing --set-string data")
		}
	}

	// User specified a value via --set-file
	for _, value := range opts.FileValues {
		reader := func(rs []rune) (interface{}, error) {
			bytes, err := readFile(string(rs), p)
			if err != nil {
				return nil, err
			}
			return string(bytes), err
		}
		if err := strvals.ParseIntoFile(value, base, reader); err != nil {
			return nil, errors.Wrap(err, "failed parsing --set-file data")
		}
	}

	// User specified a value via --set-literal
	for _, value := range opts.LiteralValues {
		if err := strvals.ParseLiteralInto(value, base); err != nil {
			return nil, errors.Wrap(err, "failed parsing --set-literal data")
		}
	}

	return base, nil
}

func mergeMaps(a, b map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(a))
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if v, ok := v.(map[string]interface{}); ok {
			if bv, ok := out[k]; ok {
				if bv, ok := bv.(map[string]interface{}); ok {
					out[k] = mergeMaps(bv, v)
					continue
				}
			}
		}
		out[k] = v
	}
	return out
}

// readFile loads a file from stdin, the local directory, or a remote file with a url.
func readFile(filePath string, p getter.Providers) ([]byte, error) {
	if strings.TrimSpace(filePath) == "-" {
		return io.ReadAll(os.Stdin)
	}
	u, err := url.Parse(filePath)
	if err != nil {
		return nil, err
	}

	// FIXME: maybe someone handle other protocols like ftp.
	g, err := p.ByScheme(u.Scheme)
	if err != nil {
		return os.ReadFile(filePath)
	}
	data, err := g.Get(filePath, getter.WithURL(filePath))
	if err != nil {
		return nil, err
	}
	return data.Bytes(), err
}
```
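The merge precedence implemented by `mergeMaps` above (later sources win for scalar keys, while nested maps are merged key by key) can be demonstrated with a small self-contained program; the `image`/`tag` values are made up for illustration:

```go
package main

import "fmt"

// mergeMaps mirrors the recursive merge above: keys in b override keys
// in a, except when both values are maps, which are merged recursively.
func mergeMaps(a, b map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(a))
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if v, ok := v.(map[string]interface{}); ok {
			if bv, ok := out[k]; ok {
				if bv, ok := bv.(map[string]interface{}); ok {
					out[k] = mergeMaps(bv, v)
					continue
				}
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	base := map[string]interface{}{
		"image": map[string]interface{}{"tag": "1.0", "pullPolicy": "IfNotPresent"},
	}
	override := map[string]interface{}{
		"image": map[string]interface{}{"tag": "2.0"},
	}
	merged := mergeMaps(base, override)
	img := merged["image"].(map[string]interface{})
	fmt.Println(img["tag"], img["pullPolicy"]) // the override wins for tag only
}
```

This is why a later `-f` values file can change one nested key without clobbering its siblings, while `--set` entries, applied after all files, take final precedence.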
```objective-c /* Bullet Continuous Collision Detection and Physics Library This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. */ #ifndef BT_SERIALIZER_H #define BT_SERIALIZER_H #include "btScalar.h" // has definitions like SIMD_FORCE_INLINE #include "btHashMap.h" #if !defined(__CELLOS_LV2__) && !defined(__MWERKS__) #include <memory.h> #endif #include <string.h> extern char sBulletDNAstr[]; extern int sBulletDNAlen; extern char sBulletDNAstr64[]; extern int sBulletDNAlen64; SIMD_FORCE_INLINE int btStrLen(const char* str) { if (!str) return (0); int len = 0; while (*str != 0) { str++; len++; } return len; } class btChunk { public: int m_chunkCode; int m_length; void* m_oldPtr; int m_dna_nr; int m_number; }; enum btSerializationFlags { BT_SERIALIZE_NO_BVH = 1, BT_SERIALIZE_NO_TRIANGLEINFOMAP = 2, BT_SERIALIZE_NO_DUPLICATE_ASSERT = 4, BT_SERIALIZE_CONTACT_MANIFOLDS = 8, }; class btSerializer { public: virtual ~btSerializer() {} virtual const unsigned char* getBufferPointer() const = 0; virtual int getCurrentBufferSize() const = 0; virtual btChunk* allocate(size_t size, int numElements) = 0; virtual void finalizeChunk(btChunk* chunk, const char* structType, int chunkCode, void* oldPtr) = 0; virtual void* findPointer(void* oldPtr) 
= 0; virtual void* getUniquePointer(void* oldPtr) = 0; virtual void startSerialization() = 0; virtual void finishSerialization() = 0; virtual const char* findNameForPointer(const void* ptr) const = 0; virtual void registerNameForPointer(const void* ptr, const char* name) = 0; virtual void serializeName(const char* ptr) = 0; virtual int getSerializationFlags() const = 0; virtual void setSerializationFlags(int flags) = 0; virtual int getNumChunks() const = 0; virtual const btChunk* getChunk(int chunkIndex) const = 0; }; #define BT_HEADER_LENGTH 12 #if defined(__sgi) || defined(__sparc) || defined(__sparc__) || defined(__PPC__) || defined(__ppc__) || defined(__BIG_ENDIAN__) #define BT_MAKE_ID(a, b, c, d) ((int)(a) << 24 | (int)(b) << 16 | (c) << 8 | (d)) #else #define BT_MAKE_ID(a, b, c, d) ((int)(d) << 24 | (int)(c) << 16 | (b) << 8 | (a)) #endif #define BT_MULTIBODY_CODE BT_MAKE_ID('M', 'B', 'D', 'Y') #define BT_MB_LINKCOLLIDER_CODE BT_MAKE_ID('M', 'B', 'L', 'C') #define BT_SOFTBODY_CODE BT_MAKE_ID('S', 'B', 'D', 'Y') #define BT_COLLISIONOBJECT_CODE BT_MAKE_ID('C', 'O', 'B', 'J') #define BT_RIGIDBODY_CODE BT_MAKE_ID('R', 'B', 'D', 'Y') #define BT_CONSTRAINT_CODE BT_MAKE_ID('C', 'O', 'N', 'S') #define BT_BOXSHAPE_CODE BT_MAKE_ID('B', 'O', 'X', 'S') #define BT_QUANTIZED_BVH_CODE BT_MAKE_ID('Q', 'B', 'V', 'H') #define BT_TRIANLGE_INFO_MAP BT_MAKE_ID('T', 'M', 'A', 'P') #define BT_SHAPE_CODE BT_MAKE_ID('S', 'H', 'A', 'P') #define BT_ARRAY_CODE BT_MAKE_ID('A', 'R', 'A', 'Y') #define BT_SBMATERIAL_CODE BT_MAKE_ID('S', 'B', 'M', 'T') #define BT_SBNODE_CODE BT_MAKE_ID('S', 'B', 'N', 'D') #define BT_DYNAMICSWORLD_CODE BT_MAKE_ID('D', 'W', 'L', 'D') #define BT_CONTACTMANIFOLD_CODE BT_MAKE_ID('C', 'O', 'N', 'T') #define BT_DNA_CODE BT_MAKE_ID('D', 'N', 'A', '1') struct btPointerUid { union { void* m_ptr; int m_uniqueIds[2]; }; }; struct btBulletSerializedArrays { btBulletSerializedArrays() { } btAlignedObjectArray<struct btQuantizedBvhDoubleData*> m_bvhsDouble; 
btAlignedObjectArray<struct btQuantizedBvhFloatData*> m_bvhsFloat; btAlignedObjectArray<struct btCollisionShapeData*> m_colShapeData; btAlignedObjectArray<struct btDynamicsWorldDoubleData*> m_dynamicWorldInfoDataDouble; btAlignedObjectArray<struct btDynamicsWorldFloatData*> m_dynamicWorldInfoDataFloat; btAlignedObjectArray<struct btRigidBodyDoubleData*> m_rigidBodyDataDouble; btAlignedObjectArray<struct btRigidBodyFloatData*> m_rigidBodyDataFloat; btAlignedObjectArray<struct btCollisionObjectDoubleData*> m_collisionObjectDataDouble; btAlignedObjectArray<struct btCollisionObjectFloatData*> m_collisionObjectDataFloat; btAlignedObjectArray<struct btTypedConstraintFloatData*> m_constraintDataFloat; btAlignedObjectArray<struct btTypedConstraintDoubleData*> m_constraintDataDouble; btAlignedObjectArray<struct btTypedConstraintData*> m_constraintData; //for backwards compatibility btAlignedObjectArray<struct btSoftBodyFloatData*> m_softBodyFloatData; btAlignedObjectArray<struct btSoftBodyDoubleData*> m_softBodyDoubleData; }; ///The btDefaultSerializer is the main Bullet serialization class. ///The constructor takes an optional argument for backwards compatibility, it is recommended to leave this empty/zero. 
class btDefaultSerializer : public btSerializer { protected: btAlignedObjectArray<char*> mTypes; btAlignedObjectArray<short*> mStructs; btAlignedObjectArray<short> mTlens; btHashMap<btHashInt, int> mStructReverse; btHashMap<btHashString, int> mTypeLookup; btHashMap<btHashPtr, void*> m_chunkP; btHashMap<btHashPtr, const char*> m_nameMap; btHashMap<btHashPtr, btPointerUid> m_uniquePointers; int m_uniqueIdGenerator; int m_totalSize; unsigned char* m_buffer; bool m_ownsBuffer; int m_currentSize; void* m_dna; int m_dnaLength; int m_serializationFlags; btAlignedObjectArray<btChunk*> m_chunkPtrs; protected: virtual void* findPointer(void* oldPtr) { void** ptr = m_chunkP.find(oldPtr); if (ptr && *ptr) return *ptr; return 0; } virtual void writeDNA() { btChunk* dnaChunk = allocate(m_dnaLength, 1); memcpy(dnaChunk->m_oldPtr, m_dna, m_dnaLength); finalizeChunk(dnaChunk, "DNA1", BT_DNA_CODE, m_dna); } int getReverseType(const char* type) const { btHashString key(type); const int* valuePtr = mTypeLookup.find(key); if (valuePtr) return *valuePtr; return -1; } void initDNA(const char* bdnaOrg, int dnalen) { ///was already initialized if (m_dna) return; int littleEndian = 1; littleEndian = ((char*)&littleEndian)[0]; m_dna = btAlignedAlloc(dnalen, 16); memcpy(m_dna, bdnaOrg, dnalen); m_dnaLength = dnalen; int* intPtr = 0; short* shtPtr = 0; char* cp = 0; int dataLen = 0; intPtr = (int*)m_dna; /* SDNA (4 bytes) (magic number) NAME (4 bytes) <nr> (4 bytes) amount of names (int) <string> <string> */ if (strncmp((const char*)m_dna, "SDNA", 4) == 0) { // skip ++ NAME intPtr++; intPtr++; } // Parse names if (!littleEndian) *intPtr = btSwapEndian(*intPtr); dataLen = *intPtr; intPtr++; cp = (char*)intPtr; int i; for (i = 0; i < dataLen; i++) { while (*cp) cp++; cp++; } cp = btAlignPointer(cp, 4); /* TYPE (4 bytes) <nr> amount of types (int) <string> <string> */ intPtr = (int*)cp; btAssert(strncmp(cp, "TYPE", 4) == 0); intPtr++; if (!littleEndian) *intPtr = btSwapEndian(*intPtr); dataLen = 
*intPtr; intPtr++; cp = (char*)intPtr; for (i = 0; i < dataLen; i++) { mTypes.push_back(cp); while (*cp) cp++; cp++; } cp = btAlignPointer(cp, 4); /* TLEN (4 bytes) <len> (short) the lengths of types <len> */ // Parse type lens intPtr = (int*)cp; btAssert(strncmp(cp, "TLEN", 4) == 0); intPtr++; dataLen = (int)mTypes.size(); shtPtr = (short*)intPtr; for (i = 0; i < dataLen; i++, shtPtr++) { if (!littleEndian) shtPtr[0] = btSwapEndian(shtPtr[0]); mTlens.push_back(shtPtr[0]); } if (dataLen & 1) shtPtr++; /* STRC (4 bytes) <nr> amount of structs (int) <typenr> <nr_of_elems> <typenr> <namenr> <typenr> <namenr> */ intPtr = (int*)shtPtr; cp = (char*)intPtr; btAssert(strncmp(cp, "STRC", 4) == 0); intPtr++; if (!littleEndian) *intPtr = btSwapEndian(*intPtr); dataLen = *intPtr; intPtr++; shtPtr = (short*)intPtr; for (i = 0; i < dataLen; i++) { mStructs.push_back(shtPtr); if (!littleEndian) { shtPtr[0] = btSwapEndian(shtPtr[0]); shtPtr[1] = btSwapEndian(shtPtr[1]); int len = shtPtr[1]; shtPtr += 2; for (int a = 0; a < len; a++, shtPtr += 2) { shtPtr[0] = btSwapEndian(shtPtr[0]); shtPtr[1] = btSwapEndian(shtPtr[1]); } } else { shtPtr += (2 * shtPtr[1]) + 2; } } // build reverse lookups for (i = 0; i < (int)mStructs.size(); i++) { short* strc = mStructs.at(i); mStructReverse.insert(strc[0], i); mTypeLookup.insert(btHashString(mTypes[strc[0]]), i); } } public: btHashMap<btHashPtr, void*> m_skipPointers; btDefaultSerializer(int totalSize = 0, unsigned char* buffer = 0) : m_uniqueIdGenerator(0), m_totalSize(totalSize), m_currentSize(0), m_dna(0), m_dnaLength(0), m_serializationFlags(0) { if (buffer == 0) { m_buffer = m_totalSize ? 
(unsigned char*)btAlignedAlloc(totalSize, 16) : 0; m_ownsBuffer = true; } else { m_buffer = buffer; m_ownsBuffer = false; } const bool VOID_IS_8 = ((sizeof(void*) == 8)); #ifdef BT_INTERNAL_UPDATE_SERIALIZATION_STRUCTURES if (VOID_IS_8) { #if _WIN64 initDNA((const char*)sBulletDNAstr64, sBulletDNAlen64); #else btAssert(0); #endif } else { #ifndef _WIN64 initDNA((const char*)sBulletDNAstr, sBulletDNAlen); #else btAssert(0); #endif } #else //BT_INTERNAL_UPDATE_SERIALIZATION_STRUCTURES if (VOID_IS_8) { initDNA((const char*)sBulletDNAstr64, sBulletDNAlen64); } else { initDNA((const char*)sBulletDNAstr, sBulletDNAlen); } #endif //BT_INTERNAL_UPDATE_SERIALIZATION_STRUCTURES } virtual ~btDefaultSerializer() { if (m_buffer && m_ownsBuffer) btAlignedFree(m_buffer); if (m_dna) btAlignedFree(m_dna); } static int getMemoryDnaSizeInBytes() { const bool VOID_IS_8 = ((sizeof(void*) == 8)); if (VOID_IS_8) { return sBulletDNAlen64; } return sBulletDNAlen; } static const char* getMemoryDna() { const bool VOID_IS_8 = ((sizeof(void*) == 8)); if (VOID_IS_8) { return (const char*)sBulletDNAstr64; } return (const char*)sBulletDNAstr; } void insertHeader() { writeHeader(m_buffer); m_currentSize += BT_HEADER_LENGTH; } void writeHeader(unsigned char* buffer) const { #ifdef BT_USE_DOUBLE_PRECISION memcpy(buffer, "BULLETd", 7); #else memcpy(buffer, "BULLETf", 7); #endif //BT_USE_DOUBLE_PRECISION int littleEndian = 1; littleEndian = ((char*)&littleEndian)[0]; if (sizeof(void*) == 8) { buffer[7] = '-'; } else { buffer[7] = '_'; } if (littleEndian) { buffer[8] = 'v'; } else { buffer[8] = 'V'; } buffer[9] = '3'; buffer[10] = '2'; buffer[11] = '0'; } virtual void startSerialization() { m_uniqueIdGenerator = 1; if (m_totalSize) { unsigned char* buffer = internalAlloc(BT_HEADER_LENGTH); writeHeader(buffer); } } virtual void finishSerialization() { writeDNA(); //if we didn't pre-allocate a buffer, we need to create a contiguous buffer now if (!m_totalSize) { if (m_buffer) btAlignedFree(m_buffer); 
m_currentSize += BT_HEADER_LENGTH; m_buffer = (unsigned char*)btAlignedAlloc(m_currentSize, 16); unsigned char* currentPtr = m_buffer; writeHeader(m_buffer); currentPtr += BT_HEADER_LENGTH; for (int i = 0; i < m_chunkPtrs.size(); i++) { int curLength = sizeof(btChunk) + m_chunkPtrs[i]->m_length; memcpy(currentPtr, m_chunkPtrs[i], curLength); btAlignedFree(m_chunkPtrs[i]); currentPtr += curLength; } } mTypes.clear(); mStructs.clear(); mTlens.clear(); mStructReverse.clear(); mTypeLookup.clear(); m_skipPointers.clear(); m_chunkP.clear(); m_nameMap.clear(); m_uniquePointers.clear(); m_chunkPtrs.clear(); } virtual void* getUniquePointer(void* oldPtr) { btAssert(m_uniqueIdGenerator >= 0); if (!oldPtr) return 0; btPointerUid* uptr = (btPointerUid*)m_uniquePointers.find(oldPtr); if (uptr) { return uptr->m_ptr; } void** ptr2 = m_skipPointers[oldPtr]; if (ptr2) { return 0; } m_uniqueIdGenerator++; btPointerUid uid; uid.m_uniqueIds[0] = m_uniqueIdGenerator; uid.m_uniqueIds[1] = m_uniqueIdGenerator; m_uniquePointers.insert(oldPtr, uid); return uid.m_ptr; } virtual const unsigned char* getBufferPointer() const { return m_buffer; } virtual int getCurrentBufferSize() const { return m_currentSize; } virtual void finalizeChunk(btChunk* chunk, const char* structType, int chunkCode, void* oldPtr) { if (!(m_serializationFlags & BT_SERIALIZE_NO_DUPLICATE_ASSERT)) { btAssert(!findPointer(oldPtr)); } chunk->m_dna_nr = getReverseType(structType); chunk->m_chunkCode = chunkCode; void* uniquePtr = getUniquePointer(oldPtr); m_chunkP.insert(oldPtr, uniquePtr); //chunk->m_oldPtr); chunk->m_oldPtr = uniquePtr; //oldPtr; } virtual unsigned char* internalAlloc(size_t size) { unsigned char* ptr = 0; if (m_totalSize) { ptr = m_buffer + m_currentSize; m_currentSize += int(size); btAssert(m_currentSize < m_totalSize); } else { ptr = (unsigned char*)btAlignedAlloc(size, 16); m_currentSize += int(size); } return ptr; } virtual btChunk* allocate(size_t size, int numElements) { unsigned char* ptr = 
internalAlloc(int(size) * numElements + sizeof(btChunk)); unsigned char* data = ptr + sizeof(btChunk); btChunk* chunk = (btChunk*)ptr; chunk->m_chunkCode = 0; chunk->m_oldPtr = data; chunk->m_length = int(size) * numElements; chunk->m_number = numElements; m_chunkPtrs.push_back(chunk); return chunk; } virtual const char* findNameForPointer(const void* ptr) const { const char* const* namePtr = m_nameMap.find(ptr); if (namePtr && *namePtr) return *namePtr; return 0; } virtual void registerNameForPointer(const void* ptr, const char* name) { m_nameMap.insert(ptr, name); } virtual void serializeName(const char* name) { if (name) { //don't serialize name twice if (findPointer((void*)name)) return; int len = btStrLen(name); if (len) { int newLen = len + 1; int padding = ((newLen + 3) & ~3) - newLen; newLen += padding; //serialize name string now btChunk* chunk = allocate(sizeof(char), newLen); char* destinationName = (char*)chunk->m_oldPtr; for (int i = 0; i < len; i++) { destinationName[i] = name[i]; } destinationName[len] = 0; finalizeChunk(chunk, "char", BT_ARRAY_CODE, (void*)name); } } } virtual int getSerializationFlags() const { return m_serializationFlags; } virtual void setSerializationFlags(int flags) { m_serializationFlags = flags; } int getNumChunks() const { return m_chunkPtrs.size(); } const btChunk* getChunk(int chunkIndex) const { return m_chunkPtrs[chunkIndex]; } }; ///In general it is best to use btDefaultSerializer, ///in particular when writing the data to disk or sending it over the network. ///The btInMemorySerializer is experimental and only suitable in a few cases. ///The btInMemorySerializer takes a shortcut and can be useful to create a deep-copy ///of objects. There will be a demo on how to use the btInMemorySerializer. 
#ifdef ENABLE_INMEMORY_SERIALIZER struct btInMemorySerializer : public btDefaultSerializer { btHashMap<btHashPtr, btChunk*> m_uid2ChunkPtr; btHashMap<btHashPtr, void*> m_orgPtr2UniqueDataPtr; btHashMap<btHashString, const void*> m_names2Ptr; btBulletSerializedArrays m_arrays; btInMemorySerializer(int totalSize = 0, unsigned char* buffer = 0) : btDefaultSerializer(totalSize, buffer) { } virtual void startSerialization() { m_uid2ChunkPtr.clear(); //todo: m_arrays.clear(); btDefaultSerializer::startSerialization(); } btChunk* findChunkFromUniquePointer(void* uniquePointer) { btChunk** chkPtr = m_uid2ChunkPtr[uniquePointer]; if (chkPtr) { return *chkPtr; } return 0; } virtual void registerNameForPointer(const void* ptr, const char* name) { btDefaultSerializer::registerNameForPointer(ptr, name); m_names2Ptr.insert(name, ptr); } virtual void finishSerialization() { } virtual void* getUniquePointer(void* oldPtr) { if (oldPtr == 0) return 0; // void* uniquePtr = getUniquePointer(oldPtr); btChunk* chunk = findChunkFromUniquePointer(oldPtr); if (chunk) { return chunk->m_oldPtr; } else { const char* n = (const char*)oldPtr; const void** ptr = m_names2Ptr[n]; if (ptr) { return oldPtr; } else { void** ptr2 = m_skipPointers[oldPtr]; if (ptr2) { return 0; } else { //If this assert hit, serialization happened in the wrong order // 'getUniquePointer' btAssert(0); } } return 0; } return oldPtr; } virtual void finalizeChunk(btChunk* chunk, const char* structType, int chunkCode, void* oldPtr) { if (!(m_serializationFlags & BT_SERIALIZE_NO_DUPLICATE_ASSERT)) { btAssert(!findPointer(oldPtr)); } chunk->m_dna_nr = getReverseType(structType); chunk->m_chunkCode = chunkCode; //void* uniquePtr = getUniquePointer(oldPtr); m_chunkP.insert(oldPtr, oldPtr); //chunk->m_oldPtr); // chunk->m_oldPtr = uniquePtr;//oldPtr; void* uid = findPointer(oldPtr); m_uid2ChunkPtr.insert(uid, chunk); switch (chunk->m_chunkCode) { case BT_SOFTBODY_CODE: { #ifdef BT_USE_DOUBLE_PRECISION 
m_arrays.m_softBodyDoubleData.push_back((btSoftBodyDoubleData*)chunk->m_oldPtr); #else m_arrays.m_softBodyFloatData.push_back((btSoftBodyFloatData*)chunk->m_oldPtr); #endif break; } case BT_COLLISIONOBJECT_CODE: { #ifdef BT_USE_DOUBLE_PRECISION m_arrays.m_collisionObjectDataDouble.push_back((btCollisionObjectDoubleData*)chunk->m_oldPtr); #else //BT_USE_DOUBLE_PRECISION m_arrays.m_collisionObjectDataFloat.push_back((btCollisionObjectFloatData*)chunk->m_oldPtr); #endif //BT_USE_DOUBLE_PRECISION break; } case BT_RIGIDBODY_CODE: { #ifdef BT_USE_DOUBLE_PRECISION m_arrays.m_rigidBodyDataDouble.push_back((btRigidBodyDoubleData*)chunk->m_oldPtr); #else m_arrays.m_rigidBodyDataFloat.push_back((btRigidBodyFloatData*)chunk->m_oldPtr); #endif //BT_USE_DOUBLE_PRECISION break; }; case BT_CONSTRAINT_CODE: { #ifdef BT_USE_DOUBLE_PRECISION m_arrays.m_constraintDataDouble.push_back((btTypedConstraintDoubleData*)chunk->m_oldPtr); #else m_arrays.m_constraintDataFloat.push_back((btTypedConstraintFloatData*)chunk->m_oldPtr); #endif break; } case BT_QUANTIZED_BVH_CODE: { #ifdef BT_USE_DOUBLE_PRECISION m_arrays.m_bvhsDouble.push_back((btQuantizedBvhDoubleData*)chunk->m_oldPtr); #else m_arrays.m_bvhsFloat.push_back((btQuantizedBvhFloatData*)chunk->m_oldPtr); #endif break; } case BT_SHAPE_CODE: { btCollisionShapeData* shapeData = (btCollisionShapeData*)chunk->m_oldPtr; m_arrays.m_colShapeData.push_back(shapeData); break; } case BT_TRIANLGE_INFO_MAP: case BT_ARRAY_CODE: case BT_SBMATERIAL_CODE: case BT_SBNODE_CODE: case BT_DYNAMICSWORLD_CODE: case BT_DNA_CODE: { break; } default: { } }; } int getNumChunks() const { return m_uid2ChunkPtr.size(); } const btChunk* getChunk(int chunkIndex) const { return *m_uid2ChunkPtr.getAtIndex(chunkIndex); } }; #endif //ENABLE_INMEMORY_SERIALIZER #endif //BT_SERIALIZER_H ```
```javascript
import { filterUnreadMessageIds, filterUnreadMessagesInRange } from '../unread';

describe('filterUnreadMessageIds', () => {
  test('empty message list has no unread messages', () => {
    const messages = [];
    const flags = {};
    const expectedUnread = [];
    const actualUnread = filterUnreadMessageIds(messages, flags);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('messages with no flags or empty flag array are not read', () => {
    const messages = [1, 2, 3];
    const flags = {
      read: {
        3: true,
      },
    };
    const expectedUnread = [1, 2];
    const actualUnread = filterUnreadMessageIds(messages, flags);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('messages are not read if not in flags object, regardless of message property', () => {
    const messages = [1];
    const flags = {};
    const expectedUnread = [1];
    const actualUnread = filterUnreadMessageIds(messages, flags);
    expect(actualUnread).toEqual(expectedUnread);
  });
});

describe('filterUnreadMessagesInRange', () => {
  test('if from or to ids are -1 result is empty', () => {
    const messages = [{ id: 1 }];
    const flags = {};
    const expectedUnread = [];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, -1, -1);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('empty message list has no unread messages', () => {
    const messages = [];
    const flags = {};
    const expectedUnread = [];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, 1, 5);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('messages with no flags or empty flag array are not read', () => {
    const messages = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }];
    const flags = {
      read: {
        3: true,
      },
    };
    const expectedUnread = [2, 4, 5];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, 2, 5);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('if start is after end no messages are returned', () => {
    const messages = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }, { id: 6 }];
    const flags = {};
    const expectedUnread = [];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, 5, 1);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('messages in outbox are filtered out', () => {
    const messages = [{ id: 1 }, { id: 2 }, { id: 34567, isOutbox: true }];
    const flags = {};
    const expectedUnread = [1, 2];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, 1, Number.MAX_SAFE_INTEGER);
    expect(actualUnread).toEqual(expectedUnread);
  });

  test('messages are not read if not in flags object, regardless of message property', () => {
    const messages = [{ id: 1 }];
    const flags = {};
    const expectedUnread = [1];
    const actualUnread = filterUnreadMessagesInRange(messages, flags, 1, 5);
    expect(actualUnread).toEqual(expectedUnread);
  });
});
```
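The suites above pin down a simple contract: a message id counts as unread unless it appears under `flags.read`. As a rough sketch of that predicate (a hypothetical stand-in, not the actual `../unread` module), the core check could look like:

```javascript
// Hypothetical stand-in for filterUnreadMessageIds, mirroring only the
// behavior the tests above exercise: an id is unread unless flags.read
// marks it as read.
function filterUnreadIdsSketch(messageIds, flags) {
  const readMap = (flags && flags.read) || {};
  return messageIds.filter(id => !readMap[id]);
}

console.log(filterUnreadIdsSketch([1, 2, 3], { read: { 3: true } })); // [ 1, 2 ]
```

Note that the real module also handles ranges and outbox filtering, which this sketch deliberately omits.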
The Fédération internationale du béton – International Federation for Structural Concrete (fib) is a not-for-profit association committed to advancing the technical, economic, aesthetic and environmental performances of concrete structures worldwide. The organization depends on the voluntary contributions of international experts to achieve its mission and plays a role in stimulating research and promoting the use and development of concrete. History The fib was created in 1998 via the merger of the CEB and the FIP. FIP Fédération Internationale de la Précontrainte - International Federation for Prestressing was inaugurated in 1952 at an international meeting in Cambridge, United Kingdom. CEB The Comité européen du béton - European Committee for Concrete (later: Comité euro-international du béton) was established in 1953. In 1962 a common initiative by the FIP and CEB led to the creation of the 'Mixed CEB-FIP Committee for Drafting of Recommendations for Prestressed Concrete'. In 1983 the Ecole polytechnique fédérale de Lausanne (EPFL) in Switzerland invited the CEB to open an office on its campus. Today this office is the headquarters of the fib. The CEB and the FIP merged in 1998 during the last FIP Congress to form the "fib". The fib continues the work of its founding associations by providing technical reports, state-of-the-art reports, manuals, textbooks, guides, recommendations and model codes. Working structure The fib’s general assembly (GA) is composed of delegates appointed by the organization’s national member groups. There are forty-one national member groups (NMGs) in the fib. They act as forums for co-operation and coordination. The general assembly deals with high-level administrative and technical matters, such as elections, finances, statutes and the approval of model codes. The technical council (TC) oversees the work of the commissions and task groups. 
The commissions and task groups of the fib develop the technical bulletins that form the cornerstone of the fib’s activities. The presidium is the organization’s executive committee and implements decisions made by the GA and the TC. It handles such matters as the scheduling of events, membership, awards and honours. Member countries As of 2019 the fib counted forty-five member countries, among them: Argentina, Australia, Austria, Belgium, Brazil, Canada, China, Cyprus, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, India, Indonesia, Iran, Israel, Italy, Japan, Luxembourg, the Netherlands, New Zealand, Norway, Poland, Portugal, Romania, Russia, Slovakia, Slovenia, South Africa, South Korea, Spain, Sweden, Switzerland, Turkey, Ukraine, the United Arab Emirates, the United Kingdom and the United States of America. Publications Future Publication: fib Model Code for Concrete Structures 2020 The fib aims to produce the first general structural code for new and existing concrete structures, one that fully integrates the provisions for the design of new concrete structures with matters relating to existing concrete structures, including situations where new structural members are incorporated as parts of existing structures. The fib project for advancing the fib Model Code for Concrete Structures will use the working title of “fib Model Code 2020”. fib Model Code for Concrete Structures 2010 This work, published as a hardcover and an e-book by John Wiley & Sons, presents new developments in and ideas about concrete structures and structural materials and describes the complete life cycle of structures, from conceptual design through to dismantlement. Its purpose is to serve as a basis for future codes and to present new developments in structural materials, techniques and the means to achieve optimum behaviour. It is an essential document for national and international code committees, practitioners and researchers.
fib Bulletins The bulletins include technical reports, state-of-the-art reports, manuals, textbooks, guides, recommendations and model codes, all of which form a detailed record of the results obtained by commissions and task groups in the field of research synthesis and operational applications to concrete structures. Journal Structural Concrete, the official journal of the fib, publishes peer-reviewed papers featuring the design, construction and performance of concrete structures, as well as broader issues such as environmental impact assessment. Newsletter fib-news is a newsletter that is printed at the back of Structural Concrete. Events Symposia Symposia organized by the national member groups of the fib are international meetings where innovations in concrete design and construction are analysed and debated. In principle, symposia take place in three years out of four. Past Symposia include Prague (1999), Orlando (2000), Berlin (2001), Athens (2003), Avignon (2004), New Delhi (2004), Budapest (2005), La Plata (2005), Dubrovnik (2007), Amsterdam (2008), London (2009), Prague (2011), Stockholm (2012), Tel Aviv (2013), Copenhagen (2015), Cape Town (2016), Maastricht (2017), Krakow (2019), Shanghai (2020), and Lisbon (2021). PhD Symposia are biennial forums that allow PhD students to share information with the international research community. Prizes are awarded for outstanding papers and presentations. Past PhD Symposia include Budapest (1996 and 1998), Vienna (2000), Munich (2002), Delft (2004), Zurich (2006), Stuttgart (2008), Copenhagen (2010), Karlsruhe (2012), Quebec (2014), Tokyo (2016), Prague (2018), and Paris (2020-2021). Congresses Held every four years, the fib Congress is the organization's flagship event, where practitioners and researchers from around the world convene to discuss and exhibit all aspects of concrete structures. Past Congresses include Osaka (2002), Naples (2006), Washington (2010), Mumbai (2014), and Melbourne (2018).
Courses and workshops Generally held at least once a year, short courses, seminars and workshops are aimed at local, specialized audiences and are presented by international experts. References External links Official website Civil engineering organizations International organisations based in Switzerland Organisations based in Lausanne Reinforced concrete Structural steel
The Counterfeit Man is a collection of science fiction short stories by American writer Alan E. Nourse, published in 1963 by David McKay. Several of the stories have a medical or psychological theme. Contents "The Counterfeit Man". The medical officer of an exploratory spaceship returning from Ganymede determines that the crew has been infiltrated by at least one highly malicious shapeshifting alien. He forces one intruder, which can almost perfectly mimic human physiology down to the cellular level, to betray itself and ejects it into space with the cooperation of the expedition commander. The doctor then sabotages the ship to temporarily strand it in orbit around Earth, going ahead in a shuttle with proof in hand to order a quarantine. When the ship lands, the entire crew is accounted for and the incredulous doctor storms back into the ship to search it, only to be ambushed and killed by the remaining alien, which has taken on his own guise. It heads into the surrounding city. This story was adapted as the second episode of the BBC television series Out of the Unknown. "The Canvas Bag" - A drifter with only a vague recall of his own past stops in a town and falls in love, which prompts him toward introspection. Examining his rather fuzzy memories, he realizes - to his shock - that he is over 150 years old. Having cursed his mother and his home, he has been punished with eternal homelessness, saddled with immortality, forgetfulness, and an irresistible thousand-year compulsion to wander the Earth. The girl he loved chases him down at the bus stop at the last minute, choosing to wander with him. "An Ounce of Cure" - a short, absurdist satirical piece on Nourse's own medical profession: a middle-aged man goes to his doctor seeking treatment of his foot pain, but instead becomes trapped in a maelstrom of arcane diagnostic procedures and endless referrals to increasingly ridiculous specialists. 
Eventually the patient gives up on conventional medicine and goes to a beturbaned Eastern mystic instead. "The Dark Door" - a former psychological experimenter, trapped in deeply paranoid persecution fantasies, appeals to his former mentor for help. The mentor, operating on a sinister hidden agenda, instead imprisons him in a machine resembling an early conception of a VR rig, subjecting him to a series of psychotic delusions. The purpose of the abuse is apparently to try to use trauma to force the subject to rediscover a vital finding he had uncovered during his earlier experimental work, which may be what had driven him insane to begin with. "Meeting of the Board" - another satire, this one set in a future in which American industry has been badly compromised by workers purchasing full joint-stock ownership of their companies, mismanaging them into a state of stolid uncompetitiveness. For its humor, the story relies on the role-reversal of workers abusing and mistreating management, until the white-collar employees go on strike, withholding managerial services. "Circus" - a very human-like alien is stranded on Earth, and futilely tries to convince Earthlings that he is an authentic extraterrestrial. Unfortunately the only human who will believe him is an SF writer, who over coffee in a diner gently informs the visitor that - due to his profession - no other human would find him credible on the subject. "My Friend Bobby" - a disturbing story told in first person, by a young boy who can read minds. His father is mostly absent, and his power causes his mother to slowly come to fear and hate him, his only companion being his collie Bobby, with whom he has formed a telepathic link. "The Link" - a cultured, gentle alien society has spent millennia fleeing from planet to planet, one step ahead of "the Hunters," their long-separated militaristic cousins, with whom they had fought a long-past war on their mutual homeworld. 
A young man and woman elect to stay behind on the aliens' current homeworld, which is being abandoned in the face of an advancing Hunter war fleet, to meet the pursuers face to face for the first time in ages. They hope to sue for peace, but find the Hunters implacable and cruel, lacking empathy or culture. Ordered to sing for the Hunters' commander, they are accused of trying to telepathically subvert their listeners, and are tortured for information until they erase their own minds in despair. However, their captors - oddly and subtly affected by their music - decide to dump them in the wilderness of the abandoned world, instead of killing them. The ending implies an "Adam and Eve" scenario. "Image of the Gods" - the hardscrabble human agricultural colony on the hostile world of Baron IV is informed that Earth has undergone a regime change, apparently for the worse, and that they must accept a new military governor, whose first act is to unreasonably increase their agricultural-export quota. The colonists refuse the order and attempt to offer armed resistance, with the unexpected aid of Baron IV's primitive but helpful alien autochthones, the Dusties, who - they learn - have come to worship them as gods. "The Expert Touch" - a reluctant experimental subject is tricked by his doctor into a painful battle against his inner demons, as part of a scheme to find the key to human sanity; the ordeal leaves him perfectly sane, but also - and perhaps as a consequence - highly unhelpful. "Second Sight" - a young woman is the only true psychic on Earth. She is pressured by her government handlers to take part in an experiment to try to induce her powers in other similarly handicapped people - the ending reveals that she is deaf and blind, all of her senses being entirely psionic. 1963 short story collections Science fiction short story collections
```typescript
/*
* @license Apache-2.0
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/

// TypeScript Version: 4.1

/// <reference types="@stdlib/types"/>

import { NumericArray } from '@stdlib/types/array';

/**
* Interface describing `nanmeanwd`.
*/
interface Routine {
	/**
	* Computes the arithmetic mean of a strided array, ignoring `NaN` values and using Welford's algorithm.
	*
	* @param N - number of indexed elements
	* @param x - input array
	* @param stride - stride length
	* @returns arithmetic mean
	*
	* @example
	* var x = [ 1.0, -2.0, NaN, 2.0 ];
	*
	* var v = nanmeanwd( x.length, x, 1 );
	* // returns ~0.3333
	*/
	( N: number, x: NumericArray, stride: number ): number;

	/**
	* Computes the arithmetic mean of a strided array, ignoring `NaN` values and using Welford's algorithm and alternative indexing semantics.
	*
	* @param N - number of indexed elements
	* @param x - input array
	* @param stride - stride length
	* @param offset - starting index
	* @returns arithmetic mean
	*
	* @example
	* var x = [ 1.0, -2.0, NaN, 2.0 ];
	*
	* var v = nanmeanwd.ndarray( x.length, x, 1, 0 );
	* // returns ~0.3333
	*/
	ndarray( N: number, x: NumericArray, stride: number, offset: number ): number;
}

/**
* Computes the arithmetic mean of a strided array, ignoring `NaN` values and using Welford's algorithm.
*
* @param N - number of indexed elements
* @param x - input array
* @param stride - stride length
* @returns arithmetic mean
*
* @example
* var x = [ 1.0, -2.0, NaN, 2.0 ];
*
* var v = nanmeanwd( x.length, x, 1 );
* // returns ~0.3333
*
* @example
* var x = [ 1.0, -2.0, NaN, 2.0 ];
*
* var v = nanmeanwd.ndarray( x.length, x, 1, 0 );
* // returns ~0.3333
*/
declare var nanmeanwd: Routine;


// EXPORTS //

export = nanmeanwd;
```
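The declaration above only types the routine. As a sketch of what the documented behavior implies (a hypothetical plain-JavaScript re-implementation, not the stdlib code), Welford's incremental mean update with `NaN` values skipped looks like:

```javascript
// Hypothetical sketch of a NaN-ignoring Welford mean over a strided array.
// The name and (N, x, stride, offset) signature mirror the declaration above.
function nanmeanwdSketch(N, x, stride, offset) {
  let mean = 0.0;
  let n = 0; // count of non-NaN elements seen so far
  let ix = offset;
  for (let i = 0; i < N; i++) {
    const v = x[ix];
    if (v === v) { // NaN is the only value for which v !== v
      n += 1;
      mean += (v - mean) / n; // Welford incremental update
    }
    ix += stride;
  }
  return n === 0 ? NaN : mean;
}

console.log(nanmeanwdSketch(4, [1.0, -2.0, NaN, 2.0], 1, 0)); // returns ~0.3333
```

The incremental update avoids accumulating a large running sum, which is the numerical-stability motivation for Welford's algorithm in the first place.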
David Gommon (12 December 1913 – 20 January 1987) was a British painter born in Battersea, South London. Early life and education David Gommon was born on 12 December 1913 in Battersea in South London. His father was a Londoner, a journeyman carpenter. At the age of 16 he was enrolled in Battersea Polytechnic and the Clapham School of Art. He was able to visit the art galleries of the Netherlands to study the paintings of the great masters. He met the art collector Lucy Carrington Wertheim, and when he was 18 or 19 she became his patron, paying £2 a week for his work. Career It was through Lucy Wertheim that he held his first one-man show, at her gallery in Burlington Gardens, and attracted positive critical attention. During this time he met many other patrons of the arts, and he painted the young dancers Margot Fonteyn and Robert Helpmann at Sadler's Wells. He was part of the Twenties Group supported by Lucy Wertheim, which included Christopher Wood, Barbara Hepworth, Roger Hilton, Robert Medley, Phelan Gibb, David Burton, Humphrey Slater and Victor Pasmore. In his own work, he gradually focussed on the essence of the English and Welsh landscapes. In 1938, he stopped painting altogether. Teaching His first teaching job was at Northampton Grammar School, where he taught pupils including the actor and artist Jonathan Adams, of whom he completed a portrait. While there he painted until he retired. He would often paint reproductions of famous paintings to illustrate his lessons; the sets for the school's regular theatrical productions would be designed, painted and constructed in his art room. In the 1960s he delivered WEA lectures on art for a number of years in Northampton and the county. His last commission, the two large murals at St Crispin's Hospital, Northampton, arose from this work. Personal life During the Second World War his spinal curvature rendered him unfit for military service, so he served instead in the London Auxiliary Fire Service.
During the war, he had met and married Jean Vipond. He died in 1987. Reviews Art critic Ian Mayes summed up this aspect of his work in his review of his 1975 exhibition at St Catharine's College, Cambridge: “I know of few artists whose work communicates such a sense of joy in life as that which comes from these beautifully quiet, very modest and English paintings, so accurate in their evocation of the changing moods and feeling of nature. The landscapes (which together with the related paintings of enclosed gardens and cricket matches I consider to be his finest work) show that in his use of colour and simplified shapes he has found a personal and eloquent language, perfectly suited to its purpose; and an important part of that purpose is the expression of wonderment and delight in nature.” Exhibitions His work forms part of the permanent collections at: Salford City Art Gallery; the Whitworth Art Gallery, Manchester; Northampton Art Gallery; the University of Leicester; Auckland Art Gallery, New Zealand; Queensland Art Gallery, Brisbane, Australia; Barbados City Art Gallery; and the Chapel of Canberra Grammar School, Australia. Exhibitions have also been shown at: Northampton Art Gallery; the House of Commons; the Herbert Gallery, Coventry; Kettering Art Gallery; Luton Museum and Art Gallery; Michael Jones Jewellers, Northampton; Burton-on-Trent Art Gallery; Gainsborough's House, Sudbury; Stafford Art Gallery; South London Art Gallery, Camberwell; the Paris Salon, 1979; the Liverpool School of Architecture; the University Centre, Northampton; Vaughan College, Leicester; St Catharine's College, Cambridge; the Fairfield Halls, Croydon; Durham City Art Gallery; the Angel Row Gallery, Nottingham; St Crispin Hospital, Northampton (Creation Mural and Shopping Mural); St Saviours Parish Church, Oxton (arts festival); and the Williamson Art Gallery, Birkenhead. Bibliography Wertheim, Lucy (1947) Adventure in Art, Nicholson and Watson, London, p. 48.
Hoskin, Sarah and Haggard, Liz (1999) Healing the Hospital Environment, Taylor & Francis, pp. 62-63. References 1987 deaths 1913 births 20th-century British painters British male painters Painters from London 20th-century British male artists
```python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest

import numpy as np
from get_test_cover_info import (
    XPUOpTestWrapper,
    check_run_big_shape_test,
    create_test_class,
    get_xpu_op_support_types,
)
from op_test_xpu import XPUOpTest

import paddle

paddle.enable_static()


class XPUTestReshapeOp(XPUOpTestWrapper):
    def __init__(self):
        self.op_name = "reshape2"
        self.use_dynamic_create_class = False

    # Situation 1: have shape (list, no tensor), no actual shape (Tensor)
    class TestReshapeOp(XPUOpTest):
        def setUp(self):
            self.init_data()
            self.op_type = "reshape2"
            self.dtype = self.in_type
            self.init_test_input()
            self.init_test_output()
            self.init_attrs()

        def init_data(self):
            self.ori_shape = (2, 60)
            self.new_shape = (12, 10)
            self.infered_shape = (12, 10)

        def init_test_input(self):
            self.inputs = {
                "X": np.random.random(self.ori_shape).astype(self.dtype)
            }

        def init_test_output(self):
            self.outputs = {
                "Out": self.inputs["X"].reshape(self.infered_shape),
                'XShape': np.random.random(self.ori_shape).astype(self.dtype),
            }

        def init_attrs(self):
            self.attrs = {"shape": self.new_shape, "use_xpu": True}

        def test_check_output(self):
            if paddle.is_compiled_with_xpu():
                place = paddle.XPUPlace(0)
                self.check_output_with_place(place, no_check_set=['XShape'])

        def test_check_grad(self):
            if paddle.is_compiled_with_xpu():
                place = paddle.XPUPlace(0)
                self.check_grad_with_place(place, ["X"], "Out")

    class TestReshapeOpDimInfer1(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (5, 25)
            self.new_shape = (5, -1, 5)
            self.infered_shape = (5, -1, 5)

    class TestReshapeOpDimInfer2(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (10, 2, 6)
            self.new_shape = (10, 0, 3, -1)
            self.infered_shape = (10, 2, 3, -1)

    # Situation 2: have shape (list, no tensor), have actual shape (Tensor)
    class TestReshapeOpWithInputShape(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (6, 20)
            self.new_shape = (0, -1, 20)
            self.actual_shape = (2, 3, 20)

        def init_test_input(self):
            self.inputs = {
                "X": np.random.random(self.ori_shape).astype(self.dtype),
                "Shape": np.array(self.actual_shape, dtype="int32"),
            }

        def init_test_output(self):
            self.outputs = {
                "Out": self.inputs["X"].reshape(self.actual_shape),
                'XShape': np.random.random(self.ori_shape).astype(self.dtype),
            }

    # Situation 3: have shape (list, have tensor), no actual shape (Tensor)
    class TestReshapeOp_attr_ShapeTensor(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (4, 25)
            self.new_shape = (10, 10)
            self.infered_shape = (10, 10)
            self.shape = (-1, -1)

        def init_test_input(self):
            shape_tensor = []
            for index, ele in enumerate(self.new_shape):
                shape_tensor.append(
                    ("x" + str(index), np.ones(1).astype('int32') * ele)
                )
            self.inputs = {
                "X": np.random.random(self.ori_shape).astype(self.dtype),
                'ShapeTensor': shape_tensor,
            }

        def init_attrs(self):
            self.attrs = {'shape': self.shape, "use_xpu": True}

    class TestReshapeOpDimInfer1_attr_ShapeTensor(
        TestReshapeOp_attr_ShapeTensor
    ):
        def init_data(self):
            self.ori_shape = (5, 20)
            self.new_shape = (5, -1, 20)
            self.infered_shape = (5, -1, 20)
            self.shape = (5, -1, -1)

    class TestReshapeOpDimInfer2_attr_ShapeTensor(
        TestReshapeOp_attr_ShapeTensor
    ):
        def init_data(self):
            self.ori_shape = (10, 2, 6)
            self.new_shape = (10, 0, 3, -1)
            self.infered_shape = (10, 2, 3, -1)
            self.shape = (10, 0, 3, -1)

    # Situation 4: have shape (Tensor), no actual shape (Tensor)
    class TestReshapeOp_attr_OnlyShape(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (4, 25)
            self.new_shape = (10, 10)
            self.infered_shape = (10, 10)

        def init_test_input(self):
            self.inputs = {
                "X": np.random.random(self.ori_shape).astype(self.dtype),
                "Shape": np.array(self.new_shape, dtype="int32"),
            }

        def init_attrs(self):
            self.attrs = {"use_xpu": True}

    class TestReshapeOpDimInfer1_attr_OnlyShape(TestReshapeOp_attr_OnlyShape):
        def init_data(self):
            self.ori_shape = (5, 20)
            self.new_shape = (5, -1, 10)
            self.infered_shape = (5, -1, 10)
            self.shape = (5, -1, -1)

    class TestReshapeOpDimInfer2_attr_OnlyShape(TestReshapeOp_attr_OnlyShape):
        def init_data(self):
            self.ori_shape = (10, 2, 6)
            self.new_shape = (10, 0, 3, -1)
            self.infered_shape = (10, 2, 3, -1)
            self.shape = (10, 0, 3, -1)

    @check_run_big_shape_test()
    class TestReshapeOpLargeShape1(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (5120, 32)
            self.new_shape = (32, 5120)
            self.infered_shape = (32, 5120)

    @check_run_big_shape_test()
    class TestReshapeOpLargeShape2(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (1, 8192, 5120)
            self.new_shape = (8192, 5120)
            self.infered_shape = (8192, 5120)

    @check_run_big_shape_test()
    class TestReshapeOpLargeShape3(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (1, 8192)
            self.new_shape = (8192,)
            self.infered_shape = (8192,)

    @check_run_big_shape_test()
    class TestReshapeOpLargeShape4(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (1, 8192, 5, 64, 2)
            self.new_shape = (1, 8192, 5, 128)
            self.infered_shape = (1, 8192, 5, 128)

    @check_run_big_shape_test()
    class TestReshapeOpLargeShape5(TestReshapeOp):
        def init_data(self):
            self.ori_shape = (1, 8192, 5, 128)
            self.new_shape = (1, 8192, 640)
            self.infered_shape = (1, 8192, 640)


support_types = get_xpu_op_support_types("reshape2")
for stype in support_types:
    create_test_class(globals(), XPUTestReshapeOp, stype)

if __name__ == "__main__":
    unittest.main()
```
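The `0` and `-1` entries exercised by the `DimInfer` cases above follow reshape's shape-inference rules: a `0` copies the corresponding input dimension, and a single `-1` is inferred from the remaining element count. A minimal NumPy sketch of that inference (the helper name `infer_shape` is ours, not part of the test suite):

```python
import numpy as np


def infer_shape(in_shape, spec):
    # 0 copies the input dimension at the same index; -1 is inferred last
    # so that the total element count is preserved.
    out = [in_shape[i] if d == 0 else d for i, d in enumerate(spec)]
    if -1 in out:
        known = int(np.prod([d for d in out if d != -1]))
        out[out.index(-1)] = int(np.prod(in_shape)) // known
    return tuple(out)


x = np.random.random((10, 2, 6))
print(infer_shape(x.shape, (10, 0, 3, -1)))  # (10, 2, 3, 2)
```

Applying this helper to the test cases above reproduces each `infered_shape` (with the `-1` resolved), e.g. `(5, 25)` with spec `(5, -1, 5)` infers `(5, 5, 5)`.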
```c
#include <SDL.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>

#include "utils.h"

static const char *resource_folder(void)
{
    static const char *ret = NULL;
    if (!ret) {
        ret = SDL_GetBasePath();
        if (!ret) {
            ret = "./";
        }
    }
    return ret;
}

char *resource_path(const char *filename)
{
    static char path[PATH_MAX + 1];
    snprintf(path, sizeof(path), "%s%s", resource_folder(), filename);
#ifdef DATA_DIR
    if (access(path, F_OK) == 0) {
        return path;
    }
    snprintf(path, sizeof(path), "%s%s", DATA_DIR, filename);
#endif
    return path;
}

void replace_extension(const char *src, size_t length, char *dest, const char *ext)
{
    memcpy(dest, src, length);
    dest[length] = 0;

    /* Remove extension */
    for (size_t i = length; i--;) {
        if (dest[i] == '/') break;
        if (dest[i] == '.') {
            dest[i] = 0;
            break;
        }
    }

    /* Add new extension */
    strcat(dest, ext);
}
```
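The `replace_extension` routine above scans backwards, truncating at the last `.` of the final path component (the scan stops at `/` so a dot in a directory name is left alone) before appending the new extension. A quick Python mirror of that logic for sanity-checking (the helper name is ours):

```python
def replace_extension(src: str, ext: str) -> str:
    # Mirror the C routine's backwards scan: strip the extension only
    # from the final path component, stopping at the first '/'.
    stem = src
    for i in range(len(src) - 1, -1, -1):
        if src[i] == '/':
            break
        if src[i] == '.':
            stem = src[:i]
            break
    return stem + ext


print(replace_extension("roms/game.gb", ".sav"))  # roms/game.sav
```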
The 2015 Toyota Racing Series was the eleventh running of the Toyota Racing Series, the premier open-wheeler motorsport category held in New Zealand. The series, which consisted of sixteen races at five meetings, began on 14 January at Ruapuna Park in Christchurch, and ended on 15 February with the 60th running of the New Zealand Grand Prix, at Manfeild Autocourse in Feilding. With a third-place finish in the penultimate race of the season at Manfeild, Canadian driver Lance Stroll – driving for M2 Competition – clinched the championship title, having amassed an unassailable 93-point lead ahead of the final race. Stroll won three of the first four races of the series, at Ruapuna and Teretonga (taking the round wins at both circuits), before consistent finishing for the remainder of the campaign allowed him to maintain his championship lead throughout. Stroll added his fourth win of the season in the final race, the New Zealand Grand Prix, becoming the first Canadian to win the Grand Prix. M2 Competition team-mate Brandon Maïsano finished the season as runner-up, 108 points in arrears of Stroll. Maïsano won five races during the season – the most of any driver – with a win at each meeting except Teretonga, and took the round win at Hampton Downs. Third place in the championship went to Santino Ferrucci, of the Giles Motorsport team. Ferrucci took five podium finishes before claiming his first victory in the second race at Manfeild. He finished 33 points behind Maïsano and 141 behind Stroll. Four other drivers took race victories during the 2015 season: Arjun Maini (M2 Competition) and Sam MacLeod (Giles Motorsport) each won two races – Maini's both at Hampton Downs and MacLeod's both at Taupō – as they completed the top five in the drivers' championship, with MacLeod taking the round wins at Taupō and Manfeild.
Two drivers from New Zealand also won races, both at Teretonga Park, where Jamie Conroy – another M2 Competition driver – and Brendon Leitch, of Victory Motor Racing, each achieved their first victory in the series. Only ETEC Motorsport failed to take a race win, a pair of third places from Thomas Randle being their best result. Teams and drivers All teams were New Zealand-registered. Race calendar and results The calendar for the series was announced on 14 July 2014, and the races were held over five successive weekends in January and February. The event at Taupō Motorsport Park was held as a quadruple-header, the first such instance for the series. Championship standings In order to score championship points, a driver had to complete at least 75% of the race winner's distance and be running at the race's completion. All races counted towards the final championship standings. Scoring system Drivers' championship References External links Toyota Racing Series Toyota Racing Series
```go
// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.

package deadline

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
)

// WaitUntilFleetActive uses the AWSDeadlineCloud API operation
// GetFleet to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilFleetActive(input *GetFleetInput) error {
	return c.WaitUntilFleetActiveWithContext(aws.BackgroundContext(), input)
}

// WaitUntilFleetActiveWithContext is an extended version of WaitUntilFleetActive.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilFleetActiveWithContext(ctx aws.Context, input *GetFleetInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilFleetActive",
		MaxAttempts: 180,
		Delay:       request.ConstantWaiterDelay(5 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "ACTIVE",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "CREATE_FAILED",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "UPDATE_FAILED",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetFleetInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetFleetRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilJobCreateComplete uses the AWSDeadlineCloud API operation
// GetJob to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilJobCreateComplete(input *GetJobInput) error {
	return c.WaitUntilJobCreateCompleteWithContext(aws.BackgroundContext(), input)
}

// WaitUntilJobCreateCompleteWithContext is an extended version of WaitUntilJobCreateComplete.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilJobCreateCompleteWithContext(ctx aws.Context, input *GetJobInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilJobCreateComplete",
		MaxAttempts: 120,
		Delay:       request.ConstantWaiterDelay(1 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "CREATE_COMPLETE",
			},
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "UPDATE_IN_PROGRESS",
			},
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "UPDATE_FAILED",
			},
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "UPDATE_SUCCEEDED",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "UPLOAD_FAILED",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "lifecycleStatus",
				Expected: "CREATE_FAILED",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetJobInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetJobRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilLicenseEndpointDeleted uses the AWSDeadlineCloud API operation
// GetLicenseEndpoint to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilLicenseEndpointDeleted(input *GetLicenseEndpointInput) error {
	return c.WaitUntilLicenseEndpointDeletedWithContext(aws.BackgroundContext(), input)
}

// WaitUntilLicenseEndpointDeletedWithContext is an extended version of WaitUntilLicenseEndpointDeleted.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilLicenseEndpointDeletedWithContext(ctx aws.Context, input *GetLicenseEndpointInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilLicenseEndpointDeleted",
		MaxAttempts: 234,
		Delay:       request.ConstantWaiterDelay(10 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.ErrorWaiterMatch,
				Expected: "ResourceNotFoundException",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "NOT_READY",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetLicenseEndpointInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetLicenseEndpointRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilLicenseEndpointValid uses the AWSDeadlineCloud API operation
// GetLicenseEndpoint to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilLicenseEndpointValid(input *GetLicenseEndpointInput) error {
	return c.WaitUntilLicenseEndpointValidWithContext(aws.BackgroundContext(), input)
}

// WaitUntilLicenseEndpointValidWithContext is an extended version of WaitUntilLicenseEndpointValid.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilLicenseEndpointValidWithContext(ctx aws.Context, input *GetLicenseEndpointInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilLicenseEndpointValid",
		MaxAttempts: 114,
		Delay:       request.ConstantWaiterDelay(10 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "READY",
			},
			{
				State:    request.FailureWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "NOT_READY",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetLicenseEndpointInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetLicenseEndpointRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilQueueFleetAssociationStopped uses the AWSDeadlineCloud API operation
// GetQueueFleetAssociation to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilQueueFleetAssociationStopped(input *GetQueueFleetAssociationInput) error {
	return c.WaitUntilQueueFleetAssociationStoppedWithContext(aws.BackgroundContext(), input)
}

// WaitUntilQueueFleetAssociationStoppedWithContext is an extended version of WaitUntilQueueFleetAssociationStopped.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilQueueFleetAssociationStoppedWithContext(ctx aws.Context, input *GetQueueFleetAssociationInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilQueueFleetAssociationStopped",
		MaxAttempts: 60,
		Delay:       request.ConstantWaiterDelay(10 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "STOPPED",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetQueueFleetAssociationInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetQueueFleetAssociationRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilQueueScheduling uses the AWSDeadlineCloud API operation
// GetQueue to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilQueueScheduling(input *GetQueueInput) error {
	return c.WaitUntilQueueSchedulingWithContext(aws.BackgroundContext(), input)
}

// WaitUntilQueueSchedulingWithContext is an extended version of WaitUntilQueueScheduling.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilQueueSchedulingWithContext(ctx aws.Context, input *GetQueueInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilQueueScheduling",
		MaxAttempts: 70,
		Delay:       request.ConstantWaiterDelay(10 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "SCHEDULING",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetQueueInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetQueueRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}

// WaitUntilQueueSchedulingBlocked uses the AWSDeadlineCloud API operation
// GetQueue to wait for a condition to be met before returning.
// If the condition is not met within the max attempt window, an error will
// be returned.
func (c *Deadline) WaitUntilQueueSchedulingBlocked(input *GetQueueInput) error {
	return c.WaitUntilQueueSchedulingBlockedWithContext(aws.BackgroundContext(), input)
}

// WaitUntilQueueSchedulingBlockedWithContext is an extended version of WaitUntilQueueSchedulingBlocked.
// With the support for passing in a context and options to configure the
// Waiter and the underlying request options.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See path_to_url
// for more information on using Contexts.
func (c *Deadline) WaitUntilQueueSchedulingBlockedWithContext(ctx aws.Context, input *GetQueueInput, opts ...request.WaiterOption) error {
	w := request.Waiter{
		Name:        "WaitUntilQueueSchedulingBlocked",
		MaxAttempts: 30,
		Delay:       request.ConstantWaiterDelay(10 * time.Second),
		Acceptors: []request.WaiterAcceptor{
			{
				State:    request.SuccessWaiterState,
				Matcher:  request.PathWaiterMatch,
				Argument: "status",
				Expected: "SCHEDULING_BLOCKED",
			},
		},
		Logger: c.Config.Logger,
		NewRequest: func(opts []request.Option) (*request.Request, error) {
			var inCpy *GetQueueInput
			if input != nil {
				tmp := *input
				inCpy = &tmp
			}
			req, _ := c.GetQueueRequest(inCpy)
			req.SetContext(ctx)
			req.ApplyOptions(opts...)
			return req, nil
		},
	}
	w.ApplyOptions(opts...)

	return w.WaitWithContext(ctx)
}
```
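All of the generated waiters above share one shape: poll an API at a constant delay, match each response against ordered success/failure acceptors, and give up after `MaxAttempts`. A language-agnostic sketch of that loop (written here in Python; the names `Acceptor` and `wait_until` are illustrative, not part of the SDK):

```python
import time
from dataclasses import dataclass


@dataclass
class Acceptor:
    state: str     # "success" or "failure"
    expected: str  # value to match against the polled status


def wait_until(poll, acceptors, max_attempts=60, delay=10.0):
    """Poll until an acceptor matches; raise on failure or exhaustion."""
    for _ in range(max_attempts):
        status = poll()
        for a in acceptors:
            if status == a.expected:
                if a.state == "success":
                    return status
                raise RuntimeError(f"waiter failed: status={status}")
        time.sleep(delay)
    raise TimeoutError("waiter exceeded max attempts")
```

For example, `WaitUntilQueueFleetAssociationStopped` corresponds to a single success acceptor on `"STOPPED"` with `max_attempts=60` and a ten-second delay.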
Cognitive neuropsychology is a branch of cognitive psychology that aims to understand how the structure and function of the brain relates to specific psychological processes. Cognitive psychology is the science that looks at how mental processes are responsible for our cognitive abilities: to store and produce new memories, produce language, recognize people and objects, and to reason and solve problems. Cognitive neuropsychology places a particular emphasis on studying the cognitive effects of brain injury or neurological illness with a view to inferring models of normal cognitive functioning. Evidence is based on case studies of individual brain-damaged patients who show deficits following damage to particular brain areas, and on patients who exhibit double dissociations. Double dissociations involve two patients and two tasks. One patient is impaired at one task but normal on the other, while the other patient is normal on the first task and impaired on the second. For example, patient A might be poor at reading printed words while still being normal at understanding spoken words, while patient B might be normal at understanding written words and poor at understanding spoken words. Scientists can interpret this pattern as evidence that written and spoken word comprehension depend on separate cognitive modules rather than a single system. From studies like these, researchers infer that different areas of the brain are highly specialised. Cognitive neuropsychology can be distinguished from cognitive neuroscience, which is also interested in brain-damaged patients, but is particularly focused on uncovering the neural mechanisms underlying cognitive processes. History Cognitive neuropsychology has its roots in the diagram-making approach to language disorder of the second half of the 19th century. The discovery that aphasia takes different forms depending on the location of brain damage provided a powerful framework for understanding brain function.
In 1861, Paul Broca reported a post mortem study of an aphasic patient who was speechless apart from a single nonsense word: "Tan". Broca showed that an area of the left frontal lobe was damaged. As Tan was unable to produce speech but could still understand it, Broca argued that this area might be specialised for speech production and that language skills might be localized to this cortical area. Broca carried out a similar study on another patient, Lelong, a few weeks later. Lelong, like Tan, could understand speech but could only repeat the same five words. After examining his brain, Broca noticed that Lelong had a lesion in approximately the same area as his patient Tan. He also noted that the more than 25 aphasic patients he examined all had lesions to the left frontal lobe but no damage to the right hemisphere of the brain. From this he concluded that the function of speech was probably localized in the inferior frontal gyrus of the left hemisphere, an area now known as Broca's area. Karl Wernicke subsequently reported patients with damage further back in the temporal lobe who could speak but were unable to understand what was said to them, providing evidence for two potentially interconnected language centres. These clinical descriptions were integrated into a theory of language organisation by Lichtheim. Subsequently, these models were used and developed to inform Dejerine's account of reading, Liepmann's theory of action, Lissauer's 1890 account of object recognition, and Lewandowsky and Stadelmann's 1908 account of calculation. However, the early 20th century saw a reaction against the overly precise accounts of the diagram-making neurologists: Pierre Marie challenged Broca's conclusions in 1906, and Henry Head attacked the whole field of cerebral localisation in 1926.
The modern science of cognitive neuropsychology emerged during the 1960s, stimulated by the insights of the neurologist Norman Geschwind, who demonstrated that the findings of Broca and Wernicke were still clinically relevant. The other stimulus to the discipline was the cognitive revolution and the growing science of cognitive psychology, which had emerged as a reaction to behaviorism in the mid-20th century. Psychologists in the mid-1950s acknowledged that the structure of mental information-processing systems could be investigated in scientifically acceptable ways. They developed and applied new cognitive processing models to explain experimental data not only from studies of speech and language but also from those of selective attention. Cognitive psychologists and clinical neuropsychologists developed more research collaborations to gain a better understanding of these disorders. The rebirth of neuropsychology was marked by the publication of two seminal collaborative papers: Marshall & Newcombe (1966) on reading and Warrington & Shallice (1969) on memory. Subsequently, work by pioneers such as Elizabeth Warrington, Brenda Milner, Tim Shallice, Alan Baddeley and Lawrence Weiskrantz demonstrated that neurological patients were an important source of data for cognitive psychologists. It took less than a decade for neuropsychology to be fully re-established, and further milestones followed: the publication in 1980 of Deep Dyslexia, the first major book to discuss neuropsychology from a cognitive approach, after a scientific meeting on the topic in Oxford in 1977; the founding of the journal Cognitive Neuropsychology in 1984; and the publication of the first textbook of the field, Human Cognitive Neuropsychology, in 1988. A particular area of interest was memory. Patients with amnesia caused by injuries to the hippocampus in the temporal cortex and midbrain areas (especially the mamillary bodies) were of early interest.
A patient with a severe case of amnesia will not be able to remember meeting the examiner if they leave the room and return, let alone events of the previous day (episodic memory), but they will still be able to learn how to tie their shoes (procedural memory), remember a series of numbers for a few seconds (short-term or working memory) and recall historical events they learned in school (semantic memory). By contrast, other patients may lose their short-term memory abilities while retaining their long-term memory functions. Many other studies like this have been done in the field of neuropsychology, examining lesions and their effects on particular areas of the brain and their functions. Studies on the amnesic patient Henry Molaison, formerly known as patient H.M., are commonly cited as some of the precursors, if not the beginning, of modern cognitive neuropsychology. Molaison had parts of his medial temporal lobes surgically removed to treat intractable epilepsy in 1953. Much of the hippocampus was also removed along with the medial temporal lobes. The treatment proved successful in reducing his dangerous seizures, but left him with a profound yet selective amnesia. After the surgery, Molaison was able to remember some major events from before the surgery, such as the stock market crash in 1929, but was confused about many others and could no longer form new memories. This accidental experiment showed scientists how the brain processes different types of memory. Because Molaison's impairment was caused by surgery, the damaged parts of his brain were known precisely, information which was usually unobtainable before accurate neuroimaging became widespread. Scientists concluded that while the hippocampus is needed for the creation of new memories, it is not needed for the retrieval of old ones; they are two separate processes.
They also realized that the hippocampus and the medial temporal lobes, both of the areas removed from Molaison, are the areas responsible for converting short-term memory to long-term memory. Much of the early work of cognitive neuropsychology was carried out with limited reference to the detailed localisation of brain pathology. Neuroimaging was relatively imprecise and other anatomically based techniques were also limited. The emphasis of many researchers as late as 1990 was on the analysis of patterns of cognitive deficit rather than on where the injury was located. Despite the lack of detailed anatomical data, studies of reading, language and memory had a number of important implications. The first is that certain cognitive processes (such as language) could be damaged separately from others, and so might be handled by distinct and independent cognitive (and neural) processes. (For more on the cognitive neuropsychological approach to language, see Eleanor Saffran, among others.) The second is that such processes might be localized to specific areas of the brain. Whilst both of these claims are still controversial to some degree, their influence led to a focus on brain injury as a potentially fruitful way of understanding the relationship between psychology and neuroscience. Methods A key approach within cognitive neuropsychology has been to use single case studies and dissociation as a means of testing theories of cognitive function. For example, if a theory states that reading and writing are simply different skills stemming from a single cognitive process, it should not be possible to find a person who, after brain injury, can write but not read or read but not write. This selective breakdown in skills suggests that different parts of the brain are specialized for the different processes, and so the cognitive systems are separable.
The philosopher Jerry Fodor has been particularly influential in cognitive neuropsychology, notably with the idea that the mind, or at least certain parts of it, may be organised into independent modules. Evidence that cognitive skills may be damaged independently seems to support this theory to some degree, although it is clear that some aspects of mind (such as belief, for example) are unlikely to be modular. Fodor, a strict functionalist, rejects the idea that the neurological properties of the brain have any bearing on its cognitive properties, and doubts the whole discipline of cognitive neuropsychology. With improved neuroimaging techniques, it has become possible to correlate patterns of impairment with knowledge of exactly which parts of the nervous system are damaged, allowing previously undiscovered functional relationships to be explored (the lesion method). Contemporary cognitive neuropsychology uses many of the same techniques and technologies as the wider science of neuropsychology and fields such as cognitive neuroscience. These may include neuroimaging, electrophysiology and neuropsychological tests to measure either brain function or psychological performance. Useful technologies in cognitive neuropsychology include positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). These techniques make it possible to identify the areas of the brain responsible for performing certain cognitive tasks by measuring blood flow in the brain. PET scans sense low-level radiation in the brain and produce 3-D images, whereas fMRI works on a magnetic signal and is used to "map the brain". Electroencephalography (EEG) records the brain's electrical activity and can identify changes that occur over milliseconds. EEG is often used in patients with epilepsy to detect seizure activity.
The principles of cognitive neuropsychology have recently been applied to mental illness, with a view to understanding, for example, what the study of delusions may tell us about the function of normal belief. This relatively young field is known as cognitive neuropsychiatry. See also References Further reading Neuropsychology Cognition
```yaml
tests:
  drivers.i2c.emul.target_pio:
    platform_allow:
      - native_sim
    extra_configs:
      - CONFIG_I2C_TARGET_BUFFER_MODE=n
  drivers.i2c.emul.target_buf:
    platform_allow:
      - native_sim
    extra_configs:
      - CONFIG_I2C_TARGET_BUFFER_MODE=y
    extra_dtc_overlay_files:
      - "boards/native_sim.overlay"
      - "boards/native_sim.buf.overlay"
```
```python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
from functools import partial

import hypothesis.strategies as st
import numpy as np
from auto_scan_test import PassAutoScanTest
from program_config import OpConfig, ProgramConfig, TensorConfig


class TestSqueeze2Transpose2OneDNNFusePass(PassAutoScanTest):
    def sample_program_config(self, draw):
        def generate_input(shape):
            return np.random.random(shape).astype(np.float32)

        channel = draw(st.sampled_from([1, 2, 4, 8, 16]))
        transpose_axis = draw(
            st.sampled_from(
                [[0, 1, 2], [0, 2, 1], [1, 0, 2], [2, 1, 0], [2, 1, 0]]
            )
        )

        squeeze2_op = OpConfig(
            type="squeeze2",
            inputs={"X": ["squeeze_x"]},
            outputs={
                "Out": ["squeeze_out"],
                "XShape": ["squeeze2_xshape"],
            },
            attrs={
                "axes": [2],
                "use_mkldnn": True,
            },
        )
        transpose2_op = OpConfig(
            type="transpose2",
            inputs={
                "X": ["squeeze_out"],
            },
            outputs={
                "Out": ["trans_out"],
                "XShape": ['transpose2_xshape'],
            },
            attrs={
                "axis": transpose_axis,
                "use_mkldnn": True,
            },
        )

        model_net = [squeeze2_op, transpose2_op]

        program_config = ProgramConfig(
            ops=model_net,
            weights={},
            inputs={
                "squeeze_x": TensorConfig(
                    data_gen=partial(generate_input, [channel, 16, 1, 32])
                )
            },
            outputs=["trans_out"],
        )

        return program_config

    def sample_predictor_configs(self, program_config):
        config = self.create_inference_config(
            use_mkldnn=True,
            passes=[
                "squeeze2_transpose2_onednn_fuse_pass",
            ],
        )
        yield config, ["fused_transpose"], (1e-5, 1e-5)

    def test(self):
        self.run_and_statis(
            quant=False,
            passes=[
                "squeeze2_transpose2_onednn_fuse_pass",
            ],
        )


if __name__ == "__main__":
    unittest.main()
```
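The pass exercised above replaces the `squeeze2(axes=[2])` → `transpose2` pair with a single `fused_transpose` op. The numerical equivalence it relies on can be checked directly in NumPy (a sketch only; the real fused kernel is oneDNN's, and the shapes below match the test's `[channel, 16, 1, 32]` input):

```python
import numpy as np

x = np.random.random((4, 16, 1, 32)).astype(np.float32)

# Original subgraph: squeeze out the size-1 axis 2, then transpose.
axis = [0, 2, 1]
separate = np.squeeze(x, 2).transpose(axis)

# A fused op can do the same in one step: drop the unit dim while permuting.
fused = x.reshape(4, 16, 32).transpose(axis)

assert np.array_equal(separate, fused)
```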
```go
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

package security

import (
	"net/http"
	"strings"
	"time"

	"github.com/goharbor/harbor/src/common/security"
	robotCtx "github.com/goharbor/harbor/src/common/security/robot"
	"github.com/goharbor/harbor/src/common/utils"
	robot_ctl "github.com/goharbor/harbor/src/controller/robot"
	"github.com/goharbor/harbor/src/lib/config"
	"github.com/goharbor/harbor/src/lib/log"
	"github.com/goharbor/harbor/src/lib/q"
)

type robot struct{}

func (r *robot) Generate(req *http.Request) security.Context {
	log := log.G(req.Context())
	name, secret, ok := req.BasicAuth()
	if !ok {
		return nil
	}
	if !strings.HasPrefix(name, config.RobotPrefix(req.Context())) {
		return nil
	}
	// The robot name can be used as the unique identifier to locate the
	// robot as it contains the project name.
	robots, err := robot_ctl.Ctl.List(req.Context(), q.New(q.KeyWords{
		"name": strings.TrimPrefix(name, config.RobotPrefix(req.Context())),
	}), &robot_ctl.Option{
		WithPermission: true,
	})
	if err != nil {
		log.Errorf("failed to list robots: %v", err)
		return nil
	}
	if len(robots) == 0 {
		return nil
	}
	robot := robots[0]
	if utils.Encrypt(secret, robot.Salt, utils.SHA256) != robot.Secret {
		log.Errorf("failed to authenticate robot account: %s", name)
		return nil
	}
	if robot.Disabled {
		log.Errorf("failed to authenticate deactivated robot account: %s", name)
		return nil
	}
	now := time.Now().Unix()
	if robot.ExpiresAt != -1 && robot.ExpiresAt <= now {
		log.Errorf("the robot account is expired: %s", name)
		return nil
	}
	log.Infof("a robot security context generated for request %s %s", req.Method, req.URL.Path)
	return robotCtx.NewSecurityContext(robot)
}
```
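The credential check in the snippet above boils down to a salted-hash comparison: the presented secret is hashed with the robot's salt and compared against the stored digest. A minimal self-contained sketch of that idea follows; it is an illustration only, not Harbor's actual `utils.Encrypt` — in particular, the secret-then-salt concatenation order here is an assumption.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// hashSecret hashes a secret together with a salt and returns the hex digest.
// The concatenation order (secret + salt) is assumed for illustration.
func hashSecret(secret, salt string) string {
	sum := sha256.Sum256([]byte(secret + salt))
	return hex.EncodeToString(sum[:])
}

// verifySecret re-hashes the presented secret and compares it against the
// stored digest in constant time, avoiding timing side channels.
func verifySecret(presented, salt, storedHash string) bool {
	h := hashSecret(presented, salt)
	return subtle.ConstantTimeCompare([]byte(h), []byte(storedHash)) == 1
}

func main() {
	stored := hashSecret("s3cret", "salt123")
	fmt.Println(verifySecret("s3cret", "salt123", stored)) // true
	fmt.Println(verifySecret("wrong", "salt123", stored))  // false
}
```

The constant-time comparison matters less here than in a raw-secret comparison (both sides are already digests), but it is cheap insurance and idiomatic for credential checks.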
Benjamin "Ben" Davies (born 2 November 1989), nicknamed "Danger", is a professional rugby league footballer who last played as a prop for Oldham (Heritage No. 1372) in Kingstone Press League 1. He has previously played for Wigan Warriors (Heritage No. 1020), Widnes Vikings, Workington Town, South Wales Scorpions, Barrow Raiders, Castleford Tigers (Heritage No. 920), Halifax RLFC and Whitehaven RLFC.

Background
Davies was born in Wigan, Greater Manchester, England.

Career
He signed for the Wigan Warriors from local amateur team Leigh East. He holds junior representative honours with Lancashire (at under-14s and under-15s level) and England (under-15s and under-18s). At the start of 2010, he was sent out on loan to Widnes. After impressing on his one-month loan, he was dual-registered with the Chemics. He made his Super League début off the bench against Crusaders on 22 May 2010. Davies has previously played for Halifax.

References

External links
Oldham profile
Ben Davies Wigan Career Page on the Wigan RL Fansite.
Statistics at thecastlefordtigers.co.uk

1989 births Living people Barrow Raiders players Castleford Tigers players English rugby league players Halifax R.L.F.C. players Oldham R.L.F.C. players Rugby league players from Wigan Rugby league props South Wales Scorpions players Whitehaven R.L.F.C. players Widnes Vikings players Wigan Warriors players Workington Town players
Fullerton India Credit Co. Ltd. is a non-banking financial company (NBFC) in India. Headquartered in Mumbai, it deals with financing across retail and rural segments. The company provides unsecured as well as secured lending products to individuals and MSMEs through a diverse branch network as well as via digital channels.

Overview
Fullerton India Credit Co. Ltd. largely deals in unsecured lending products such as personal loans, unsecured business loans and group loans across retail and rural segments. Over the years, the company has established 628 branches across India, serving over 2.3 million customers.

History
Fullerton India Credit Company Ltd. was established in July 2007. Sumitomo Mitsui Financial Group (SMFG) of Japan owns a majority stake (up to 74.9%), with Fullerton Financial Holdings, Singapore holding the remaining 25.1%. SMFG is one of the largest global banking and financial groups, offering a wide range of financial services including commercial banking, leasing, securities and consumer finance, with a heritage of over 400 years in Japan. Fullerton Financial Holdings is, in turn, 100% held by Temasek, Singapore, an investment company owned by the Government of Singapore with an autonomous, independent board.

In 2016, the company launched its housing finance subsidiary, Fullerton India Home Finance Company Limited (FIHFC), also known as Grihashakti. In 2022, the NBFC raised Rs 2,795 crore from Sumitomo Mitsui Banking Corporation (SMBC) Singapore through the external commercial borrowing (ECB) route for a period of five years. The funding was fully hedged against foreign currency risks.
Management
Shantanu Mitra – Chief Executive Officer & Managing Director
Ajay Pareek – Chief Business Officer
Pankaj Malik – Chief Financial Officer & Head of Strategy Execution
Dhananjay Tiwari – Chief Risk Officer
Swaminathan Subramanian – Chief People Officer
Deepak Patkar – Chief Executive Officer of Grihashakti, Fullerton India Home Finance Company Limited

Businesses
Fullerton India Credit Company Ltd. deals with financing across segments over a variety of distribution channels.

Urban financing
The company offers secured as well as unsecured lending across metro and Tier 1–4 cities through its branch network. The unsecured business covers products such as personal loans, two-wheeler loans and small business loans, while the secured business covers products such as loans against property for individual as well as SME customers, loans against securities and commercial vehicle loans.

Rural financing
Fullerton India covers more than 600 towns and 58,000 villages in rural India. It offers rural customers lines of credit for personal as well as livelihood purposes, such as the purchase of merchandise or livestock.

Digital business
Fullerton India launched its digital business in 2018. This channel focuses on digital technologies such as an online web portal, mobile applications and tablets. It also pursues strategic partnerships with fintech companies and aggregators such as Paytm for mutual benefit.

Fullerton India Home Finance Company Limited (FIHFC)
Fullerton India Home Finance Ltd, or "Grihashakti", is a wholly owned subsidiary of Fullerton India Credit Co. Ltd. It focuses on providing financing to householders across India, including for the purchase of new or resale residential property, commercial property purchase or leasing, balance transfers and so on. Launched in December 2015 and headquartered in Mumbai, the company offers loans through its 70 branches to salaried and self-employed individuals and organisations.
References 2007 establishments in Maharashtra Indian companies established in 2007 Financial services companies based in Mumbai Financial services companies established in 2007
Borabenzene is a hypothetical organoboron compound with the formula C5H5B. Unlike the related and highly stable benzene molecule, borabenzene would be electron-deficient. Related derivatives are the boratabenzene anions, including the parent [C5H5BH]−.

Adducts
Adducts of borabenzene with Lewis bases are isolable. Since borabenzene itself is unavailable, these adducts must be prepared by indirect methods. 4-Silyl-1-methoxyboracyclohexadiene is used as a precursor: reaction with a Lewis base (L) expels methoxytrimethylsilane, giving the adduct:

4-silyl-1-methoxyboracyclohexadiene + L → C5H5B·L + MeOSiMe3

The pyridine adduct is structurally related to biphenyl. It is yellow whereas biphenyl is colorless, indicating distinct electronic structures. The pyridine ligand is tightly bound: no exchange is observed with free pyridine, even at elevated temperatures. The borabenzene–pyridine adduct behaves like a diene, not an analogue of biphenyl, and will undergo Diels–Alder reactions.

See also
6-membered aromatic rings with one carbon replaced by another group: silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium
Borazine

References

Boron heterocycles Six-membered rings Hypothetical chemical compounds
Pedro Antonio Porro Sauceda (born 13 September 1999) is a Spanish professional footballer who plays as a right-back or right wing-back for Premier League club Tottenham Hotspur and the Spain national team.

He started his career with Girona's reserve team Peralada, before being promoted to the first team in 2017. In 2019, Porro signed for Premier League side Manchester City for a reported fee of €13 million (£11 million). He was loaned to La Liga side Real Valladolid and then to Portuguese Primeira Liga club Sporting CP on an initial two-year loan deal. With Sporting, he won a double of the Primeira Liga and Taça da Liga, while being named in the Primeira Liga Team of the Year. His transfer was made permanent at the end of his second season.

Porro first appeared with the Spain under-21s in March 2019, aged 19. He made his senior international debut in 2021 and was included in the squad for the 2021 UEFA Nations League Finals in Italy, in which Spain finished as runner-up.

Club career
Early career
Born in Don Benito, Badajoz, Extremadura, Porro represented Gimnástico de Don Benito and Rayo Vallecano before joining Girona on 10 August 2017. He reportedly rejected Real Madrid, Atlético Madrid and Bayern Munich to sign for the Catalans.

Girona
On 28 November 2017, before even appearing with the reserves, Porro made his first-team debut by coming on as a late substitute for goalscorer Johan Mojica in a 1–1 away draw against Levante in that season's Copa del Rey. He made his first appearance for the B-side five days later, playing the last seven minutes of a 0–0 Segunda División B draw at Ebro. Porro scored his first senior goals on 6 May 2018, scoring twice in a 3–0 away win against Villarreal B. On 2 July, he renewed his contract until 2022, and played most of the pre-season as a right-back. Porro made his La Liga debut on 17 August 2018, starting at right-back in a 0–0 home draw against Real Valladolid. The following day, he extended his contract for a further year.
Porro established himself as a starter under Eusebio Sacristán, becoming first choice ahead of Aday Benítez and replacing the departed Pablo Maffeo. He scored his first professional goal on 31 January 2019, his team's only goal in a 1–3 cup loss at Real Madrid.

Manchester City
On 8 August 2019, Porro signed for Manchester City for a reported fee of £11 million. Upon signing, he was immediately loaned to Real Valladolid in La Liga for one season. The side had an option to make the move permanent, but after he played 13 league games, helping the Pucelanos to a 13th-place finish, Real Valladolid opted not to exercise their option to sign Porro on a permanent basis.

Sporting CP
On 16 August 2020, Porro joined Sporting CP on a two-year loan deal until 30 June 2022, with the option of making the move permanent for €8.5 million (£7 million). On 24 September 2020, he made his debut for the club in a 1–0 home win against Aberdeen in the third qualifying round of the UEFA Europa League. After arriving at the Estádio José Alvalade in Lisbon with some doubts over his young age, Porro immediately established himself as a starter on the right side of the defence, making his league debut in a 2–0 away win against Paços de Ferreira. He scored his first goal for the Leões on 1 November in a 4–0 win over Tondela. For his outstanding performances for the club, Porro was named the Primeira Liga's Defender of the Month for three consecutive months, between November 2020 and January 2021. On 23 January, Porro scored the only goal in the defeat of Braga to help his club win the Taça da Liga. Three days later, Porro scored with a shot from outside the box in a 2–0 away victory against Boavista; his strike was later voted the Primeira Liga's Goal of the Month. He played 30 games for the eventual champions, who ended a 19-year title drought, while also being named in the Team of the Year.
At the beginning of the following season, Porro continued his outstanding performances, providing an assist for Nuno Santos in a 1–1 home draw against Sporting's rivals Porto, and scoring from the penalty spot in two consecutive league games, a 1–0 win over Estoril on 19 September and a 1–0 win over Marítimo on 24 September, which earned him the Primeira Liga's Defender of the Month award for two consecutive months, August and September. On 24 November, Porro scored the third goal in a 3–1 home win over Borussia Dortmund in a group stage match of the 2021–22 UEFA Champions League, converting the rebound after Gregor Kobel saved a penalty from Pedro Gonçalves, to ensure his team's qualification for the round of 16 for the first time since the 2008–09 season.

Shortly after, Porro began suffering from recurring hamstring injuries, which sidelined him for two months of the season. He returned from injury on 29 January 2022, and his crucial assist for Pablo Sarabia helped his team come from behind to defeat crosstown rivals Benfica 2–1 in the 2021–22 Taça da Liga final. On 16 May 2022, Sporting triggered Porro's buyout clause of €8.5 million (£7.2 million), signing him on a permanent three-year deal with a reported €20 million (£17.6 million) buy-back clause. Following a season in which he helped Sporting to a runner-up finish behind rivals Porto, scoring five goals and providing seven assists, he was named in the Team of the Year for a second consecutive season.

Tottenham Hotspur
On 31 January 2023, the deadline day of the transfer window, Premier League club Tottenham Hotspur announced the signing of Porro on loan from Sporting with an obligation to buy in the summer. Porro made his Spurs debut on 11 February, starting in a 4–1 defeat to Leicester City. His performance drew criticism from former Spurs manager Tim Sherwood, who described Porro as "so bad it's unbelievable."
He scored his first goal for the club on 18 March 2023 in a 3–3 away draw at Southampton.

International career
Porro earned his first cap for Spain at under-21 level on 29 March 2019, starting in a friendly against Romania which Spain won 1–0. In March 2021, Porro received his first call-up to the Spain national football team for the group stage of 2022 FIFA World Cup qualification. He made his debut on 28 March 2021 in a 2–1 win against Georgia. Porro was named in Spain's final squad that finished as runner-up at the 2021 UEFA Nations League Finals in Italy in October, but did not make an appearance.

Career statistics
Club

International

Honours
Sporting CP
Primeira Liga: 2020–21
Taça da Liga: 2020–21, 2021–22

Spain
UEFA Nations League runner-up: 2020–21

Individual
Primeira Liga Defender of the Month: November 2020, December 2020, January 2021, September 2021
Primeira Liga Goal of the Month: January 2021
Primeira Liga Team of the Year: 2020–21, 2021–22

References

External links
Profile at the Tottenham Hotspur F.C. website

1999 births Living people People from Don Benito Footballers from the Province of Badajoz Spanish men's footballers Men's association football defenders Men's association football fullbacks Men's association football wingers La Liga players Segunda División B players Primeira Liga players Premier League players CF Peralada players Girona FC players Real Valladolid players Manchester City F.C. players Tottenham Hotspur F.C. players Sporting CP footballers Spain men's under-21 international footballers Spanish expatriate men's footballers Spanish expatriate sportspeople in Portugal Spanish expatriate sportspeople in England Expatriate men's footballers in Portugal Expatriate men's footballers in England Spain men's international footballers
```c++
/*
	This program is free software: you can redistribute it and/or modify
	it under the terms of the license published by the Free Software
	Foundation, either version 3 of the License, or (at your option) any
	later version.

	This program is distributed in the hope that it will be useful,
	but WITHOUT ANY WARRANTY; without even the implied warranty of
	MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
	license along with this program. If not, see <path_to_url
*/
/*!
	@file
	@brief Implementation of KeyDataStore
*/
#include "key_data_store.h"
#include "btree_map.h"

/*!
	@brief Constructor
	@param [in] stAlloc util::StackAllocator
	@param [in] resultSetPool allocator for result sets
	@param [in] configTable ConfigTable
	@param [in] txnMgr TransactionManager
	@param [in] chunkmanager ChunkManager
	@param [in] logmanager LogManager
	@param [in] keyStore KeyDataStore
*/
KeyDataStore::KeyDataStore(
	util::StackAllocator* stAlloc,
	util::FixedSizeAllocator<util::Mutex>* resultSetPool,
	ConfigTable* configTable, TransactionManager* txnMgr,
	ChunkManager* chunkmanager, LogManager<MutexLocker>* logmanager,
	KeyDataStore* keyStore, const StatsSet& stats)
	: DataStoreBase(stAlloc, resultSetPool, configTable, txnMgr,
		  chunkmanager, logmanager, keyStore),
	  headerOId_(UNDEF_OID) {
	try {
		objectManager_ = UTIL_NEW ObjectManagerV4(
			*configTable, chunkmanager, stats.objMgrStats_);
		allocateStrategy_.set(META_GROUP_ID, objectManager_);
		if (objectManager_->isActive(allocateStrategy_.getGroupId())) {
			headerOId_ = getHeadOId(allocateStrategy_.getGroupId());
		}
	}
	catch (std::exception& e) {
		delete objectManager_;
		GS_RETHROW_SYSTEM_ERROR(e, "");
	}
}

/*!
	@brief Destructor
*/
KeyDataStore::~KeyDataStore() {
	delete objectManager_;
}

/*!
	@brief Checks whether this DataStore supports the given feature
	@param [in] type Support
	@return true if supported
*/
bool KeyDataStore::support(Support type) {
	bool isSupport = false;
	switch (type) {
	case Support::TRIGGER:
		isSupport = false;
		break;
	default:
		break;
	}
	return isSupport;
}

/*!
	@brief Pre-processing of DataStoreBase::exec
	@param [in] txn TransactionContext
	@param [in] clsService ClusterService
	@note
DataStoreBase::Scope()postProcess ** **/ void KeyDataStore::preProcess( TransactionContext* txn, ClusterService* clsService) { UNUSED_VARIABLE(txn); UNUSED_VARIABLE(clsService); ObjectManagerV4& objectManager = *(getObjectManager()); objectManager.checkDirtyFlag(); const double HOT_MODE_RATE = 1.0; objectManager.setStoreMemoryAgingSwapRate(HOT_MODE_RATE); } /** ** @brief DataStoreBase::exec @param [in] txn TransactionContext @note DataStoreBase::Scope()preProcess @note ** **/ void KeyDataStore::postProcess(TransactionContext* txn) { UNUSED_VARIABLE(txn); ObjectManagerV4& objectManager = *(getObjectManager()); objectManager.checkDirtyFlag(); objectManager.resetRefCounter(); objectManager.freeLastLatchPhaseMemory(); objectManager.setSwapOutCounter(0); } /** ** @brief @param [in] txn TransactionContext @param [in] storeType @param [in] allocateSize @return OId @note OId @attention ** **/ OId KeyDataStore::put(TransactionContext& txn, StoreType storeType, DSObjectSize allocateSize) { try { DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_); if (!isActive()) { initializeHeader(txn); } partitionHeaderObject.load(headerOId_, true); OId oId = UNDEF_OID; BaseObject storeObject(*getObjectManager(), allocateStrategy_); uint8_t* data = storeObject.allocate<uint8_t>(allocateSize, oId, OBJECT_TYPE_UNKNOWN); memset(data, 0, allocateSize); BtreeMap storeMap(txn, *getObjectManager(), partitionHeaderObject.getStoreMapOId(), allocateStrategy_, NULL); util::XArray<OId> list(txn.getDefaultAllocator()); TermCondition cond(COLUMN_TYPE_INT, COLUMN_TYPE_INT, DSExpression::EQ, UNDEF_COLUMNID, &storeType, sizeof(storeType)); BtreeMap::SearchContext sc(txn.getDefaultAllocator(), cond, 1); storeMap.search(txn, sc, list); int32_t status; bool isCaseSensitive = true; if (list.empty()) { status = storeMap.insert<StoreType, OId>(txn, storeType, oId, isCaseSensitive); } else { status = storeMap.update<StoreType, OId>(txn, storeType, list[0], oId, 
isCaseSensitive); } if ((status & BtreeMap::ROOT_UPDATE) != 0) { partitionHeaderObject.setStoreMapOId(storeMap.getBaseOId()); } return oId; } catch (std::exception& e) { handleUpdateError(e, GS_ERROR_DS_DS_GET_COLLECTION_FAILED); return UNDEF_OID; } } /** ** @brief @param [in] txn TransactionContext @param [in] storeType @return OId ** **/ OId KeyDataStore::get(TransactionContext& txn, StoreType storeType) { if (!isActive()) { return UNDEF_OID; } DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_, headerOId_); BtreeMap storeMap(txn, *getObjectManager(), partitionHeaderObject.getStoreMapOId(), allocateStrategy_, NULL); util::XArray<OId> list(txn.getDefaultAllocator()); TermCondition cond(COLUMN_TYPE_INT, COLUMN_TYPE_INT, DSExpression::EQ, UNDEF_COLUMNID, &storeType, sizeof(storeType)); BtreeMap::SearchContext sc(txn.getDefaultAllocator(), cond, 1); storeMap.search(txn, sc, list); if (list.empty()) { return UNDEF_OID; } else { return list[0]; } } /** ** @brief ID @param [in] txn TransactionContext @param [in] id ContainerId @return ** **/ KeyDataStoreValue KeyDataStore::get( util::StackAllocator& alloc, ContainerId id) { UNUSED_VARIABLE(alloc); try { KeyDataStoreValue val = containerIdTable_.get(id); if (val.oId_ == UNDEF_OID) { GS_THROW_USER_ERROR(GS_ERROR_DS_CONTAINER_UNEXPECTEDLY_REMOVED, ""); } return val; } catch (std::exception& e) { handleSearchError(e, GS_ERROR_DS_DS_GET_COLLECTION_FAILED); return KeyDataStoreValue(); } } /** ** @brief @param [in] txn TransactionContext @param [in] containerKey @param [in] isCaseSensitive @return ** **/ KeyDataStoreValue KeyDataStore::get(TransactionContext& txn, const FullContainerKey& containerKey, bool isCaseSensitive) { try { KeyDataStoreValue ret = KeyDataStoreValue(); if (!isActive()) { return ret; } DataStorePartitionHeaderObject partitionHeaderObject( *getObjectManager(), allocateStrategy_, headerOId_); BtreeMap keyMap(txn, *getObjectManager(), 
partitionHeaderObject.getKeyMapOId(), allocateStrategy_, NULL); FullContainerKeyCursor keyCursor(const_cast<FullContainerKey*>(&containerKey)); keyMap.search<FullContainerKeyCursor, KeyDataStoreValue, KeyDataStoreValue>( txn, keyCursor, ret, isCaseSensitive); return ret; } catch (std::exception& e) { handleSearchError(e, GS_ERROR_DS_DS_GET_COLLECTION_FAILED); return KeyDataStoreValue(); } } /** ** @brief @param [in] txn TransactionContext @param [in] keyOId OId @param [in] newValue @return ** **/ PutStatus KeyDataStore::put(TransactionContext& txn, OId keyOId, KeyDataStoreValue& newValue) { PutStatus putStatus = PutStatus::CREATE; try { DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_); if (!isActive()) { GS_THROW_SYSTEM_ERROR(GS_ERROR_CM_INTERNAL_ERROR, "must call 'put(TransactionContext&, StoreType, Size_t)', at first"); } else { partitionHeaderObject.load(headerOId_, false); } BtreeMap keyMap(txn, *getObjectManager(), partitionHeaderObject.getKeyMapOId(), allocateStrategy_, NULL); bool isCaseSensitive = false; KeyDataStoreValue value; FullContainerKeyCursor keyCursor( *getObjectManager(), allocateStrategy_, keyOId); keyMap.search<FullContainerKeyCursor, KeyDataStoreValue, KeyDataStoreValue>( txn, keyCursor, value, isCaseSensitive); if (value.oId_ != UNDEF_OID) { int32_t status = keyMap.remove<FullContainerKeyCursor, KeyDataStoreValue>( txn, keyCursor, value, isCaseSensitive); if ((status & BtreeMap::ROOT_UPDATE) != 0) { partitionHeaderObject.setKeyMapOId(keyMap.getBaseOId()); } containerIdTable_.remove(newValue.containerId_); putStatus = PutStatus::UPDATE; } { int32_t status = keyMap.insert<FullContainerKeyCursor, KeyDataStoreValue>( txn, keyCursor, newValue, isCaseSensitive); if ((status & BtreeMap::ROOT_UPDATE) != 0) { partitionHeaderObject.setKeyMapOId(keyMap.getBaseOId()); } } containerIdTable_.set(newValue.containerId_, newValue.oId_, keyOId, keyCursor.getKey().getComponents(txn.getDefaultAllocator()).dbId_, 
newValue.storeType_, newValue.attribute_); return putStatus; } catch (std::exception& e) { handleUpdateError(e, GS_ERROR_CM_INTERNAL_ERROR); return putStatus; } } /** ** @brief @param [in] txn TransactionContext @param [in] keyOId OId @return ** **/ bool KeyDataStore::remove(TransactionContext& txn, OId keyOId) { try { DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_); if (!isActive()) { return false; } else { partitionHeaderObject.load(headerOId_, false); } BtreeMap keyMap(txn, *getObjectManager(), partitionHeaderObject.getKeyMapOId(), allocateStrategy_, NULL); bool isCaseSensitive = false; KeyDataStoreValue value; FullContainerKeyCursor keyCursor( *getObjectManager(), allocateStrategy_, keyOId); keyMap.search<FullContainerKeyCursor, KeyDataStoreValue, KeyDataStoreValue>( txn, keyCursor, value, isCaseSensitive); if (value.oId_ != UNDEF_OID) { int32_t status = keyMap.remove<FullContainerKeyCursor, KeyDataStoreValue>( txn, keyCursor, value, isCaseSensitive); if ((status & BtreeMap::ROOT_UPDATE) != 0) { partitionHeaderObject.setKeyMapOId(keyMap.getBaseOId()); } containerIdTable_.remove(value.containerId_); } return value.oId_ != UNDEF_OID; } catch (std::exception& e) { handleUpdateError(e, GS_ERROR_DS_DS_DROP_COLLECTION_FAILED); return false; } } /** ** @brief I/F @param [in] txn TransactionContext @param [in] storeValue @param [in] message @return @note KeyDataStoreI/F ** **/ Serializable* KeyDataStore::exec( TransactionContext* txn, KeyDataStoreValue* storeValue, Serializable* message) { UNUSED_VARIABLE(txn); UNUSED_VARIABLE(storeValue); UNUSED_VARIABLE(message); assert(false); return NULL; } /*! 
@brief Handle Exception of update phase */ /** ** @brief @param [in] errorCode @attention UserErrorSystemError ** **/ void KeyDataStore::handleUpdateError(std::exception&, ErrorCode) { try { throw; } catch (SystemException& e) { GS_RETHROW_SYSTEM_ERROR(e, ""); } catch (UserException& e) { if (e.getErrorCode() == GS_ERROR_CM_NO_MEMORY || e.getErrorCode() == GS_ERROR_CM_MEMORY_LIMIT_EXCEEDED || e.getErrorCode() == GS_ERROR_CM_SIZE_LIMIT_EXCEEDED) { GS_RETHROW_SYSTEM_ERROR(e, ""); } else { GS_RETHROW_USER_ERROR(e, ""); } } catch (LockConflictException& e) { DS_RETHROW_LOCK_CONFLICT_ERROR(e, ""); } catch (std::exception& e) { GS_RETHROW_SYSTEM_ERROR(e, ""); } } /*! @brief Handle Exception of search phase */ /** ** @brief @param [in] errorCode ** **/ void KeyDataStore::handleSearchError(std::exception&, ErrorCode) { try { throw; } catch (SystemException& e) { GS_RETHROW_SYSTEM_ERROR(e, ""); } catch (UserException& e) { GS_RETHROW_USER_ERROR(e, ""); } catch (LockConflictException& e) { DS_RETHROW_LOCK_CONFLICT_ERROR(e, ""); } catch (std::exception& e) { GS_RETHROW_USER_OR_SYSTEM(e, ""); } } /** ** @brief KeyDataStore @param [in] txn TransactionContext @note Partition ** **/ void KeyDataStore::initializeHeader(TransactionContext& txn) { assert(!objectManager_->isActive(allocateStrategy_.getGroupId())); DataStorePartitionHeaderObject partitionHeaderObject( *getObjectManager(), allocateStrategy_); partitionHeaderObject.initialize(txn, allocateStrategy_); headerOId_ = getHeadOId(allocateStrategy_.getGroupId()); if (partitionHeaderObject.getBaseOId() != headerOId_) { GS_THROW_SYSTEM_ERROR(GS_ERROR_DS_DS_CHUNK_OFFSET_INVALID, "must be first object"); } } /** ** @brief ID @return ID ** **/ ContainerId KeyDataStore::allocateContainerId() { DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_); if (!isActive()) { GS_THROW_SYSTEM_ERROR(GS_ERROR_CM_INTERNAL_ERROR, ""); } else { partitionHeaderObject.load(headerOId_, false); } ContainerId 
containerId = partitionHeaderObject.allocateContainerId(); return containerId; } /** ** @brief ID @param [in] num ID @return ID ** **/ DSGroupId KeyDataStore::allocateGroupId(int32_t num) { DataStorePartitionHeaderObject partitionHeaderObject(*getObjectManager(), allocateStrategy_); if (!isActive()) { GS_THROW_SYSTEM_ERROR(GS_ERROR_CM_INTERNAL_ERROR, ""); } else { partitionHeaderObject.load(headerOId_, false); } DSGroupId groupId = partitionHeaderObject.allocateGroupId(num); return groupId; } /** ** @brief calculate checkSum @param [in] alloc @note @note V4ContainerId ** **/ /*! @brief Allocate DataStorePartitionHeader Object and BtreeMap Objects for DataStores and Containers */ /** ** @brief @param [in] txn TransactionContext @param [in] allocateStrategy Object ** **/ void KeyDataStore::DataStorePartitionHeaderObject::initialize( TransactionContext& txn, AllocateStrategy& allocateStrategy) { BaseObject::allocate<DataStorePartitionHeader>( sizeof(DataStorePartitionHeader), getBaseOId(), OBJECT_TYPE_CONTAINER_ID); memset(get(), 0, sizeof(DataStorePartitionHeader)); BtreeMap keyMap(txn, *getObjectManager(), allocateStrategy, NULL); keyMap.initialize<FullContainerKeyCursor, KeyDataStoreValue>( txn, COLUMN_TYPE_STRING, true, BtreeMap::TYPE_SINGLE_KEY); setKeyMapOId(keyMap.getBaseOId()); BtreeMap storeMap(txn, *getObjectManager(), allocateStrategy, NULL); storeMap.initialize<StoreType, OId>( txn, COLUMN_TYPE_INT, true, BtreeMap::TYPE_SINGLE_KEY); setStoreMapOId(storeMap.getBaseOId()); get()->maxContainerId_ = 0; get()->groupIdCounter_ = 1; } /*! 
@brief Free DataStorePartitionHeader Object and BtreeMap Objects for DataStores and Containers */ /** ** @brief @param [in] txn TransactionContext @param [in] allocateStrategy Object ** **/ void KeyDataStore::DataStorePartitionHeaderObject::finalize( TransactionContext& txn, AllocateStrategy& allocateStrategy) { BtreeMap keyMap( txn, *getObjectManager(), getKeyMapOId(), allocateStrategy, NULL); keyMap.finalize(txn); BtreeMap storeMap( txn, *getObjectManager(), getStoreMapOId(), allocateStrategy, NULL); storeMap.finalize(txn); } /*! @brief Get Container Information by ContainerId */ /** ** @brief ID @param [in] containerId ContainerId @return ** **/ KeyDataStoreValue KeyDataStore::ContainerIdTable::get(ContainerId containerId) { ContainerIdMap::const_iterator itr = containerIdMap_.find(containerId); if (itr != containerIdMap_.end()) { return KeyDataStoreValue(itr->first, itr->second.containerOId_, itr->second.storeType_, itr->second.attribute_); } else { return KeyDataStoreValue(); } } /*! @brief Get ContainerKey OId by ContainerId */ /** ** @brief IDOId @param [in] containerId ContainerId @return OId ** **/ OId KeyDataStore::ContainerIdTable::getKey(ContainerId containerId) { ContainerIdMap::const_iterator itr = containerIdMap_.find(containerId); if (itr != containerIdMap_.end()) { return itr->second.keyOId_; } else { return UNDEF_OID; } } /** ** @brief DataStore @param [in] txn TransactionContext @param [in] clusterService ClusterService @note ** **/ void KeyDataStore::activate( TransactionContext& txn, ClusterService* clusterService) { restoreContainerIdTable(txn, clusterService); } /*! 
@brief Restore ContainerIdTable in the partition */ /** ** @brief ContainerIdChunk @param [in] txn TransactionContext @param [in] clsService ClusterService() ** **/ void KeyDataStore::restoreContainerIdTable( TransactionContext& txn, ClusterService* clusterService) { const DataStoreBase::Scope dsScope(&txn, this, clusterService); if (!isActive()) { return; } DataStorePartitionHeaderObject partitionHeaderObject( *getObjectManager(), allocateStrategy_, headerOId_); BtreeMap keyMap(txn, *getObjectManager(), partitionHeaderObject.getKeyMapOId(), allocateStrategy_, NULL); size_t containerListSize = 0; BtreeMap::BtreeCursor btreeCursor; while (1) { util::StackAllocator::Scope scope(txn.getDefaultAllocator()); typedef std::pair<FullContainerKeyAddr, KeyDataStoreValue> KeyValue; util::XArray<KeyValue> idList(txn.getDefaultAllocator()); util::XArray<KeyValue>::iterator itr; int32_t getAllStatus = keyMap.getAll<FullContainerKeyAddr, KeyDataStoreValue>( txn, PARTIAL_RESULT_SIZE, idList, btreeCursor); for (itr = idList.begin(); itr != idList.end(); itr++) { FullContainerKeyCursor keyCursor(*getObjectManager(), allocateStrategy_, itr->first.oId_); const FullContainerKey& containerKey = keyCursor.getKey(); KeyDataStoreValue& value = itr->second; const DatabaseId databaseVersionId = containerKey.getComponents(txn.getDefaultAllocator()).dbId_; containerIdTable_.set( value.containerId_, value.oId_, itr->first.oId_, databaseVersionId, value.storeType_, value.attribute_); containerListSize++; } if (getAllStatus == GS_SUCCESS) { break; } } GS_TRACE_INFO(KEY_DATA_STORE, GS_TRACE_DS_DS_CONTAINER_ID_TABLE_STATUS, "Restore container (pId=" << txn.getPartitionId() << ", count=" << containerListSize << ")"); } /*! 
@brief Returns names of Containers that meet a given condition in the
	partition
*/
void KeyDataStore::getContainerNameList(TransactionContext& txn, int64_t start,
	ResultSize limit, const DatabaseId dbId, ContainerCondition& condition,
	util::XArray<FullContainerKey>& nameList) {
	nameList.clear();
	if (start < 0) {
		GS_THROW_USER_ERROR(GS_ERROR_DS_DS_GET_CONTAINER_LIST_FAILED,
			"Illegal parameter. start < 0");
	}
	try {
		ContainerIdTable::ContainerIdList list(txn.getDefaultAllocator());
		containerIdTable_.getList(0, INT64_MAX, list);
		std::sort(list.begin(), list.end(), containerIdMapAsc());
		const StoreType currentStoreType = condition.getStoreType();
		const int64_t currentDatabaseVersionId = dbId;
		int64_t count = 0;
		nameList.clear();
		for (size_t i = 0; i < list.size() && nameList.size() < limit; i++) {
			const ContainerAttribute attribute = list[i].second.attribute_;
			const StoreType storeType = list[i].second.storeType_;
			bool isStoreMatch = (currentStoreType == UNDEF_STORE ||
				storeType == currentStoreType);
			const int64_t databaseVersionId = list[i].second.databaseVersionId_;
			bool isDbMatch = (currentDatabaseVersionId == UNDEF_DBID ||
				databaseVersionId == currentDatabaseVersionId);
			const util::Set<ContainerAttribute>& conditionAttributes =
				condition.getAttributes();
			bool isAttributeMatch = conditionAttributes.find(attribute) !=
				conditionAttributes.end();
			if (isStoreMatch && isDbMatch && isAttributeMatch) {
				if (count >= start) {
					FullContainerKeyCursor keyCursor(*getObjectManager(),
						allocateStrategy_, list[i].second.keyOId_);
					util::StackAllocator& alloc = txn.getDefaultAllocator();
					const void* srcBody;
					size_t bodySize = 0;
					keyCursor.getKey().toBinary(srcBody, bodySize);
					void* destBody = alloc.allocate(bodySize);
					memcpy(destBody, srcBody, bodySize);
					nameList.push_back(FullContainerKey(alloc,
						KeyConstraint::getNoLimitKeyConstraint(), destBody,
						bodySize));
				}
				count++;
			}
		}
	}
	catch (std::exception& e) {
		handleSearchError(e, GS_ERROR_DS_DS_GET_CONTAINER_LIST_FAILED);
	}
}

/*!
	@brief Returns number of Containers in the partition
*/
uint64_t KeyDataStore::getContainerCount(TransactionContext& txn,
	const DatabaseId dbId, ContainerCondition& condition) {
	uint64_t count = 0;
	try {
		ContainerIdTable::ContainerIdList list(txn.getDefaultAllocator());
		containerIdTable_.getList(0, INT64_MAX, list);
		const StoreType currentStoreType = condition.getStoreType();
		const int64_t currentDatabaseVersionId = dbId;
		for (size_t i = 0; i < list.size(); i++) {
			const ContainerAttribute attribute = list[i].second.attribute_;
			const StoreType storeType = list[i].second.storeType_;
			bool isStoreMatch = (currentStoreType == UNDEF_STORE ||
				storeType == currentStoreType);
			const int64_t databaseVersionId = list[i].second.databaseVersionId_;
			bool isDbMatch = (currentDatabaseVersionId == UNDEF_DBID ||
				databaseVersionId == currentDatabaseVersionId);
			const util::Set<ContainerAttribute>& conditionAttributes =
				condition.getAttributes();
			bool isAttributeMatch = conditionAttributes.find(attribute) !=
				conditionAttributes.end();
			if (isStoreMatch && isDbMatch && isAttributeMatch) {
				count++;
			}
		}
	}
	catch (std::exception& e) {
		handleSearchError(e, GS_ERROR_DS_DS_GET_CONTAINER_LIST_FAILED);
	}
	return count;
}

/*!
	@brief Get a list of KeyDataStoreValue entries that meet the condition,
	starting from startContainerId
*/
bool KeyDataStore::scanContainerList(
	TransactionContext& txn, ContainerId startContainerId, uint64_t limit,
	const DatabaseId dbId, ContainerCondition& condition,
	util::XArray<KeyDataStoreValue*>& storeValueList) {
	util::StackAllocator& alloc = txn.getDefaultAllocator();
	typedef
ContainerIdTable::ContainerIdRefList ContainerIdRefList;
	ContainerIdRefList list(alloc);
	const bool followingFound = containerIdTable_.getListOrdered(
		startContainerId, limit, dbId, condition, list);
	for (ContainerIdRefList::iterator itr = list.begin(); itr != list.end();
		 ++itr) {
		KeyDataStoreValue* storeValue = ALLOC_NEW(alloc) KeyDataStoreValue(
			itr->first, itr->second->containerOId_, itr->second->storeType_,
			itr->second->attribute_);
		storeValueList.push_back(storeValue);
	}
	return followingFound;
}

/*!
	@brief Get FullContainerKey
*/
FullContainerKey* KeyDataStore::getKey(
	util::StackAllocator& alloc, ContainerId id) {
	FullContainerKey* returnKey = NULL;
	OId keyOId = containerIdTable_.getKey(id);
	if (keyOId != UNDEF_OID) {
		FullContainerKeyCursor cursor(
			*getObjectManager(), allocateStrategy_, keyOId);
		FullContainerKey containerKey = cursor.getKey();
		const void* keyData;
		size_t keySize;
		containerKey.toBinary(keyData, keySize);
		uint8_t* destBody = ALLOC_NEW(alloc) uint8_t[keySize];
		memcpy(destBody, keyData, keySize);
		returnKey = ALLOC_NEW(alloc) FullContainerKey(
			alloc, KeyConstraint::getNoLimitKeyConstraint(), destBody, keySize);
	}
	else {
		GS_THROW_USER_ERROR(GS_ERROR_DS_CONTAINER_UNEXPECTEDLY_REMOVED, "");
	}
	return returnKey;
}

/*!
	@brief Get FullContainerKey by OId
*/
FullContainerKey* KeyDataStore::getKey(TransactionContext& txn, OId oId) {
	util::StackAllocator& alloc = txn.getDefaultAllocator();
	FullContainerKeyCursor cursor(*getObjectManager(), allocateStrategy_, oId);
	FullContainerKey containerKey = cursor.getKey();
	const void* keyData;
	size_t keySize;
	containerKey.toBinary(keyData, keySize);
	uint8_t* destBody = ALLOC_NEW(alloc) uint8_t[keySize];
	memcpy(destBody, keyData, keySize);
	FullContainerKey* returnKey = ALLOC_NEW(alloc) FullContainerKey(
		alloc, KeyConstraint::getNoLimitKeyConstraint(), destBody, keySize);
	return
returnKey;
}

/*!
	@brief Allocate and store a key object for the given FullContainerKey
*/
OId KeyDataStore::allocateKey(
	TransactionContext& txn, const FullContainerKey& key) {
	DataStorePartitionHeaderObject partitionHeaderObject(
		*getObjectManager(), allocateStrategy_);
	if (!isActive()) {
		initializeHeader(txn);
	}
	FullContainerKeyCursor keyCursor(*getObjectManager(), allocateStrategy_);
	keyCursor.initialize(txn, key);
	return keyCursor.getBaseOId();
}

/*!
	@brief Remove the key object specified by OId
*/
void KeyDataStore::removeKey(OId oId) {
	assert(oId != UNDEF_OID);
	FullContainerKeyCursor keyCursor(
		*getObjectManager(), allocateStrategy_, oId);
	keyCursor.finalize();
}

/*!
	@brief Get head OId of the specified group
*/
OId KeyDataStore::getHeadOId(DSGroupId groupId) {
	ChunkId headChunkId = objectManager_->getHeadChunkId(groupId);
	OId partitionHeaderOId =
		objectManager_->getOId(groupId, headChunkId, FIRST_OBJECT_OFFSET);
	return partitionHeaderOId;
}

/*!
	@brief Resolve PartitionId from a container key
*/
PartitionId KeyDataStore::resolvePartitionId(
	util::StackAllocator& alloc, const FullContainerKey& containerKey,
	PartitionId partitionCount, ContainerHashMode hashMode) {
	UNUSED_VARIABLE(hashMode);
	assert(partitionCount > 0);
	const FullContainerKeyComponents normalizedComponents =
		containerKey.getComponents(alloc, false);
	if (normalizedComponents.affinityNumber_ != UNDEF_NODE_AFFINITY_NUMBER) {
		return static_cast<PartitionId>(
			normalizedComponents.affinityNumber_ % partitionCount);
	}
	else if (normalizedComponents.affinityStringSize_ > 0) {
		const uint32_t crcValue = util::CRC32::calculate(
			normalizedComponents.affinityString_,
			normalizedComponents.affinityStringSize_);
		return (crcValue % partitionCount);
	}
	else {
		const char8_t* baseContainerName =
			(normalizedComponents.baseNameSize_ == 0 ?
"" : normalizedComponents.baseName_); const uint32_t crcValue = util::CRC32::calculate( baseContainerName, normalizedComponents.baseNameSize_); return (crcValue % partitionCount); } } /*! @brief Set value(ContainerId, ContainerInfoCache) */ /** ** @brief OIdDB @param [in] containerId ContainerId @param [in] containerOId OId @param [in] keyOId OId @param [in] databaseVersionId DB @param [in] storeType @param [in] attribute @note databaseVersionIdDBRowIdint64_t ** **/ void KeyDataStore::ContainerIdTable::set(ContainerId containerId, OId containerOId, OId keyOId, int64_t databaseVersionId, StoreType storeType, ContainerAttribute attribute) { try { std::pair<ContainerIdMap::iterator, bool> itr; ContainerInfoCache containerInfoCache( containerOId, keyOId, databaseVersionId, storeType, attribute); itr = containerIdMap_.insert( std::make_pair( containerId, containerInfoCache)); if (!itr.second) { GS_THROW_SYSTEM_ERROR( GS_ERROR_DS_DS_CONTAINER_ID_INVALID, "duplicate container id"); } } catch (std::exception& e) { GS_RETHROW_SYSTEM_ERROR(e, ""); } } /*! @brief Remove value by ContainerId key */ /** ** @brief ID @param [in] containerId ContainerId ** **/ void KeyDataStore::ContainerIdTable::remove(ContainerId containerId) { ContainerIdMap::size_type result = containerIdMap_.erase(containerId); if (result == 0) { GS_TRACE_WARNING(KEY_DATA_STORE, GS_TRACE_DS_DS_CONTAINER_ID_TABLE_STATUS, "KeyDataStore::ContainerIdTable::remove: out of bounds"); } } /*! 
@brief Get list of all ContainerId in the map
*/
void KeyDataStore::ContainerIdTable::getList(
	int64_t start, ResultSize limit, ContainerIdList& list) {
	try {
		list.clear();
		if (static_cast<uint64_t>(start) > size()) {
			return;
		}
		int64_t skipCount = 0;
		ResultSize listCount = 0;
		bool inRange = false;
		ContainerIdMap::const_iterator itr;
		for (itr = containerIdMap_.begin(); itr != containerIdMap_.end();
			 itr++) {
			++skipCount;
			if (!inRange && skipCount > start) {
				inRange = true;
			}
			if (inRange) {
				if (listCount >= limit) {
					break;
				}
				if (listCount > CONTAINER_NAME_LIST_NUM_UPPER_LIMIT) {
					GS_THROW_USER_ERROR(
						GS_ERROR_DS_DS_GET_CONTAINER_LIST_FAILED,
						"Numbers of containers exceed an upper limit level.");
				}
				list.push_back(*itr);
				++listCount;
			}
		}
		return;
	}
	catch (std::exception& e) {
		GS_RETHROW_USER_OR_SYSTEM(
			e, GS_EXCEPTION_MERGE_MESSAGE(e, "Failed to list container"));
	}
}

/*!
	@brief Get an ordered list of entries that meet the condition, starting
	from startId
*/
bool KeyDataStore::ContainerIdTable::getListOrdered(
	ContainerId startId, uint64_t limit, const DatabaseId dbId,
	ContainerCondition& condition, ContainerIdRefList& list) const {
	list.clear();
	list.reserve(std::min<uint64_t>(containerIdMap_.size(), limit));
	const util::Set<ContainerAttribute>& attributes = condition.getAttributes();
	containerIdMapAsc pred;
	const StoreType storeType = condition.getStoreType();
	bool followingFound = false;
	for (ContainerIdMap::const_iterator itr = containerIdMap_.begin();
		 itr != containerIdMap_.end(); ++itr) {
		const ContainerId id = itr->first;
		bool isSkip = (id < startId) ||
			(storeType != UNDEF_STORE && itr->second.storeType_ != storeType) ||
			(dbId != UNDEF_DBID && itr->second.databaseVersionId_ != dbId) ||
			(attributes.find(itr->second.attribute_) ==
attributes.end());
		if (isSkip) {
			continue;
		}
		const ContainerIdRefList::value_type entry(itr->first, &itr->second);
		if (list.size() >= limit) {
			followingFound = true;
			if (list.empty()) {
				break;
			}
			std::pop_heap(list.begin(), list.end(), pred);
			if (pred(entry, list.back())) {
				list.back() = entry;
			}
		}
		else {
			list.push_back(entry);
		}
		std::push_heap(list.begin(), list.end(), pred);
	}
	std::sort_heap(list.begin(), list.end(), pred);
	return followingFound;
}

bool KeyDataStore::containerIdMapAsc::operator()(
	const std::pair<ContainerId, ContainerInfoCache>& left,
	const std::pair<ContainerId, ContainerInfoCache>& right) const {
	return left.first < right.first;
}

bool KeyDataStore::containerIdMapAsc::operator()(
	const std::pair<ContainerId, const ContainerInfoCache*>& left,
	const std::pair<ContainerId, const ContainerInfoCache*>& right) const {
	return left.first < right.first;
}
```
× Fatshedera is a hybrid genus of flowering plants, with the common names tree ivy or aralia ivy. It has only one species, × Fatshedera lizei. The hybrid symbol × in front of the name indicates that this is an inter-generic hybrid, a cross between plants from different genera. The name may be written with or without a space after the × symbol. × Fatshedera lizei was created by crossing Fatsia japonica 'Moserii' (Moser's Japanese fatsia, the seed parent) and Hedera helix (common ivy, the pollen parent) at the Lizé Frères tree nursery at Nantes in France in 1912. Its generic name is derived from the names of the two parent genera. Description The plant combines the shrubby shape of Fatsia with the five-lobed leaves of Hedera. As a shrub, × F. lizei can grow up to 1.2 m tall, above which the weight of the fairly weak branches makes them tend to bend over. It can, however, also be tied to a support and grown as a vine up to 3–4 m tall; unlike Hedera, it does not readily climb without assistance. The leaf blades are 7–25 cm long and broad, with a 5–20 cm petiole. The flowers are 4–6 mm in diameter, yellowish-white, and produced in late autumn or early winter in dense umbels; they are sterile and do not normally produce any fruit. However, specimens have been reported that produced not only flowers but also clusters of berries, one of which put forth shoots in the pot. Cultivation It is grown both as a garden plant outdoors and as a houseplant indoors, where its tolerance of shady conditions is valued. Indoors it grows well in bright indirect light. Outdoors it can tolerate winter temperatures down to −15 °C, but it can also be grown successfully indoors at temperatures that never fall below 20 °C. Several cultivars have been selected, with dark green to variously white- or yellow-variegated leaves. × Fatshedera lizei, together with the cultivars 'Annemieke' and 'Variegata', has gained the Royal Horticultural Society's Award of Garden Merit. 
References External links Araliaceae Plant nothogenera Plants described in 1912 Monotypic Apiales genera
```cpp
// This file is part of Eigen, a lightweight C++ template library
// for linear algebra.
//
//
// This Source Code Form is subject to the terms of the Mozilla
// Public License v. 2.0. If a copy of the MPL was not distributed
// with this file, You can obtain one at path_to_url

#ifndef EIGEN_TYPE_CASTING_SSE_H
#define EIGEN_TYPE_CASTING_SSE_H

namespace Eigen {

namespace internal {

#ifndef EIGEN_VECTORIZE_AVX
template <>
struct type_casting_traits<float, int> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 1,
    TgtCoeffRatio = 1
  };
};

template <>
struct type_casting_traits<int, float> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 1,
    TgtCoeffRatio = 1
  };
};

template <>
struct type_casting_traits<double, float> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 2,
    TgtCoeffRatio = 1
  };
};

template <>
struct type_casting_traits<float, double> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 1,
    TgtCoeffRatio = 2
  };
};
#endif

template<> EIGEN_STRONG_INLINE Packet4i pcast<Packet4f, Packet4i>(const Packet4f& a) {
  return _mm_cvttps_epi32(a);
}

template<> EIGEN_STRONG_INLINE Packet4f pcast<Packet4i, Packet4f>(const Packet4i& a) {
  return _mm_cvtepi32_ps(a);
}

template<> EIGEN_STRONG_INLINE Packet4f pcast<Packet2d, Packet4f>(const Packet2d& a, const Packet2d& b) {
  return _mm_shuffle_ps(_mm_cvtpd_ps(a), _mm_cvtpd_ps(b), (1 << 2) | (1 << 6));
}

template<> EIGEN_STRONG_INLINE Packet2d pcast<Packet4f, Packet2d>(const Packet4f& a) {
  // Simply discard the second half of the input
  return _mm_cvtps_pd(a);
}

template<> EIGEN_STRONG_INLINE Packet4i preinterpret<Packet4i,Packet4f>(const Packet4f& a) {
  return _mm_castps_si128(a);
}

template<> EIGEN_STRONG_INLINE Packet4f preinterpret<Packet4f,Packet4i>(const Packet4i& a) {
  return _mm_castsi128_ps(a);
}

template<> EIGEN_STRONG_INLINE Packet2d preinterpret<Packet2d,Packet4i>(const Packet4i& a) {
  return _mm_castsi128_pd(a);
}

template<> EIGEN_STRONG_INLINE Packet4i preinterpret<Packet4i,Packet2d>(const Packet2d& a) {
  return _mm_castpd_si128(a);
}

// Disable the following code since it's broken on too many
// platforms / compilers.
//#elif defined(EIGEN_VECTORIZE_SSE) && (!EIGEN_ARCH_x86_64) && (!EIGEN_COMP_MSVC)
#if 0

template <>
struct type_casting_traits<Eigen::half, float> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 1,
    TgtCoeffRatio = 1
  };
};

template<> EIGEN_STRONG_INLINE Packet4f pcast<Packet4h, Packet4f>(const Packet4h& a) {
  __int64_t a64 = _mm_cvtm64_si64(a.x);
  Eigen::half h = raw_uint16_to_half(static_cast<unsigned short>(a64));
  float f1 = static_cast<float>(h);
  h = raw_uint16_to_half(static_cast<unsigned short>(a64 >> 16));
  float f2 = static_cast<float>(h);
  h = raw_uint16_to_half(static_cast<unsigned short>(a64 >> 32));
  float f3 = static_cast<float>(h);
  h = raw_uint16_to_half(static_cast<unsigned short>(a64 >> 48));
  float f4 = static_cast<float>(h);
  return _mm_set_ps(f4, f3, f2, f1);
}

template <>
struct type_casting_traits<float, Eigen::half> {
  enum {
    VectorizedCast = 1,
    SrcCoeffRatio = 1,
    TgtCoeffRatio = 1
  };
};

template<> EIGEN_STRONG_INLINE Packet4h pcast<Packet4f, Packet4h>(const Packet4f& a) {
  EIGEN_ALIGN16 float aux[4];
  pstore(aux, a);
  Eigen::half h0(aux[0]);
  Eigen::half h1(aux[1]);
  Eigen::half h2(aux[2]);
  Eigen::half h3(aux[3]);
  Packet4h result;
  result.x = _mm_set_pi16(h3.x, h2.x, h1.x, h0.x);
  return result;
}

#endif

} // end namespace internal

} // end namespace Eigen

#endif // EIGEN_TYPE_CASTING_SSE_H
```
```ruby assert_equal %q{[1, 2, 4, 5, 6, 7, 8]}, %q{$a = []; begin; ; $a << 1 [1,2].each{; $a << 2 break; $a << 3 }; $a << 4 begin; $a << 5 ensure; $a << 6 end; $a << 7 ; $a << 8 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 6, 7, 8]}, %q{$a = []; begin; ; $a << 1 begin; $a << 2 [1,2].each do; $a << 3 break; $a << 4 end; $a << 5 ensure; $a << 6 end; $a << 7 ; $a << 8 ; rescue Exception; $a << 99; end; $a} assert_equal %q{ok}, %q{ ["a"].inject("ng"){|x,y| break :ok } } assert_equal %q{ok}, %q{ unless ''.respond_to? :lines class String def lines self end end end ('a').lines.map{|e| break :ok } } assert_equal %q{[1, 2, 4, 5]}, %q{$a = []; begin; ; $a << 1 ["a"].inject("ng"){|x,y|; $a << 2 break :ok; $a << 3 }; $a << 4 ; $a << 5 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 4, 5]}, %q{$a = []; begin; ; $a << 1 ('a'..'b').map{|e|; $a << 2 break :ok; $a << 3 }; $a << 4 ; $a << 5 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 7, 8]}, %q{$a = []; begin; ; $a << 1 [1,2].each do; $a << 2 begin; $a << 3 break; $a << 4 ensure; $a << 5 end; $a << 6 end; $a << 7 ; $a << 8 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 6, 9, 10]}, %q{$a = []; begin; ; $a << 1 i=0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 begin; $a << 5 ensure; $a << 6 break; $a << 7 end; $a << 8 end; $a << 9 ; $a << 10 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 7, 10, 11]}, %q{$a = []; begin; ; $a << 1 i=0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 begin; $a << 5 raise; $a << 6 ensure; $a << 7 break; $a << 8 end; $a << 9 end; $a << 10 ; $a << 11 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 7, 10, 11]}, %q{$a = []; begin; ; $a << 1 i=0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 begin; $a << 5 raise; $a << 6 rescue; $a << 7 break; $a << 8 end; $a << 9 end; $a << 10 ; $a << 11 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 8, 9]}, %q{$a = []; begin; ; $a << 1 
[1,2].each do; $a << 2 begin; $a << 3 raise StandardError; $a << 4 ensure; $a << 5 break; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 8, 9]}, %q{$a = []; begin; ; $a << 1 [1,2].each do; $a << 2 begin; $a << 3 raise StandardError; $a << 4 rescue; $a << 5 break; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 6, 8, 10, 11]}, %q{$a = []; begin; ; $a << 1 [1,2].each do; $a << 2 begin; $a << 3 begin; $a << 4 break; $a << 5 ensure; $a << 6 end; $a << 7 ensure; $a << 8 end; $a << 9 end; $a << 10 ; $a << 11 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 6, 7, 8, 10, 13, 3, 4, 5, 6, 7, 8, 10, 13, 3, 4, 5, 6, 7, 8, 10, 13, 14, 15]}, %q{$a = []; begin; ; $a << 1 i = 0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 j = 0; $a << 5 while j<3; $a << 6 j+=1; $a << 7 begin; $a << 8 raise; $a << 9 rescue; $a << 10 break; $a << 11 end; $a << 12 end; $a << 13 end; $a << 14 ; $a << 15 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 16, 17]}, %q{$a = []; begin; ; $a << 1 i = 0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 j = 0; $a << 5 while j<3; $a << 6 j+=1; $a << 7 1.times{; $a << 8 begin; $a << 9 raise; $a << 10 rescue; $a << 11 break; $a << 12 end; $a << 13 }; $a << 14 end; $a << 15 end; $a << 16 ; $a << 17 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 6, 7, 8, 10, 13, 3, 4, 5, 6, 7, 8, 10, 13, 3, 4, 5, 6, 7, 8, 10, 13, 14, 15]}, %q{$a = []; begin; ; $a << 1 i = 0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 j = 0; $a << 5 while j<3; $a << 6 j+=1; $a << 7 begin; $a << 8 raise; $a << 9 ensure; $a << 10 break; $a << 11 end; $a << 12 end; $a << 13 end; $a << 14 ; $a << 15 ; rescue 
Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 3, 4, 5, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 6, 7, 8, 9, 11, 14, 15, 16, 17]}, %q{$a = []; begin; ; $a << 1 i = 0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 j = 0; $a << 5 while j<3; $a << 6 j+=1; $a << 7 1.times{; $a << 8 begin; $a << 9 raise; $a << 10 ensure; $a << 11 break; $a << 12 end; $a << 13 }; $a << 14 end; $a << 15 end; $a << 16 ; $a << 17 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 8, 9]}, %q{$a = []; begin; ; $a << 1 while true; $a << 2 begin; $a << 3 break; $a << 4 ensure; $a << 5 break; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 99]}, %q{ $a = []; begin; ; $a << 1 while true; $a << 2 begin; $a << 3 break; $a << 4 ensure; $a << 5 raise; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4, 6, 8, 9, 10, 11]}, %q{$a = []; begin; ; $a << 1 begin; $a << 2 [1,2].each do; $a << 3 begin; $a << 4 break; $a << 5 ensure; $a << 6 end; $a << 7 end; $a << 8 ensure; $a << 9 end; $a << 10 ; $a << 11 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 4, 99]}, %q{$a = []; begin; ; $a << 1 begin; $a << 2 raise StandardError; $a << 3 ensure; $a << 4 end; $a << 5 ; $a << 6 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 4]}, %q{$a = []; begin; ; $a << 1 begin; $a << 2 ensure; $a << 3 end ; $a << 4 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 3, 5, 99]}, %q{$a = []; begin; ; $a << 1 [1,2].each do; $a << 2 begin; $a << 3 break; $a << 4 ensure; $a << 5 raise StandardError; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{3}, %q{ def m a, b a + b end m(1, while true break 2 end ) } assert_equal %q{4}, %q{ def m a, b a + b end m(1, (i=0; 
while i<2 i+=1 class C next 2 end end; 3) ) }, tagged: true assert_equal %q{34}, %q{ def m a, b a+b end m(1, 1.times{break 3}) + m(10, (1.times{next 3}; 20)) } assert_equal %q{[1, 2, 3, 6, 7]}, %q{$a = []; begin; ; $a << 1 3.times{; $a << 2 class C; $a << 3 break; $a << 4 end; $a << 5 }; $a << 6 ; $a << 7 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 4, 8, 9]}, %q{$a = []; begin; ; $a << 1 3.times{; $a << 2 class A; $a << 3 class B; $a << 4 break; $a << 5 end; $a << 6 end; $a << 7 }; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 2, 3, 2, 3, 6, 7]}, %q{$a = []; begin; ; $a << 1 3.times{; $a << 2 class C; $a << 3 next; $a << 4 end; $a << 5 }; $a << 6 ; $a << 7 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 4, 2, 3, 4, 2, 3, 4, 8, 9]}, %q{$a = []; begin; ; $a << 1 3.times{; $a << 2 class C; $a << 3 class D; $a << 4 next; $a << 5 end; $a << 6 end; $a << 7 }; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 6, 7]}, %q{$a = []; begin; ; $a << 1 while true; $a << 2 class C; $a << 3 break; $a << 4 end; $a << 5 end; $a << 6 ; $a << 7 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 4, 8, 9]}, %q{$a = []; begin; ; $a << 1 while true; $a << 2 class C; $a << 3 class D; $a << 4 break; $a << 5 end; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{[1, 2, 3, 4, 5, 3, 4, 5, 3, 4, 5, 8, 9]}, %q{$a = []; begin; ; $a << 1 i=0; $a << 2 while i<3; $a << 3 i+=1; $a << 4 class C; $a << 5 next 10; $a << 6 end; $a << 7 end; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a}, tagged: true assert_equal %q{1}, %q{ 1.times{ while true class C begin break ensure break end end end } }, tagged: true assert_equal %q{[1, 2, 3, 5, 2, 3, 5, 7, 8]}, %q{$a = []; begin; ; $a << 1 [1,2].each do; $a << 2 begin; $a << 3 next; $a << 4 ensure; $a << 
5 end; $a << 6 end; $a << 7 ; $a << 8 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 2, 6, 3, 5, 7, 8]}, %q{$a = []; begin; ; $a << 1 o = "test"; $a << 2 def o.test(a); $a << 3 return a; $a << 4 ensure; $a << 5 end; $a << 6 o.test(123); $a << 7 ; $a << 8 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 4, 7, 5, 8, 9]}, %q{$a = []; begin; ; $a << 1 def m1 *args; $a << 2 ; $a << 3 end; $a << 4 def m2; $a << 5 m1(:a, :b, (return 1; :c)); $a << 6 end; $a << 7 m2; $a << 8 ; $a << 9 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 8, 2, 3, 4, 5, 9, 10]}, %q{$a = []; begin; ; $a << 1 def m(); $a << 2 begin; $a << 3 2; $a << 4 ensure; $a << 5 return 3; $a << 6 end; $a << 7 end; $a << 8 m; $a << 9 ; $a << 10 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 3, 11, 4, 5, 6, 7, 12, 13]}, %q{$a = []; begin; ; $a << 1 def m2; $a << 2 end; $a << 3 def m(); $a << 4 m2(begin; $a << 5 2; $a << 6 ensure; $a << 7 return 3; $a << 8 end); $a << 9 4; $a << 10 end; $a << 11 m(); $a << 12 ; $a << 13 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[1, 16, 2, 3, 4, 5, 6, 7, 10, 11, 17, 18]}, %q{$a = []; begin; ; $a << 1 def m; $a << 2 1; $a << 3 1.times{; $a << 4 2; $a << 5 begin; $a << 6 3; $a << 7 return; $a << 8 4; $a << 9 ensure; $a << 10 5; $a << 11 end; $a << 12 6; $a << 13 }; $a << 14 7; $a << 15 end; $a << 16 m(); $a << 17 ; $a << 18 ; rescue Exception; $a << 99; end; $a} assert_equal %q{[:ok, :ok2, :last]}, %q{ a = [] i = 0 begin while i < 1 i+=1 begin begin next ensure a << :ok end ensure a << :ok2 end end ensure a << :last end a } assert_equal %q{[:ok, :ok2, :last]}, %q{ a = [] i = 0 begin while i < 1 i+=1 begin begin break ensure a << :ok end ensure a << :ok2 end end ensure a << :last end a } assert_equal %q{[:ok, :ok2, :last]}, %q{ a = [] i = 0 begin while i < 1 if i>0 break end i+=1 begin begin redo ensure a << :ok end ensure a << :ok2 end end ensure a << :last end a } assert_equal %Q{ENSURE\n}, %q{ def test while true return end 
ensure puts("ENSURE") end test }, '[ruby-dev:37967]' [['[ruby-core:28129]', %q{ class Bug2728 include Enumerable define_method(:dynamic_method) do "dynamically defined method" end def each begin yield :foo ensure dynamic_method end end end e = Bug2728.new }], ['[ruby-core:28132]', %q{ class Bug2729 include Enumerable def each begin yield :foo ensure proc {}.call end end end e = Bug2729.new }], ['[ruby-core:39125]', %q{ class Bug5234 include Enumerable def each(&block) begin yield :foo ensure proc(&block) end end end e = Bug5234.new }], ['[ruby-dev:45656]', %q{ class Bug6460 include Enumerable def each(&block) begin yield :foo ensure 1.times { Proc.new(&block) } end end end e = Bug6460.new }]].each do |bug, src| assert_equal "foo", src + %q{e.detect {true}}, bug assert_equal "true", src + %q{e.any? {true}}, bug assert_equal "false", src + %q{e.all? {false}}, bug assert_equal "true", src + %q{e.include?(:foo)}, bug end assert_equal "foo", %q{ class Bug6460 def m1 m2 {|e| return e } end def m2 begin yield :foo ensure begin begin yield :foo ensure Proc.new raise '' end rescue end end end end Bug6460.new.m1 }, '[ruby-dev:46372]' assert_equal "foo", %q{ obj = "foo" if obj || any1 any2 = any2 else raise obj.inspect end obj }, '[ruby-core:87830]' ```
Radosław Sylwestrzak (born 8 September 1992) is a Polish professional footballer who plays as a defender for III liga club Lechia Zielona Góra. Career Sylwestrzak started his career with Polish fourth division side Ilanka Rzepin. Ahead of the second half of the 2013–14 season, he signed for GKS Katowice in the Polish second division, where he made seven appearances without scoring. Ahead of the second half of the 2014–15 season, he signed for Polish fourth division club Formacja Port 2000 Mostki. In 2015, he signed for Radomiak Radom in the Polish third division. In 2017, Sylwestrzak signed for Polish fourth division team Widzew Łódź. In 2019, he signed for Stal Rzeszów in the Polish third division. References External links Living people 1992 births People from Słubice Footballers from Lubusz Voivodeship Polish men's footballers Men's association football defenders Lechia Zielona Góra players GKS Katowice players Radomiak Radom players Siarka Tarnobrzeg players Widzew Łódź players Stal Rzeszów (football) players KSZO Ostrowiec Świętokrzyski players I liga players II liga players III liga players
```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup Label="Build">
    <TargetFrameworks>net48;netcoreapp3.1;net5.0-windows;net6.0-windows</TargetFrameworks>
    <UseWPF>true</UseWPF> <!-- Needed so System.Windows.Automation is added -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>
  <PropertyGroup Label="Package">
    <GeneratePackageOnBuild>True</GeneratePackageOnBuild>
    <PackageOutputPath>..\..\artifacts</PackageOutputPath>
    <Version>4.0.0</Version>
    <Product>FlaUI</Product>
    <Authors>Roemer</Authors>
    <Description>Library to use FlaUI with UIA2.</Description>
    <PackageProjectUrl>path_to_url</PackageProjectUrl>
    <PackageIcon>FlaUI.png</PackageIcon>
    <RepositoryUrl>path_to_url</RepositoryUrl>
    <PackageTags>UI Automation UIA2 UIA3 UIA System.Windows.Automation</PackageTags>
    <IncludeSource>True</IncludeSource>
    <IncludeSymbols>True</IncludeSymbols>
    <SymbolPackageFormat>snupkg</SymbolPackageFormat>
  </PropertyGroup>
  <PropertyGroup Label="Signing" Condition="'$(EnableSigning)'=='true'">
    <SignAssembly>true</SignAssembly>
    <AssemblyOriginatorKeyFile>../../FlaUI.snk</AssemblyOriginatorKeyFile>
    <PublicSign Condition="'$(OS)'!='Windows_NT'">true</PublicSign>
    <PackageId>FlaUI.UIA2.Signed</PackageId>
    <OutputPath>bin\$(Configuration)\Signed</OutputPath>
    <IntermediateOutputPath>obj\$(Configuration)\Signed</IntermediateOutputPath>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\FlaUI.Core\FlaUI.Core.csproj" />
  </ItemGroup>
  <ItemGroup Condition="'$(TargetFramework)' == 'net48'">
    <Reference Include="UIAutomationClient" />
    <Reference Include="UIAutomationTypes" />
    <Reference Include="WindowsBase" />
  </ItemGroup>
  <ItemGroup Label="Additional nuget files">
    <None Include="..\..\LICENSE.txt" Pack="true" PackagePath="" />
    <None Include="..\..\CHANGELOG.md" Pack="true" PackagePath="" />
    <None Include="..\..\FlaUI.png" Pack="true" PackagePath="" />
  </ItemGroup>
</Project>
```
```c++
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "src/inspector/v8-stack-trace-impl.h"

#include <algorithm>

#include "../../third_party/inspector_protocol/crdtp/json.h"
#include "src/inspector/v8-debugger.h"
#include "src/inspector/v8-inspector-impl.h"

using v8_crdtp::SpanFrom;
using v8_crdtp::json::ConvertCBORToJSON;
using v8_crdtp::json::ConvertJSONToCBOR;

namespace v8_inspector {

int V8StackTraceImpl::maxCallStackSizeToCapture = 200;

namespace {

static const char kId[] = "id";
static const char kDebuggerId[] = "debuggerId";
static const char kShouldPause[] = "shouldPause";

static const v8::StackTrace::StackTraceOptions stackTraceOptions =
    static_cast<v8::StackTrace::StackTraceOptions>(
        v8::StackTrace::kDetailed |
        v8::StackTrace::kExposeFramesAcrossSecurityOrigins);

std::vector<std::shared_ptr<StackFrame>> toFramesVector(
    V8Debugger* debugger, v8::Local<v8::StackTrace> v8StackTrace,
    int maxStackSize) {
  DCHECK(debugger->isolate()->InContext());
  int frameCount = std::min(v8StackTrace->GetFrameCount(), maxStackSize);
  std::vector<std::shared_ptr<StackFrame>> frames(frameCount);
  for (int i = 0; i < frameCount; ++i) {
    frames[i] =
        debugger->symbolize(v8StackTrace->GetFrame(debugger->isolate(), i));
  }
  return frames;
}

void calculateAsyncChain(V8Debugger* debugger, int contextGroupId,
                         std::shared_ptr<AsyncStackTrace>* asyncParent,
                         V8StackTraceId* externalParent, int* maxAsyncDepth) {
  *asyncParent = debugger->currentAsyncParent();
  *externalParent = debugger->currentExternalParent();
  DCHECK(externalParent->IsInvalid() || !*asyncParent);
  if (maxAsyncDepth) *maxAsyncDepth = debugger->maxAsyncCallChainDepth();

  // Do not accidentally append async call chain from another group. This
  // should not happen if we have proper instrumentation, but let's
  // double-check to be safe.
  if (contextGroupId && *asyncParent &&
      (*asyncParent)->externalParent().IsInvalid() &&
      (*asyncParent)->contextGroupId() != contextGroupId) {
    asyncParent->reset();
    *externalParent = V8StackTraceId();
    if (maxAsyncDepth) *maxAsyncDepth = 0;
    return;
  }

  // Only the top stack in the chain may be empty, so ensure that second stack
  // is non-empty (it's the top of appended chain).
  if (*asyncParent && (*asyncParent)->isEmpty()) {
    *asyncParent = (*asyncParent)->parent().lock();
  }
}

std::unique_ptr<protocol::Runtime::StackTrace> buildInspectorObjectCommon(
    V8Debugger* debugger,
    const std::vector<std::shared_ptr<StackFrame>>& frames,
    const String16& description,
    const std::shared_ptr<AsyncStackTrace>& asyncParent,
    const V8StackTraceId& externalParent, int maxAsyncDepth) {
  if (asyncParent && frames.empty() &&
      description == asyncParent->description()) {
    return asyncParent->buildInspectorObject(debugger, maxAsyncDepth);
  }

  auto inspectorFrames =
      std::make_unique<protocol::Array<protocol::Runtime::CallFrame>>();
  for (const std::shared_ptr<StackFrame>& frame : frames) {
    V8InspectorClient* client = nullptr;
    if (debugger && debugger->inspector())
      client = debugger->inspector()->client();
    inspectorFrames->emplace_back(frame->buildInspectorObject(client));
  }
  std::unique_ptr<protocol::Runtime::StackTrace> stackTrace =
      protocol::Runtime::StackTrace::create()
          .setCallFrames(std::move(inspectorFrames))
          .build();
  if (!description.isEmpty()) stackTrace->setDescription(description);
  if (asyncParent) {
    if (maxAsyncDepth > 0) {
      stackTrace->setParent(
          asyncParent->buildInspectorObject(debugger, maxAsyncDepth - 1));
    } else if (debugger) {
      stackTrace->setParentId(
          protocol::Runtime::StackTraceId::create()
              .setId(stackTraceIdToString(
                  AsyncStackTrace::store(debugger, asyncParent)))
              .build());
    }
  }
  if (!externalParent.IsInvalid()) {
    stackTrace->setParentId(
        protocol::Runtime::StackTraceId::create()
            .setId(stackTraceIdToString(externalParent.id))
            .setDebuggerId(
                V8DebuggerId(externalParent.debugger_id).toString())
            .build());
  }
  return stackTrace;
}

}  // namespace

V8StackTraceId::V8StackTraceId() : id(0), debugger_id(V8DebuggerId().pair()) {}

V8StackTraceId::V8StackTraceId(uintptr_t id,
                               const std::pair<int64_t, int64_t> debugger_id)
    : id(id), debugger_id(debugger_id) {}

V8StackTraceId::V8StackTraceId(uintptr_t id,
                               const std::pair<int64_t, int64_t> debugger_id,
                               bool should_pause)
    : id(id), debugger_id(debugger_id), should_pause(should_pause) {}

V8StackTraceId::V8StackTraceId(StringView json)
    : id(0), debugger_id(V8DebuggerId().pair()) {
  if (json.length() == 0) return;
  std::vector<uint8_t> cbor;
  if (json.is8Bit()) {
    ConvertJSONToCBOR(
        v8_crdtp::span<uint8_t>(json.characters8(), json.length()), &cbor);
  } else {
    ConvertJSONToCBOR(
        v8_crdtp::span<uint16_t>(json.characters16(), json.length()), &cbor);
  }
  auto dict = protocol::DictionaryValue::cast(
      protocol::Value::parseBinary(cbor.data(), cbor.size()));
  if (!dict) return;
  String16 s;
  if (!dict->getString(kId, &s)) return;
  bool isOk = false;
  int64_t parsedId = s.toInteger64(&isOk);
  if (!isOk || !parsedId) return;
  if (!dict->getString(kDebuggerId, &s)) return;
  V8DebuggerId debuggerId(s);
  if (!debuggerId.isValid()) return;
  if (!dict->getBoolean(kShouldPause, &should_pause)) return;
  id = parsedId;
  debugger_id = debuggerId.pair();
}

bool V8StackTraceId::IsInvalid() const { return !id; }

std::unique_ptr<StringBuffer> V8StackTraceId::ToString() {
  if (IsInvalid()) return nullptr;
  auto dict = protocol::DictionaryValue::create();
  dict->setString(kId, String16::fromInteger64(id));
  dict->setString(kDebuggerId, V8DebuggerId(debugger_id).toString());
  dict->setBoolean(kShouldPause, should_pause);
  std::vector<uint8_t> json;
  v8_crdtp::json::ConvertCBORToJSON(v8_crdtp::SpanFrom(dict->Serialize()),
                                    &json);
  return StringBufferFrom(std::move(json));
}

StackFrame::StackFrame(v8::Isolate* isolate, v8::Local<v8::StackFrame> v8Frame)
    : m_functionName(toProtocolString(isolate,
v8Frame->GetFunctionName())), m_scriptId(String16::fromInteger(v8Frame->GetScriptId())), m_sourceURL( toProtocolString(isolate, v8Frame->GetScriptNameOrSourceURL())), m_lineNumber(v8Frame->GetLineNumber() - 1), m_columnNumber(v8Frame->GetColumn() - 1), m_hasSourceURLComment(v8Frame->GetScriptName() != v8Frame->GetScriptNameOrSourceURL()) { DCHECK_NE(v8::Message::kNoLineNumberInfo, m_lineNumber + 1); DCHECK_NE(v8::Message::kNoColumnInfo, m_columnNumber + 1); } const String16& StackFrame::functionName() const { return m_functionName; } const String16& StackFrame::scriptId() const { return m_scriptId; } const String16& StackFrame::sourceURL() const { return m_sourceURL; } int StackFrame::lineNumber() const { return m_lineNumber; } int StackFrame::columnNumber() const { return m_columnNumber; } std::unique_ptr<protocol::Runtime::CallFrame> StackFrame::buildInspectorObject( V8InspectorClient* client) const { String16 frameUrl = m_sourceURL; if (client && !m_hasSourceURLComment && frameUrl.length() > 0) { std::unique_ptr<StringBuffer> url = client->resourceNameToUrl(toStringView(m_sourceURL)); if (url) { frameUrl = toString16(url->string()); } } return protocol::Runtime::CallFrame::create() .setFunctionName(m_functionName) .setScriptId(m_scriptId) .setUrl(frameUrl) .setLineNumber(m_lineNumber) .setColumnNumber(m_columnNumber) .build(); } bool StackFrame::isEqual(StackFrame* frame) const { return m_scriptId == frame->m_scriptId && m_lineNumber == frame->m_lineNumber && m_columnNumber == frame->m_columnNumber; } // static void V8StackTraceImpl::setCaptureStackTraceForUncaughtExceptions( v8::Isolate* isolate, bool capture) { isolate->SetCaptureStackTraceForUncaughtExceptions( capture, V8StackTraceImpl::maxCallStackSizeToCapture); } // static std::unique_ptr<V8StackTraceImpl> V8StackTraceImpl::create( V8Debugger* debugger, int contextGroupId, v8::Local<v8::StackTrace> v8StackTrace, int maxStackSize) { DCHECK(debugger); v8::Isolate* isolate = debugger->isolate(); 
v8::HandleScope scope(isolate); std::vector<std::shared_ptr<StackFrame>> frames; if (!v8StackTrace.IsEmpty() && v8StackTrace->GetFrameCount()) { frames = toFramesVector(debugger, v8StackTrace, maxStackSize); } int maxAsyncDepth = 0; std::shared_ptr<AsyncStackTrace> asyncParent; V8StackTraceId externalParent; calculateAsyncChain(debugger, contextGroupId, &asyncParent, &externalParent, &maxAsyncDepth); if (frames.empty() && !asyncParent && externalParent.IsInvalid()) return nullptr; return std::unique_ptr<V8StackTraceImpl>(new V8StackTraceImpl( std::move(frames), maxAsyncDepth, asyncParent, externalParent)); } // static std::unique_ptr<V8StackTraceImpl> V8StackTraceImpl::capture( V8Debugger* debugger, int contextGroupId, int maxStackSize) { DCHECK(debugger); v8::Isolate* isolate = debugger->isolate(); v8::HandleScope handleScope(isolate); v8::Local<v8::StackTrace> v8StackTrace; if (isolate->InContext()) { v8StackTrace = v8::StackTrace::CurrentStackTrace(isolate, maxStackSize, stackTraceOptions); } return V8StackTraceImpl::create(debugger, contextGroupId, v8StackTrace, maxStackSize); } V8StackTraceImpl::V8StackTraceImpl( std::vector<std::shared_ptr<StackFrame>> frames, int maxAsyncDepth, std::shared_ptr<AsyncStackTrace> asyncParent, const V8StackTraceId& externalParent) : m_frames(std::move(frames)), m_maxAsyncDepth(maxAsyncDepth), m_asyncParent(std::move(asyncParent)), m_externalParent(externalParent) {} V8StackTraceImpl::~V8StackTraceImpl() = default; std::unique_ptr<V8StackTrace> V8StackTraceImpl::clone() { return std::unique_ptr<V8StackTrace>(new V8StackTraceImpl( m_frames, 0, std::shared_ptr<AsyncStackTrace>(), V8StackTraceId())); } StringView V8StackTraceImpl::firstNonEmptySourceURL() const { StackFrameIterator current(this); while (!current.done()) { if (current.frame()->sourceURL().length()) { return toStringView(current.frame()->sourceURL()); } current.next(); } return StringView(); } bool V8StackTraceImpl::isEmpty() const { return m_frames.empty(); } 
StringView V8StackTraceImpl::topSourceURL() const { return toStringView(m_frames[0]->sourceURL()); } int V8StackTraceImpl::topLineNumber() const { return m_frames[0]->lineNumber() + 1; } int V8StackTraceImpl::topColumnNumber() const { return m_frames[0]->columnNumber() + 1; } StringView V8StackTraceImpl::topScriptId() const { return toStringView(m_frames[0]->scriptId()); } StringView V8StackTraceImpl::topFunctionName() const { return toStringView(m_frames[0]->functionName()); } std::unique_ptr<protocol::Runtime::StackTrace> V8StackTraceImpl::buildInspectorObjectImpl(V8Debugger* debugger) const { return buildInspectorObjectImpl(debugger, m_maxAsyncDepth); } std::unique_ptr<protocol::Runtime::StackTrace> V8StackTraceImpl::buildInspectorObjectImpl(V8Debugger* debugger, int maxAsyncDepth) const { return buildInspectorObjectCommon(debugger, m_frames, String16(), m_asyncParent.lock(), m_externalParent, maxAsyncDepth); } std::unique_ptr<protocol::Runtime::API::StackTrace> V8StackTraceImpl::buildInspectorObject() const { return buildInspectorObjectImpl(nullptr); } std::unique_ptr<protocol::Runtime::API::StackTrace> V8StackTraceImpl::buildInspectorObject(int maxAsyncDepth) const { return buildInspectorObjectImpl(nullptr, std::min(maxAsyncDepth, m_maxAsyncDepth)); } std::unique_ptr<StringBuffer> V8StackTraceImpl::toString() const { String16Builder stackTrace; for (size_t i = 0; i < m_frames.size(); ++i) { const StackFrame& frame = *m_frames[i]; stackTrace.append("\n at " + (frame.functionName().length() ? 
frame.functionName() : "(anonymous function)")); stackTrace.append(" ("); stackTrace.append(frame.sourceURL()); stackTrace.append(':'); stackTrace.append(String16::fromInteger(frame.lineNumber() + 1)); stackTrace.append(':'); stackTrace.append(String16::fromInteger(frame.columnNumber() + 1)); stackTrace.append(')'); } return StringBufferFrom(stackTrace.toString()); } bool V8StackTraceImpl::isEqualIgnoringTopFrame( V8StackTraceImpl* stackTrace) const { StackFrameIterator current(this); StackFrameIterator target(stackTrace); current.next(); target.next(); while (!current.done() && !target.done()) { if (!current.frame()->isEqual(target.frame())) { return false; } current.next(); target.next(); } return current.done() == target.done(); } V8StackTraceImpl::StackFrameIterator::StackFrameIterator( const V8StackTraceImpl* stackTrace) : m_currentIt(stackTrace->m_frames.begin()), m_currentEnd(stackTrace->m_frames.end()), m_parent(stackTrace->m_asyncParent.lock().get()) {} void V8StackTraceImpl::StackFrameIterator::next() { if (m_currentIt == m_currentEnd) return; ++m_currentIt; while (m_currentIt == m_currentEnd && m_parent) { const std::vector<std::shared_ptr<StackFrame>>& frames = m_parent->frames(); m_currentIt = frames.begin(); if (m_parent->description() == "async function") ++m_currentIt; m_currentEnd = frames.end(); m_parent = m_parent->parent().lock().get(); } } bool V8StackTraceImpl::StackFrameIterator::done() { return m_currentIt == m_currentEnd; } StackFrame* V8StackTraceImpl::StackFrameIterator::frame() { return m_currentIt->get(); } // static std::shared_ptr<AsyncStackTrace> AsyncStackTrace::capture( V8Debugger* debugger, int contextGroupId, const String16& description, int maxStackSize) { DCHECK(debugger); v8::Isolate* isolate = debugger->isolate(); v8::HandleScope handleScope(isolate); std::vector<std::shared_ptr<StackFrame>> frames; if (isolate->InContext()) { v8::Local<v8::StackTrace> v8StackTrace = v8::StackTrace::CurrentStackTrace( isolate, maxStackSize, 
stackTraceOptions); frames = toFramesVector(debugger, v8StackTrace, maxStackSize); } std::shared_ptr<AsyncStackTrace> asyncParent; V8StackTraceId externalParent; calculateAsyncChain(debugger, contextGroupId, &asyncParent, &externalParent, nullptr); if (frames.empty() && !asyncParent && externalParent.IsInvalid()) return nullptr; // When the current stack is empty and the async parent carries the same // description (or none was requested), reuse the parent directly instead of // stacking an empty trace on top of it, e.g. for a Promise ThenableJob. if (asyncParent && frames.empty() && (asyncParent->m_description == description || description.isEmpty())) { return asyncParent; } DCHECK(contextGroupId || asyncParent || !externalParent.IsInvalid()); if (!contextGroupId && asyncParent) { contextGroupId = asyncParent->m_contextGroupId; } return std::shared_ptr<AsyncStackTrace>( new AsyncStackTrace(contextGroupId, description, std::move(frames), asyncParent, externalParent)); } AsyncStackTrace::AsyncStackTrace( int contextGroupId, const String16& description, std::vector<std::shared_ptr<StackFrame>> frames, std::shared_ptr<AsyncStackTrace> asyncParent, const V8StackTraceId& externalParent) : m_contextGroupId(contextGroupId), m_id(0), m_suspendedTaskId(nullptr), m_description(description), m_frames(std::move(frames)), m_asyncParent(std::move(asyncParent)), m_externalParent(externalParent) { DCHECK(m_contextGroupId || (!externalParent.IsInvalid() && m_frames.empty())); } std::unique_ptr<protocol::Runtime::StackTrace> AsyncStackTrace::buildInspectorObject(V8Debugger* debugger, int maxAsyncDepth) const { return buildInspectorObjectCommon(debugger, m_frames, m_description, m_asyncParent.lock(), m_externalParent, maxAsyncDepth); } int AsyncStackTrace::contextGroupId() const { return m_contextGroupId; } void AsyncStackTrace::setSuspendedTaskId(void* task) { m_suspendedTaskId = task; } void* AsyncStackTrace::suspendedTaskId() const { return m_suspendedTaskId; } uintptr_t AsyncStackTrace::store(V8Debugger* debugger,
std::shared_ptr<AsyncStackTrace> stack) { if (stack->m_id) return stack->m_id; stack->m_id = debugger->storeStackTrace(stack); return stack->m_id; } const String16& AsyncStackTrace::description() const { return m_description; } std::weak_ptr<AsyncStackTrace> AsyncStackTrace::parent() const { return m_asyncParent; } bool AsyncStackTrace::isEmpty() const { return m_frames.empty(); } } // namespace v8_inspector ```
```shell #!/bin/bash # Load the RUNTIMES list and force-move the release tags to the current commit. source "${PWD}/runtimes.sh" git tag -f latest for RUNTIME in $RUNTIMES; do git tag -f "$RUNTIME" done git tag -f build for RUNTIME in $RUNTIMES; do git tag -f "build-${RUNTIME}" done ```
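For reference, `git tag -f` replaces an existing tag in place rather than failing, which is why the script can be re-run on every release. A minimal sandbox sketch of that behavior (throwaway temporary repository, hypothetical commit messages, assuming `git` is on the PATH):

```shell
set -e
# Create a throwaway repository to demonstrate that `git tag -f` moves a tag.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m first
git tag latest                 # tag the first commit
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m second
git tag -f latest              # force-move `latest` to the new HEAD
[ "$(git rev-parse latest)" = "$(git rev-parse HEAD)" ] && echo moved
```

Note that moving a tag locally does not update any remote; tags moved this way would still need to be force-pushed to take effect elsewhere.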
Dhusha, now Benighat Rorang, is a rural municipality in Dhading District in the Bagmati Zone of central Nepal. At the time of the 1991 Nepal census it had a population of 6,350. The Dhusha VDC office is located at Charaundi Bazar, a long-established starting point for white-water rafting on the Trishuli River. The bazar, which lies along the Prithvi Highway, is also the main business area for the whole VDC. Like the country as a whole, Dhusha rises from a low-altitude region to a medium-to-high-altitude one. The Charaundi Khola (Charaundi Stream) flows very close to the bazar. Dhusha is one of the few rural municipalities (VDCs) in the country that produces large quantities of vegetables all year round, mainly cabbage, brinjal, bitter gourd and tomato. The vegetables are sent to the Kalimati Vegetable Market in Kathmandu and to various other cities and towns, including Mugling, Narayanghat, Pokhara, Biratnagar and Dharan. It is also one of the gateways to Chitwan District via a walking trail. Tourism could become the VDC's next source of income, as the area is rich in natural resources. White-water rafting is the main attraction: the Trishuli River, one of the country's best-known rafting rivers, flows between Dhusha (Dhading District) and Ghaylchowk VDC (Gorkha District). The main rafting spots lie within the VDC, and several resorts aimed mainly at tourists (Royal Beach Camp, Himalika Camp/Resort, etc.) have opened in the area. The VDC is about 80 km from Kathmandu, the capital city. "Nohak Gupha" (Nohak Cave) also lies within the VDC and may be one of the longest caves in the country, though this has yet to be publicized and verified. Canyoning is another major attraction for local and foreign tourists. References Populated places in Dhading District
```java package com.yahoo.prelude.semantics.engine; import com.yahoo.search.Query; /** * A name space representing the (http) parameters following this query * * @author bratseth */ public class ParameterNameSpace extends NameSpace { public boolean matches(String term, RuleEvaluation e) { Query query = e.getEvaluation().getQuery(); String value = query.properties().getString(term); if (value == null) return false; e.setValue(value); return true; } } ```
Nordskogbygda Church () is a parish church of the Church of Norway in Elverum Municipality in Innlandet county, Norway. It is located in the village of Nordskogbygda. It is the church for the Nordskogbygda parish which is part of the Sør-Østerdal prosti (deanery) in the Diocese of Hamar. The white, wooden church was built in a long church design in 1873 using plans drawn up by the architect Otto Schønheyder. The church seats about 250 people. History In 1869, it was decided to build an annex chapel in Nordskogbygda, northeast of the town of Elverum. (Another chapel, at Sørskogbygda, was built at the same time by the same architect; the two buildings were identical except that this one was smaller.) Land was donated by the local farmer Iver Nederberg. The church was designed by Otto Schønheyder and the lead builder was Günther Schüssler. The building was constructed in 1873. It was consecrated as Nedreberg Chapel on 19 November 1873. In 1963, the chapel was remodeled using plans by Rolf Prague. At the completion of this project, it was renamed Nordskogbygda Chapel. More recently, the chapel was upgraded to the status of parish church and renamed Nordskogbygda Church. See also List of churches in Hamar References Elverum Churches in Innlandet Long churches in Norway Wooden churches in Norway 19th-century Church of Norway church buildings Churches completed in 1873 1873 establishments in Norway
```go // // This software (Documize Community Edition) is licensed under // GNU AGPL v3 path_to_url // // You can operate outside the AGPL restrictions by purchasing // Documize Enterprise Edition and obtaining a commercial license // by contacting <sales@documize.com>. // // path_to_url package store import ( "github.com/documize/community/domain" "github.com/documize/community/model/account" "github.com/documize/community/model/activity" "github.com/documize/community/model/attachment" "github.com/documize/community/model/audit" "github.com/documize/community/model/block" "github.com/documize/community/model/category" "github.com/documize/community/model/doc" "github.com/documize/community/model/group" "github.com/documize/community/model/label" "github.com/documize/community/model/link" "github.com/documize/community/model/org" "github.com/documize/community/model/page" "github.com/documize/community/model/permission" "github.com/documize/community/model/pin" "github.com/documize/community/model/search" "github.com/documize/community/model/space" "github.com/documize/community/model/user" ) // Store provides access to data store (database) type Store struct { Account AccountStorer Activity ActivityStorer Attachment AttachmentStorer Audit AuditStorer Block BlockStorer Category CategoryStorer Document DocumentStorer Group GroupStorer Link LinkStorer Label LabelStorer Meta MetaStorer Organization OrganizationStorer Page PageStorer Pin PinStorer Permission PermissionStorer Search SearchStorer Setting SettingStorer Space SpaceStorer User UserStorer Onboard OnboardStorer } // SpaceStorer defines required methods for space management type SpaceStorer interface { Add(ctx domain.RequestContext, sp space.Space) (err error) Get(ctx domain.RequestContext, id string) (sp space.Space, err error) PublicSpaces(ctx domain.RequestContext, orgID string) (sp []space.Space, err error) GetViewable(ctx domain.RequestContext) (sp []space.Space, err error) Update(ctx 
domain.RequestContext, sp space.Space) (err error) Delete(ctx domain.RequestContext, id string) (rows int64, err error) AdminList(ctx domain.RequestContext) (sp []space.Space, err error) SetStats(ctx domain.RequestContext, spaceID string) (err error) } // CategoryStorer defines required methods for category and category membership management type CategoryStorer interface { Add(ctx domain.RequestContext, c category.Category) (err error) Update(ctx domain.RequestContext, c category.Category) (err error) Get(ctx domain.RequestContext, id string) (c category.Category, err error) GetBySpace(ctx domain.RequestContext, spaceID string) (c []category.Category, err error) GetAllBySpace(ctx domain.RequestContext, spaceID string) (c []category.Category, err error) GetSpaceCategorySummary(ctx domain.RequestContext, spaceID string) (c []category.SummaryModel, err error) Delete(ctx domain.RequestContext, id string) (rows int64, err error) AssociateDocument(ctx domain.RequestContext, m category.Member) (err error) DisassociateDocument(ctx domain.RequestContext, categoryID, documentID string) (rows int64, err error) RemoveCategoryMembership(ctx domain.RequestContext, categoryID string) (rows int64, err error) DeleteBySpace(ctx domain.RequestContext, spaceID string) (rows int64, err error) GetDocumentCategoryMembership(ctx domain.RequestContext, documentID string) (c []category.Category, err error) GetSpaceCategoryMembership(ctx domain.RequestContext, spaceID string) (c []category.Member, err error) RemoveDocumentCategories(ctx domain.RequestContext, documentID string) (rows int64, err error) RemoveSpaceCategoryMemberships(ctx domain.RequestContext, spaceID string) (rows int64, err error) GetByOrg(ctx domain.RequestContext, userID string) (c []category.Category, err error) GetOrgCategoryMembership(ctx domain.RequestContext, userID string) (c []category.Member, err error) } // PermissionStorer defines required methods for space/document permission management type PermissionStorer 
interface { AddPermission(ctx domain.RequestContext, r permission.Permission) (err error) AddPermissions(ctx domain.RequestContext, r permission.Permission, actions ...permission.Action) (err error) GetUserSpacePermissions(ctx domain.RequestContext, spaceID string) (r []permission.Permission, err error) GetSpacePermissionsForUser(ctx domain.RequestContext, spaceID, userID string) (r []permission.Permission, err error) GetSpacePermissions(ctx domain.RequestContext, spaceID string) (r []permission.Permission, err error) GetCategoryPermissions(ctx domain.RequestContext, catID string) (r []permission.Permission, err error) GetCategoryUsers(ctx domain.RequestContext, catID string) (u []user.User, err error) GetUserCategoryPermissions(ctx domain.RequestContext, userID string) (r []permission.Permission, err error) GetUserDocumentPermissions(ctx domain.RequestContext, documentID string) (r []permission.Permission, err error) GetDocumentPermissions(ctx domain.RequestContext, documentID string) (r []permission.Permission, err error) DeleteDocumentPermissions(ctx domain.RequestContext, documentID string) (rows int64, err error) DeleteSpacePermissions(ctx domain.RequestContext, spaceID string) (rows int64, err error) DeleteUserSpacePermissions(ctx domain.RequestContext, spaceID, userID string) (rows int64, err error) DeleteUserPermissions(ctx domain.RequestContext, userID string) (rows int64, err error) DeleteCategoryPermissions(ctx domain.RequestContext, categoryID string) (rows int64, err error) DeleteSpaceCategoryPermissions(ctx domain.RequestContext, spaceID string) (rows int64, err error) DeleteGroupPermissions(ctx domain.RequestContext, groupID string) (rows int64, err error) } // UserStorer defines required methods for user management type UserStorer interface { Add(ctx domain.RequestContext, u user.User) (err error) Get(ctx domain.RequestContext, id string) (u user.User, err error) GetByDomain(ctx domain.RequestContext, domain, email string) (u user.User, err error) 
GetByEmail(ctx domain.RequestContext, email string) (u user.User, err error) GetByToken(ctx domain.RequestContext, token string) (u user.User, err error) GetBySerial(ctx domain.RequestContext, serial string) (u user.User, err error) GetActiveUsersForOrganization(ctx domain.RequestContext) (u []user.User, err error) GetUsersForOrganization(ctx domain.RequestContext, filter string, limit int) (u []user.User, err error) GetSpaceUsers(ctx domain.RequestContext, spaceID string) (u []user.User, err error) GetUsersForSpaces(ctx domain.RequestContext, spaces []string) (u []user.User, err error) UpdateUser(ctx domain.RequestContext, u user.User) (err error) UpdateUserPassword(ctx domain.RequestContext, userID, salt, password string) (err error) DeactiveUser(ctx domain.RequestContext, userID string) (err error) ForgotUserPassword(ctx domain.RequestContext, email, token string) (err error) CountActiveUsers() (c []domain.SubscriptionUserAccount) MatchUsers(ctx domain.RequestContext, text string, maxMatches int) (u []user.User, err error) } // AccountStorer defines required methods for account management type AccountStorer interface { Add(ctx domain.RequestContext, account account.Account) (err error) GetUserAccount(ctx domain.RequestContext, userID string) (account account.Account, err error) GetUserAccounts(ctx domain.RequestContext, userID string) (t []account.Account, err error) GetAccountsByOrg(ctx domain.RequestContext) (t []account.Account, err error) DeleteAccount(ctx domain.RequestContext, ID string) (rows int64, err error) UpdateAccount(ctx domain.RequestContext, account account.Account) (err error) HasOrgAccount(ctx domain.RequestContext, orgID, userID string) bool CountOrgAccounts(ctx domain.RequestContext) int } // OrganizationStorer defines required methods for organization management type OrganizationStorer interface { AddOrganization(ctx domain.RequestContext, org org.Organization) error GetOrganization(ctx domain.RequestContext, id string) (org 
org.Organization, err error) GetOrganizationByDomain(subdomain string) (org org.Organization, err error) UpdateOrganization(ctx domain.RequestContext, org org.Organization) (err error) DeleteOrganization(ctx domain.RequestContext, orgID string) (rows int64, err error) RemoveOrganization(ctx domain.RequestContext, orgID string) (err error) UpdateAuthConfig(ctx domain.RequestContext, org org.Organization) (err error) CheckDomain(ctx domain.RequestContext, domain string) string Logo(ctx domain.RequestContext, domain string) (l []byte, err error) UploadLogo(ctx domain.RequestContext, l []byte) (err error) } // PinStorer defines required methods for pin management type PinStorer interface { Add(ctx domain.RequestContext, pin pin.Pin) (err error) GetPin(ctx domain.RequestContext, id string) (pin pin.Pin, err error) GetUserPins(ctx domain.RequestContext, userID string) (pins []pin.Pin, err error) UpdatePin(ctx domain.RequestContext, pin pin.Pin) (err error) UpdatePinSequence(ctx domain.RequestContext, pinID string, sequence int) (err error) DeletePin(ctx domain.RequestContext, id string) (rows int64, err error) DeletePinnedSpace(ctx domain.RequestContext, spaceID string) (rows int64, err error) DeletePinnedDocument(ctx domain.RequestContext, documentID string) (rows int64, err error) } // AuditStorer defines required methods for audit trails type AuditStorer interface { // Record logs audit entry using own DB Transaction Record(ctx domain.RequestContext, t audit.EventType) } // DocumentStorer defines required methods for document handling type DocumentStorer interface { Add(ctx domain.RequestContext, document doc.Document) (err error) Get(ctx domain.RequestContext, id string) (document doc.Document, err error) GetBySpace(ctx domain.RequestContext, spaceID string) (documents []doc.Document, err error) TemplatesBySpace(ctx domain.RequestContext, spaceID string) (documents []doc.Document, err error) PublicDocuments(ctx domain.RequestContext, orgID string) (documents 
[]doc.SitemapDocument, err error) Update(ctx domain.RequestContext, document doc.Document) (err error) UpdateRevised(ctx domain.RequestContext, docID string) (err error) UpdateGroup(ctx domain.RequestContext, document doc.Document) (err error) ChangeDocumentSpace(ctx domain.RequestContext, document, space string) (err error) MoveDocumentSpace(ctx domain.RequestContext, id, move string) (err error) Delete(ctx domain.RequestContext, documentID string) (rows int64, err error) DeleteBySpace(ctx domain.RequestContext, spaceID string) (rows int64, err error) GetVersions(ctx domain.RequestContext, groupID string) (v []doc.Version, err error) MoveActivity(ctx domain.RequestContext, documentID, oldSpaceID, newSpaceID string) (err error) Pin(ctx domain.RequestContext, documentID string, seq int) (err error) Unpin(ctx domain.RequestContext, documentID string) (err error) PinSequence(ctx domain.RequestContext, spaceID string) (max int, err error) Pinned(ctx domain.RequestContext, spaceID string) (d []doc.Document, err error) } // SettingStorer defines required methods for persisting global and user level settings type SettingStorer interface { Get(area, path string) (val string, err error) Set(area, value string) error GetUser(orgID, userID, area, path string) (val string, err error) SetUser(orgID, userID, area, json string) error } // AttachmentStorer defines required methods for persisting document attachments type AttachmentStorer interface { Add(ctx domain.RequestContext, a attachment.Attachment) (err error) GetAttachment(ctx domain.RequestContext, orgID, attachmentID string) (a attachment.Attachment, err error) GetAttachments(ctx domain.RequestContext, docID string) (a []attachment.Attachment, err error) GetSectionAttachments(ctx domain.RequestContext, sectionID string) (a []attachment.Attachment, err error) GetAttachmentsWithData(ctx domain.RequestContext, docID string) (a []attachment.Attachment, err error) Delete(ctx domain.RequestContext, id string) (rows int64, err 
error) DeleteSection(ctx domain.RequestContext, id string) (rows int64, err error) } // LinkStorer defines required methods for persisting content links type LinkStorer interface { Add(ctx domain.RequestContext, l link.Link) (err error) SearchCandidates(ctx domain.RequestContext, keywords string) (docs []link.Candidate, pages []link.Candidate, attachments []link.Candidate, err error) GetLink(ctx domain.RequestContext, linkID string) (l link.Link, err error) GetDocumentOutboundLinks(ctx domain.RequestContext, documentID string) (links []link.Link, err error) GetPageLinks(ctx domain.RequestContext, documentID, pageID string) (links []link.Link, err error) MarkOrphanDocumentLink(ctx domain.RequestContext, documentID string) (err error) MarkOrphanPageLink(ctx domain.RequestContext, pageID string) (err error) MarkOrphanAttachmentLink(ctx domain.RequestContext, attachmentID string) (err error) DeleteSourcePageLinks(ctx domain.RequestContext, pageID string) (rows int64, err error) DeleteSourceDocumentLinks(ctx domain.RequestContext, documentID string) (rows int64, err error) DeleteLink(ctx domain.RequestContext, id string) (rows int64, err error) } // ActivityStorer defines required methods for persisting document activity type ActivityStorer interface { RecordUserActivity(ctx domain.RequestContext, activity activity.UserActivity) GetDocumentActivity(ctx domain.RequestContext, id string) (a []activity.DocumentActivity, err error) DeleteDocumentChangeActivity(ctx domain.RequestContext, id string) (rows int64, err error) } // SearchStorer defines required methods for persisting search queries type SearchStorer interface { IndexDocument(ctx domain.RequestContext, doc doc.Document, a []attachment.Attachment) (err error) DeleteDocument(ctx domain.RequestContext, ID string) (err error) IndexContent(ctx domain.RequestContext, p page.Page) (err error) DeleteContent(ctx domain.RequestContext, pageID string) (err error) Documents(ctx domain.RequestContext, q search.QueryOptions) 
	(results []search.QueryResult, err error)
}

// Indexer defines required methods for managing search indexing process
type Indexer interface {
	IndexDocument(ctx domain.RequestContext, d doc.Document, a []attachment.Attachment)
	DeleteDocument(ctx domain.RequestContext, ID string)
	IndexContent(ctx domain.RequestContext, p page.Page)
	DeleteContent(ctx domain.RequestContext, pageID string)
}

// BlockStorer defines required methods for persisting reusable content blocks
type BlockStorer interface {
	Add(ctx domain.RequestContext, b block.Block) (err error)
	Get(ctx domain.RequestContext, id string) (b block.Block, err error)
	GetBySpace(ctx domain.RequestContext, spaceID string) (b []block.Block, err error)
	IncrementUsage(ctx domain.RequestContext, id string) (err error)
	DecrementUsage(ctx domain.RequestContext, id string) (err error)
	RemoveReference(ctx domain.RequestContext, id string) (err error)
	Update(ctx domain.RequestContext, b block.Block) (err error)
	Delete(ctx domain.RequestContext, id string) (rows int64, err error)
}

// PageStorer defines required methods for persisting document pages
type PageStorer interface {
	Add(ctx domain.RequestContext, model page.NewPage) (err error)
	Get(ctx domain.RequestContext, pageID string) (p page.Page, err error)
	GetPages(ctx domain.RequestContext, documentID string) (p []page.Page, err error)
	GetUnpublishedPages(ctx domain.RequestContext, documentID string) (p []page.Page, err error)
	GetPagesWithoutContent(ctx domain.RequestContext, documentID string) (pages []page.Page, err error)
	Update(ctx domain.RequestContext, page page.Page, refID, userID string, skipRevision bool) (err error)
	Delete(ctx domain.RequestContext, documentID, pageID string) (rows int64, err error)
	GetPageMeta(ctx domain.RequestContext, pageID string) (meta page.Meta, err error)
	GetDocumentPageMeta(ctx domain.RequestContext, documentID string, externalSourceOnly bool) (meta []page.Meta, err error)
	UpdateMeta(ctx domain.RequestContext, meta page.Meta, updateUserID bool) (err error)
	UpdateSequence(ctx domain.RequestContext, documentID, pageID string, sequence float64) (err error)
	UpdateLevel(ctx domain.RequestContext, documentID, pageID string, level int) (err error)
	UpdateLevelSequence(ctx domain.RequestContext, documentID, pageID string, level int, sequence float64) (err error)
	GetNextPageSequence(ctx domain.RequestContext, documentID string) (maxSeq float64, err error)
	GetPageRevision(ctx domain.RequestContext, revisionID string) (revision page.Revision, err error)
	GetPageRevisions(ctx domain.RequestContext, pageID string) (revisions []page.Revision, err error)
	GetDocumentRevisions(ctx domain.RequestContext, documentID string) (revisions []page.Revision, err error)
	DeletePageRevisions(ctx domain.RequestContext, pageID string) (rows int64, err error)
}

// GroupStorer defines required methods for persisting user groups and memberships
type GroupStorer interface {
	Add(ctx domain.RequestContext, g group.Group) (err error)
	Get(ctx domain.RequestContext, refID string) (g group.Group, err error)
	GetAll(ctx domain.RequestContext) (g []group.Group, err error)
	Update(ctx domain.RequestContext, g group.Group) (err error)
	Delete(ctx domain.RequestContext, refID string) (rows int64, err error)
	GetGroupMembers(ctx domain.RequestContext, groupID string) (m []group.Member, err error)
	GetMembers(ctx domain.RequestContext) (r []group.Record, err error)
	JoinGroup(ctx domain.RequestContext, groupID, userID string) (err error)
	LeaveGroup(ctx domain.RequestContext, groupID, userID string) (err error)
	RemoveUserGroups(ctx domain.RequestContext, userID string) (err error)
}

// MetaStorer provides specialist methods for global administrators.
type MetaStorer interface {
	Documents(ctx domain.RequestContext) (documents []string, err error)
	Document(ctx domain.RequestContext, documentID string) (d doc.Document, err error)
	Pages(ctx domain.RequestContext, documentID string) (p []page.Page, err error)
	Attachments(ctx domain.RequestContext, docID string) (a []attachment.Attachment, err error)
	SearchIndexCount(ctx domain.RequestContext) (c int, err error)
}

// LabelStorer defines required methods for space label management
type LabelStorer interface {
	Add(ctx domain.RequestContext, l label.Label) (err error)
	Get(ctx domain.RequestContext) (l []label.Label, err error)
	Update(ctx domain.RequestContext, l label.Label) (err error)
	Delete(ctx domain.RequestContext, id string) (rows int64, err error)
	RemoveReference(ctx domain.RequestContext, labelID string) (err error)
}

// OnboardStorer defines required methods for enterprise customer onboarding process.
type OnboardStorer interface {
	ContentCounts(orgID string) (spaces, docs int)
}
```
```html <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII"> <title>serial_port_base::stop_bits::value</title> <link rel="stylesheet" href="../../../../../doc/src/boostbook.css" type="text/css"> <meta name="generator" content="DocBook XSL Stylesheets V1.79.1"> <link rel="home" href="../../../boost_asio.html" title="Boost.Asio"> <link rel="up" href="../serial_port_base__stop_bits.html" title="serial_port_base::stop_bits"> <link rel="prev" href="type.html" title="serial_port_base::stop_bits::type"> <link rel="next" href="../service_already_exists.html" title="service_already_exists"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <table cellpadding="2" width="100%"><tr> <td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../boost.png"></td> <td align="center"><a href="../../../../../index.html">Home</a></td> <td align="center"><a href="../../../../../libs/libraries.htm">Libraries</a></td> <td align="center"><a href="path_to_url">People</a></td> <td align="center"><a href="path_to_url">FAQ</a></td> <td align="center"><a href="../../../../../more/index.htm">More</a></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="type.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../serial_port_base__stop_bits.html"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../boost_asio.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="../service_already_exists.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a> </div> <div class="section"> <div class="titlepage"><div><div><h4 class="title"> <a name="boost_asio.reference.serial_port_base__stop_bits.value"></a><a class="link" href="value.html" title="serial_port_base::stop_bits::value">serial_port_base::stop_bits::value</a> </h4></div></div></div> <p> <a 
class="indexterm" name="boost_asio.indexterm.serial_port_base__stop_bits.value"></a> </p> <pre class="programlisting">type value() const; </pre> </div> <table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr> <td align="left"></td> <td align="right"><div class="copyright-footer"><p>Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at <a href="path_to_url" target="_top">path_to_url</a>) </p> </div></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="type.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../serial_port_base__stop_bits.html"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../boost_asio.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="../service_already_exists.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a> </div> </body> </html> ```
Divisive may refer to: divisive clustering, which is a type of hierarchical clustering divisive rhythm Divide and rule Divisive, a 2022 studio album by Disturbed
Glycogen storage disease type I (GSD I) is an inherited disease that prevents the liver from properly breaking down stored glycogen, a process necessary to maintain adequate blood sugar levels. GSD I is divided into two main types, GSD Ia and GSD Ib, which differ in cause, presentation, and treatment. Rarer subtypes affecting the translocases for inorganic phosphate (GSD Ic) or glucose (GSD Id) have also been proposed; however, a recent study suggests that the biochemical assays used to differentiate GSD Ic and GSD Id from GSD Ib are not reliable, so these cases may in fact be GSD Ib. GSD Ia is caused by a deficiency in the enzyme glucose-6-phosphatase; GSD Ib, by a deficiency in the transport protein glucose-6-phosphate translocase. Because glycogenolysis is the principal metabolic mechanism by which the liver supplies glucose to the body during fasting, both deficiencies cause severe hypoglycemia and, over time, excess glycogen storage in the liver and (in some cases) the kidneys. Because of the glycogen buildup, GSD I patients typically present with enlarged livers from non-alcoholic fatty liver disease. Other functions of the liver and kidneys are initially intact in GSD I, but are susceptible to later complications. Without proper treatment, GSD I causes chronic low blood sugar, which can lead to excess lactic acid and abnormally high lipids in the blood, among other problems. Frequent feedings of cornstarch or other carbohydrates are the principal treatment for all forms of GSD I. GSD Ib also features chronic neutropenia due to a dysfunction in the production of neutrophils in the bone marrow. This immunodeficiency, if untreated, makes GSD Ib patients susceptible to infection. The principal treatment for this feature of GSD Ib is filgrastim; however, patients often still require treatment for frequent infections, and a chronically enlarged spleen is a common side effect. GSD Ib patients often present with inflammatory bowel disease.
It is the most common of the glycogen storage diseases. GSD I has an incidence of approximately 1 in 100,000 births in the American population, and approximately 1 in 20,000 births among Ashkenazi Jews. The disease was named after German doctor Edgar von Gierke, who first described it in 1929.

Signs and symptoms

Early research into GSD I identified numerous clinical manifestations falsely thought to be primary features of the genetic disorder. However, continuing research has revealed that these clinical features are the consequences of only one (in GSD Ia) or two (in GSD Ib) fundamental abnormalities:

- impairment in the liver's ability to convert stored glycogen into glucose through glycogenolysis
- in GSD Ib, impairment of the neutrophil's ability to take up glucose, resulting in neutrophil dysfunction and neutropenia

These fundamental abnormalities give rise to a small number of primary clinical manifestations, which are the features considered in diagnosis of GSD I:

- low blood sugar (hypoglycemia), due to impairment of glycogen breakdown (glycogenolysis) causing insufficient fasting blood glucose
- hepatomegaly of non-alcoholic fatty liver disease, due to impairment of glycogenolysis causing glycogen accumulation in the liver
- in GSD Ib, increased infection risk, due to neutropenia and neutrophil dysfunction

Affected people commonly present with secondary clinical manifestations, linked to one or more of the primary clinical manifestations:

- high levels of uric acid in the blood and attendant risk of gout or kidney damage, caused by low serum insulin levels in prolonged hypoglycemia
- high levels of lactic acid in the blood, in extreme cases leading to lactic acidosis, caused by prolonged hypoglycemia
- hepatic adenomas developing in adulthood and attendant risk of anemia, suspected to be caused by blood glucose dysregulation in the presence of non-alcoholic fatty liver disease
- in GSD Ib, inflammatory bowel disease and attendant risk of anemia, caused by neutrophil dysfunction and exacerbated by the increased carbohydrate intake required to prevent hypoglycemia

In addition, there are several clinical manifestations that often result from the treatment of the primary clinical manifestations:

- pancreatic hypertrophy, due to increased carbohydrate intake causing frequent engagement of the insulin response
- in GSD Ib, splenomegaly, due to the long-term use of filgrastim to treat neutropenia causing sequestration of blood factors in the spleen
- in GSD Ib, an abnormally low number of platelets in the blood, due to long-term use of filgrastim causing sequestration of platelets in the spleen
- in GSD Ib, anemia, due to long-term use of filgrastim causing sequestration of hemoglobin in the spleen, potentially exacerbated by uncontrolled inflammatory bowel disease

Hypoglycemia

Low blood sugar (hypoglycemia) is the primary clinical symptom common to both GSD Ia and GSD Ib and most often prompts initial diagnosis of the disease. During fetal development in utero, maternal glucose transferred across the placenta prevents hypoglycemia. However, after birth, the inability to maintain blood glucose from stored glycogen in the liver causes measurable hypoglycemia within 1–2 hours of a feeding. Without proper dietary treatment after birth, prolonged hypoglycemia often leads to sudden lactic acidosis that can induce primary respiratory distress in the newborn period, as well as ketoacidosis. Neurological manifestations of hypoglycemia are less severe in GSD I than in other hypoglycemic conditions: rather than acute hypoglycemia, GSD I patients experience persistent mild hypoglycemia. The diminished likelihood of neurological manifestations is due to the habituation of the brain to mild hypoglycemia; given the reduced blood glucose level, the brain adapts to using alternative fuels like lactate. These gradual metabolic adaptations during infancy make severe symptoms like unconsciousness or seizure uncommon before diagnosis.
In the early weeks of life, undiagnosed infants with GSD I tolerate persistent hypoglycemia and compensated lactic acidosis between feedings without symptoms. Without consistent carbohydrate feeding, infant blood glucose levels typically measure between 25 and 50 mg/dL (1.4 to 2.8 mmol/L). After weeks to months without treatment with consistent oral carbohydrates, infants will progress to show clear symptoms of hypoglycemia and lactic acidosis. Infants may present with paleness, clamminess, irritability, respiratory distress, and an inability to sleep through the night even in the second year of life. Developmental delay is not an intrinsic effect of GSD I, but is common if the diagnosis is not made in early infancy.

Genetics

GSD I is inherited in an autosomal recessive manner. People with one copy of the faulty gene are carriers of the disease and have no symptoms. As with other autosomal recessive diseases, each child born to two carriers of the disease has a 25% chance of inheriting both copies of the faulty gene and manifesting the disease. Unaffected parents of a child with GSD I can be assumed to be carriers. Prenatal diagnosis has been made by fetal liver biopsy at 18–22 weeks of gestation, but no fetal treatment has been proposed. Prenatal diagnosis is possible with fetal DNA obtained by chorionic villus sampling when a fetus is known to be at risk. The most common forms of GSD I are designated GSD Ia and GSD Ib, the former accounting for over 80% of diagnosed cases and the latter for less than 20%. A few rarer forms have been described. GSD Ia results from mutations of G6PC, the gene for glucose-6-phosphatase, located on chromosome 17q21. GSD Ib results from mutations of the gene for SLC37A4 or "G6PT1", the glucose-6-phosphate transporter. GSD Ic results from mutations of SLC17A3 or SLC37A4. Glucose-6-phosphatase is an enzyme located on the inner membrane of the endoplasmic reticulum.
The catalytic unit is associated with a calcium-binding protein and three transport proteins (T1, T2, T3) that facilitate movement of glucose-6-phosphate (G6P), phosphate, and glucose (respectively) into and out of the enzyme.

Pathophysiology

Normal carbohydrate balance and maintenance of blood glucose levels

Glycogen in the liver and (to a lesser degree) the kidneys serves as a form of stored, rapidly accessible glucose, so that the blood glucose level can be maintained between meals. For about 3 hours after a carbohydrate-containing meal, high insulin levels direct liver cells to take glucose from the blood, convert it to glucose-6-phosphate (G6P) with the enzyme glucokinase, and add the G6P molecules to the ends of chains of glycogen (glycogen synthesis). Excess G6P is also shunted into production of triglycerides and exported for storage in adipose tissue as fat. When digestion of a meal is complete, insulin levels fall, and enzyme systems in the liver cells begin to remove glucose molecules from strands of glycogen in the form of G6P. This process is termed glycogenolysis. The G6P remains within the liver cell unless the phosphate is cleaved by glucose-6-phosphatase. This dephosphorylation reaction produces free glucose and free phosphate anions. The free glucose molecules can be transported out of the liver cells into the blood to maintain an adequate supply of glucose to the brain and other organs of the body. Glycogenolysis can supply the glucose needs of an adult body for 12–18 hours. When fasting continues for more than a few hours, falling insulin levels permit catabolism of muscle protein and of triglycerides from adipose tissue. The products of these processes are amino acids (mainly alanine), free fatty acids, and lactic acid. Free fatty acids from triglycerides are converted to ketones and to acetyl-CoA. Amino acids and lactic acid are used to synthesize new G6P in liver cells by the process of gluconeogenesis.
The last step of normal gluconeogenesis, like the last step of glycogenolysis, is the dephosphorylation of G6P by glucose-6-phosphatase to free glucose and phosphate. Thus glucose-6-phosphatase mediates the final, key step in both of the two main processes of glucose production during fasting. The effect is amplified because the resulting high levels of glucose-6-phosphate inhibit earlier key steps in both glycogenolysis and gluconeogenesis.

Pathophysiology

The principal metabolic effects of deficiency of glucose-6-phosphatase are hypoglycemia, lactic acidosis, hypertriglyceridemia, and hyperuricemia. The hypoglycemia of GSD I is termed "fasting" or "post-absorptive" hypoglycemia, usually arising about 4 hours after the complete digestion of a meal. This inability to maintain adequate blood glucose levels during fasting results from the combined impairment of both glycogenolysis and gluconeogenesis. Fasting hypoglycemia is often the most significant problem in GSD I, and typically the problem that leads to the diagnosis. Chronic hypoglycemia produces secondary metabolic adaptations, including chronically low insulin levels and high levels of glucagon and cortisol. Lactic acidosis arises from impairment of gluconeogenesis. Lactic acid is generated in both the liver and muscle; it is normally oxidized by NAD+ to pyruvic acid and then converted via the gluconeogenic pathway to G6P, but accumulation of G6P inhibits this conversion of lactate to pyruvate. The lactic acid level rises during fasting as glucose falls. In people with GSD I, it may not fall entirely to normal even when normal glucose levels are restored. Hypertriglyceridemia resulting from amplified triglyceride production is another indirect effect of impaired gluconeogenesis, amplified by chronically low insulin levels. During fasting, the normal conversion of triglycerides to free fatty acids, ketones, and ultimately acetyl-CoA is impaired.
Triglyceride levels in GSD I can reach several times normal and serve as a clinical index of "metabolic control". Hyperuricemia results from a combination of increased generation and decreased excretion of uric acid, which is generated when increased amounts of G6P are metabolized via the pentose phosphate pathway. It is also a byproduct of purine degradation. Uric acid competes with lactic acid and other organic acids for renal excretion in the urine. In GSD I, increased availability of G6P for the pentose phosphate pathway, increased rates of catabolism, and diminished urinary excretion due to high levels of lactic acid all combine to produce uric acid levels several times normal. Although hyperuricemia is asymptomatic for years, kidney and joint damage gradually accrue.

Elevated lactate and lactic acidosis

High levels of lactic acid in the blood are observed in all people with GSD I, due to impaired gluconeogenesis. Baseline elevations generally range from 4 to 10 mmol/L, which typically causes no clinical impact. However, during and after an episode of low blood sugar, lactate levels abruptly rise to exceed 15 mmol/L, the threshold for lactic acidosis. Symptoms of lactic acidosis include vomiting and hyperpnea, both of which can exacerbate hypoglycemia in the setting of GSD I. In cases of acute lactic acidosis, patients need emergency care to stabilize blood oxygen and restore blood glucose. Proper identification of lactic acidosis in undiagnosed children presents a challenge, since the first symptoms are typically vomiting and dehydration, both of which mimic childhood infections like gastroenteritis or pneumonia. Moreover, both of these common infections can precipitate more severe hypoglycemia in undiagnosed children, making diagnosis of the underlying cause difficult. As elevated lactate persists, uric acid, ketoacids, and free fatty acids further increase the anion gap.
In adults and children, the high concentrations of lactate cause significant discomfort in the muscles. This discomfort is an amplified form of the burning sensation a runner may feel in the quadriceps after sprinting, which is caused by a brief buildup of lactic acid. Proper control of hypoglycemia in GSD I eliminates the possibility of lactic acidosis.

Elevated urate and complications

High levels of uric acid often present as a consequence of elevated lactic acid in GSD I patients. When lactate levels are elevated, blood-borne lactic acid competes for the same kidney tubular transport mechanism as urate, limiting the rate at which urate can be cleared by the kidneys into the urine. If present, increased purine catabolism is an additional contributing factor. Uric acid levels of 6 to 12 mg/dL (530 to 1060 μmol/L) are common among GSD I patients if the disease is not properly treated. In some affected people, the medication allopurinol is necessary to lower blood urate levels. Consequences of hyperuricemia among GSD I patients include the development of kidney stones and the accumulation of uric acid crystals in joints, leading to kidney disease and gout, respectively.

Hyperlipidemia and plasma effects

Elevated triglycerides in GSD I result from low serum insulin in patients with frequent prolonged hypoglycemia. They may also be caused by intracellular accumulation of glucose-6-phosphate with secondary shunting to pyruvate, which is converted into acetyl-CoA and transported to the cytosol, where the synthesis of fatty acids and cholesterol occurs. Triglycerides above the 3.4 mmol/L (300 mg/dL) range may produce visible lipemia and even a mild pseudohyponatremia due to a reduced aqueous fraction of the blood plasma. In GSD I, cholesterol is typically only mildly elevated compared to other lipids.

Hepatomegaly

Impairment in the liver's ability to perform glycogenolysis leads to clinically apparent hepatomegaly.
Without this process, the body is unable to liberate glycogen from the liver and convert it into blood glucose, leading to an accumulation of stored glycogen in the liver. Hepatomegaly from the accumulation of stored glycogen in the liver is considered a form of non-alcoholic fatty liver disease. GSD I patients present with a degree of hepatomegaly throughout life, but the severity often relates to the consumption of excess dietary carbohydrate. Reductions in the mass of the liver are possible, since most patients retain residual hepatic function that allows for the liberation of stored glycogen at a limited rate. GSD I patients often present with hepatomegaly from the time of birth. In fetal development, maternal glucose transferred to the fetus prevents hypoglycemia, but the storage of glucose as glycogen in the liver leads to hepatomegaly. There is no evidence that this hepatomegaly presents any risk to proper fetal development. Hepatomegaly in GSD I generally occurs without sympathetic enlargement of the spleen. GSD Ib patients may present with splenomegaly, but this is connected to the use of filgrastim to treat neutropenia in this subtype, not to comorbid hepatomegaly. Hepatomegaly will persist to some degree throughout life, often causing the abdomen to protrude, and in severe cases may be palpable at or below the navel. In GSD-related non-alcoholic fatty liver disease, hepatic function is usually spared, with liver enzymes and bilirubin remaining within the normal range. However, liver function may be affected by other hepatic complications in adulthood, including the development of hepatic adenomas.

Hepatic adenomas

The specific etiology of hepatic adenomas in GSD I remains unknown, despite ongoing research. The typical GSD I patient presenting with at least one adenoma is an adult, though lesions have been observed in patients as young as fourteen. Adenomas, composed of heterogeneous neoplasms, may occur individually or in multiples.
Estimates of the rate of conversion of a hepatocellular adenoma into hepatocellular carcinoma in GSD I range from 0% to 11%, with the latter figure representing more recent research. One reason for the increasing estimate is the growing population of GSD I patients surviving into adulthood, when most adenomas develop. Treatment standards dictate regular observation of the liver by MRI or CT scan to monitor for structural abnormalities. Hepatic adenomas may be misidentified as focal nodular hyperplasia in diagnostic imaging, though this condition is rare. However, hepatic adenomas in GSD I uniquely involve diffuse Mallory hyaline deposition, which is otherwise commonly observed in focal nodular hyperplasia. Unlike common hepatic adenomas related to oral contraception, hemorrhaging in GSD I patients is rare. While the reason for the high prevalence of adenomas in GSD I is unclear, research since the 1970s has implicated serum glucagon as a potential driver. In studies, patients who have been put on a dietary regimen to keep blood sugar in a normal range of 72 to 108 mg/dL (4.0 to 6.0 mmol/L) have shown a decreased likelihood of developing adenomas. Moreover, patients with well-controlled blood glucose have consistently seen a reduction in the size and number of hepatic adenomas, suggesting that adenomas may be caused by imbalances of hepatotropic agents like serum insulin and especially serum glucagon in the liver.

Osteopenia

Patients with GSD I will often develop osteopenia. The specific etiology of low bone mineral density in GSD is not known, though it is strongly associated with poor metabolic control. Osteopenia may be directly caused by hypoglycemia, or by the resulting endocrine and metabolic sequelae. Improvements in metabolic control have consistently been shown to prevent or reverse clinically relevant osteopenia in GSD I patients.
In cases where osteopenia progresses with age, the loss of bone mineral density is typically more severe in the ribs than in the vertebrae. In some cases the bone mineral density T-score will drop below -2.5, indicating osteoporosis. There is some evidence that osteopenia may be connected with associated kidney abnormalities in GSD I, particularly glomerular hyperfiltration. The condition also seems responsive to calcium supplementation. In many cases bone mineral density can increase and return to the normal range given proper metabolic control and calcium supplementation alone, reversing osteopenia.

Kidney effects

The kidneys are usually 10 to 20% enlarged with stored glycogen. In adults with GSD I, chronic glomerular damage similar to diabetic nephropathy may lead to kidney failure. GSD I may present with various kidney complications. Renal tubular abnormalities related to hyperlactatemia are seen early in life, likely because prolonged lactic acidosis is more likely to occur in childhood. This will often present as Fanconi syndrome with multiple derangements of renal tubular reabsorption, including tubular acidosis with bicarbonate and phosphate wasting. These tubular abnormalities in GSD I are typically detected and monitored by urinary calcium. Long term, these derangements can exacerbate uric acid nephropathy, otherwise driven by hyperlactatemia. In adolescence and beyond, glomerular disease may independently develop, initially presenting as glomerular hyperfiltration indicated by an elevated eGFR.

Splenomegaly

Enlargement of the spleen (splenomegaly) is common in GSD I and has two primary causes. In GSD Ia, splenomegaly may be caused by a relation between the liver and the spleen whereby one grows or shrinks, to a lesser degree, to match the relative size of the other. In GSD Ib, it is a side effect of the use of filgrastim to treat neutropenia.
Bowel effects

Intestinal involvement can cause mild malabsorption with greasy stools (steatorrhea), but usually requires no treatment.

Infection risk

Neutropenia is a distinguishing feature of GSD Ib, absent in GSD Ia. The cause of the neutropenia in GSD Ib is not well understood. Broadly, the problem arises from compromised cellular metabolism in the neutrophil, resulting in accelerated neutrophil apoptosis. The neutropenia in GSD is characterized by both a decrease in absolute neutrophil count and diminished neutrophil function. Neutrophils use a specific G6P metabolic pathway which relies on the presence of G6Pase-β or G6PT to maintain energy homeostasis within the cell. The absence of G6PT in GSD Ib limits this pathway, leading to endoplasmic reticulum stress and oxidative stress within the neutrophil, triggering premature apoptosis. Granulocyte colony-stimulating factor (G-CSF), available as filgrastim, can reduce the risk of infection. In some cases, G-CSF formulated as pegfilgrastim, sold under the trade name Neulasta, may be used as a slow-acting alternative requiring less frequent dosing.

Thrombocytopenia and blood clotting problems

Impaired platelet aggregation is an uncommon consequence of chronic hypoglycemia seen in GSD I patients. Research has demonstrated decreased platelet function, characterized by decreased prothrombin consumption, abnormal aggregation reactions, prolonged bleeding time, and low platelet adhesiveness. The severity of platelet dysfunction typically correlates with clinical condition, with the most severe cases coinciding with lactic acidosis and severe lipidemia. It may cause clinically significant bleeding, especially epistaxis. Additionally, GSD I patients may present with thrombocytopenia as a consequence of splenomegaly. In the setting of splenomegaly, various hematologic factors may be sequestered in the tissues of the spleen as blood is filtered through the organ.
This can diminish the levels of platelets available in the bloodstream, leading to thrombocytopenia.

Developmental effects

Developmental delay is a potential secondary effect of chronic or recurrent hypoglycemia, but is at least theoretically preventable. Normal neuronal and muscle cells do not express glucose-6-phosphatase, and are thus not impacted by GSD I directly. However, without proper treatment of hypoglycemia, growth failure commonly results from chronically low insulin levels, persistent acidosis, chronic elevation of catabolic hormones, and calorie insufficiency (or malabsorption). The most dramatic developmental delays are often caused by severe (not just persistent) episodes of hypoglycemia.

Diagnosis

Several different problems may lead to the diagnosis, usually by two years of age:

- seizures or other manifestations of severe fasting hypoglycemia
- hepatomegaly with abdominal protuberance
- hyperventilation and apparent respiratory distress due to metabolic acidosis
- episodes of vomiting due to metabolic acidosis, often precipitated by minor illness and accompanied by hypoglycemia

Once the diagnosis is suspected, the multiplicity of clinical and laboratory features usually makes a strong circumstantial case. If hepatomegaly, fasting hypoglycemia, and poor growth are accompanied by lactic acidosis, hyperuricemia, hypertriglyceridemia, and enlarged kidneys by ultrasound, GSD I is the most likely diagnosis. The differential diagnosis list includes glycogenoses types III and VI, fructose 1,6-bisphosphatase deficiency, and a few other conditions, but none are likely to produce all of the features of GSD I. The next step is usually a carefully monitored fast. Hypoglycemia often occurs within six hours. A critical blood specimen obtained at the time of hypoglycemia typically reveals a mild metabolic acidosis, high free fatty acids and beta-hydroxybutyrate, very low insulin levels, and high levels of glucagon, cortisol, and growth hormone.
Administration of intramuscular or intravenous glucagon (0.25 to 1 mg, depending on age) or epinephrine produces little rise in blood sugar. The diagnosis is definitively confirmed by liver biopsy with electron microscopy and assay of glucose-6-phosphatase activity in the tissue, and/or by specific gene testing, available in recent years.

Treatment

The primary treatment goal is prevention of hypoglycemia and of the secondary metabolic derangements by frequent feedings of foods high in glucose or starch (which is readily digested to glucose). To compensate for the inability of the liver to provide sugar, the total amount of dietary carbohydrate should approximate the 24-hour glucose production rate. The diet should contain approximately 65–70% carbohydrate, 10–15% protein, and 20–25% fat. At least a third of the carbohydrates should be supplied through the night, so that a young child goes no more than 3–4 hours without carbohydrate intake. Once a diagnosis is made, the priority in GSD I treatment is to maintain an adequate blood glucose level. Patients aim to maintain a blood glucose above the 72 mg/dL (4.0 mmol/L) cutoff for hypoglycemia. GSD Ib patients have an additional treatment priority relating to neutropenia. Proper management of blood glucose in GSD I is critical in avoiding the more severe effects of high levels of lactic acid and uric acid in the blood, and the development of hepatic adenomas. In the last 30 years, two methods have been used to achieve this goal in young children: (1) continuous nocturnal gastric infusion of glucose or starch; and (2) night-time feedings of uncooked cornstarch. An elemental formula, glucose polymer, and/or cornstarch can be infused continuously through the night at a rate supplying 0.5–0.6 g/kg/h of glucose for an infant, or 0.3–0.4 g/kg/h for an older child. This method requires a nasogastric or gastrostomy tube and pump.
Sudden death from hypoglycemia has occurred due to malfunction or disconnection, and periodic cornstarch feedings are now preferred to continuous infusion. Cornstarch is an inexpensive way to provide gradually digested glucose. One tablespoon contains nearly 9 g carbohydrate (36 calories). Although it is safer, less expensive, and requires no equipment, this method does require that parents arise every 3–4 hours to administer the cornstarch. A typical requirement for a young child is 1.6 g/kg every 4 hours. Long-term management should eliminate hypoglycemic symptoms and maintain normal growth. Treatment should achieve normal glucose, lactic acid, and electrolyte levels, and only mild elevations of uric acid and triglycerides. Avoidance of other sugars Intake of carbohydrates which must be converted to G6P to be utilized (e.g., galactose and fructose) should be minimized. Although elemental formulas are available for infants, many foods contain fructose or galactose in the forms of sucrose or lactose. Adherence becomes a contentious treatment issue after infancy. Other therapeutic measures Persistent elevation of uric acid above 6.5 mg/dl warrants treatment with allopurinol to prevent uric acid deposition in kidneys and joints. Because of the potential for impaired platelet function, coagulation ability should be checked and the metabolic state normalized before surgery. Bleeding time may be normalized with 1–2 days of glucose loading, and improved with DDAVP (desmopressin). During surgery, IV fluids should contain 10% dextrose and no lactate. A patient with GSD type 1b received a liver transplant at UCSF Medical Center in 1993, which resolved both the hypoglycemic episodes and the need to avoid natural sources of sugar. Other patients have undergone this procedure as well with positive results. 
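The cornstarch figures above (1.6 g/kg every 4 hours, about 9 g of carbohydrate per tablespoon) lend themselves to a back-of-envelope calculation; the 12 kg weight and function name below are hypothetical examples, not dosing advice.

```javascript
// Back-of-envelope calculation for the cornstarch regimen described above
// (1.6 g/kg every 4 hours, ~9 g carbohydrate per tablespoon). The 12 kg
// weight is a hypothetical example, not dosing advice.

const G_PER_KG_PER_DOSE = 1.6; // grams of cornstarch per kg, every 4 h
const G_PER_TABLESPOON = 9.0;  // approximate carbohydrate per tablespoon

// Return the dose in grams and in (fractional) tablespoons.
function cornstarchDose(weightKg) {
  const grams = G_PER_KG_PER_DOSE * weightKg;
  return { grams, tablespoons: grams / G_PER_TABLESPOON };
}

const { grams, tablespoons } = cornstarchDose(12);
console.log(`${grams.toFixed(1)} g per dose, about ${tablespoons.toFixed(1)} tablespoons`);
// 19.2 g per dose, about 2.1 tablespoons
```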
Although liver transplantation resolved the hypoglycemia, it did not resolve the chronic neutropenia or the risk of infection among patients. Treatment of acute metabolic acidosis episodes The most significant acute problem in childhood is a vulnerability to episodes of metabolic acidosis precipitated by minor illnesses. If a vomiting illness persists longer than 2–4 hours, the child should be seen and assessed for dehydration, acidosis, and hypoglycemia. If these are developing, intravenous fluids should be provided at a rate above maintenance. For mild acidosis, an effective fluid is 10% dextrose in ½ normal saline with 20 mEq/L KCl, but if acidosis is severe, 75–100 mEq/L and 20 mEq/L of K acetate can be substituted for the NaCl and KCl. Metabolic control Metabolic control often diminishes during and after puberty, as a result of a patient outgrowing their dietary treatment plan. Prognosis Without adequate metabolic treatment, patients with GSD I have died in infancy or childhood of overwhelming hypoglycemia and acidosis. Those who survived were stunted in physical growth and delayed in puberty because of chronically low insulin levels. Intellectual disability resulting from recurrent, severe hypoglycemia is considered preventable with appropriate treatment. Liver complications have been serious in some patients. Adenomas of the liver can develop in the second decade or later, with a small chance of later malignant transformation to hepatoma or hepatic carcinomas (detectable by alpha-fetoprotein screening). Several children with advanced hepatic complications have improved after liver transplantation. Additional problems reported in adolescents and adults with GSD I have included hyperuricemic gout, pancreatitis, and chronic kidney failure. Despite hyperlipidemia, atherosclerotic complications are uncommon. 
With diagnosis before serious harm occurs, prompt reversal of acidotic episodes, and appropriate long-term treatment, most children will be healthy. With exceptions and qualifications, adult health and life span may also be fairly good, although lack of effective treatment before the mid-1980s means information on long-term efficacy is limited. Epidemiology In the United States, GSD I has an incidence of approximately 1 in 50,000 to 100,000 births. None of the glycogenoses are currently detected by standard or extended newborn screening. The disease is more common in people of Ashkenazi Jewish, Mexican, Chinese, and Japanese descent. References Further reading GeneReview/NIH/UW entry on Glycogen Storage Disease Type I External links Autosomal recessive disorders Hepatology Inborn errors of carbohydrate metabolism
The humeral veil is one of the liturgical vestments of the Roman Rite, also used in some Anglican and Lutheran churches. It consists of a piece of cloth about 2.75 m long and 90 cm wide draped over the shoulders and down the front, normally of silk or cloth of gold. At the ends there are sometimes pockets in the back into which the hands can be slipped, so that the wearer can hold items without touching them directly. It is not clear when the humeral veil first appeared, though it was certainly in use in the continental Tridentine Rite and in other pre-Reformation usages including the Sarum Rite. The humeral veil is of the liturgical colour of the day on which it is used, or else is white or cloth of gold. The humeral veil is most often seen during the liturgy of Exposition and Benediction of the Blessed Sacrament. When priests or deacons bless the people with the monstrance, they cover their hands with the ends of the veil so that their hands do not touch the monstrance, as a mark of respect for the sacred vessel and as an indication that it is Jesus present in the Eucharistic species who blesses the people, not the minister. The humeral veil is also seen at the Mass of the Lord's Supper of the Catholic Church. It is used when the ciborium containing the Blessed Sacrament is taken in procession to the place of reposition, and again when it is brought back to the altar without solemnity during the Good Friday service. The ritual for Requiem Masses does not require the use of a humeral veil. The exception to this is the Dominican Rite, which has a number of distinctive liturgical customs. In the High Mass form of the Tridentine Mass, the subdeacon uses a humeral veil when carrying the chalice, paten, or other sacred vessels, which should be touched only by a deacon or another man in major orders. 
There are several ways to fold the humeral veil: each side can be folded individually in an accordion style (with the folds lying either on top of or underneath the center of the veil), or both sides can be folded simultaneously in an accordion style after offsetting one side. The humeral veil should not be confused with the vimpa, which is of a similar but narrower design. The vimpa is sometimes used when a bishop celebrates Mass. In the Roman Rite, if the bishop uses a mitre and crosier, the altar servers assigned to the task of holding those items cover their hands with the vimpa when holding them, symbolizing that the items do not belong to them. The vimpa may be in the color of the day or alternatively of a simple material in white or green. In Imperial Roman court ceremonial, a similar veil, or sudarium, was used by attendants approaching the Emperor to cover their hands, presumably in case he handed them something. In Late Antique and Early Medieval art, angels adjacent to Christ often hold such a cloth. See also Anglican devotions Sacramentals References Roman Catholic vestments Anglican vestments Eucharistic objects Shawls and wraps
```javascript 'use strict'; const { Error } = primordials; const { messaging_deserialize_symbol, messaging_transfer_symbol, messaging_clone_symbol, messaging_transfer_list_symbol } = internalBinding('symbols'); const { JSTransferable, setDeserializerCreateObjectFunction } = internalBinding('messaging'); function setup() { // Register the handler that will be used when deserializing JS-based objects // from .postMessage() calls. The format of `deserializeInfo` is generally // 'module:Constructor', e.g. 'internal/fs/promises:FileHandle'. setDeserializerCreateObjectFunction((deserializeInfo) => { const [ module, ctor ] = deserializeInfo.split(':'); const Ctor = require(module)[ctor]; if (typeof Ctor !== 'function' || !(Ctor.prototype instanceof JSTransferable)) { // Not one of the official errors because one should not be able to get // here without messing with Node.js internals. // eslint-disable-next-line no-restricted-syntax throw new Error(`Unknown deserialize spec ${deserializeInfo}`); } return new Ctor(); }); } module.exports = { setup, JSTransferable, kClone: messaging_clone_symbol, kDeserialize: messaging_deserialize_symbol, kTransfer: messaging_transfer_symbol, kTransferList: messaging_transfer_list_symbol }; ```
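The `'module:Constructor'` format described in the comment can be seen in isolation with a short sketch; the spec string below is the example from the comment itself, and the variable names are mine:

```javascript
// Minimal sketch of how a deserializeInfo spec string is split:
// the part before the first ':' names the module, the rest names
// the exported constructor.
const deserializeInfo = 'internal/fs/promises:FileHandle';
const [mod, ctorName] = deserializeInfo.split(':');
console.log(mod);      // internal/fs/promises
console.log(ctorName); // FileHandle
```

Note that `String.prototype.split(':')` splits on every colon, so this relies on the convention that the module paths used in these specs contain no colons.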
```java package com.ctrip.platform.dal.daogen.resource; import com.ctrip.platform.dal.daogen.entity.DalGroup; import com.ctrip.platform.dal.daogen.entity.Project; import com.ctrip.platform.dal.daogen.log.LoggerManager; import com.ctrip.platform.dal.daogen.utils.BeanGetter; import javax.annotation.Resource; import javax.inject.Singleton; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.QueryParam; import javax.ws.rs.core.MediaType; import java.sql.SQLException; import java.util.List; @Resource @Singleton @Path("projectview") public class DalGroupProjectResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<DalGroup> getGroups(@QueryParam("root") boolean root) throws SQLException { try { List<DalGroup> groups = BeanGetter.getDaoOfDalGroup().getAllGroups(); for (DalGroup group : groups) { group.setText(group.getGroup_name()); group.setIcon("glyphicon glyphicon-folder-open"); group.setChildren(true); } return groups; } catch (Throwable e) { LoggerManager.getInstance().error(e); throw e; } } @GET @Path("groupprojects") @Produces(MediaType.APPLICATION_JSON) public List<Project> getGroupProjects(@QueryParam("groupId") String groupId) throws SQLException { try { int groupID = -1; groupID = Integer.parseInt(groupId); return BeanGetter.getDaoOfProject().getProjectByGroupId(groupID); } catch (Throwable e) { LoggerManager.getInstance().error(e); throw e; } } } ```
The Andagua volcanic field (also known as Andahua) is a volcanic field in southern Peru which includes a number of cinder cones, lava domes and lava flows which have filled the Andagua Valley (which is also known as Valley of the Volcanoes for this reason). The volcanic field is part of a larger volcanic province that clusters around the Colca River and is mostly of Pleistocene age, although the Andagua sector also features volcanic cones with historical activity, with the last eruption about 370 years ago. Eruptions were mostly effusive, generating lava flows, cones and small eruption columns. Future eruptions are possible, and there is ongoing fumarolic activity. Volcanic activity in the field has flooded the Andahua valley with lava flows, damming local watersheds in the Laguna de Chachas, Laguna Mamacocha and Laguna Pumajallo lakes and burying the course of the Andagua River. The Andahua valley segment of the larger volcanic province was declared a geopark in 2015. History and name The volcanoes were first mentioned in a 1904 report but scientific investigation began by 1960; owing to the small size of Andagua volcanoes and their remote location they have not gained as much scientific interest as the large stratovolcanoes in the region. Eruptions have been dated on the basis of radiocarbon dating, potassium-argon dating and the morphology of the resulting vents as younger structures are steeper. The term "Andagua volcanic field" has not been used consistently and sometimes the term "Andagua Group" or variants with "Andahua" are used, even though the name of the village is Andagua; the field is also known as Andagua-Orcopampa volcanic field. The term "Valley of the Volcanoes" is a reference to the volcanoes that fill the valley floor. Geology and geomorphology The Andagua volcanic field lies in southern Peru, from the city of Arequipa and within the Arequipa Department and its provinces Castilla, Caylloma and Condesuyos. 
The towns of Orcopampa, Andagua/Andahua, Soporo, Chachas, Sucna and Ayo lie in its area along with mines and the Inka sites of Antaymarca, Ayo and Jello Jello; economic activity includes farming and mining as well as commerce and industrial activity. The volcanic field consists of cinder cones, lava domes, lava flow fields, pyroclastic cones, and scoria cones. Lava flows emanated from cones, domes and fractures; some cones have been breached by lava flows. Lava flows reach lengths of and thicknesses of ; their surfaces are blocky and feature channels. The highest individual volcano is high although the average height of cones is about or and their width is about ; lava domes reach heights of . Most of the vents are concentrated in the Valley of the Volcanoes, a long valley that descends to the Colca River, where they form clusters and alignments which have flooded the valley and tributary valleys with lava flows; most vents are situated on the valley floor while others lie on its flanks. Aside from the Andagua Valley proper, the volcanoes spread across the Apune Valley to the northwest and the Ayo Valley to the south. These are not monogenetic volcanoes as some of them show evidence of multiple eruption episodes. Colours range from grey over reddish to black, with reddish colours appearing on weathered lavas. The valley is flanked by high mountains. Among the vents are: West-northwest from Orcopampa lies the wide Mauras cone with a surrounding lava flow field. Farther northwest still the Jullulluyoc and Umajala lava domes and lava flows which reach the road between Orcopampa and the Poracota gold mine. All three are of Pleistocene age and the second two developed on ridges flanking the valley; Umajala bears signs of glaciation. In the valley of the Sora River lie a number of lava domes and three pyroclastic cones along with a lava flow field that reaches to the Andagua River. 
The cinder cones from north to south are Misahuana Mauras, Pabellón and Yana Mauras, while one of the lava domes is known as Jochane. An additional lava flow lies in the Pallca River valley, which joins the Sora River valley from the west, just before the entry of the Sora River valley into the Andagua Valley. These are of Pleistocene to Holocene age. South of Misahuanca in the Andagua valley lie six vents with a surrounding Pleistocene lava flow field; these vents are the Cerro Mauras cinder cone and two lava domes forming a northern cluster, and the cinder cone Challhue Mauras, the lava dome Tororocsa and the cinder cone Panahua aligned in west–east direction. Some of the vents predate the surrounding lava flows, while others post-date them such as Cerro Mauras which formed atop an older vent; the lava flows themselves blocked the valley and formed a large lava flow field. An additional also Pleistocene lava dome and lava flow are located farther east and fill a hanging valley. The Santa Rosa cinder cones and the Cerro Puca Mauras cinder cone, the largest in the Andagua volcanic field, along with a few lava domes such as Chipchane and an unnamed over wide dome lie within a Pleistocene to Holocene age lava flow field that spreads northwards, westwards and southwards towards the Andagua River, covering the entire valley. Consequently, the Andagua River cut a gorge across the lava flow fields, which ends in a waterfall farther south. This part of the volcanic field produced the most volcanic activity; its vents were controlled by faults. Puca Mauras is the largest cone and features pre-Hispanic buildings. Around the town of Andagua and along the road to Viraco west of Andagua lies a Pleistocene lava flow field that propagates from interconnected lava domes/lava craters in the El Tambo River valley east-southeastwards towards Andagua. 
The largest of these lava domes lies east of Andagua, is called Cochapampa and features a lava dome nested within its crater; additionally, the cinder cones Yanamauras, Yanamauras Sur directly north of Andagua and Ticsho northwest of the town are also part of this field. The vents here are spread across the valley, are smaller and have different ages. For example, the Pra-Ticsho lava dome is 270,000 years old while Ticsho only 4,050. Southeast of Andagua, the valley is mainly filled by Holocene lava flows, except around Soporo and east of Chachas where there are Pleistocene lava flows; this part of the volcanic field is known as the Chilcayoc lava field. Along with these are dispersed cinder cones such as Jenchana south of Andagua, Ninamama east of Andagua, Pampalquita, Ucuja, Chico, Chilcayoc, Jechapita clockwise around Soporo and Chilcayoc Grande farther east, along with a number of lava domes such as the cluster west of Sucna. One of the domes around Soporo is heavily eroded, the cinder cones are in part breached by lava flows. Chilcayoc Grande is the most prominent cinder cone of the Andagua volcanic field. North of the Chachas lake lies another lava flow field, with two lava domes aligned in southwest–northeast direction and a cinder cone Cerro Ticlla; the flows reached Chachas lake. This field is of Pleistocene age and bears signs of glaciation but the Cerro Pucamauras cinder cone in its middle is younger. Older volcanic landforms are vegetated and have developed a soil cover, and sometimes are altered by river or glacial erosion or have been converted into farmland. Overall, in outcrops the volcanic rocks of the Andagua valley reach great thickness, forming plains of lava and occasionally accumulations or fields of volcanic ash; the total volume of volcanic rocks is about and thicknesses are about . 
The Andagua River flows through the Valley of the Volcanoes; it originates from the confluence of the Chilcaimarca and Orcopampa Rivers and receives several tributaries over its course in the valley. In the Valley of the Volcanoes, the Andagua River has cut a gorge into the lava fields and has formed waterfalls, while elsewhere it disappears under the lava flows. Lava flows have formed lakes by damming drainages, such as Laguna de Chachas, Laguna Mamacocha and Laguna Pumajallo; additionally sediments from older lakes have been found at Canco. The waters of the Andagua River disappear in lava flows over a path of over ; the Laguna Mamacocha produces the Mamacocha River whose water ultimately originates in the Andagua River and which eventually flows into the Colca River. Composition The volcanic field has erupted rocks ranging from basaltic andesite to dacite, with composition varying from one individual volcano to another but dominantly sodic although it has also been described as potassic owing to the potassium-silica ratio. Generally, the rocks fall into the categories benmoreite, latite and mugearite with rare andesite and basalt. Phenocrysts include hornblende, olivine, plagioclase and pyroxene and less commonly alkali feldspar and biotite, and xenoliths have been reported as well. Overall, the composition of the magma is the most primitive of the magmas of southern Peru and underwent crystallization in deep magma chambers which "overflowed" in the form of an eruption once new magma entered them. In addition, the magma underwent some degree of contamination with crustal materials. Geologic context Subduction off the western margin of South America probably commenced during the Paleozoic and has continued to the present day between the Nazca Plate and the South America Plate, where the former subducts at a rate of below the latter. 
It has been accompanied by orogeny and volcanic activity, with three distinct phases of folding known as the Mochica, Peruvian and Inca phases which gave rise to faults and folds. The volcanic activity manifested itself as a set of volcanic arcs, such as the Tacaza arc with mineral-bearing calderas and the presently active Central Volcanic Zone which includes the Andagua volcanic field. In turn, the Central Volcanic Zone is one of three main volcanic arcs in the Andes which are separated by gaps without volcanic activity. Small volcanoes such as those of the Andagua volcanic field are a subordinate part of the Peruvian Central Volcanic Zone; most volcanoes are large and among these is Sabancaya with historical activity, El Misti with solfataric activity, Coropuna, which is the highest volcano in Peru and features Holocene activity, Firura and Solimana north and west from Coropuna, and Mismi, Hualca Hualca, Ampato, Chachani and Pichu Pichu. Additional volcanoes of this volcanic zone occur in Bolivia and Chile. The terrain surrounding the volcanic field features alluvium of Pleistocene to Holocene age, the volcanic Neogene/Pliocene Barroso Group and Mesozoic sediments of the Yura Group and the Socosani Formation. Faults crisscross the volcanic field; magma may have used them as ascent paths. The Valley of the Volcanoes itself is a fault-limited graben and some faults offset Quaternary deposits. The Andagua volcanic field is sometimes considered to include an area outside of the Valley of the Volcanoes, which itself features seven separate clusters of volcanoes including the Valley of the Volcanoes but also the Antapuna, Colca Valley, Huambo-Cabanaconde, Laguna Parihuana, Molloco Valley and Pampa Jaran; these clusters are separated from each other by geographic and geologic traits. Alternatively, some of these are considered to be a volcanic province of which Andagua is only one field. 
Among these are: The Antapuna field is located just north of the Andagua volcanic field and is centered on the heavily glacially eroded Antapuna volcano. Several lava domes and lava flows occur in this area, such as Cerro Antapuna west of Antapuna, Tanca southwest of Antapuna, Pampa Pisaca and another lava dome southeast of Antapuna and several unnamed cinder cones and lava flows northeast of Antapuna. The vents are glacially eroded and of Pleistocene age with the exception of Pumaranra northwest from Antapuna. The Molloco River valley features several Pleistocene to Holocene lava domes such as Uchuychaca and Cerro Coropuna (not to be confused with Coropuna, a stratovolcano), which are located around the Marhuas cinder cone. Two small lava flows lie in the Colca River valley upstream of the junction with the Molloco River. Several lava domes with associated lava flows are found in the Colca River valley at Chivay; they are between 400,000 and 90,000 years old but thermal springs occur there. South of Caylloma several volcanoes are found on an upland; they are Antaymarca, Saigua, Challpo, Andallullo, Antacollo and Sani and appear to be old given their vegetation. Finally, there are volcanoes associated with the Andagua volcanic field south of the Colca River. These are from west to east the Luceria field west of Gloriahuasi with the Honda and San Cristobal cinder cones, the Gloriahuasi field north of Gloriahuasi with two branches of lava flows, the Timar field northeast of Gloriahuasi with the Gloriahuasi stratovolcano - the only stratovolcano that is part of the Andagua volcanic field -, the Jaran field northwest from Lagunillas Pass which has the Marbas Grande cinder cone, the Marbas Chico cinder cones and Llajuapampa cinder cone, and finally the Uchan field south of the Lagunillas Pass with the Uchan Sur and Tururunca cinder cones, some lava domes farther south and a lava flow field that also runs south. 
With the exception of the Huambo volcanic field which features Holocene vents they are all of Pleistocene age. Climate and vegetation Temperatures vary between parts of the volcanic field, with Ayo having a semi-warm climate with temperatures of while Chachas has and Orcopampa of . The climate in the region is dry with a wet season that lasts from November to April, although humid periods have occurred recently, including two around 600 and 1000 AD linked to El Niño phenomena. Vegetation in the volcanic field corresponds to the puna and suni vegetation types, but farmland also occurs on agricultural terraces. Plants include xerophytes as well as ichu and yareta, and the vegetation varies with elevation; the Laguna Mamacocha and Chachas are populated by fish and form oases. Eruption history The oldest activity of the Andagua volcanic field occurred between 400,000 and 64,000 years ago and has been identified close to Chivay in the Colca Valley. Three separate generations of volcanic activity have been defined, a Pleistocene generation, a Pleistocene-Holocene generation and a Holocene generation, with about 3-4 vents forming every ten thousand years. The eruptions of the Andagua volcanic field cones have been accompanied by the emission of slow-moving lava flows and ballistic ejecta which reached less than distance from the vents; estimated volcanic explosivity indexes are 0-2 and the volcanic activity has been described as Strombolian eruptions or phreatomagmatic and accompanied by small eruption columns. Hawaiian eruptions and Strombolian eruptions generated scoria cones. Ticsho was emplaced 4,050 years ago, Mauras and Yana Mauras 2,900 years ago, while the eruption of Chilcayoc Grande occurred between 1451 and 1523. The youngest eruptions occurred along the Jenchana-Ninanmama fault and the most recent event was dated to 370 years ago and took place at Chilcayoc Chico. A more recent eruption was reported in 1913, but it is not clear that it actually occurred in the Andagua volcanic field. 
Neither historical records nor local records such as legends mention volcanic activity, although pre-Inka agricultural areas were impacted by lava flows and two towns were destroyed by volcanic activity after the Spanish conquest. Presently, hydrogen sulfide emanates from the Ninamama flow and has generated gypsum and sulfur deposits, and fumarolic activity was reported in 2003 although other sources state that no fumarolic activity occurs; future eruptions are certainly possible. Hazards from future eruptions The volcanoes are regarded as "very low hazard" by the Peruvian geological agency, which is working to build a monitoring network for the Andagua volcanoes and has drawn up maps of potentially endangered infrastructure. Various towns with a total population of about 11,800 people are located at the feet of extinct vents, but usually at a distance from the youngest volcanoes, although shifts in vent location during the course of an eruption could bring hazards to these towns. Explosive eruptions could result in fallout of lava bombs, tephra and volcanic ash, but the impact would be limited to the surroundings of the vent, probably less than . The volcanic field however also produced lava flows in the past, which can travel farther, reach infrastructure such as the Mantaro-Socabaya power line, and bury the ground for perhaps thousands of years. Access and national park project A number of paths and roads pass through the volcanic field. Andagua's surroundings are considered to be a typical expression of the volcanic field and the creation of a national park covering parts of the volcanic field has been proposed. A geopark was created in 2015 and recognized by UNESCO in 2019; some volcanoes of the Andagua volcanic field are considered to be geosites, with some spots already protected in some way. The area is of value from the perspectives of both geotourism and science. 
A concentration of small volcanoes such as that at Andagua in an easily accessible location is not common in the world. In general, aside from their role as hazards, volcanoes are important sources of tourism-based income. References Sources Volcanoes of Peru Landforms of Arequipa Region Andean Volcanic Belt Volcanic fields Pleistocene volcanoes Quaternary volcanoes Pleistocene South America Quaternary South America Four-thousanders of the Andes
In 1923, the U.S. state of Virginia renumbered many of its state highways. This renumbering was caused by the increase in mileage. Note that old SR 26 was removed entirely. List of routes Two-digit routes Spur routes Renumbering 1923 Highway renumbering in the United States Renumbering 1923 References Virginia Highways Project CTB Meeting Archives
```ruby describe :net_ftp_pwd, shared: true do end ```
Zaynulla Rasulev (Zaynulla bin Khabibulla bin Rasūl; 25 March 1833 – 2 February 1917) was a Bashkir religious leader in the 19th and early 20th century. He is notable as one of the most important representatives of Jadidism and the organizer of one of the first Jadidi madrasahs. Life Rasulev was born in 1833 in the village of Sharip in Verkheuralsk province, Orenburg Governorate (today in the Uchalinsky District, Republic of Bashkortostan, Russia), to the family of the mullah of the local Islamic community. He received instruction in a madrasah in his home village, then in the madrasah in Troitsk, and upon completing his studies began a clerical career. From 1858 he served as imam-khatib in the village of Yuldash (currently in Uchalinsky District, Bashkortostan). While still in his student years, Zaynulla became interested in Sufism. In 1859, he joined the Sufi order of the Naqshbandi. In 1869–1870 he received individual instruction from sheikh Ahmed Ziyaüddin Gümüşhanevi in Istanbul, from whom he received the ijazah, the authorization to teach the Naqshbandi Sufi doctrine. He made a hajj. After returning to Bashkortostan, he introduced several innovations into the local Sufi practices: singing the zikr aloud, the observance of Mawlid (the birthday of the Islamic prophet Muhammad), wearing prayer beads, etc. He endured persecution for preaching Sufism: the local conservative mullahs and the officials of mainstream Islam accused him of disseminating heresy and of subversive activity aimed at the authorities in power. Upon their written denunciation, Zaynulla Rasulev was arrested and sent into exile, which he served successively in Zlatoust (eight months), Nikolsk, Vologda Oblast (1873–1876) and Kostroma (1876–1881). In 1881, he returned from exile and resumed his activity as a religious leader in the village of Aqquzha in Bashkortostan. He made a second hajj. From 1884, Rasulev held the post of imam of the town mosque in Troitsk. 
He subsequently founded the madrasah of Rasuliya, one of the first Jadidi educational institutions in the Urals. He had numerous disciples and followers, and became one of the most influential Muslim leaders in Russia. Zaynulla Rasulev died on February 2, 1917. He is buried in the old Muslim cemetery in Troitsk. Numerous legends are preserved in the Bashkir public memory about the miracles and healings performed by Zaynulla Rasulev. His son, Gabdurakhman Rasulev, also became a Bashkir religious leader. Commemoration In 2009, a mosque named after Zaynulla-Ishan was opened in the town of Uchaly, Bashkortostan. A street in Ufa, Russia was named after Zaynulla Rasulev in 2008. References External links Yunusova, Aysylu. Islam in Bashkortostan. Moscow, 2007. Encyclopedia entry in Encyclopedia Bashkortostan Naqshbandi order Jadids Muslims from the Russian Empire Bashkir people 1833 births 1917 deaths
```typescript import { MockProxy } from "jest-mock-extended"; import { KeyDefinitionLike, MigrationHelper } from "../migration-helper"; import { mockMigrationHelper } from "../migration-helper.spec"; import { KdfConfigMigrator } from "./59-move-kdf-config-to-state-provider"; function exampleJSON() { return { global: { otherStuff: "otherStuff1", }, authenticatedAccounts: ["FirstAccount", "SecondAccount"], FirstAccount: { profile: { kdfIterations: 3, kdfMemory: 64, kdfParallelism: 5, kdfType: 1, otherStuff: "otherStuff1", }, otherStuff: "otherStuff2", }, SecondAccount: { profile: { kdfIterations: 600_001, kdfMemory: null as number, kdfParallelism: null as number, kdfType: 0, otherStuff: "otherStuff3", }, otherStuff: "otherStuff4", }, }; } function rollbackJSON() { return { user_FirstAccount_kdfConfig_kdfConfig: { iterations: 3, memory: 64, parallelism: 5, kdfType: 1, }, user_SecondAccount_kdfConfig_kdfConfig: { iterations: 600_001, memory: null as number, parallelism: null as number, kdfType: 0, }, global: { otherStuff: "otherStuff1", }, authenticatedAccounts: ["FirstAccount", "SecondAccount"], FirstAccount: { profile: { otherStuff: "otherStuff2", }, otherStuff: "otherStuff3", }, SecondAccount: { profile: { otherStuff: "otherStuff4", }, otherStuff: "otherStuff5", }, }; } const kdfConfigKeyDefinition: KeyDefinitionLike = { key: "kdfConfig", stateDefinition: { name: "kdfConfig", }, }; describe("KdfConfigMigrator", () => { let helper: MockProxy<MigrationHelper>; let sut: KdfConfigMigrator; describe("migrate", () => { beforeEach(() => { helper = mockMigrationHelper(exampleJSON(), 59); sut = new KdfConfigMigrator(58, 59); }); it("should remove kdfType and kdfConfig from Account.Profile", async () => { await sut.migrate(helper); expect(helper.set).toHaveBeenCalledTimes(2); expect(helper.set).toHaveBeenCalledWith("FirstAccount", { profile: { otherStuff: "otherStuff1", }, otherStuff: "otherStuff2", }); expect(helper.set).toHaveBeenCalledWith("SecondAccount", { profile: { 
otherStuff: "otherStuff3", }, otherStuff: "otherStuff4", }); expect(helper.setToUser).toHaveBeenCalledWith("FirstAccount", kdfConfigKeyDefinition, { iterations: 3, memory: 64, parallelism: 5, kdfType: 1, }); expect(helper.setToUser).toHaveBeenCalledWith("SecondAccount", kdfConfigKeyDefinition, { iterations: 600_001, memory: null as number, parallelism: null as number, kdfType: 0, }); }); }); describe("rollback", () => { beforeEach(() => { helper = mockMigrationHelper(rollbackJSON(), 59); sut = new KdfConfigMigrator(58, 59); }); it("should null out new KdfConfig account value and set account.profile", async () => { await sut.rollback(helper); expect(helper.setToUser).toHaveBeenCalledTimes(2); expect(helper.setToUser).toHaveBeenCalledWith("FirstAccount", kdfConfigKeyDefinition, null); expect(helper.setToUser).toHaveBeenCalledWith("SecondAccount", kdfConfigKeyDefinition, null); expect(helper.set).toHaveBeenCalledTimes(2); expect(helper.set).toHaveBeenCalledWith("FirstAccount", { profile: { kdfIterations: 3, kdfMemory: 64, kdfParallelism: 5, kdfType: 1, otherStuff: "otherStuff2", }, otherStuff: "otherStuff3", }); expect(helper.set).toHaveBeenCalledWith("SecondAccount", { profile: { kdfIterations: 600_001, kdfMemory: null as number, kdfParallelism: null as number, kdfType: 0, otherStuff: "otherStuff4", }, otherStuff: "otherStuff5", }); }); }); }); ```
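The test above exercises a common state-migration shape: on migrate, the per-account KDF fields are lifted out of `Account.Profile` into a user-scoped key, and on rollback the user-scoped value is nulled and the fields are written back. A minimal Python sketch of that shape, assuming a plain-dict state store (the function and key names are illustrative, not Bitwarden's actual helper API):

```python
# Hypothetical sketch of the migrate/rollback pattern tested above.
# State is a plain dict keyed the way exampleJSON()/rollbackJSON() are.

def migrate(state):
    """Move KDF fields from each account profile into a user-scoped key."""
    for account_id in state["authenticatedAccounts"]:
        profile = state[account_id]["profile"]
        kdf = {
            "iterations": profile.pop("kdfIterations", None),
            "memory": profile.pop("kdfMemory", None),
            "parallelism": profile.pop("kdfParallelism", None),
            "kdfType": profile.pop("kdfType", None),
        }
        # analogous to helper.setToUser(account, kdfConfigKeyDefinition, ...)
        state[f"user_{account_id}_kdfConfig_kdfConfig"] = kdf
    return state

def rollback(state):
    """Null the user-scoped value and restore the profile fields."""
    for account_id in state["authenticatedAccounts"]:
        kdf = state.pop(f"user_{account_id}_kdfConfig_kdfConfig", None)
        if kdf is not None:
            profile = state[account_id]["profile"]
            profile["kdfIterations"] = kdf["iterations"]
            profile["kdfMemory"] = kdf["memory"]
            profile["kdfParallelism"] = kdf["parallelism"]
            profile["kdfType"] = kdf["kdfType"]
    return state

state = {
    "authenticatedAccounts": ["FirstAccount"],
    "FirstAccount": {"profile": {"kdfIterations": 3, "kdfMemory": 64,
                                 "kdfParallelism": 5, "kdfType": 1}},
}
migrated = migrate(state)
assert "kdfIterations" not in migrated["FirstAccount"]["profile"]
assert migrated["user_FirstAccount_kdfConfig_kdfConfig"]["iterations"] == 3
restored = rollback(migrated)
assert restored["FirstAccount"]["profile"]["kdfIterations"] == 3
```

The round-trip property (migrate then rollback restores the original profile) is exactly what the two `describe` blocks in the test verify from either direction.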
```objective-c #import "GPUImageTwoInputFilter.h" @interface GPUImageVoronoiConsumerFilter : GPUImageTwoInputFilter { GLint sizeUniform; } @property (nonatomic, readwrite) CGSize sizeInPixels; @end ```
```java package com.vladsch.flexmark.util.misc; import org.jetbrains.annotations.NotNull; import java.util.BitSet; import java.util.Objects; import java.util.function.IntPredicate; /** * Interface for set of characters to use for inclusion exclusion tests * Can be used for code points since the argument is int */ public interface CharPredicate extends IntPredicate { CharPredicate NONE = value -> false; CharPredicate ALL = value -> true; CharPredicate SPACE = value -> value == ' '; CharPredicate TAB = value -> value == '\t'; CharPredicate EOL = value -> value == '\n'; CharPredicate ANY_EOL = value -> value == '\n' || value == '\r'; CharPredicate ANY_EOL_NUL = value -> value == '\n' || value == '\r' || value == '\0'; CharPredicate BACKSLASH = value -> value == '\\'; CharPredicate SLASH = value -> value == '/'; CharPredicate LINE_SEP = value -> value == '\u2028'; CharPredicate HASH = value -> value == '#'; CharPredicate SPACE_TAB = value -> value == ' ' || value == '\t'; CharPredicate SPACE_TAB_NUL = value -> value == ' ' || value == '\t' || value == '\0'; CharPredicate SPACE_TAB_LINE_SEP = value -> value == ' ' || value == '\t' || value == '\u2028'; CharPredicate SPACE_TAB_NBSP_LINE_SEP = value -> value == ' ' || value == '\t' || value == '\u00A0' || value == '\u2028'; CharPredicate SPACE_EOL = value -> value == ' ' || value == '\n'; CharPredicate SPACE_ANY_EOL = value -> value == ' ' || value == '\r' || value == '\n'; CharPredicate SPACE_TAB_NBSP = value -> value == ' ' || value == '\t' || value == '\u00A0'; CharPredicate SPACE_TAB_EOL = value -> value == ' ' || value == '\t' || value == '\n'; CharPredicate SPACE_TAB_NBSP_EOL = value -> value == ' ' || value == '\t' || value == '\n' || value == '\u00A0'; CharPredicate WHITESPACE = value -> value == ' ' || value == '\t' || value == '\n' || value == '\r'; CharPredicate WHITESPACE_OR_NUL = value -> value == ' ' || value == '\t' || value == '\n' || value == '\r' || value == '\0'; CharPredicate WHITESPACE_NBSP = value -> 
value == ' ' || value == '\t' || value == '\n' || value == '\r' || value == '\u00A0'; CharPredicate WHITESPACE_NBSP_OR_NUL = value -> value == ' ' || value == '\t' || value == '\n' || value == '\r' || value == '\u00A0' || value == '\0'; CharPredicate BLANKSPACE = value -> value == ' ' || value == '\t' || value == '\n' || value == '\r' || value == '\u000B' || value == '\f'; CharPredicate HEXADECIMAL_DIGITS = value -> value >= '0' && value <= '9' || value >= 'a' && value <= 'f' || value >= 'A' && value <= 'F'; CharPredicate DECIMAL_DIGITS = value -> value >= '0' && value <= '9'; CharPredicate OCTAL_DIGITS = value -> value >= '0' && value <= '7'; CharPredicate BINARY_DIGITS = value -> value >= '0' && value <= '1'; @Deprecated CharPredicate FALSE = NONE; @Deprecated CharPredicate TRUE = ALL; @Deprecated CharPredicate SPACE_TAB_OR_NUL = SPACE_TAB_NUL; @Override boolean test(int value); default boolean test(char value) { return test((int) value); } /** * Returns a composed predicate that represents a short-circuiting logical * AND of this predicate and another. When evaluating the composed * predicate, if this predicate is {@code false}, then the {@code other} * predicate is not evaluated. * * <p>Any exceptions thrown during evaluation of either predicate are relayed * to the caller; if evaluation of this predicate throws an exception, the * {@code other} predicate will not be evaluated. * * @param other a predicate that will be logically-ANDed with this * predicate * @return a composed predicate that represents the short-circuiting logical * AND of this predicate and the {@code other} predicate * @throws NullPointerException if other is null */ @NotNull default CharPredicate and(@NotNull CharPredicate other) { Objects.requireNonNull(other); return this == NONE || other == NONE ? NONE : this == ALL ? other : other == ALL ? this : (value) -> test(value) && other.test(value); } /** * Returns a predicate that represents the logical negation of this * predicate. 
* * @return a predicate that represents the logical negation of this * predicate */ @NotNull default CharPredicate negate() { return this == NONE ? ALL : this == ALL ? NONE : (value) -> !test(value); } /** * Returns a composed predicate that represents a short-circuiting logical * OR of this predicate and another. When evaluating the composed * predicate, if this predicate is {@code true}, then the {@code other} * predicate is not evaluated. * * <p>Any exceptions thrown during evaluation of either predicate are relayed * to the caller; if evaluation of this predicate throws an exception, the * {@code other} predicate will not be evaluated. * * @param other a predicate that will be logically-ORed with this * predicate * @return a composed predicate that represents the short-circuiting logical * OR of this predicate and the {@code other} predicate * @throws NullPointerException if other is null */ @NotNull default CharPredicate or(@NotNull CharPredicate other) { Objects.requireNonNull(other); return this == ALL || other == ALL ? ALL : this == NONE ? other : other == NONE ? this : (value) -> test(value) || other.test(value); } @NotNull static CharPredicate standardOrAnyOf(char c1) { return SPACE.test(c1) ? SPACE : EOL.test(c1) ? EOL : TAB.test(c1) ? TAB : value1 -> value1 == (int) c1; } @NotNull static CharPredicate standardOrAnyOf(char c1, char c2) { return c1 == c2 ? standardOrAnyOf(c1) : SPACE_TAB.test(c1) && SPACE_TAB.test(c2) ? SPACE_TAB : ANY_EOL.test(c1) && ANY_EOL.test(c2) ? ANY_EOL : value -> value == (int) c1 || value == (int) c2; } @NotNull static CharPredicate standardOrAnyOf(char c1, char c2, char c3) { return c1 == c2 && c2 == c3 ? standardOrAnyOf(c1) : c1 == c2 || c1 == c3 ? standardOrAnyOf(c2, c3) : c2 == c3 ? standardOrAnyOf(c1, c3) : value -> value == (int) c1 || value == (int) c2 || value == (int) c3; } @NotNull static CharPredicate standardOrAnyOf(char c1, char c2, char c3, char c4) { return c1 == c2 && c2 == c3 && c3 == c4 ? 
standardOrAnyOf(c1) : c1 == c2 || c1 == c3 || c1 == c4 ? standardOrAnyOf(c2, c3, c4) : c2 == c3 || c2 == c4 ? standardOrAnyOf(c1, c3, c4) : c3 == c4 ? standardOrAnyOf(c1, c2, c3) : WHITESPACE.test(c1) && WHITESPACE.test(c2) && WHITESPACE.test(c3) && WHITESPACE.test(c4) ? WHITESPACE : value -> value == (int) c1 || value == (int) c2 || value == (int) c3 || value == (int) c4; } @NotNull static CharPredicate anyOf(char... chars) { switch (chars.length) { case 0: return NONE; case 1: return standardOrAnyOf(chars[0]); case 2: return standardOrAnyOf(chars[0], chars[1]); case 3: return standardOrAnyOf(chars[0], chars[1], chars[2]); case 4: return standardOrAnyOf(chars[0], chars[1], chars[2], chars[3]); default: return anyOf(String.valueOf(chars)); } } static int indexOf(@NotNull CharSequence thizz, char c) { return indexOf(thizz, c, 0, thizz.length()); } static int indexOf(@NotNull CharSequence thizz, char c, int fromIndex, int endIndex) { fromIndex = Math.max(fromIndex, 0); endIndex = Math.min(thizz.length(), endIndex); for (int i = fromIndex; i < endIndex; i++) { if (c == thizz.charAt(i)) return i; } return -1; } @NotNull static CharPredicate anyOf(@NotNull CharSequence chars) { int maxFixed = 4; switch (chars.length()) { case 0: return NONE; case 1: return standardOrAnyOf(chars.charAt(0)); case 2: return standardOrAnyOf(chars.charAt(0), chars.charAt(1)); case 3: return standardOrAnyOf(chars.charAt(0), chars.charAt(1), chars.charAt(2)); case 4: return standardOrAnyOf(chars.charAt(0), chars.charAt(1), chars.charAt(2), chars.charAt(3)); default: // create bit set for ascii and add any above as a string index of test BitSet ascii = null; StringBuilder others = null; int iMax = chars.length(); for (int i = 0; i < iMax; i++) { char c = chars.charAt(i); if (c <= 127) { if (ascii == null) ascii = new BitSet(); ascii.set(c); } else { if (others == null) others = new StringBuilder(); if (indexOf(others, c) == -1) { others.append(c); } } } String finalOthers = others == null ? 
null : others.toString(); CharPredicate testOthers = finalOthers == null || finalOthers.isEmpty() ? null : finalOthers.length() <= maxFixed ? anyOf(others) : value -> indexOf(finalOthers, (char) value) != -1; CharPredicate testAscii = ascii == null || ascii.cardinality() == 0 ? null : ascii::get; assert testAscii != null || testOthers != null; if (testAscii != null && testOthers != null) { return testAscii.or(testOthers); } else if (testAscii != null) { return testAscii; } else { return testOthers; } } } } ```
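The `anyOf(CharSequence)` factory above splits its input into an ASCII `BitSet` for O(1) membership tests and a deduplicated string of non-ASCII characters checked by linear scan, then ORs the two predicates. The same optimization can be sketched in Python (illustrative only, not the flexmark API):

```python
# Sketch of the anyOf() split used above: ASCII code points go into a
# bit set (here, a Python int used as one), everything else into a
# deduplicated string scanned linearly.

def any_of(chars):
    ascii_bits = 0
    others = []
    for c in chars:
        cp = ord(c)
        if cp <= 127:
            ascii_bits |= 1 << cp        # BitSet.set(c)
        elif c not in others:            # indexOf(others, c) == -1
            others.append(c)
    others = "".join(others)

    def test(c):
        cp = ord(c)
        if cp <= 127:
            return bool((ascii_bits >> cp) & 1)   # ascii::get
        return c in others                        # indexOf != -1

    return test

is_space_tab_nbsp = any_of(" \t\u00a0")
assert is_space_tab_nbsp(" ")
assert is_space_tab_nbsp("\u00a0")
assert not is_space_tab_nbsp("x")
```

As in the Java version, the point of the split is that the common case (ASCII) stays a constant-time bit test while rare non-ASCII members don't force a 64K-wide table.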
The Roman Catholic Diocese of Shantou/Swatow (, ) is a diocese located in the city of Shantou in the Ecclesiastical province of Guangzhou in China. History April 6, 1914: Established as Apostolic Vicariate of Chaozhou 潮州 from the Apostolic Vicariate of Guangdong 廣東 August 18, 1915: Renamed as Apostolic Vicariate of Shantou 汕頭 April 11, 1946: Promoted as Diocese of Shantou 汕頭 Leadership Bishops of Shantou 汕頭 (Roman rite) Bishop Peter Zhuang Jian-jian (2006–present) (Clandestinely) Bishop John Cai Tiyuan (1981–2000) Bishop Charles Vogel, M.E.P. (April 11, 1946–April 13, 1958) Vicars Apostolic of Shantou 汕頭 (Roman Rite) Bishop Charles Vogel, M.E.P. (December 9, 1935–April 11, 1946) Bishop Adolphe Rayssac, M.E.P. (July 17, 1914–May 1, 1935) References GCatholic.org Catholic Hierarchy 1914 establishments in China Christianity in Guangdong Christian organizations established in 1914 Roman Catholic dioceses and prelatures established in the 20th century Roman Catholic dioceses in China Shantou
- Dangerously set `innerHTML`
- `PureRenderMixin` in **React**
- Immutability helpers in **React**
- Prop Validation
- Custom `propType`s to be required
```csharp // The .NET Foundation licenses this file to you under the MIT license. using Microsoft.NET.Build.Tasks; namespace Microsoft.NET.Build.Tests { public class GivenThatWeWantToBuildACppCliProject : SdkTest { public GivenThatWeWantToBuildACppCliProject(ITestOutputHelper log) : base(log) { } [FullMSBuildOnlyFact] public void It_builds_and_runs() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => AddBuildProperty(projectPath, project, "EnableManagedpackageReferenceSupport", "true")); // build projects separately with BuildProjectReferences=false to simulate VS build behavior new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Pass(); var buildCommand = new BuildCommand(testAsset, "CSConsoleApp"); buildCommand .Execute(new string[] { "-p:Platform=x64", "-p:BuildProjectReferences=false" }) .Should() .Pass(); var exe = Path.Combine( //find the platform directory new DirectoryInfo(Path.Combine(testAsset.TestRoot, "CSConsoleApp", "bin")).GetDirectories().Single().FullName, "Debug", $"{ToolsetInfo.CurrentTargetFramework}-windows", "CSConsoleApp.exe"); var runCommand = new RunExeCommand(Log, exe); runCommand .Execute() .Should() .Pass() .And .HaveStdOutContaining("Hello, World!"); } [FullMSBuildOnlyFact] public void It_builds_and_runs_with_package_reference() { var targetFramework = ToolsetInfo.CurrentTargetFramework + "-windows"; var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => ConfigureProject(projectPath, project, targetFramework, new string[] { "_EnablePackageReferencesInVCProjects", "IncludeWindowsSDKRefFrameworkReferences" })); new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Pass(); var cppnProjProperties = GetPropertyValues(testAsset.TestRoot, "NETCoreCppCliTest", targetFramework:
targetFramework); Assert.True(cppnProjProperties["_EnablePackageReferencesInVCProjects"] == "true"); Assert.True(cppnProjProperties["IncludeWindowsSDKRefFrameworkReferences"] == ""); } [FullMSBuildOnlyFact] public void Given_no_restore_It_builds_cpp_project() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => AddBuildProperty(projectPath, project, "EnableManagedpackageReferenceSupport", "True")); ; new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Pass(); } [FullMSBuildOnlyFact] public void Given_Wpf_framework_reference_It_builds_cpp_project() { var testAsset = _testAssetsManager .CopyTestAsset("CppCliLibWithWpfFrameworkReference") .WithSource(); new BuildCommand(testAsset) .Execute("-p:Platform=x64") .Should() .Pass(); } [FullMSBuildOnlyFact] public void It_fails_with_error_message_on_EnableComHosting() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => { if (Path.GetExtension(projectPath) == ".vcxproj") { XNamespace ns = project.Root.Name.Namespace; var globalPropertyGroup = project.Root .Descendants(ns + "PropertyGroup") .Where(e => e.Attribute("Label")?.Value == "Globals") .Single(); globalPropertyGroup.Add(new XElement(ns + "EnableComHosting", "true")); } }); new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Fail() .And .HaveStdOutContaining(Strings.NoSupportCppEnableComHosting); } [FullMSBuildOnlyFact] public void It_fails_with_error_message_on_fullframework() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => ChangeTargetFramework(projectPath, project, "net472")); new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Fail() .And 
.HaveStdOutContaining(Strings.NETFrameworkWithoutUsingNETSdkDefaults); } [FullMSBuildOnlyFact] public void It_fails_with_error_message_on_tfm_lower_than_3_1() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource() .WithProjectChanges((projectPath, project) => ChangeTargetFramework(projectPath, project, "netcoreapp3.0")); new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64") .Should() .Fail() .And .HaveStdOutContaining(Strings.CppRequiresTFMVersion31); } [FullMSBuildOnlyFact] public void When_run_with_selfcontained_It_fails_with_error_message() { var testAsset = _testAssetsManager .CopyTestAsset("NetCoreCsharpAppReferenceCppCliLib") .WithSource(); new BuildCommand(testAsset, "NETCoreCppCliTest") .Execute("-p:Platform=x64", "-p:selfcontained=true", $"-p:RuntimeIdentifier={ToolsetInfo.LatestWinRuntimeIdentifier}-x64") .Should() .Fail() .And .HaveStdOutContaining(Strings.NoSupportCppSelfContained); } private void ChangeTargetFramework(string projectPath, XDocument project, string targetFramework) { if (Path.GetExtension(projectPath) == ".vcxproj") { XNamespace ns = project.Root.Name.Namespace; project.Root.Descendants(ns + "PropertyGroup") .Descendants(ns + "TargetFramework") .Single().Value = targetFramework; } } private void ConfigureProject(string projectPath, XDocument project, string targetFramework, string[] properties) { AddBuildProperty(projectPath, project, "EnableManagedpackageReferenceSupport", "true"); ChangeTargetFramework(projectPath, project, targetFramework); RecordProperties(projectPath, project, properties); } private void AddBuildProperty(string projectPath, XDocument project, string property, string value) { if (Path.GetExtension(projectPath) == ".vcxproj") { XNamespace ns = project.Root.Name.Namespace; XElement propertyGroup = project.Root.Descendants(ns + "PropertyGroup").First(); propertyGroup.Add(new XElement(ns + $"{property}", value)); } } private void 
RecordProperties(string projectPath, XDocument project, string[] properties) { if (Path.GetExtension(projectPath) == ".vcxproj") { string propertiesTextElements = ""; XNamespace ns = project.Root.Name.Namespace; foreach (var propertyName in properties) { propertiesTextElements += $" <LinesToWrite Include='{propertyName}: $({propertyName})'/>" + Environment.NewLine; } string target = $@"<Target Name='WritePropertyValues' BeforeTargets='AfterBuild'> <ItemGroup> {propertiesTextElements} </ItemGroup> <WriteLinesToFile File='$(BaseIntermediateOutputPath)\$(Configuration)\$(TargetFramework)\PropertyValues.txt' Lines='@(LinesToWrite)' Overwrite='true' Encoding='Unicode' /> </Target>"; XElement newNode = XElement.Parse(target); foreach (var element in newNode.DescendantsAndSelf()) { element.Name = ns + element.Name.LocalName; } project.Root.AddFirst(newNode); } } public Dictionary<string, string> GetPropertyValues(string testRoot, string project, string targetFramework = null, string configuration = "Debug") { var propertyValues = new Dictionary<string, string>(); string intermediateOutputPath = Path.Combine(testRoot, project, "obj", configuration, targetFramework ?? "foo"); foreach (var line in File.ReadAllLines(Path.Combine(intermediateOutputPath, "PropertyValues.txt"))) { int colonIndex = line.IndexOf(':'); if (colonIndex > 0) { string propertyName = line.Substring(0, colonIndex); string propertyValue = line.Length == colonIndex + 1 ? "" : line.Substring(colonIndex + 2); propertyValues[propertyName] = propertyValue; } } return propertyValues; } } } ```
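`GetPropertyValues` above reads back the `PropertyValues.txt` file that the injected MSBuild target writes, splitting each `Name: value` line at the first colon and skipping the colon plus one space (an empty value when nothing follows). That parsing step, in an illustrative Python equivalent (helper name is mine, not the SDK's):

```python
# Parse "Name: value" lines the way GetPropertyValues does: split at the
# first colon, skip the ": " separator, ignore lines without a colon.

def parse_property_values(lines):
    values = {}
    for line in lines:
        colon = line.find(":")
        if colon > 0:
            name = line[:colon]
            # empty value when the line ends at (or just after) the colon
            value = "" if len(line) <= colon + 2 else line[colon + 2:]
            values[name] = value
    return values

props = parse_property_values([
    "_EnablePackageReferencesInVCProjects: true",
    "IncludeWindowsSDKRefFrameworkReferences: ",
])
assert props["_EnablePackageReferencesInVCProjects"] == "true"
assert props["IncludeWindowsSDKRefFrameworkReferences"] == ""
```

This mirrors the assertions in `It_builds_and_runs_with_package_reference`: one recorded property comes back `"true"`, the other empty.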
```php <?php if (!defined('access') or !access) { die('This file cannot be directly accessed.'); } ?> <div class="input-label"> <div class="c7"> <label for="form-threads"><?php _se('Threads'); ?></label> <select name="form-threads" id="form-threads" class="text-input"> <option selected value="0">-- <?php _se('Select number of threads'); ?> --</option> <?php echo CHV\Render\get_select_options_html([1 => 1, 2 => 2, 3 => 3, 4 => 4], null); ?> </select> </div> <div class="input-below font-size-small"><?php _se("This determines how intensive and fast the import process will be. Don't use more than %s threads on a shared server.", 2); ?></div> </div> ```
```go package testsuites import ( "bytes" "crypto/sha1" "io" "io/ioutil" "math/rand" "net/http" "os" "path" "sort" "strings" "sync" "testing" "time" "gopkg.in/check.v1" "github.com/docker/distribution/context" storagedriver "github.com/docker/distribution/registry/storage/driver" ) // Test hooks up gocheck into the "go test" runner. func Test(t *testing.T) { check.TestingT(t) } // RegisterSuite registers an in-process storage driver test suite with // the go test runner. func RegisterSuite(driverConstructor DriverConstructor, skipCheck SkipCheck) { check.Suite(&DriverSuite{ Constructor: driverConstructor, SkipCheck: skipCheck, ctx: context.Background(), }) } // SkipCheck is a function used to determine if a test suite should be skipped. // If a SkipCheck returns a non-empty skip reason, the suite is skipped with // the given reason. type SkipCheck func() (reason string) // NeverSkip is a default SkipCheck which never skips the suite. var NeverSkip SkipCheck = func() string { return "" } // DriverConstructor is a function which returns a new // storagedriver.StorageDriver. type DriverConstructor func() (storagedriver.StorageDriver, error) // DriverTeardown is a function which cleans up a suite's // storagedriver.StorageDriver. type DriverTeardown func() error // DriverSuite is a gocheck test suite designed to test a // storagedriver.StorageDriver. The intended way to create a DriverSuite is // with RegisterSuite. type DriverSuite struct { Constructor DriverConstructor Teardown DriverTeardown SkipCheck storagedriver.StorageDriver ctx context.Context } // SetUpSuite sets up the gocheck test suite. func (suite *DriverSuite) SetUpSuite(c *check.C) { if reason := suite.SkipCheck(); reason != "" { c.Skip(reason) } d, err := suite.Constructor() c.Assert(err, check.IsNil) suite.StorageDriver = d } // TearDownSuite tears down the gocheck test suite. 
func (suite *DriverSuite) TearDownSuite(c *check.C) { if suite.Teardown != nil { err := suite.Teardown() c.Assert(err, check.IsNil) } } // TearDownTest tears down the gocheck test. // This causes the suite to abort if any files are left around in the storage // driver. func (suite *DriverSuite) TearDownTest(c *check.C) { files, _ := suite.StorageDriver.List(suite.ctx, "/") if len(files) > 0 { c.Fatalf("Storage driver did not clean up properly. Offending files: %#v", files) } } // TestRootExists ensures that all storage drivers have a root path by default. func (suite *DriverSuite) TestRootExists(c *check.C) { _, err := suite.StorageDriver.List(suite.ctx, "/") if err != nil { c.Fatalf(`the root path "/" should always exist: %v`, err) } } // TestValidPaths checks that various valid file paths are accepted by the // storage driver. func (suite *DriverSuite) TestValidPaths(c *check.C) { contents := randomContents(64) validFiles := []string{ "/a", "/2", "/aa", "/a.a", "/0-9/abcdefg", "/abcdefg/z.75", "/abc/1.2.3.4.5-6_zyx/123.z/4", "/docker/docker-registry", "/123.abc", "/abc./abc", "/.abc", "/a--b", "/a-.b", "/_.abc", "/Docker/docker-registry", "/Abc/Cba"} for _, filename := range validFiles { err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) defer suite.deletePath(c, firstPart(filename)) c.Assert(err, check.IsNil) received, err := suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.IsNil) c.Assert(received, check.DeepEquals, contents) } } func (suite *DriverSuite) deletePath(c *check.C, path string) { for tries := 2; tries > 0; tries-- { err := suite.StorageDriver.Delete(suite.ctx, path) if _, ok := err.(storagedriver.PathNotFoundError); ok { err = nil } c.Assert(err, check.IsNil) paths, err := suite.StorageDriver.List(suite.ctx, path) if len(paths) == 0 { break } time.Sleep(time.Second * 2) } } // TestInvalidPaths checks that various invalid file paths are rejected by the // storage driver. 
func (suite *DriverSuite) TestInvalidPaths(c *check.C) { contents := randomContents(64) invalidFiles := []string{ "", "/", "abc", "123.abc", "//bcd", "/abc_123/"} for _, filename := range invalidFiles { err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) // only delete if file was successfully written if err == nil { defer suite.deletePath(c, firstPart(filename)) } c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.InvalidPathError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.InvalidPathError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } } // TestWriteRead1 tests a simple write-read workflow. func (suite *DriverSuite) TestWriteRead1(c *check.C) { filename := randomPath(32) contents := []byte("a") suite.writeReadCompare(c, filename, contents) } // TestWriteRead2 tests a simple write-read workflow with unicode data. func (suite *DriverSuite) TestWriteRead2(c *check.C) { filename := randomPath(32) contents := []byte("\xc3\x9f") suite.writeReadCompare(c, filename, contents) } // TestWriteRead3 tests a simple write-read workflow with a small string. func (suite *DriverSuite) TestWriteRead3(c *check.C) { filename := randomPath(32) contents := randomContents(32) suite.writeReadCompare(c, filename, contents) } // TestWriteRead4 tests a simple write-read workflow with 1MB of data. func (suite *DriverSuite) TestWriteRead4(c *check.C) { filename := randomPath(32) contents := randomContents(1024 * 1024) suite.writeReadCompare(c, filename, contents) } // TestWriteReadNonUTF8 tests that non-utf8 data may be written to the storage // driver safely. 
func (suite *DriverSuite) TestWriteReadNonUTF8(c *check.C) { filename := randomPath(32) contents := []byte{0x80, 0x80, 0x80, 0x80} suite.writeReadCompare(c, filename, contents) } // TestTruncate tests that putting smaller contents than an original file does // remove the excess contents. func (suite *DriverSuite) TestTruncate(c *check.C) { filename := randomPath(32) contents := randomContents(1024 * 1024) suite.writeReadCompare(c, filename, contents) contents = randomContents(1024) suite.writeReadCompare(c, filename, contents) } // TestReadNonexistent tests reading content from an empty path. func (suite *DriverSuite) TestReadNonexistent(c *check.C) { filename := randomPath(32) _, err := suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestWriteReadStreams1 tests a simple write-read streaming workflow. func (suite *DriverSuite) TestWriteReadStreams1(c *check.C) { filename := randomPath(32) contents := []byte("a") suite.writeReadCompareStreams(c, filename, contents) } // TestWriteReadStreams2 tests a simple write-read streaming workflow with // unicode data. func (suite *DriverSuite) TestWriteReadStreams2(c *check.C) { filename := randomPath(32) contents := []byte("\xc3\x9f") suite.writeReadCompareStreams(c, filename, contents) } // TestWriteReadStreams3 tests a simple write-read streaming workflow with a // small amount of data. func (suite *DriverSuite) TestWriteReadStreams3(c *check.C) { filename := randomPath(32) contents := randomContents(32) suite.writeReadCompareStreams(c, filename, contents) } // TestWriteReadStreams4 tests a simple write-read streaming workflow with 1MB // of data. 
func (suite *DriverSuite) TestWriteReadStreams4(c *check.C) { filename := randomPath(32) contents := randomContents(1024 * 1024) suite.writeReadCompareStreams(c, filename, contents) } // TestWriteReadStreamsNonUTF8 tests that non-utf8 data may be written to the // storage driver safely. func (suite *DriverSuite) TestWriteReadStreamsNonUTF8(c *check.C) { filename := randomPath(32) contents := []byte{0x80, 0x80, 0x80, 0x80} suite.writeReadCompareStreams(c, filename, contents) } // TestWriteReadLargeStreams tests that a 5GB file may be written to the storage // driver safely. func (suite *DriverSuite) TestWriteReadLargeStreams(c *check.C) { if testing.Short() { c.Skip("Skipping test in short mode") } filename := randomPath(32) defer suite.deletePath(c, firstPart(filename)) checksum := sha1.New() var fileSize int64 = 5 * 1024 * 1024 * 1024 contents := newRandReader(fileSize) writer, err := suite.StorageDriver.Writer(suite.ctx, filename, false) c.Assert(err, check.IsNil) written, err := io.Copy(writer, io.TeeReader(contents, checksum)) c.Assert(err, check.IsNil) c.Assert(written, check.Equals, fileSize) err = writer.Commit() c.Assert(err, check.IsNil) err = writer.Close() c.Assert(err, check.IsNil) reader, err := suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.IsNil) defer reader.Close() writtenChecksum := sha1.New() io.Copy(writtenChecksum, reader) c.Assert(writtenChecksum.Sum(nil), check.DeepEquals, checksum.Sum(nil)) } // TestReaderWithOffset tests that the appropriate data is streamed when // reading with a given offset. 
func (suite *DriverSuite) TestReaderWithOffset(c *check.C) { filename := randomPath(32) defer suite.deletePath(c, firstPart(filename)) chunkSize := int64(32) contentsChunk1 := randomContents(chunkSize) contentsChunk2 := randomContents(chunkSize) contentsChunk3 := randomContents(chunkSize) err := suite.StorageDriver.PutContent(suite.ctx, filename, append(append(contentsChunk1, contentsChunk2...), contentsChunk3...)) c.Assert(err, check.IsNil) reader, err := suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.IsNil) defer reader.Close() readContents, err := ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, append(append(contentsChunk1, contentsChunk2...), contentsChunk3...)) reader, err = suite.StorageDriver.Reader(suite.ctx, filename, chunkSize) c.Assert(err, check.IsNil) defer reader.Close() readContents, err = ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, append(contentsChunk2, contentsChunk3...)) reader, err = suite.StorageDriver.Reader(suite.ctx, filename, chunkSize*2) c.Assert(err, check.IsNil) defer reader.Close() readContents, err = ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contentsChunk3) // Ensure we get an invalid offset error for negative offsets.
reader, err = suite.StorageDriver.Reader(suite.ctx, filename, -1) c.Assert(err, check.FitsTypeOf, storagedriver.InvalidOffsetError{}) c.Assert(err.(storagedriver.InvalidOffsetError).Offset, check.Equals, int64(-1)) c.Assert(err.(storagedriver.InvalidOffsetError).Path, check.Equals, filename) c.Assert(reader, check.IsNil) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) // Read past the end of the content and make sure we get a reader that // returns 0 bytes and io.EOF reader, err = suite.StorageDriver.Reader(suite.ctx, filename, chunkSize*3) c.Assert(err, check.IsNil) defer reader.Close() buf := make([]byte, chunkSize) n, err := reader.Read(buf) c.Assert(err, check.Equals, io.EOF) c.Assert(n, check.Equals, 0) // Check the N-1 boundary condition, ensuring we get 1 byte then io.EOF. reader, err = suite.StorageDriver.Reader(suite.ctx, filename, chunkSize*3-1) c.Assert(err, check.IsNil) defer reader.Close() n, err = reader.Read(buf) c.Assert(n, check.Equals, 1) // We don't care whether the io.EOF comes on this read or the first // zero read, but the only error acceptable here is io.EOF. if err != nil { c.Assert(err, check.Equals, io.EOF) } // Any more reads should result in zero bytes and io.EOF n, err = reader.Read(buf) c.Assert(n, check.Equals, 0) c.Assert(err, check.Equals, io.EOF) } // TestContinueStreamAppendLarge tests that a stream write can be appended to without // corrupting the data with a large chunk size. func (suite *DriverSuite) TestContinueStreamAppendLarge(c *check.C) { suite.testContinueStreamAppend(c, int64(10*1024*1024)) } // TestContinueStreamAppendSmall is the same as TestContinueStreamAppendLarge, but only // with a tiny chunk size in order to test corner cases for some cloud storage drivers.
func (suite *DriverSuite) TestContinueStreamAppendSmall(c *check.C) { suite.testContinueStreamAppend(c, int64(32)) } func (suite *DriverSuite) testContinueStreamAppend(c *check.C, chunkSize int64) { filename := randomPath(32) defer suite.deletePath(c, firstPart(filename)) contentsChunk1 := randomContents(chunkSize) contentsChunk2 := randomContents(chunkSize) contentsChunk3 := randomContents(chunkSize) fullContents := append(append(contentsChunk1, contentsChunk2...), contentsChunk3...) writer, err := suite.StorageDriver.Writer(suite.ctx, filename, false) c.Assert(err, check.IsNil) nn, err := io.Copy(writer, bytes.NewReader(contentsChunk1)) c.Assert(err, check.IsNil) c.Assert(nn, check.Equals, int64(len(contentsChunk1))) err = writer.Close() c.Assert(err, check.IsNil) curSize := writer.Size() c.Assert(curSize, check.Equals, int64(len(contentsChunk1))) writer, err = suite.StorageDriver.Writer(suite.ctx, filename, true) c.Assert(err, check.IsNil) c.Assert(writer.Size(), check.Equals, curSize) nn, err = io.Copy(writer, bytes.NewReader(contentsChunk2)) c.Assert(err, check.IsNil) c.Assert(nn, check.Equals, int64(len(contentsChunk2))) err = writer.Close() c.Assert(err, check.IsNil) curSize = writer.Size() c.Assert(curSize, check.Equals, 2*chunkSize) writer, err = suite.StorageDriver.Writer(suite.ctx, filename, true) c.Assert(err, check.IsNil) c.Assert(writer.Size(), check.Equals, curSize) nn, err = io.Copy(writer, bytes.NewReader(fullContents[curSize:])) c.Assert(err, check.IsNil) c.Assert(nn, check.Equals, int64(len(fullContents[curSize:]))) err = writer.Commit() c.Assert(err, check.IsNil) err = writer.Close() c.Assert(err, check.IsNil) received, err := suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.IsNil) c.Assert(received, check.DeepEquals, fullContents) } // TestReadNonexistentStream tests that reading a stream for a nonexistent path // fails. 
func (suite *DriverSuite) TestReadNonexistentStream(c *check.C) { filename := randomPath(32) _, err := suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.Reader(suite.ctx, filename, 64) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestList checks the returned list of keys after populating a directory tree. func (suite *DriverSuite) TestList(c *check.C) { rootDirectory := "/" + randomFilename(int64(8+rand.Intn(8))) defer suite.deletePath(c, rootDirectory) doesnotexist := path.Join(rootDirectory, "nonexistent") _, err := suite.StorageDriver.List(suite.ctx, doesnotexist) c.Assert(err, check.Equals, storagedriver.PathNotFoundError{ Path: doesnotexist, DriverName: suite.StorageDriver.Name(), }) parentDirectory := rootDirectory + "/" + randomFilename(int64(8+rand.Intn(8))) childFiles := make([]string, 50) for i := 0; i < len(childFiles); i++ { childFile := parentDirectory + "/" + randomFilename(int64(8+rand.Intn(8))) childFiles[i] = childFile err := suite.StorageDriver.PutContent(suite.ctx, childFile, randomContents(32)) c.Assert(err, check.IsNil) } sort.Strings(childFiles) keys, err := suite.StorageDriver.List(suite.ctx, "/") c.Assert(err, check.IsNil) c.Assert(keys, check.DeepEquals, []string{rootDirectory}) keys, err = suite.StorageDriver.List(suite.ctx, rootDirectory) c.Assert(err, check.IsNil) c.Assert(keys, check.DeepEquals, []string{parentDirectory}) keys, err = suite.StorageDriver.List(suite.ctx, parentDirectory) c.Assert(err, check.IsNil) sort.Strings(keys) c.Assert(keys, check.DeepEquals, childFiles) // A few checks to add here (check out #819 for more discussion on this): // 1. Ensure that all paths are absolute. // 2. 
Ensure that listings only include direct children. // 3. Ensure that we only respond to directory listings that end with a slash (maybe?). } // TestMove checks that a moved object no longer exists at the source path and // does exist at the destination. func (suite *DriverSuite) TestMove(c *check.C) { contents := randomContents(32) sourcePath := randomPath(32) destPath := randomPath(32) defer suite.deletePath(c, firstPart(sourcePath)) defer suite.deletePath(c, firstPart(destPath)) err := suite.StorageDriver.PutContent(suite.ctx, sourcePath, contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Move(suite.ctx, sourcePath, destPath) c.Assert(err, check.IsNil) received, err := suite.StorageDriver.GetContent(suite.ctx, destPath) c.Assert(err, check.IsNil) c.Assert(received, check.DeepEquals, contents) _, err = suite.StorageDriver.GetContent(suite.ctx, sourcePath) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestMoveOverwrite checks that a moved object no longer exists at the source // path and overwrites the contents at the destination. 
func (suite *DriverSuite) TestMoveOverwrite(c *check.C) { sourcePath := randomPath(32) destPath := randomPath(32) sourceContents := randomContents(32) destContents := randomContents(64) defer suite.deletePath(c, firstPart(sourcePath)) defer suite.deletePath(c, firstPart(destPath)) err := suite.StorageDriver.PutContent(suite.ctx, sourcePath, sourceContents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, destPath, destContents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Move(suite.ctx, sourcePath, destPath) c.Assert(err, check.IsNil) received, err := suite.StorageDriver.GetContent(suite.ctx, destPath) c.Assert(err, check.IsNil) c.Assert(received, check.DeepEquals, sourceContents) _, err = suite.StorageDriver.GetContent(suite.ctx, sourcePath) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestMoveNonexistent checks that moving a nonexistent key fails and does not // delete the data at the destination path. func (suite *DriverSuite) TestMoveNonexistent(c *check.C) { contents := randomContents(32) sourcePath := randomPath(32) destPath := randomPath(32) defer suite.deletePath(c, firstPart(destPath)) err := suite.StorageDriver.PutContent(suite.ctx, destPath, contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Move(suite.ctx, sourcePath, destPath) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) received, err := suite.StorageDriver.GetContent(suite.ctx, destPath) c.Assert(err, check.IsNil) c.Assert(received, check.DeepEquals, contents) } // TestMoveInvalid provides various checks for invalid moves. func (suite *DriverSuite) TestMoveInvalid(c *check.C) { contents := randomContents(32) // Create a regular file. 
err := suite.StorageDriver.PutContent(suite.ctx, "/notadir", contents) c.Assert(err, check.IsNil) defer suite.deletePath(c, "/notadir") // Now try to move a non-existent file under it. err = suite.StorageDriver.Move(suite.ctx, "/notadir/foo", "/notadir/bar") c.Assert(err, check.NotNil) // non-nil error } // TestDelete checks that the delete operation removes data from the storage // driver. func (suite *DriverSuite) TestDelete(c *check.C) { filename := randomPath(32) contents := randomContents(32) defer suite.deletePath(c, firstPart(filename)) err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Delete(suite.ctx, filename) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestURLFor checks that the URLFor method functions properly, but only if it // is implemented. func (suite *DriverSuite) TestURLFor(c *check.C) { filename := randomPath(32) contents := randomContents(32) defer suite.deletePath(c, firstPart(filename)) err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) url, err := suite.StorageDriver.URLFor(suite.ctx, filename, nil) if _, ok := err.(storagedriver.ErrUnsupportedMethod); ok { return } c.Assert(err, check.IsNil) response, err := http.Get(url) c.Assert(err, check.IsNil) defer response.Body.Close() read, err := ioutil.ReadAll(response.Body) c.Assert(err, check.IsNil) c.Assert(read, check.DeepEquals, contents) url, err = suite.StorageDriver.URLFor(suite.ctx, filename, map[string]interface{}{"method": "HEAD"}) if _, ok := err.(storagedriver.ErrUnsupportedMethod); ok { return } c.Assert(err, check.IsNil) response, err = http.Head(url) c.Assert(err, check.IsNil) defer response.Body.Close() c.Assert(response.StatusCode, check.Equals, 200) c.Assert(response.ContentLength, check.Equals, int64(32)) 
} // TestDeleteNonexistent checks that removing a nonexistent key fails. func (suite *DriverSuite) TestDeleteNonexistent(c *check.C) { filename := randomPath(32) err := suite.StorageDriver.Delete(suite.ctx, filename) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestDeleteFolder checks that deleting a folder removes all child elements. func (suite *DriverSuite) TestDeleteFolder(c *check.C) { dirname := randomPath(32) filename1 := randomPath(32) filename2 := randomPath(32) filename3 := randomPath(32) contents := randomContents(32) defer suite.deletePath(c, firstPart(dirname)) err := suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, filename1), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, filename2), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, filename3), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Delete(suite.ctx, path.Join(dirname, filename1)) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename1)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename2)) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename3)) c.Assert(err, check.IsNil) err = suite.StorageDriver.Delete(suite.ctx, dirname) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename1)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = 
suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename2)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename3)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) } // TestDeleteOnlyDeletesSubpaths checks that deleting path A does not // delete path B when A is a prefix of B but B is not a subpath of A (so that // deleting "/a" does not delete "/ab"). This matters for services like S3 that // do not implement directories. func (suite *DriverSuite) TestDeleteOnlyDeletesSubpaths(c *check.C) { dirname := randomPath(32) filename := randomPath(32) contents := randomContents(32) defer suite.deletePath(c, firstPart(dirname)) err := suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, filename), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, filename+"suffix"), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, dirname, filename), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, path.Join(dirname, dirname+"suffix", filename), contents) c.Assert(err, check.IsNil) err = suite.StorageDriver.Delete(suite.ctx, path.Join(dirname, filename)) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, filename+"suffix")) c.Assert(err, check.IsNil) err = suite.StorageDriver.Delete(suite.ctx, path.Join(dirname, dirname)) 
c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, dirname, filename)) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) _, err = suite.StorageDriver.GetContent(suite.ctx, path.Join(dirname, dirname+"suffix", filename)) c.Assert(err, check.IsNil) } // TestStatCall verifies the implementation of the storagedriver's Stat call. func (suite *DriverSuite) TestStatCall(c *check.C) { content := randomContents(4096) dirPath := randomPath(32) fileName := randomFilename(32) filePath := path.Join(dirPath, fileName) defer suite.deletePath(c, firstPart(dirPath)) // Call on non-existent file/dir, check error. fi, err := suite.StorageDriver.Stat(suite.ctx, dirPath) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) c.Assert(fi, check.IsNil) fi, err = suite.StorageDriver.Stat(suite.ctx, filePath) c.Assert(err, check.NotNil) c.Assert(err, check.FitsTypeOf, storagedriver.PathNotFoundError{}) c.Assert(strings.Contains(err.Error(), suite.Name()), check.Equals, true) c.Assert(fi, check.IsNil) err = suite.StorageDriver.PutContent(suite.ctx, filePath, content) c.Assert(err, check.IsNil) // Call on regular file, check results fi, err = suite.StorageDriver.Stat(suite.ctx, filePath) c.Assert(err, check.IsNil) c.Assert(fi, check.NotNil) c.Assert(fi.Path(), check.Equals, filePath) c.Assert(fi.Size(), check.Equals, int64(len(content))) c.Assert(fi.IsDir(), check.Equals, false) createdTime := fi.ModTime() // Sleep and modify the file time.Sleep(time.Second * 10) content = randomContents(4096) err = suite.StorageDriver.PutContent(suite.ctx, filePath, content) c.Assert(err, check.IsNil) fi, err = suite.StorageDriver.Stat(suite.ctx, filePath) c.Assert(err, check.IsNil) c.Assert(fi, check.NotNil) 
time.Sleep(time.Second * 5) // allow changes to propagate (eventual consistency) // Check if the modification time is after the creation time. // In case of cloud storage services, storage frontend nodes might have // time drift between them, however that should be solved with sleeping // before update. modTime := fi.ModTime() if !modTime.After(createdTime) { c.Errorf("modtime (%s) is before the creation time (%s)", modTime, createdTime) } // Call on directory (do not check ModTime as dirs don't need to support it) fi, err = suite.StorageDriver.Stat(suite.ctx, dirPath) c.Assert(err, check.IsNil) c.Assert(fi, check.NotNil) c.Assert(fi.Path(), check.Equals, dirPath) c.Assert(fi.Size(), check.Equals, int64(0)) c.Assert(fi.IsDir(), check.Equals, true) } // TestPutContentMultipleTimes checks that if storage driver can overwrite the content // in the subsequent puts. Validates that PutContent does not have to work // with an offset like Writer does and overwrites the file entirely // rather than writing the data to the [0,len(data)) of the file. func (suite *DriverSuite) TestPutContentMultipleTimes(c *check.C) { filename := randomPath(32) contents := randomContents(4096) defer suite.deletePath(c, firstPart(filename)) err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) contents = randomContents(2048) // upload a different, smaller file err = suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) readContents, err := suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contents) } // TestConcurrentStreamReads checks that multiple clients can safely read from // the same file simultaneously with various offsets. 
func (suite *DriverSuite) TestConcurrentStreamReads(c *check.C) { var filesize int64 = 128 * 1024 * 1024 if testing.Short() { filesize = 10 * 1024 * 1024 c.Log("Reducing file size to 10MB for short mode") } filename := randomPath(32) contents := randomContents(filesize) defer suite.deletePath(c, firstPart(filename)) err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) var wg sync.WaitGroup readContents := func() { defer wg.Done() offset := rand.Int63n(int64(len(contents))) reader, err := suite.StorageDriver.Reader(suite.ctx, filename, offset) c.Assert(err, check.IsNil) readContents, err := ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contents[offset:]) } wg.Add(10) for i := 0; i < 10; i++ { go readContents() } wg.Wait() } // TestConcurrentFileStreams checks that multiple *os.File objects can be passed // in to Writer concurrently without hanging. func (suite *DriverSuite) TestConcurrentFileStreams(c *check.C) { numStreams := 32 if testing.Short() { numStreams = 8 c.Log("Reducing number of streams to 8 for short mode") } var wg sync.WaitGroup testStream := func(size int64) { defer wg.Done() suite.testFileStreams(c, size) } wg.Add(numStreams) for i := numStreams; i > 0; i-- { go testStream(int64(numStreams) * 1024 * 1024) } wg.Wait() } // TODO (brianbland): evaluate the relevancy of this test // TestEventualConsistency checks that if stat says that a file is a certain size, then // you can freely read from the file (this is the only guarantee that the driver needs to provide) // func (suite *DriverSuite) TestEventualConsistency(c *check.C) { // if testing.Short() { // c.Skip("Skipping test in short mode") // } // // filename := randomPath(32) // defer suite.deletePath(c, firstPart(filename)) // // var offset int64 // var misswrites int // var chunkSize int64 = 32 // // for i := 0; i < 1024; i++ { // contents := randomContents(chunkSize) // read, err := 
suite.StorageDriver.Writer(suite.ctx, filename, offset, bytes.NewReader(contents)) // c.Assert(err, check.IsNil) // // fi, err := suite.StorageDriver.Stat(suite.ctx, filename) // c.Assert(err, check.IsNil) // // // We are most concerned with being able to read data as soon as Stat declares // // it is uploaded. This is the strongest guarantee that some drivers (that guarantee // // at best eventual consistency) absolutely need to provide. // if fi.Size() == offset+chunkSize { // reader, err := suite.StorageDriver.Reader(suite.ctx, filename, offset) // c.Assert(err, check.IsNil) // // readContents, err := ioutil.ReadAll(reader) // c.Assert(err, check.IsNil) // // c.Assert(readContents, check.DeepEquals, contents) // // reader.Close() // offset += read // } else { // misswrites++ // } // } // // if misswrites > 0 { // c.Log("There were " + string(misswrites) + " occurrences of a write not being instantly available.") // } // // c.Assert(misswrites, check.Not(check.Equals), 1024) // } // BenchmarkPutGetEmptyFiles benchmarks PutContent/GetContent for 0B files func (suite *DriverSuite) BenchmarkPutGetEmptyFiles(c *check.C) { suite.benchmarkPutGetFiles(c, 0) } // BenchmarkPutGet1KBFiles benchmarks PutContent/GetContent for 1KB files func (suite *DriverSuite) BenchmarkPutGet1KBFiles(c *check.C) { suite.benchmarkPutGetFiles(c, 1024) } // BenchmarkPutGet1MBFiles benchmarks PutContent/GetContent for 1MB files func (suite *DriverSuite) BenchmarkPutGet1MBFiles(c *check.C) { suite.benchmarkPutGetFiles(c, 1024*1024) } // BenchmarkPutGet1GBFiles benchmarks PutContent/GetContent for 1GB files func (suite *DriverSuite) BenchmarkPutGet1GBFiles(c *check.C) { suite.benchmarkPutGetFiles(c, 1024*1024*1024) } func (suite *DriverSuite) benchmarkPutGetFiles(c *check.C, size int64) { c.SetBytes(size) parentDir := randomPath(8) defer func() { c.StopTimer() suite.StorageDriver.Delete(suite.ctx, firstPart(parentDir)) }() for i := 0; i < c.N; i++ { filename := path.Join(parentDir, 
randomPath(32)) err := suite.StorageDriver.PutContent(suite.ctx, filename, randomContents(size)) c.Assert(err, check.IsNil) _, err = suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.IsNil) } } // BenchmarkStreamEmptyFiles benchmarks Writer/Reader for 0B files func (suite *DriverSuite) BenchmarkStreamEmptyFiles(c *check.C) { suite.benchmarkStreamFiles(c, 0) } // BenchmarkStream1KBFiles benchmarks Writer/Reader for 1KB files func (suite *DriverSuite) BenchmarkStream1KBFiles(c *check.C) { suite.benchmarkStreamFiles(c, 1024) } // BenchmarkStream1MBFiles benchmarks Writer/Reader for 1MB files func (suite *DriverSuite) BenchmarkStream1MBFiles(c *check.C) { suite.benchmarkStreamFiles(c, 1024*1024) } // BenchmarkStream1GBFiles benchmarks Writer/Reader for 1GB files func (suite *DriverSuite) BenchmarkStream1GBFiles(c *check.C) { suite.benchmarkStreamFiles(c, 1024*1024*1024) } func (suite *DriverSuite) benchmarkStreamFiles(c *check.C, size int64) { c.SetBytes(size) parentDir := randomPath(8) defer func() { c.StopTimer() suite.StorageDriver.Delete(suite.ctx, firstPart(parentDir)) }() for i := 0; i < c.N; i++ { filename := path.Join(parentDir, randomPath(32)) writer, err := suite.StorageDriver.Writer(suite.ctx, filename, false) c.Assert(err, check.IsNil) written, err := io.Copy(writer, bytes.NewReader(randomContents(size))) c.Assert(err, check.IsNil) c.Assert(written, check.Equals, size) err = writer.Commit() c.Assert(err, check.IsNil) err = writer.Close() c.Assert(err, check.IsNil) rc, err := suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.IsNil) rc.Close() } } // BenchmarkList5Files benchmarks List for 5 small files func (suite *DriverSuite) BenchmarkList5Files(c *check.C) { suite.benchmarkListFiles(c, 5) } // BenchmarkList50Files benchmarks List for 50 small files func (suite *DriverSuite) BenchmarkList50Files(c *check.C) { suite.benchmarkListFiles(c, 50) } func (suite *DriverSuite) benchmarkListFiles(c *check.C, numFiles int64) 
{ parentDir := randomPath(8) defer func() { c.StopTimer() suite.StorageDriver.Delete(suite.ctx, firstPart(parentDir)) }() for i := int64(0); i < numFiles; i++ { err := suite.StorageDriver.PutContent(suite.ctx, path.Join(parentDir, randomPath(32)), nil) c.Assert(err, check.IsNil) } c.ResetTimer() for i := 0; i < c.N; i++ { files, err := suite.StorageDriver.List(suite.ctx, parentDir) c.Assert(err, check.IsNil) c.Assert(int64(len(files)), check.Equals, numFiles) } } // BenchmarkDelete5Files benchmarks Delete for 5 small files func (suite *DriverSuite) BenchmarkDelete5Files(c *check.C) { suite.benchmarkDeleteFiles(c, 5) } // BenchmarkDelete50Files benchmarks Delete for 50 small files func (suite *DriverSuite) BenchmarkDelete50Files(c *check.C) { suite.benchmarkDeleteFiles(c, 50) } func (suite *DriverSuite) benchmarkDeleteFiles(c *check.C, numFiles int64) { for i := 0; i < c.N; i++ { parentDir := randomPath(8) defer suite.deletePath(c, firstPart(parentDir)) c.StopTimer() for j := int64(0); j < numFiles; j++ { err := suite.StorageDriver.PutContent(suite.ctx, path.Join(parentDir, randomPath(32)), nil) c.Assert(err, check.IsNil) } c.StartTimer() // This is the operation we're benchmarking err := suite.StorageDriver.Delete(suite.ctx, firstPart(parentDir)) c.Assert(err, check.IsNil) } } func (suite *DriverSuite) testFileStreams(c *check.C, size int64) { tf, err := ioutil.TempFile("", "tf") c.Assert(err, check.IsNil) defer os.Remove(tf.Name()) defer tf.Close() filename := randomPath(32) defer suite.deletePath(c, firstPart(filename)) contents := randomContents(size) _, err = tf.Write(contents) c.Assert(err, check.IsNil) tf.Sync() tf.Seek(0, os.SEEK_SET) writer, err := suite.StorageDriver.Writer(suite.ctx, filename, false) c.Assert(err, check.IsNil) nn, err := io.Copy(writer, tf) c.Assert(err, check.IsNil) c.Assert(nn, check.Equals, size) err = writer.Commit() c.Assert(err, check.IsNil) err = writer.Close() c.Assert(err, check.IsNil) reader, err := 
suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.IsNil) defer reader.Close() readContents, err := ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contents) } func (suite *DriverSuite) writeReadCompare(c *check.C, filename string, contents []byte) { defer suite.deletePath(c, firstPart(filename)) err := suite.StorageDriver.PutContent(suite.ctx, filename, contents) c.Assert(err, check.IsNil) readContents, err := suite.StorageDriver.GetContent(suite.ctx, filename) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contents) } func (suite *DriverSuite) writeReadCompareStreams(c *check.C, filename string, contents []byte) { defer suite.deletePath(c, firstPart(filename)) writer, err := suite.StorageDriver.Writer(suite.ctx, filename, false) c.Assert(err, check.IsNil) nn, err := io.Copy(writer, bytes.NewReader(contents)) c.Assert(err, check.IsNil) c.Assert(nn, check.Equals, int64(len(contents))) err = writer.Commit() c.Assert(err, check.IsNil) err = writer.Close() c.Assert(err, check.IsNil) reader, err := suite.StorageDriver.Reader(suite.ctx, filename, 0) c.Assert(err, check.IsNil) defer reader.Close() readContents, err := ioutil.ReadAll(reader) c.Assert(err, check.IsNil) c.Assert(readContents, check.DeepEquals, contents) } var filenameChars = []byte("abcdefghijklmnopqrstuvwxyz0123456789") var separatorChars = []byte("._-") func randomPath(length int64) string { path := "/" for int64(len(path)) < length { chunkLength := rand.Int63n(length-int64(len(path))) + 1 chunk := randomFilename(chunkLength) path += chunk remaining := length - int64(len(path)) if remaining == 1 { path += randomFilename(1) } else if remaining > 1 { path += "/" } } return path } func randomFilename(length int64) string { b := make([]byte, length) wasSeparator := true for i := range b { if !wasSeparator && i < len(b)-1 && rand.Intn(4) == 0 { b[i] = separatorChars[rand.Intn(len(separatorChars))] wasSeparator = true } else { 
b[i] = filenameChars[rand.Intn(len(filenameChars))] wasSeparator = false } } return string(b) } // randomBytes pre-allocates all of the memory sizes needed for the test. If // anything panics while accessing randomBytes, just make this number bigger. var randomBytes = make([]byte, 128<<20) func init() { _, _ = rand.Read(randomBytes) // always returns len(randomBytes) and nil error } func randomContents(length int64) []byte { return randomBytes[:length] } type randReader struct { r int64 m sync.Mutex } func (rr *randReader) Read(p []byte) (n int, err error) { rr.m.Lock() defer rr.m.Unlock() toread := int64(len(p)) if toread > rr.r { toread = rr.r } n = copy(p, randomContents(toread)) rr.r -= int64(n) if rr.r <= 0 { err = io.EOF } return } func newRandReader(n int64) *randReader { return &randReader{r: n} } func firstPart(filePath string) string { if filePath == "" { return "/" } for { if filePath[len(filePath)-1] == '/' { filePath = filePath[:len(filePath)-1] } dir, file := path.Split(filePath) if dir == "" && file == "" { return "/" } if dir == "/" || dir == "" { return "/" + file } if file == "" { return dir } filePath = dir } } ```
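The `firstPart` helper at the end of the suite drives all of the cleanup `defer`s, so its behavior is worth sanity-checking in isolation. A minimal standalone sketch (the helper is copied verbatim from the suite above; the `main` harness is just for illustration) shows that it always returns the first path component with a leading slash:

```go
package main

import (
	"fmt"
	"path"
)

// firstPart mirrors the suite helper: it walks up the path until only the
// first component remains, returning "/" for an empty path.
func firstPart(filePath string) string {
	if filePath == "" {
		return "/"
	}
	for {
		if filePath[len(filePath)-1] == '/' {
			filePath = filePath[:len(filePath)-1]
		}
		dir, file := path.Split(filePath)
		if dir == "" && file == "" {
			return "/"
		}
		if dir == "/" || dir == "" {
			return "/" + file
		}
		if file == "" {
			return dir
		}
		filePath = dir
	}
}

func main() {
	fmt.Println(firstPart("/a/b/c")) // /a
	fmt.Println(firstPart("/a/b/")) // /a
	fmt.Println(firstPart("/foo"))  // /foo
	fmt.Println(firstPart(""))      // /
}
```

This is why `defer suite.deletePath(c, firstPart(filename))` removes the whole randomly generated tree, not just the leaf file: deleting the first component recursively cleans up everything beneath it.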
- How to unstage a staged file
- Finding a tag
- Limiting log output by time
- Check the status of your files
- Ignore files in git
Sadiq Mousa (born 20 October 1959) is a former Iraqi football forward. He competed in the men's tournament at the 1984 Summer Olympics. Mousa played for Iraq between 1984 and 1987. References 1959 births Living people Iraqi men's footballers Iraq men's international footballers Olympic footballers for Iraq Footballers at the 1984 Summer Olympics Place of birth missing (living people) Men's association football forwards Men's association football midfielders
```go package test import ( "testing" "github.com/gruntwork-io/terratest/modules/terraform" "github.com/magiconair/properties/assert" ) func TestUnitNullInput(t *testing.T) { t.Parallel() foo := map[string]interface{}{ "nullable_string": nil, "nonnullable_string": "foo", } options := &terraform.Options{ TerraformDir: "./fixtures/terraform-null", Vars: map[string]interface{}{"foo": foo}, } terraform.InitAndApply(t, options) fooOut := terraform.OutputMap(t, options, "foo") assert.Equal(t, fooOut, map[string]string{"nonnullable_string": "foo", "nullable_string": "<nil>"}) barOut := terraform.Output(t, options, "bar") assert.Equal(t, barOut, "I AM NULL") } ```
```javascript import React from 'react'; import { Route, IndexRoute } from 'react-router'; import Base from './containers/Base/Base'; import Board from './containers/Board/Board'; export const urls = { index: '/', }; export const routes = ( <Route path={urls.index} component={Base}> <IndexRoute component={Board} /> </Route> ); ```
Claudia Inés Serrano Madrid (born 17 February 1957) is a Chilean politician and sociologist who served as minister during the first government of Michelle Bachelet (2006–2010). References External links Profile at RedEncuentros 1957 births Living people Pontifical Catholic University of Chile alumni University of Chile alumni School for Advanced Studies in the Social Sciences alumni Chilean sociologists Chilean women sociologists 21st-century Chilean politicians Socialist Party of Chile politicians
Wānaka Airport is an airport serving the rural town of Wānaka in Otago, New Zealand. The airport currently has scheduled commercial flights from one airline, SoundsAir, with Air New Zealand having ceased flights to the airport in 2013. It largely serves as a base for scenic and charter flights to destinations such as Milford Sound and Mount Aspiring National Park. The airport is located on a plateau above the small village of Luggate, 10 km south-east of Wānaka township. It was originally a private airstrip owned by Tim Wallis, but in 1985 it became the main commercial airport for Wānaka, replacing Mt Iron Aerodrome. The Warbirds over Wanaka air show has been held biennially at the airport since 1988, regularly attracting crowds of more than 50,000 people. Other attractions, including the National Transport and Toy Museum and the Warbirds & Wheels Museum, are also located at the airport. History Wānaka was originally served by Mount Iron Aerodrome. By the early 1980s it was clear a new airport would be required to serve the town's growing tourism industry, as Mount Iron's runway was too short for commercial aircraft with no possibility of extension. In 1984, the local council decided to create a new airport for the town by expanding a private airstrip to the south-east of the town, which had been owned by Tim Wallis. On 19 March 2004, Air New Zealand began scheduled services from Wānaka to Christchurch through its subsidiary Eagle Airways, using 19-seat Beechcraft 1900D aircraft. Larger aircraft, such as the Dash8-Q300, were occasionally used during periods of increased demand and airshow weekends. Air New Zealand ended scheduled services to Wānaka on 30 January 2013 after stating the route had never been profitable and showed no signs of improvement. 
Following the withdrawal of the national carrier, local businesses attempted to run a charter service during the ski season and asked Air New Zealand to consider reinstating services on a seasonal basis using larger aircraft, although neither of these efforts proved successful. On 2 November 2020, a new regular commercial service operated by SoundsAir launched, once again linking Wanaka Airport with Christchurch Airport. Failed attempt to develop a jet airport at Wānaka In September 2020, a group formed out of concerns for the future of Wānaka Airport, Wanaka Stakeholders Group Inc, took on Queenstown Lakes District Council and Queenstown Airport Corporation over an alleged illegal airport lease and plans to develop a jet airport at Wānaka Airport. The group had almost 3,500 members, about 50% of Wānaka's adult population, and claimed that QLDC and the Queenstown Airport Corporation had failed to consult properly over a controversial 100-year lease that allowed virtually unlimited airport expansion in Wānaka. The group argued that QLDC and QAC documents, kept secret for years, instead forecast a new Wānaka airport as big as the current Queenstown airport, with jet aircraft taking off or landing every 10–12 minutes. The court heard evidence that elected QLDC councillors were kept in the dark over plans to introduce jet aircraft to an expanded Wānaka airport. The court was also told that Mayor Jim Boult had been given the right by QLDC to negotiate the terms of the Wānaka airport lease, albeit alongside the QLDC CEO Mike Theelen and two selected councillors. The group further alleged that community consultation by QLDC deliberately hid plans to introduce narrow-body jet aircraft to Wānaka and understated the extent of the planned airport expansion. In April 2021, the High Court ruled that the airport lease was illegal and control of the airport was handed back to QLDC, who were cautioned to follow due process in the future. 
The result was a double blow for QLDC and QAC: not only was the Wānaka Airport lease declared illegal, but the case also centred on how open and transparent they had been in communicating with the Wānaka community before the lease was granted. Wānaka Airport is now managed on behalf of QLDC by QAC under a management agreement. Operations Scenic and charter operators are the main commercial users and include Aspiring Air, Glenorchy Air and Southern Alps Air. There are extensive skydiving and helicopter operations, and a large number of general aviation aircraft are based at the airport. On 2 November 2020, Sounds Air commenced daily services between Christchurch and Wanaka utilising a PC-12. Limitations The runway's Pavement Classification Number (PCN) is too low to cope with heavier aircraft, and the length of the runway prevents certain aircraft from using the airport. However, the airport has consent rights to extend the current runway westward by 500 metres, with an additional 240 metres for standard overrun requirements. The size of the terminal limits aircraft passenger capacity; larger aircraft such as the Dash8-Q300 and ATR 72 are still able to operate, but the airport's facilities are not designed to handle the larger number of passengers these aircraft carry. The lack of a VHF omnidirectional range (VOR) beacon at the airport poses an issue, as few aircraft have appropriate GPS systems to enable a precision instrument approach in bad weather. Airlines and destinations See also List of airports in New Zealand List of airlines of New Zealand Transport in New Zealand References External links Wānaka Airport AIP New Zealand U-Fly Wanaka Airports in New Zealand Wānaka Transport buildings and structures in Otago
```makefile ################################################################################ # # python-pathvalidate # ################################################################################ PYTHON_PATHVALIDATE_VERSION = 2.5.2 PYTHON_PATHVALIDATE_SOURCE = pathvalidate-$(PYTHON_PATHVALIDATE_VERSION).tar.gz PYTHON_PATHVALIDATE_SITE = path_to_url PYTHON_PATHVALIDATE_SETUP_TYPE = setuptools PYTHON_PATHVALIDATE_LICENSE = MIT PYTHON_PATHVALIDATE_LICENSE_FILES = LICENSE $(eval $(python-package)) ```
Artiifact (stylized as ARTIIFACT) is the debut studio album by South African hip hop record producer and musician Anatii. The album was released on 9 September 2016 by his record label YAL Entertainment, after many delays and following the singles "Freedom" and "The Saga", released in 2014 and early 2015, respectively. The album was initially titled Electronic Bushman.

Promotion

To promote his debut studio release, Anatii began celebrations with an album release party at Moloko Club in Hatfield, east of Pretoria, on 9 September 2016. Anatii then hosted the ARTIIFACT Tour, with shows in Midrand and Durban on 30 September and 2 October 2016, respectively. The tour's main opening act was American singer Omarion, a friend and collaborator.

Track listing

References

2016 debut albums
Anatii albums
```javascript import { createSelector } from 'reselect'; const foo = createSelector( getIds, getObjects, (ids, objects) => ids.map(id => objects[id]) ); const bar = createSelector( [getIds, getObjects], (ids, objects) => ids.map(id => objects[id]) ); ```
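The two calls above are equivalent ways of passing input selectors to `createSelector` (`getIds` and `getObjects` are assumed to be plain state accessors). As a sketch of what the memoization buys, here is a hypothetical, much-simplified `createSelectorLite` (not the real reselect implementation): the combiner re-runs only when an input selector's result changes between calls.

```javascript
// Hypothetical, simplified sketch of what reselect's createSelector does
// (NOT the real library): the combiner only re-runs when an input
// selector's result changes between calls.
function createSelectorLite(inputSelectors, combiner) {
  let lastInputs = null;
  let lastResult;
  return (state) => {
    const inputs = inputSelectors.map((sel) => sel(state));
    if (lastInputs === null || inputs.some((v, i) => v !== lastInputs[i])) {
      lastResult = combiner(...inputs);
      lastInputs = inputs;
    }
    return lastResult; // same reference while inputs are unchanged
  };
}

// Assumed state accessors, mirroring the getIds/getObjects used above.
const getIds = (state) => state.ids;
const getObjects = (state) => state.objects;

const selectItems = createSelectorLite(
  [getIds, getObjects],
  (ids, objects) => ids.map((id) => objects[id])
);

const state = { ids: [1, 2], objects: { 1: 'a', 2: 'b' } };
const first = selectItems(state);
const second = selectItems(state); // memoized: same array reference
```

Because the memoized result is returned by reference while inputs are unchanged, downstream equality checks (e.g. React re-render guards) stay cheap.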
```php <h4 class="gs-card-title">{{trans('gitscrum.general-attachments')}}</h4> <div class="gs-card-content"> <ul class="list-group"> @each('partials.lists.attachments-min', $list, 'attachment', 'partials.lists.no-items') </ul> @include('partials.forms.attachment', ['type' => $type, 'id' => $id]) </div> ```
Rinaldo Barlassina (2 May 1898, Novara, Italy – 23 December 1946, Bergamo, Italy) was an Italian international football referee. He was one of the first three Italian referees to officiate at the World Cup (along with Francesco Mattea and Albino Carraro).

Career

He officiated 36 international matches and was a FIFA referee from 1931 to 1942. He refereed four World Cup matches, three in 1934 and one in 1938, as well as two 1938 World Cup qualifiers and one match at the 1936 Olympic Games; the rest were friendly matches. In a 1935 friendly between Hungary and Austria (6:3), he sent off home player Pál Titkos in the 88th minute. He also refereed four matches in the Central European International Cup, in the 1931 and 1934–1935 editions, and the 1937 Eduard Beneš Cup match between Romania and Czechoslovakia (1:1). In club football he officiated two Mitropa Cup matches, in the 1931 and 1936 editions, both of which were finals. Below are important matches he officiated:

References and notes

External links
Profile - eu-football.info
Profile - www.worldfootball.net
Profile - worldreferee.com

1898 births
1946 deaths
Italian football referees
1934 FIFA World Cup referees
1938 FIFA World Cup referees
Olympic football referees
Hugh Donell Green (born July 27, 1959) is an American former professional football player who was a linebacker for 11 seasons in the National Football League (NFL) during the 1980s and 1990s. He played college football for the Pittsburgh Panthers, and was recognized as a three-time consensus All-American. Green was selected in the first round of the 1981 NFL Draft, and played professionally for the Tampa Bay Buccaneers and the Miami Dolphins.

Early years

Green was born in Natchez, Mississippi. He attended North Natchez High School.

College career

Green attended the University of Pittsburgh, where he played defensive end for the Panthers from 1977 to 1980. Green was part of an elite team that included four future Pro Football Hall of Fame players: defensive end Rickey Jackson, center Russ Grimm, lineman Jimbo Covert and quarterback Dan Marino. Several other future NFL stars were his teammates, including Mark May. He was a three-time consensus first-team All-American (1978, 1979, 1980) and a second-team All-America selection as a freshman in 1977. He was also a four-time consensus All-East selection. In the four years Green played, the Pittsburgh Panthers compiled a 39–8–1 record, winning three bowl games en route (two Gator Bowls and one Fiesta Bowl). His No. 99 jersey was retired at halftime of his final home game in the 1980 season. After the season, he played in both the Hula Bowl and Japan Bowl all-star games. Green left the university with 460 tackles and 53 sacks in his college career. According to USC and Tampa Bay Buccaneers coach John McKay, "Hugh Green is the most productive player at his position I have ever seen in college". The table below shows Green's year-by-year defensive statistics.
In 1980, Green won the Walter Camp Award, the Maxwell Award and the Lombardi Award, was the Sporting News Player of the Year and the UPI Player of the Year, and finished second in the Heisman Trophy balloting, losing to running back George Rogers of the University of South Carolina. Green's second-place finish in the voting was the best a defensive specialist had ever attained until 1997, when Charles Woodson won the award. Green was selected to the College Football Hall of Fame in 1996 and was named the fifth-greatest college football player of all time by collegefootballnews.com. He was named to the all-time All-American team compiled by The Sporting News in 1983. In 2007, Green was ranked No. 14 on ESPN's Top 25 Players In College Football History list. He was also named to Sports Illustrated's College Football All-Century team in 1999.

Professional career

Green was selected by the Buccaneers with the seventh overall pick in the first round of the 1981 NFL Draft. He was an All-Pro in 1982 and 1983 and was elected to the Pro Bowl in both of those years. Later in his career he suffered several injuries, including a fracture near his eye sustained in a car accident in the middle of the 1984 season. He was traded to the Miami Dolphins in the middle of the 1985 season in exchange for their first- and second-round draft choices in the 1986 draft. In 1985 he recorded a career-high 7.5 sacks while playing all 16 games despite the mid-season trade. His 1986 season ended in the Week 3 classic 51–45 game against the New York Jets: in the first quarter, Green suffered a knee injury and was carted off the field. In Miami, Green played six more solid seasons before retiring. He was a member of Don Shula's teams, which were often playoff contenders, and Green was a starter on those teams, tying his career high with 7.5 sacks in 1989, for example.
References External links 1959 births Living people All-American college football players American football linebackers College Football Hall of Fame inductees Sportspeople from Natchez, Mississippi Pittsburgh Panthers football players Players of American football from Mississippi Miami Dolphins players National Conference Pro Bowl players Tampa Bay Buccaneers players
Camp Campbell Gard is a YMCA camp located along the Great Miami River six miles (10 km) northeast of Hamilton, Ohio. The camp is on Augspurger Road in St. Clair Township. The camp was dedicated to the memory of a World War I airman, Charles Campbell Gard, by his father Homer Gard in 1926, six years after Charles Campbell Gard's death. The dedication on Friday, July 1, 1926, featured remarks by former Ohio governor James M. Cox and by Charles P. Taft II, son of William Howard Taft. When it opened in 1926, the new camp had 20 buildings, including a dining hall with electric stoves and refrigeration; five cabins (quickly expanded to ten) that housed 12 people each; a recreational building "for rainy days"; an informal playground for games; and a guest house "equipped with hot and cold shower baths." As specified by Homer Gard, the camp also featured facilities for "crippled children" to make the facility accessible to the handicapped. The camp has since expanded and features heating and air conditioning.

Noted attendees

Robert McCloskey, who carved the camp totem pole, was a famous author of children's books including Make Way for Ducklings and Lentil. David P. Smith and his brother Tony Smith are Camp Campbell Gard alumni; David P. Smith's name is still proudly displayed on Errol's Archer's #11, and the plaque remains on display today. Star women's basketball player Willa McKee has also attended several times.

References

Campbell Gard
Campbell Gard
Buildings and structures in Butler County, Ohio
```c++ /******************************************************************************* * * * path_to_url * * Unless required by applicable law or agreed to in writing, software * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *******************************************************************************/ #ifndef CPU_X64_JIT_UNI_POSTOPS_INJECTOR_HPP #define CPU_X64_JIT_UNI_POSTOPS_INJECTOR_HPP #include <functional> #include <map> #include <memory> #include "common/c_types_map.hpp" #include "common/primitive_attr.hpp" #include "common/type_helpers.hpp" #include "common/utils.hpp" #include "cpu/x64/injectors/injector_utils.hpp" #include "cpu/x64/injectors/jit_uni_binary_injector.hpp" #include "cpu/x64/injectors/jit_uni_eltwise_injector.hpp" #include "cpu/x64/jit_generator.hpp" #include <initializer_list> namespace dnnl { namespace impl { namespace cpu { namespace x64 { namespace injector { /* * Allows specifying a custom injector function for a given post-op type - one * function per primitive. There are post-op types (example: sum) that don't * have a specialized injector. They heavily rely on kernel-specific internals, * which makes generalization unreasonable. Therefore the user can prepare an * internal kernel lambda and pass it explicitly to the injector. */ using lambda_jit_injectors_t = std::map<dnnl_primitive_kind_t, std::function<void()>>; size_t aux_vec_count(const post_ops_t &post_ops, cpu_isa_t isa, bool is_fwd); // A base isa-agnostic post-ops injector abstract class. // // The main mechanism of handling various post-ops types. It utilizes internally // specialized injectors to generate post-ops code to host primitive. Random // order of post-ops is supported. // // Note: to move back from `create` to constructor and merge base into a parent // class, both binary and eltwise injector top-level objects should become // isa-agnostic, which allows to call their constructors or methods passing isa // at runtime.
template <typename Vmm> class jit_uni_postops_injector_base_t { public: // `isa` argument specifies the ISA the kernel to be generated for. In most // cases it's aligned with the former kernel ISA if such enum value is // instantiated for injectors. If not, uses the next available isa enum // value in compliance with same vector length. static jit_uni_postops_injector_base_t *create(jit_generator *host, cpu_isa_t isa, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params, const eltwise_injector::static_params_t &eltwise_static_params); static jit_uni_postops_injector_base_t *create(jit_generator *host, cpu_isa_t isa, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params); virtual ~jit_uni_postops_injector_base_t() = default; // Generates code of post_ops chain injected to host primitive. Applied to // ordered set of vector registers' indexes. // @rhs_arg_params: see jit_uni_binary_injector description virtual void compute_vector_range( const injector_utils::vmm_index_set_t &vmm_idxs, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) = 0; virtual void compute_vector_range( const injector_utils::vmm_index_set_t &vmm_idxs) = 0; // Generates code of post_ops chain injected to host primitive. Applied to // range <start_idx, end_idx) of vector registers' indexes. // @rhs_arg_params: see jit_uni_binary_injector description virtual void compute_vector_range(size_t start_idx, size_t end_idx, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) = 0; virtual void compute_vector_range(size_t start_idx, size_t end_idx) = 0; // Generates code of post_ops chain injected to host primitive. Applied to // a single vector register index. 
// @rhs_arg_params: see jit_uni_binary_injector description virtual void compute_vector(size_t idx, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) = 0; virtual void compute_vector(size_t idx) = 0; // Thin wrapper for eltwise injector specific function virtual void prepare_table(bool gen_table) = 0; virtual void set_lambda_injector(lambda_jit_injectors_t::key_type, const lambda_jit_injectors_t::mapped_type &jit_injector) = 0; }; // A parent isa-specific post-ops injector class. A specific instance is // assigned based on `cpu_isa_t isa` argument in the base class. template <cpu_isa_t isa, typename Vmm = typename cpu_isa_traits<isa>::Vmm> class jit_uni_postops_injector_t : public jit_uni_postops_injector_base_t<Vmm> { public: /* * @param host <required> - user primitive where post-ops generated code is * injected * @param post_ops <required> - struct representing requested post-ops chain * @binary_static_params <required> - static params needed for binary_injector. * see: jit_uni_binary_injector.hpp for more info.
* @param eltwise_static_params <optional> - allows user specify non default * params for eltwise_injector * @param lambda_jit_injectors <optional> - allows user specify custom injector * function for given post-op type */ jit_uni_postops_injector_t(jit_generator *host, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params); jit_uni_postops_injector_t(jit_generator *host, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params, const lambda_jit_injectors_t &lambda_jit_injectors); jit_uni_postops_injector_t(jit_generator *host, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params, const eltwise_injector::static_params_t &eltwise_static_params); jit_uni_postops_injector_t(jit_generator *host, const post_ops_t &post_ops, const binary_injector::static_params_t &binary_static_params, const eltwise_injector::static_params_t &eltwise_static_params, const lambda_jit_injectors_t &lambda_jit_injectors); virtual ~jit_uni_postops_injector_t() = default; // See `jit_uni_postops_injector_base_t::compute_vector_range(...)` void compute_vector_range(const injector_utils::vmm_index_set_t &vmm_idxs, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) override; void compute_vector_range( const injector_utils::vmm_index_set_t &vmm_idxs) override; // See `jit_uni_postops_injector_base_t::compute_vector_range(...)` void compute_vector_range(size_t start_idx, size_t end_idx, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) override; void compute_vector_range(size_t start_idx, size_t end_idx) override; // See `jit_uni_postops_injector_base_t::compute_vector(...)` void compute_vector(size_t idx, const binary_injector::rhs_arg_dynamic_params_t &rhs_arg_params) override; void compute_vector(size_t idx) override; /* * Thin wrapper for eltwise injector specific function */ void prepare_table(bool gen_table) override; void 
set_lambda_injector(lambda_jit_injectors_t::key_type, const lambda_jit_injectors_t::mapped_type &jit_injector) override; private: post_ops_t post_ops_; jit_generator *host_; // Key is a numerical order of a post-op in attributes. std::map<int, jit_uni_eltwise_injector<isa, Vmm>> alg_to_eltwise_injector_; std::unique_ptr<binary_injector::jit_uni_binary_injector_t<isa, Vmm>> binary_injector_; lambda_jit_injectors_t lambda_jit_injectors_; }; enum post_op_type { sum = 0, eltwise, binary, prelu }; struct post_ops_ok_args_t { post_ops_ok_args_t(const cpu_isa_t isa, const std::vector<post_op_type> &accepted_post_op_types, const post_ops_t &post_ops, const memory_desc_wrapper *dst_d, const bool sum_at_pos_0_only, const bool sum_requires_scale_one, const bool sum_requires_zp_zero = true, const bool sum_requires_same_params = true, const bcast_set_t &enabled_bcast_strategy = default_strategies()); const cpu_isa_t isa; const std::vector<post_op_type> &accepted_post_op_types; const post_ops_t &post_ops; const memory_desc_wrapper *dst_d; const bool sum_at_pos_0_only; const bool sum_requires_scale_one; const bool sum_requires_zp_zero; const bool sum_requires_same_params; const bcast_set_t enabled_bcast_strategy; }; bool post_ops_ok(const post_ops_ok_args_t &args); } // namespace injector } // namespace x64 } // namespace cpu } // namespace impl } // namespace dnnl #endif ```
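The header's comments describe the `lambda_jit_injectors_t` mechanism: post-op kinds that lack a specialized injector are handled by a caller-supplied lambda, looked up by primitive kind. Below is a much-simplified, hypothetical sketch of that dispatch pattern in plain C++ (the real injectors emit JIT code into the host primitive; this version just applies a post-op chain to a scalar accumulator, and `post_op_kind` stands in for `dnnl_primitive_kind_t`):

```cpp
#include <functional>
#include <map>
#include <vector>

// Hypothetical simplification for illustration only: "injection" here
// mutates a scalar instead of emitting JIT code.
enum post_op_kind { po_sum, po_eltwise, po_custom };

using lambda_injectors_t
        = std::map<post_op_kind, std::function<void(float &)>>;

float apply_post_ops(float acc, const std::vector<post_op_kind> &chain,
        const lambda_injectors_t &lambdas) {
    for (post_op_kind kind : chain) {
        switch (kind) {
            case po_sum: acc += 1.0f; break; // specialized handling
            case po_eltwise: acc = acc > 0.f ? acc : 0.f; break; // relu-like
            default: { // no specialized injector: defer to the user's lambda
                auto it = lambdas.find(kind);
                if (it != lambdas.end()) it->second(acc);
            }
        }
    }
    return acc;
}
```

A caller needing a post-op the injector does not specialize registers a lambda under that kind, which mirrors how `set_lambda_injector` is intended to be used in the interface above.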
was a Japanese cinematographer. Career Born in Tokyo, Ohara entered the Kamata section of the Shochiku film studios in 1924 and was promoted to cinematographer in 1927. Ohara helped establish the modern touch of the studio's cinematography at Kamata together with Bunjirō Mizutani and Mitsuo Miura, and is known for the soft tone of his images. He regularly worked for director Heinosuke Gosho on films like The Dancing Girl of Izu, Burden of Life and An Inn at Osaka. Ohara later worked at Tokyo Hassei Eiga, Toho, Shintoho, and Daiei Film, and shot films for directors such as Akira Kurosawa, Yasujirō Ozu, Kenji Mizoguchi, Kōzaburō Yoshimura, Masahiro Makino and Shōhei Imamura. In 1954, he received the Mainichi Film Award for Best Cinematography for his work on The Valley Between Love and Death and The Cock Crows Twice. Selected filmography The Dancing Girl of Izu (1933) Somniloquy of the Bridegroom (Hanamuko no negoto) (1935) Burden of Life (Jinsei no onimotsu) (1935) Woman of the Mist (Oboroyo no onna) (1936) The New Road (Part one) (Shindō zenhen) (1936) The New Road (Part two) (Shindō kōhen) (1936) Ahen senso (1943) The Most Beautiful (1944) The Munekata Sisters (1950) Portrait of Madame Yuki (1950) An Inn at Osaka (Osaka no yado) (1954) Non-chan Kumo ni Noru (1955) Takekurabe (1955) Kisses (1957) Endless Desire (1958) References External links Japanese cinematographers 1902 births Mass media people from Tokyo 1990 deaths
Puspalal Sharma (born 8 November 1983) is a Bhutanese former international footballer. He made his first appearance for the Bhutan national football team in 2009. References 1984 births Bhutanese men's footballers Bhutan men's international footballers Transport United FC players Living people Men's association football goalkeepers Bhutanese people of Nepalese descent
```typescript export function isString(value: unknown): value is string { return typeof value === 'string'; } export function isObject(value: unknown): value is Record<any, any> { return typeof value === 'object' && !Array.isArray(value) && value !== null; } export function isFunction(value: unknown): value is (...args: any) => any { return !!( value && (value as any).constructor && (value as any).call && typeof value === 'function' && value.apply ); } ```
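These guards are ordinary runtime checks, but their `value is T` return types let the compiler narrow `unknown` at call sites. A small usage sketch (the guards are re-declared here so the example stands alone; `describe` is a hypothetical helper):

```typescript
function isString(value: unknown): value is string {
  return typeof value === 'string';
}

function isObject(value: unknown): value is Record<any, any> {
  return typeof value === 'object' && !Array.isArray(value) && value !== null;
}

// After each guard passes, `value` is narrowed, so .length and
// Object.keys() type-check without casts.
function describe(value: unknown): string {
  if (isString(value)) return `string of length ${value.length}`;
  if (isObject(value)) return `object with ${Object.keys(value).length} keys`;
  return 'something else';
}
```

Note that `isObject` rejects arrays and `null` explicitly, since both report `typeof === 'object'` in JavaScript.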
Racovița is a commune located in Vâlcea County, Muntenia, Romania. It is composed of seven villages: Balota, Blănoiu, Bradu-Clocotici, Copăceni, Gruiu Lupului, Racovița and Tuțulești. References Communes in Vâlcea County Localities in Muntenia
The Parker County Peach Festival is an annual event held the second Saturday of each July in Weatherford, Texas, beginning in 1985. In addition to celebrating the peach crop from local growers, the festival also showcases local arts and crafts vendors. It draws thousands of attendees. During the festival, vendors set up booths in the open area surrounding the Parker County Courthouse. There is a companion bike rally event called the Peach Pedal, which serves peaches at its rest stops. There are three separate stages for entertainment, including a children's stage. In previous years, there have been as many as 200 food, craft, art, and activity booths. The festival features a "42" domino tournament.

Food

In addition to fresh, ripe peaches, attendees can enjoy peach-based foods such as:
Peach cobbler
Peach ice cream
Peach juleps
Peach smoothies
Peach pie
and typical fair foods like funnel cake, turkey legs, roasted corn, etc. The peach-themed offerings run out within the first few hours of the event.

Notes

References

External links
Parker County Peach Festival
Weatherford Chamber of Commerce
Peach Pedal Bike Ride

Food and drink festivals in the United States
Tourist attractions in Parker County, Texas
Festivals in Texas
Fruit festivals
```xml <?xml version="1.0" encoding="UTF-8"?> <ui version="4.0"> <class>OverquotaFullDialog</class> <widget class="QDialog" name="OverquotaFullDialog"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>540</width> <height>357</height> </rect> </property> <property name="minimumSize"> <size> <width>540</width> <height>357</height> </size> </property> <property name="maximumSize"> <size> <width>540</width> <height>357</height> </size> </property> <property name="windowTitle"> <string>Storage full</string> </property> <property name="styleSheet"> <string notr="true">background-color: #FAFAFA;</string> </property> <layout class="QVBoxLayout" name="verticalLayout"> <property name="spacing"> <number>0</number> </property> <property name="leftMargin"> <number>38</number> </property> <property name="topMargin"> <number>22</number> </property> <property name="rightMargin"> <number>38</number> </property> <property name="bottomMargin"> <number>17</number> </property> <item> <widget class="QWidget" name="widgetHeader" native="true"> <property name="minimumSize"> <size> <width>465</width> <height>0</height> </size> </property> <property name="maximumSize"> <size> <width>465</width> <height>16777215</height> </size> </property> <layout class="QHBoxLayout" name="horizontalLayout"> <property name="spacing"> <number>15</number> </property> <property name="leftMargin"> <number>3</number> </property> <property name="topMargin"> <number>0</number> </property> <property name="rightMargin"> <number>3</number> </property> <item> <spacer name="horizontalSpacer"> <property name="orientation"> <enum>Qt::Horizontal</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>40</width> <height>20</height> </size> </property> </spacer> </item> <item> <widget class="QPushButton" name="buttonWarning"> <property name="minimumSize"> <size> <width>36</width> <height>36</height> </size> </property> <property name="maximumSize"> <size> <width>36</width> <height>36</height> 
</size> </property> <property name="styleSheet"> <string notr="true">border: none;</string> </property> <property name="text"> <string/> </property> <property name="icon"> <iconset resource="../Resources_macx.qrc"> <normaloff>:/images/icon_warning_36.png</normaloff>:/images/icon_warning_36.png</iconset> </property> <property name="iconSize"> <size> <width>36</width> <height>36</height> </size> </property> </widget> </item> <item> <widget class="CustomLabel" name="labelTitle"> <property name="sizePolicy"> <sizepolicy hsizetype="Preferred" vsizetype="Preferred"> <horstretch>0</horstretch> <verstretch>0</verstretch> </sizepolicy> </property> <property name="minimumSize"> <size> <width>0</width> <height>0</height> </size> </property> <property name="maximumSize"> <size> <width>16777215</width> <height>16777215</height> </size> </property> <property name="font"> <font> <family>SF UI Text</family> <pointsize>-1</pointsize> </font> </property> <property name="styleSheet"> <string notr="true">#labelTitle { color: #333333; font-family: &quot;SF UI Text&quot;; font-size: 20px; }</string> </property> <property name="text"> <string/> </property> <property name="wordWrap"> <bool>false</bool> </property> </widget> </item> <item> <spacer name="horizontalSpacer_2"> <property name="orientation"> <enum>Qt::Horizontal</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>40</width> <height>20</height> </size> </property> </spacer> </item> </layout> </widget> </item> <item> <spacer name="verticalSpacer_3"> <property name="orientation"> <enum>Qt::Vertical</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>20</width> <height>18</height> </size> </property> </spacer> </item> <item alignment="Qt::AlignHCenter"> <widget class="QStackedWidget" name="stackedWidgetBigIcons"> <property name="minimumSize"> <size> <width>0</width> <height>0</height> </size> </property> <property name="maximumSize"> <size> <width>16777215</width> <height>111</height> </size> 
</property> <property name="currentIndex"> <number>0</number> </property> <widget class="QWidget" name="pageStorageFull"> <property name="minimumSize"> <size> <width>100</width> <height>100</height> </size> </property> <property name="maximumSize"> <size> <width>100</width> <height>100</height> </size> </property> <layout class="QVBoxLayout" name="verticalLayout_2"> <property name="spacing"> <number>0</number> </property> <property name="leftMargin"> <number>0</number> </property> <property name="topMargin"> <number>0</number> </property> <property name="rightMargin"> <number>0</number> </property> <property name="bottomMargin"> <number>0</number> </property> <item> <widget class="QPushButton" name="buttonBigIcon"> <property name="minimumSize"> <size> <width>100</width> <height>100</height> </size> </property> <property name="maximumSize"> <size> <width>100</width> <height>100</height> </size> </property> <property name="styleSheet"> <string notr="true">border: none;</string> </property> <property name="text"> <string/> </property> <property name="icon"> <iconset resource="../Resources_macx.qrc"> <normaloff>:/images/storage_full_100.png</normaloff>:/images/storage_full_100.png</iconset> </property> <property name="iconSize"> <size> <width>100</width> <height>100</height> </size> </property> </widget> </item> </layout> </widget> <widget class="QWidget" name="pageBandwidthFull"> <layout class="QVBoxLayout" name="verticalLayout_3"> <property name="spacing"> <number>0</number> </property> <property name="leftMargin"> <number>0</number> </property> <property name="topMargin"> <number>0</number> </property> <property name="rightMargin"> <number>0</number> </property> <property name="bottomMargin"> <number>0</number> </property> <item> <widget class="QPushButton" name="pushButton"> <property name="minimumSize"> <size> <width>94</width> <height>94</height> </size> </property> <property name="maximumSize"> <size> <width>94</width> <height>94</height> </size> </property> 
<property name="styleSheet"> <string notr="true">border: none;</string> </property> <property name="text"> <string/> </property> <property name="icon"> <iconset resource="../Resources_macx.qrc"> <normaloff>:/images/transfer_empty_100.png</normaloff>:/images/transfer_empty_100.png</iconset> </property> <property name="iconSize"> <size> <width>94</width> <height>94</height> </size> </property> </widget> </item> </layout> </widget> </widget> </item> <item> <spacer name="verticalSpacer_4"> <property name="orientation"> <enum>Qt::Vertical</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>20</width> <height>24</height> </size> </property> </spacer> </item> <item> <widget class="QLabel" name="labelMessage"> <property name="minimumSize"> <size> <width>444</width> <height>60</height> </size> </property> <property name="maximumSize"> <size> <width>16777215</width> <height>16777215</height> </size> </property> <property name="styleSheet"> <string notr="true">#labelMessage { color: #242424; font-family: &quot;SF UI Text&quot;; font-size: 12px; qproperty-alignment: AlignCenter; }</string> </property> <property name="text"> <string/> </property> <property name="wordWrap"> <bool>true</bool> </property> <property name="margin"> <number>0</number> </property> </widget> </item> <item> <spacer name="verticalSpacer_5"> <property name="orientation"> <enum>Qt::Vertical</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>20</width> <height>25</height> </size> </property> </spacer> </item> <item> <widget class="QWidget" name="widgetButtons" native="true"> <layout class="QHBoxLayout" name="horizontalLayout_2"> <property name="spacing"> <number>22</number> </property> <property name="leftMargin"> <number>0</number> </property> <property name="rightMargin"> <number>0</number> </property> <item> <spacer name="horizontalSpacer_3"> <property name="orientation"> <enum>Qt::Horizontal</enum> </property> <property name="sizeHint" stdset="0"> <size> 
<width>40</width> </size> </property> </spacer> </item> <item> <widget class="QPushButton" name="buttonDismiss"> <property name="minimumSize"> <size> <width>98</width> <height>50</height> </size> </property> <property name="styleSheet"> <string notr="true">#buttonDismiss { background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1, stop: 0 #ffffff, stop: 1 #fcfcfc); border-radius: 3px; border: 1px solid rgba(0, 0, 0, 5%); padding: 7px 14px 7px 14px; font-family: &quot;Lato&quot;; font-size: 14px; color: #333333; }</string> </property> <property name="text"> <string>Dismiss</string> </property> </widget> </item> <item> <widget class="QPushButton" name="buttonUpgrade"> <property name="minimumSize"> <size> <width>150</width> <height>50</height> </size> </property> <property name="styleSheet"> <string notr="true">#buttonUpgrade { background-color: #00BFA5; border-radius: 3px; border: 1px solid rgba(0, 0, 0, 5%); border-color: #00AC94; padding: 7px 14px 7px 14px; font-family: &quot;Lato&quot;; font-size: 14px; color: #ffffff; }</string> </property> <property name="text"> <string>Upgrade</string> </property> </widget> </item> <item> <spacer name="horizontalSpacer_4"> <property name="orientation"> <enum>Qt::Horizontal</enum> </property> <property name="sizeHint" stdset="0"> <size> <width>40</width> <height>20</height> </size> </property> </spacer> </item> </layout> </widget> </item> </layout> </widget> <customwidgets> <customwidget> <class>CustomLabel</class> <extends>QLabel</extends> <header>OverQuotaDialog.h</header> </customwidget> </customwidgets> <resources> <include location="../Resources_macx.qrc"/> </resources> <connections/> </ui> ```
New Jersey's 29th Legislative District is one of 40 districts that make up the map for the New Jersey Legislature. It covers a portion of Essex County, specifically most of the city of Newark, and the Hudson County municipalities of East Newark and Harrison.

Demographic information

As of the 2020 United States census, the district had a population of 249,255, of whom 192,742 (77.3%) were of voting age. The racial makeup of the district was 46,930 (18.8%) White, 82,416 (33.1%) African American, 2,008 (0.8%) Native American, 7,733 (3.1%) Asian, 172 (0.1%) Pacific Islander, 72,824 (29.2%) from some other race, and 37,172 (14.9%) from two or more races. Hispanic or Latino of any race were 113,095 (45.4%) of the population. The district had 130,950 registered voters, of whom 52,189 (39.9%) were registered as unaffiliated, 67,880 (51.8%) were registered as Democrats, 9,137 (7.0%) were registered as Republicans, and 1,744 (1.3%) were registered to other parties.

Political representation

For the 2022–2023 session, the district is represented in the State Senate by Teresa Ruiz (D, Newark) and in the General Assembly by Eliana Pintor Marin (D, Newark) and Shanique Speight (D, Newark). The legislative district overlaps with New Jersey's 8th and 10th congressional districts.

Apportionment history

Since the creation of the 40-district legislative map in 1973, the 29th District has always been based in and around Newark. In the 1973 map, the 29th district consisted of most of the South and East Wards (excluding Ironbound) and a portion of the Central Ward. For the 1981 redistricting, the 29th became all of the South and East Wards and a larger part of the Central Ward. In the 1991 redistricting, the 29th continued encompassing the South and East Wards and part of the Central Ward; the district now crept into a part of the North Ward and entered a new municipality, Hillside in Union County.
In the 2001 redistricting, Hillside remained in the district but now most of the area of Newark was contained in the 29th District. After the 2011 redistricting, Hillside was removed and Belleville was moved into the district; again, most of the area of the city remained in the 29th. Because of its heavily urban nature, the district tends to favor Democrats strongly. The 29th District is one of the few districts in the state to have ever elected only one party to all Senate and Assembly seats in every election since 1973.

Election history

Election results

Senate

General Assembly

References

Essex County, New Jersey
29