All help is highly appreciated.
My Code:
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WebTableToTxtFile {
    static WebDriver driver = new FirefoxDriver();

    public static void main(String[] args) throws Throwable {
        driver.navigate().to("http://www.bloomberg.com/markets/stocks/futures");
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
        WebElement table = driver.findElement(By.cssSelector("div[class='data-tables first']"));
        List<WebElement> irow = table.findElements(By.cssSelector("div[class='data-tables first'] tr"));
        System.out.println("No. of rows in the table are: " + irow.size());
        File txtFile = new File("MyFileLocation/Output.txt");
        for (int r = 0; r < irow.size(); r++) {
            WebElement webRow = irow.get(r);
            System.out.print(webRow.getText());
            List<WebElement> allCells = webRow.findElements(By.xpath("th | td"));
            for (int c = 0; c < allCells.size(); c++) {
                WebElement webCell = allCells.get(c);
                String text = webCell.getText();
                System.out.print(text);
                FileWriter fw = new FileWriter(txtFile.getAbsolutePath());
                BufferedWriter bw = new BufferedWriter(fw);
                bw.write(text);
                bw.close();
            }
            System.out.println("");
        }
        end();
    }

    public static void end() {
        driver.close();
        driver.quit();
    }
}
A:
The problem is that each time you call this code:
FileWriter fw = new FileWriter(txtFile.getAbsolutePath());
BufferedWriter bw = new BufferedWriter(fw);
bw.write(text);
bw.close();
it rewrites the whole file instead of appending a line.
And since you call it for every cell found, only the last value is saved in the file, and that value happens to be empty.
I suggest you first build the String you want to store in the file, and then write it to the file once. Like this:
public class WebTableToTxtFile {
static WebDriver d
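The answer's code is cut off above. A minimal, Selenium-free sketch of the accumulate-then-write pattern it describes (class name, file name, and sample rows are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class AccumulateThenWrite {

    // Build the whole output in memory first...
    static String buildOutput(List<List<String>> rows) {
        StringBuilder sb = new StringBuilder();
        for (List<String> row : rows) {
            sb.append(String.join("\t", row)).append(System.lineSeparator());
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the cell texts scraped from the table.
        List<List<String>> rows = List.of(
                List.of("Contract", "Price"),
                List.of("S&P 500", "2089.75"));

        // ...then open the file once and write everything in one go.
        Path out = Path.of("Output.txt"); // hypothetical location
        Files.writeString(out, buildOutput(rows));
    }
}
```

Opening the writer once (or using a FileWriter in append mode) avoids the overwrite-per-cell problem described above.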
:
option.backgroundBrush = QtGui.QBrush(QtGui.QColor("gray"))
def paint(self, painter, option, index):
super(InventoryDelegate, self).paint(painter, option, index)
if not index.parent().isValid():
painter.save()
painter.setPen(QtGui.QPen(QtGui.QColor("green")))
r = QtCore.QRect(option.rect)
r.adjust(0, 1, 0, -1)
painter.drawLine(r.topLeft(), r.topRight())
painter.drawLine(r.bottomLeft(), r.bottomRight())
painter.restore()
def sizeHint(self, option, index):
s = super(InventoryDelegate, self).sizeHint(option, index)
s.setHeight(55)
return s
Q:
Why is my segue being performed?
In my iOS app, I have a log in Button to go from one viewController to another, which uses the following function:
@IBAction func logInButton(sender: AnyObject) {
if loggedIn == true {
// user is signed in
print("A user is logged in.")
uid = user.uid
self.currentUser(uid)
self.performSegueWithIdentifier("logIn", sender: sender)
} else {
print("No current user.")
let anim = CAKeyframeAnimation( keyPath:"transform" )
anim.values = [
NSValue( CATransform3D:CATransform3DMakeTranslation(-10, 0, 0 ) ),
NSValue( CATransform3D:CATransform3DMakeTranslation( 10, 0, 0 ) )
]
anim.autoreverses = true
anim.repeatCount = 2
anim.duration = 7/100
self.passwordTextField.layer.addAnimation( anim, forKey:nil )
self.welcomeTextLabel.hidden = false
self.welcomeTextLabel.text = "Please sign in first"
}
}
There's a function which runs within viewDidLoad() that updates the loggedIn value. I've tested this function when loggedIn == false (based on console output), and the app crashes based on code that executes on the subsequent viewController. I know why it crashes (no user data) but I don't know why the segue is being performed at all.
If you need more code to
[Table: metric comparison of SDD-E Dynamic, SDD-E Dynamic+, and MGoF across four settings; the table was flattened during extraction and its header rows are lost.]
[2]{} {width="\linewidth"} \[fig:divergence-metric-f1\] {width=".8\linewidth"} \[fig:divergence-metric-roc\]
The last experiment was carried out on the random-shaped distribution data set, with $\alpha=0.1$ and the rest of the parameters unchanged. Under the different divergence metrics mentioned in Section \[sec:preliminaries\], F1 scores were calculated for all SDD algorithms, and ROC curves were recorded for SDD-R and static SDD-E. Since MGoF is defined specifically on the Kullback-Leibler divergence, it cannot be tested in the same way. Results are shown in Fig. \[fig:divergence-metric-f1\] and Fig. \[fig:divergence-metric-roc\].
The results indicate that the Jensen-Shannon divergence suits all techniques thanks to its symmetry. The Kullback-Leibler divergence provides more pronounced differences when references are given. The Bhattacharyya distance and the Hellinger distance turned out almost as good as the Jensen-Shannon divergence while consuming less time. The Kolmogorov-Smirnov statistic performed relativ
nnik_event]. Events were modeled by the geographic location and time of their occurrence. For temporal queries expressed in simple natural language they outline an extended Backus-Naur form (EBNF) language that incorporates time intervals with standard boolean operations. Geographical queries are also modeled as EBNF language, however the input for them is a minimum bounding rectangle (MBR). Using this multidimensional querying model the user is able to visualize search results in form of events; which are additionally represented on a map.
Giving special attention to geographical information retrieval, Samet et al. [@samet_news] present a system, *NewsStand*, that is able to resolve and pinpoint a news article based on the geographic information present in its content. They discuss various methods for toponym resolution, which is in essence disambiguating a geographic location based on its surface form in the news content. The system involves a streaming clustering algorithm that can keep track of emerging news in new locations and present them in a map-based interface.
  -------------- ------------------------------------------------------------ -------------------------------------------------------- --------------------------------------------------------
  **Event**      $c_1$                                                        $c_2$                                                    $c_3$
  **Words**      micheal, phelps, bejing, china, tibet                        london, usain, bolt, england, badminton                  rio, brazil, copacabana, deodoro, maracanã
  **Time**       $[08-08-2008, 24-08-2008]$                                   $[27-07-2012, 12-08-2012]$                               $[05-08-2016, 21-08-2016]$
  **Location**   $\langle Beijing, China \rangle$                             $\langle London, England \rangle$                        $\langle Rio de Janeiro, Brazil \rangle$
  **Entities**   $\langle China \rangle$, $\langle Micheal\_Phelps \rangle$   $\langle England \rangle$, $\langle Badminton \rangle$   $\langle Brazil \rangle$, $\langle Copacabana \rangle$
  -------------- ------------------------------------------------------------ -------------------------------------------------------- --------------------------------------------------------
**Event Analytics**. By disambiguating and linking named entities to ontologies, Hoffart et al. [@aesthetics; @stics] provide a framework for semantic search and performing analytics on them. They provide features for giving auto-complete suggestions in the form of similar entities for the input named-entity. In [@aesthetics] th
002){ref-type="fig"}).
{#gcbb12419-fig-0002}
######
Statistical analyses of non‐structural carbohydrates (NSC): the effect of harvest date and genotype on NSC. Tests are a two‐way [anova]{.smallcaps} with date and genotype as factors; *P* ≤ 0.05
*F* pr
------------------ -------- -------- --------
Mixed population
Genotype \<0.01 0.012 \<0.01
Date \<0.01 0.062 \<0.01
Geno × Date 0.001 \<0.01 0.02
Mapping family
Genotype \<0.01 \<0.01 \<0.01
Date \<0.01 \<0.01 \<0.01
Geno × Date \<0.01 \<0.01 \<0.01
John Wiley & Sons, Ltd
Biomass yield {#gcbb12419-sec-0021}
-------------
The highest yielding plants in spring 2014 (following the 2013 growing season when plants were sampled for NSC in July and October) were the four hybrid genotypes of the mixed population at 3--5 kg DW plant^−1^ (Fig. [3](#gcbb12419-fig-0003){ref-type="fig"}). The highest yielding hybrids of the mapping family were similar in final yield to Sin 1--5 of the mixed population. The lowest yielding plant was Hyb 21 at 0.07 kg (70 g) DW plant^−1^. The *M. sacchariflorus* genotypes were also generally low yielding, especially Sac 2--4 (Fig. [3](#gcbb12419-fig-0003){ref-type="fig"}).
{#gcbb12419-fig-0003}
The samples used for the analysis of carbohydrates were taken from single stems harvested in July and October 2013. To project the yields of total carbohydrate in July and October, sequential harvests were taken from a separate field site over a two‐year period (Table
by a second order Markov chain model. This strongly suggests that humans follow common topical strategies while navigating in a goal-oriented scenario.
#### MSNBC dataset
In this section we present the results obtained from the MSNBC dataset introduced in the section called “”. Again we look at navigational paths over topical categories and henceforth, we only look at categorical information of nodes and present the results in Figure \[fig:paths\_msnbc\].
Similar to the experiments conducted for the Wikigame and Wikispeedia topic datasets we can again see, based on the likelihood ratio statistics (B), that a higher order Markov chain seems to be appropriate. The AIC (C) and BIC (D) statistics suggest an order of three and two respectively. To further investigate the behavior we illustrate the Bayesian inference results (E, F) that clearly suggest a third order Markov chain model. Finally, this is also confirmed by the cross validation prediction results (G) which again is in accordance with the AIC. **Summary:** By and large, almost all methods for order selection suggest a Markov chain of order three for the topic sequence in the MSNBC dataset. Again, we can observe that the navigational patterns are not memoryless. Even though this dataset is not a goal-oriented navigation dataset, but is based on free navigation on MSNBC, we can identify similar memory effects as above.
Structure {#subsec:structure .unnumbered}
---------
In the previous section we observed memory patterns in human navigation over topics in information networks. We are now interested in digging deeper into the structure of human navigational patterns on a topical level. Concretely, we are interested in detecting common navigational sequences and in investigating structural differences between goal-oriented and free form navigation.
First, we want to get a global picture of common transition patterns for each of the datasets. We start with the Markov chain transition matrices, but instead of normalizing the row vectors, we normalize
// TODO Auto-generated method stub
myCalendar.set(Calendar.YEAR, year);
myCalendar.set(Calendar.MONTH, monthOfYear);
myCalendar.set(Calendar.DAY_OF_MONTH, dayOfMonth);
updateLabel();
}
private void updateLabel() {
// TODO Auto-generated method stub
String myFormat = "dd/MM/yyyy";
SimpleDateFormat sdf = new SimpleDateFormat(myFormat, Locale.US);
hdate.setText(sdf.format(myCalendar.getTime()));
}
};
//texview listener
hdate.setOnClickListener(new OnClickListener() {
public void onClick(View v) {
// TODO Auto-generated method stub
new DatePickerDialog(Daty.this, date, myCalendar
.get(Calendar.YEAR), myCalendar.get(Calendar.MONTH),
myCalendar.get(Calendar.DAY_OF_MONTH)).show();
}
});
A:
You can simply use setMaxDate function for the date picker.
DatePickerDialog datePickerDialog = new DatePickerDialog(Daty.this, date, myCalendar.get(Calendar.YEAR), myCalendar.get(Calendar.MONTH), myCalendar.get(Calendar.DAY_OF_MONTH)); //date is the dateSetListener as per your code in question
datePickerDialog.getDatePicker().setMaxDate(System.currentTimeMillis());
Refer to the documentation.
Hope this helps.
Q:
Append row to the end of a dataframe with loc function
I have a dataframe that is a result of some multiple step processing. I am adding one row to this dataframe like so:
df.loc['newindex'] = 0
Where 'newindex' is unique to the dataframe. I expect the new row to show up as the last row in the dataframe. But the row shows up somewhere near the middle of the dataframe.
What could be the reason for this behavior? I have to add the row at exactly the last position, with its index name preserved.
* update *
I
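For context (the question is cut off above): on a fresh DataFrame, label-based enlargement via `.loc` does append at the end, so a row landing mid-frame likely comes from a later sort or reindex rather than from `.loc` itself. A minimal check, with toy data:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=["x", "y", "z"])

# Assigning to a label that is not present enlarges the frame,
# and the new label is appended at the end of the index.
df.loc["newindex"] = 0

print(list(df.index))  # ['x', 'y', 'z', 'newindex']
```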
ollers.WeatherForecastController.Ping() in C:\Users\Dellas\source\repos\TestApi\TestApi\Controllers\WeatherForecastController.cs:line 48
What's a workaround here? Does Azure really block pinging?
A:
Yes: on Azure App Service, the tools ping, nslookup, and tracert won't work through the console due to security constraints. However, there are two substitute tools: to test DNS functionality you can leverage nameresolver.exe, and tcpping allows you to test TCP connectivity to a host and port combination.
To highlight more details on this, the standard Azure Web Apps run in a secure environment called a sandbox. Each app runs inside its own sandbox, isolating its execution from other instances on the same machine as well as providing an additional degree of security and privacy which would otherwise not be available.
In this environment, the only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
Connection attempts to local addresses (e.g. localhost, 127.0.0.1) and the machine's own IP will fail, except if another process in the same sandbox has created a listening socket on the destination port.
Take a look at this article for more details on this topic.
From Kudu console, you can perform tcpping www.google.com:80
Q:
Matlab: Placing Zeros In A Three Dimensional Matrix
I am using a for loop to calculate the electric potential on a subset of the xy-plane (a square grid). Here is the code:
L = 2;
for i = 1:L
    for j = 1:L
        for k = 1:L
            V(i,j,k) = -10;
        end
    end
end
where L is the length of the subset of the xy-plane. The difficulty I am having, however, is that I want the z component of the electric potential to be zero; I just want the region in the xy-plane to be nonzero. The reason why I am using three dimensions is that I am going to eventually introduce an object, which is at a different electric potential re
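The loop above fills every z-slice with -10. If the intent is a potential that is nonzero only on the xy-plane, a NumPy analogue of the MATLAB loop (an illustration of the idea, not the asker's code) might look like:

```python
import numpy as np

L = 2
V = np.zeros((L, L, L))   # whole 3-D grid starts at zero potential
V[:, :, 0] = -10          # nonzero only on the z = 0 slice (the xy-plane)
```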
?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@PBD@Z
; 10 : std::string g = "Hello";
push OFFSET ??_C@_05COLMCDPH@Hello?$AA@
lea ecx, DWORD PTR _g$[esp+80]
mov DWORD PTR __$EHRec$[esp+88], 0
call DWORD PTR __imp_??0?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@PBD@Z
... but that's an artifact, because it was the first one the compiler saw. Change the code by swapping the order:
; 9 : std::string g1 = "Hello";
push OFFSET ??_C@_05COLMCDPH@Hello?$AA@
lea ecx, DWORD PTR _g1$[esp+136]
call DWORD PTR __imp_??0?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@PBD@Z
; 10 : std::string f1("Hello");
push OFFSET ??_C@_05COLMCDPH@Hello?$AA@
lea ecx, DWORD PTR _f1$[esp+136]
mov DWORD PTR __$EHRec$[esp+144], 0
call DWORD PTR __imp_??0?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@PBD@Z
... and lo and behold, the second is one fewer instruction.
We also see that this compiler (Microsoft VC++ 2005, Release settings) generated the same assembler for both versions. So it makes no difference in this compiler, and you can prove it.
Q:
Can't send data over serialPort
I would like to control my Arduino robot with Node.js and a joystick, but serialPort.write doesn't send any data to the Arduino. I have tried code without a joystick, and it works, but only with a single serialPort.write.
Is there a bug in my code?
Arduino code:
String data = Serial.readString();
Serial.println(data);
if(data=="2") {
//motor1
}
Node.js
var hid = require('node-hid');
var SerialPort = require("serialport").SerialPort
var serialPort = new SerialPort('COM3', {
baudrate: 9600
});
serialPort.on("open", function() {
console.log('open');
function sentData(data) {
console.log(data);
if (data == 0)
setTimeout(function() {
serialPort.write('1')
}, 2000);
else if (data > 999)
setTimeout(function() {
serialPort.write('2')
}, 2000);
}
var device = new hid.HID(1133,
36 months (follow-up rate of 64.5%). [Table 1](#T0001){ref-type="table"} lists the number of infants followed at birth and at 6, 12, 18, 24, and 36 months as well as their mean weight, height, and head circumference. [Figure 1](#F0001){ref-type="fig"} shows that the weight, height, and head circumference from birth to 36 months were all higher in male than in female infants. All measurements were within the normal ranges based on comparison with the Korean National Growth Curves ([Fig. 1](#F0001){ref-type="fig"}) and WHO Child Growth Standards ([@CIT0009], [@CIT0011]). The levels of maternal antioxidant vitamins and oxidative stress and the general characteristics of the mothers, fathers, and infants, are shown in [Table 2](#T0002){ref-type="table"}.
![Comparison of data with Korean National Growth Curve.\
Source: Korean Pediatric Society ([@CIT0009]).\
The black solid line represents the data of the present study.](FNR-58-20207-g001){#F0001}
######
Anthropometric parameters of infants at birth and at 6, 12, 18, 24, and 36 months[a](#TF0001){ref-type="table-fn"}
Parameter At birth 6 months 12 months 18 months 24 months 36 months
------------------------------- ----------------- ----------------- ----------------- ----------------- ----------------- -----------------
Weight (kg) 3.3±0.4 (383) 8.3±1.0 (250) 10.0±1.2 (259) 11.4±1.4 (196) 12.6±1.3 (171) 15.2±1.8 (124)
Height (cm) 49.4±2.1 (383) 68.9±3.1 (173) 76.8±3.2 (221) 82.6±3.7 (155) 87.8±3.5 (142) 98.3±4.6 (124)
Head circumference (cm) 34.4±1.4 (382) 43.6±1.8 (80) 46.0±1.7 (128) 47.6±1.4 (67) 48.6±1.6 (83) 50.0±1.5 (119)
Weight percentile 45.0±24.6 (382) 62.5±28.8 (250) 58.9±28.0 (259) 57.9±28.0 (196) 57.8±26.7 (171) 67.4±26.2 (124)
Height percentile 51.2±22.9 (383) 58.8±28.7 (173) 57.6±28.8 (221) 57.4±27.8 (155)
want to print"
' Set title.
Title = "Print"
' Set default.
Default = "1"
' Display message, title, and default value.
Dim SerialNumber As String
NumCopies = Val(InputBox(Message, Title, Default))
SerialNumber = System.PrivateProfileString("W:\settings.txt", _
"MacroSettings", "SerialNumber")
If SerialNumber = "" Then
SerialNumber = 1
End If
Set Rng1 = ActiveDocument.Bookmarks("SerialNumber").Range
Counter = 0
While Counter < NumCopies
Rng1.Delete
Rng1.Text = SerialNumber
ActiveDocument.PrintOut
SerialNumber = SerialNumber + 1
Counter = Counter + 1
Wend
'Save the next number back to the Settings.txt file ready for the next use.
System.PrivateProfileString("W:\settings.txt", "MacroSettings", _
"SerialNumber") = SerialNumber
'Recreate the bookmark ready for the next use.
With ActiveDocument.Bookmarks
.Add Name:="SerialNumber", Range:=Rng1
End With
End Sub
Question:
How can I convert this to use the document name as the "settings.txt" file
How can I convert this to save the "settings.txt" file in the same location as the document location automatically. (I'm not 100% sure but I think I have to create setting.txt manually, I'd like the creation to be automatic)
If I am going about this the wrong way, please let me know
My thought process would be:
If "document_name.txt" exists then run the macro
If "settings/document_name.txt" DOESN'T exist then create document_name.txt in the current open file's location
and use the current document name.
Note: I also opened a bounty on this question to have the macro run automatically after the user hits print as normal. Currently they have to run the macro manually to print.
Running a macro before printing a word document
A:
In that case, you could use a macro like:
Sub FilePrint()
Dim i As Long, j As Long
With ActiveDocument
j = CLng(InputBox("How many copies to print?", "Print Copies"))
For i = 1 To j
With .CustomDocumentProperties("Counter")
.Value = .Value + 1
End With
.Fields.Update
.PrintOut
[Figure: L-curves for the chest data, panels (a) and (b): $\lVert f_{\sigma}({\mathbf x}) \rVert _2$ plotted against $\lVert \mathcal{H}_{{\mathbf x},i} f_{\sigma}({\mathbf x}) - y_i \rVert _2$ (Laplacian covariance; layout residue removed in extraction).]
- The CV is tested for the Laplacian and Tikhonov covariances using point-wise evaluation of $10^{-2} \leq \sigma \leq 1$ and $10^{-2}\leq \sigma_f \leq 1$. For the Laplacian covariance, several points of length scale $1\leq \ell \leq 100$ are tested as well. The minimum prediction error was obtained for $\sigma_f = 0.8$, $\sigma = 0.8$ and $\ell = 10$. For the Tikhonov covariance, the minimum prediction error was obtained for $\sigma = 0.5$ and $\sigma_f = 0.5$. The estimates of $\sigma_f$ and $\sigma$ for the Laplacian are $0.8$ and $0.5$, respectively, and they give the same estimates for the Tikhonov covariance function. The estimates of $\sigma$ for both kernels appear to overestimate $\sigma_{\text{true}}$. The absolute error is between $0.18$ and $0.48$. The length-scale estimate from the Laplacian covariance, $\ell = 10$, appears to be close to the estimate from the Matérn covariance.
Image reconstructions for both L-curve and CV methods are shown in Figure \[fig:Parameter Choice Methods\].
![(a) A ground truth of 2D chest phantom. (b) & (c) reconstructions using the L-curve parameter choice method with Laplacian (using $\sigma = 1$) and Tikhonov (using $\sigma = 0.2$) covariance functions, respectively. (d) & (e) reconstructions using CV with Laplacian and Tikhonov covariance functions, respectively.[]{data-label="fig:Parameter Choice Methods"}](ChestImage){width="6.3cm"}
'C:\Windows\System32\ntdll.dll', Cannot find or
open the PDB file
'ModelingTool.exe': Loaded 'C:\Windows\System32\kernel32.dll', Cannot find
or open the PDB file
'ModelingTool.exe': Loaded 'C:\Windows\System32\opengl32.dll', Cannot find
or open the PDB file
'ModelingTool.exe': Loaded 'C:\Windows\System32\msvcrt.dll', Cannot find
or open the PDB file
'ModelingTool.exe': Loaded 'C:\Windows\System32\dwmapi.dll', Cannot find
or open the PDB file
'ModelingTool.exe': Loaded 'C:\Qt\4.2.2\bin\Qt3Supportd4.dll', Symbols
loaded.
'ModelingTool.exe': Loaded 'C:\Program Files\Spyware Doctor\smum32.dll',
Binary was not built with debug information.
Debugger:: An unhandled non-continuable exception was thrown during
process load
The program '[5936] ModelingTool.exe: Native' has exited with code
-1072365566 (0xc0150002).
Would anyone care to guess what's wrong here? Some sort of debug-release mismatch perhaps?
A:
The exit code provides a good hint, 0xc0150002 = STATUS_SXS_CANT_GEN_ACTCTX, "Windows was not able to process the application binding information. Please refer to your System Event Log for further information."
The event log will tell you what is wrong with the manifest or what side-by-side installed component is missing from your machine.
Q:
UIImageView subclass touch detection
I have a subclass of UIImageView that I need to know when it is touched. (Full code below.) I have found and read both this and this. They were mildly useful, but neither really worked. Not only is my touchesBegan:withEvent: method not being called, but the button behind the image is being pressed.
Here is my code:
Header file:
@interface MathKeyboardKey : UIImageView
{
}
@end
Implementation file:
@implementation MathKeyboardKey
- (id)initWithImage:(UIImage *)image
{
if (self = [super initWithImage:image])
{
//init here
}
return self;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
NSLog(@"-- I AM TOUCHED --");
}
- (void)
namefield.borderStyle = UITextBorderStyleNone;
namefield.background = [UIImage imageNamed:@"text_field_default.png"];
namefield.contentVerticalAlignment = UIControlContentVerticalAlignmentCenter;
namefield.textAlignment = UITextAlignmentCenter;
//[namefield setBackgroundColor:[UIColor whiteColor]];
[av addSubview:namefield];
[namefield release];
av.tag=12;
av.delegate=self;
[av show];
[av release];
But now in iOS 7, I heard you can't easily alter the view hierarchy of a UIAlertView.
One alternative for this case is to set
alert.alertViewStyle = UIAlertViewStylePlainTextInput
But can we add that text field wherever we want? As in my case above, before the first otherbutton. Can anybody help me?
A:
UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Enter Student Name" message:@"" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:@"Save",nil];
[alertView setAlertViewStyle:UIAlertViewStylePlainTextInput];
[alertView show];
I used to do it like this, and it works very well.
A:
The simple answer to your question is NO: you can't change anything in this text field for UIAlertViewStylePlainTextInput, and you shouldn't.
This is from Apple:
The UIAlertView class is intended to be used as-is and does not
support subclassing. The view hierarchy for this class is private and
must not be modified.
And unfortunately, what you heard ("you can't easily alter the view hierarchy of a UIAlertView") is an understatement: you cannot alter the view hierarchy of a UIAlertView in iOS 7 at all.
There are good alternatives on the web; you can check cocoacontrols.com.
Q:
Setting Connection time out programmatically in asp.net
Is there a way to set the connection timeout used in IIS in ASP.NET programmatically for one page, without setting it globally for the whole website from IIS itself?
A:
Create a second connection string in your web.config without a timeout, and in the page you want, add the timeout you want.
Q:
Should I put my name first or last in the team membe
acency matrix recovery after $\ell_1$-spectral clustering application. Spectral Adjacency matrix: Adjacency matrix recovery after spectral clustering application.[]{data-label="recovery"}](p2bis.pdf){width="\columnwidth"}
We can notice that our model performs well in this task, as both methods effectively recover the clustering structure, which indicates the robustness of our model.
### Robustness to perturbations
We then tested the robustness of the spectral clustering and $\ell_1$-spectral clustering algorithms under perturbations. Let $p$ be the level of Bernoulli noise, discretized in this section between $0$ and $0.4$. In this experiment, we simulate $100$ graphs with $k \in [5,10]$ clusters of sizes $c_{n-k+1},\dots, c_n \in [10,20]$.
We introduce the block membership function: for all node $i \in \left\{1,\dots, n \right\}$ of a graph $G(V,E)$ made of block structures of size $c_{n-k+1},\dots, c_n$, $$\begin{aligned}
\tau \colon & V \to \left\{n-k+1,\dots, n \right\}\\
&i \mapsto c.\end{aligned}$$
For each value of $p$, we test the performance of both algorithms at recovering the clusters of the graphs. Performance was evaluated by computing the average percentage of misassigned nodes, defined as $\frac{1}{100} \sum\limits_{j=1}^{100} |\left\{i\in V : \tau(i) \ne \hat{\tau}_j(i)\right\}|$, where $\tau$ is the block membership function and $\hat{\tau}_j$ is the estimated membership function for the $j$-th model. The results are plotted in Figure \[comparaison\].
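The misassignment score above can be sketched directly in code; a toy example for a single simulated graph (the labels are hypothetical, and label-permutation alignment is ignored):

```python
import numpy as np

# Toy block-membership labels: tau is the ground truth,
# tau_hat is the clustering output for one simulated graph.
tau = np.array([1, 1, 1, 2, 2, 3, 3, 3])
tau_hat = np.array([1, 1, 2, 2, 2, 3, 3, 1])

# Fraction of misassigned nodes: |{i in V : tau(i) != tau_hat(i)}| / |V|
misassigned = np.mean(tau != tau_hat)
```

In the paper this quantity is then averaged over the 100 simulated graphs.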
![Fraction of nodes correctly classified using spectral (red line) and $\ell_1$-spectral clustering (blue line) under increasing perturbation coefficient. []{data-label="comparaison"}](compl1spec3.pdf){width="\columnwidth"}
Figure \[comparaison\] captures the fraction of nodes correctly classified and the associated region of confidence when $\ell_1$-spectral clustering (blue) and spectral clustering (red) are applied under increasing perturbation coefficient.
### Results
Both simulations s
onal UIViews on top of that, depending on what you need to do?
Q:
Obtain Oracle 11g partitioning interval by direct system table query
I've got a table which is partitioned on a NUMBER variable in Oracle 11g, with the INTERVAL set to 1. On our development system I can execute
SELECT DBMS_METADATA.GET_DDL('TABLE', 'TABLE_NAME', 'SCHEMA_NAME') FROM DUAL;
to verify that the table is partitioned as expected, which it is. On our production box, however, developers aren't allowed to modify data or to run any procedures, and thus I can't use DBMS_METADATA.GET_DDL to get the DDL and, hence, to determine the INTERVAL set on the production DB. Could someone provide an idea of how to find the value used in the INTERVAL clause when the production table was built by querying system tables or views? Thanks.
A:
Get select access to dba_part_tables (for 11gr2):
select interval from dba_part_tables where table_name = 'SOME_TABLE' and owner = 'SOME_OWNER';
Q:
Android emulator access local network - Is it possible to map 127.0.0.1 to 10.0.2.2 without changing code?
In the code, I have some places that use the string 127.0.0.1. Is there a way for me to tell the Android emulator that all calls to 127.0.0.1 should behave like calls to 10.0.2.2?
A:
The best approach is to replace the String for the correct environment. You can add the block below inside the android tag of your app-level build.gradle file.
flavorDimensions "version"
productFlavors {
device {
buildConfigField "String", "BASE_WEB_URL", '"127.0.0.1"'
}
emulator {
buildConfigField "String", "BASE_WEB_URL", '"10.0.2.2"'
}
}
//comment this block if you want the emulatorRelease build variant (but you probably won't)
variantFilter { variant ->
def names = variant.flavors*.name
if (variant.buildType.name == "release" && names.contains("emulator")) {
// Gradle ignores any variants that satisfy the conditions above.
setIgnore(true)
}
}
In this way you will have 3 build variants (you can change build variant on the left
n cell
}
override func tableView(tableView: UITableView, titleForHeaderInSection section: Int) -> String? {
switch (section) {
case 0:
let count = self.organizedTasks.items["Aperti"]!.count
return "Aperti (\(count))"
case 1:
let count = self.organizedTasks.items["Chiusi"]!.count
return "Chiusi (\(count))"
case 2:
let count = self.organizedTasks.items["Scaduti"]!.count
return "Scaduti (\(count))"
case 3:
let count = self.organizedTasks.items["Sospesi"]!.count
return "Sospesi (\(count))"
default:
return ""
}
}
func filterContentForSearchText(searchText: String, scope:String="All") {
// Filter the array using the filter method
self.filteredTasks = self.allTasks.filter({( task: Task) -> Bool in
let categoryMatch = (scope == "All") || (task.priorita == scope)
let stringMatch = task.titolo.rangeOfString(searchText)
return categoryMatch && (stringMatch != nil)
})
}
func searchDisplayController(controller: UISearchDisplayController, shouldReloadTableForSearchString searchString: String!) -> Bool {
self.filterContentForSearchText(searchString)
return true
}
func searchDisplayController(controller: UISearchDisplayController, shouldReloadTableForSearchScope searchOption: Int) -> Bool {
self.filterContentForSearchText(self.searchDisplayController!.searchBar.text)
return true
}
// In a storyboard-based application, you will often want to do a little preparation before navigation
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
// Get the new view controller using [segue destinationViewController].
// Pass the selected object to the new view controller.
}
}
I don't know why these two things happen. It seems that it can't create another cell with "Cell" identifier but this is strange because when the table is loaded (without using the search bar) all i
Is there a better way to find and replace things?
A:
Assign the value back into thumb.
thumb = thumb.replace("200.jpg", "640.jpg");
A:
Try:
thumb = thumb.replace("200.jpg", "640.jpg");
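The same pitfall exists in any language with immutable strings. A quick Python sketch of why the reassignment is needed (sample value for thumb, not from the original code):

```python
# Strings are immutable: replace() returns a new string and leaves
# the original untouched, so the result must be assigned back.
thumb = "photo_200.jpg"
new_thumb = thumb.replace("200.jpg", "640.jpg")
assert thumb == "photo_200.jpg"  # original is unchanged
thumb = new_thumb                # assign the value back into thumb
assert thumb == "photo_640.jpg"
```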
Q:
KeyValuePair - no parameterless constructor?
I have an object that has a KeyValuePair type property.
I would like to read some data from a database and store results in this KeyValuePair type field.
myObject.KeyValuePairs = ctx.ExecuteQuery<KeyValuePair<String, int>>
("Select " +
"[" + q.Name + "] As [Key]" +
", Count([" + q.Name + "]) As [Value] From SomeTable" +
" Group By [" + q.Name + "]").ToList();
myObject.KeyValuePairs is a List<KeyValuePair<String, int>>
When I attempt to read the records I get the following exception:
The type 'System.Collections.Generic.KeyValuePair`2[System.String,System.Int32]' must declare a default (parameterless) constructor in order to be constructed during mapping.
I have a default constructor in the class, but this does not fix the problem. It looks as if it doesn't know how to construct a KeyValuePair object. Doesn't it have a default constructor? Confused...
Thank you
A:
It does have a public parameterless constructor: KeyValuePair<TKey, TValue> is a struct, and all structs have an implicit public parameterless constructor.
The issue is that EF can't find it via reflection, because reflection does not return the default constructor for a struct.
This is why EF reports that it can't find one (it can't find it by reflection).
Additionally, even if the default constructor could be invoked via reflection, KeyValuePair<TKey, TValue> is immutable, so Key and Value could not be set after construction anyway.
To solve your problem, define a custom class
class NameCount {
public string Name { get; set; }
public int Count { get; set; }
}
and change your query to return the Key as Name and Value as Count. I assume that you can come up with a better name for your custom class.
ior probability for the true model, $p_{u^*} = \hat{\mathbb{P}}(U=u^*|y_{1:T},s_{1:T})$, is evaluated.
Number of MUs, $u^*$ $\leq 5$ 6 7 8 9 10
---------------------------------------------- ---------- ------- ------- ------- ------- -------
No. where $\hat{u}=u^*$ 100 19 19 16 15 12
No. where $u^*$ in HPCS 100 20 20 19 19 20
Avg. size of HPCS 1.11 1.70 2.10 2.05 2.35 2.45
Avg. $\hat{p}_{u^*}$ (%) 97.95 89.20 80.45 69.42 62.70 54.68
Avg. particle set size for $u^*$ 5000 5250 6000 7500 7500 8250
Avg. particle set size for $\hat{u}$ 5000 5000 6000 7500 9250 7750
Avg. $n{\times}n$ lattice size for $u^*$ 30.0 30.5 30.5 32.0 32.5 32.0
Avg. $n{\times}n$ lattice size for $\hat{u}$ 30.0 30.0 30.5 32.0 35.0 32.0
: Summary of the MU-number posterior mass functions and required numerical resource for 200 simulated data sets.[]{data-label="tab:SimSummary"}
Table \[tab:SimSummary\] presents summaries of the mass functions of the number of MUs and descriptions of the resource required as functions of the true number of MUs. The MAP estimate corresponded to the true number of MUs for all data sets generated from five or fewer MUs, and for most of these datasets the HPCS contained only the true model. For true sizes of greater than five the MAP estimate was correct for $81$ of the $100$ data sets and the HPCS contained the truth for all but two data sets.
It is unsurprising that the uncertainty in the MU-number increases with the true number of MUs; this is visible both as an increase in the average size of the HPCS and a reduction in the average posterior probability for the true number. In addition, both the number of particles required to control Monte Car
------------------------ ---------------------- --------- ------ --------- -------- --------- ------ --------- -------- --------- ------ ----------
Itching 0.17 \> 0.05 0.22 \> 0.05 0.25 \> 0.05 0.24 \> 0.05 0.25 \> 0.05 0.44 \< 0.01
Swelling/mental status 0.22 \> 0.05 0.24 \> 0.05 --0.04 \> 0.05 0.07 \> 0.05 0.17 \> 0.05 0.28 \> 0.05
Functioning 0.26 \> 0.05 0.19 \> 0.05 0.33 \< 0.05 0.24 \> 0.05 0.25 \> 0.05 0.45 \< 0.01
Sleep 0.09 \> 0.05 0.33 \< 0.05 0.20 \> 0.05 0.31 \< 0.05 --0.06 \> 0.05 0.35 \< 0.05
Eating/limits 0.15 \> 0.05 0.29 \> 0.05 0.13 \> 0.05 0.18 \> 0.05 0.19 \> 0.05 0.42 \< 0.01
Embarrassment 0.23 \> 0.05 0.44 \< 0.01 0.17 \> 0.05 0.18 \> 0.05 0.31 \< 0.05 0.51 \< 0.001
r -- Spearman's rank correlation, p -- significance level.
We also observed statistically significant correlations between stress (VAS scale and SRRS) and QoL (results presented in [Table 3](#t0003){ref-type="table"}).
######
The correlations between stress and CU-Q2oL
CU-Q~2~oL Stress (VAS) Stress (SRRS)
------------------------ -------------- --------------- ------ ---------
Itching 0.38 \< 0.01 0.28 \> 0.05
Swelling/mental status 0.42 \< 0.01 0.46 \< 0.01
Functioning 0.23 \> 0.05 0.26 \> 0.05
Sleep 0.35 \< 0.05 0.35 \< 0.05
Eating/limits 0.05 \> 0.05 0.36 \< 0.05
Embarrassment 0.25 \> 0.05 0.33 \> 0.05
r -- Spearman's rank correlation, p -- significance level.
Discussion {#
nects to firebase sdk and also configure non-default apps to create the needed firebase auth
abstract class BaseAuth {
getDefaultAuth();
getAbnAuth();
...
}
class Auth with ChangeNotifier implements BaseAuth {
...
Auth() {
_configureAbnApp();
_configureProdApp();
}
getDefaultAuth() {
_firebaseAuth = FirebaseAuth.instance;
}
getAbnAuth() {
_firebaseAuth = FirebaseAuth.fromApp(_abnApp);
}
_configureAbnApp() {
FirebaseOptions abnOptions = FirebaseOptions(
databaseURL: 'https://[project-id].firebaseio.com',
apiKey: 'AIzaSxxxxxxxxxxxxxxxx',
googleAppID: '1:10591xxxxxxxxxxxxxxxxxxx');
FirebaseApp.configure(name: 'abn_database', options: abnOptions)
.then((result) {
_abnApp = result;
});
}
...
}
After a successful login, the app redirects the user to the home_page (a StatefulWidget). Here I use a snapshot of the database to show data.
_stream = Firestore.instance.collection(collection).snapshots();
...
Center(
child: Container(
padding: const EdgeInsets.all(10.0),
child: StreamBuilder<QuerySnapshot>(
stream: _stream,
builder:
(BuildContext context, AsyncSnapshot<QuerySnapshot> snapshot) {
if (snapshot.hasError)
return Text('Error: ${snapshot.error}');
switch (snapshot.connectionState) {
case ConnectionState.waiting:
return Text('Loading...');
default:
return ListView(
children: snapshot.data.documents
.map((DocumentSnapshot document) {
return CustomCard(
docID: document.documentID,
title: document[title],
message: document[message],
fromDate: document[fromDate],
endDate: document[endDate],
disableApp: document[disableApp],
);
eature maps of unannotated objects and 2) part-location errors *w.r.t.* the annotated ground-truth part locations.
It is essential to determine the optimization sequence for the three losses in the above equation. We propose to first learn the CNN by minimizing ${Loss}^{\textrm{CNN}}$ and then build an AOG based on the learned CNN. We use the active QA to obtain new part annotations and use new part annotations to grow the AOG by optimizing ${Loss}^{\textrm{QA}}$ and ${Loss}^{\textrm{AOG}}$ alternatively.
We introduce details of the three losses in the following subsections.
Learning convolutional neural networks
--------------------------------------
To simplify the story, in this research, we just consider a CNN for single-category classification, *i.e.* identifying object images of a specific category from random images. We use the log logistic loss to learn the CNN. $${Loss}^{\textrm{CNN}}=\mathbb{E}_{I\in{\bf I}}\big[{Loss}(\hat{y}_{I},y^{*}_{I})\big]$$ where $\hat{y}_{I}$ and $y^{*}_{I}$ denote the predicted and ground-truth labels of an image $I$. If the image $I$ belongs to the target category, then $y^{*}_{I}=+1$; otherwise $y^{*}_{I}=-1$.
Learning And-Or graphs
----------------------
We are given a pre-trained CNN and its training images without part annotations. We use an active QA process to obtain a small number of annotations of object-part bounding boxes, which will be introduced in Section \[sec:QA\]. Based on these inputs, in this subsection, we focus on the approach for learning an AOG to represent the object part.
### And-Or graph representations
Before the introduction of learning AOGs, we first briefly overview the structure of the AOG and the part parsing (inference) based on the AOG.
As shown in Fig. \[fig:rawMapToModel\], an AOG represents the semantic structure of a part at four layers.
Layer Name Node type
------- ---------------- ---------------
1 semantic part OR node
2 part template AND node
3 latent pattern O
for(cntr = 0; cntr < 5; cntr++)
The loop body executes only while the condition is true. The original condition cntr > 5 is the problem: cntr starts at 0, which is obviously not greater than 5, so the body of the loop never executes. Use cntr < 5 as shown above.
Q:
rxjs how to complete observable?
To learn RxJS, I'm playing around with it.
My code:
// RxJS v6+
import { withLatestFrom, map } from 'rxjs/operators';
import { interval } from 'rxjs';
const source = interval(1000);
const example = source.pipe(
map(value => value +1),
map(value => {
if(value === 40) {
finish();
}
else if (value % 5 === 0){
return 'can devide by 5 we did some magic';
}else{
return value;
} })
);
const subscribe = example.subscribe(
val => console.log(val),
error => console.log("Error handled: " , error),
() => console.log('resolved'));
My idea was to run it 40 times and then finish the observable (it could be another requirement, e.g. checking whether the value is 10 at 10:00; the main goal is to evaluate the value and force a finish).
I'm looking for an alternative to the placeholder finish(), because finish() does not exist. How can I reach the resolve function () => console.log('resolved') of the subscribe method?
I found How can I complete Observable in RxJS, but the answer is from 2015 and I'm assuming by now there is an answer for the current RxJS version.
A:
Actually it is still done the same way; you only need to use the pipe operator. You can view an example here:
import { interval, timer } from 'rxjs';
import { takeUntil } from 'rxjs/operators';
const source = interval(1000);
const timer$ = timer(5000);
const example = source.pipe(takeUntil(timer$));
const subscribe = example.subscribe(val => console.log(val));
A:
"My idea was to run it 40 times"
For that you can add take(40). In general there are several operators like take that can complete an observable. Check out https://www.learnrxjs.io/operators/filtering/take.html
// RxJS v6+
import { map, take } from 'rxjs/operators';
import { interval } from 'rxjs';
const source = interval(1000);
const example = source.pipe(
  take(40),
  map(value => value + 1)
);
lled.
PHP Fatal error: Call to undefined method Mock_UserData_ae821217::getUserSessionArray() in /usr/share/php/tool/module/User/Module.php on line 95
PHP Stack trace:
PHP 1. {main}() /usr/local/pear/bin/phpunit:0
…
Could someone help me on this please?
We are using Zend Framework 2.2.0.
Thank you so much.
EC
A:
Your mock isn't quite set up right. You haven't told the mock which methods to create, so your expects() isn't really being applied. You need to create your mock like this:
$albumTableMock = $this->getMockBuilder('User\Model\UserData')
->disableOriginalConstructor()
->setMethods(array('getUserSessionArray')) //ADD this line
->getMock();
Your User\Model\UserData class doesn't exist, so PHPUnit could not discover any methods to mock automatically, and without setMethods() the getUserSessionArray() method was never created. When you ran your tests, the function was therefore not defined.
Q:
Xpath getting divs and h3 between them
Suppose the following structure
<div>Content 1</div>
<h3>head 2</h3>
<div>Content 2</div>
<h3>head 3</h3>
<div>Content 3</div>
I need to access all divs and the h3 following each of them.
I tried //div[*] but it only returns the divs, without the h3 elements. I need something like a concatenation of the two selections.
A:
Some possible ways, using XPath union (|) :
//div | //h3
Or matching by name() :
//*[name()='div' or name()='h3']
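As an aside, not part of the original answer: the same "divs plus the h3s between them" selection can be sketched with Python's standard library. ElementTree's limited XPath subset supports neither the union | nor name(), so this illustrative version filters by tag in document order instead:

```python
import xml.etree.ElementTree as ET

# Wrap the fragment in a root element so it parses as a single tree.
html = """<root>
<div>Content 1</div>
<h3>head 2</h3>
<div>Content 2</div>
<h3>head 3</h3>
<div>Content 3</div>
</root>"""

root = ET.fromstring(html)
# Equivalent of //*[name()='div' or name()='h3']: iterate in
# document order and keep only the wanted tags.
nodes = [el for el in root.iter() if el.tag in ("div", "h3")]
print([(el.tag, el.text) for el in nodes])
```

With a full XPath engine (e.g. lxml), the union expressions from the answer work directly.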
Q:
toolset for wordpress theme development recommendation
What are the best tools you can recommend for developing a WordPress theme, tools you use to make the workflow fast? Actually I'm looking for a stack of code snippets that developers have gathered and just reuse in their development (e.g. the WordPress loop, customized loops), so the loop can then just be copied and pasted. If this doesn't exist, can you recommend some techniques or tools you use to speed up development? I think I'm doing it all wrong, always starting from scratch. I also tried using the "Starkers" naked theme. I just hope they provide snippets to achieve some common theme functionality (e.g. a loop to output custom fields, control
ip date range”.
In the dateDiff function I did not use days, because days returns 0 for something like dateDiff("d", #1/1/2015#, #1/1/2015 23:59:59#). Using hours would already have been good, but I decided to use seconds to make it as accurate as possible.
I tested it but please let me know if you see something wrong with the code.
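The day-granularity pitfall described above is easy to reproduce outside VBA; a small Python check with stand-in dates (not the original code):

```python
from datetime import datetime

start = datetime(2015, 1, 1, 0, 0, 0)
end = datetime(2015, 1, 1, 23, 59, 59)

delta = end - start
# A day-level difference is 0 even though almost a full day elapsed...
assert delta.days == 0
# ...while counting seconds captures the real span.
assert delta.total_seconds() == 86399
```

This is why measuring in seconds (and converting afterwards if needed) is the more accurate choice.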
Q:
Is this sequence monotonically decreasing?
Let $a_n = \frac{p_n - p_{n-1}}{p_n \log p_n}$ where $p_n$ denotes the $n$-th prime. Is this sequence decreasing (or decreasing after some $N$)?
A:
The sequence is not decreasing.
You have for example:
$$a_4=\frac{2}{7 \cdot \log 7}=0,14 \dots$$
$$a_5=\frac{4}{11 \log 11}=0,15 \dots$$
Doing more examples, you will see that the sequence in general gets smaller, but it is not monotone.
Here you can see a plot:
EDIT: Consider two pairs of twin primes, for example $3,5$ and $5,7$.
Then:
$$a_n=\frac{5-3}{5 \cdot \log{5}}=\frac{2}{5 \cdot \log{5}}$$
$$a_{n+1}=\frac{7-5}{7 \cdot \log{7}}=\frac{2}{7 \cdot \log{7}}$$
In this case, $a_{n}>a_{n+1}$.
But you can't conclude from such examples that the sequence is decreasing: that would require $\frac{a_{n+1}}{a_n}<1$ for all $n \in \mathbb{N}$, and $a_4 < a_5$ above already violates this.
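The counterexample is easy to verify numerically. A short Python sketch (not part of the original answer) with a naive prime generator:

```python
import math

def primes(count):
    """Naively generate the first `count` primes by trial division."""
    found = []
    n = 2
    while len(found) < count:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

p = primes(12)  # p[0] = p_1 = 2
# a_n = (p_n - p_{n-1}) / (p_n * log p_n), for n >= 2
a = {n: (p[n - 1] - p[n - 2]) / (p[n - 1] * math.log(p[n - 1]))
     for n in range(2, 13)}
# a_4 is about 0.1468 and a_5 about 0.1516, so the sequence is not monotone.
assert a[4] < a[5]
```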
Q:
exchange data between two web services using spring boot and docker-compose
For example, I have one object stored on the first web service that can be accessed at "http://localhost:5000/articles/1": {"id":"1","name":"article1","description":"Desc1","partsId": 2}
and another web service, accessible at "http://localhost:80/api/parts2", that stores parts objects:
{"id": 2,"manufacturer": "manufacture", "name": "name", "price": "228", "type": "type"}
my docker-compose.yml:
version: '3.7'
services:
articles-service:
container_name: articles-service
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- .:/service
networks:
- ws_bridge
computer-parts:
build: ./comp_parts
comm
ious historical topics on *Wikipedia* [@wiki_past].
- Events in the future can be evaluated by using important infrastructure projects, engineering projects etc. These can be extracted from *Wikipedia* and other sources on the Internet.
- Current events extracted from *Wikipedia* [@wiki_current].
- Alternatively, we can manually construct a list of prominent events and extract relevant information such as named entities, geographical location, and time from ontologies such as: YAGO [@yago], Freebase [@freebase], etc.
- Diversifying and summarizing search events
- Biographies of eminent personalities, for example United States presidents [@wiki_potus].
- Historical timelines of various countries, for example for India [@wiki_timeline].
**Structure**. Each event in our test bed is then composed of a fact with an accompanying query. Formally, a *fact* in our testbed is a 4-tuple extracted from one of the aforementioned sources: $$\langle q,\mathcal{E}, g, t, \mathcal{W} \rangle$$
where $q$ consists of keyword query describing the event, $\mathcal{E}$ is a bag of participating entities, $g$ is the geographic location, $t$ is the time of its occurrence, and $\mathcal{W}$ are important terms describing the event.
**Metrics**. Based on the structure of the testbed of events, metrics such as *precision*, *recall* and *F$_1$* can be utilized to measure the effectiveness of the algorithms for detecting important events in semantically annotated corpora. How effectively the algorithm diversifies documents along multiple dimensions can be evaluated by metrics such as $\alpha$-nDCG [@diverse]. Quality of summaries can be measured by an automatic evaluation metric called *Rouge* [@rouge].
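As a concrete reminder of the set-based metrics named above, a minimal Python sketch with toy data (the event identifiers are illustrative, not from the testbed):

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based precision, recall and F1 for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 3 of 4 retrieved events are relevant; 3 of 6 relevant found.
p, r, f = precision_recall_f1(["e1", "e2", "e3", "e9"],
                              ["e1", "e2", "e3", "e4", "e5", "e6"])
assert (p, r) == (0.75, 0.5)
```

Rank-aware measures such as $\alpha$-nDCG and summary measures such as Rouge require graded relevance judgments and reference summaries, respectively, and are not sketched here.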
Discussion
==========
\[sec:discussion\]
I briefly present some open technical challenges that I will address along with the research objectives in my PhD dissertation.
**Mathematical Models**. One key aspect that occurs in the design of the algorithms is that of computational models for
ll as the importance of threshold under this technique. Although dynamic SDD-E consumes more computation power, it is clear that dynamic SDD-E is capable of tracing the gradual shift of the environment. MGoF turned out to be the worst, since it always marked several false positives before $c_{th}$ had been met and produced many more false negatives when similar errors occurred too often.
Parameter $\alpha$ improved the total accuracy of the dynamic SDD-E algorithm by 10-20%, as expected. It also increased its F1 by more than 20%. $\alpha$ made a great difference in SDD-R as well, which illustrates that divergence sorted almost all collections in the correct order according to the averaged reference. However, static SDD-E did not show the same improvement, since environment drift had a greater influence on its result. In comparison with $\alpha$, the adaptive threshold given by evidence sets did not bring as much improvement, but this threshold can be applied together with other optimizations such as sliding windows.
Test against Anomaly Proportion and Magnitude {#sec:exp-synthetic}
---------------------------------------------
In this experiment, we tested algorithm performance under various anomaly proportion and magnitude. $\alpha$ ranged from 0.1 to 0.9 when $\nu = 1$ and $\nu \in [0.1, 0.9]$ when $\alpha = 0.1$, other settings remains the same.
![Accuracy and F1 on Different Anomaly Probabilities[]{data-label="fig:anomaly-probability"}](./AccuracyOnAnomalyProbability.pdf){width="\linewidth"}
![Accuracy and F1 on Different Anomaly Probabilities[]{data-label="fig:anomaly-probability"}](./F1OnAnomalyProbability.pdf){width="\linewidth"}
Fig. \[fig:anomaly-probability\] shows that our technique outperformed MGoF and was relatively stable when dealing with all proportions of 1st level centralized anomalies. SDD-E performed even better since it maintains knowledge of both normal and anomalous distributions and calculates the threshold according to the best expectation. However, it relies on the accuracy of distribution estimation. When it
1.665(15) & 2.046(67) & 1.733(25)\
C.L. & - & - & - & - & - & - & - & - & - & 0.936(3) & 1.277(25) & 1.570(109)& 1.411(38)\
[1.0]{} [@ccccccccccc]{} $\kappa$ & ${g^{0+{\rm [lat]}}_V}$ & ${g^{0-{\rm [lat]}}_V}$ & ${g^{0+(u){\rm [lat]}}_A}$ & ${g^{0+(d){\rm [lat]}}_A}$ & ${g^{0+{\rm [lat]}}_A}$ & ${g^{0+{\rm}}_V}$ & ${g^{0-{\rm}}_V}$ & ${g^{0+{\rm}}_A}$ & $\widetilde Z_V$ & $\widetilde Z_A$\
0.1375 & 4.208( 8) & 3.844( 76) & 3.852( 42) & $-$1.073(49) & 4.925( 24) & 1.045(1) & 0.989( 19) & 1.247(8) & 0.2530 & 0.2576\
0.1390 & 4.492(10) & 4.152(160) & 3.978( 94) & $-$1.244(44) & 5.222(126) & 1.089(1) & 1.036( 60) & 1.261(7) & 0.2446 & 0.2491\
0.1400 & 4.663( 9) & 4.380(206) & 3.952(136) & $-$1.150(55) & 5.102(145) & 1.115(2) & 1.048(111) & 1.261(8) & 0.2390 & 0.2434\
[1.0]{} [@ccccccccc]{} $\kappa$ &
${g^{0-(u){\rm [lat]}}_A}$ & ${g^{0-(d){\rm [lat]}}_A}$ & ${g^{0-{\rm [lat]}}_A}$ & ${g^{1-(u){\rm [lat]}}_A}$ & ${g^{1-(d){\rm [lat]}}_A}$ & ${g^{1-{\rm [lat]}}_A}$ & ${g^{0-{\rm}}_A}$ & ${g^{1-{\rm}}_A}$\
0.1375 & 0.336(194) & $-$0.257(118) & 0.592(226) & 3.308(234) & 1.189(209) & 2.119(359) & 0.152(58) & 0.546(093)\
0.1390 & $-$0.710(251) & 0.081(119) & $-$0.791(272) & 3.423(495) & 1.243(420) & 2.180(730) & $-$0.197(68) & 0.543(182)\
0.1400 & 0.189(257) & $-$0.129(178) & 0.318(297) & 3.530(516) & 1.339(405) & 2.190(676) & 0.077(72) & 0.533(165)\
We show the fitted values of $L_{1,2}^\pm$ and $R_{1,2}^\pm$ in Table \[fittedval\]. $L(T)$ and $R(T)$ are rather stable and show a plateau from relatively small value of $T$ ($T\sim 2$), which is the same tendency as that found in Ref. [@Burch:2006cc]. We plot in Fig. \[optparam\] $L_i^\pm(T)$ and $R_i^\pm(T)$ obtained at $\kappa$=0.1390, for the purpose of reference.
![\[optparam\] As typical examples, $L_1^-$ and $R_1^-$ obtained at $\kappa$=0.1390 are plotted. ](L01_neg_1390.eps "fig:") ![\[optparam\] As typical examples, $L_1^-$ and $R_1^-$ obtained at $\kappa$=0.1390 are plotted. ](R01_neg_1390.eps "fig:")
The energies $E_{1,2}^\pm$ are extracted from two-point
operty for $\utilde{\d}^2_1$. The result was originally proved in [@KKMW].'
author:
- |
Grigor Sargsyan [^1]\
Department of Mathematics\
Rutgers University\
Hill Center for the Mathematical Sciences\
110 Frelinghuysen Rd.\
Piscataway, NJ 08854 USA\
http://math.rutgers.edu/$\sim$gs481\
grigor@math.rutgers.edu
bibliography:
- 'pertitionproperty.bib'
date:
title: 'An inner model proof of the strong partition property for $\utilde{\delta}^2_1$ [^2] [^3]'
---
The main theorem of this note is the following special case of Theorem 1.1 of [@KKMW] originally due to Kechris-Kleinberg-Moschovakis-Woodin.
\[main theorem\] Assume $V=L(\mathbb{R})+AD$. Then $\utilde{\delta}^2_1$ has the strong partition property, i.e., $\utilde{\delta}^2_1\rightarrow (\utilde{\delta}^2_1)^{\utilde{\delta}^2_1}$ holds.
Our proof uses techniques from inner model theory and resembles Martin’s proof of strong partition property for $\omega_1$ (see [@Jackson]). We expect that it will have other applications and in particular, can be used to show that under $AD^+$, if $\Gamma$ is any $\Pi^1_1$-like [^4] scaled pointclass and $\d=\d(\Gamma)$ then $\d$ has the strong partition property. Our motivation to find a new proof of [Theorem \[main theorem\]]{} comes from a desire to prove Kechris-Martin like results for $\Pi^1_1$-like scaled pointclasses which will settle Question 19 of [@OpenProblems] and most likely, several other questions in the same neighborhood. We are optimistic that inner model theoretic techniques will settle this question and our optimism comes from the fact that the literature is already full of descriptive set theoretic results that have been proved using methods from inner model theory (for instance, see [@Hjorth01], [@GeneralizedHjorth] and [@OIMT]). More importantly for us, recently, Neeman, in [@KMN], found a proof of the Kechris-Martin theorem for $\Pi^1_3$ using techniques from inner model theory. Finally, we believe that our proof can be used to prove the strong partition property for
ent values of $\lambda$.
Figure \[fig:cd\_individual\] shows critical difference plots for both subset selection methods. Class balanced selection shows a clear trend that increasing $\lambda$ improves the RMSE, with the average rank for $\lambda=1$ being exactly 4. For random-pair selection, choosing $\lambda=3$ is shown to be statistically equivalent to $\lambda=1$, while higher values of $\lambda$ give superior results on average.
Ensembles of Nested Dichotomies
-------------------------------
Typically, nested dichotomies are utilised in an ensemble setting, so we investigate the predictive performance of ensembles of ten nested dichotomies with multiple subset evaluation, with bagging and AdaBoost employed as the ensemble methods.
### Class Threshold. {#class-threshold. .unnumbered}
As previously discussed, the number of binary problems is reduced when multiple subset evaluation is applied. This could have a negative effect on ensemble diversity, and therefore potentially reduce predictive performance. To investigate this effect, we built ensembles of nested dichotomies with multiple subset evaluation by introducing a *class threshold* (the number of classes that must be present at a node for multiple subset evaluation to be performed) and varying its value from one to seven. We plot the test RMSE, relative to having a class threshold of one, averaged over the datasets from Table \[tab:datasets\], including standard errors, in Figure \[fig:threshold\]. Surprisingly, the RMSE increases monotonically, showing that the potentially reduced ensemble diversity does not have a negative effect on the RMSE for ensembles of this size. Therefore, we use a class threshold of one in our subsequent experiments. However, note that increasing the class threshold has a positive effect on training time, so it may be useful to apply it in practice.
### Number of Subsets. {#number-of-subsets. .unnumbered}
We now investigate the effect of $\lambda$ when using bagging and boosting. Figure \[fig:cd\_bagging\] shows critical
<form id="form1" runat="server">
<asp:Panel runat="server" ID="uxForm">
<div class="lp-pom-form-field clearfix" id="container_name">
<asp:TextBox ID="uxName" runat="server" CssClass="text form_elem_name" placeholder="Name" />
</div>
<div class="lp-pom-form-field clearfix" id="container_email">
<asp:TextBox ID="uxEmail" runat="server" CssClass="text form_elem_email" placeholder="Email" />
</div>
</asp:Panel>
</form>
The problem is that I cannot convert the "Get My Demo" button into an ASP.NET button.
I have tried the following:
<asp:LinkButton runat="server" ID="btnSave" CssClass="lp-element lp-pom-button" Text="Get My Demo" />
but it just becomes invisible, like in the picture (right-hand side form).
I have also tried
<a class="lp-element lp-pom-button" id="save" runat="server"><span class="label">GET MY DEMO</span></a>
But then the button also goes invisible, and the text "get my demo" appears in the name field, as shown in the second picture.
What has gone wrong and how can I correct it? The button goes invisible even without the CSS class.
For your information, I have not written the VB.NET code yet. Could that be the issue?
CSS
#lp-pom-button-411 {
display:block;
border-style:none;
behavior:url(/PIE.htc);
border-radius:9px;
left:0px;
top:251px;
z-index:16;
width:348px;
height:50px;
position:absolute;
background-color:#f7941d;
background:-webkit-linear-gradient(#fd494b,#fb2c2f);
background:-moz-linear-gradient(#fd494b,#fb2c2f);
background:-ms-linear-gradient(#fd494b,#fb2c2f);
background:-o-linear-gradient(#fd494b,#fb2c2f);
background:linear-gradient(#fd494b,#fb2c2f);
box-shadow:inset 0px 1px 0px #ff9697,inset 0 -1px 2px #c72325;
text-shadow:1px 1px #7a0404;
-pie-background:linear-gradient(#fd494b
nnotations in the following paragraphs.
**Named Entities**. For disambiguating and linking named entities in text to an external knowledge source such as *Wikipedia* [@wiki] or an ontology such as YAGO [@yago] or Freebase [@freebase], I use the AIDA system [@aida]. The AIDA system performs named entity disambiguation and linking by leveraging contexts extracted from ontologies such as YAGO. For Web collections such as ClueWeb'09/'12, the entity disambiguation and linking annotations have been released as *facc1: Freebase annotation of ClueWeb Corpora* [@facc].
**Geographical Locations** can be obtained by utilizing *geographic* named entities such as those known to be cities, countries, or continents. Geographical relations stored in an ontology can be used to resolve these locations to its geographical coordinates. Having obtained a set of coordinates, we can subsequently construct a geographical representation such as a *minimum bounding rectangle* over the coordinates.
**Temporal Expressions**, both implicit and explicit, can be extracted and normalized from text by using *temporal taggers* such as HeidelTime [@heideltime] or SUTime [@sutime].
Evaluation
==========
\[sec:evaluation\]
To test our approach we need to construct query sets that contain an event description associated with the query; along with participating named entities, geographical locations where the event took place and relevant time interval associated with it. I describe a tentative approach to achieve this here.
**Test Data**. To evaluate the correctness of the various algorithms, I plan to use reliable encyclopedic resources on the Web such as *Wikipedia* [@wiki] or other curated knowledge sources. For an objective evaluation, I propose the following different sources depending on the algorithm under evaluation.
- Identify important events
- Events in a particular year/decade etc. pages available on *Wikipedia* [@wiki_year].
- Testing of past events can be done by extracting important topics from *Category* pages on var
_cond\], we introduced constraints on the integer variables $\underline{I}$ and $\underline{J}$, proving non-termination for queries in $Den(\leftarrow constants(\underline{I},\underline{J}))$. Introducing symbolic coefficient $i$ and $j$ for the integers of the query and for the domains of $\underline{I}$ and $\underline{J}$, yields the following constraints.
1. $i = 2$ to guarantee applicability of the derivation
2. $c_I = i, ~c_J = j$ to guarantee that the precondition holds
3. $d_I \leq 1, ~d_I \geq -1, ~d_J \leq 1, ~d_J \geq -1$,
4. $\forall I,J \in {{\mathbb{Z}}}: I=2, ~d_I * I \geq d_I * c_I, ~d_J * J \geq d_J * c_J \Longrightarrow $\
$~~~~~J*2=2, ~d_I * (J*2) \geq d_I * c_I, (1-d_I^2)*(J*2) = (1-d_I^2)*c_I, $\
$~~~~~d_J * (I-J) \geq d_J * c_J, (1-d_J^2)*(I-J) = (1-d_J^2)*c_J$
The implication in $(4)$ can only be satisfied with $d_J$ equal to zero. $\hfill \square$
### To implications over the natural numbers
The symbolic coefficients to be inferred which represent the domains, allow to transform the implication over ${{\mathbb{Z}}}$ to an equivalent implication over ${{\mathbb{N}}}$.
- for $d_I = 1$, any integer in $\lbrace c_I,~c_I+1,~\ldots\rbrace$ that satisfies the precondition is in $\lbrace c_I+d_I*N \mid N \in {{\mathbb{N}}}\rbrace$
- for $d_I = -1$, any integer in $\lbrace c_I,~c_I-1,~\ldots\rbrace$ that satisfies the precondition is in $\lbrace c_I+d_I*N \mid N \in {{\mathbb{N}}}\rbrace$
- for $d_I = 0$, any integer in $\lbrace c_I \rbrace$ that satisfies the precondition is in $\lbrace c_I+d_I*N \mid N \in {{\mathbb{N}}}\rbrace$
Therefore, we obtain an equivalent implication over the natural numbers by replacing each integer $I$ by its corresponding expression $c_I+d_I*N$ and replacing the universal quantifier over $I$ by a quantifier over $N$.
### Automation by a translation to diophantine constraints
To solve the resulting constraints, we use the approach of [@DBLP:journals/corr/abs-0912-4360]. Constraints of the form $A =:= B$ in the implication, are replaced b
removing one box from $\widetilde{\lambda}$. Let $\{Q_s\}$ be the set of tableaux obtained from $P$ by replacing ${\mbox{\boldmath $\alpha$}}^{(i-1/2)}$ with $\widetilde{\lambda}^{-}_{(s)}$.
Then we define $(E_i)_{QP}$ to be $$(E_i)_{Q_sP}
= \frac{h(\widetilde{\lambda})}{h(\widetilde{\lambda}^{-}_{(s)})}.$$ Here $h(\lambda)$ is the product of hook lengths defined by $$h(\lambda) = \prod_{x\in\lambda} h_{\lambda}(x)$$ and $h_{\lambda}(x)$ is the [*hook-length*]{} at $x\in\lambda$.
Note that the matrix $E_i$ is determined by the label $\widetilde{\lambda}$ itself, not by the vertex through which the tableau $P$ passes. In other words, if another vertex at a different level, say $i'$, has the same label $\widetilde{\lambda}$, then $E_{i'}$ becomes the same matrix.
Let $v(\lambda^-_{(s)}, \lambda)$ be the standard vector which corresponds to a tableau whose $(i-1)$-st, $(i-1/2)$-th and $i$-th coordinate $({\mbox{\boldmath $\alpha$}}^{(i-1)}, {\mbox{\boldmath $\alpha$}}^{(i-1/2)}, {\mbox{\boldmath $\alpha$}}^{(i)})$ are labeled by $(\lambda, \lambda^-_{(s)}, \lambda)$. Then for a tableau $P$ which goes through $\lambda$ at the $(i-1)$-st and the $i$-th coordinate of $P$, we have $$\rho(e_i)(v_P)
=
\sum_{s'} \frac{h(\lambda)}{h(\lambda^{-}_{(s')})}v(\lambda^-_{(s')}, \lambda).$$ Here $\lambda^{-}_{(s')}$ runs through Young diagrams obtained from $\lambda$ by removing one box.
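As a quick sanity check on the hook-length product $h(\lambda)$ entering these matrix entries, here is a short sketch; the encoding of a shape as a weakly decreasing list of row lengths is an assumption made for illustration:

```python
def hook_length(shape, i, j):
    """Hook length at cell (i, j): arm + leg + the cell itself."""
    arm = shape[i] - j - 1                                          # cells to the right
    leg = sum(1 for r in range(i + 1, len(shape)) if shape[r] > j)  # cells below
    return arm + leg + 1

def h(shape):
    """Product of hook lengths over all cells of the Young diagram."""
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            prod *= hook_length(shape, i, j)
    return prod

# Shape (2, 1): hooks are 3, 1, 1, so h = 3; the hook-length formula then
# gives 3!/3 = 2 standard Young tableaux of that shape.
assert h([2, 1]) == 3
```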
![Representation spaces for $\rho(e_i)$[]{data-label="fig:repE4a"}](19.eps)
![Representation spaces for $\rho(e_i)$[]{data-label="fig:repE4b"}](20.eps)
Suppose that the tableaux $\{p_r\}$ go through the paths illustrated in Figure \[fig:repE4a\] or \[fig:repE4b\]. Then we have $$\begin{aligned}
\rho(e_i)(v_0) &=&
\frac{h(\widetilde{\emptyset})}{h(\widehat{\emptyset})}v_0 = Qv_0,\\
\rho(e_i)(v_1\ v_2) &=& (v_1\ v_2)
\begin{pmatrix}
h(\widetilde{{\mbox{\tiny\yng(1)}}})/h(\widehat{\emptyset})
&h(\widetilde{{\mbox{\tiny\yng(1)}}})/h(\widehat{\emptyset})\\
h(\widetilde{{\mbox{\tiny\yng(1)}}})/h(\widehat{{\mbox{\tiny\yng(1)}}
$ in and we are left with $$\begin{aligned}
\label{eq:finalRGeq}
\partial_{\hat k} \Delta \hat{ V} = -\frac{ 1 }{ 6 \pi^2}
\left(1+\frac{\eta_0}{5}\right) \frac{
\ g_{k}^2 \
\partial^2_\varphi \, ( \hat{V}_{\bot} + \Delta \hat{ V}) }{1
+\frac{ g_{k}^2 }{ \hat k^2 }
\partial^2_\varphi \, (\hat{V}_{\bot} + \Delta \hat{ V})}\,, \end{aligned}$$ where we have kept the notation $\partial_{\hat k}\Delta \hat V$ for $\partial_{\hat k}\Delta \hat V-2(1+\eta_0/5)\, \hat k^2 / (3 (2\pi)^2)$. In this form it is evident, that the flow vanishes for fields where $\partial_\varphi^2(\hat{V}_{\bot} + \Delta \hat{ V})=0$, i.e. once a region of the potential becomes convex, this part is frozen, unless the external input $\hat V_{\bot}$ triggers the flow again.
We close this section with a discussion of the qualitative features of . It resembles the flow equation of a real scalar field theory, and due to $V_\bot$, the flow is initialised in the broken phase. It relies on two external inputs, $V_\bot$ and $\alpha_s$.
The first input, $\hat V_\bot$, is computed in a perturbative approximation to the spatial gluon sector, and its computation is deferred to Appendix \[app:intoutAI\]. It is shown in Fig. \[fig:VPreWeiss3D\] for various values of the RG time $\hat k$, and approaches the perturbative Weiss potential [@Weiss:1980rj] for vanishing cutoff $\hat k=0$.
![$\hat V_{\bot}$ for different values of $\hat k$[]{data-label="fig:VPreWeiss3D"}](PreWeiss.eps "fig:"){width="8cm"}\
We have argued that within Polyakov gauge this approximation should capture the qualitative feature of its contribution to the Polyakov loop potential. We emphasise again that this is not so for the question of spatial confinement, and the related potential of the spatial Wilson loops.
The second input is $\alpha_s= g_k^2/(4 \pi)$, the running gauge coupling. It runs with the physical cut-off scale $k_{\rm phys}$ derived in Appendix \[app:match\], $\alpha_s= \alpha_s(k_{\rm
phys}^2)$. In the present work we model $\alpha_s$ with a te
ound of type I or of type II*, then $\pi^i\cdot h_i$ has the following form: $$\xi^{(i-1)/2}\begin{pmatrix} \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& \\ & & & \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0 \end{pmatrix} \end{pmatrix}.$$
2. Let $R$ be a $\kappa$-algebra. We also denote by $h$ the element of $\underline{H}(R)$ which is the image of $h\in \underline{H}(A)$ under the natural map from $\underline{H}(A)$ to $\underline{H}(R)$. Recall that we denote each element of $\underline{H}(R)$ by $(f_{i, j}, a_i\cdots f_i)$. Then the tuple $(f_{i, j}, a_i\cdots f_i)$ denoting $h\in \underline{H}(R)$ is defined by the conditions:
1. If $i\neq j$, then $f_{i,j}=0$.
2. If $i$ is even and $L_i$ is *of type I*, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & \\ &\ddots & \\ & & \begin{pmatrix} 0&1\\1&0\end{pmatrix}\end{pmatrix}, \textit{thus $x_i^j=0$},$$ $$b_i=0, d_i=0, e_i=0, f_i=0, c_i=0.$$ If $i$ is even and $L_i$ is *of type II*, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 2\cdot 1&1\\1&2\cdot\bar{\gamma}_i\end{pmatrix} \end{pmatrix}, \textit{thus $x_i^j=0$ with $j\leq n_i-2$, $x_i^{n_i-1}=1$, $x_i^{n_i}=\bar{\gamma}_i$}.$$
3. If $i$ is odd, then $$a_i=\begin{pmatrix} \begin{pmatrix} 0&1\\-1&0\end{pmatrix}& & \\ &\ddots & \\ & & \begin{pmatrix} 0&1\\-1&0\end{pmatrix}\end{pmatrix}, \textit{thus $x_i^j=0$},$$ $$b_i=0, d_i=0, e_i=0, c_i=0, f_{i,i}^{\ast}=0, f_i=\bar{\gamma}_i.$$ Here, $\bar{\gamma}_i\in \kappa$ is the reduction of $\gamma_i$ mod $2$.
The smooth affine group scheme {#sags}
---------------------------------
\[t34\] For any flat $A$-algebra $R$, the group $\underline{M}^{\ast}(R)$ acts on $\underline{H}(R)$ on the right by $f\circ m = \sigma({}^tm)\cdot f\cdot m$. This action is represented by an action morphism $$\underline{H} \times \underline{M}^{\ast} \lon
node, whose children represent template candidates for the part.
- Layer 2: a *part template* in the second layer describes a certain part appearance with a specific pose, *e.g.* a black sheep head from a side view. A part template is an AND node, which uses its children latent patterns to encode its constituent regions.
- Layer 3: a *latent pattern* in the third layer represents a constituent region of a part (*e.g.* an eye in the head part) or a contextual region (*e.g.* the neck region *w.r.t.* the head). A latent pattern is an OR node, which naturally corresponds to a group of units within the feature map of a certain CNN filter. The latent pattern selects one of its children *CNN units* as the configuration of the geometric deformation.
- Layer 4: terminal nodes are *CNN units*, *i.e.* raw activation units on feature maps of a CNN filter.
In this hierarchy, the AOG maps implicit latent patterns in raw CNN feature maps to explicit semantic parts. We can use the AOG to localize object parts and their constituent regions for hierarchical object parsing. The AOG is interpretable and can be used for communications with human users.
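The four-layer hierarchy can be sketched as a small AND–OR data structure; all labels and positions below are hypothetical and only illustrate the node kinds:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "OR", "AND", or "TERM" (a terminal CNN unit)
    label: str
    children: List["Node"] = field(default_factory=list)

# Layer 4: terminal CNN units -- raw activation positions on a filter's feature map
units = [Node("TERM", f"unit@{p}") for p in [(3, 4), (3, 5), (4, 4)]]

# Layer 3: a latent pattern is an OR node selecting one unit (deformation choice)
eye = Node("OR", "latent:eye-region", units)

# Layer 2: a part template is an AND node composing its constituent latent patterns
head_side = Node("AND", "template:head-side-view", [eye])

# Layer 1: a semantic part is an OR node over alternative templates
head = Node("OR", "part:head", [head_side])

def parse(node):
    """Parse top-down: pick one child at each OR node (here simply the first),
    keep every child of an AND node, and return the selected terminal labels."""
    if node.kind == "TERM":
        return [node.label]
    if node.kind == "OR":
        return parse(node.children[0])
    return [lab for c in node.children for lab in parse(c)]
```

A real implementation would score OR-branch choices against the feature maps instead of always taking the first child; the skeleton only shows how parsing walks the hierarchy.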
**Weakly-supervised learning via active question-answering:** We propose a new active learning strategy to build an AOG in a weakly-supervised manner. As shown in Fig. \[fig:QA\], we use an active question-answering (QA) process to mine latent patterns from raw feature maps and gradually grow the AOG.
The input is a pre-trained CNN and its training samples (*i.e.* object images without part annotations). The QA method actively discovers the missing patterns in the current AOG and asks human users to label object parts for supervision.
In each step of the QA, we use the current AOG to localize a certain semantic part among all unannotated images. Our method actively identifies object images, which cannot fit well to the AOG. *I.e.* the current AOG cannot explain object parts in these images. Our method estimates the potenti
le components. A functionality maps a reactive state space into the persistent subset of itself.
#### Walk of iterative operator {#S:ITERATIVE_OPERATOR_WALK}
\[D:ITERATIVE\_OPERATOR\_WALK\] Let $\langle \Psi, \Phi \rangle$ be a basis with step space ${\mathbb{S}} = \Lambda \times {\mathscr{F}} \times (\,{\prod{\Psi}} \times {\prod{\Phi}})$. Suppose $V \colon {\mathbb{S}} \to {\mathbb{S}}$ is an iterative operator, step ${\mathit{s}} \in {\mathbb{S}}$, and $\lbrace \xi_n \rbrace \in (\,{\prod{\Xi}})^{\mathbb{N}}$ is a volatile excitation sequence. Define inductively a sequence of steps by setting ${\mathit{s}}_1 = {\mathit{s}}$ and ${\mathit{s}}_{i+1} = V({\mathit{s}}_i)$ for each $i \ge 1$. The walk of ${\mathit{s}} \in {\mathbb{S}}$ under $V$, assuming the volatile excitation sequence $\lbrace \xi_n \rbrace$, is the walk $\lbrace {\mathit{s}}_n \rbrace$.
An iterative operator’s $i^\text{th}$ *iteration* is its walk’s ${(i+1)}^\text{th}$ term.
### Automaton-induced iterative operators {#S:INDUCED_ITERATIVE_OPERATOR}
Iteration of automata guarantees properties not necessarily enjoyed by other classes of iterative operators: automata generate consistent steps having conjoint processes.
#### Automaton as transformation
\[D:ITERATIVE\_TRANSFORM\] Let $\langle \Psi, \Phi \rangle$ be a basis with persistent-volatile partition $\Psi = \Phi\Xi$ and step space ${\mathbb{S}} = \Lambda \times {\mathscr{F}} \times (\,{\prod{\Psi}} \times {\prod{\Phi}})$. Let ${\mathfrak{A}} = \langle \Psi, \Phi, {\mathscr{F}}\!, {\mathsf{A}}, \Lambda, \ell, \Delta \rangle$ be an actuated automaton. Let $\xi \in {\prod{\Xi}}$ be an event stimulus and $(\lambda, {\mathit{f}}, {\mathbf{f}}) = (\lambda, {\mathit{f}}, (\psi, \phi)) \in {\mathbb{S}}$ be a step. The transform $T_{{\mathfrak{A}}}$ induced by ${\mathfrak{A}}$ is $$(\lambda, {\mathit{f}}, (\psi, \phi)) \stackrel{{\mathfrak{A}}}{\mapsto} (\lambda', {\mathit{f}}', (\psi', \phi')),$$ where $$\begin{aligned}
\lambda' &= \Delta(\lambda, \psi).&\quad\text{[next locus]}\\
{
rity policies says that if the system is *output consistent*, *weakly step consistent* and *locally respects* $\rightsquigarrow$, then the system is secure for the policy $\rightsquigarrow$. The three conditions are called *unwinding conditions*. The unwinding theorem simplifies the security proofs by decomposing the global properties into unwinding conditions on each execution step.
The three unwinding conditions are as follows, and the unwinding theorem states that $output\_consistent \wedge weakly\_step\_consistent \wedge locally\_respect \longrightarrow noninterference$.
$$\begin{aligned}
output\_consistent \equiv s \stackrel{u}{\sim} t \longrightarrow s \stackrel{u}{\bumpeq} t
\end{aligned}$$
$$\begin{aligned}
weakly\_step\_consistent \equiv dom(a) \rightsquigarrow u \wedge s \stackrel{dom(a)}{\sim} t \\
\wedge s \stackrel{u}{\sim} t \longrightarrow step(a,s) \stackrel{u}{\sim} step(a,t)
\end{aligned}$$
$$\begin{aligned}
locally\_respect \equiv \neg (dom(a) \rightsquigarrow u) \longrightarrow s \stackrel{u}{\sim} step(a,s)
\end{aligned}$$
The general proofs of information flow security properties and unwinding conditions are available in [@rushby92; @von04] and an application of them on a concrete separation kernel is available in [@zhao16].
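To make the unwinding conditions concrete, here is a minimal toy sketch (not taken from the cited works): a two-domain machine under the policy $lo \rightsquigarrow hi$, where each domain's view plays the role of its output, and the two state-dependent conditions are checked by brute force over a finite state space:

```python
from itertools import product

ACTIONS = ("inc_lo", "inc_hi")     # each action is owned by one domain
DOMAINS = ("lo", "hi")

def dom(a):
    return "lo" if a == "inc_lo" else "hi"

def interferes(u, v):              # the policy u ~> v: lo may flow to hi
    return u == v or (u == "lo" and v == "hi")

def step(a, s):                    # deterministic one-step transition
    lo, hi = s
    return (lo + 1, hi) if a == "inc_lo" else (lo, hi + 1)

def view(u, s):                    # what domain u observes of a state
    return s[0] if u == "lo" else s   # hi may also see lo's part, per lo ~> hi

def equiv(u, s, t):                # s ~u~ t
    return view(u, s) == view(u, t)

states = list(product(range(3), repeat=2))

# locally_respects: actions of a non-interfering domain are invisible to u
assert all(equiv(u, s, step(a, s))
           for a in ACTIONS for u in DOMAINS for s in states
           if not interferes(dom(a), u))

# weakly_step_consistent
assert all(equiv(u, step(a, s), step(a, t))
           for a in ACTIONS for u in DOMAINS
           for s in states for t in states
           if interferes(dom(a), u) and equiv(dom(a), s, t) and equiv(u, s, t))
```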
### Temporal Separation Verification
Temporal separation ensures that the services provided by shared resources to applications in a partition cannot be affected by applications in other partitions. It includes the performance of the resources concerned, as well as the rate, latency, jitter, and duration of scheduled access to them [@Rushby00]. The temporal separation becomes critical when being applied in safety-critical systems. The scheduler of separation kernels implements temporal separation since it is responsible for assigning processor time to partitions. Temporal separation requires a two-level scheduler, partition level and process level, according to ARINC 653 standard.
The literature mainly deals with two issues for temporal separation: the schedulability analysis
rFormula, Operator:=intOperator
Else
rng.FormatConditions.Add Type:=intType, _
Formula1:=strFormula
End If
On Error GoTo 0
Set objCond = rng.FormatConditions(rng.FormatConditions.Count)
If intColorIndex <> -1 Then
objCond.Font.ColorIndex = intColorIndex
ElseIf dblRGB <> -1 Then
objCond.Font.Color = dblRGB
End If
Set fctApply = objCond
Exit Function
ConvertLocal:
With Range("A1") 'change this to an empty cell address - it is temporarily used to translate from local to normal formulas
.Formula = strFormula
strFormula = .FormulaLocal
.Formula = ""
End With
Resume
End Function
Q:
Eclipse giving errors whenever I try to export to executable jar
I'm trying to export a small program that I have made in Eclipse Indigo today to an executable, however, every time I do so one of two problems occur. The program uses one resource which I was hoping to put inside of the JAR, but Eclipse will not put it in the executable JAR no matter which option I tick when I export or which folder the resource is in - the first problem!
The second problem is that whenever I tell eclipse to "Extract required libraries into generated JAR" I receive the following error when I double click on the executable Jar:
Could not find the main class: main.Launcher. Program will exit.
I don't suppose that the second problem is too much of an issue at the minute but the first one is extremely frustrating so I would appreciate any help or advice. Thanks in advance.
(Strangely, and even more frustrating, if I go through the same process with a project I made a while ago with a previous version of Eclipse it works perfectly.)
The folder structure of the project is as follows:
In the project folder there are the directories .settings, bin, and src by default. I have put the resource, which is a PNG, in the bin folder, but I have also tried it in the src folder.
A:
First of all, I would like to thank Mike (marksml) for being so helpful and atte
at{S} = \bigcup x$, such that the text summary covers all events in $\mathcal{C}$.
**Semantic Search and Analytics**. The mined set of *events* can further be utilized for search and analytics. For this purpose we can utilize the inherent hierarchy in the semantic annotations. For example, a given year can be broken down into different months and subsequently the days in those months. Similarly, we can utilize the *type hierarchies* in named entities. Such as and are subtypes of . This can jointly be modeled by using the concept of a *data cube* [@han_dm] as shown in Figure \[fig:data\_cube\].
Formally, given a query $Q$, the objective would be to first model the mined set of events as a *data cube* and subsequently provide *data cube operations* [@han_dm]:
- roll ($\bigcirc$),
- slice ($\ominus$),
- dice ($\oplus$),
- drill up ($\bigtriangleup$),
- drill down ($\bigtriangledown$).
![Example data cube based on set of events $\mathcal{C}$[]{data-label="fig:data_cube"}](cube.pdf)
![Example data cube operations for the query `all races won during 2008 by usain bolt in china` []{data-label="fig:cube_opr"}](cubeopr.pdf)
As a concrete example consider the query `all races won during 2008 by usain bolt in china`. To produce an appropriate result the sequence of operations would be: first a slice on the entity ; second dice on ; and finally drill up to year (see Figure \[fig:cube\_opr\]).
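The cube operations can be sketched over a toy event set; the tuples, the dimension layout, and the operation signatures below are all hypothetical, chosen only to mirror the example query:

```python
from collections import Counter

# Each mined event is an (entity, year, location) tuple; all values hypothetical.
events = [
    ("usain_bolt", 2008, "china"),
    ("usain_bolt", 2008, "china"),
    ("usain_bolt", 2009, "germany"),
    ("asafa_powell", 2008, "china"),
]
ENTITY, YEAR, LOCATION = 0, 1, 2

def slice_(cube, dim, value):
    """Slice: fix one dimension to a single value."""
    return [e for e in cube if e[dim] == value]

def dice(cube, conditions):
    """Dice: restrict several dimensions at once (dim -> allowed values)."""
    return [e for e in cube if all(e[d] in vals for d, vals in conditions.items())]

def roll_up(cube, dim):
    """Roll up: aggregate one dimension away, counting events per remaining cell."""
    return Counter(tuple(v for i, v in enumerate(e) if i != dim) for e in cube)

# "all races won during 2008 by usain bolt in china":
result = dice(slice_(events, ENTITY, "usain_bolt"),
              {YEAR: {2008}, LOCATION: {"china"}})
assert len(result) == 2
```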
Data
====
**Corpora**. There are several readily available massive data sets. They are available from news corporation such as the *New York Times* [@nyt], *English Gigaword* [@gigaword]. These corpora have the benefit of being available with reliable publication dates and grammatically well-formed text. On larger scale are Web collections such as *ClueWeb’09* [@clueweb09]/’12 [@clueweb12], which are not always accompanied by reliable creation dates and many are ill-formed documents.
**Semantic Annotations**. The text corpora next need to be annotated for text mining. I explain how to obtain the different semantic a
“Estimation time" means the corresponding time, measured in seconds, when one runs Case 1 of Example \[example2\] in Section \[sec3\]. As for the other notation, $b$ is the subset size, $S$ is the number of subsets, and $R$ is the number of sampled subsets. The detailed setting is given in Section \[sec3\]. We run R version 3.5.2 on a desktop computer with an Intel(R) Core(TM) i7-4770 3.40GHz processor and 16.0GB RAM. Here we select $b$ for BLB and SDB to be fairly large so that most of the information in the data can be used. From Table \[table1\], one can see that our method substantially reduces the computational burden.
  Method       Cost time                Setting       Estimation time (seconds)
  ------------ ------------------------ ------------- ---------------------------
  BLB          $R\times S\times t(b)$   $b=n^{0.6}$   26.528
                                        $b=n^{0.8}$   209.810
  SDB          $S\times t(b)$           $b=n^{0.6}$   6.810
                                        $b=n^{0.8}$   38.363
  Our method   $K\times t(m) + c(K)$    $K=50$        1.031
                                        $K=100$       1.158
                                        $K=150$       1.285
\[table1\]
Simulations {#sec3}
===========
In this section, we investigate the finite sample performance of our proposed method. We also compare it with several existing alternatives in the literature. Example \[example1\] is designed for linear model. Example \[example2\] is for Logistic regression. Based on the suggestion in [@Shi2018], the numbers of subsets for steps 1 and 2 are 2000 and $10^4$ respectively in mMSE and mVC. As in [@Kleiner2014] and [@Sengupta2016], we set subset size $b=n^{\gamma}$ with $\gamma=0.6$ and $0.8$. The numbers of subsets in BLB and SDB are 20 and 500 respectively. The number of sampled subset is 100 in BLB. Furthermore, we set the replications of TB to be 100, $K=\{50,10
uss in greater detail the special case of line bundles on gerbes over projective spaces.
Generalities {#generalities}
------------
Let us first review some basic properties of line bundles on gerbes over projective spaces, and then we will outline their sheaf cohomology.
First, let us consider some simple explicit examples. The total space of the line bundle ${\cal O}(-m)$ over the projective space ${\mathbb P}^n$ can be described[^25] by a gauged linear sigma model with fields of $U(1)$ charges
$x_1$ $\cdots$ $x_{n+1}$ $p$
------- ---------- ----------- ------
$1$ $\cdots$ $1$ $-m$
Now, a ${\mathbb Z}_k$ gerbe over ${\mathbb P}^n$ can be described by a gauged linear sigma model in which the $n+1$ fields/homogeneous coordinates have weight $k$ instead of weight $1$, as discussed in [*e.g.*]{} [@glsm]. Then, for example, the GLSM with fields and $U(1)$ charges
$x_1$ $\cdots$ $x_{n+1}$ $p$
------- ---------- ----------- ------
$k$ $\cdots$ $k$ $-k$
is surely going to be the pullback of ${\cal O}(-1) \rightarrow {\mathbb P}^n$ to the gerbe.
However, how does one interpret GLSM’s defined by, for example:
$x_1$ $\cdots$ $x_{n+1}$ $p$
------- ---------- ----------- ------
$k$ $\cdots$ $k$ $-1$
This is the total space of what is sometimes referred to as the “${\cal O}(1/k)$” line bundle over the ${\mathbb Z}_k$ gerbe ${\mathbb P}^n_{[k,\cdots,k]}$. It is an example of a line bundle on the gerbe that is not a pullback of a line bundle on the base space – the gerbe has more bundles than the base space. More to the point, it can only be understood as the total space of a line bundle on a gerbe – so a physicist who was very careful in a study of GLSM’s would eventually be forced to discover gerbes in order to make sense of this example.
In addition to being a line bundle over the stack, the total space of the ${\cal O}(1/k)$ line bundle is also a fibered orbifold over the projective space ${\mathbb P}^n$ – it is a ty
ms.
Q:
Manually remove whitespace in String - JavaScript
I have attempted to make an algorithm that will do the same thing as this function: var string = string.split(' ').join('');
So if I have the following string: Hello how are you, it becomes Hellohowareyou
I don't want to use .replace or regex or .split
However, the algorithm doesn't seem to make any changes to the String:
var x = prompt("Enter String");
for (var i=0; i<=x.length;i++) {
if (x[i] == " ") {
x[i] = "";
}
}
alert(x);
A:
Your code is not working because strings do not support indexed assignment: there is a getter for indexed access, but no setter, so x[i] = "" is silently ignored. You cannot treat a string as an array. It's a special kind of object (an immutable one) whose characters can be read by index but never written.
You can fix your code by changing like below,
var x = prompt("Enter sum or 'e' to Exit");
var modified = "";
for (var i=0; i<x.length;i++) {
if (x[i] != " ") {
modified += x[i];
}
}
alert(modified);
Alternatively, you can do this more concisely with a regex:
var x = prompt("Enter sum or 'e' to Exit");
x = x.replace(/\s/g,"");
A:
Iterate over the string copying characters, skipping spaces. Your code doesn't work because strings are immutable, so you cannot change characters within the string by doing x[i] = 'c'.
See Are JavaScript strings immutable? Do I need a "string builder" in JavaScript?
var string = 'Hello How are you';
var noSpaces = '';
for (var i = 0; i < string.length; i++) {
if (string.charAt(i) != ' ' ) {
noSpaces += string.charAt(i);
}
}
alert(noSpaces);
Q:
Webdriver script won't print to text file
Issue:
My Java WebDriver script is creating the text file, printing everything to console properly, but will not print to the said text file. The file is always blank.
My Observation:
It's got something to do with how I have written the buffered writer write() and close() functions, but I can't quite put my finger on it, being a noob.
ly the points of ${\mathcal{B}}({\mathcal C})$), where morphisms are natural transformations between the inverse image functors.
\[th:filt\] There is an equivalence of categories $${\mathsf{Filt}}({\mathcal C})\, \,{\mathrel{
\settowidth{\@tempdima}{$\scriptstyle\tau$}
\settowidth{\@tempdimb}{$\scriptstyle\rho$}
\ifdim\@tempdimb>\@tempdima \@tempdima=\@tempdimb\fi
\mathop{\vcenter{
\offinterlineskip\ialign{\hbox to\dimexpr\@tempdima+2em{##}\cr
\rightarrowfill\cr\noalign{\kern.3ex}
\leftarrowfill\cr}}}\limits^{\!\tau}_{\!\rho}}}\, \,{\mathsf{Geom}}({\mathsf{Sets}}, {\mathcal{B}}({\mathcal C}))$$ where the functors $\tau$ and $\rho$ are defined, for a filtered functor $A\colon {\mathcal C}\to {\mathsf{Sets}}$ and a point $f\in {\mathsf{Geom}}({\mathsf{Sets}}, {\mathcal{B}}({\mathcal C})) $, by $$\tau(A)^*=- \otimes_{\mathcal C} A, \,\, \tau(A)_*= \underline{\mathrm{Hom}}_{\mathcal C}(A,-),$$ $$\rho(f)=f^*\cdot {\mathbf{y}}\colon {\mathcal C}\to {\mathcal{B}}({\mathcal C})\to {\mathsf{Sets}},$$ where ${\mathbf{y}}$ denotes the Yoneda embedding of ${\mathcal C}$ into ${\mathcal{B}}({\mathcal C})$.
We remark that Theorem \[th:filt\] remains valid in a wider setting where the topos ${\mathsf{Sets}}$ is replaced by an arbitrary topos ${\mathcal E}$. To formulate this result, known as Diaconescu’s theorem, one needs a suitable definition of a filtered functor from a small category to a topos, such that being filtered is equivalent to being flat, cf. [@MM VII.8]. For our purposes, we will need filtered functors to the topos of sheaves over a topological space, which we discuss in Section \[sec:bundles\].
Principal group bundles and group torsors
-----------------------------------------
The connection between filtered functors on a small category and geometric morphisms to the classifying topos is a well known and fundamental result in topos theory. In the special case where the category is a group, denote it by $G$, it is known [@MM VIII.2] that the category of filtered functors $G\to {\mathcal
(dnName, certSerialNumber, startDate, endDate, dnName, this.publicKey);
BasicConstraints basicConstraints = new BasicConstraints(true);
certBuilder.addExtension(new ASN1ObjectIdentifier("2.5.29.19"), true, basicConstraints);
x509Certificate = new JcaX509CertificateConverter().setProvider(provider).getCertificate(certBuilder.build(contentSigner));
} catch (CertIOException | CertificateException | OperatorCreationException ex) {
x509Certificate = null;
}
return x509Certificate;
}
Clase para manejar RSA:
import java.io.UnsupportedEncodingException;
import java.math.BigInteger;
import java.security.InvalidKeyException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.Provider;
import java.security.PublicKey;
import java.security.Security;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Base64;
import java.util.Calendar;
import java.util.Date;
import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.NoSuchPaddingException;
import org.bouncycastle.asn1.ASN1ObjectIdentifier;
import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.asn1.x509.BasicConstraints;
import org.bouncycastle.cert.CertIOException;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.OperatorCreationException;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
/**
* Clase para cifrar los datos de la aplicación.
*
* @author Jose Montes
*/
public class EncryptionRSA {
private PublicKey publicKey;
private PrivateKey privateKey;
private final int keySize;
/**
performance on unperturbed images when defenses are used, we performed the experiment below. For the FGS and IGS attacks, unless otherwise noted, an epsilon of 0.3 was used as is typical in the literature.
Performance of Defended Models on Clean Data
--------------------------------------------
One of the basic assumptions of our approach is that there exist operations that can be applied to the feature embeddings generated by the layers of a deep classification model, which preserve the classification accuracy of the network while removing the adversarial signal. As an example of such transformations, we propose a variational autoencoder. We have evaluated the effect of inserting VAEs on two models: Logistic Regression (LR) and a 2 convolutional layer LeNet on the MNIST dataset. The comparison of the performance of these methods is summarized in Table \[table:performance\_reduction\]. Surprisingly, on MNIST it is possible to train quite simple variational autoencoding models to recreate feature embeddings with sufficient fidelity to leave the model performance virtually unchanged. Reconstructed embeddings are visualized in Supplementary Materials. Supplementary Figure 3 shows how the defense reduces the distance between adversarial and normal examples in the various layers of LeNet.
Model Undef. Accuracy Deterministic Def. Accuracy Stochastic Def. Accuracy
----------- ----------------- ----------------------------- --------------------------
LR-VAE 0.921 0.907 0.914
LeNet-VAE 0.990 0.957 0.972
: Performance reduction caused by defenses on the MNIST dataset.[]{data-label="table:performance_reduction"}
Transferability of Attacks Between Defense Arrangements
-------------------------------------------------------
The premise of our defense is that the exponentially many arrangements of noise removing operations are not all exploitable by the same set of adversarial images. The worst case sce
* Constructor, define el tamaño de la clave en 2048bytes por defecto.
*/
public EncryptionRSA() {
this.keySize = 2048;
}
/**
* Constructor, permite definir el tamaño de la clave en bytes. Ejemplo:
* EncryptionRSA(1024); La clave se generará con un tamaño de 1024 bytes y
* no con los 2048 bytes que viene por defecto.
*
* @param keySize | Tamaño de la clave en bytes.
*/
public EncryptionRSA(int keySize) {
this.keySize = keySize;
}
public PublicKey getPublicKey() {
return publicKey;
}
public PrivateKey getPrivateKey() {
return privateKey;
}
public void setPublicKey(PublicKey publicKey) {
this.publicKey = publicKey;
}
public void setPrivateKey(PrivateKey privateKey) {
this.privateKey = privateKey;
}
/**
* Genera las claves publica/privada. Se pueden obtener con getPrivateKey()
* y getPublicKey().
*/
public void buildParKey() {
try {
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
keyPairGenerator.initialize(this.keySize);
KeyPair keyPair = keyPairGenerator.genKeyPair();
this.publicKey = keyPair.getPublic();
this.privateKey = keyPair.getPrivate();
} catch (NoSuchAlgorithmException ex) {
}
}
/**
* Función para cifrar con la clave pública generada.
*
* @param message | Mensaje a cifrar
* @return | byte[] cifrado
*/
public byte[] encryptMessage(String message) {
byte[] finalMessage = null;
try {
Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
cipher.init(Cipher.ENCRYPT_MODE, this.publicKey);
finalMessage = cipher.doFinal(message.getBytes("UTF-8"));
} catch (IllegalBlockSizeException | InvalidKeyException | NoSuchAlgorithmException | NoSuchPaddingException | BadPaddingException | UnsupportedEncodingException ex) {
}
- an overview of the research problems (Section \[sec:problem\]);
- available corpora, test sources and evaluation measures for research (Section \[sec:evaluation\]);
- discussion of few open technical problems (Section \[sec:discussion\]).
Related Work
============
\[sec:background\]
In this section I discuss the progress already made in the area of analyzing different semantic annotations in isolation as well as in conjunction for some of the problems proposed.
**Temporal Information Retrieval and Extraction**. Researchers have considered only temporal annotations in text corpora to improve retrieval effectiveness by analyzing the time sensitivity of keyword queries and incorporating the time dimension in retrieval models. Some methods of analysis of time-sensitive queries rely on publication dates of documents [@diaz_profile; @nattiya_2010], while others also look at the temporal expressions in document contents [@dhruv_2014]. Several works also take into account the time dimension for re-ranking documents [@klaus_2010] and diversifying them along time [@klaus_2013; @nattiya_2014]. One of the seminal works in extracting temporal events was by Ling and Weld [@ling_tie]. They outline a probabilistic model to solve the problem of extracting relations from text with temporal constraints.
**Important Events in Annotated Corpora**. One of the most important seminal works in identifying existing and emerging events was the set of tasks in *Topic Detection and Tracking* (TDT) [@tdt_book]. The TDT program aimed to “search, organize and structure” broadcast news media from multiple sources. The five tasks within the ambit of TDT were topic tracking, link detection, topic detection, first story detection, and story segmentation. The topic tracking task required building a system to detect *on-topic stories* in an evaluation corpus after being trained on a set of *on-topic* stories. The link detection task involved answering a boolean query as to whether two given *stories* are related b
| 149
| 45
| 799
| 191
| null | null |
github_plus_top10pct_by_avg
|
a marked impact on the distance between normal and adversarial examples. Thus, we can conclude that part of the reason for why the defense works is that it dampens the effect of adversarial noise.
[.5]{} ![L-$\infty$ distance between adversarial and normal images as a function of layer number for LeNet attacked with FGS for the MNIST dataset.[]{data-label="fig:lenet_fgs_distances"}](figures/lenet_vae_fgs_0_mnist_distances_lineplot.pdf "fig:"){width="1\linewidth"}
Effect of Attacks on Averaged Defense {#effect-of-attacks-on-averaged-defense .unnumbered}
-------------------------------------
Since the IGS and CW2 attacks are iterative, they have the ability to see multiple defense arrangements while creating adversarial examples. This can result in adversarial examples that might fool any of the available defense arrangements. Indeed, this seems to happen for the CW2 attack shown in Table \[table:defense\_success\_lenet\]. The cause of this is most easily explained by the illustration in Figure \[fig:failure\_mode\]. Since the models we trained were not deep enough, it was possible for the iterative attacks to see all defense combinations when creating adversarial examples, so our defense was defeated. We believe that given a deep enough network of 25 or more layers, it would be computationally infeasible for an adversary to create examples that fool the stochastic ensemble.
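The stochastic-ensemble idea can be sketched in a few lines (a toy illustration, not the authors' implementation; all names and thresholds are ours):

```python
import random

class StochasticEnsemble:
    """Toy stochastic defense: each query is answered by a randomly
    drawn member, so a one-shot attacker cannot know in advance which
    arrangement its gradient step will face. Purely illustrative."""

    def __init__(self, members, seed=None):
        self.members = list(members)
        self.rng = random.Random(seed)

    def predict(self, x):
        # A fresh member is drawn per query.
        return self.rng.choice(self.members)(x)

# Two hypothetical defense arrangements with slightly different thresholds;
# an iterative attack that probes both can still craft a point fooling both.
defenses = [lambda x: int(x > 0.5), lambda x: int(x > 0.6)]
ensemble = StochasticEnsemble(defenses, seed=0)
answers = [ensemble.predict(0.55) for _ in range(100)]  # answer depends on the drawn member
```

The failure mode described above corresponds to an attacker optimizing against both members alternately until a single input crosses both decision boundaries.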
----------- ------- ------- ------- ------- -------
LR-VAE 0.920 0.032 0.922 0.473 0.921
LeNet-VAE 0.990 0.014 0.977 0.140 0.984
----------- ------- ------- ------- ------- -------
: Success rate of CW2 attack on LR and LeNet defended with VAEs. []{data-label="table:defense_success_lenet"}
![Illustration of how the defense can fail against iterative attacks. Even though the two defense arrangements have orthogonal gradients, thereby exhibiting low transferability of attacks, an iterative attack that alternates between optimizing for either arrangement can end up fooling both.[]{data-
| 150
| 17
| 474
| 201
| null | null |
github_plus_top10pct_by_avg
|
x)
=
\begin{cases}
\infty & (0 < a < 1), \\
1 & (a = 1), \\
0 & (a > 1),
\end{cases}
&
&\lim_{x \to +\infty} \frac{d}{dx} y_{1}(a, x)
= 0 \quad (a > 0), \allowdisplaybreaks \\
&\lim_{x \to 0+} y_{1}(a, x)
= 0 \quad (a > 0),
&
&\lim_{x \to +\infty} y_{1}(a, x)
= 0 \quad (a > 0). \end{aligned}$$ From these results, Lemma $\ref{lem:4-1-3}$, and the fact that the signs of $\frac{d}{dx} y_{i}(a, x)$ and $y_{i + 1}(a, x)$ ($i = 1, 2, 3$) are equal to each other for $a > 0$ and $x > 0$, we obtain Tables $3$ and $4$. From Tables $3$ and $4$, we can verify that $y_{1}(a, x) > 0$ holds for $a > 0$ and $x > 0$. This completes the proof of the lemma.
$x$ $\;0\;$ $\cdots$ $\;\frac{3a + 1}{2}\;$ $\cdots$ $\;p_{4}(a)\;$ $\cdots$ $+\infty$
---------------------------- --------- ---------- ------------------------ ---------- ---------------- ---------- -----------
$\frac{d}{dx} y_{4}(a, x)$ $0$ $+$ $0$ $-$ $-$ $-$ $0$
$y_{4}(a, x)$ $0$ $+$ $+$ $+$ $0$ $-$ $-$
: Case of $0 < a < 1$
$x$ $\;0\;$ $\cdots$ $\;\frac{3a + 1}{2}\;$ $\cdots$ $\;+\infty\;$
---------------------------- --------- ---------- ------------------------ ---------- -------------------------------------------------------
$\frac{d}{dx} y_{4}(a, x)$ $0$ $+$ $0$ $-$ $0$
$y_{4}(a, x)$ $0$ $+$ $+$ $+$ $\begin{matrix}0\;\;(a = 1)\\ +\;(a > 1)\end{matrix}$
: Case of $a \geq 1$
$x$ $\;0\;$ $\cdots$ $\;p_{2}(a)\;$ $\cdots$ $\;p_{3}(a)\;$ $\cdots$ $\;p_{4}(a)\;$ $\cdots$ $\;+\infty\;$
---------------------------- ----------- ---------- ---------------- ---------- ---------------- ---------- ---------------- ---------- ---------------
| 151
| 2,037
| 269
| 157
| null | null |
github_plus_top10pct_by_avg
|
Interpretable AOG Representations from Convolutional Networks via Active Question Answering
---
Introduction
============
Convolutional neural networks [@CNN; @CNNImageNet; @ResNet; @DenseNet] (CNNs) have achieved superior performance in many visual tasks, such as object detection and segmentation. However, in real-world applications, current neural networks still suffer from low interpretability of their middle-layer representations and data-hungry learning methods.
Thus, the objective of this study is to mine thousands of *latent patterns* from the mixed representations in conv-layers. Each latent pattern corresponds to a constituent region or a contextual region of an object part. We use an interpretable graphical model, namely an And-Or graph (AOG), to organize latent patterns hidden in conv-layers. The AOG maps implicit latent patterns to explicit object parts, thereby explaining the hierarchical representation of objects. We use very few (*e.g.* 3–20) part annotations to mine latent patterns and construct the AOG to ensure high learning efficiency.
As shown in Fig. \[fig:rawMapToModel\], compared to ordinary CNN representations where each filter encodes a mixture of textures and parts (evaluated by [@Interpretability]), we extract clear object-part representations from CNN features. Our weakly-supervised learning method enables people to model objects or object parts on-the-fly, thereby ensuring broad applicability.
{width="\linewidth"}
**And-Or graph representations:**[` `]{} As shown in Fig. \[fig:rawMapToModel\], the AOG represents a semantic hierarchy on the top of conv-layers, which consists of four layers, *i.e.* the *semantic part*, *part templates*, *latent patterns*, and *CNN units*. In the AOG, AND nodes represent compositional regions of a part, and OR nodes represent a list of alternative template/deformation candidates for a local region.
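These AND/OR node semantics can be captured in a minimal data-structure sketch (a toy illustration; the field names and the scoring rule are our own, not the paper's):

```python
class Node:
    """Minimal AND-OR graph node (illustrative field names).

    AND nodes compose all of their children (constituent regions of a
    part); OR nodes select one child (alternative template/deformation).
    """

    def __init__(self, kind, name, children=()):
        assert kind in ("AND", "OR", "TERMINAL")
        self.kind, self.name, self.children = kind, name, list(children)

    def parse(self, scores):
        """Toy parse: an OR node takes its best child, an AND node sums
        its children, and a terminal reads its score from `scores`."""
        if self.kind == "TERMINAL":
            return scores.get(self.name, 0.0)
        child_scores = [c.parse(scores) for c in self.children]
        return max(child_scores) if self.kind == "OR" else sum(child_scores)

# Semantic part (OR) -> part templates (AND) -> latent patterns (terminals).
head = Node("OR", "head", [
    Node("AND", "frontal", [Node("TERMINAL", "p1"), Node("TERMINAL", "p2")]),
    Node("AND", "profile", [Node("TERMINAL", "p3")]),
])
score = head.parse({"p1": 0.4, "p2": 0.5, "p3": 0.6})
```

The OR node here plays the role of selecting among part templates, while each AND node aggregates the latent-pattern responses of its constituent regions.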
- Layer 1: the top *semantic part* node is an OR
| 152
| 16
| 240
| 167
| 1,164
| 0.793312
|
github_plus_top10pct_by_avg
|
…it points to the left.
Is there an XML attribute in Android that keeps an ImageButton from changing when the language switches to RTL? Something like disabling RTL. I need this particular View, an ImageButton, to ignore the RTL switch.
A:
You can disable RTL for the whole application in AndroidManifest.xml:
<application android:supportsRtl="false">
If ic_keyboard_arrow_right_24dp is a vector drawable, you can change this (API 19+):
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="24dp"
android:height="24dp"
android:autoMirrored="false" <-- this line
android:viewportHeight="24.0"
android:viewportWidth="24.0">
...
</vector>
You can also use the resource folders:
drawable-ldltr // LTR
drawable-ldrtl // RTL
Q:
Windows 8.1 App C# XAML help: Show message if no radio buttons are selected
I'm trying to code my ESurvey application. I have a list of radio buttons. Here is the code on SurveyPage1.xaml:
<StackPanel Grid.Row="1" Margin="-10,10,10,-10">
<TextBlock Text="PARLIAMENT VISITOR CENTRE SURVEY" FontWeight="Bold" FontSize="48" Margin="100,0,0,0" VerticalAlignment="Top"/>
<TextBlock Text="1. How would you describe your overall visit experience?" Margin="100,100,0,0" FontSize="18"/>
<RadioButton Name="radioExcellent" Content="Excellent" Margin="100,10,0,0" FontSize="12"/>
<RadioButton Name="radioGood" Content="Good" Margin="100,10,0,0" FontSize="12"/>
<RadioButton Name="radioAverage" Content="Average" Margin="100,10,0,0" FontSize="12"/>
<RadioButton Name="radioPoor" Content="Poor" Margin="100,10,0,0" FontSize="12"/>
</StackPanel>
<Button Name="nextButton1" Content="Next" HorizontalAlignment="Left" Margin="1250,525,0,0" Grid.Row="1" VerticalAlignment="Center" Background="#FFED2E38" Click="But
| 153
| 48
| 132
| 116
| null | null |
github_plus_top10pct_by_avg
|
W/m^2^; (**c**) MLP model at *G* = 909.0 W/m^2^; (**d**) CNN model at *G* = 153.7 W/m^2^; (**e**) CNN model at *G* = 653.4 W/m^2^; (**f**) CNN model at *G* = 909.0 W/m^2^.](sensors-20-02119-g006){#sensors-20-02119-f006}
{#sensors-20-02119-f007}
{#sensors-20-02119-f008}
sensors-20-02119-t001_Table 1
######
The selected working conditions.
Working Conditions Irradiance *G* (W/m^2^) Temperature of PV Module Back-Surface *T*~1~ (°C) Ambient Temperature *T*~2~ (°C) Relative Humidity *H*~a~ (%) Atmospheric Pressure *P*~a~ (hPa)
-------------------- ------------------------- --------------------------------------------------- --------------------------------- ------------------------------ -----------------------------------
1 153.7 16.8 21.9 83.2 1002.9
2 237.5 23.3 24.6 42.3 1001.4
3 328.7 26.9 24.7 69.7 997.1
4 445.5 22.7 24.8 58.4 1007.6
5 537.9 28.2
| 154
| 1,480
| 785
| 212
| null | null |
github_plus_top10pct_by_avg
|
, {\mbox{\boldmath $\alpha$}}^{(i)}, {\mbox{\boldmath $\alpha$}}^{(i+1/2)})$ are labeled by $(\mu, \mu^+_{(r)}, \mu)$. Then for a tableau $P$ which goes through $\mu$ at the $(i-1/2)$-th and the $(i+1/2)$-th coordinate of $P$, we have $$\rho(f_i)(v_P)
=
\sum_{r} \frac{h(\mu)}{h(\mu^{+}_{(r_0)})}v(\mu^+_{(r)}, \mu).$$ Here $\mu^{+}_{(r)}$ runs through Young diagrams obtained from $\mu$ by adding one box.
![Representation spaces for $\rho(f_i)$[]{data-label="fig:repF4"}](21.eps)
Suppose that tableau $\{q_r\}$ go through paths in the picture illustrated in Figure \[fig:repF4\]. Then we have $$\rho(f_i)(v_0\ v_1)
= (v_0\ v_1)
\begin{pmatrix}
h(\widehat{\emptyset})/h(\widetilde{\emptyset})
&h(\widehat{\emptyset})/h(\widetilde{{\mbox{\tiny\yng(1)}}})\\
h(\widehat{\emptyset})/h(\widetilde{\emptyset})
&h(\widehat{\emptyset})/h(\widetilde{{\mbox{\tiny\yng(1)}}})
\end{pmatrix}
= (v_0\ v_1)
\begin{pmatrix}
\frac{1}{Q} &\frac{Q-1}{Q}\\
\frac{1}{Q} &\frac{Q-1}{Q}
\end{pmatrix}$$ and $$\begin{aligned}
\rho(f_i)(v_2\ v_{3}\ v_{4})
&=& (v_2\ v_{3}\ v_{4})
\begin{pmatrix}
h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(2)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1,1)}}})\\
h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(2)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1,1)}}})\\
h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(2)}}})
&h(\widehat{{\mbox{\tiny\yng(1)}}})/h(\widetilde{{\mbox{\tiny\yng(1,1)}}})
\end{pmatrix}\\
&=& (v_2\ v_{3}\ v_{4})
\begin{pmatrix}
\frac{Q-1}{Q(Q-2)} &\frac{Q-3}{2(Q-2)} &\frac{Q-1}{2Q}\\
\frac{Q-1}{Q(Q-2)} &\frac{Q-3}{2(Q-2)} &\frac{Q-1}{2Q}\\
\frac{Q-1}{Q(Q-2)} &\frac{Q-3}{2(Q-2)} &\frac{Q-1}{2Q}
\end{pmatrix
| 155
| 1,834
| 286
| 205
| null | null |
github_plus_top10pct_by_avg
|
on divergence if no specific notation is made. However, MGoF used only Kullback-Leibler divergence due to its special mechanism. We use a “+” to denote algorithms optimized by a given $\alpha$.
Experiments on Koubei Data Set {#sec:exp-raw}
------------------------------
We first tested our algorithms on the Koubei data set in order to see whether and why the algorithm works. Anomalies were randomly selected days replaced by their corresponding click-farmed versions. To play the role of the purchasing platform, we investigated two levels of transaction distribution. The first level simply draws a histogram aligned to time spans. The second level draws a histogram over the sub-volumes in each time span (i.e., a histogram over the frequencies in the first-level histogram, as shown in Fig. \[fig:histogram-example\]).
![Examples of 1st and 2nd Level Histogram[]{data-label="fig:histogram-example"}](./HistogramExample.pdf){width="\linewidth"}
On the raw data set, we had no choice but to set one hour as a basket, while on the enhanced data set, we adopted Eq. (\[equ:step-size\]) to determine the step size automatically. To test SDD-E, we randomly selected 30 correct days and 10 click-farmed days as normal and anomalous evidence, respectively. Here $\alpha = 0.2$. The results are shown in Tables \[tab:result-koubei-raw\] and \[tab:result-koubei-enhanced\].
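The two observation levels described above can be sketched directly (an illustration; the function name and basket size are ours):

```python
from collections import Counter

def two_level_histograms(timestamps, span=3600):
    """Build the two observation levels described in the text.

    Level 1: transaction count per time span (here, per `span` seconds).
    Level 2: a histogram over those per-span counts themselves.
    """
    level1 = Counter(int(t // span) for t in timestamps)
    level2 = Counter(level1.values())
    return level1, level2

# Three transactions in hour 0 and one in hour 2.
l1, l2 = two_level_histograms([10, 20, 30, 7500])
```

The second-level histogram is what lets the detector compare the *shape* of a day's activity rather than only its raw volumes.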
-------------------- ------------ ------------ ----------- ------------ ------------ ------------ ----------- ----------- ------------ ------------ ----------- ------------ ------------ ------------ ----------- -----------
**Pre(%)** **R
| 156
| 819
| 553
| 231
| null | null |
github_plus_top10pct_by_avg
|
ant to get:
suma producto dia
4 1 FRI
5 3 TUE
Only the top product of each day (with the max(suma) of each group).
I tried different approaches, like subqueries, but the aggregate function used makes things a bit difficult.
A:
You can still use DISTINCT ON to get this done in a single query level without a subquery, because DISTINCT is applied after GROUP BY and aggregate functions (and after window functions):
SELECT DISTINCT ON (3)
sum(d.cantidad) AS suma
, d.producto_id AS producto
, to_char(o.fecha AT TIME ZONE 'MST', 'DY') AS dia
FROM detalle_orden d
LEFT JOIN orden o ON o.id = d.order_id
GROUP BY o.fecha, d.producto_id
ORDER BY 3, 1 DESC NULLS LAST, d.producto_id;
Notes
This solution returns exactly one row per dia (if available). If multiple products tie for top sales, my arbitrary (but deterministic and reproducible) pick is the one with the smaller producto_id.
If you need all peers tying for one day, use rank() as suggested by @Houari.
The sequence of events in an SQL SELECT query is explained in this related answer:
Best way to get result count before LIMIT was applied
date_trunc() was just noise in the calculation of dia. I removed it.
I added NULLS LAST to the descending sort order since it is unclear whether there might be rows with NULL for suma in the result:
PostgreSQL sort by datetime asc, null first?
The numbers in DISTINCT ON and GROUP BY are just a syntactical shorthand notation for convenience. Similar:
PostgreSQL equivalent for MySQL GROUP BY
As are the added table aliases (syntactical shorthand notation).
Basics for DISTINCT ON
Select first row in each GROUP BY group?
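For comparison, the same one-row-per-day pick, including the tie-break on the smaller producto_id, can be sketched in plain Python (illustrative only; row layout is assumed to be (suma, producto, dia)):

```python
def top_product_per_day(rows):
    """Plain-Python analogue of the DISTINCT ON query above: keep one
    row per day with the highest suma, breaking ties on the smaller
    producto_id (the same tie-break as the ORDER BY clause)."""
    best = {}
    for suma, producto, dia in rows:
        current = best.get(dia)
        # Sort key mirrors ORDER BY dia, suma DESC, producto_id.
        if current is None or (-suma, producto) < (-current[0], current[1]):
            best[dia] = (suma, producto)
    return {dia: {"suma": s, "producto": p} for dia, (s, p) in best.items()}

rows = [(4, 1, "FRI"), (2, 2, "FRI"), (5, 3, "TUE"), (5, 1, "TUE")]
result = top_product_per_day(rows)
```

Note how the TUE tie (two products with suma 5) resolves to producto 1, exactly as DISTINCT ON would under the given ORDER BY.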
Q:
Invalid syntax when evaluating expressions
I'm getting
SyntaxError: invalid syntax
and
SyntaxError: invalid token
when evaluating a boolean expression in a dictionary:
Test Case
test_expressions = ['(3530A338A58 OR 3533A500H65 OR 3533-555A57 OR 3533A593A36 OR 3533A637A60 OR 3533A636A67 OR 3533A637D30 OR 3533H370A53 OR 3533H370T63 OR 3533H693H95 OR 3533
| 157
| 4,541
| 12
| 139
| 2,420
| 0.779886
|
github_plus_top10pct_by_avg
|
} +
\sum_{K \neq L} W_{iK} \hat{S}_{KL}^{(2)} \left\{ (W^{\dagger}) \right\}_{L j}.
\label{S-alpha-beta-4th-[3]}\end{aligned}$$ We do not display explicitly the expression of each term in (\[Sab-4th\]). But, the notation of $S_{\alpha \beta}^{(4)} [n]_{ \text{ diag } }$ and $S_{\alpha \beta}^{(4)} [n]_{ \text{ offdiag } }$ will be transported to the notation for the oscillation probability such that $2 \mbox{Re} \left[ \left( S^{(0)}_{\alpha \beta} \right)^{*} S_{\alpha \beta}^{(4)} [n]_{ \text{ diag } } \right]$. Similarly, to make the equation fit to a single page we present the first and the second terms of $S_{\alpha \beta}^{(4)} [3]$ in (\[S-alpha-beta-4th-\[3\]\]) separately, $S_{\alpha \beta}^{(4)} [3]_\text{First} = \sum_{k L} (UX)_{\alpha k} W^*_{\beta L} \hat{S}_{kL}^{(3)}$ and $S_{\alpha \beta}^{(4)} [3]_\text{Second} = \sum_{L k} W_{\alpha L} (UX)^*_{\beta k} \hat{S}_{L k}^{(3)}$, whose notations are also transported to the oscillation probability.
Expression of the oscillation probability in fourth order in $W$ {#sec:expression-probability-4th}
================================================================
The oscillation probability to second order in $W$ is given in eq. (\[P-beta-alpha-0th+2nd\]) in section \[sec:probability-2nd\]. What is left is, therefore, the expressions of the oscillation probability in fourth order in $W$, the explicit form of the two terms in (\[P-beta-alpha-4th-def\]), $P(\nu_\beta \rightarrow \nu_\alpha) =
\left| S^{(2)}_{\alpha \beta} \right|^2
+ 2 \mbox{Re} \left[ \left( S^{(0)}_{\alpha \beta} \right)^{*} S^{(4)}_{\alpha \beta} \right] $.
Second order $S$ matrix squared term: $\left| S^{(2)}_{\alpha \beta} \right|^2$ {#sec:second-order-square}
-------------------------------------------------------------------------------
The $S$ matrix element $S^{(2)}_{\alpha \beta}$ in eq. (\[S-alpha-beta-2nd\]) contains four terms. To prevent too long expressions, we divide $\left| S^{(2)}_{\alpha \beta} \right|^2$ into the two terms, one sum of each term squared and the
| 158
| 552
| 368
| 213
| null | null |
github_plus_top10pct_by_avg
|
One can verify that the unique homomorphic extension of $c$, denoted by $\overline{c}$, is injective. Therefore, we conclude that the function $c$ is an adaptive code of order two.
Let $\Sigma$, $\Delta$, and ${\it Bool}=\{{\it True}, {\it False}\}$ be alphabets. We define the function ${\it Prefix}:{\it AC}(\Sigma,\Delta,n)\rightarrow {\it Bool}$ by: $${\it Prefix}(c)=
\left\{
\begin{array}{ll}
{\it True} & \textrm{if $C_{u}$ is a prefix code, for all $u\in\Sigma^{\leq{n}}$,} \\
{\it False} & \textrm{otherwise.}
\end{array}
\right.$$ The function *Prefix* can now be used to translate the hypothesis in Theorem 2.1: if ${c:\Sigma\times\Sigma^{\leq{n}}\rightarrow\Delta^{+}}$ is a function satisfying ${\it Prefix}(c)={\it True}$, then we conclude that $c\in{{\it AC}(\Sigma,\Delta,n)}$. Let $c\in{{\it AC}(\Sigma,\Delta,n)}$ be an adaptive code satisfying ${\it Prefix}(c)={\it True}$. Then, the algorithm **Decoder** described below runs in linear time.
**Decoder**$(c,u)$
input: $c\in{{\it AC}(\Sigma,\Delta,n)}$ such that ${\it Prefix}(c)={\it True}$ and $u\in\Delta^{+}$;
output: $w\in\Sigma^{+}$ such that $\overline{c}(w)=u$;
begin
1. $w:=\lambda$; $i:=1$; $Last:=\lambda$; $length:=|u|$;
2. while $i\leq{length}$ do
begin
3. Let $\sigma\in\Sigma$ be the unique symbol of $\Sigma$ with the property that $c(\sigma,Last)$ is a prefix of $u(i)\cdot u(i+1)\cdot\ldots\cdot u(length)$;
4. $w:=w\cdot\sigma$;
5. $i:=i+|c(\sigma,Last)|$;
6. if $|Last|<n$
7. then $Last:=Last\cdot\sigma$;
8. else $Last:=Last(|Last|-n+2)\cdot\ldots\cdot Last(|Last|)\cdot\sigma$;
end
9. return $w$;
end
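The listing translates to a short runnable sketch (identifiers mirror the algorithm; the example code table below is invented for illustration, with each per-context codeword set prefix-free so that ${\it Prefix}(c)={\it True}$):

```python
def decode(c, alphabet, n, u):
    """Python sketch of Decoder: c maps (symbol, context) pairs to
    codewords; the prefix property guarantees that the symbol chosen
    in step 3 is unique."""
    w, i, last = [], 0, ""
    while i < len(u):
        # Step 3: the unique symbol whose codeword prefixes the rest of u.
        sigma = next(s for s in alphabet if u.startswith(c[(s, last)], i))
        w.append(sigma)               # step 4
        i += len(c[(sigma, last)])    # step 5
        last = (last + sigma)[-n:]    # steps 6-8: keep the last n symbols
    return "".join(w)

# A made-up adaptive code of order 1 over Sigma = {a, b}, Delta = {0, 1}.
c = {("a", ""): "0", ("b", ""): "1",
     ("a", "a"): "0", ("b", "a"): "10",
     ("a", "b"): "11", ("b", "b"): "0"}
decoded = decode(c, "ab", 1, "010011")  # decodes the encoding of "abba"
```

Each input position is consumed exactly once and the context window `last` is updated in constant time, which is the linear-time behavior claimed above.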
In the third step of the algorithm given above, the symbol denoted by $\sigma$ is unique with that
| 159
| 543
| 271
| 192
| 1,794
| 0.785661
|
github_plus_top10pct_by_avg
|
$m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'=
\begin{pmatrix} a_i''&b_i''&c_i''\\ d_i''&e_i''&f_i''\\ g_i''&h_i''&k_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'=
\begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''&\tilde{c}_i''\\ \tilde{d}_i''&\tilde{e}_i''&\tilde{f}_i''\\ \tilde{g}_i''&\tilde{h}_i''&\tilde{k}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-2) \times (n_i-2)$-matrices, etc. Then $$\left\{
\begin{array}{l}
s_i''=s_is_i'+\pi (r_iy_i'+t_iv_i'+a_i'');\\
r_i''=s_ir_i'+r_i+t_iv_i'+b_i''+ \pi (r_ix_i'+\tilde{b}_i'') ;\\
t_i''=s_it_i' +t_i+\pi (r_iu_i'+t_iw_i'+c_i'');\\
y_i''=y_is_i'+y_i' +\pi (x_iy_i'+u_iv_i'+d_i'');\\
x_i''=x_i+x_i'+y_ir_i'+u_iz_i'+e_i''+\pi (x_ix_i'+\tilde{e}_i'');\\
u_i''=u_i+u_i'+y_it_i'+\pi(u_iw_i'+x_iu_i'+f_i'');\\
v_i''=v_is_i'+z_iy_i'+v_i'+g_i''+\pi (w_iv_i'+\tilde{g}_i'');\\
z_i''=z_i+z_i'+h_i''+\pi (v_ir_i'+ z_ix_i'+w_iz_i'+\tilde{h}_i'');\\
w_i''=w_i+w_i'+v_it_i'+z_iu_i'+k_i''+\pi (w_iw_i'+\tilde{k}_i'').
\end{array} \right.$$
5. Assume that $i$ is even and $L_i$ is *of type I*. Let $\tilde{k}_{i-2, i}''$ (resp. $\tilde{k}_{i+2, i}''$) be the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the formal matrix $\tilde{m}_{i-2, i}''$ (resp. $\tilde{m}_{i+2, i}''$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^o$ and let $\tilde{k}_{i-2, i}''$ (resp. $\tilde{k}_{i+2, i}''$) be the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the formal matrix $\tilde{m}_{i-2, i}''$ (resp. $\tilde{m}_{i+2, i}''$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^e$. Then the formal sum $$\tilde{z}_i''+\delta_{i-2}\tilde{k}_{i-2, i}''+\delta_{i+2}\tilde{k}_{i+2, i}''$$ equals $$z_i+z_i'+(m_{i, i-1}m_{i-1, i}')^{\dag}+(m_{i, i+1}m_{i+1, i}'))^{\dag}+\delta_{i-2}\cdot(k_{i-2, i}+k_{i-2, i}'+(m_{i-2, i-1}m_{i-1, i}')^{\dag})+$$ $$\delta_{i+2}\cdot(k_{i+2, i}+k_{i+2, i}'+(m_{i+2, i+1}m_{i+1, i}')^{\dag})+\pi \tilde{z}_i''^{\ddag}$
| 160
| 1,838
| 266
| 207
| null | null |
github_plus_top10pct_by_avg
|
computationally much simpler, as it involves fewer random variables and a simpler set of conditions (no nonnegativity constraints). However, CbD has the advantage of being more general than NP, as it can include cases where no NP distributions exist due to violations of the no-signaling condition [@dzhafarov_probabilistic_2014; @DKL2014; @KDL2014].
#### Acknowledgments. This work was supported by NSF grant SES-1155956 and AFOSR grant FA9550-14-1-0318. The authors are grateful to Samson Abramsky, Guido Bacciagaluppi, Andrei Khrennikov, Jan-Åke Larsson, and Patrick Suppes for helpful discussions.
Proofs of statements {#appendix:proofs}
====================
In this appendix, we describe how the analytic results of the main text were obtained for each of the expressions , , , and of the main text.
EPR-Bell: Contextuality-by-Default
----------------------------------
Following the computations of Dzhafarov and Kujala [@dzhafarov_all-possible-couplings_2013 Text S3], or the more general formulation in Ref. [@dzhafarov_generalizing_2014], it can be shown that the observable distributions with probabilities given by the matrices
$$\begin{tabular}{c|cc|ccc|cc|ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc}
\cline{2-3} \cline{7-8} & \ensuremath{\mathbf{B}_{1,1}=+1} & \ensuremath{\mathbf{B}_{1,1}=-1} & & \quad{} & & \ensuremath{\mathbf{B}_{1,2}=+1} & \ensuremath{\mathbf{B}_{1,2}=-1} & \tabularnewline\cline{1-4} \cline{6-9} \multicolumn{1}{|c|}{\ensuremath{\mathbf{A}_{1,1}=+1}} & \ensuremath{p_{1,1}} & \ensuremath{a_{1}-p_{1,1}} & \multicolumn{1}{c|}{\ensuremath{a_{1}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{A}_{1,2}=+1} & \ensuremath{p_{1,2}} & \ensuremath{a_{1}-p_{1,2}} & \multicolumn{1}{c|}{\ensuremath{a_{1}}}\tabularnewline\multicolumn{1}{|c|}{\ensuremath{\mathbf{A}_{1,1}=-1}} & \ensuremath{b_{1}-p_{1,1}} & \ensuremath{1-a_{1}-b_{1}+p_{1,1}} & \multicolumn{1}{c|}{\ensuremath{1-a_{1}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{A}_{1,2}=-1} & \ensuremat
| 161
| 256
| 502
| 229
| 2,262
| 0.781328
|
github_plus_top10pct_by_avg
|
_driver.FindElements(By.XPath("//ul[@id='sortable1']/li")).ToList();
List<IWebElement> sortableListTwo = _driver.FindElements(By.XPath("//ul[@id='sortable2']/li")).ToList();
After changing the sortableListOne and sortableListTwo element locators, use the below in your test method and it will return the correct count:
var list1 = _sortPage.SortableListOne.Count;
var list2 = _sortPage.SortableListTwo.Count;
list1.Should().NotBe(list2);
Q:
LINQ to SQL query not ordering properly, please help
var temp = (from assetVisit in db.AssetVisits
join assetBundle in db.AssetBundles on assetVisit.AssetID equals assetBundle.AssetID
join groupBundle in db.GroupBundles on assetBundle.BundleID equals groupBundle.BundleID
join userGroup in db.UserGroups on groupBundle.GroupID equals userGroup.GroupID
where assetVisit.CompanyID == companyID &&
userGroup.UserID == userID
select new { AssetID = assetVisit.AssetID, Count = assetVisit.AccessCounter }).Distinct();
IQueryable<Asset> final = (from t in temp
join asset in db.Assets on t.AssetID equals asset.AssetID
where asset.IsActive == true
&& asset.AssetTypeID == assetType
&& asset.ShowInResults == true
&& (asset.CompanyID == companyID || asset.CompanyID == -12081974)
orderby t.Count descending
select asset).Except(from companyAssets in db.Assets
join copiedAssets in db.Assets on companyAssets.AssetID equals copiedAssets.OriginalAssetID
where copiedAssets.CompanyID == companyID && companyAssets.CompanyID == -12081974 && copiedAssets.IsActive == true
select companyAssets);
return final.Take(limit);
OK so it
| 162
| 4,842
| 10
| 117
| 7
| 0.843115
|
github_plus_top10pct_by_avg
|
bib-0005){ref-type="ref"} Despite many studies regarding the pathogenic or opportunistic nature of this bacterium, the issue is still not clearly understood.[7](#ccr32374-bib-0007){ref-type="ref"}, [10](#ccr32374-bib-0010){ref-type="ref"}
The aim of the present study was to report the first case of human septicemia due to *S pluranimalium* in Iran.
2. CASE PRESENTATION {#ccr32374-sec-0002}
====================
An Iraqi 2.5‐month‐old infant with clinical manifestations such as lethargy, vomiting, and anorexia was brought to the emergency department of the pediatric hospital in Mashhad, Iran, in November 2018. The initial examinations were done, which included pupil dilation, temperature: 37.2°C, and blood pressure: 75/55. The laboratory results are listed in Tables [1](#ccr32374-tbl-0001){ref-type="table"}, [2](#ccr32374-tbl-0002){ref-type="table"}, [3](#ccr32374-tbl-0003){ref-type="table"}, and [4](#ccr32374-tbl-0004){ref-type="table"}.
######
Arterial blood values
Index pH pCO~2~ (mm Hg) pO~2~ (mm Hg) HCO~3~ ^−^ (mmol/L)
-------------- ----------- ---------------- --------------- ---------------------
Patient case 7.55 25.6 116.4 26.4
Normal range 7.35‐7.45 35‐45 80‐100 22‐28
John Wiley & Sons, Ltd
######
Analysis of complete blood count (CBC)
Index WBC RBC Hb HCT Platelets
-------------- ------------------------------------------------ ----------- --------- ------- -------------
Patient case 22.760 (PMN: 87%, Lymph: 13%) 2.86 7.7 24.2 1284
Normal 5000‐19500 (PMN: 1000‐9000, Lymph: 2500‐16500) 2.70‐4.50 11‐17.1 33‐55 10000‐45000
John Wiley & Sons, Ltd
######
Analysis of urine
Index WBC RBC Epithelial cells pH SG (specific gravity)
-------------- ----- ------ ------------------ ------- -----------------------
Patient case 20
| 163
| 4,468
| 191
| 67
| null | null |
github_plus_top10pct_by_avg
|
line $\mathbb{R}$ including $(0,0)$.
1. The homeomorphism is $C^{\infty}$ on ${\mathbb{R}}^2-\{(0,0)\}$.
2. For each point not $(0,0)$, the homeomorphism preserves the distance between the point and $(0,0)$.
3. The homeomorphism maps each straight line originating from $(0,0)$ to another straight line originating from $(0,0)$.
Around each singular value corresponding to a vertex of degree $0$, we define the class so that the local form is as in Step 3: a natural height function on a unit disc. This completes the proof except for the fourth condition.
For each finite graph that is not a single point and has no vertex of degree larger than $3$, we can orient the graph so that we can construct a continuous map into $S^1$ satisfying the following.
1. On each edge the map is injective.
2. The orientation of each edge canonically induced from a canonical orientation of $S^1$ is compatible with the defined orientation.
If the graph has no loop, then we can replace $S^1$ by $\mathbb{R}$.
For a map of the class $\mathcal{Q}_{\mathcal{C}}$, if for the target graph, for each vertex, the degree is at most $3$, then we can orient the graph as this and we can construct a local function compatible with the definition of the class $\mathcal{C}$ and the orientation. This is owing to the definitions of a $D_2$-symmetric and a $D_3$-symmetric map and an almost smooth generalized rotation with reflection. We can consider a transformation by an almost smooth generalized rotation with reflection to construct a local function compatible with the desired orientation. See FIGURE \[fig:4.5\] for the case of a vertex of degree $2$.
![Almost smooth generalized rotations with reflections and projections (around a vertex of degree $2$: arrows indicate local orientations of the graphs induced naturally from local functions).[]{data-label="fig:4.5"}](rot.eps){width="30mm"}
This completes the proof.
Last we present another example of classes of maps and pseudo quotient maps of the class.
A [*standard-spherical*]{} M
| 164
| 2,308
| 1,316
| 174
| null | null |
github_plus_top10pct_by_avg
|
\g(3a, x) + 2 x^{2a - 1} e^{-x} \G(2a, x), \end{aligned}$$ from Lemma $\ref{lem:3.5}$, we have $\frac{d}{dx} f(a, x) > 0$ ($a > 0$, $x > 0$). Also, $f(a, 0) = 0$ holds for $a > 0$. Therefore, we obtain $$\begin{aligned}
\operatorname{{V}}[\Pe(Z)] - \operatorname{{V}}[\Pe(Z + C)] \geq 0, \end{aligned}$$ where equality holds only when $C = 0$. Moreover, from equation $(\ref{C})$, we find that $C = 0$ holds only when $k_{1} = k_{2}$.
Inequalities for the gamma and the incomplete gamma functions
=============================================================
In this section, we give some inequalities for the gamma and the incomplete gamma functions, which we used to derive the inequality for the variance of the loss in Theorem $\ref{thm:3.4}$.
Inequalities for the gamma function
-----------------------------------
To prove Lemma $\ref{lem:3.5}$, we use the following:
\[lem:4-1-3\] For $a > 0$, we have $$\begin{aligned}
2 \G(2a) - a \G(a)^{2} > 0. \end{aligned}$$
Next, to prove Lemma $\ref{lem:4-1-3}$, we use the following:
\[lem:4-1-1\] For $a > 0$, we have $$\begin{aligned}
4^{a} \G\left(a + \frac{1}{2} \right) > \sqrt{\pi} \G(a + 1). \end{aligned}$$
Furthermore, to prove Lemma $\ref{lem:4-1-1}$, we need another lemma:
\[lem:4-1-2\] We have $$\begin{aligned}
\sum_{n = 1}^{\infty} \frac{1}{n (2n - 1)} = 2 \log{2}. \end{aligned}$$
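Before the formal proof, the series value is easy to confirm numerically (a sanity check only, not part of the argument; since the terms behave like $\frac{1}{2n^2}$, the tail after $N$ terms is of order $\frac{1}{2N}$):

```python
import math

# Partial sums of 1/(n(2n-1)) should approach 2*log(2); with 10^5 terms
# the remaining tail is of order 5e-6.
partial = sum(1.0 / (n * (2 * n - 1)) for n in range(1, 100_001))
target = 2 * math.log(2)
```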
Let $S_{n} := \sum_{k = 1}^{n} \frac{1}{k (2k - 1)}$. Accordingly, we have $$\begin{aligned}
S_{n}
&= \sum_{k = 1}^{n} \left(\frac{2}{2k - 1} - \frac{1}{k} \right) \allowdisplaybreaks \\
&= 2 \sum_{k = 1}^{n} \frac{1}{2k - 1} - \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\
&= 2 \sum_{k = 1}^{n} \frac{1}{2k - 1}
+ \left(2 \sum_{k = 1}^{n} \frac{1}{2 k} - 2 \sum_{k = 1}^{n} \frac{1}{2 k} \right)
- \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\
&= 2 \left(\sum_{k = 1}^{n} \frac{1}{2 k - 1} + \sum_{k = 1}^{n} \frac{1}{2 k} \right)
- 2 \sum_{k = 1}^{n} \frac{1}{k} \allowdisplaybreaks \\
&= 2 \sum_{k = 1}^{2n} \frac{1}{k} - 2 \sum_{k = 1}^{n} \frac{1}{k} \allowdispl
| 165
| 895
| 367
| 227
| null | null |
github_plus_top10pct_by_avg
|
a$ systematically increase with $\Delta t$. For $95\%$ of the stocks the increasing tendency is observed, and for a window size of $\Delta t = 1$ day the respective $\lambda$’s are greater than $2$. These are strong indications that the distributions are not in the Levy stable regime, and thus the second moment exists.
Note that our calculations *assume* that the variable is asymptotically distributed as and do not *prove* it. Still, the existence of the second moment is guaranteed by the absence of convergence to a Levy distribution. Consequently, it is possible to define the Hurst exponent for $f_i(t)$.
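The Hill estimates reported below can be reproduced in spirit with the standard Hill estimator (a sketch; the variable names and the Pareto test sample are ours):

```python
import math
import random

def hill_estimator(sample, k):
    """Hill's tail-index estimate from the k largest order statistics:
    alpha_hat = 1 / ((1/k) * sum_{i<=k} log X_(i) - log X_(k+1))."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(x) for x in xs[: k + 1]]
    return 1.0 / (sum(logs[:k]) / k - logs[k])

# Pareto sample with tail exponent alpha = 2: the estimate should land
# near 2 for a reasonable choice of k.
rng = random.Random(42)
sample = [rng.paretovariate(2.0) for _ in range(20000)]
lam = hill_estimator(sample, k=2000)
```

The choice of $k$ (here 10% of the sample) trades bias against variance, which is why the table reports estimates at several tail fractions $p$.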
$\Delta t$ Hill’s $\lambda$ ($p=0.06$) Ref. [@gopi.volume], $p=0.03$ Shifted Hill’s $\lambda$ $f_0/\ev{f}$ Fraga Alves ($p=0.1$)
------------ ----------------------------- ------------------------------- -------------------------- -------------- -----------------------
$1$ min $1.43 \pm 0.09$ $1.45\pm0.10$ $2.15\pm0.15$ $3.0$ $1.98 \pm 0.25$
$5$ min $1.56 \pm 0.13$ $1.55\pm0.15$ $2.29 \pm 0.25$ $2.8$ $2.04 \pm 0.25$
$15$ min $1.71 \pm 0.20$ $1.67\pm0.20$ $2.55 \pm 0.35$ $2.8$ $2.1 \pm 0.3$
$60$ min $2.06 \pm 0.30$ $1.90\pm0.25$ $2.85 \pm 0.45$ $1.8$ $2.1 \pm 0.4$
$120$ min $2.3 \pm 0.4$ $2.0\pm0.3$ $3.15 \pm 0.70$ $1.6$ $2.1 \pm 0.4$
$390$ min $2.7 \pm 0.6$ $2.1\pm0.5$ $3.7 \pm 0.9$ $1.2$ no estimate
Regardless of the absence of the convergence to Levy stability there are qualitative similarities in the shape of the traded value distributions of various stocks \[cf. Fig. \[fig:distrib\](left)\]. Nevertheless, the existence of a universal distribution can be rejected by a simple test [^2].
If the form of th
let $f$ be the solution to (cf. (1)) Based on Lemma \[l2\] and $||\varphi'||= 1$, From (6), we can rewrite (4) as Recall that $T_{n-k}\sim N(0, \frac{n-k}{n})$ and is independent of $\{X_1,\dots, X_n\}$. Using (21) with $Y=T_{n-k}$, $x=0, t=(n-k)/n$, we have and Therefore, from (32) and (33), where we regard $Y$ as the third and fourth terms on the right-hand side of (32), we obtain Rewrite where and Based on (7) and the fact that $X_k$ is independent of $W_{k-1}$ and $\eta_k\sim N(0,\frac{1}{n})$ is independent of $\{X_1,\dots, X_n\}$, we have From (104), (34) and the estimates above, we have where Note that and Therefore, (105) is further bounded by where $C_1$ is as in the statement of Theorem \[t1\].
We are left to show that $A$ in (106) equals 0. Since $\eta_k$ has mean 0 and is independent of $\{X_1,\dots, X_n\}$ and $T_{n-k}$, we have By the property (33) of sublinear expectation, we have Using Lemma \[l3\] and $t_i=\frac{n-i}{n}$ in the statement of the theorem, we have Moreover, by the definition of $\xi_k$ and $V_{i}$ below (17), we have and by the definition of $\E$, Finally, by the choice of $\mu_k$ and $\sigma_k$ in (13) and (14), we have $A=0$. Note that part of the reason for the particular expansion of (34) is to find connections to $V$. This, together with (105), proves Claim \[claim2\].
The proof is similar to that of Theorem \[t1\]. We use a slightly different expansion (cf. (111)) and make use of the convexity (concavity) of $\varphi$ (cf. (113) and (114)).
We only prove the case where $\varphi$ is convex. The concave case follows from a similar argument. Without loss of generality, we assume that $\mu=0$ and $||\varphi'|| = 1$. Denote and denote Define We will prove the following claim.
\[claim3\] Let $\phi_{\sigma}(\cdot)$ be the density function of $N(0,\sigma^2)$ and let $*$ denote the convolution of functions. For any $k=1,\ldots,n$, we have
Using a telescoping sum and the independence assumption, and applying Claim \[claim3\] recursively from $k=n$
t{w}} \text{ in }{\mathbb{W}} \text{ begin}\\
\text{\quad\quad \# number of steps in localized walk is } {\lvert{{\mathit{w}}}\rvert}\\
\text{\quad\quad \# abused index runs between 0 [for start step } {\mathit{w}}_{\text{crux}} \text{] and }
-({\lvert{{\mathit{w}}}\rvert}-1) \text{ [last predecessor step]}\\
\text{\quad\quad \# no iteration through } i = -({\lvert{{\mathit{w}}}\rvert}-1) \text{ because then }
{\mathit{s}} \text{ == } {\mathit{e}}_{i-1} \text{ below would be undefined}\\
\text{\quad\quad for }i = 0 \text{ downto } -({\lvert{{\mathit{w}}}\rvert}-2) \text{ begin}\\
\text{\quad\quad\quad\quad for each }{\mathit{s}} \text{ in } \tilde{V}({\mathit{w}}_{i}) \text{ begin}\\
\text{\quad\quad\quad\quad\quad\quad if there is a member }{\mathit{e}} \in {\mathbb{W}} \text{ such that }
{\mathit{w}}_{i} \text{ == } {\mathit{e}}_{i} \text{ and }{\mathit{s}} \text{ == } {\mathit{e}}_{i-1} \text{ then}\\
\text{\quad\quad\quad\quad\quad\quad\quad\quad answer = TRUE}\\
\text{\quad\quad\quad\quad\quad\quad else}\\
\text{\quad\quad\quad\quad\quad\quad\quad\quad answer = FALSE}\\
\text{\quad\quad\quad\quad\quad\quad if answer == FALSE then return FALSE}\\
\text{\quad\quad\quad\quad end}\\
\text{\quad\quad end}\\
\text{end}\\
\text{return TRUE}\\
$
\[S:DEPENDENT\_CONE\_WALK\] Let ${\mathit{w}}$ and ${\mathit{w}}'$ be localized predecessor walks starting at ${\mathit{w}}_0 = {\mathit{w}}_0' = {\mathit{s}}_{\text{crux}}$. Suppose the length of ${\mathit{w}}$ is ${\lvert{{\mathit{w}}}\rvert} = n$ and the length of ${\mathit{w}}'$ is ${\lvert{{\mathit{w}}'}\rvert} = m$, with $m \leq n$. If ${\mathit{w}}'(i) = {\mathit{w}}(i)$ for every $-(m-1) \leq i \leq 0$, then ${\mathit{w}}$ and ${\mathit{w}}'$ are *dependent* with ${\mathit{w}}'$ *dispensable*.
\[S:INDEPENDENT\_CONE\_WALK\] Let ${\mathbb{W}}$ be a set of localized predecessor walks starting at ${\mathit{s}}_0 = {\mathit{s}}_{\text{crux}}$. The set is *independent* if it contains no dispensable member.
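The dispensability test of the definitions above can be sketched in a few lines of Python. Here a walk is represented as a list whose $i$-th entry is the step $w_{-i}$ (entry 0 is the crux step); this encoding is an assumption for illustration, not part of the original definitions:

```python
def is_dispensable(wp, walks):
    """wp is dispensable if some other walk in `walks`, at least as long,
    agrees with wp on every one of wp's steps."""
    m = len(wp)
    for w in walks:
        if w is not wp and len(w) >= m and w[:m] == wp[:m]:
            return True
    return False

def is_independent(walks):
    """A set of localized walks is independent if no member is dispensable."""
    return not any(is_dispensable(w, walks) for w in walks)
```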
### Cone {#S:CONE_CONE_WALK}
\[S:CONE\_DEFINITIO
unavailable (which can be used in header too):
- (instancetype) init __attribute__((unavailable("Use 'sharedInstance' instead of 'init' as this class is singleton.")));
This can be used if you want to prompt some message about unavailability.
Q:
Assigning two arrays by ID
I am stuck with simple things such as a for loop and an if statement.
I have two different objects:
objectA:
0: id = 1, name = null
1: id = 3, name = null
objectB:
id = 1, name = NameForID1
id = 3, name = NameForID3
My point is to assign names from objectB to objectA by ID value.
I've done double loop + if:
for (int i = 0; i <= objectA.size() - 1; i++){
for(int j = 0; j<=objectB.size() - 1; j++){
if(Objects.requireNonNull(objectA.get(i).getobjectAID()).equals(objectB.get(j).getObjectBID()))
objectA.get(i).setobjectAName(objectB.get(j).getobjectBName());
}
}
And after this I have a list of objectA in which every element has the last name from the objectB array. According to the Android Studio debugger and some... logic... this should work, but what am I missing or doing wrong?
My bad output:
objectA:
0: id = 1, name = NameForID3
1: id = 3, name = NameForID3
Expected output:
objectA:
0: id = 1, name = NameForID1
1: id = 3, name = NameForID3
UPD
ObjectA class:
class objectA:Serializable {
var objectAID: Int? = null
var objectAName: String? = null
}
ObjectB class:
class objectB {
var objectBName: String? = null
var objectBID: Int? = null
}
UPD. 30.04.2020:
Still getting the same bad output; based on the answers below I have tried different variations of equals, ==, ===, and a POJO in Java instead of Kotlin.
UPD: Just wasted your time. While trying to find the error with your hints, I had mangled my code and was assigning equal IDs to objectA, so the names came out equal as well.
A:
I am assuming that the ids are integers:
Try this:
for (int i = 0; i < objectA.size(); i++){
    for(int j = 0; j < objectB.size(); j++){
        // Compare boxed Integer ids with equals(), not ==:
        // == is a reference comparison outside the small-integer cache
        if(Objects.equals(objectA.get(i).getobjectAID(), objectB.get(j).getObjectBID()))
            objectA.get(i).setobjectAName(objectB.get(j).getobjectBName());
    }
}
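For reference, the quadratic double loop can also be replaced by a single pass over a lookup map. The sketch below uses plain Java with hypothetical stand-in classes (the originals in the question are Kotlin) and compares boxed `Integer` ids with `equals`:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinById {
    // Minimal stand-ins for the Kotlin classes in the question.
    static class A { Integer id; String name; A(Integer id) { this.id = id; } }
    static class B { Integer id; String name; B(Integer id, String name) { this.id = id; this.name = name; } }

    // Build an id -> name map from B, then fill names in A in one pass.
    static void assignNames(List<A> as, List<B> bs) {
        Map<Integer, String> byId = new HashMap<>();
        for (B b : bs) byId.put(b.id, b.name);
        for (A a : as) {
            String name = byId.get(a.id);   // null if no matching id
            if (name != null) a.name = name;
        }
    }

    public static void main(String[] args) {
        List<A> as = new ArrayList<>();
        as.add(new A(1)); as.add(new A(3));
        List<B> bs = new ArrayList<>();
        bs.add(new B(1, "NameForID1")); bs.add(new B(3, "NameForID3"));
        assignNames(as, bs);
        System.out.println(as.get(0).name + "," + as.get(1).name); // prints NameForID1,NameForID3
    }
}
```

`HashMap` lookups make this O(n + m) instead of O(n·m), and keying by the boxed id sidesteps the `==` pitfall entirely.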
in jsp (I know we shouldn't be using scriptlets in JSP pages). However, I am planning to advance from servlet programming (currently I am only familiar with JSP and servlets) to some other framework, possibly JSF, Struts, or Spring, so will learning JSTL come in handy later, or is it just an overload?
A:
Yes, it will. In most web technologies you use JSPs and JSTL in the presentation layer.
Even with JSF, where you don't use JSPs, you can still use the ported JSTL tags.
A:
Will be handy. As with most of the frameworks you would use JSPs and JSTL for rendering views.
With Spring and other frameworks, you can also use Velocity and freemarker on the view side.
However, learning JSTL surely won't be an overhead.
Q:
How to design a class where the user/caller has options to provide a custom behavior using a custom class
I encountered a problem where I want a class whose behavior can be customized by another class. For example, Foo's constructor accepts a parameter of some class type:
class Bar { // The default class that defines the behavior
};
template <typename T = Bar>
class Foo {
public:
    Foo(T* t = 0) : t_(t) {
        if (t_ == 0) t_ = new T();
    }
    ~Foo() {
        delete t_;
    }
private:
    T* t_;
};
Now if someone use Foo in a client code:
Foo foo;
Everything is fine. But, if we want to supply the custom class:
class Bar1 { };
Foo<Bar1> foo(new Bar1()); // This is OK
Bar1 b;
Foo<Bar1> foo(&b); // Error, b is not dynamically allocated
Is there any design pattern I can use to prevent this kind of mistakes? Or, is there any techniques or semantics where the user of Foo class can choose/specify who owns the bar object? So for example, the above Foo destructor can be like this:
~Foo() {
if (t_ is owned by this object) delete t_;
}
Bar or Bar1 or any class passed as t in Foo(T* t) might be a big object, so if it is possible I rather not to pass it by value.
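For illustration, one technique (a sketch, not taken from this thread) is to make ownership explicit in the constructor signature with `std::unique_ptr`, so a stack-allocated object can no longer be handed over by accident; `Bar`, `Bar1`, and `behavior()` below are stand-ins:

```cpp
#include <memory>
#include <utility>

// Default behavior class, standing in for Bar in the question.
class Bar {
public:
    virtual ~Bar() = default;
    virtual int value() const { return 1; }
};

class Bar1 : public Bar {
public:
    int value() const override { return 2; }
};

// Foo always owns its T: callers must hand over a unique_ptr (or let Foo
// default-construct one), so `Foo<Bar1> foo(&stackObject)` cannot compile.
template <typename T = Bar>
class Foo {
public:
    explicit Foo(std::unique_ptr<T> t = nullptr)
        : t_(t ? std::move(t) : std::make_unique<T>()) {}
    const T& behavior() const { return *t_; }
private:
    std::unique_ptr<T> t_;  // deleted automatically; no manual ~Foo() needed
};
```

If shared or non-owning use is also desired, taking `std::shared_ptr<T>` instead, or adding an overload with an explicit ownership flag, lets the caller state the intent rather than having `Foo` guess.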
Update:
What I have in mind is for the user to be able to do something like:
Foo foo(new Bar(1, 2, etc..));
//or this:
Bar bar(1, 2
on estimated by $d_i$ $\{D_i | \frac{d_i - \mu}{\sigma} > 3 \}$
![Distribution of Jensen-Shannon divergence on Taobao data set(without click farming) used in the experiments.[]{data-label="fig:jsd-dist"}](./JSD-Dist.pdf){width="\linewidth"}
Fig. \[fig:jsd-dist\] shows the distribution of all divergences against the reference. It can be approximated as a Gaussian distribution, although the true one may differ from the standard Gaussian by somewhat more than the expected estimation error. That is due to the unknown randomness within real-world data. Few assumptions can be applied to real-world data sets, not to mention that the data volume is sometimes relatively low. This topic is outside the domain discussed in this paper, and we introduce only the technique here rather than a specific distribution model. Certainly, if stronger assumptions can be included to provide a more precise model, this component of the framework can be replaced to give better results. For the simplicity of our proposal, we deem the distributions of divergences to be Gaussian.
With this approach, time complexity can be reduced from quadratic to linear. Fig. \[fig:raw-overview\] in Section \[sec:exp-raw\] demonstrates the result of the above process. The red distribution refers to the distances calculated from normal data collections; the blue and green ones are from click-farmed data collections. Clearly, the distances of normal data collections cluster around a small value, while anomalous ones lie around a larger distance value.
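For concreteness, the divergence computation and the three-sigma selection rule $\{D_i \mid (d_i - \mu)/\sigma > 3\}$ can be sketched as follows. This is a generic sketch, not the authors' implementation; base-2 logarithms bound the divergence by 1:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0                      # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def flag_anomalies(divergences, z=3.0):
    """Flag collections whose divergence z-score exceeds the threshold."""
    d = np.asarray(divergences, dtype=float)
    mu, sigma = d.mean(), d.std()
    return (d - mu) / sigma > z
```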
Optimization: Statistical Divergence Detection with Evidence(SDD-E)
-------------------------------------------------------------------
It is possible to further optimize SDD-R if we can provide the algorithm with evidence (Algorithm \[alg:sdd-e\]).
Evidence set with normal data collections $\mathbb{E}_N = \{N_1, \dots, N_n\}$ Evidence set with anomalous data collections $\mathbb{E}_A = \{A_1, \dots, A_m\}$ Estimated anomalous probability $\alpha$ New data collection $\mathbb{D} = \{D_1, \dots, D_l\}$ Anomalous data collections
erms except for the one with $\Delta_G - \Delta_B - \Delta_F=\bar\Delta_G - \bar\Delta_B - \bar \Delta_F=0$. Thus only the regular term $:BF:(w)$ in the OPE between $B(x)$ and $F(w)$ survives. We obtain the following contribution to the OPE : $$\begin{aligned}
\lim_{z\to w}& A(z) :BC:(w) = ... + (z-w)^{\Delta_F + \Delta_B- \Delta_A -\Delta_C} (\bar z - \bar w)^{\bar\Delta_F+\bar \Delta_B - \bar\Delta_A -\bar\Delta_C} :BF:(w).
\nonumber\end{aligned}$$
Simplification in the case of a holomorphic operator {#simplification-in-the-case-of-a-holomorphic-operator .unnumbered}
----------------------------------------------------
The computation of the singular terms in the OPE simplifies if the operator $A(z)$ is holomorphic. Let us consider a term of the form . Since the operator $A$ is holomorphic there is no dependence on $\bar z$, so $\bar\Delta_D - \bar\Delta_A - \bar
\Delta_B = 0$. Let us also assume that $\Delta_D - \Delta_A -
\Delta_B$ is an integer. The question is whether such a term may contribute to a pole in the OPE , i.e. a term of the form with $\Delta_E - \Delta_A - \Delta_B-\Delta_C$ a negative integer (and $\bar\Delta_E - \bar\Delta_A -
\bar\Delta_B-\bar\Delta_C=0$). But this is only possible if $\Delta_D
- \Delta_A - \Delta_B$ is already a negative integer, since otherwise the coefficient vanishes.
It follows from the previous discussion that under the assumption that only integer powers of $(z-x)$ appear in the OPE between the operators $A(z)$ and $B(x)$, then in the computation of singular terms in the OPE one can truncate the OPE between $A(z)$ and $B(x)$ to the singular terms only (i.e. keep only the poles in $(z-x)$). That specific feature of this special case is put to good use in some standard calculations in two-dimensional conformal field theory [@yellowbook].
The semi-classical behavior of the OPE coefficients {#XXOPEs}
===================================================
At large radius, namely in the limit $f^2 \to 0$ (either at fixed level $k$ or at fixed $kf^2$), the target sp
ults. Section 4 deals with Self-Organizing Maps and their variants, along with their classification results. Conclusions and future perspectives are discussed in Section 5.
Data set description
====================
The data sets are generated by a Monte Carlo program, CORSIKA [@cor]. They contain 12332 gammas, 7356 ’on’ events (mixture of gammas and hadrons), and 6688 hadron events. These events are stored in different files. The files contain event parameters in ASCII format, each line of 12 numbers being one event [@boc], with the parameters defined below,
1. fLength: major axis of ellipse \[mm\]
2. fWidth: minor axis of ellipse \[mm\]
3. fSize: 10-log of sum of content of all pixels
4. fConc: ratio of sum of two highest pixels over fSize \[ratio\]
5. fConc1: ratio of highest pixel over fSize \[ratio\]
6. fAsym: distance from highest pixel to centre, projected onto major axis \[mm\]
7. fM3Long: 3rd root of third moment along major axis \[mm\]
8. fM3Trans: 3rd root of third moment along minor axis \[mm\]
9. fAlpha: angle of major axis with vector to origin \[deg\]
10. fDist: distance from origin to centre of ellipse \[mm\]
11. fEner: 10-log of MC energy \[in GeV\]
12. fTheta: MC zenith angle \[rad\]
The first 10 image parameters are derived from pixel analysis, and are used for classification.
Multi-Layer Perceptron
======================
For this approach we used the ROOT Analysis Package (v. 4.00/02) and in particular the MultiLayer Perceptron class [@kn:mlp], which implements a generic layered network. Since this is a supervised network, we took half of the gamma and OFF data to train the network and the remaining data to test it. The code of the ROOT package is very flexible and simple to use. It allowed us to create a network with a 10-node input layer, a hidden layer with the same number of nodes, and an output layer with just a single neuron which should return “0” if the data represent hadrons or “1” if they are gammas. Weights are put randomly at the beginning of
odule>
foo()
File "exc.py", line 5, in foo
print(1/x)
ZeroDivisionError: integer division or modulo by zero
The fix is to handle it. The most trivial fix is something like this:
while True:
try:
cam.start()
img = cam.get_image()
pygame.image.save(img,"current.jpeg")
cam.stop()
host = ftputil.FTPHost(**)
host.upload("./current.jpeg", "/domains/*/public_html/webcam.jpg", mode='b')
host.close()
        if not count:
host = ftputil.FTPHost(**)
filename = str(time.time()) + ".jpg"
host.upload("./current.jpeg", "/webcamarchive/"+filename, mode='b')
host.close()
count = 10
logging.info(str(time.time())+": Still running")
count -= 1
except Exception as e:
logging.error('Caught exception: ' + str(e))
time.sleep(3)
However, you're probably better off looking over each kind of exception you could run into—or looking at the exception you actually are running into—and putting the appropriate error handling and logging in each place, and only using this as a last-ditch fallback.
Also, you should really use with clauses instead of manual close calls. For example, if host.upload() raises, the host.close() will never get called. So, instead of this:
host = ftputil.FTPHost(**)
host.upload("./current.jpeg", "/domains/*/public_html/webcam.jpg", mode='b')
host.close()
Do this:
with contextlib.closing(ftputil.FTPHost(**)) as host:
host.upload("./current.jpeg", "/domains/*/public_html/webcam.jpg", mode='b')
(Many types don't even need the closing, because they're already inherently context managers, so try that first—but if you get an error about ftputil.FTPHost not having an __enter__ or __exit__, this is how you can deal with it.)
Q:
Regular expression for prefix exclusion
I am trying to extract gmail.com from a passage where I want only those string match that don't start with @.
Example: abc@gmail.com (don't match this); www.gmai
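One way to express "match gmail.com only when it is not preceded by @" is a negative lookbehind assertion. The sketch below is a generic illustration, not taken from an answer:

```python
import re

# (?<!@) fails the match when the character immediately before is '@';
# \b keeps the match anchored at the start of 'gmail'.
pattern = re.compile(r"(?<!@)\bgmail\.com")

text = "abc@gmail.com should not match, www.gmail.com should"
matches = pattern.findall(text)  # only the www.gmail.com occurrence matches
```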
ing his customers their choice of any horse, as long as that horse was in the first unoccupied stall.
[^11]: The set difference $A \setminus B$ is not conventionally restricted to $B \subseteq A$, as is stipulated here.
[^12]: one form of which is $(\neg B \Rightarrow \neg A)\Leftrightarrow(A \Rightarrow B)$
[^13]: Not to be confused with the categorical variable named [category]{}
---
address: |
CSSM and Department of Physics and Mathematical Physics, University of Adelaide, Australia 5005\
E-mail: [awilliam@physics.adelaide.edu.au]{}
author:
- ', FREDERIC D.R. BONNET, PATRICK O. BOWMAN, DEREK B. LEINWEBER, JON IVAR SKULLERUD, AND JAMES M. ZANOTTI'
title: 'GLUONS, QUARKS, AND THE TRANSITION FROM NONPERTURBATIVE TO PERTURBATIVE QCD '
---
Introduction {#sec:intro}
============
Lattice gauge theory is currently the only known “first principles” approach to studying nonperturbative QCD. It is therefore important for lattice QCD to provide constraints and guidance for the construction of quark-based models[@DSE_review] and to provide an indication of the momentum regime at which we can expect perturbative QCD to become applicable. The quark and gluon propagators are two of the most fundamental quantities in QCD. There has been considerable interest in the infrared behavior of the gluon propagator as a probe into the mechanism of confinement and by studying the scalar part of the quark propagator, the mass function, we can gain insight into the mechanisms of chiral symmetry breaking. Both are used as input for other quark-model calculations.
Gluon Propagator
================
We use an ${\cal O}(a^2)$ tree-level, tadpole-improved action[@Weisz83] and for the tadpole (mean-field) improvement parameter we use the plaquette measure[@tadpole]. A full description and discussion of the gluon propagator results summarized here can be found elsewhere.[@LandauGaugeDE; @long_glu; @big_vol_glu]
Dimensions $\beta$ $a$ (fm) V
In [@DBLP:conf/iclp/VoetsS09], classes of queries are represented as *moded queries*. Moded queries are partially instantiated queries, in which variables can be labeled as *input*. Variables labeled input are called *input variables* and represent arbitrary ground terms. To indicate that a variable is labeled as input, the name of the variable is underlined. A query in which no variable is labeled as input is called a *concrete query*. The set of concrete queries represented by a moded query $Q$ is called the *denotation* of $Q$.
\[def:denotation\] Let $Q$ be a query and $\lbrace \underline{I_1},\ldots,\underline{I_n} \rbrace$ its set of input variables. The *denotation* of $Q$, $Den(Q)$, is defined as:
- $Den(Q) = \left\lbrace Q\lbrace \underline{I_1} \setminus t_1,\ldots,\underline{I_n} \setminus t_n \rbrace \mid
t_i \in Term_P, t_i~is~ground, 1\leq i \leq n \right\rbrace $. $\hfill \square$
Note that the denotation of a concrete query is a singleton containing the query itself. Denotations of moded goals and atoms are defined similarly.
A moded query $\leftarrow Q$ is evaluated by constructing a *moded SLD-tree*, representing the derivations of the queries in $Den(\leftarrow Q)$. This moded SLD-tree is constructed by applying SLD-resolution to the query and propagating the labels. An input variable $\underline{I}$ can be unified with any term $t \in Term_P$. After unifying $\underline{I}$ and $t$, all variables of $t$ will be considered input as well.
\[example:moded\_sld\] Figure \[fig:eq\_plus\_symbolic\] shows the moded SLD-tree of the program $eq\_plus$ for the moded query $\leftarrow eq\_plus(\underline{I},\underline{J},\underline{P})$. This program is non-terminating for any query in $Den(\leftarrow eq\_plus(\underline{I},\underline{I},0))$ and fails for all other queries in $Den(\leftarrow eq\_plus(\underline{I},\underline{J},\underline{P}))$. A query fails if its derivation tree is finite, with no path ending with the empty goal.
eq_plus(I,J,P):- eq(I,J), plus(P,I,In), eq_plus(In
ntly upregulated genes.](ol-14-06-7153-g00){#f1-ol-0-0-7146}
{#f2-ol-0-0-7146}
{#f3-ol-0-0-7146}
######
Top 5 differentially expressed miRNAs of malignant follicular thyroid carcinoma compared with benign follicular thyroid adenoma.
miRNA P-value log~2~(fold-change)
--------------- ----------- ---------------------
Downregulated
miR-7 0.0041392 −1.7320437
miR-1179 0.0081728 −1.3950195
miR-7--2 0.0006626 −1.2525509
miR-486-5p 0.0412501 −1.0502825
miR-130b 0.0028172 −0.9176468
Upregulated
miR-663b 0.0009353 0.9881272
miR-137 0.0088044 0.9341108
miR-30c-1 0.0059237 0.8695624
miR-767-5p 0.0036048 0.7353497
miR-603 0.0392875 0.6646499
miR/miRNA, microRNA.
######
Top 5 differentially expressed mRNAs of malignant follicular thyroid carcinoma compared with benign follicular thyroid adenoma.
mRNA P-value log~2~(fold-change)
--------------- ------------- ---------------------
Downregulated
FABP4 0.001621719 −2.100023748
CMAHP 0.024414059 −1.066127774
ITM2A 0.016922069 −1.060957028
CA4 0.003105875 −1.030691864
FAM189A2 0.001993414 −1.000814956
Upregulated
EPB41L3 0.000527208 1.020917517
SCG5 0.044745635 0.990761798
PAX1 0.040281329 0.936795356
MTHFD2 0.002917107 0.870767506
CDH2 0.038829964 0.862357097
######
Differentially expressed mRNAs of malignant follicular thyroid carcinoma compared with benign follicular thyroid adenoma.
Gene
_{A}$ onto the set $\{J,U\}$ of *justification values* following *justification rules* which refer to $\sigma $ and are based on the informal properties of the metalinguistic concept of proof in natural languages. In particular, the following justification rules hold.
JR$_{1}$. *Let $\alpha \in \psi _{R}$; then, $\pi _{\sigma }(\vdash \alpha )=J$ iff a proof exists that $\alpha $ is true, i.e., that $\sigma (\alpha )=1$ (hence, $\pi _{\sigma }(\vdash \alpha )=U$ iff no proof exists that $\alpha $ is true).*
JR$_{2}$. *Let $\delta \in \psi _{A}$; then, $\pi _{\sigma }(N\delta )=J$ iff a proof exists that $\delta $ is unjustified, i.e., that $\pi _{\sigma }(\delta )=U$.*
JR$_{3}$. *Let $\delta _{1}$, $\delta _{2}\in \psi _{A}$; then,*
*(i) $\pi _{\sigma }(\delta _{1}K\delta _{2})=J$ iff $\pi _{\sigma }(\delta _{1})=J$ and $\pi _{\sigma }(\delta _{2})=J$,*
*(ii) $\pi _{\sigma }(\delta _{1}A\delta _{2})=J$ iff $\pi _{\sigma }(\delta _{1})=J$ or $\pi _{\sigma }(\delta _{2})=J$,*
*(iii) $\pi _{\sigma }(\delta _{1}C\delta _{2})=J$ iff a proof exists that $\pi _{\sigma }(\delta _{2})=J$ whenever $\pi _{\sigma }(\delta _{1})=J$,*
*(iv) $\pi _{\sigma }(\delta _{1}E\delta _{2})=J$ iff $\pi _{\sigma }(\delta _{1}C\delta _{2})=J$ and $\pi _{\sigma }(\delta _{2}C\delta _{1})=J$.*
Furthermore, the following *correctness criterion* holds in $\mathcal{L}^{P}$.
CC. *Let $\alpha \in \psi _{R}$; then, $\pi _{\sigma }(\vdash \alpha )=J$ implies $\sigma (\alpha )=1$.*
Finally, the set of all pragmatic evaluation functions that can be associated with a given semantic interpretation $\sigma $ is denoted by $\Pi
_{\sigma }$.
The quantum pragmatic language $\mathcal{L}_{Q}^{P}$
----------------------------------------------------
The quantum pragmatic language $\mathcal{L}_{Q}^{P}$ that we want to introduce here is obtained by specializing syntax, semantics and pragmatics of $\mathcal{L}^{P}$. Let us begin with the syntax. We introduce the following assumpt
220160.t003
###### MSEM testing the mediation effect of institutional trust on the relationship between crime and social trust.
{#pone.0220160.t003g}
  -------------------------------------------------------------------------------------------
                       Model 1\          Model 2\          Model 3\          Model 4\
                       No Mediation      With Mediation    No Mediation      With Mediation
  ------------------   ----------------- ----------------- ----------------- -----------------
  Individual Level

  *SOCIAL TRUST ON*

  Age                  0.005\*\* (0.002) 0.002 (0.002)     0.036\*\* (0.012) 0.002 (0.001)

  Male
i,s}^1 = 0, ..., D_{i,s}^k = 0$ means that all data areas in the partition $i$ are clear. Satisfaction of this property implies that no data stored in the partition during one configuration of this partition can remain in any memory area of a later configuration.
### Formal Comparison of Policies and Properties
As presented in previous subsections, security policies and properties for separation kernels have been studied in literature. They are formalized in different specification and verification systems, such as ACL2, Isabelle/HOL, and PVS. Formal comparison of them to clarify the relationships can establish a substantial foundation for formal specification and verification of separation kernels.
In [@von04], the notions of noninterference, nonleakage, and noninfluence are defined based on the same state machine and formally compared. The author states that noninfluence is semantically equal to the conjunction of noninterference and nonleakage.
In [@Bond14], the GWV policy and Rushby’s noninterference are formally compared in detail. The authors present a mapping between the objects and relations of the two models. The conclusion is that GWV is stronger than Rushby’s noninterference, i.e., all systems satisfying GWV’s separation also satisfy Rushby’s noninterference.
Formal Specification and Models of Separation Kernels
-----------------------------------------------------
The formal specification and models of separation kernels present a significant contribution to formal verification. Here, we only discuss the models for formally developing separation kernels. Models targeted at formal verification are surveyed in the next subsection. In formal development, the specification may be used as a guide while the concrete implementation is developed during the design process. We present typical specification and models of separation kernels in turn.
- Craig’s Z model of separation kernel
Following the earlier book on modeling operating system kernels [@Craig06] that shows it is possible and r
A more sophisticated adjustment of the relative scales can be performed within a comparison of the flow of momentum-dependent observables such as the wave function renormalisation $Z_0$. The peak of these flows in momentum space is directly related to the cut-off scale. Indeed, the function $f$ carries the physical information of the peak of the flow at some momentum scale. Scanning the set of $f$ gives some further access to the uncertainty in such a procedure.
![${\hat k}_\bot / \hat k$ as function of $\hat k$. []{data-label="fig:kbotk"}](kbot.eps "fig:"){width="8cm"}\
The effective cut-off scales $k_{\rm phys}(k_0)$ and $k_{\bot ,\rm
phys}(k_\bot )$ in the flows of the temporal gluons and of the spatial gluons, respectively, do not match in general. When solving the flow within a local truncation, as chosen in the present work, we have to identify the two effective cut-off scales, $k_{\rm phys}(k_0)=k_{\bot ,\rm phys}(k_\bot )=k_{\rm phys}$, leading to a non-trivial relation $k_0=k_0(k_\bot )$. Moreover, the effective cut-off scale has to be used in the running coupling $\alpha_s=\alpha_s(\vec p^2=k_{\rm phys}^2)$.
![$\hat k_{\rm phys}(\hat k )$ from the comparison of flows with three-dimensional regulators and four-dimensional regulators.[]{data-label="fig:kskphys"}](kphysalpha.eps "fig:"){width="8cm"}\
It is left to determine the physical cut-off scale $k_{\rm phys}$ from either the flow of the spatial gauge fields as $k_{\bot,\rm
phys}(k_\bot )$ or from the temporal flow $k_{0,\rm phys}(k_0)$. We first discuss the spatial flow. For an optimised regulator depending on all momentum directions, $p^2$, we have the relation $k_{\rm
phys}=k_\bot $. Hence the relation $k_{\bot,\rm phys}(k_\bot )$ can be computed if comparing the flows for a specific observable with three-dimensional regulator $R_{{\rm opt},k_\bot }(\vec p^2)$, , with flows with four-dimensional regulator $R_{{\rm
opt},k_{\rm phys}}(p^2)$. Here, as a model example, we choose the effective potential of a $\phi^4$-theory. This leads to the re
s inverse $W_{-1}^{-1}(y) = ye^y$, defined over the domain $(-\infty, -1)$ is also monotonically decreasing. By our assumption, $-cx \leq -3 \log \frac{1}{c} \leq -3$, thus $-cx \in (-\infty, -1]$, thus applying $W_{-1}^{-1}$ to both sides gives us the first implication.
Experiment Details {#apx:experiments}
==================
In this section, we provide additional details of our experiments. In particular, we explain the CNN architecture that we use in our experiments. Denote a convolutional layer with $p$ input filters and $q$ output filters by $\mathsf{conv}(p, q)$, a fully connected layer with q outputs by $\mathsf{fully\_connect}(q)$, and a max pooling operation with stride 2 as $\mathsf{pool2}$. Let $\mathsf{ReLU}(x) = \max\{x, 0\}$. Then the CNN architecture in our paper is the following: $$\begin{aligned}
&\mathsf{conv}(3, 32) \Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{conv}(32, 64) \Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{pool2} \Rightarrow \mathsf{conv}(64, 128) \Rightarrow \mathsf{ReLU} \Rightarrow
\mathsf{conv}(128, 128) \\
&\Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{pool2} \Rightarrow \mathsf{conv}(128, 256) \Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{conv}(256, 256) \Rightarrow \mathsf{ReLU} \Rightarrow
\mathsf{pool2} \Rightarrow \mathsf{fully\_connect}(1024) \\
&\Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{fully\_connect}(512) \Rightarrow \mathsf{ReLU} \Rightarrow \mathsf{fully\_connect}(10).\end{aligned}$$
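As a sanity check on the layer sizes, the spatial dimensions through the convolutional stack can be traced with a few lines of Python. The sketch assumes $3\times 3$ convolutions with padding 1 (so $\mathsf{conv}$ preserves spatial size) and $32\times 32$ inputs; neither assumption is stated in the text above:

```python
def trace_shapes(h, w, c, arch):
    """Trace (channels, height, width) through conv/pool2 operations.
    Assumes each conv preserves spatial size (e.g. 3x3 kernel, padding 1)."""
    for op in arch:
        if op[0] == "conv":
            c = op[2]              # conv(p, q): p input -> q output filters
        elif op[0] == "pool2":
            h, w = h // 2, w // 2  # max pooling with stride 2
    return c, h, w

# The convolutional part of the architecture above.
arch = [("conv", 3, 32), ("conv", 32, 64), ("pool2",),
        ("conv", 64, 128), ("conv", 128, 128), ("pool2",),
        ("conv", 128, 256), ("conv", 256, 256), ("pool2",)]
c, h, w = trace_shapes(32, 32, 3, arch)
flat = c * h * w  # flattened feature size feeding fully_connect(1024)
```

Under these assumptions the three pooling steps reduce $32\times 32$ to $4\times 4$ with 256 filters, i.e. 4096 features entering the first fully connected layer.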
[^1]: We provide details of this CNN architecture in Appendix \[apx:experiments\].
---
abstract: 'We study connections between the topology of generic character varieties of fundamental groups of punctured Riemann surfaces, Macdonald polynomials, quiver representations, Hilbert schemes on $\C^\times\times\C^\times$, modular forms and multiplicities in tensor products of irreducible characters of finite general linear groups.'
author:
- |
Tamás Hausel\
[*University of Oxford*]{}\
[hausel@maths.ox.ac.uk]{}
- |
Emmanuel Letellier\
[ *Université de Caen*]{}
$g\leq e$. Then ${\mathbf d}(c)=g$. This and $c\leq t$ yield that $c=tg$. It follows that there is an arrow $$(g,x)\stackrel{(e,g)}{\longrightarrow} (e,x)$$ in the category $\int_{L(S)}\Phi(X,\mu)$. Since $(f,s)(e,g)=(f,c)=(f,t)(e,g)$ in $L(S)$, the diagram $$(g,x) \stackrel{(e,g)}{\longrightarrow}(e,x) \overset{(f,s)}{\underset{(f,t)}{\rightrightarrows}} (f,y)$$ is commutative. Therefore, the functor $\Phi(X,\mu)$ satisfies axiom (F3) from the definition of a filtered functor (see Subsection \[subs:2.5\]). It satisfies (F1) and (F2) due to Proposition \[prop:trans\], since universal $S$-sets are transitive.
Conversely, let $(X,\mu)$ be an $S$-set such that the functor $\Phi(X,\mu)$ is filtered. Assume that $s,t\in S$ and $x\in X$ are such that $s\cdot x=t\cdot x$. Let $e={\mathbf d}(s){\mathbf d}(t)$ and $h={\mathbf r}(s){\mathbf r}(t)$. Then $hse\cdot x=hte\cdot x$ and also ${\mathbf d}(hse)={\mathbf d}(hte)$, ${\mathbf r}(hse)={\mathbf r}(hte)$. We put $p={\mathbf d}(hse)$ and $q={\mathbf r}(hse)$. It follows that in the category $\int_{L(S)}\Phi(X,\mu)$ we have two parallel arrows $$(p,x) \overset{(q,hse)}{\underset{(q,hte)}{\rightrightarrows}} (q,y).$$ By axiom (F3), there is a commutative diagram $$(r,z) \stackrel{(p,a)}{\longrightarrow} (p,x) \overset{(q,hse)}{\underset{(q,hte)}{\rightrightarrows}} (q,y).$$
| 183
| 510
| 221
| 238
| 780
| 0.800117
|
github_plus_top10pct_by_avg
|
eq:dim5op}\end{aligned}$$ We consider below how to obtain the dim.-5 operator at the loop level by using renormalizable interactions[^2]. We restrict ourselves to extending only the $\text{SU}(3)_c$-singlet scalar sector of the HTM, because it is an attractive feature that the HTM extends neither the fermion sector nor the colored sector of the SM. An unbroken $Z_2$ symmetry is introduced in order to obtain dark matter candidates, and the new scalars that appear in the loop diagram for the $\mu$-term are assigned to be $Z_2$-odd particles. We emphasize that the unbroken $Z_2$ symmetry is not introduced solely to provide dark matter candidates but is also utilized in our radiative mechanism for the $\mu$-term.
                       $L$         $\Phi$      $\Delta$    $s_1^0$     $s_2^0$     $\eta$
  ------------------- ----------- ----------- ----------- ----------- ----------- -----------
   $\text{SU}(2)_L$   [**2**]{}   [**2**]{}   [**3**]{}   [**1**]{}   [**1**]{}   [**2**]{}
   $\text{U}(1)_Y$    $1/2$       $1/2$       $1$         $0$         $0$         $1/2$
   $L\#$              $1$         $0$         $-2$        $-1$        $0$         $-1$
   $Z_2$              $+$         $+$         $+$         $+$         $-$         $-$
: List of particle contents of our one-loop model. []{data-label="tab:1-loop"}
We present the minimal model where the dim.-5 operator in eq. (\[eq:dim5op\]) is generated by a one-loop diagram with dark matter candidates. Table \[tab:1-loop\] shows the particle contents. A real singlet scalar field $s_2^0$ and the second doublet scalar field $\eta$ \[$=(\eta^+, \eta^0)^T, \eta^0=(\eta^0_r+i\eta^0_i)/\sqrt{2}$\] are introduced to the HTM in addition to $s_1^0$. Lepton numbers of $s_2^0$ and $\eta$ are 0 and $-1$, respectively. Then $\eta^T i\sigma_2 \Delta^\dagger \eta$ conserves lepton number. In order to forbid the VEV of $\eta$, we introduce an unbroken $Z_2$ symmetry for which $s_2^0$ and $\eta$ are odd. Other fields are even under th
| 184
| 541
| 669
| 268
| 2,484
| 0.779387
|
github_plus_top10pct_by_avg
|
}
Table [4](#sim7930-tbl-0004){ref-type="table"} shows IPD meta‐analysis results for a random sample of 5 and 10 trials that investigated exercise interventions and for 20 trials (using all 15 exercise trials, plus 5 additional trials that investigated mixed interventions).
######
Results from baseline weight adjusted individual participant data meta‐analysis of i‐WIP data: summary treatment effect estimate ($\hat{\theta}$) with 95% confidence interval and between‐trial variance of treatment effects estimate (${\hat{\tau}}^{2}$). From meta‐analysis with different numbers of trials (K = 5, 10, or 20), and assuming a random treatment effect and a common residual variance throughout
  $\hat{\theta}$ (95% CI); ${\hat{\tau}}^{2}$
-------- ------------------------------------------------------------------- ------------------- ------------------- ------------------- -- ------------------- ------------------- ------------------- -------------------
**5** −1.172 −1.172 −1.172 −1.172 −1.170 −1.171 −1.171 −1.171
(−1.811, −0.534); (−1.815, −0.530); (−3.114, 0.770); (−2.712, 0.367); (−1.811, −0.529); (−1.813, −0.528); (−3.072, 0.731); (−2.681, 0.340);
8.58E−17 3.94E−15 3.94E−15 3.94E−15 2.95E−14 4.51E−12 4.51E−12 4.51E−12
**10** −0.972 −0.972 −0.972 −0.972 −0.972 −
| 185
| 1,585
| 633
| 221
| null | null |
github_plus_top10pct_by_avg
|
ation.
  --------------- ------------------------ ------------------------------ ----------------------------------
                   $\mathcal{R}=X$          $\textrm{Cl}(\mathcal{R})=X$   $\textrm{Cl}(\mathcal{R})\neq X$
  Not injective    $P\sigma(\mathscr{L})$   $P\sigma(\mathscr{L})$         $P\sigma(\mathscr{L})$
  Not continuous   $C\sigma(\mathscr{L})$   $C\sigma(\mathscr{L})$         $R\sigma(\mathscr{L})$
  Continuous       $\rho(\mathscr{L})$      $\rho(\mathscr{L})$            $R\sigma(\mathscr{L})$
  --------------- ------------------------ ------------------------------ ----------------------------------
  : [\[Table: spectrum\]Spectrum of linear operator $\mathscr{L}\in\textrm{Map}(X)$. Here $\mathscr L_{\lambda}:=\mathscr{L}-\lambda$ satisfies the equation $\mathscr L_{\lambda}(x)=0$, with the resolvent set $\rho(\mathscr{L})$ of $\mathscr{L}$ consisting of all those complex numbers $\lambda$ for which $\mathscr L_{\lambda}^{-1}$ exists as a continuous operator with dense domain. Any value of $\lambda$ for which this is not true is in the spectrum $\sigma(\mathscr{L})$ of $\mathscr{L}$, which is further subdivided into the three disjoint components of the point, continuous, and residual spectra according to the criteria shown in the table.]{}
The term “eigenfunction” is motivated by the following considerations. Consider the eigenvalue equation $$(\mu-\nu)\mathscr F_{\nu}(\mu)=0,\qquad\mu\in V(\mu),\textrm{ }\nu\in\mathbb{R}\label{Eqn: eigen}$$
in the space of multifunctions $\textrm{Multi}(V(\mu),(-\infty,\infty))$, where $\mu$ is in either of the intervals $[-1,1]$ or $[0,1]$ depending on whether the given boundary conditions for Eq. (\[Eqn: NeutronTransport\]) is full-range or half range. If we are looking only for functional solutions of Eq. (\[Eqn: eigen\]), then the unique function $\
| 186
| 1,494
| 706
| 267
| null | null |
github_plus_top10pct_by_avg
|
$f(N\delta )=S_{N\delta }=S_{\delta }^{\bot }$.

\(iii) For every $\delta _{1}$, $\delta _{2}\in \psi _{A}^{Q}$, $f(\delta _{1}K\delta _{2})=\mathcal{S}_{\delta _{1}K\delta _{2}}=\mathcal{S}_{\delta _{1}}\cap \mathcal{S}_{\delta _{2}}$.

\(iv) For every $\delta _{1}$, $\delta _{2}\in \psi _{A}^{Q}$, $f(\delta _{1}A\delta _{2})=S_{\delta _{1}A\delta _{2}}=S_{\delta _{1}}\cup S_{\delta _{2}}$.
Secondly, we rewrite statement P above substituting $\mathcal{S}_{\vdash E(x)}$ for $\mathcal{S}_{E}$ in it.

P$^{\prime }$. Let $\vdash E(x)$ be an elementary af of $\mathcal{L}_{Q}^{P}$ and let $x$ be in the state $S$. Then,

$\pi _{S}(\vdash E(x))=J$ iff $S\in S_{\vdash E(x)}$,

$\pi _{S}(\vdash E(x))=U$ iff $S\notin S_{\vdash E(x)}$.
Thirdly, we note that statement P$^{\prime }$ defines the pragmatic evaluation function $\pi _{S}$ on all elementary afs of $\mathcal{L}_{Q}^{P}$.
Finally, for every $S\in \mathcal{S}$, we extend $\pi _{S}$ from the set of all elementary afs of $\mathcal{L}_{Q}^{P}$ to the set $\psi _{A}^{Q}$ of all afs of $\mathcal{L}_{Q}^{P}$ bearing in mind JR$_{2}$ and JR$_{3}$ in Sec. 3.1, hence introducing the following recursive rules.
\(i) For every $\delta \in \psi _{A}^{Q}$, $\pi _{S}(N\delta )=J$ iff $S\in S_{N\delta }=S_{\delta }^{\bot }$.

\(ii) For every $\delta _{1}$, $\delta _{2}\in \psi _{A}^{Q}$, $\pi _{S}(\delta _{1}K\delta _{2})=J$ iff $S\in S_{\delta _{1}K\delta _{2}}=S_{\delta _{1}}\cap S_{\delta _{2}}$.

\(iii) For every $\delta _{1}$, $\delta _{2}\in \psi _{A}^{Q}$, $\pi _{S}(\delta _{1}A\delta _{2})=J$ iff $S\in S_{\delta _{1}A\delta _{2}}=S_{\delta _{1}}\cup S_{\delta _{2}}$.
The above procedure defines, for every $S\in \mathcal{S}$, a pragmatic evaluation function $$\pi _{S}:\delta \in \psi _{A}^{Q}\longrightarrow \pi _{S}(\delta )\in \{J,U\}$$ which provides a set-theoretical pragmatics for $\mathcal{L}_{Q}^{P}$, as stated.

On the notion of justification in $\mathca
| 187
| 242
| 652
| 260
| null | null |
github_plus_top10pct_by_avg
|
45.900, J44.001, J44.101, J44.803, and J98.801). The collected data cover the period from January 1, 2016, to December 31, 2017, i.e., 731 days of continuous data. For statistical purposes, days on which the daily visit volume was less than 24 were labeled as nonpeak events, and the rest were labeled as peak events.
[Table 1](#table1){ref-type="table"} describes the Pearson correlation coefficients between OED visit numbers and the input indicators. We found that OED visit numbers were positively correlated with wind speed, atmospheric pressure, carbon monoxide, sulphur dioxide, nitrogen dioxide, and PM25, and negatively correlated with outdoor temperature, relative humidity, and ozone. The distribution of weather and air quality data for patients with acute exacerbations of COPD in the peak and nonpeak groups is shown in [Table 2](#table2){ref-type="table"}.
{#figure1}
######
The Pearson correlation coefficients between outpatient and emergency department visit numbers and input indicators.
Variable WS^a^, r TP^b^, r AP^c^, r RH^d^, r PM25^e^, r SO~2~^f^, r CO^g^, r NO~2~^h^, r O~3~\_8h^i^, r Number of visits, r
------------------ ---------- ---------- ---------- ---------- ------------ ------------- ---------- ------------- ---------------- ---------------------
WS 1 --0.32 0.27 --0.4 --0.34 --0.33 --0.26 --0.42 --0.24 0.15
TP --0.32 1 --0.88 0.35 --0.23 0.03 --0.24 --0.25 0.39 --0.38
AP 0.27 --0.88 1 --0.5 0.31 0.09 0.21 0.29 --0.18 0.39
RH --0.4 0.35 --0.5 1 --0.18 --0.27 0.2 0.03 --0.28
| 188
| 728
| 568
| 279
| null | null |
github_plus_top10pct_by_avg
|
l separated from the other curves, but a further reduction in $\lambda_{\max}$ risks the introduction of an additional, spurious MU to explain the low-stimulus observations.
[lcP[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}P[18mm]{}]{} & True & & &\
$u$ & 8 & 7 & 8 & 7 & 8 & 7 & 8\
$\mathbb{P}(u|y)$ & – & 96.7% & 3.3% & 94.6% & 5.4% & 28.7% & 71.3%\
$\eta_4$ & 26.0V & 26.9 (25.4, 28.8) & 26.6 (25.3, 28.6) & 27.0 (25.4, 28.9) & 26.5 (25.2, 28.5) & 26.7 (25.7, 27.7) & 26.4 (25.6, 27.6)\
$\eta_5$ & 27.3V & 27.8 (26.0, 29.1) & 27.4 (25.5, 28.9) & 27.8 (25.9, 29.0) & 27.5 (25.6, 28.9) & 27.4 (26.2, 28.5) & 27.2 (25.9, 28.3)\
$\eta_6$ & 27.9V & – & 27.9 (26.5, 29.5) & – & 27.9 (26.6, 29.5) & – & 27.5 (26.5, 28.8)\
$\lambda_4$ & 1.8V & 4.5 (1.8, 7.6) & 3.6 (1.0, 7.6) & 4.3 (1.9, 6.6) & 3.1 (0.7, 6.3) & 4.0 (1.8, 7.8) & 2.5 (0.9, 6.3)\
$\lambda_5$ & 3.6V & 4.1 (2.2, 7.3) & 4.7 (1.8, 7.9) & 4.0 (2.1, 6.5) & 4.4 (1.7, 6.6) & 3.7 (2.1, 6.3) & 4.6 (1.8, 8.3)\
$\lambda_6$ & 4.8V & – & 4.7 (1.6, 8.1) & – & 4.4 (1.4, 6.6) & – & 4.4 (2.3, 7.5)\
In the original analysis, $\lambda_4$ is mis-estimated because of the limited information available in the observations to adequately describe the period of alternation between 23–32V, which involves five MUs. To show that this is the case, an additional 23 observations were generated evenly over this interval; see Figure \[fig:SimCaseData\]. This modest addendum to the data set is sufficient for the true model to be identified, $\hat{u}=8$, with a posterior probability of 71.3% and with better scale parameter estimates. However, the increase in computational resource required to obtain the same degree of Monte Carlo and numerical accuracy was substantial: from 5000 to 25000 particles and from a $30{\times}30$ to $50{\times}50$ lattice for the eight-MU hypothesis.
Case study: rat tibial muscle {#sec:CaseStudy}
=============================
The case study arises from [@Cas10] where a rat tibial muscle (medial gastrocnemius) receives stem cell therapy to encourage ne
| 189
| 36
| 735
| 287
| null | null |
github_plus_top10pct_by_avg
|
}}$ indicates the spatial compatibility between neighboring latent patterns: we model the pairwise spatial relationship between latent patterns in the upper conv-layer and those in the current conv-layer. For each $v^{\textrm{unt}}$ (with its parent $u$) in conv-layer $L_{u}$, we select 15 nearest latent patterns in conv-layer $L_{u}+1$, *w.r.t.* $\Vert\overline{\bf p}_{u}-\overline{\bf p}_{u_{\textrm{upper}}}\Vert$, as the neighboring latent patterns. We set constant weights $\lambda^{\textrm{rsp}}=1.5,\lambda^{\textrm{loc}}=1/3,\lambda^{\textrm{pair}}=10.0$, $\lambda^{\textrm{unant}}=5.0$, and $\lambda^{\textrm{close}}=0.4$ for all categories. Based on the above design, we first infer latent patterns corresponding to high conv-layers, and use the inference results to select units in low conv-layers.
During the learning of AOGs, we define $S^{\textrm{unant}}_{u}=S_{\hat{v}^{\textrm{unt}}}^{\textrm{rsp}}+S_{\hat{v}^{\textrm{unt}}}^{\textrm{loc}}$ to measure the latent-pattern-level inference score in Equation (5), where $\hat{v}^{\textrm{unt}}$ denotes the neural unit assigned to $u$.
Scores of AND nodes {#scores-of-and-nodes .unnumbered}
-------------------
$$S^{\textrm{inf}}(\Lambda_{u}|\Lambda_{v})=-\lambda^{\textrm{inf}}\min\{\Vert{\bf p}(\Lambda_{u})+\Delta{\bf p}_{u}-{\bf p}(\Lambda_{v})\Vert^2,d^2\}\nonumber$$
where we set $d=37$ pixels and $\lambda^{\textrm{inf}}=5.0$.
[^1]: Because the CNN has demonstrated superior performance in object detection, we assume that the target object can be well detected by the pre-trained CNN. As in [@SemanticPart], we regard object detection and part localization as two separate processes for evaluation. Thus, to simplify the learning scenario, we crop $I$ to contain only the object, resize it to the CNN input size, and focus solely on the part localization task.
[^2]: ${\bf M}_{ii}\!\propto\!\exp[\mathbb{E}_{I\in{\bf I}}S_{v^{\textrm{unt}}_{i}}]$, where $v^{\textrm{unt}}_{i}$ is the neural unit
| 190
| 43
| 474
| 284
| 448
| 0.809524
|
github_plus_top10pct_by_avg
|
segs,st1) = selectlist(segs,st2) \; \wedge \\
& current(st1) = current(st2) \; \wedge \\
& select(seg,st1) = select(seg,st2) \\
& \Rightarrow \\
& select(seg,next(st1)) = select(seg,next(st2))
\end{aligned}$$
where $segs = dia(seg) \cap segsofpartition(current(st1))$. The security policy requires that the effect on an arbitrary memory segment $seg$ of the execution of one machine step is a function of the set of memory segments that are both allowed to interact with $seg$ and are associated with the current partition. In this formula, the function $select$ extracts the values in a machine state that are associated with a memory segment. The function $selectlist$ takes a list of segments and returns a list of segment values in a machine state. The function $current$ calculates the current partition given a machine state. The function $next$ models one step of computation of the machine state. It takes a machine state as the argument and returns a machine state that represents the effect of the single step. The function $dia$ takes a memory segment name as the argument and returns a list of memory segments that are allowed to affect it. The function $segsofpartition$ returns names of the memory segments associated with a particular partition. The detailed information about the meaning of a machine state and the $next$ function of states are explained in [@Alves04].
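The shape of this condition can be illustrated with a toy model. Everything below is hypothetical: the two-segment machine, the `DIA` table, and the `next_state` function are invented for illustration and are not the formalization in [@Alves04]; the point is only that two states agreeing on what `dia` allows a segment to see within the current partition must agree on that segment after one step.

```python
# Toy illustration of the GWV condition: the next value of a segment may
# depend only on segments that are (a) allowed to affect it (dia) and
# (b) associated with the current partition. All names are hypothetical.

DIA = {"s1": ["s1"], "s2": ["s1", "s2"]}   # s2 may read s1; s1 is isolated
PARTS = {"p1": ["s1"], "p2": ["s2"]}       # memory segments per partition

def select(seg, st):
    return st["mem"][seg]

def current(st):
    return st["part"]

def next_state(st):
    mem = dict(st["mem"])
    # Only segments of the current partition are updated, each as a
    # function of the segments that dia allows it to see there.
    for seg in PARTS[current(st)]:
        visible = [s for s in DIA[seg] if s in PARTS[current(st)]]
        mem[seg] = sum(select(s, st) for s in visible) + 1
    return {"mem": mem, "part": st["part"]}

# Two states agreeing on everything dia("s1") can see in partition p1
# must agree on s1 after one step, even though s2 differs.
st1 = {"mem": {"s1": 10, "s2": 0}, "part": "p1"}
st2 = {"mem": {"s1": 10, "s2": 99}, "part": "p1"}
assert select("s1", next_state(st1)) == select("s1", next_state(st2))
```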
The GWV security policy has been well known and accepted in industry [@integrity08; @Greve04; @Greve10]. The PVS formalization of GWV policy has been provided by Rushby [@Rushby04]. The GWV policy is changed/extended in [@Alves04; @Tverdy11]. The $dia$ function is weakened by allowing communication between segments of the same partition in [@Alves04] as follows.
$$\begin{aligned}
seg \in segsofpartition(p) \Rightarrow \\
segsofpartition(p) \in dia(seg)
\end{aligned}$$
The $dia$ function is extended by a restriction considering partition names, $diaStrong(seg,p) \subset dia(seg)$, in [@Tverdy11]. In addition, the GWV policy is extended by the $subject
| 191
| 809
| 1,075
| 307
| 124
| 0.824039
|
github_plus_top10pct_by_avg
|
lback from sections of any tensor bundle to itself. In this way, the group also acts on all spaces of $(p,q)$-tensors.
Studying the neighborhood of the identity $e\in G$, we get the induced action of the Lie algebra $\mathfrak{g}$ on these same tensor bundles. The infinitesimal version of a pullback of a tensor field is the Lie derivative of that field [@MR2954043]. Thus the induced action of $\mathfrak{g}$ on tensors is Lie derivation along the representation of the Lie algebra element. That is, given a representation as tangent vector fields $\rho: \mathfrak{g} \to \mathfrak{X}(\mathcal{M})$, for some algebra element $\alpha\in \mathfrak{g}$, the induced action of $\alpha$ on a tensor $\mathbf{t}$ is via the Lie derivative, $$\begin{aligned}
\alpha \cdot \mathbf{t} = {\mathcal{L}}_{\rho(\alpha)} \mathbf{t} \,.\end{aligned}$$
One of the crucial algebra elements we will need is the Casimir element of the ${\ensuremath{\mathfrak{sl}(2,\mathbb{R})}}$ factor. Let $h_{0}, h_{\pm} \in
\mathfrak{g}$ be the algebra elements whose representations are $\rho_{P}(h_{s})=H_{s}$ for $s = 0, \pm$. Then the Casimir element of the ${\ensuremath{\mathfrak{sl}(2,\mathbb{R})}}$ factor, in this basis, is proportional to $$\begin{aligned}
\label{eq:Casimir-def}
\Omega \equiv h_{0} (h_{0} - 1) - h_{-} h_{+} \,,\end{aligned}$$ which commutes with every element of $\mathfrak{g}$. Under the Poincaré-coordinates representation $\rho_{P}$, the Casimir acts on tensors via $$\begin{aligned}
\label{eq:Casimir-lies}
\Omega \cdot \mathbf{t} =
\left(
{\mathcal{L}}_{H_{0}} ( {\mathcal{L}}_{H_{0}} - \text{id} )
- {\mathcal{L}}_{H_{-}} {\mathcal{L}}_{H_{+}}
\right) \mathbf{t} \,.\end{aligned}$$ By construction, the differential operator on the right-hand side of Eq. (\[eq:Casimir-lies\]) commutes with ${\mathcal{L}}_{X}$, where $X$ is one of $\{ H_{0}, H_{\pm}, Q_{0} \}$. Similarly, under the global-coordinates representation $\rho_{g}$, the Casimir acts as in Eq. (\[eq:Casimir-lies\]), but with $H$'s replaced with $L$'s; and this operator will similarly commute
| 192
| 2,461
| 288
| 186
| null | null |
github_plus_top10pct_by_avg
|
0.013 (2) −0.006 (2)
C69 0.018 (2) 0.021 (2) 0.018 (2) −0.0015 (19) 0.0045 (19) −0.0052 (19)
C70 0.012 (2) 0.017 (2) 0.012 (2) −0.0031 (17) 0.0015 (17) 0.0004 (17)
C71 0.013 (2) 0.017 (2) 0.014 (2) −0.0020 (17) 0.0021 (18) 0.0007 (18)
C72 0.016 (2) 0.024 (2) 0.015 (2) −0.0050 (19) 0.0088 (19) −0.0020 (19)
C73 0.021 (3) 0.023 (2) 0.016 (2) −0.008 (2) 0.004 (2) 0.0018 (19)
C74 0.021 (3) 0.017 (2) 0.027 (3) −0.0029 (19) 0.004 (2) 0.004 (2)
C75 0.010 (2) 0.018 (2) 0.028 (3) −0.0020 (18) 0.0035 (19) −0.001 (2)
O1 0.031 (2) 0.0237 (19) 0.035 (2) 0.0058 (16) 0.0170 (18) 0.0087 (17)
N1 0.020 (2) 0.022 (2) 0.023 (2) −0.0003 (17) 0.0074 (18) 0.0044 (18)
C76 0.021 (3) 0.017 (2) 0.031 (3) 0.0007 (19) 0.009 (2) 0.012 (2)
C77 0.022 (3) 0.034 (3) 0.026 (3) −0.001 (2) 0.001 (2) 0.008 (2)
C78 0.029 (3) 0.028 (3) 0.037 (3) 0.006 (2) 0.014 (3) 0.004 (2)
O2 0.026 (2) 0.071 (3) 0.043 (3) −0.006 (2) −0.008 (2) −0.016 (3)
N2 0.017 (2) 0.028 (2) 0.032 (3) 0.0030 (18) 0.0038 (19) −0.007 (2)
C79 0.029 (3) 0.043 (4) 0.032 (3) 0.004 (3) 0.005 (3) −0.001 (3)
C80 0.039 (4) 0.045 (4) 0.043 (4) 0.001 (3) 0.020 (3) −0.006 (3)
C81 0.027 (3) 0.034 (3) 0.056 (4) −0.002 (3) −0.003 (3) −0.016 (3)
----- -------------- -------------- -------------- --------------- -------------- --------------
Geometric parameters (Å, °) {#tablewrapgeomlong}
===========================
------------------- ------------- ------------------- -----------
W1---S4 2.1523 (11) C32---C33 1.390 (6)
W1---S1 2.2062 (11) C32
| 193
| 3,531
| 172
| 164
| null | null |
github_plus_top10pct_by_avg
|
length $k$ of process $\lbrace {\mathbf{f}}_n \rbrace$ is the set ${\mathbf{F}} = \lbrace {\mathbf{f}} \colon {\mathbf{f}} = {\mathbf{f}}_i \;\text{and}\; i \le k\rbrace$. This set’s $\psi$-homogeneous subset contains only those frames having initial condition (abscissa) $\psi \in {\prod{\Psi}}$. The corresponding end-condition set consists of those frames’ ordinates. Obviously the $\psi$-homogeneous end-condition set must have cardinality $\bigl\lvert\lbrace \phi \colon (\psi, \phi) = {\mathbf{f}}_i \;\text{and}\; i \le k\rbrace\bigr\rvert \le k$. The limit supremum (respecting initial segment length and homogeneity choice) presents the process’ worst case scenario.
\[D:UNDERPIGEONHOLE\] Let $\langle \Psi, \Phi \rangle$ be a basis for process $\lbrace {\mathbf{f}}_n \rbrace \colon {\mathbb{N}}\to {\prod{\Psi}} \times {\prod{\Phi}}$ and catalog of functionality ${\mathscr{F}}$. Catalog ${\mathscr{F}}$ *under-pigeonholes* process $\lbrace {\mathbf{f}}_n \rbrace$ if $${\lvert{{\mathscr{F}}}\rvert} <
\lim_{n \to \infty} \biggl(\sup_{\psi \in {\prod{\Psi}}}
\biggl(
\bigl\lvert
\lbrace \phi \colon (\psi, \phi) = {\mathbf{f}}_i \;\text{and}\; i \le n\rbrace
\bigr\rvert
\biggr)\biggr).$$
Whenever the limit supremum fails to converge, no procedure based on a (finite) catalog can cover the process.
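On a finite prefix, the quantity inside the limit supremum is just a count of distinct end-conditions per initial condition. A minimal sketch (the frame sequence is an invented toy, not data from the text):

```python
# Count, for a finite prefix of a frame process, the largest number of
# distinct end-conditions (phi) observed for any single initial
# condition (psi). A catalog smaller than this bound under-pigeonholes
# the prefix. The frame sequence below is a made-up toy example.
from collections import defaultdict

frames = [("a", 1), ("a", 2), ("b", 1), ("a", 2), ("a", 3), ("b", 1)]

def max_homogeneous_end_conditions(frames):
    ends = defaultdict(set)
    for psi, phi in frames:
        ends[psi].add(phi)     # end-condition set of the psi-homogeneous subset
    return max(len(s) for s in ends.values())

bound = max_homogeneous_end_conditions(frames)
print(bound)   # initial condition "a" has end-conditions {1, 2, 3}
```

As the prefix length grows, the supremum of this count over all initial conditions is exactly the quantity whose limit appears in the definition above.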
MIL-STD-882 and the CPP {#S:MIL-STD-882}
=======================
MIL-STD-882 is the United States Department of Defense Standard Practice for System Safety. Revision E became effective May 11, 2012. In preference to *accident*, this standard prefers the term *mishap*, which it defines as [an event or series of events resulting in unintentional death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.]{}
Its safety risk assessment method uses the compound Poisson process (CPP) to represent the timing and severity of mishaps. MIL-STD-882E partitions compound Poisson processes into a lattice of categories and levels that covers the r
| 194
| 1,278
| 374
| 264
| 1,094
| 0.794443
|
github_plus_top10pct_by_avg
|
\mathit{f}}' &= (\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi')&\quad\text{[next functionality]} \\
{\mathbf{f}}\,' &= (\psi', \phi') =
([{\mathit{f}}(\psi) \xi'], [(\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi')]([{\mathit{f}}(\psi) \xi']))&\quad\text{[next frame]}\end{aligned}$$
A partial unfolding of these expressions’ generators clarifies the roles of components in overall mechanism:
1. Current reactive state is $\psi = \phi \xi$.
2. Current locus is $\lambda$.
3. Current actuator is ${\mathsf{a}} = \ell(\lambda)$.
4. Current functionality is ${\mathit{f}} = {\mathsf{a}}(\psi) = (\ell(\lambda))(\psi)$.
5. Current frame is ${\mathbf{f}} = (\psi, {\mathit{f}}(\psi))$.
6. Current step is ${\mathit{s}} = (\lambda, (\psi, (\ell(\lambda))(\psi), {\mathit{f}}(\psi)))$.
7. Next reactive state is $\psi' = \phi' \xi' = {\mathit{f}}(\psi) \xi'$ (by conjointness).
8. Next locus is found through the jump function: $\lambda' = \Delta(\lambda, \psi)$.
9. Next actuator is ${\mathsf{a}}' = \ell(\lambda')$,
10. ${\mathsf{a}}' = \ell(\Delta(\lambda, \psi))$.
11. Next functionality is ${\mathit{f}}\,' = {\mathsf{a}}'(\psi')$,
12. ${\mathit{f}}' = [\ell(\Delta(\lambda, \psi))](\psi')$,
13. ${\mathit{f}}' = (\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi')$.
14. Next frame is ${\mathbf{f}}\,' = (\psi', {\mathit{f'}}(\psi'))$,
15. ${\mathbf{f}}\,' = ([{\mathit{f}}(\psi) \xi'], {\mathit{f'}}([{\mathit{f}}(\psi) \xi']))$,
16. ${\mathbf{f}}\,' = ([{\mathit{f}}(\psi) \xi'], [(\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi')]([{\mathit{f}}(\psi) \xi']))$.
17. Next step is ${\mathit{s}}' = (\Delta(\lambda, \psi), {\mathsf{a}}'(\psi'), (\psi', {\mathit{f'}}(\psi')))$.
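The seventeen-item unfolding collapses into a small step function. The concrete representation below (reactive states as pairs of strings, lambda functions for $\ell$ and $\Delta$) is an illustrative guess at the formalism, not the paper's own machinery.

```python
# One automaton step, following the unfolding above. A reactive state
# psi is split as psi = (phi, xi) into persistent and volatile parts;
# ell maps a locus to an actuator, Delta is the jump function.
# All concrete values here are hypothetical toys.

def step(ell, Delta, locus, psi, next_xi):
    phi_next = ell(locus)(psi)            # current actuator applied to psi
    locus_next = Delta(locus, psi)        # next locus via the jump function
    psi_next = (phi_next, next_xi)        # conjointness: psi' = f(psi) xi'
    frame_next = (psi_next, ell(locus_next)(psi_next))
    return locus_next, psi_next, frame_next

# Toy actuators: append the locus name to the persistent part.
ell = lambda lam: (lambda psi: psi[0] + lam)
Delta = lambda lam, psi: "B" if lam == "A" else "A"

locus, psi, frame = step(ell, Delta, "A", ("", "x"), "y")
```

Iterating `step` and collecting the frames reproduces the orbit of steps that the operator $T_{\mathfrak{A}}$ formalizes next.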
#### Automaton as iterative operator
\[T:AUTOMATON\_OPERATOR\] Let $\langle \Psi, \Phi \rangle$ be a basis with persistent-volatile partition $\Psi = \Phi\Xi$ and step space ${\mathbb{S}}$. Let ${\mathfrak{A}}$ be an automaton. The transform $T_{\mathfrak{A}} \colon {\mathbb{S}} \to {\mathbb{S}}$ induced b
| 195
| 2,345
| 585
| 208
| 2,980
| 0.77564
|
github_plus_top10pct_by_avg
|
$v_i$ can be eliminated by $y_i$ and $m_{i-1,i}, m_{i,i+1}$.\
5. We consider Equation (\[ea27\]). If $L_i$ is *of type $I^e$*, then $v_i$ can be eliminated by $\left(r_i\right)$, $y_i$ can be eliminated by $\left(t_i, v_iz_i, m_{i-1,i}, m_{i,i+1}\right)$, and $w_i$ can be eliminated by $\left(r_i, t_i, z_i, x_i, u_i, m_{i-1,i}, m_{i,i+1}\right)$.\
6. Finally, we consider Equations (\[24’\]) and (\[ea32\]), $\mathcal{X}_{i,2,2}$ in Equation (\[ea27\]), together with the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j}
\bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for any $j\in \mathcal{B}_1$, and the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j}
\bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for any $j\in \mathcal{B}_2$.
We first consider $\mathcal{X}_{i,2,2}$ in Equation (\[ea27\]). If $\gamma_i$ is not a unit, then $\bar{\gamma}_i=0$ and so $u_i$ can be eliminated. If $\gamma_i$ is a unit so that $\bar{\gamma}_i\neq 0$, then we add $\sqrt{\bar{\gamma}_i}x_i$ to both sides of $\bar{\gamma}_i\mathcal{X}_{i,2,2}=0$. Then we have $$\left(\bar{\gamma}_iu_i+\sqrt{\bar{\gamma}_i}x_i\right)+\left(\sqrt{\bar{\gamma}_i}x_i+\bar{\gamma}_iu_i\right)^2+
\bar{\gamma}_i/2{}^tr_ia_ir_i+\bar{\gamma}_i\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right)=\sqrt{\bar{\gamma}_i}x_i.$$ If we let $\tilde{u}_i=\bar{\gamma}_iu_i+\sqrt{\bar{\gamma}_i}x_i$, then $u_i$ can be eliminated by $\tilde{u}_i, x_i$. In addition, the above equation yields that $x_i$ can be eliminated by $\tilde{u}_i, r_i, m_{i-2, i}^{\natural}, m_{i+2, i}^{\natural}$. Therefore, since we introduce one new variable $\tilde{u}_i$ and eliminate two variables $u_i$ and $x_i$, the effect of $\mathcal{X}_{i,2,2}$ in Equation (\[ea27\]) is to eliminate one variable.
Similarly, each of Equations (\[24'\]) and (\[ea32\]) eliminates one variable. The proof is similar to the above or to the argument of Step (4) in the proof of Lemma A.8 of [@C2], so we omit it.
The equation $\sum_{l=0}^{k_j}z_{j-
| 196
| 2,323
| 455
| 265
| 3,621
| 0.771201
|
github_plus_top10pct_by_avg
|
xt in multiple lines as the title of a plot or axis WITH a subscript present in the text
I wish to print text in the title in two lines but am not able to achieve the desired output because of the subscript present in the text. The following is an example of the text that I want in two lines.
plot(1,main=expression(paste(CO[2]~'Flux (kg C ', m^-2,' ',s^-1,')')))
But using a line break, as in the following command, does not give the desired result of moving (only) the text following it to a new line:
plot(1,main=expression(paste(CO[2]~'Flux \n(kg C ', m^-2,' ',s^-1,')')))
Please help me with this issue.
Thanks in advance
A:
You can do this with the atop function.
plot(1,main=expression(atop(CO[2]~'Flux', paste('(kg C ', m^-2,' ',s^-1,')'))))
Since the lheight par doesn't affect expressions, if you want tighter spacing between the lines, you can use the following.
plot(1,main=expression(textstyle(atop(CO[2]~'Flux', paste('(kg C ', m^-2,' ',s^-1,')')))),
cex.main=2)
Q:
grails-spring-security-rest OAuth OAuthException: Response body is incorrect
The plugin throws the following error when trying to sign in with facebook.
error:500,
message:org.scribe.exceptions.OAuthException: Response body is incorrect. Can't extract a token from this: '{"access_token":"EAAOWKGC6MDcBAB9ZAka1zEc1","token_type":"bearer"}', error_description:org.scribe.exceptions.OAuthException: Response body is incorrect. Can't extract a token from this: '{"access_token":"EAAOWKGC6M","token_type":"bearer"}', error_code:OAuthException
A:
I solved this problem by customizing the FacebookClient (workaround).
Bug details: https://github.com/alvarosanchez/grails-spring-security-rest/issues/327
Put the files in your src/main/groovy directory (I recommend creating a separate package, e.g. src/main/groovy/springSecurity).
Files: https://gist.github.com/sergioz95/1266c2a29b4d00094fe18423b350aa34
and reference the new facebookClient in application.groovy
grails {
plugin {
springsecurity {
rest {
oauth {
facebook {
client = YOUR_PACKAGE
| 197
| 3,030
| 131
| 295
| 58
| 0.830163
|
github_plus_top10pct_by_avg
|
at which a particular step of an orbit coincides with any member of the reference set.
### Relative operational profile {#S:RELATIVE_OP_PROFILE}
Let ${\mathit{o}} = \{{\mathit{s}}_n\}$ be an orbit. Suppose $z \in Z \subset {\mathbb{S}}$ is a step of the reference set. Software encounters $N_{\{z\}}(\{{\mathit{s}}_n\}, k)$ instances of steps satisfying $\lbrace {\mathit{s}}_n \rbrace(i) = {\mathit{s}}_i = z$ during the first $k$ automaton steps. In the same execution there are $N_Z(\{{\mathit{s}}_n\}, k)$ instances of ${\mathit{s}}_i \in Z$. In the frequentist [@wW14fp] school of interpreting probability, $$P(z \mid Z) = \lim_{\;k \to \infty} \frac{N_{\{z\}}(\{{\mathit{s}}_n\}, k)}{N_Z(\{{\mathit{s}}_n\}, k)}$$ represents the conditional probability of occurrence of $z$, given that $Z$ occurs. By Conjecture \[T:LIMIT\_RATIO\], every orbit (of the same usage pattern) yields the same relative operational profile.
A relative operational profile is an arbitrary set $Z$ of steps, along with each step’s conditional probability of execution. In other words, a relative operational profile is a mapping $\mathcal{O} \colon Z \to [0,1]$ having total measure 1.
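In the frequentist reading, a finite-prefix estimate of the relative profile is a ratio of counters. A minimal sketch, with a toy orbit and reference set invented here for illustration:

```python
# Empirical relative operational profile over a finite prefix:
# P(z | Z) ~ N_{z}(orbit, k) / N_Z(orbit, k). The orbit and the
# reference set Z below are made-up toy data.
from collections import Counter

orbit = ["s1", "s2", "s1", "s3", "s1", "s2", "s4"]
Z = {"s1", "s2"}

hits = Counter(s for s in orbit if s in Z)     # N_{z} for each z in Z
n_Z = sum(hits.values())                       # N_Z
profile = {z: hits[z] / n_Z for z in Z}        # total measure 1 over Z

assert abs(sum(profile.values()) - 1.0) < 1e-12
print(profile)
```

The mapping `profile` is exactly an $\mathcal{O} \colon Z \to [0,1]$ of total measure 1, estimated from the first $k$ steps of one orbit.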
### Absolute operational profile {#S:ABSOLUTE_OP_PROFILE}
Let ${\mathit{o}} = \{{\mathit{s}}_n\}$ be an orbit. An absolute operational profile is the probability $P(Z)$ with which an orbit (of some usage pattern) coincides with any step of the reference set $Z$. As before, this probability is the limiting ratio of two counting functions. Its numerator contains $N_Z(\{{\mathit{s}}_n\}, k)$, the same count as appears in the denominator of the relative operational profile. In its denominator is the counting function of all possible steps, namely $N_{{\mathbb{S}}}(\{{\mathit{s}}_n\}, k)$, where ${\mathbb{S}}$ is the space of all steps. Thus the ratio of counting functions of reference set $Z$ to the entire space ${\mathbb{S}}$ is $\frac{N_Z(\{{\mathit{s}}_n\}, k)}{N_{{\mathbb{S}}}(\{{\mathit{s}}_n\}, k)} = \frac{N_Z(\{{\mathit{s}}_n\}, k)}{k}$, and the absolute op
| 198
| 2,780
| 761
| 300
| 2,258
| 0.781375
|
github_plus_top10pct_by_avg
|
thm in [@moller-sgcd], which computes the Jacobi symbol using only $O(n)$ extra time and $O(1)$ extra space[^1]. This indicates that also for the fastest algorithms for large inputs, the cost is essentially the same for computing the GCD and computing the Jacobi symbol.[^2]
Like the algorithm described in [@bach-shallit], the computation is related to the quotient sequence. The updates of the Jacobi symbol are somewhat different, instead following an unpublished algorithm by Schönhage [@schoenhage-brent-communication] for computing the Jacobi symbol from the quotient sequence modulo four. In the algorithms in , the quotients are not always applied in a single step; instead, there is a series of reductions of the form $a {\leftarrow}a
- m b$, where $m$ is a positive number equal to or less than the correct quotient ${\lfloor a/b \rfloor}$. In the corresponding Jacobi algorithms, the Jacobi sign is updated for each such partial quotient. Most of the partial quotients are determined from truncated inputs where the least significant parts of the numbers are ignored. The least significant two bits, needed for the Jacobi computation, must therefore be maintained separately.
Notation
--------
The time needed to multiply two $n$-bit numbers is denoted $M(n)$, where $M(n) = O(n \log n)$ for the fastest known algorithms. [^3]
The Jacobi symbol is denoted $(a | b)$. We use the convention that $[\text{condition}]$ means the function that is one when the condition is true, otherwise 0, e.g., $(0 | b) = [b = 1]$.
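The convention $(0 | b) = [b = 1]$ can be checked against the classical textbook computation of the Jacobi symbol via the quotient sequence and quadratic reciprocity. This is the standard method, not the paper's algorithm; a minimal sketch:

```python
def jacobi(a, b):
    """Jacobi symbol (a | b) for odd b > 0, via the standard rules:
    factor out twos using (2 | b) = (-1)^((b^2 - 1)/8), then swap
    the arguments using quadratic reciprocity."""
    assert b > 0 and b % 2 == 1
    a %= b
    sign = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of two
            a //= 2
            if b % 8 in (3, 5):      # (2 | b) = -1 iff b = +-3 (mod 8)
                sign = -sign
        a, b = b, a                  # reciprocity: flip iff both = 3 (mod 4)
        if a % 4 == 3 and b % 4 == 3:
            sign = -sign
        a %= b
    return sign if b == 1 else 0     # (0 | b) = [b = 1]

print(jacobi(0, 1), jacobi(0, 3), jacobi(2, 15), jacobi(3, 7))  # 1 0 1 -1
```

Note that the terminal case reproduces the bracket convention above: the symbol is $1$ exactly when the remaining modulus is $1$.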
Left-to-right GCD
=================
In this paper, we will not describe the details of fast algorithms. Instead we will consider Algorithm \[alg:gcd\], which is a generic left-to-right algorithm, with a basic reduction step where a multiple of the smaller number is subtracted from the larger number. We also describe the main idea of fast instantiations of this algorithm.
In: $a, b > 0$
Repeat:
- If $a \geq b$: set $a {\leftarrow}a - m b$, with $1 \leq m \leq {\lfloor a/b \rfloor}$; if $a = 0$, return $b$.
- Otherwise: set $b {\leftarrow}b - m a$, with $1 \leq m \leq {\lfloor b/a \rfloor}$; if $b = 0$, return $a$.
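A direct (and deliberately slow) sketch of this generic left-to-right algorithm follows; the names are hypothetical. Any admissible partial quotient $m$ with $1 \leq m \leq {\lfloor a/b \rfloor}$ yields the same result, which the sketch emphasizes by choosing $m$ at random:

```python
import random

def lr_gcd(a, b):
    """Generic left-to-right gcd: repeatedly subtract a multiple of the
    smaller number from the larger. The multiple m may be any partial
    quotient, 1 <= m <= floor(larger/smaller); correctness is unaffected,
    since gcd(a - m*b, b) = gcd(a, b)."""
    assert a > 0 and b > 0
    while True:
        if a >= b:
            m = random.randint(1, a // b)  # any partial quotient works
            a -= m * b
            if a == 0:
                return b
        else:
            m = random.randint(1, b // a)
            b -= m * a
            if b == 0:
                return a

print(lr_gcd(2 * 3 * 7, 3 * 7 * 11))  # 21
```

Termination is guaranteed because each reduction decreases $a + b$ by at least $\min(a, b)$, and fast instantiations recover speed by computing most partial quotients from truncated inputs.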
Bond angles (°), with standard uncertainties in parentheses:

C51---C52---C53 120.6 (6)
W2---S7---Ag4 75.45 (3) C51---C52---H52 119.7
C1---P1---C7 106.7 (2) C53---C52---H52 119.7
C1---P1---C13 101.0 (2) C54---C53---C52 119.9 (5)
C7---P1---C13 104.3 (2) C54---C53---H53 120.0
C1---P1---Ag1 111.83 (16) C52---C53---H53 120.0
C7---P1---Ag1 112.73 (15) C53---C54---C55 120.3 (5)
C13---P1---Ag1 119.05 (14) C53---C54---H54 119.8
C70---P2---C64 102.5 (2) C55---C54---H54 119.8
C70---P2---C63 104.0 (2) C56---C55---C54 119.7 (6)
C64---P2---C63 106.9 (2) C56---C55---H55 120.1
C70---P2---Ag2 113.57 (15) C54---C55---H55 120.1
C64---P2---Ag2 111.52 (15) C51---C56---C55 120.4 (5)
C63---P2---Ag2 116.97 (15) C51---C56---H56 119.8
C57---P3---C51 103.1 (2) C55---C56---H56 119.8
C57---P3---C63 103.1 (2) C62---C57---C58 119.2 (4)
C51---P3---C63 103.4 (2) C62---C57---P3 120.5 (3)
C57---P3---Ag3 122.93 (15) C58---C57---P3 120.2 (3)
C51---P3---Ag3 109.79 (15) C59---C58---C57 120.2 (4)
C63---P3---Ag3 112.48 (15) C59---C58---H58 119.9
C39---P4---C45 104.2 (2) C57---C58---H58 119.9
C39---P4---C38 102.7 (2) C58---C59---C60 120.2 (5)
C45---P4---C38 101.0 (2) C58---C59---H59 119.9
C39---P4---Ag3 124.25 (15) C60---C59---H59 119.9
C45---P4---Ag3 110.63 (15) C61---C60---C59 120.5 (4)
C38---P4---Ag3 111.39 (15) C61---C60---H60 119.7
C32---P5---C26 103.6 (2) C59---C60---H60 119.7
C32---P5---C38 100.7 (2) C60---C61---C62 120.1 (4)
C26---P5---C38 105.3 (2) C60---C61---H61 120.0
C32---P5---Ag4 112.51 (15) C62---C61---H61 120.0
C26---P5---Ag4 109.23 (15) C57---C62---C61 119.8 (4)
C38---P5---Ag4 123.52 (15) C57---C62---H62 120.1