doc_23537900
|
So my question is, is that the best way? Or are there better ways to delete a symbolic link when the original folder gets deleted?
A: I have not used inotify, but if it can invoke *nix's find command, you can use it to delete the link:
find /folderpath -type l -delete
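If inotify-tools is available, one hedged sketch (the watcher loop and paths are illustrative, not from the question) is to trigger a sweep on every delete event. With GNU find, `-xtype l` narrows the sweep to broken links only, so healthy symlinks survive:

```shell
# Sketch: react to deletions, then remove only dangling symlinks.
# Assumes GNU find (-xtype, -delete) and inotify-tools (inotifywait).
watch_and_clean() {
    watched_dir=$1   # directory whose deletions we react to
    link_dir=$2      # directory holding the symlinks to sweep
    inotifywait -m -r -e delete,moved_from "$watched_dir" |
    while read -r _path _events _name; do
        # -xtype l matches only symlinks whose target no longer exists
        find "$link_dir" -xtype l -delete
    done
}
```

Note the difference from plain `-type l`, which matches every symlink, broken or not.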
| |
doc_23537901
|
public void reorder(int fromIndex, int toIndex) {
getElements().add(toIndex, getElements().remove(fromIndex));
}
Here, the method getElements has the return type List<?>. The remove method has the return type ?, and the add method shows its arguments as int index, ? element. So my assumption was, since the return type of remove method and second argument of add method are the same - ? - the method call must succeed. But, I was wrong, the above code segment results in the error:
The method add(int, capture#17-of ?)
in the type List<capture#17-of ?>
is not applicable for the arguments (int, capture#18-of ?)
Here, I don't have any direct access to the list, and I don't know its original type as returned by the getElements method. All I want is to remove the item at fromIndex and insert it at toIndex. So, how do I achieve that? Also, is there anything wrong with my understanding of generics?
A: No no no! Use capture:
public void reorder(int fromIndex, int toIndex) {
    reorderWithCapture(getElements(), fromIndex, toIndex);
}
private <E> void reorderWithCapture(List<E> elements, int fromIndex, int toIndex) {
    elements.add(toIndex, elements.remove(fromIndex));
}
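For reference, here is a self-contained sketch of the capture trick (the class name and the extra list parameter are mine, added so the example compiles on its own; in the original, reorder reads the list from getElements() instead):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReorderDemo {

    // Public entry point: callers only ever see a wildcard list.
    public static void reorder(List<?> elements, int fromIndex, int toIndex) {
        reorderWithCapture(elements, fromIndex, toIndex);
    }

    // The helper captures the unknown element type as E, so the value
    // returned by remove() is provably insertable into the same list.
    private static <E> void reorderWithCapture(List<E> elements, int fromIndex, int toIndex) {
        elements.add(toIndex, elements.remove(fromIndex));
    }

    public static void main(String[] args) {
        List<?> els = new ArrayList<>(Arrays.asList("a", "b", "c"));
        reorder(els, 0, 2);
        System.out.println(els); // [b, c, a]
    }
}
```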
A: Just add a cast that makes that ? concrete:
public void reorder(int fromIndex, int toIndex) {
final List<Object> els = (List<Object>)getElements();
els.add(toIndex, els.remove(fromIndex));
}
Since you are just rearranging elements within the list, this will never cause any trouble. But I must say there's something wrong with the design if you are seeing that kind of return value.
A: The wildcard ? means any (unknown) type, not one specific type. So the compiler can't verify that the any type used in the remove call is the same as the any type used in the getElements and add calls. This is seen from the error message, where the former is described as capture#18-of ? while the latter is capture#17-of ?. These are seen as two different, unrelated types by the compiler.
Since you apparently can't modify the definition of these methods (although they definitely look fishy based on your description), the least worst option here is probably what @Marko suggests: separate the two steps of the process and use a temp variable with a concrete type parameter (Object) to make the compiler happy.
| |
doc_23537902
|
Input:
not interesting
foo is 1 in 1,200 and test is 1 in 3.4 not interesting
something else is 1 in 2.5, things are 1 in 10
also not interesting
Wanted output:
foo is 1/1,200
and test is 1/3.4
something else is 1/2.5,
things are 1/10
What I have so far:
$ sed -nr ':a s|(.*) 1 in ([0-9.,]+)|\1 1/\2\n|;tx;by; :x h;ba; :y g;/^$/d; p' input
foo is 1/1,200
and test is 1/3.4
not interesting
something else is 1/2.5,
things are 1/10
something else is 1/2.5,
things are 1/10
This beautiful code repeatedly splits lines when it matches, and tries to only print it if it contained matches. The problem with my code seems to be that the hold space isn't cleared after a line is done.
The general problem is that sed can't do non-greedy matching and my separator can be anything.
I guess a solution in a different language would be okay, but now I'm kind of intrigued if this is possible in sed?
A: sed is for simple substitutions on individual lines, that is all. For anything more interesting just use awk:
$ cat tst.awk
{
while ( match($0,/\s*([^0-9]+)([0-9]+)[^0-9]+([0-9,.]+)/,a) ) {
print a[1] a[2] "/" a[3]
$0 = substr($0,RSTART+RLENGTH)
}
}
$ awk -f tst.awk file
foo is 1/1,200
and test is 1/3.4
something else is 1/2.5,
things are 1/10
The above uses GNU awk for the 3rd arg to match() and \s shorthand for [[:space:]].
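If GNU awk is not available, a portable variant (my sketch, using only POSIX match()/substr() and no third match() argument) does the same splitting by hand:

```shell
# POSIX-awk variant: find each "N in M" hit, rewrite it as "N/M",
# print it with its leading text, then continue after the hit.
printf '%s\n' \
  'not interesting' \
  'foo is 1 in 1,200 and test is 1 in 3.4 not interesting' \
  'something else is 1 in 2.5, things are 1 in 10' \
  'also not interesting' |
awk '{
  while (match($0, /[0-9]+ in [0-9][0-9,.]*/)) {
    pre = substr($0, 1, RSTART - 1)      # text before the hit
    m   = substr($0, RSTART, RLENGTH)    # the "N in M" hit itself
    sub(/ in /, "/", m)                  # turn it into "N/M"
    gsub(/^[ \t]+/, "", pre)             # trim the separator space
    print pre m
    $0 = substr($0, RSTART + RLENGTH)    # resume after the hit
  }
}'
```

Lines without a match print nothing, and trailing non-matching text is discarded, matching the wanted output.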
A: This might work for you (GNU sed):
sed -r 's/([0-9]) in ([0-9]\S*\s*)/\1\/\2\n/;/[0-9]\/[0-9]/P;D' file
This replaces a digit, followed by a space, in, a space, and a token beginning with a digit (plus any trailing space), with the first digit, a /, the second token, and a newline. If the first line of the pattern space then contains a digit followed by a / followed by a digit, it is printed; the line is then deleted, and if anything else remains in the pattern space the cycle repeats.
A: Yes, sed can do it, although it's not the best tool for the job. My attempt searches for every "N in M" pattern and adds a newline after each one. Then it removes the trailing text (which has no newline after it), removes leading spaces, and prints:
sed -nr '/([0-9]+) in ([0-9,.]+)/ { s//\1\/\2\n/g; s/\n[ ]*/\n/g; s/\n[^\n]*$//; p }' file
It yields:
foo is 1/1,200
and test is 1/3.4
something else is 1/2.5,
things are 1/10
| |
doc_23537903
|
The content div is 1000px wide, and paragraph_content automatically takes on that 1000px width, so I can never center it with margin: 0 auto in a way that centers the text in the paragraphs. I could use text-align: center, but then the lines don't line up under each other, since some lines are shorter and everything is centered.
I want it centered, with all text lines placed directly under each other instead of some starting further in.
I also want it centered so that if the text is adjusted it doesn't just expand on the right side; instead both the left and right sides expand so everything stays centered.
what I have:
#content{
width: 1000px;
float: left;
overflow: hidden;
}
#paragraph_content{
display: block;
padding: 0;
margin:0 auto;
overflow: hidden;
}
#paragraph_content p{
float: left;
font-family: Lucida Console;
font-size: 12px;
padding: 5px;
}
http://jsfiddle.net/cUx2k/
I have put borders in to show the extent of the content and child divs. The paragraphs take up all of the space on the right, for example, as does the child div, so I can never center it in the content div.
A: I fixed it by:
#paragraph_content{
display: table;
margin: 0 auto;
}
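For reference, a minimal sketch of the accepted fix in context (selectors taken from the question; the rest of the original rules are assumed unchanged):

```css
#content {
    width: 1000px;
    float: left;
    overflow: hidden;
}
/* display: table makes the wrapper shrink-wrap to its content,
   so margin: 0 auto can center it inside #content */
#paragraph_content {
    display: table;
    margin: 0 auto;
}
```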
A: Try setting display:inline-block; to #paragraph_content, along with width:100%;.
It wasn't very clear to me what the problem with text-align: center; is in your case. Adjusting the line-height of the element may help if you are worried about the space between lines (if I understood you correctly).
| |
doc_23537904
|
public boolean onTouchEvent(MotionEvent event)
{
xPos = event.getX();
yPos = event.getY();
oOffset = this.getThumbOffset();
oProgress = this.getProgress();
//Code from example - Not working
//this.setThumbOffset( progress * (this.getBottom()-this.getTop()) );
this.setProgress((int)(29*yPos/this.getBottom()));
return true;
}
I've managed to implement one VerticalSeekBar in which the progress updates as expected and is fully-functional, but the thumb does not follow suit. This is only a graphical glitch, so I'm overlooking it for now. But, it would be nice to have that working. This SeekBar has max = 20.
However, I tried implementing another VerticalSeekBar with max = 1000. Obviously, it uses the same code, so you'd assume the same behavior. I'm only able to achieve a progress of 0~35, even as my finger slides beyond the SeekBar and eventually off the screen. If I just tap near the end of the progress bar (which should be progress ~ 900) it returns a progress of about 35 and the yellow progress bar reflects that value by staying near the top.
My question is: Does anyone have a link to a working vertical SeekBar, or know how to adapt this particular example?
A: Thanks to Paul Tsupikoff, Fatal1ty2787 and Ramesh for this excellent code.
Personally, I wanted a vertical slider that is upside-down compared to the given code. In other words, the value increases, rather than decreases, the lower the thumb. Changing four lines seems to have taken care of this.
First, I changed the onDraw() method as originally given by Paul. The rotate() and translate() calls now have these arguments:
c.rotate(90);
c.translate(0, -getWidth());
Then I made two changes to the ACTION_MOVE case in onTouchEvent() as given by Fatal1ty2787. The call to setProgress() now looks like this:
setProgress((int) (getMax() * event.getY() / getHeight()));
Finally, the call to onProgressChanged() looks like this:
myListener.onProgressChanged(this, (int) (getMax() * event.getY() / getHeight()), true);
Now, if only Google shared our interest in this feature....
A: For API 11 and later, you can use the SeekBar's XML attribute android:rotation="270" for a vertical effect.
<SeekBar
android:id="@+id/seekBar1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:rotation="270"/>
For older API levels (e.g. API 10), use:
https://github.com/AndroSelva/Vertical-SeekBar-Android
A: The code given in the accepted answer didn't intercept the onStartTrackingTouch and onStopTrackingTouch events, so I've modified it to have more control over these two events.
Here is my code:
public class VerticalSeekBar extends SeekBar {
private OnSeekBarChangeListener myListener;
public VerticalSeekBar(Context context) {
super(context);
}
public VerticalSeekBar(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public VerticalSeekBar(Context context, AttributeSet attrs) {
super(context, attrs);
}
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
}
@Override
protected synchronized void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(heightMeasureSpec, widthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
@Override
public void setOnSeekBarChangeListener(OnSeekBarChangeListener mListener){
this.myListener = mListener;
}
protected void onDraw(Canvas c) {
c.rotate(-90);
c.translate(-getHeight(), 0);
super.onDraw(c);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
if(myListener!=null)
myListener.onStartTrackingTouch(this);
break;
case MotionEvent.ACTION_MOVE:
setProgress(getMax() - (int) (getMax() * event.getY() / getHeight()));
onSizeChanged(getWidth(), getHeight(), 0, 0);
if(myListener!=null)
    myListener.onProgressChanged(this, getMax() - (int) (getMax() * event.getY() / getHeight()), true);
break;
case MotionEvent.ACTION_UP:
if(myListener!=null)
    myListener.onStopTrackingTouch(this);
break;
case MotionEvent.ACTION_CANCEL:
break;
}
return true;
}
}
A: Another idea could be to change the X and Y coordinates of the MotionEvent and pass them to the super-implementation:
public class VerticalSeekBar extends SeekBar {
public VerticalSeekBar(Context context) {
super(context);
}
public VerticalSeekBar(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public VerticalSeekBar(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if (!isEnabled()) {
return false;
}
float x = (getHeight() - event.getY()) * getWidth() / getHeight();
float y = event.getX();
MotionEvent verticalEvent = MotionEvent
.obtain(event.getDownTime(), event.getEventTime(), event.getAction(), x, y,
event.getPressure(), event.getSize(), event.getMetaState(),
event.getYPrecision(), event.getXPrecision(), event.getDeviceId(),
event.getEdgeFlags());
return super.onTouchEvent(verticalEvent);
}
protected void onDraw(Canvas c) {
c.rotate(-90);
c.translate(-getHeight(), 0);
super.onDraw(c);
}
@Override
protected synchronized void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(heightMeasureSpec, widthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
}
}
In this case it is not necessary to call the setProgress(int) method, and therefore you can use the boolean flag "fromUser" in OnSeekBarChangeListener.onProgressChanged() to determine whether the seeking was produced by a user interaction.
A: Based on Paul Tsupikoff's answer, here is the AppCompatVerticalSeekBar:
package com.my.apppackage;
import android.annotation.SuppressLint;
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.MotionEvent;
import androidx.appcompat.widget.AppCompatSeekBar;
public class AppCompatVerticalSeekBar extends AppCompatSeekBar {
public AppCompatVerticalSeekBar(Context context) {
super(context);
}
public AppCompatVerticalSeekBar(Context context, AttributeSet attrs) {
super(context, attrs);
}
public AppCompatVerticalSeekBar(Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
}
@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
}
@Override
protected synchronized void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(heightMeasureSpec, widthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
@Override
protected synchronized void onDraw(Canvas canvas) {
canvas.rotate(-90);
canvas.translate(-getHeight(), 0);
super.onDraw(canvas);
}
@SuppressLint("ClickableViewAccessibility")
@Override
public boolean onTouchEvent(MotionEvent event) {
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
case MotionEvent.ACTION_UP: {
setProgress(getMax() - (int) (getMax() * event.getY() / getHeight()));
onSizeChanged(getWidth(), getHeight(), 0, 0);
break;
}
case MotionEvent.ACTION_CANCEL: {
break;
}
}
return true;
}
}
Use it in your layout file:
<com.my.apppackage.AppCompatVerticalSeekBar
android:id="@+id/verticalSeekBar1"
android:layout_width="wrap_content"
android:layout_height="200dp" />
A: Here is a working VerticalSeekBar implementation:
package android.widget;
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.MotionEvent;
public class VerticalSeekBar extends SeekBar {
public VerticalSeekBar(Context context) {
super(context);
}
public VerticalSeekBar(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public VerticalSeekBar(Context context, AttributeSet attrs) {
super(context, attrs);
}
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
}
@Override
protected synchronized void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(heightMeasureSpec, widthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
protected void onDraw(Canvas c) {
c.rotate(-90);
c.translate(-getHeight(), 0);
super.onDraw(c);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
case MotionEvent.ACTION_UP:
setProgress(getMax() - (int) (getMax() * event.getY() / getHeight()));
onSizeChanged(getWidth(), getHeight(), 0, 0);
break;
case MotionEvent.ACTION_CANCEL:
break;
}
return true;
}
}
To implement it, create a new class in your project, choosing the right package:
There, paste the code and save it. Now use it in your XML layout:
<android.widget.VerticalSeekBar
android:id="@+id/seekBar1"
android:layout_width="wrap_content"
android:layout_height="200dp"
/>
A: I had problems while using this code with the setProgress method. To solve them I suggest overriding setProgress and adding an onSizeChanged call to it. Code added here:
private int x,y,z,w;
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
super.onSizeChanged(h, w, oldh, oldw);
this.x=w;
this.y=h;
this.z=oldw;
this.w=oldh;
}
@Override
public synchronized void setProgress(int progress) {
super.setProgress(progress);
onSizeChanged(x, y, z, w);
}
Selected/pressed hover states are handled by adding the following calls:
setPressed(true); setSelected(true);   // add this in ACTION_DOWN
setPressed(false); setSelected(false); // add this in ACTION_UP
And write the selector for the hover states in your XML:
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_pressed="true"
android:state_window_focused="true"
android:drawable="@drawable/thumb_h" />
<item android:state_selected="true"
android:state_window_focused="true"
android:drawable="@drawable/thumb_h" />
<item android:drawable="@drawable/thumb" />
</selector>
This is working for me...
A: Target platforms
from Android 2.3.x (Gingerbread)
to Android 7.x (Nougat)
Getting started
This library is published on jCenter. Just add these lines to build.gradle.
dependencies {
compile 'com.h6ah4i.android.widget.verticalseekbar:verticalseekbar:0.7.2'
}
Usage
Layout XML
<!-- This library requires pair of the VerticalSeekBar and VerticalSeekBarWrapper classes -->
<com.h6ah4i.android.widget.verticalseekbar.VerticalSeekBarWrapper
android:layout_width="wrap_content"
android:layout_height="150dp">
<com.h6ah4i.android.widget.verticalseekbar.VerticalSeekBar
android:id="@+id/mySeekBar"
android:layout_width="0dp"
android:layout_height="0dp"
android:max="100"
android:progress="0"
android:splitTrack="false"
app:seekBarRotation="CW90" /> <!-- Rotation: CW90 or CW270 -->
</com.h6ah4i.android.widget.verticalseekbar.VerticalSeekBarWrapper>
NOTE: android:splitTrack="false" is required for Android N+.
Java code
public class TestVerticalSeekbar extends AppCompatActivity {
private SeekBar volumeControl = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_test_vertical_seekbar);
volumeControl = (SeekBar) findViewById(R.id.mySeekBar);
volumeControl.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
int progressChanged = 0;
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
progressChanged = progress;
}
public void onStartTrackingTouch(SeekBar seekBar) {
// TODO Auto-generated method stub
}
public void onStopTrackingTouch(SeekBar seekBar) {
Toast.makeText(getApplicationContext(), "seek bar progress:" + progressChanged,
Toast.LENGTH_SHORT).show();
}
});
}
}
| |
doc_23537905
|
demo
const renderField = (props) => (
<div>
<label>{props.label}</label>
<div>
<input {...props.input} placeholder={props.label} type={props.type} id={props.id}/>
{props.meta.touched && ((props.meta.error && <span>
{props.meta.error}</span>) || (props.meta.warning && <span>
{props.meta.warning}</span>))}
</div>
</div>
)
const FieldLevelValidationForm = (props) => {
const { handleSubmit, pristine, reset, submitting } = props
return (
<form onSubmit={() => handleSubmit(this, props.id)}>
<Field name="username" type="text"
component={renderField} label="Username"
id="user"
validate={[ required, maxLength15 ]}
/>
<Field name="email" type="email"
component={renderField} label="Email"
id="userEmail"
validate={email}
warn={aol}
/>
<Field name="age" type="number"
component={renderField} label="Age"
id="userAge"
validate={[ required, number ]}
warn={tooOld}
/>
<Field name="favoriteColor" component="select" id="userColor">
<option value="ff0000">Red</option>
<option value="00ff00">Green</option>
<option value="0000ff">Blue</option>
</Field>
<div>
<button type="submit" disabled={submitting}>Submit</button>
<button type="button" disabled={pristine || submitting} onClick={reset}>Clear Values</button>
</div>
</form>
)
}
| |
doc_23537906
|
When I click on "hide images", all the images should be replaced with a static image, and later when I uncheck it, the original images must be shown again.
<div id="log_contents">
<span style="color:blue;"><b>Public chat</b> with <b>dragos123</b></span> <br><br>
<div class="chat-line">
<span class="dialogue_time"> 11:00:39 AM </span>
<span style="background-color:FFF;">debasish:</span>
<span style="background-color:FFF;"><img style="cursor:pointer; max-height:80px;" src="http://localhost/myshowcam/files/stickers/msc-1427684408.gif" title=":party1"></span>
</div>
<div class="chat-line">
<span class="dialogue_time"> 11:01:43 AM </span>
<span style="background-color:ffff88;">pkk:</span>
<span style="background-color:ffff88;">hiiiiiiiiiiiiii</span>
</div>
<div class="chat-line">
<span class="dialogue_time"> 11:02:03 AM </span>
<span style="background-color:ffff88;">pkk:</span>
<span style="background-color:ffff88;"><img style="cursor:pointer; max-height:80px;" src="http://localhost/myshowcam/files/stickers/msc-1427684892.gif" title=":1min"></span>
</div><div class="pagination" style=""></div>
</div>
Please give your valuable feedback.
Thank you.
A: You need to hold the information about the original value somewhere; otherwise it would not be possible to revert the change.
I would change the initial code to something like
$("#log_contents").find('img').each(function() {
$(this).data('img-org', $(this).attr('src'));
$(this).attr('src', 'img/hide-image.gif');
});
and to reverse it, you just need to do the opposite
$("#log_contents").find('img').each(function() {
$(this).attr('src', $(this).data('img-org'));
});
A: You could use some attribute to store the original image source. It's not valid HTML, but it should work.
Edit: @Kami's answer with data() was better, so I've changed mine:
function change(element){
if($(element).prop('checked')){
$("img").each(function() {
$(this).data("old-src", $(this).attr("src"));
});
$('img').attr("src", 'http://img005.lazygirls.info/people/tamanna_bhatia/tamanna_bhatia_tamanna_latest_images_7_jpg_jpeg_image_1024_1226_pixels_scaled_76__qWkMn2nO.sized.jpg');
} else {
$("img").each(function() {
$(this).attr("src", $(this).data("old-src"));
});
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<img src="http://www.online-image-editor.com/styles/2013/images/example_image.png" /><br />
<img src="http://www.britishlegion.org.uk/ImageGen.ashx?width=800&image=/media/2019101/id23055-normandy-66th_-schools-visit-poppy-choice_-pupils-from-london-city-academy.jpg" /><br />
<input type="checkbox" onclick="change(this)" /> Images off!
| |
doc_23537907
|
A: One solution would be to store the image id when the user registers and later, with cron, run a query to see if the current profile picture id is the same as the stored one; if it differs, the user changed the profile picture.
A second solution is to have access to the user feed and check there whether the profile picture was changed; cron would be used here as well.
Note: if you plan to use the user's picture in your app there is no need to check whether the picture changed; Facebook will always send you the current one.
A: You should check ETags. An ETag is a kind of hash which you get in the response header. Store that value and send it the next time you request the image. If the image has not changed you will get a 304 Not Modified response from the Facebook API.
If you do not want to download images from Facebook you should use the ?redirect=false parameter in the Facebook image request (e.g. https://graph.facebook.com/<username>/picture?redirect=false). This will return JSON data about the profile picture instead of the whole image.
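A minimal sketch of the ETag flow (the helper names are mine; only the ?redirect=false endpoint comes from the answer). The request carries the stored ETag in If-None-Match, so the server can answer 304 instead of resending the picture data:

```python
from urllib.request import Request

GRAPH_PICTURE_URL = "https://graph.facebook.com/{username}/picture?redirect=false"

def conditional_picture_request(username, stored_etag=None):
    """Build a request that lets the server reply 304 Not Modified."""
    req = Request(GRAPH_PICTURE_URL.format(username=username))
    if stored_etag:
        # send the previously stored ETag back to the server
        req.add_header("If-None-Match", stored_etag)
    return req

def picture_changed(status_code):
    """304 means the stored ETag still matches, i.e. no change."""
    return status_code != 304
```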
| |
doc_23537908
|
To be clear, we would like to set up rules whereby userA can access levels AA and AB, and userB can access levels BA and BB, for example.
I've set up security at the service level so that only certain users have access to WFS, and some have read-only access and some read/write access based on the user role.
Not surprisingly (given the documentation), if I set up layer security for a given feature type then that type can't be accessed by WFS at all; the feature does not show up in getCapabilities, the catalog, or elsewhere.
In case it matters our geoserver is deployed in tomcat which is accessed with a mod_jk mount on the back-end of apache web server.
Thanks in advance for any comments - Walter
A: You can either write and plug-in your own ResourceAccessManager implementation, or use GeoFence (which provides an implementation of said interface, and a GUI and logic to drive what you want):
https://github.com/geoserver/geofence
| |
doc_23537909
|
SELECT i.id, i.stable_id, i.version, i.title
FROM initiatives AS i
INNER JOIN (
SELECT stable_id, MAX(version) AS max_version FROM initiatives GROUP BY stable_id
) AS tbl1
ON i.stable_id = tbl1.stable_id AND i.version = tbl1.max_version
ORDER BY i.stable_id ASC
The goal is to query an external non TYPO3 table which contains different versions of each data set. Only the data set with the highest version number should be rendered. The database looks like this:
id, stable_id, version, [rest of the data row]
stable_id is the external id of the data set. id is the internal autoincrement id. And version is also incremented automatically.
Code example:
$queryBuilder = GeneralUtility::makeInstance(ConnectionPool::class)->getQueryBuilderForTable($this->table);
$result = $queryBuilder
->select(...$this->select)
->from($this->table)
->join(
'initiatives',
$queryBuilder
->select('stable_id, MAX(version) AS max_version' )
->from('initiatives')
->groupBy('stable_id'),
'tbl1',
$queryBuilder->and(
$queryBuilder->expr()->eq(
'initiatives.stable_id',
$queryBuilder->quoteIdentifier('tbl1.stable_id')
),
$queryBuilder->expr()->eq(
'initiatives.version',
$queryBuilder->quoteIdentifier('tbl1.max_version')
)
)
)
->orderBy('stable_id', 'DESC')
I cannot figure out the correct syntax for the ON ... AND statement. Any idea?
A: Extbase queries have JOIN capabilities but are otherwise very limited. You could use custom SQL (see ->statement() here), though.
A better API to build complex queries is the (Doctrine DBAL) QueryBuilder, including support for JOINs, database functions like MAX() and raw expressions (->addSelectLiteral()). Make sure to read until the ExpressionBuilder where it gets interesting.
So Extbase queries are useful in order to retrieve Extbase (model) objects. It can make implicit use of its knowledge of your data structure in order to save you some code but only supports rather simple queries.
The (Doctrine DBAL) QueryBuilder fulfills all other needs. If needed, you can convert the raw data to Extbase models, too. (for example $propertyMapper->convert($data, Job::class)).
I realize that we lack a clear distinction between the two because both were known at some time as "QueryBuilder", but they are totally different. That's why I like to add "Doctrine" when referring to the non-Extbase one.
An example with a JOIN ON multiple criteria.
$q = TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(TYPO3\CMS\Core\Database\ConnectionPool::class)
->getQueryBuilderForTable('fe_users');
$res = $q->select('*')
->from('tt_content', 'c')
->join(
'c',
'be_users',
'bu',
$q->expr()->andX(
$q->expr()->eq(
'c.cruser_id', $q->quoteIdentifier('bu.uid')
),
$q->expr()->comparison(
'2', '=', '2'
)
)
)
->setMaxResults(5)
->execute()
->fetchAllAssociative();
A: Short answer: it is not possible, because the table to be joined in is generated on the fly; the related expression gets back-ticked and thus causes an SQL error.
But: The SQL query can be changed to the following SQL query which does basically the same:
SELECT i1.id, stable_id, version, title
FROM initiatives i1
WHERE version = (
SELECT MAX(i2.version)
FROM initiatives i2
WHERE i1.stable_id = i2.stable_id
)
ORDER BY stable_id ASC
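The correlated-subquery form can be sanity-checked against a tiny dataset, e.g. with SQLite (the table contents here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE initiatives (
        id INTEGER PRIMARY KEY, stable_id INT, version INT, title TEXT);
    INSERT INTO initiatives (stable_id, version, title) VALUES
        (1, 1, 'draft'), (1, 2, 'final'), (2, 1, 'only');
""")
rows = con.execute("""
    SELECT i1.id, stable_id, version, title
    FROM initiatives i1
    WHERE version = (
        SELECT MAX(i2.version) FROM initiatives i2
        WHERE i1.stable_id = i2.stable_id)
    ORDER BY stable_id ASC
""").fetchall()
# only the highest version per stable_id survives
print(rows)  # [(2, 1, 2, 'final'), (3, 2, 1, 'only')]
```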
And this can be rebuilt with the DBAL QueryBuilder:
$queryBuilder = GeneralUtility::makeInstance(ConnectionPool::class)->getQueryBuilderForTable($this->table);
$result = $queryBuilder
->select(...$this->select)
->from($this->table)
->where(
    $queryBuilder->expr()->eq(
        'initiatives.version',
        '(SELECT MAX(i2.version) FROM initiatives i2 WHERE initiatives.stable_id = i2.stable_id)'
    )
)
->orderBy('stable_id', 'DESC')
->setMaxResults( 50 )
->execute();
| |
doc_23537910
|
<% if (patients) { %>
<% patients.each { %>
<tr>
<td>${ ui.format(it.status) }</td>
<td align="center">
<% def linkClaim="patientView.page?patientId=' + ${patientId}+ '&claimUuid=" + ${it.uuid} %>
<button onclick="location.href='${linkClaim}'" type="button">Details</button>
</td>
</tr>
<% } %>
Data from the model:
* patients
* patientId
Error:
groovy.lang.MissingMethodException: No signature of method: SimpleTemplateScript171.$()
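The MissingMethodException most likely comes from the mixed quotes and the `${...}` placeholders inside the scriptlet: within `<% %>` you are writing plain Groovy, so ordinary concatenation (or a GString) should be used instead of template-style `${}`. A possible corrected line, keeping the variable names from the question:

```groovy
<% def linkClaim = "patientView.page?patientId=" + patientId + "&claimUuid=" + it.uuid %>
```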
| |
doc_23537911
|
for reference http://fullcalendar.io
A: You have to extend FullCalendar's functionality for your purpose. Take a look at extending in jQuery here: extending
A: Please use the following method of FullCalendar:
eventAfterAllRender (callback)
For more information see this link.
I hope this helps you.
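As a sketch (assuming the FullCalendar v3 jQuery API; the element id and event data below are made up), the callback is supplied in the options object and fires once after all events have rendered:

```javascript
// eventAfterAllRender fires once per render, after every event is drawn
$('#calendar').fullCalendar({
  events: [{ title: 'demo', start: '2015-01-01' }],
  eventAfterAllRender: function (view) {
    // a safe place for post-render work (measuring, decorating, etc.)
    console.log('all events rendered for view: ' + view.name);
  }
});
```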
| |
doc_23537912
|
2) I start my spring boot server with config:
@Bean
public NewTopic MyTopic() {
return new NewTopic("my-topic", 5, (short) 1);
}
@Bean
public ProducerFactory<String, byte[]> greetingProducerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public KafkaTemplate<String, byte[]> unpMessageKafkaTemplate() {
return new KafkaTemplate<>(greetingProducerFactory());
}
Result: the server starts successfully and creates my-topic in Kafka.
But if I try to do the same with a remote Kafka on a remote server, the topic is not created,
and in the log Spring writes:
12:35:09.880 [ main] [INFO ] o.a.k.clients.admin.AdminClientConfig: [] AdminClientConfig values:
bootstrap.servers = [localhost:9092]
If I add this bean to config:
@Bean
public KafkaAdmin admin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "remote_host:9092");
return new KafkaAdmin(configs);
}
the topic is created successfully.
1) Why does this happen?
2) Do I have to create a KafkaAdmin? Why is it not required for the local Kafka?
EDIT
My current config:
spring:
kafka:
bootstrap-servers: remote:9092
producer:
key-serializer: org.apache.kafka.common.serialization.StringDeserializer
value-serializer: org.apache.kafka.common.serialization.ByteArraySerializer
and
@Configuration
public class KafkaTopicConfig {
@Value("${response.topics.topicName}")
private String topicName;
@Bean
public NewTopic responseTopic() {
return new NewTopic(topicName, 5, (short) 1);
}
}
After start I see:
bootstrap.servers = [remote:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
...
But the topic is not created
A: KafkaAdmin is the Spring Kafka object that looks for NewTopic beans in your Spring context and creates them. If you do not have a KafkaAdmin, no creation will take place. You can explicitly create a KafkaAdmin (as you show in your code snippet) or indirectly order its creation via the Spring Kafka configuration properties.
KafkaAdmin is a nice-to-have; it is not related to producing or consuming to/from topics in your application code.
EDIT
You must have something wrong; I just tested it...
spring:
kafka:
bootstrap-servers: remote:9092
and
2019-03-21 09:18:18.354 INFO 58301 --- [ main] o.a.k.clients.admin.AdminClientConfig
: AdminClientConfig values:
bootstrap.servers = [remote:9092]
...
A: Spring Boot will automatically configure a KafkaAdmin for you, but it uses the application.yml (or application.properties). See Boot properties. Scroll down to spring.kafka.bootstrap-servers=. That's why it works with localhost (it's the default).
You also don't need a ProducerFactory or template; boot will create them for you from properties.
| |
doc_23537913
|
Select class
From Ships
GROUP BY class
Having COUNT(class) < 3;
However it's a bit more complicated because of the tables I'm working with. The two tables are Classes and Ships. The classes table lists out what class a certain ship belongs to and the ships table lists out the name of the ship as well as the class. Neither table has any foreign keys which means we might get a certain class in the classes table that's not in the ships table. Here's what the tables look like:
Create Table Classes (
class Varchar(40),
type Char (2),
country Varchar(15)
);
Create Table Ships (
name Varchar(40),
class Varchar(40)
);
And we might get tables that look like this:
Classes:
('Bismarck','bb','Germany');
('Kongo','bc','Japan');
('Renown','bc','Gt. Britain');
Ships:
('Hiei','Kongo');
('Haruna','Kongo');
('Renown','Renown');
('Repulse','Renown');
('Kongo','Kongo');
('Kirishima','Kongo');
So Renown would get listed, since it appears in the Ships table only twice; but I also want the Bismarck class to get listed, since it appears only once in the Classes table and is therefore listed fewer than 3 times. What complicates things is that each class will only ever appear once in the Classes table; however, if a class appears 3 or more times in the Ships table, then it doesn't matter how many times it's listed in the Classes table. I think I need to do an outer join to make this work, but I'm not sure what that would look like.
So the results I expect are
Renown
Bismarck
A: If I understood your goal correctly, all you need to do is replace your FROM Ships with FROM Ships s [join type] Classes c on s.class = c.class. I'm not really 100% sure which join to use, because I wasn't sure what kind of result set you were looking for.
--Selects only matches, so "Bismarck" wouldn't be shown
Select class
From Ships s inner join Classes c on s.class = c.class
GROUP BY class
Having COUNT(s.class) < 3;
If your requirements are not covered by the left/right/inner join types, you can always go for a full outer join and specify conditions in the WHERE clause.
Select class
From Ships s full outer join Classes c on s.class = c.class
WHERE s.class IS NOT NULL AND [other conditions]
GROUP BY class
Having COUNT(s.class) < 3;
A: You could use a left join and count(*)
select c.class, count(*)
From Class as c
left join Ships as s on c.class = s.class
GROUP BY c.class
Having COUNT(*) < 3;
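To check the left-join approach against the sample data from the question, here is a small self-contained sketch using an in-memory SQLite database (counting s.name rather than * so that a class with no ships counts as 0 rather than 1 — either way both stay below 3):

```python
import sqlite3

# Reproduce the Classes/Ships example and run the LEFT JOIN query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Classes (class VARCHAR(40), type CHAR(2), country VARCHAR(15));
CREATE TABLE Ships (name VARCHAR(40), class VARCHAR(40));
INSERT INTO Classes VALUES
  ('Bismarck','bb','Germany'),
  ('Kongo','bc','Japan'),
  ('Renown','bc','Gt. Britain');
INSERT INTO Ships VALUES
  ('Hiei','Kongo'), ('Haruna','Kongo'), ('Renown','Renown'),
  ('Repulse','Renown'), ('Kongo','Kongo'), ('Kirishima','Kongo');
""")

rows = conn.execute("""
  SELECT c.class, COUNT(s.name)
  FROM Classes AS c
  LEFT JOIN Ships AS s ON c.class = s.class
  GROUP BY c.class
  HAVING COUNT(s.name) < 3
""").fetchall()

print(sorted(rows))  # [('Bismarck', 0), ('Renown', 2)]
```

Kongo (4 ships) is correctly excluded, while Bismarck appears even though it has no rows in Ships at all.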
| |
doc_23537914
|
Is there a good way to
*
*log what domains my container(s) are connecting to
*block domains that are not on an allowlist (but still log them)
tcpdump looks like it can log what IP addresses my containers are trying to connect to, but for hostnames it relies on ambiguous reverse DNS lookups; can we log the DNS lookups instead? tcpdump also doesn't let me block non-allowlisted domains.
| |
doc_23537915
|
The first task of the app is to make a request and store the response in the database so I've setup a model;
class ApiData(models.Model):
event = models.CharField(
_("Event"),
max_length=100,
)
key = models.CharField(
_("Data identifier"),
max_length=255,
help_text=_("Something to identify the json stored.")
)
json = JSONField(
load_kwargs={'object_pairs_hook': collections.OrderedDict},
blank=True,
null=True,
)
created = models.DateTimeField()
Ideally I would like it so that objects are created in the admin and the save method populates the ApiData.json field after creating an API request based on the other options in the object.
Because these fields would have choices based on data returned from the API I wanted to lazy load the choices but at the moment I'm just getting a standard Charfield() in my form.
Is this the correct approach for lazy loading model field choices? Or should I just create a custom ModelForm and load the choices there? (That's probably the more typical approach I guess)
def get_event_choices():
events = get_events()
choices = []
for event in events['events']:
choices.append((event['name'], event['title']),)
return choices
class ApiData(models.Model):
# Fields as seen above
def __init__(self, *args, **kwargs):
super(ApiData, self).__init__(*args, **kwargs)
self._meta.get_field_by_name('event')[0]._choices = lazy(
get_event_choices, list
)()
A: So I went for a typical approach to get this working by simply defining a form for the model admin to use;
# forms.py
from django import forms
from ..models import get_event_choices, ApiData
from ..utils.api import JsonApi
EVENT_CHOICES = get_event_choices()
class ApiDataForm(forms.ModelForm):
"""
Form for collecting the field choices.
The Event field is populated based on the events returned from the API.
"""
event = forms.ChoiceField(choices=EVENT_CHOICES)
class Meta:
model = ApiData
# admin.py
from django.contrib import admin
from .forms.apidata import ApiDataForm
from .models import ApiData
class ApiDataAdmin(admin.ModelAdmin):
form = ApiDataForm
admin.site.register(ApiData, ApiDataAdmin)
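One caveat worth noting (my observation, not part of the original answer): `EVENT_CHOICES = get_event_choices()` calls the API once at import time, so a failed or stale API response sticks for the life of the process. Reasonably recent Django versions let `forms.ChoiceField` take a callable for `choices`, which defers the call until the choices are actually evaluated — a sketch, assuming `get_event_choices` is importable as above:

```python
from django import forms

from ..models import get_event_choices, ApiData


class ApiDataForm(forms.ModelForm):
    # Passing the callable itself (not its result) defers the API call
    # until the field's choices are evaluated, e.g. at render time.
    event = forms.ChoiceField(choices=get_event_choices)

    class Meta:
        model = ApiData
        fields = '__all__'
```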
| |
doc_23537916
|
option java_package = "proto.data";
message Data {
repeated string strs = 1;
repeated int32 ints = 2;
}
I receive this object's input stream (or bytes) from the network. Then, normally, I do the parsing like Data.parseFrom(stream) or Data.parseFrom(bytes) to get the object.
This way, I have to hold the full Data object in memory, while I only need to traverse
all the string and integer values in the object. That is bad when the object is big.
What should I do for this issue?
A: Unfortunately, there is no way to parse just part of a protobuf. If you want to be sure that you've seen all of the strs or all of the ints, you have to parse the entire message, since the values could appear in any order or even interleaved.
If you only care about memory usage and not CPU time then you could, in theory, use a hand-written parser to parse the message and ignore fields that you don't care about. You still have to do the work of parsing, you can just discard them immediately rather than keeping them in memory. However, to do this you'd need to study the Protobuf wire format and write your own parser. You can use Protobuf's CodedInputStream class but a lot of work still needs to be done manually. The Protobuf library really isn't designed for this.
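To make the hand-written-parser idea concrete, here is a minimal sketch of walking the protobuf wire format — in Python for brevity, since the wire format itself is language-neutral; in Java the same loop would sit on top of CodedInputStream. The sample bytes are hand-encoded by me for illustration (one length-delimited string field 1 and one unpacked varint field 2):

```python
def read_varint(buf, pos):
    """Read one base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def scan_fields(buf):
    """Yield (field_number, value) pairs without building a message object."""
    pos = 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        field, wire = tag >> 3, tag & 7
        if wire == 0:                       # varint
            value, pos = read_varint(buf, pos)
            yield field, value
        elif wire == 2:                     # length-delimited (string/bytes)
            length, pos = read_varint(buf, pos)
            yield field, buf[pos:pos + length]
            pos += length
        else:
            raise ValueError("wire type %d not handled in this sketch" % wire)

# Hand-encoded: field 1 = "hi" (tag 0x0A), field 2 = 150 (tag 0x10, varint 0x96 0x01)
data = b'\x0a\x02hi\x10\x96\x01'
fields = list(scan_fields(data))
print(fields)  # [(1, b'hi'), (2, 150)]
```

Fields are yielded as they are decoded, so each value can be inspected and discarded immediately instead of materializing a full Data object.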
If you are willing to consider using a different protocol framework, Cap'n Proto is extremely similar in design to Protobufs but features the ability to read only the part of the message you care about. Cap'n Proto incurs no overhead for the fields you don't examine, other than, obviously, the bandwidth and memory to receive the raw message bytes. If you are reading from a file and you use memory mapping (MappedByteBuffer in Java), then only the parts of the message you actually use will be read from disk.
(Disclosure: I am the author of most of Google Protobufs v2 (the version you are probably using) as well as Cap'n Proto.)
A: Hmm. It appears that it may already be implemented, but not adequately documented.
Have you tested it?
See for discussion:
https://groups.google.com/forum/#!topic/protobuf/7vTGDHe0ZyM
See also, sample test code in google's github:
https://github.com/google/protobuf/blob/4644f99d1af4250dec95339be6a13e149787ab33/java/src/test/java/com/google/protobuf/lazy_fields_lite.proto
| |
doc_23537917
|
And I ran into a problem: in code with complex indentation levels, when I want to get to the appropriate indent position, I have to press Tab multiple times.
e.g.
if condition_a:
if not condition_b:
if random.choice(xrange(100)) > 35:
if user.property != 'master':
|
# Above | is where I want to fast indent to with tab
# Lots of else block ommited here.
I know with > I can indent static code.
But how could I indent to that | position quickly with Tab when typing code (insert mode)?
A: Just press:
S
or:
cc
to enter insert mode at the right position.
See :help S and :help cc.
A: Besides S and cc, suggested by romainl, you can also use o to create a new line and switch to insert mode.
Also in insert mode, you can press Ctrl-F to "auto-indent" the current line.
In normal mode, you can press == to format current line.
A: If what you want is only the functionality of normal mode's <, > but in insert mode, then Ctrl-T is one tab right and Ctrl-D is one tab left.
| |
doc_23537918
|
The background is that the binary entities transferred over the wire can be quite large. Overall performance can benefit from a cache on microservice A's side which employs HTTP caching headers and ETags provided by microservice B.
I found a solution that seems to work, but I'm not sure whether it is a proper solution that works with concurrent requests, which can occur on microservice A at any time.
@Inject
/* package private */ ManagedExecutor executor;
//
// Instead of using a declarative rest client we create it ourselves, because we can then supply a server-side cache: See ctor()
//
private ServiceBApi serviceClientB;
@ConfigProperty(name="serviceB.url")
/* package private */ String serviceBUrl;
@ConfigProperty(name="cache-entries")
/* package private */ int cacheEntries;
@ConfigProperty(name="cache-entrysize")
/* package private */ int cacheEntrySize;
@PostConstruct
public void ctor()
{
// Create proxy ourselves, because we can then supply a server-side cache
final CacheConfig cc = CacheConfig.custom()
.setMaxCacheEntries(cacheEntries)
.setMaxObjectSize(cacheEntrySize)
.build();
final CloseableHttpClient httpClient = CachingHttpClientBuilder.create()
.setCacheConfig(cc)
.build();
final ResteasyClient client = new ResteasyClientBuilderImpl()
.httpEngine(new ApacheHttpClient43Engine(httpClient))
.executorService(executor)
.build();
final ResteasyWebTarget target = (ResteasyWebTarget) client.target(serviceBUrl);
this.serviceClientB = target.proxy(ServiceBApi.class);
}
@Override
public byte[] getDoc(final String id)
{
try (final Response response = serviceClientB.getDoc(id)) {
[...]
// Use normally and no need to handle conditional gets and caching headers and other HTTP protocol stuff here, because this does underlying impl.
[...]
}
}
My questions are:
*
*Is my solution ok as server-side solution, i.e. can it handle concurrent requests?
*Is there a declarative (Quarkus) way (@RegisterRestClient etc.) to achieve the same?
--
Edit
To make things clear: I want service B to be able to control the caching based on the HTTP get request and the specific resource. Additionally I want to avoid the unnecessary transmission of the large documents service B provides.
--
Mik
A: Assuming that you have worked with the declarative way of using Quarkus' REST Client before, you would just inject the client into your serviceB-consuming class. The method that will invoke Service B should be annotated with @CacheResult. This will cache results depending on the incoming id. See also the Quarkus Cache Guide.
Please note: As Quarkus and Vert.x are all about non-blocking operations, you should use the async support of the REST Client.
@Inject
@RestClient
ServiceBApi serviceB;
...
@Override
@CacheResult(cacheName = "service-b-cache")
public Uni<byte[]> getDoc(final String id) {
return serviceB.getDoc(id).map(...);
}
...
| |
doc_23537919
|
My code:
import urllib.request as urllib2, re, time
# importing parser
from bs4 import BeautifulSoup
f = open('weather.txt', 'w')
# Start and end year of simulation
for y in range(2009, 2013):
# Type the months that you want to extract
# For example for January and February use range(1,3)
for m in range(1, 13):
#checking for leap years
for d in range(1,32):
if y%400 == 0:
leap = True
elif y%100 == 0:
leap = False
elif y%4 == 0:
leap = True
else:
leap = False
if (m == 2 and leap and d > 29):
continue
elif (m == 2 and not leap and d > 28):
continue
elif (m in [4, 6, 9, 11] and d > 30):
continue
url = "http://www.wunderground.com/history/airport/LTBA/" + str(y) + "/" + str(m) + "/" + str(d) + "/DailyHistory.html"
page = urllib2.urlopen(url)
#opening the website with Beautiful Soup
soup = BeautifulSoup(page, "html.parser")
# finding section with observation details
paragList = soup.findAll(id="observations_details")
counter = 0
counter_max = 0 # maximum number of columns
# adding a zero to one digit numbers
string = ''
if len(str(m)) < 2:
mStamp = '0' + str(m)
else:
mStamp = str(m)
if len(str(d)) < 2:
dStamp = '0' + str(d)
else:
dStamp = str(d)
# time stamp is four digit year, two digit month, and two digit day
timestamp = str(y) + mStamp + dStamp
print(timestamp)
for i in paragList:
# writing in text file the header with the name of each column
headList = i.findAll('th')
f.write('DATE,')
for k in headList:
h_element = k.text
s = str(h_element)
f.write(s)
f.write(',')
counter_max = counter_max + 1
f.write(' \n')
# writing in text file each row with data
tableList = i.findAll('tbody')
for l in tableList:
bodyList = l.findAll('td')
for j in bodyList:
if counter == 0:
f.write(timestamp + ',')
if j.string:
element = j.text
# print (element)
s = str(element)
f.write(s)
f.write(',')
else:
elementList = j.findAll(j.b) + j.findAll('b')
for k in elementList:
if k.string:
element = k.text
# print(element)
s = str(element)
f.write(s)
f.write(',')
counter = counter + 1
if counter == counter_max:
# print
# print("************************")
# print("**** NEXT RECORD *******")
# print("**** *******************")
f.write('\n')
counter = 0
# print ("\n")
# print("**** NEXT YEAR *******")
f.close()
Example output:
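As an aside on the date handling above (my suggestion, not part of the original code): the nested leap-year and month-length checks can be collapsed into a single standard-library call, which also avoids getting the list of 30-day months wrong:

```python
import calendar

def days_in_month(y, m):
    # monthrange returns (weekday_of_first_day, number_of_days_in_month)
    return calendar.monthrange(y, m)[1]

print(days_in_month(2012, 2))  # 29 (2012 is a leap year)
print(days_in_month(2100, 2))  # 28 (divisible by 100 but not 400)
print(days_in_month(2010, 4))  # 30
```

The inner loop then becomes `for d in range(1, days_in_month(y, m) + 1):` with no leap-year bookkeeping at all.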
| |
doc_23537920
|
The column "Consignment ID" will have a big list of numbers and "Consignment number" will have a smaller list, all of which should match with a corresponding number in "Consignment ID". Therefore, I would like to establish a One-to-One relationship between these two columns and then use this relationship to extract the matching numbers from the "Consignment ID" column. However, Power BI is telling me that the "cardinality isn't valid for that relationship". I'm not sure why that is and I have no idea how to solve this problem.
Does anyone have any thoughts on how to do that?
Thank you for your support!
A: To answer the question and to make it a visible solution:
If you really want a 1:1 relationship between Table1 and Table2, you can't have duplicates in either of the related columns; otherwise it is an N:1 or 1:N relationship.
| |
doc_23537921
|
I am not sure if this is the proper way to use the CallZone class to check if the parameter zone is valid.
if(zone.equals("canada")){return true;}
or
if(CallZone.isValidZone(zone) == true){return true;}
here is the CallZone class:
public final class CallZone {
public static boolean isValidZone(String zone) {
zone = zone.toLowerCase();
return (zone.equals("canada") ||
zone.equals("usa") ||
zone.equals("europe") ||
zone.equals("asia") ||
zone.equals("anz") ||
zone.equals("latinam") ||
zone.equals("africa")
);
}
}
and here is one of the Phonecard classes, SuperNA10 class, using it to check if the zone is valid:
public class SuperNA10 extends PhoneCard{
final double canMinRate = 0.05;
final double usMinRate = 0.10;
final double weeklyMainFee = 0.50;
public SuperNA10(long no, int passwd){
super(no, passwd, 10.00); // invokes the superclass constructor; sets no, passwd and balance to 10.00
}
public boolean allowed(String zone){
if(CallZone.isValidZone(zone) ){
return true;
}else{
return false;
}
}
}
I am not sure how to get the CallZone class to check if the parameter in the SuperNA10 class method allowed is valid, also sorry if my question isn't clear or cause confusion, it's my first time posting.
A: Since you have a specific number of call zones, you can use an enum like this:
import java.util.stream.Stream;

public enum CallZone {
CANADA, USA, EUROPE; //add any number of zones here
public static boolean isValid(String zone) {
return Stream.of(values())
.anyMatch(s -> s.toString().equalsIgnoreCase(zone));
}
}
You can also look at using CallZone.valueOf(zone) along with catching IllegalArgumentException, instead of using the Stream as above. But I prefer not to use exceptions for non exceptional scenarios.
And then in your SuperNA10 class, you can use it like:
public boolean allowed(String zone) {
return CallZone.isValid(zone);
}
| |
doc_23537922
|
Obviously I could use Dir.glob, but it is very slow when there are millions of files because it is too eager — it returns all files matching the pattern, while I only need to know whether there is any.
Is there any way I could check that?
A: Ruby-only
You could use Find, find and find :D.
I couldn't find any other File/Dir method that returns an Enumerator.
require 'find'
Find.find("/var/data/").find{|f| f=~/\.xml$/i }
#=> first xml file found inside "/var/data". nil otherwise
# or
Find.find("/var/data/").find{|f| File.extname(f).downcase == ".xml" }
If you really just want a boolean :
require 'find'
Find.find("/var/data/").any?{|f| f=~/\.xml$/i }
Note that if "/var/data/" exists but there is no .xml file inside it, this method will be at least as slow as Dir.glob.
As far as I can tell :
Dir.glob("/var/data/**/*.xml"){|f| break f}
creates a complete array first before returning its first element.
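To see the short-circuiting in action, here is a self-contained sketch (my example, using a temporary directory rather than /var/data):

```ruby
require 'find'
require 'tmpdir'
require 'fileutils'

# Find.find with no block returns an Enumerator that yields paths
# lazily, so any? can stop at the first .xml file instead of building
# the whole file list the way Dir.glob does.
found = Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, "sub"))
  File.write(File.join(root, "sub", "a.xml"), "")
  File.write(File.join(root, "b.txt"), "")
  Find.find(root).any? { |f| File.extname(f).downcase == ".xml" }
end

puts found  # true
```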
Bash-only
For a bash-only solution, you could use :
*
*compgen
*Shell find
| |
doc_23537923
|
Settings settings = Settings.settingsBuilder()
.put("cluster.name", configuration.getString("clusterName"))
.put("client.transport.sniff", false)
.put("client.transport.ping_timeout", "5s")
.build();
TransportClient client = TransportClient.builder().settings(settings).build();
for (String hostname : (Collection<String>)configuration.get("hostnames")){
try {
client = client.addTransportAddresses(
new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
);
break;
} catch (UnknownHostException e) {
e.printStackTrace();
}
}
We currently have three different hosts in the hostnames list. But any time a single host from this list goes down, this Elasticsearch transport client stops responding. I have gone through the transport client documentation on the Elasticsearch site and have also looked through their GitHub issues; according to those, whenever a node goes down Elasticsearch should simply remove it from its list of nodes and continue working with the other nodes, but in our case things just break down. Does anyone have any idea what the problem might be?
We are using elasticsearch 2.4.3 right now.
A: It looks like you are breaking the loop after a single node has been added. Try removing the break statement:
for (String hostname : (Collection<String>)configuration.get("hostnames")){
try {
client = client.addTransportAddresses(
new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
);
} catch (UnknownHostException e) {
e.printStackTrace();
}
}
| |
doc_23537924
|
I want to know about following:
*
*What is the main difference between these two optimizer's
*For what type of queries we should enable Pivotal optimizer for better
performance.
A: Anuraag.
Setting optimizer to "on" enables a set of modifications to the original Postgres optimizer to better handle things like queries on very large partitioned tables, subqueries, and CTE SQL (WITH statements). There are other ongoing modifications to make the optimizer code more modular and more efficient on all types of SQL queries, but that is where the focus was originally. I am not on the optimizer team (Pivotal Data Field engineer here) so there are probably others who can give you more in depth answers on this topic than I can.
As far as which queries benefit most, the best answer would be: "it depends" :). Generally, very large partitioned table queries will be handled more efficiently and faster with optimizer = on. Same with CTE queries and queries with sub-selects in them. I have also seen some more standard star schema-type queries run faster with optimizer = on.
In either case, the optimizer depends on very good statistics in the database, so you need to make sure ANALYZE is run after large loads or deletes/truncates.
Your best bet is to run and time your queries with optimizer on and off (it can be set at the session level). The size of your dataset and your database schema structure may show generally faster times with optimizer either on or off, so I would go with whichever setting works best for your particular situation. I work with a lot of Greenplum customers. Some have optimizer set to default on, some set to off. Find the default setting that works best for the bulk of your queries, and use the opposite setting in cases where a query is running "slowly" and see if you get better results.
I hope this answers your question.
Jim
A: For partitioned table, make sure you run analyze root partition since PQO uses stats on the root partition and not the leaf partitions like Planner.
| |
doc_23537925
|
In one combobox I used the Me.Refresh command and it updates the data as I enter it. In another combobox I did the same, but I got no result. Where am I making a mistake?
Also, could unregistered software cause such problems, so that it behaves differently at different times?
A: There is a slight conceptual difference between the change of a (bound) control's value on a form and the update of the underlying field's value. The underlying field's value might not be updated before the 'update' event is fired.
And, of course, if the control you are dealing with is unbound, there cannot be any field update ...
Edit:
If you want to change an unbound control value programmatically:
myForm.controls(myControl).value = "whatever"
If you want to change a bound control and its underlying field, working on the field side
myForm.recordset.fields(myField).value = "whatever"
myForm.recordset.update
You might then need to refresh your control on the screen so it displays the updated value
And on the control side
myForm.controls(myControl).value = "whatever"
You might then need to fire the update event programmatically (recordset.update) on your underlying recordset
A: When you change a value of a textbox/combo box/etc on a form the record in the table is not immediately updated. The default way Access handles it is to wait until the record no longer has focus and then it updates the record in the table with any changes you made.
If you want to, you can force an update to the record in the table via the After Update event by using the following:
Private Sub txtMyFieldName_AfterUpdate()
Me.Dirty = False
End Sub
However, I would only do this when editing an existing record. If you are entering a new record then you don't want to trigger Me.Dirty = False after every control has been updated. If you do trigger Me.Dirty = False on new record entry and you have required fields that haven't been filled in yet, you will get an error stating that a required field cannot contain a null value.
| |
doc_23537926
|
The HTML, CSS and screenshots of the occurrence are below.
Firefox and Chrome:
Safari:
Code:
a {
text-decoration: none;
color: inherit;
}
#title {
color: black;
margin: 0 auto;
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
}
#gallery-link {
padding: 15px 0;
text-align: center;
}
#gallery-link-padding {
padding: 15px 60px;
}
.frame-effect-click {
position: relative;
font-weight: 500;
font-size: 16px;
cursor: pointer;
border-radius: 10px;
transition: 0.4s;
}
.frame-effect-click:after, .frame-effect-click:before {
content: '';
position: absolute;
width: 0;
height: 0;
transition: all 0.4s linear, opacity 0.1s 0.4s;
opacity: 0;
}
.frame-effect-click:after {
bottom: 0;
right: 0;
border-bottom: 2px solid black;
border-right: 2px solid black;
}
.frame-effect-click:before {
top: 0;
left: 0;
border-top: 2px solid black;
border-left: 2px solid black;
}
.frame-effect-click:hover:after, .frame-effect-click:hover:before {
width: calc(100% + 10px);
height: calc(100% + 10px);
transition: 0.4s, opacity 0.1s;
opacity: 1;
}
<div id="title">
<div id="gallery-link">
<a id="gallery-link-padding" class="frame-effect-click" href="#">View gallery</a>
</div>
</div>
Does anyone know why that is and how to fix this issue?
| |
doc_23537927
|
A: EDIT:
The OP was not looking to use cross-domain requests, but jQuery supports JSONP as of v1.5. See jQuery.ajax(), specificically the crossDomain parameter.
The regular jQuery Ajax requests will not work cross-site, so if you want to query a remote RESTful web service, you'll probably have to make a proxy on your server and query that with a jQuery get request. See this site for an example.
If it's a SOAP web service, you may want to try the jqSOAPClient plugin.
A: I blogged about how to consume a WCF service using jQuery:
http://yoavniran.wordpress.com/2009/08/02/creating-a-webservice-proxy-with-jquery/
The post shows how to create a service proxy straight up in javascript.
A: Incase people have a problem like myself following Marwan Aouida's answer ... the code has a small typo. Instead of "success" it says "sucess" change the spelling and the code works fine.
A: You can make an AJAX request like any other requests:
$.ajax( {
type:'Get',
url:'http://mysite.com/mywebservice',
success:function(data) {
alert(data);
}
})
A: In Java, this return value fails with jQuery Ajax GET:
return Response.status(200).entity(pojoObj).build();
But this works:
ResponseBuilder rb = Response.status(200).entity(pojoObj);
return rb.header("Access-Control-Allow-Origin", "*").build();
----
Full class:
@Path("/password")
public class PasswordStorage {
@GET
@Produces({ MediaType.APPLICATION_JSON })
public Response getRole() {
Contact pojoObj= new Contact();
pojoObj.setRole("manager");
ResponseBuilder rb = Response.status(200).entity(pojoObj);
return rb.header("Access-Control-Allow-Origin", "*").build();
//Fails jQuery: return Response.status(200).entity(pojoObj).build();
}
}
| |
doc_23537928
|
I have tried to look into details of every worker node using "kubectl describe nodes" but none of the options indicate any relationship between the worker node and master node.
I expect something like a list of Worker Node A, Worker Node B and Worker Node C returns when I input the common master node.
A: In a scenario with more than one master, Kubernetes performs leader election, so the other masters stay inactive.
You can use endpoints to check who the master is.
| |
doc_23537929
|
$(document).ready(function() {
$('.ui-page').live('pageshow', function(e, ui) {
// do something
});
});
But after updating this no longer works. But this does:
$(document).ready(function() {
});
$('.ui-page').live('pageshow', function(e, ui) {
// do something
});
If I take out the code and put it outside dom ready it works. Is there a way to make it work inside the dom ready?
A: You should replace $(document).ready() with $(document).bind('pageinit') on jquery mobile.
Important: Use $(document).bind('pageinit'), not $(document).ready()
The first thing you learn in jQuery is to call code inside the $(document).ready() function so everything will execute as soon as the DOM is loaded. However, in jQuery Mobile, Ajax is used to load the contents of each page into the DOM as you navigate, and the DOM ready handler only executes for the first page. To execute code whenever a new page is loaded and created, you can bind to the pageinit event. This event is explained in detail at the bottom of this page.
For more details, refer JQuery Mobile
| |
doc_23537930
|
I'm now struggling to rewrite the code to refer to named ranges instead of absolute references (I think that's the right terminology!?).
The File_ref range occupies cells A13:A104
The Already_Input? range occupies cells B13:B104
I'm using Excel 2013 on Windows
The code that works
Sub test()
Set mybook = Excel.ActiveWorkbook
Set entrysheet = mybook.Sheets("Entry")
Dim RangeStart As Integer
RangeStart = Range("File_ref").Cells(1).Row
Dim RangeLength As Integer
RangeLength = Range("File_Ref").Count
Dim i As Long
Dim j As Long
Dim m As Long
j = 0
m = 0
For i = RangeStart To RangeLength + RangeStart
If IsEmpty(entrysheet.Range("A" & i)) Then 'it's this bit I cannot get to refer to named range
j = j + 1
ElseIf entrysheet.Range("B" & i) = "yes" Then
m = m + 1
End If
Next i
End Sub
The code i have tried, but which doesn't work:
Sub test()
Set mybook = Excel.ActiveWorkbook
Set entrysheet = mybook.Sheets("Entry")
Dim RangeStart As Integer
RangeStart = Range("File_ref").Cells(1).Row
Dim RangeLength As Integer
RangeLength = Range("File_Ref").Count
Dim i As Long
Dim j As Long
Dim m As Long
j = 0
m = 0
For i = RangeStart To RangeLength + RangeStart
If IsEmpty(entrysheet.Range("File_ref").Cells(i)) Then
j = j + 1
ElseIf entrysheet.Range("Already_Input?").Cells(i) = "yes" Then
m = m + 1
End If
Next i
End Sub
A: Can you try the below code
Sub test()
Set mybook = Excel.ActiveWorkbook
Set entrysheet = mybook.Sheets("Entry")
File_ref = "A1:A10" ''Added new
Already_Input = "B1:B10" ''Added new
Dim RangeStart As Integer
RangeStart = Range(File_ref).Cells(1).Row
Dim RangeLength As Integer
RangeLength = Range(File_ref).Count
Dim i As Long
Dim j As Long
Dim m As Long
j = 0
m = 0
For i = RangeStart To RangeLength + RangeStart
If IsEmpty(entrysheet.Range(File_ref).Cells(i)) Then
j = j + 1
ElseIf entrysheet.Range(Already_Input).Cells(i) = "yes" Then
m = m + 1
End If
Next i
End Sub
A: I figured out what the problem was, I had misunderstood the range that I was asking the code to evaluate and it was missing all of the sample entries that I had entered to test it.
| |
doc_23537931
|
So far no problems have occurred, but what I'm missing in the Lombok implementation is that there are no generated methods for adding a single object to a collection.
Generated Code:
private List<Object> list = new ArrayList<>();
public Object getObject(){..}
public void setObject(List<Object> o){..}
What I want extra:
public void addObject(Object o) {..}
Anyone know if this is getting there soon or if this is impossible?
A: This is surely currently impossible. There's such a proposal, but low priority (or even rejected).
Actually, I can't find it anymore. You may want to try yourself on the issue list.
Now, I stumbled upon this thread showing an interesting workaround limited to a single variable.
Bad news
This is unlikely to get implemented in the near future. There are far too many feature requests to implement and maintain them all (or any non-trivial fraction of them). See this issue comment.
A: 1) I couldn't find a ticket for it, and, based on the comment on the other answer, I filed one: https://github.com/rzwitserloot/lombok/issues/1905 So let's see :)
2) For a single collection, it seems that @Delegate could do the job:
interface CollectionAdders<E> {
boolean add(E e);
boolean addAll(Collection<? extends E> c);
}
interface ListGetters<E> {
E get(int index);
}
class Foo {
@Delegate(types={CollectionAdders.class, ListGetters.class})
List<String> names = new ArrayList<>();
}
Generates:
Foo#add(E e)
Foo#addAll(Collection<? extends E> c)
Foo#get(int index)
See this forum post: https://groups.google.com/forum/#!topic/project-lombok/alektPraJ_Q
| |
doc_23537932
|
I don't have much experience using VBA, so I'm a little confused. I hope you've understood my problem, and thanks in advance.
Private Sub compare_cells(ByVal Target As Range)
If Target Is Nothing Then Next
If Cells(Target.Row, 1).Value = 'another row ' Cells(Target.Row, 4).Value Then
Next
Else
'inset empty row above of the sheet with the missing value
Next
End If
End Sub
The code above is really ugly, that's why I need help. The data seems like this:
sheet 1:
sheet 2:
A: If my understanding is correct in your question above... You want to produce an empty row, above ANY values that DO NOT match between two worksheets.
Based on this, and the above code, you are not too FAR off the right track.
Try the below code...
Private Sub compare_cells(ByVal Target1 As Range, ByVal Target2 As Range)
If Target1 Is Nothing Then Exit Sub
If Target2 Is Nothing Then Exit Sub
Dim ws1 As Worksheet, ws2 As Worksheet ' "Dim ws1, ws2 As Worksheet" would leave ws1 as Variant
Set ws1 = Sheets(Target1.Parent.Name)
Set ws2 = Sheets(Target2.Parent.Name)
If Target1.Value <> Target2.Value Then
' If they don't match place your code here
ws1.Range(Target1.Row & ":" & Target1.Row).Insert Shift:=xlDown
ws2.Range(Target2.Row & ":" & Target2.Row).Insert Shift:=xlDown
End If
End Sub
You can call this by using this within another macro...
Call compare_cells(Sheets("Sheet1").Range("A1"), Sheets("Sheet2").Range("D1"))
If you use the above macro, then this will compare Range "A1" on Sheet1, with Range "D1" on Sheet 2. If these two cells do not match, then it will insert a row above both A1, and D1.
| |
doc_23537933
|
Here is the code:
board = [['.', '.', '.', '.', '.', '.', '.'],
['.', '.', '.', '.', '.', '.', '.'],
['.', '.', '.', '.', '.', '.', '.'],
['.', '.', '.', '.', '.', '.', '.'],
['.', '.', '.', '.', '.', '.', '.'],
['.', '.', '.', '.', '.', '.', '.']
]
def printboard():
    """Prints the board"""
    print('\n'.join(['\t'.join([str(cell) for cell in row]) for row in board]))

def placement(tokentype):
    """Function to place tokens and stack"""
    for i in range(5, -1, -1):
        if board[i][column - 1] != '.':
            pass
        elif board[i][column - 1] == '.':
            board[i][column - 1] = tokentype
            break

tokenTypes = ['x', 'o']
while True:
    column = int(input("Player 1, what column do you want to place your token in? "))
    placement(tokenTypes[0])
    printboard()
    column = int(input("Player 2, what column do you want to place your token in?"))
    placement(tokenTypes[1])
    printboard()
A: Here's the code for checking for horizontal winners. The check for vertical winners is almost the same. Diagonal is just a bit trickier. Remember diagonals go both ways.
def check_horizontal_winner(board):
    for row in range(6):
        for col in range(4):   # starting columns 0..3 on a 7-wide board
            if board[row][col] != '.' and \
               board[row][col] == board[row][col+1] == \
               board[row][col+2] == board[row][col+3]:
                print( "WINNER", board[row][col], "starting at", row, col )
                return board[row][col]
    return None
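As the answer notes, the vertical check is almost the same as the horizontal one. Here is one possible sketch (the function name is mine; a vertical run of four can only start in rows 0..2 of the 6-row board):

```python
def check_vertical_winner(board):
    """Return the winning token for a vertical four-in-a-row, or None."""
    for col in range(7):            # every column of the 7-wide board
        for row in range(3):        # a run of 4 must start in rows 0..2
            if board[row][col] != '.' and \
               board[row][col] == board[row+1][col] == \
               board[row+2][col] == board[row+3][col]:
                return board[row][col]
    return None
```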
Your placement code is a little silly. You don't need the 'if' part at all. And, of course, you need to pass in the column:
def placement(tokentype, column):
    """Function to place tokens and stack"""
    for i in range(5, -1, -1):
        if board[i][column - 1] == '.':
            board[i][column - 1] = tokentype
            break
EDIT to add diagonal checks
Consider that a "/" diagonal (lower left towards upper right) can only begin in rows 0..2, columns 0..3 of the 6x7 board. The only tricky thing here is that a downward "\" diagonal starts in the other half of the board (rows 3..5). That could easily be done with two sets of loops, but I just defined a separate variable (trow) that counts from the opposite end.
You could probably combine all three of these functions into one. It might be very slightly faster, but I think it would be more confusing to read.
def check_diagonal_winner( board ):
    for row in range(3):
        trow = len(board) - 1 - row
        for col in range(4):   # starting columns 0..3
            # Check for a / diagonal.
            if board[row][col] != '.' and \
               board[row][col] == board[row+1][col+1] == \
               board[row+2][col+2] == board[row+3][col+3]:
                print( "WINNER", board[row][col], "starting at", row, col )
                break
            # Check for a \ diagonal. We check from the other end via trow.
            if board[trow][col] != '.' and \
               board[trow][col] == board[trow-1][col+1] == \
               board[trow-2][col+2] == board[trow-3][col+3]:
                print( "WINNER", board[trow][col], "starting at", trow, col )
                break
| |
doc_23537934
|
Question is - Given an array of integers, find the longest subarray where the absolute difference between any two elements is less than or equal to 1.
Example a = [1,1,2,2,4,4,5,5,5]
There are two subarrays meeting the criterion: [1,1,2,2] and [4,4,5,5,5].
The maximum length subarray has 5 elements.
Returns
int: the length of the longest subarray that meets the criterion
or visit the link Hackerrank Problem
import itertools

def pickingNumbers(a):
    a.sort()
    answer = 0
    flag = False
    for i in range(len(a)-1, 1, -1):
        count = 0
        temp = [list(l) for l in list(itertools.combinations(a, i))]
        for j in temp:
            for k in range(len(j)-1):
                if abs( j[k+1] - j[k] ) <= 1:
                    count += 1
            if count == len(j):
                answer = len(j)
                break
    return answer
A: Note that the statement asks you to find a subarray, not a subsequence. A subarray is a contiguous chain of elements from the parent array. By sorting the array you destroy the parent array's order whenever the given array is not already sorted, hence your program will give wrong output.
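Under the contiguous-subarray reading this answer describes, the task can be solved without sorting, for example with a sliding window that keeps max - min of the window at most 1 (the bound used in the code above). This is only a sketch of that idea; the function name is mine:

```python
from collections import deque

def longest_close_subarray(a):
    """Length of the longest contiguous run where max - min <= 1."""
    maxd, mind = deque(), deque()   # indices of decreasing / increasing values
    left = best = 0
    for right, x in enumerate(a):
        while maxd and a[maxd[-1]] <= x:
            maxd.pop()
        maxd.append(right)
        while mind and a[mind[-1]] >= x:
            mind.pop()
        mind.append(right)
        # Shrink the window until its spread is at most 1 again.
        while a[maxd[0]] - a[mind[0]] > 1:
            if maxd[0] == left:
                maxd.popleft()
            if mind[0] == left:
                mind.popleft()
            left += 1
        best = max(best, right - left + 1)
    return best
```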
| |
doc_23537935
|
// MARK: New Timer Bottom Sheet
@ViewBuilder
func NewTimerView() -> some View {
    VStack(spacing: 15) {
        Text("Add New Timer")
            .font(.title2.bold())
            .foregroundColor(.white)
            .padding(.top, 10)

        HStack(spacing: 15) {
            Text("\(pomodoroModel.hour) hr")
                .font(.title3)
                .fontWeight(.semibold)
                .foregroundColor(.white.opacity(0.3))
                .padding(.horizontal, 20)
                .padding(.vertical, 12)
                .background {   // note: the original snippet was missing the leading dot on this modifier
                    Capsule()
                        .fill(.white.opacity(0.07))
                }
                .contextMenu {
                    ContextMenuOptions(maxValue: 12, hint: "hr") { value in
                        pomodoroModel.hour = value
                    }
                }
        }
    }
}
Model
import SwiftUI
class PomodoroModel: NSObject,ObservableObject {
// MARK: Timer Properties
@Published var addNewTimer: Bool = false
@Published var hour: Int = 0
@Published var minutes: Int = 0
@Published var seconds: Int = 0
}
| |
doc_23537936
|
$(function() {
    $('#prod_btn').click(function() {
        $(this).addClass('selected').next('ul').css('display', 'block');
        setTimeout(hideMenu, 5000);
    });
});

function hideMenu() {
    $('#prod_btn').removeClass('selected').next('ul').css('display', 'none');
}
Where is the problem?
Thanks
A: I've just had the same problem. My code is running great in any browser on my Mac, but on iOs devices it doesn't work.
I use ".bind(this)" on my timeout function and that is what is causing the problem for me.
When I extend the function object with ".bind" in my script it works like a charm.
My code is something like this:
searchTimeout = setTimeout(function() {
...
}.bind(this),250);
For this to work on iOs devices I (like mentioned above) just added this:
Function.prototype.bind = function(parent) {
    var f = this;
    var args = [];
    for (var a = 1; a < arguments.length; a++) {
        args[args.length] = arguments[a];
    }
    var temp = function() {
        return f.apply(parent, args);
    };
    return temp;
};
I don't see any .bind on your setTimeout, but for others with the same problem this may be the issue. That's why I'm posting :-)
A: I moved your example to a jsbin, and it's working on my iphone 4.
Please test it out going here from your devices: http://jsbin.com/asihac/5
You can see the code here http://jsbin.com/asihac/5/edit
The example is using jQuery - latest and I only added the required css class.
A: this doesn't apply to your code, but a common problem with long-running scripts failing on iOS devices is that MobileSafari kills a javascript thread after 10 seconds have elapsed. you should be able to use setTimeout and/or setInterval to work around this, or you can avoid it by making a shortcut to it and thereby running it as an app. see https://discussions.apple.com/thread/2298038, particularly the comments by Dane Harrigan.
A: Keep in mind also that any setTimeout callback is likely to fire while DOM elements are still rendering if the delay is set too short. While that might seem obvious, it can easily be confused with no method firing whatsoever. A good way to test is to have an alert prompt run.
window.onload = function () { alert("hey!"); };
Then check to see if your function fires after.
| |
doc_23537937
|
#include <stdio.h>

int main() {
    int myChr[4][8];
    printf("%x\n", myChr);
    printf("%x\n", &myChr);
    printf("%x\n", *myChr);
    return 0;
}
After executing the above program, I get the same address as output. Do they hold different values, or do all of them have the same value? How can I prove that? (Maybe I need to assign values to the array, I don't know.)
A:
* myChr is the address on the stack of your array.
* &myChr, in this case, is probably going to give you the same value, as it is the address of the pointer on the stack.
* *myChr is the address of the 1st entry of myChr[4][8], which in this case is still the original address.
* **myChr would give you the value of myChr[0][0] - in this case garbage, as you have not actually assigned anything to your array.
A: myChr is an int [4][8].
In the printf expressions:
* myChr is of type int (*)[8] (after application of the C array-to-pointer conversion rule from int [4][8])
* &myChr is of type int (*)[4][8]
* *myChr is of type int * (after application of the C array-to-pointer conversion rule from int [8])
All the expressions have different types but they point to the same memory address, that is:
(void *) myChr == (void *) &myChr == (void *) *myChr
Note that the valid way to print a pointer value is to use the p conversion specifier and cast the pointer to void * if it is of a different type.
A: Arrays are simply named extents of memory. So the address of an array is the address of the extent it occupies. At the same time it is the address of the first element of the array because it occupies the initial part of the extent.
Thus you have:
this statement
printf("%x\n",myChr);
displays the address of the first element of the array because array name is implicitly converted to pointer to its first element;
this statement
printf("%x\n",&myChr);
displays the address of the array that is the same address as above because it is the address of the allocated extent;
this statement
printf("%x\n",*myChr);
displays the same address. Why? As I have already said, the name of the array is implicitly converted to a pointer to its first element. The element of the array is in turn a one-dimensional array. So in this expression the one-dimensional array *myChr (the first element of the original two-dimensional array) is in turn converted to a pointer to its first element.
So in all three cases you display this address :)
&myChr[0][0]
| |
doc_23537938
|
So I ran
cabal install threepenny-gui
... without any problems
So I tried the following example:
module Main where

import qualified Graphics.UI.Threepenny as UI
import Graphics.UI.Threepenny.Core

main :: IO ()
main = do
    startGUI defaultConfig setup

setup :: Window -> IO ()
setup window = do
    return window # set UI.title "Hello World!"
    button <- UI.button # set UI.text "Click me!"
    getBody window #+ [element button]
    on UI.click button $ const $ do
        element button # set UI.text "I have been clicked!"
but I get errors about the types:
threePennyHelloWorld.hs:8:28:
Couldn't match type `IO ()' with `UI ()'
Expected type: Window -> UI ()
Actual type: Window -> IO ()
In the second argument of `startGUI', namely `setup'
In a stmt of a 'do' block: startGUI defaultConfig setup
threePennyHelloWorld.hs:12:25:
Couldn't match type `UI Window' with `IO a0'
Expected type: UI Window -> IO a0
Actual type: UI Window -> UI Window
In the second argument of `(#)', namely `set title "Hello World!"'
In a stmt of a 'do' block: return window # set title "Hello World!"
threePennyHelloWorld.hs:14:31:
Couldn't match type `UI Element' with `IO Element'
Expected type: UI Element -> IO Element
Actual type: UI Element -> UI Element
In the second argument of `(#)', namely `set text "Click me!"'
In a stmt of a 'do' block:
button <- UI.button # set text "Click me!"
threePennyHelloWorld.hs:15:9:
Couldn't match type `UI' with `IO'
Expected type: IO Element
Actual type: UI Element
In a stmt of a 'do' block: getBody window #+ [element button]
In the expression:
do { return window # set title "Hello World!";
button <- UI.button # set text "Click me!";
getBody window #+ [element button];
on UI.click button
$ const $ do { element button # set text "I have been clicked!" } }
In an equation for `setup':
setup window
= do { return window # set title "Hello World!";
button <- UI.button # set text "Click me!";
getBody window #+ [element button];
.... }
threePennyHelloWorld.hs:17:9:
Couldn't match type `UI' with `IO'
Expected type: IO ()
Actual type: UI ()
In a stmt of a 'do' block:
on UI.click button
$ const $ do { element button # set text "I have been clicked!" }
In the expression:
do { return window # set title "Hello World!";
button <- UI.button # set text "Click me!";
getBody window #+ [element button];
on UI.click button
$ const $ do { element button # set text "I have been clicked!" } }
In an equation for `setup':
setup window
= do { return window # set title "Hello World!";
button <- UI.button # set text "Click me!";
getBody window #+ [element button];
.... }
Even when I try to run an example file, I get the same errors.
Does anyone have an idea what I'm doing wrong?
A: setup is in the UI monad, not IO, so change the type declaration:
setup :: Window -> UI ()
As for example in https://github.com/HeinrichApfelmus/threepenny-gui/blob/master/samples/BarTab.hs
| |
doc_23537939
|
I have compared the form properties in both projects, like Auto Scale Mode, Auto Size Mode, Min Size, Max Size, etc. There is no difference. I am not sure which property is causing this issue.
Note: Both projects were developed by myself. It happened once already in another development, so it's time to take action on this issue. Thanks.
Application 1 is copied from Application 2
Update: 1
Just find the app.manifest file in your project and add the snippet from the link in @jimi's comment. The form size is back to normal.
Here is the code which I added to my app.manifest:
<asmv1:application>
  <asmv1:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
    <dpiAware>false</dpiAware>
  </asmv1:windowsSettings>
</asmv1:application>
| |
doc_23537940
|
I have a front-facing and a back-facing camera. I'd like to draw the camera stream of any of these sources on a Silverlight rectangle. For Windows Phone 7 I can do that with a VideoBrush in a Rectangle.
How does that work on Windows 8?
And I'm not talking about taking pictures with the CameraCaptureUI class.
A: I'm sorry, I just found a similar question/answer here:
the answer for my problem is this:
<CaptureElement x:Name="captureElement"/>
async private void StartCamPrev()
{
var mediaCapture = new Windows.Media.Capture.MediaCapture();
await mediaCapture.InitializeAsync();
this.captureElement.Source = mediaCapture;
await mediaCapture.StartPreviewAsync();
}
And the answer about the front-face/back-face camera I found here:
DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var mediaCapture = new Windows.Media.Capture.MediaCapture();
await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
{
// the index of the devices-array is defining the device
// here I just put a '0' for testing-purposes:
VideoDeviceId = devices[0].Id
});
| |
doc_23537941
|
Clojure on emacs fails... & clojure isn't in your exec...
I'm unable to get cider to run on MacOS. I've been just using lein on the command line, but I would prefer to use cider.
I build a new project, like so:
lein new ec
open up core.clj
run M-x cider-jack-in
and I get:
The lein executable isn’t on your ‘exec-path’
I checked for the default in cider, and it is lein.
Is there any way to run cider on MacOS?
A: There are some macOS-related problems with exec-path detection in Emacs.
There is an Emacs package that should solve this issue: exec-path-from-shell.
It used to help me when I was using macOS (about 3 years ago, but I think it is still valid).
Otherwise you can try to put an absolute path for lein (which lein in a terminal) into cider-lein-command: M-x customize-group [ret] cider [ret] -> Cider lein command. That should also work.
| |
doc_23537942
|
In my current PHP-project I have 2 separate (non interrelated) modules, lets say a contact-module and a review-module, they are on the same page.
On the client side both modules download JSON-data and post JSON-data to the PHP-webservice running on port 80.
Question 1
Is it correct that for both (non interrelated) modules I need to create an Angular 2 app? So creating 2 Angular apps?
Question 2
Do you have to run per module an instance of "npm start", so that on save it will keep transpiling my project-files? This will lead to having many "cmd prompts" with "npm start" running in the background right? Now its only 2 cmd prompts, but what if my site contains 10 modules doing ASYNC-activity?
Question 3
Is it possible to share the JS-library files with various apps and within various sites?
So that it does not download all the 16.000 JS-files every time you try to do some simple ASYNC-download of JSON-data and add it to the DOM?
If yes, how?
So:
C:\angular\ -> JS-Library (containing 16.000 files.... :|)
C:\sites\site1\ -> uses C:\angular\
C:\sites\site1\contact-module\ -> uses C:\angular\
C:\sites\site1\review-module\ -> uses C:\angular\
etc.
C:\sites\site2\ -> uses C:\angular\
etc.
C:\sites\site3\ -> uses C:\angular\
etc.
How to share all those 16.000 files in the Angular JS-libraries within multiple modules & projects?
Per Angular 2 app it downloads via "npm install" more than 16.000 files....., even for the most simple application you can develop!
By having 2 simple modules, it will download 32.000 files & loading 100's of files in my webbrowser for just running 2 simple modules? (sorry but really thinking WTF!! What a waste and what an overkill, right?)
Ok, maybe you can JS-pack all those JS-files later on and combine it into 1 file. But still is this not overkill?
So, can I not for example share 1 Angular library with multiple Apps? Then it "only" requires 16.000 files...
Let's say you create 5 websites. For every website you have to add 16.000 JS-files, only because you need to do some simple ASYNC-request and add the JSON-data to your DOM. For this activity you already need to download 80.000 files (5 * 16.000)!
If it is also necessary to have several modules in a website requiring ASYNC-activity, like here the contact-module and the review-module, then it can easily become downloading millions of the same JS-files and using an awful lot of resources. And for every module to work, having to use "npm install" and "npm start" every time just to see the results... Having 10 "cmd prompt" screens running in the background to make this basic activity possible.
And all of this for just downloading some plain JSON-data from a PHP-webservice and putting it nicely into the DOM?
Question 4
My main question, is the above the right / recommended approach? Not to start a discussion on how should you exactly use it, but is this a common way of working for PHP-website development and using Angular 2 for the frond-end? Or is there a better way of working?
Note, I am aware that PHP is in this scenario irrelevant, of course it can also be C# or JAVA as a back-end. But its to make the example more concrete.
Question 5
Then using the recommended TypeScript you will have to use "npm start" to transpile your files to JS and then it will run on localhost:3000.
I'm developing my sites in PHP and using Xampp on localhost port 80. Any experience of people that use Angular 2 for their front-end for their PHP-projects? How do you configure it so that it will refresh your localhost:80 page when you change something in your TS-file? And what if you have multiple (non interrelated) modules for which you use another Angular App? It cannot run twice on the same port right?
Any help is highly appreciated, thanks in advance.
A: I think the problem with your understanding is that you're mixing everything into one question: back-end (PHP), front-end (Angular), development setup (npm) etc. I'll try to give short answers to each of your questions.
Question 1
No, I wouldn't do create two angular applications. If you'll have a separate page for each module, you can use router to enable navigation without page reload.
Question 2
This depends on how you set up your development process. Usually, with modern building systems like webpack, you can configure module dependencies and how often each must be built. This can all be managed through one command-line instance.
Question 3
npm downloads 16000 files because it downloads sources along with built files. You'll use one built version in production.
Question 4
I usually separate front-end and back-end development into different projects and treat them as completely separate applications communicating through REST interface.
Question 5
For usage of typescript, you'll have to use preprocessors which will be run by your building system. Try to google setting up angular 2 with webpack and a lot will be clearer to you.
| |
doc_23537943
|
//class having a private Event.
public class Sample
{
private delegate void MyDelegate(string ip4);
private event MyDelegate MyEvent;
}
internal class Program
{
private static void Main(string[] args)
{
//try getting the non-public event
EventInfo[] events = typeof (Sample).GetEvents(BindingFlags.NonPublic);
Console.WriteLine(events.Length); //it's 0
var evt = typeof (Sample).GetEvent("MyEvent", BindingFlags.NonPublic); //evt is null
}
}
A: You also need to specify BindingFlags.Instance:
var evt = typeof (Sample)
    .GetEvent("MyEvent", BindingFlags.NonPublic | BindingFlags.Instance); // evt is no longer null
A: var evt = typeof(Sample).GetEvent("MyEvent", BindingFlags.NonPublic | BindingFlags.Instance);
| |
doc_23537944
|
The normal user behavior is to go to the bottom and pull more results.
I plan to scale the bitmaps down, but there will be many of them.
Therefore, I think it might be safe to delete images that are many pages above the current page.
Are the old images deleted by the GridView at some point?
The normal usage is to use the view cache, which looks like the GridView is holding on to them.
| |
doc_23537945
|
One of the problems is that the call to get the API data is long (>5 seconds) and we don't want the customer waiting.
Our thinking was to
*
*Call an API at some point in the build process to collect the data
*Save the data in the store so other components can access it.
*Not call the API to get the data again.
How might I do this? I really appreciate any help, I'm pretty new to Vue and especially Nuxt, but really enjoy both.
What I've tried
My understanding is that a component's fetch() hook runs both on the server before the initial page is rendered and on the client some time after the component is mounted.
From the documentation on Nuxt
fetch is a hook called during server-side rendering after the
component instance is created, and on the client when navigating. The
fetch hook should return a promise (whether explicitly, or implicitly
using async/await) that will be resolved:
On the server before the initial page is rendered On the client some
time after the component is mounted
I've also tried this approach (and am currently using it) from Stackoverflow
export const actions = {
async nuxtServerInit ({ state }, { req }) {
let response = await axios.get("some/path/...");
state.data = response.data;
}
}
But there is a caveat in the answer:
nuxtServerInit will fire always on first page load no matter on what page you are. So you should navigate first to store/index.js.
Anyway, I could use a hand figuring out how to do this.
| |
doc_23537946
|
I made it so it would display the directory in a textbox.
But what I want is have another button which would take that directory and start it by using ProcessStartInfo.
OpenFileDialog, showing it in TextBox:
public void button4_Click(object sender, EventArgs e)
{
OpenFileDialog ofd = new OpenFileDialog();
ofd.Title = "Open Arma 3";
ofd.Filter = "EXE file|*.exe";
if (ofd.ShowDialog() == System.Windows.Forms.DialogResult.OK)
{
textBox1.Text = ofd.FileName;
}
}
Process:
private void button3_Click(object sender, EventArgs e)
{
ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = //RESULT OPENFILEDIALOG SHOULD BE HERE
startInfo.Arguments = @"-window -useBE -mod=e:\Aaron\Addons\@CBA_A3";
Process.Start(startInfo);
}
A: Since you already save the result of the OpenFileDialog in textBox1, you can easily access it in the button3_Click event handler.
To fill the startInfo.FileName:
* I added an extra IsNullOrWhiteSpace check, so the application doesn't start another process if textBox1.Text is empty.
if (!string.IsNullOrWhiteSpace(textBox1.Text))
{
    ProcessStartInfo startInfo = new ProcessStartInfo();
    startInfo.FileName = textBox1.Text;
    startInfo.Arguments = @"-window -useBE -mod=e:\Aaron\Addons\@CBA_A3";
    Process.Start(startInfo);
}
| |
doc_23537947
|
I have a SQL function
function [dbo].[fnKudishikaAmt]
(@ParishName nvarchar(100), @Hno int, @dateto datetime = Null)
Returns Decimal(15,2)
This function returns the proper result when executed directly:
Select dbo.fnKudishikaAmt('St.George Malankara Catholic Church', 29, default)
My requirement is this function should be called from C#
I am getting the error
Conversion failed when converting datetime from character string
Code:
public double kudishikatotal(string ParishName, Int32 HouseNo)
{
String SQLText = "select ChurchDB.dbo.fnKudishikaAmt(@ParishName, @Hno, @dateto) as fnresult";
SqlCommand cmd = new SqlCommand(SQLText);
cmd.CommandType = CommandType.Text;
cmd.Parameters.AddWithValue("@ParishName", ParishName);
cmd.Parameters.AddWithValue("@Hno", HouseNo);
cmd.Parameters.AddWithValue("@dateto", "default");
string rval = GetSingleValue(cmd);
double kudiamt = 0;
if (rval != null)
{
kudiamt = Convert.ToDouble(rval);
}
return kudiamt;
}
private static string GetSingleValue(SqlCommand cmd)
{
string ConString = connectionstring();
string returnvalue = "";
using (SqlConnection con = new SqlConnection(ConString))
{
cmd.Connection = con;
con.Open();
returnvalue = cmd.ExecuteScalar().ToString();
con.Close();
}
return returnvalue;
}
A: If you've declared a default value for the parameter in the stored procedure - then you can just not pass this parameter from the C# code at all, and in this case it will have the default value.
In your case the exception is thrown because it's impossible to convert the string "default" to SqlDateTime, which is your parameter's type.
A: You can use an if condition while sending the datetime parameter.
if(some condition)
{
cmd.Parameters.AddWithValue("@dateto", dateTimeValue);
}
Here dateTimeValue is the value you want to pass, so you will pass it only if required.
The error is due to the string "default" you passed.
| |
doc_23537948
|
Invalid Code Signing Entitlements. Your application bundle's
signature contains code signing entitlements that are not supported by iOS.
Specifically, key
`'com.apple.developer.icloud-container-identifiers' in Payload ------- not supported`
While searching I also found answers suggesting to disable iCloud, but I want to use the iCloud feature in my app, so is there any other way to overcome this problem? Please let me know.
Thanks in advance
A: It seems like your provisioning profile is not configured to allow iCloud entitlements. To do this, log into your dev account at http://developer.apple.com, go to the iOS Dev center, and Click the link on the right for "Certificates, Identifiers, & Profiles" under the iOS Developer Program on the right. Find your app id in the Identifiers section, and click the edit button. From there, make sure iCloud is enabled for both development and distribution.
Also, make sure your app id prefix is not using wildcards. You will not be able to use a wildcard prefix (com.example.*) when using any of the special entitlements, like iCloud, Push Notifications, etc. Once you are sure that is set up with iCloud enabled, you will need to regenerate your provisioning profile.
Click the Provisioning Profiles on the left, and find you app store provisioning profile. Click the Edit button on the profile, select the app ID that now has iCloud enabled, and click the "Generate" button to generate a new provisioning profile. Then download the provisioning profile and install it over top of the old profile. Then re-build and sign the app and try re-submitting.
A: Another solution relevant to people re-signing their app:
If you have iCloud features enabled the provisioning profile will contain keys like com.apple.developer.icloud-container-identifiers. If you don't filter these keys out before you pass them to codesign they will end up inside the binary, which causes this error.
| |
doc_23537949
|
A: If working with datetimes in indices, use Index.map with the same format of DatetimeIndexes:
s.index = pd.to_datetime(s.index)
df.index = pd.to_datetime(df.index)
df['new'] = df.index.strftime('%m-%d %H:%M').map(s.rename(index=lambda x: x.strftime('%m-%d %H:%M')))
A: Thank you, Jezrael!
In the end I went for this solution; it's a bit more declarative in my opinion. I'm not sure which method is more performant, but I have only a few thousand lines, so either way should work just fine. :)
s.index = pd.to_datetime(s.index)
df.index = pd.to_datetime(df.index)
df['new'] = df.index.to_series().apply(lambda timestamp: s.loc[timestamp.replace(year=2030)])
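A minimal, self-contained illustration of this approach with toy data of my own (the year 2030 mirrors the replace call above):

```python
import pandas as pd

# Lookup series indexed by timestamps in the reference year.
s = pd.Series([10, 20],
              index=pd.to_datetime(["2030-01-01 00:00", "2030-01-02 00:00"]))

# Frame indexed by the same month/day/time in a different year.
df = pd.DataFrame({"v": [1, 2]},
                  index=pd.to_datetime(["2021-01-01 00:00", "2021-01-02 00:00"]))

# Normalize the year before the lookup, as in the snippet above.
df["new"] = df.index.to_series().apply(lambda ts: s.loc[ts.replace(year=2030)])
print(df["new"].tolist())  # [10, 20]
```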
| |
doc_23537950
|
Cows in the FooLand city are interesting animals. One of their
specialties is related to producing offsprings. A cow in FooLand
produces its first calve (female calf) at the age of two years and
proceeds to produce other calves (one female calf a year).
Now the farmer Harold wants to know how many animals would he have at
the end of N years, if we assume that none of the calves die, given
that initially, he has only one female calf?
Input:
The first line contains a single integer T denoting the number of test
cases. Each line of the test case contains a single integer N as
described in the problem.
Output:
For each test case print in new line the number of animals expected at
the end of N years
I solved this by below code-
public static void main (String[] args)
{
Scanner ab=new Scanner(System.in);
int t=ab.nextInt(); //number of test cases
for (int i=0;i<t;i++)
{
int n=ab.nextInt(); //years
int arr[]=new int[n];
arr[0]=1;
arr[1]=2;
if(n>2)
{
for(i=2;i<n;i++)
{
arr[i]=arr[i-1]+arr[i-2];
}
}
System.out.println(arr[arr.length-1]);
}
}
But when I searched on the net, they have given a much more complex solution:
https://hackerranksolutionc.blogspot.in/2017/10/cows-of-fooland.html
I tried matching the output and found the result is the same for small numbers but different for very large numbers.
I want to know: is there anything wrong with my solution?
A: Your solution won't work if the expected time complexity is O(log N).
You can convert the recursive equation into matrix form and solve it using matrix exponentiation.
For a more generalized version solve this problem https://www.spoj.com/problems/SEQ/
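A sketch of the matrix-exponentiation idea the answer mentions (class and method names are mine). herd(n) follows herd(1)=1, herd(2)=2, herd(n)=herd(n-1)+herd(n-2), which is the top-left entry of [[1,1],[1,0]]^n; BigInteger also sidesteps the int overflow that would make an int-based solution like the one above diverge for very large N:

```java
import java.math.BigInteger;

public class CowsOfFooland {

    // Multiply two 2x2 matrices of BigInteger.
    static BigInteger[][] multiply(BigInteger[][] a, BigInteger[][] b) {
        BigInteger[][] c = new BigInteger[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                c[i][j] = a[i][0].multiply(b[0][j]).add(a[i][1].multiply(b[1][j]));
        return c;
    }

    // Herd size after n years, via O(log n) matrix multiplications of
    // [[1,1],[1,0]]; the answer is the top-left entry of the n-th power.
    static BigInteger herd(int n) {
        BigInteger[][] result = {
            {BigInteger.ONE, BigInteger.ZERO},
            {BigInteger.ZERO, BigInteger.ONE}   // identity matrix
        };
        BigInteger[][] base = {
            {BigInteger.ONE, BigInteger.ONE},
            {BigInteger.ONE, BigInteger.ZERO}
        };
        while (n > 0) {
            if ((n & 1) == 1) result = multiply(result, base);
            base = multiply(base, base);
            n >>= 1;
        }
        return result[0][0];
    }

    public static void main(String[] args) {
        for (int year = 1; year <= 6; year++)
            System.out.println(year + " -> " + herd(year));  // 1, 2, 3, 5, 8, 13
    }
}
```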
| |
doc_23537951
|
@page {
margin: 20pt;
}
@page :first {
margin-top: 0pt;
}
My first @page selector works fine at setting all page margins to 20pt. But the @page :first selector which should set the top margin to 0 on the first page only has absolutely no effect.
A: This looks like a bug introduced in Dompdf 0.8.3. You can fall back to 0.8.2 or (when it's available) upgrade to the next release.
If you don't want to fall back to an earlier release you can grab a nightly download. There's a link in the Dompdf project README.
| |
doc_23537952
|
var desc = "All over print design SoulCal branding badge 80% Polyester, 20% Elastane Machine washable Keep away from fire."
function materialCutter(desc){
// some logic here...
// var material = "80% Polyester, 20% Elastane"
return material;
}
I think I have to use the "%" signs, but at this point, to be honest, I am stuck.
Thanks in advance,
A: With String.match() function:
var desc = "All over print design SoulCal branding badge 80% Polyester, 20% Elastane Machine washable Keep away from fire.",
materials = desc.match(/\b\d+% \w+/g);
console.log(materials);
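If the percentage and the material name are needed separately, the same pattern can be extended with capture groups (the { percent, name } object shape is my own choice):

```javascript
var desc = "All over print design SoulCal branding badge 80% Polyester, 20% Elastane Machine washable Keep away from fire.";

// Collect each "NN% Word" match, split into its captured parts.
var materials = [];
var re = /(\d+)% (\w+)/g;
var m;
while ((m = re.exec(desc)) !== null) {
    materials.push({ percent: Number(m[1]), name: m[2] });
}
console.log(materials);
// [ { percent: 80, name: 'Polyester' }, { percent: 20, name: 'Elastane' } ]
```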
| |
doc_23537953
|
Best way would be using jquery.
A: Probably in a tabular format. Personally, jQuery + dataTables (jQuery plugin) works really well for most applications where you can sort out times and types of messages and such. dataTables would allow you to say, view 100 per page and sort by it.
A: Since your problem seems to be client-side searching-and-filtering your log (since you use a lot of Ctrl+F), you could use the quicksearch jquery plugin (see the example on the page).
Now in the examples, the plugin filters table rows, but I believe you could also use it to filter other type of elements, such as <p>log message</p>. Just wrap individual messages in an appropriate html element, and then filter them.
A: Define 'long'. Is the data being displayed in a very long string that makes you scroll horizontally?
If so you could try breaking it down using something like:
echo nl2br($very_long_log_file);
which will put a <br/> before all new lines.
A: You could provide an interactive "search" field at the top that would filter the long list of lines to show only the lines matching the search criteria. This could be done on the client side with using jQuery which would avoid a round-trip to the server.
| |
doc_23537954
|
Do you know if there is an implementation or an example using this algorithm, maybe MATLAB?
A: I'm a bit confused. FastICA, which you mention, implements the fast-fixed point algorithm in MATLAB. So that would be your answer then?
EDIT: The FastICA code is pretty easy to use. The only input it needs is a mixed signal, which it then tries to unmix. You can also give it additional inputs, like doing PCA, etc.. The main difficulty is in creating the mixed signal, which needs to be a n x N matrix, with n being the number of observations and N the length of the signal.
Here is an example that first creates one signal with 4 observations, then mixes that signal by multiplying it with a random mixing matrix, and finally uses ICA on the mixed signal to try to recover the original signal.
N=500; %data size
v=[0:N-1];
sig(1,:)=sin(v/2); %sinusoid
sig(2,:)=((rem(v,23)-11)/9).^5; %funny curve
sig(3,:)=((rem(v,27)-13)/9); %saw-tooth
sig(4,:)=((rand(1,N)<.5)*2-1).*log(rand(1,N)); %impulsive noise
%create mixtures
Aorig=rand(size(sig,1));
mixedsig=(Aorig*sig);
%perform ICA to unmix signal
ica = fastica(mixedsig);
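For readers without MATLAB, the mixing model this example is built on can be sketched in plain Python. Note that this is not ICA itself: FastICA estimates the unmixing matrix blindly from the mixed data, whereas here we simply invert a known mixing matrix A to show what "unmixing" means. The signal shapes and the 2x2 matrix are simplified stand-ins chosen for illustration.

```python
import math

N = 500  # signal length

# Two source signals, one row per source (the n x N layout fastica expects)
sig = [
    [math.sin(v / 2.0) for v in range(N)],      # sinusoid
    [((v % 23) - 11) / 9.0 for v in range(N)],  # saw-tooth-like curve
]

# Mix the sources with a known, invertible 2x2 matrix A: mixed = A * sig
A = [[0.6, 0.4],
     [0.3, 0.7]]
mixed = [
    [A[i][0] * sig[0][k] + A[i][1] * sig[1][k] for k in range(N)]
    for i in range(2)
]

# ICA would have to *estimate* the unmixing matrix; here we just invert A
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [[ A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det,  A[0][0] / det]]
recovered = [
    [W[i][0] * mixed[0][k] + W[i][1] * mixed[1][k] for k in range(N)]
    for i in range(2)
]
# recovered now matches sig up to floating-point error
```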
| |
doc_23537955
|
//component.vue
<template>
<div>
Hello there?
<a @click="changed">New</a>
<ol>
<li v-for="option in list">
<div class='row justify-content-start'>
<div class='col-sm-6'><input v-model="option.value" type='text' placeholder="key"/></div>
<div class='col-sm-6'><input v-model="option.name" type='text' placeholder="Name"/></div>
</div>
</li>
</ol>
</div>
</template>
<script>
export default{
props:['options','iscolumn'],
data(){
return {list:this.options,item:{name:'',value:''}}
},
methods:{
changed(){
this.$bus.$emit('add-option',this.item,this.iscolumn);
}
}
}
</script>
/** root.vue **/
<template>
<div>
<h3>Rows</h3>
<div><rows :options="rows" :iscolumn="false"/></div>
<h3>Columns</h3>
<div><rows :options="columns" :iscolumn="true" /></div>
</div>
</template>
<script>
export default{
components:{'rows':require('./component')},
data(){
return {
columns:[],rows:[]
}
},
created(){
this.$bus.$on('add-option',(option,iscolumn)=>{
if (iscolumn) {this.columns.push(option);}
else this.rows.push(option);
})
}
}
</script>
When I click on New from the root, both columns and rows get populated.
I want each component to be independent, and I can't understand how they are sharing state.
Any assistance will be appreciated.
A: Assign unique key attributes to the rows components:
<template>
<div>
<h3>Rows</h3>
<div><rows key="rows1" :options="rows" :iscolumn="false"/></div>
<h3>Columns</h3>
<div><rows key="rows2" :options="columns" :iscolumn="true" /></div>
</div>
</template>
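The bus itself does not share state between the two components. A stripped-down sketch (the EventBus below is a made-up stand-in, not Vue's implementation) shows that as long as the iscolumn flag is spelled consistently in the emitter and the handler, each emit only populates one array:

```javascript
// Minimal event-bus stand-in, just to show the routing logic
function EventBus() { this.handlers = {}; }
EventBus.prototype.$on = function (name, fn) {
  (this.handlers[name] = this.handlers[name] || []).push(fn);
};
EventBus.prototype.$emit = function (name) {
  var args = Array.prototype.slice.call(arguments, 1);
  (this.handlers[name] || []).forEach(function (fn) { fn.apply(null, args); });
};

var bus = new EventBus();
var columns = [], rows = [];

// Root-side listener: the iscolumn flag decides which array gets the option
bus.$on('add-option', function (option, iscolumn) {
  if (iscolumn) columns.push(option);
  else rows.push(option);
});

// Each component emits with its own flag, so only one array is populated
bus.$emit('add-option', { name: '', value: '' }, true);   // the "columns" instance
bus.$emit('add-option', { name: '', value: '' }, false);  // the "rows" instance
console.log(columns.length, rows.length); // 1 1
```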
| |
doc_23537956
|
I am following these steps for the upgrade.
*
*Create a backup of the current database, repos & uploads. (Not sure if relevant.)
sudo gitlab-rake gitlab:backup:create
*Download the 7.13.2 Gitlab Omnibus package.
*Install the Gitlab 7.13.2 Omnibus package.
sudo dpkg -i gitlab-ce_7.13.2-ce.0_amd64.deb
*Reconfigure Gitlab
sudo gitlab-ctl reconfigure
*Restart services
sudo gitlab-ctl restart
All the components start up, but there is a 500 error when I try to access any page and the logs show that the database no longer exists. (database gitlabhq_production does not exist)
I am not sure why this is happening. Am I missing something/are there any other steps to be followed?
Things I have tried:
*
*Use the sudo gitlab-ctl upgrade command to upgrade. Did not work; after the command completes, GitLab is still at version 7.10.0.
*Create a backup, do a fresh installation of 7.10.0 version, restore the backup & perform the above listed steps. Did not work, logs show that the production database is missing.
Things I am going to try:
*
*Try an incremental upgrade (7.10.x -> 7.11.x -> 7.12.x -> 7.13.x). Will be posting back the results.
Please let me know if any other information is needed.
| |
doc_23537957
|
Here is my screenshot, and below is my code.
I don't know what's wrong with this code.
PS: it's not even when I press the button; sometimes if I scroll the ListView, the color changes by itself.
public void colorToggle(View view) {
int[] attrs = {android.R.attr.popupBackground};
TypedArray ta = obtainStyledAttributes(R.style.MyApp_PopupMenu, attrs);
final LinearLayout propLayout = (LinearLayout) findViewById(R.id.leot);
ListView listView = (ListView) findViewById(android.R.id.list);
TextView textView = (TextView) findViewById(R.id.wilayah);
switch (view.getId()) {
case R.id.blueButton: {
int holoBlue = getResources().getColor(R.color.holo_blue_light);
mFab.setColor(holoBlue);
getActionBar().setBackgroundDrawable(new ColorDrawable(holoBlue));
mFab.setDrawable(getResources().getDrawable(R.drawable.ic_popup_sync_6));
int popupBackground = ta.getColor(0, R.color.holo_blue_light);
Log.i("Retrieved textColor as hex:", Integer.toHexString(popupBackground));
propLayout.setVisibility(View.GONE);
listView.setDivider(new ColorDrawable(holoBlue));
listView.setDividerHeight(1);
textView.setTextColor(holoBlue);
break;
}
case R.id.purpleButton: {
int holoPurple = getResources().getColor(R.color.holo_purple);
mFab.setColor(holoPurple);
getActionBar().setBackgroundDrawable(new ColorDrawable(holoPurple));
mFab.setDrawable(getResources().getDrawable(R.drawable.ic_popup_sync_6));
int popupBackground = ta.getColor(0, R.color.holo_purple);
Log.i("Retrieved textColor as hex:", Integer.toHexString(popupBackground));
propLayout.setVisibility(View.GONE);
listView.setDivider(new ColorDrawable(holoPurple));
listView.setDividerHeight(1);
textView.setTextColor(holoPurple);
break;
}
case R.id.greenButton: {
int holoGreen = getResources().getColor(R.color.holo_green_light);
mFab.setColor(holoGreen);
getActionBar().setBackgroundDrawable(new ColorDrawable(holoGreen));
mFab.setDrawable(getResources().getDrawable(R.drawable.ic_popup_sync_6));
int popupBackground = ta.getColor(0, R.color.holo_green_light);
Log.i("Retrieved textColor as hex:", Integer.toHexString(popupBackground));
propLayout.setVisibility(View.GONE);
listView.setDivider(new ColorDrawable(holoGreen));
listView.setDividerHeight(1);
textView.setTextColor(holoGreen);
break;
}
case R.id.orangeButton: {
int holoOrange = getResources().getColor(R.color.holo_orange_light);
mFab.setColor(holoOrange);
getActionBar().setBackgroundDrawable(new ColorDrawable(holoOrange));
mFab.setDrawable(getResources().getDrawable(R.drawable.ic_popup_sync_6));
int popupBackground = ta.getColor(0, R.color.holo_orange_light);
Log.i("Retrieved textColor as hex:", Integer.toHexString(popupBackground));
propLayout.setVisibility(View.GONE);
listView.setDivider(new ColorDrawable(holoOrange));
listView.setDividerHeight(1);
textView.setTextColor(holoOrange);
break;
}
case R.id.redButton: {
int holoRed = getResources().getColor(R.color.holo_red_light);
mFab.setColor(holoRed);
getActionBar().setBackgroundDrawable(new ColorDrawable(holoRed));
mFab.setDrawable(getResources().getDrawable(R.drawable.ic_popup_sync_6));
int popupBackground = ta.getColor(0, R.color.holo_red_light);
Log.i("Retrieved textColor as hex:", Integer.toHexString(popupBackground));
propLayout.setVisibility(View.GONE);
listView.setDivider(new ColorDrawable(holoRed));
listView.setDividerHeight(1);
textView.setTextColor(holoRed);
break;
}
}
ta.recycle();
}
A: Try calling the adapter's notifyDataSetChanged() after the color change. It should redraw the whole ListView with the new colors for every item.
A:
if I scroll the ListView, the color changes by itself
When you scroll, the items in the ListView that move off-screen are recycled and re-used.
See this for a detailed explanation.
The way you are using the ListView seems to be incorrect. I recommend using a custom adapter backed by a model and assigning that adapter to the ListView.
See this for implementing a ListView with a custom adapter.
I want to change the color of all list items
Implement the custom adapter first. When the color changes, just loop through the model's list and change the color of each item, and then call the adapter's notifyDataSetChanged().
Hope this helps you fix your problem.
A: Hopefully this will help you achieve your desired output!
ThirdActivity.java (Including Adapter Class)
public class ThirdActivity extends Activity {
View view_red, view_blue;
ListView lst_data;
int mySelectedColor;
MyListAdapter myListAdapter;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.demo_3_list);
lst_data = (ListView) findViewById(R.id.lst_data);
// Statically taken two views for color selection
view_blue = findViewById(R.id.view_blue);
view_red = findViewById(R.id.view_red);
mySelectedColor = ContextCompat.getColor(this, android.R.color.holo_red_dark);
myListAdapter = new MyListAdapter(this, mySelectedColor);
lst_data.setAdapter(myListAdapter);
view_blue.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (myListAdapter == null) {
myListAdapter = new MyListAdapter(ThirdActivity.this, ContextCompat.getColor(ThirdActivity.this, android.R.color.holo_blue_dark));
lst_data.setAdapter(myListAdapter);
return;
}
mySelectedColor = ContextCompat.getColor(ThirdActivity.this, android.R.color.holo_blue_dark);
myListAdapter.notifyDataSetChanged();
}
});
view_red.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (myListAdapter == null) {
myListAdapter = new MyListAdapter(ThirdActivity.this, ContextCompat.getColor(ThirdActivity.this, android.R.color.holo_red_dark));
lst_data.setAdapter(myListAdapter);
return;
}
mySelectedColor = ContextCompat.getColor(ThirdActivity.this, android.R.color.holo_red_dark);
myListAdapter.notifyDataSetChanged();
}
});
}
public class MyListAdapter extends BaseAdapter {
private int selectedMyColor;
private Context context;
private LayoutInflater mLayoutInflater;
public MyListAdapter(Context context, int mySelectedColor) {
this.context = context;
mLayoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
this.selectedMyColor = mySelectedColor;
}
@Override
public int getCount() {
return 20;
}
@Override
public Object getItem(int position) {
return null;
}
@Override
public long getItemId(int position) {
return 0;
}
@Override
public View getView(int position, View view, ViewGroup parent) {
ViewHolder holder = null;
if (view == null) {
//The view is not a recycled one: we have to inflate
view = mLayoutInflater.inflate(R.layout.demo_3_row_layout, parent, false);
holder = new ViewHolder();
holder.txt_title = (TextView) view.findViewById(R.id.txt_title);
view.setTag(holder);
} else {
holder = (ViewHolder) view.getTag();
}
holder.txt_title.setTextColor(mySelectedColor);
return view;
}
class ViewHolder {
TextView txt_title;
}
}
}
| |
doc_23537958
|
...
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/hibernate/bytecode/instrumentation/internal/FieldInterceptionHelper
at org.hibernate.jpa.internal.util.PersistenceUtilHelper.isLoadedWithoutReference(PersistenceUtilHelper.java:119)
at org.hibernate.jpa.HibernatePersistenceProvider$1.isLoadedWithoutReference(HibernatePersistenceProvider.java:171)
at javax.persistence.Persistence$1.isLoaded(Persistence.java:111)
at org.hibernate.validator.internal.engine.resolver.JPATraversableResolver.isReachable(JPATraversableResolver.java:46)
at org.hibernate.validator.internal.engine.resolver.DefaultTraversableResolver.isReachable(DefaultTraversableResolver.java:128)
at org.hibernate.validator.internal.engine.resolver.CachingTraversableResolverForSingleValidation.isReachable(CachingTraversableResolverForSingleValidation.java:36)
at org.hibernate.validator.internal.engine.ValidatorImpl.isReachable(ValidatorImpl.java:1522)
at org.hibernate.validator.internal.engine.ValidatorImpl.isValidationRequired(ValidatorImpl.java:1507)
at org.hibernate.validator.internal.engine.ValidatorImpl.validateMetaConstraint(ValidatorImpl.java:584)
at org.hibernate.validator.internal.engine.ValidatorImpl.validateConstraint(ValidatorImpl.java:555)
at org.hibernate.validator.internal.engine.ValidatorImpl.validateConstraintsForDefaultGroup(ValidatorImpl.java:490)
at org.hibernate.validator.internal.engine.ValidatorImpl.validateConstraintsForCurrentGroup(ValidatorImpl.java:454)
at org.hibernate.validator.internal.engine.ValidatorImpl.validateInContext(ValidatorImpl.java:406)
at org.hibernate.validator.internal.engine.ValidatorImpl.validate(ValidatorImpl.java:204)
at org.springframework.validation.beanvalidation.SpringValidatorAdapter.validate(SpringValidatorAdapter.java:108)
at org.springframework.validation.DataBinder.validate(DataBinder.java:866)
at org.springframework.web.method.annotation.ModelAttributeMethodProcessor.validateIfApplicable(ModelAttributeMethodProcessor.java:164)
at org.springframework.web.method.annotation.ModelAttributeMethodProcessor.resolveArgument(ModelAttributeMethodProcessor.java:111)
at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:99)
at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:161)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:128)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:817)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:731)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)
... 33 more
Caused by: java.lang.ClassNotFoundException: org.hibernate.bytecode.instrumentation.internal.FieldInterceptionHelper from [Module "deployment.Employee_Ex.war:main" from Service Module Loader]
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:198)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:363)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:351)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:93)
... 59 more
there is the dependencies in my pom file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.fussa</groupId>
<artifactId>EX</artifactId>
<name>EX</name>
<packaging>war</packaging>
<version>1.0.0-BUILD-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>4.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>4.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>4.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>4.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>4.2.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.1.0.Final</version>
</dependency>
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.1.0.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>5.2.4.Final</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.38</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>org.jadira.usertype</groupId>
<artifactId>usertype.core</artifactId>
<version>5.0.0.GA</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
</dependency>
<dependency>
<groupId>javax.servlet.jsp</groupId>
<artifactId>javax.servlet.jsp-api</artifactId>
<version>2.3.1</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>javax.transaction</groupId>
<artifactId>jta</artifactId>
<version>1.1</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.6</version>
<configuration>
<warSourceDirectory>WebContent</warSourceDirectory>
<failOnMissingWebXml>false</failOnMissingWebXml>
<warName>Employee_Ex</warName>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
The above issue only occurs during validation of fields.
How can I resolve this issue?
Thanks for any suggestions.
A: The error mentioned is occurring due to a possible clash of dependency versions.
WildFly already provides both hibernate-core and hibernate-validator dependencies in <wildfly_dir>\modules\system\layers\base\org\hibernate.
In the case of WildFly10, the dependencies' versions are the following:
*
*hibernate-core-5.0.7.Final
*hibernate-validator-5.2.3.Final
Therefore, on your pom.xml, you could place your Hibernate dependencies as provided and let the container use its own:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.1.0.Final</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>5.2.4.Final</version>
<scope>provided</scope>
</dependency>
But if you want to provide your own dependencies, as mentioned in the WildFly 10 documentation, you should provide a jboss-deployment-structure.xml, where you basically tell WildFly to disregard its own dependencies:
<jboss-deployment-structure>
<deployment>
<exclusions>
<module name="org.hibernate" slot="main" />
</exclusions>
</deployment>
</jboss-deployment-structure>
This way, the container will load the dependencies that were packaged with your application and that are present on your WAR's WEB-INF/lib folder.
EDIT
Looking at the source code of PersistenceUtilHelper.isLoadedWithoutReference, one notices that in Hibernate 5.1 it no longer references the class FieldInterceptionHelper on line 119, where the error occurs, whereas in version 5.0 it still does.
I also suggest you to add the most recent version of the hibernate-entitymanager dependency, in order to be in accordance with the other Hibernate dependencies:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>5.1.0.Final</version>
</dependency>
A: The following worked for me:
hibernate-core-5.1.0.Final
hibernate-entitymanager-5.1.0.Final
hibernate-validator-5.2.4.Final
| |
doc_23537959
|
import java.nio.ByteBuffer;
import java.util.Random;
public class MemPressureTest {
static final int SIZE = 4096;
static final class Bigish {
final ByteBuffer b;
public Bigish() {
this(ByteBuffer.allocate(SIZE));
}
public Bigish(ByteBuffer b) {
this.b = b;
}
public void fill(byte bt) {
b.clear();
for (int i = 0; i < SIZE; ++i) {
b.put(bt);
}
}
}
public static void main(String[] args) {
Random random = new Random(1);
Bigish tmp = new Bigish();
for (int i = 0; i < 3e7; ++i) {
tmp.fill((byte)random.nextInt(255));
}
}
}
On my laptop, with default GC settings, it runs in about 95 seconds:
/tmp$ time java -Xlog:gc MemPressureTest
[0.006s][info][gc] Using G1
real 1m35.552s
user 1m33.658s
sys 0m0.428s
This is where things get strange. I tweaked the program to allocate a new object for each iteration:
...
Random random = new Random(1);
for (int i = 0; i < 3e7; ++i) {
Bigish tmp = new Bigish();
tmp.fill((byte)random.nextInt(255));
}
...
In theory, this should add some small overhead, but none of the objects should ever be promoted out of Eden. At best, I'd expect the runtimes to be close to identical. However, this test completes in ~17 seconds:
/tmp$ time java -Xlog:gc MemPressureTest
[0.007s][info][gc] Using G1
[0.090s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 23M->1M(130M) 1.304ms
[0.181s][info][gc] GC(1) Pause Young (Normal) (G1 Evacuation Pause) 76M->1M(130M) 0.870ms
[0.247s][info][gc] GC(2) Pause Young (Normal) (G1 Evacuation Pause) 76M->0M(130M) 0.844ms
[0.317s][info][gc] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 75M->0M(130M) 0.793ms
[0.381s][info][gc] GC(4) Pause Young (Normal) (G1 Evacuation Pause) 75M->0M(130M) 0.859ms
[lots of similar GC pauses, snipped for brevity]
[16.608s][info][gc] GC(482) Pause Young (Normal) (G1 Evacuation Pause) 254M->0M(425M) 0.765ms
[16.643s][info][gc] GC(483) Pause Young (Normal) (G1 Evacuation Pause) 254M->0M(425M) 0.580ms
[16.676s][info][gc] GC(484) Pause Young (Normal) (G1 Evacuation Pause) 254M->0M(425M) 0.841ms
real 0m16.766s
user 0m16.578s
sys 0m0.576s
I ran both versions several times, with near identical results to the above. I feel like I must be missing something very obvious. Am I going insane? What could explain this difference in performance?
=== EDIT ===
I rewrote the test using JMH as per apangin and dan1st's suggestions:
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import java.nio.ByteBuffer;
import java.util.Random;
public class MemPressureTest {
static final int SIZE = 4096;
@State(Scope.Benchmark)
public static class Bigish {
final ByteBuffer b;
private Blackhole blackhole;
@Setup(Level.Trial)
public void setup(Blackhole blackhole) {
this.blackhole = blackhole;
}
public Bigish() {
this.b = ByteBuffer.allocate(SIZE);
}
public void fill(byte bt) {
b.clear();
for (int i = 0; i < SIZE; ++i) {
b.put(bt);
}
blackhole.consume(b);
}
}
static Random random = new Random(1);
@Benchmark
public static void test1(Blackhole blackhole) {
Bigish tmp = new Bigish();
tmp.setup(blackhole);
tmp.fill((byte)random.nextInt(255));
blackhole.consume(tmp);
}
@Benchmark
public static void test2(Bigish perm) {
perm.fill((byte) random.nextInt(255));
}
}
Still, the second test is much slower:
> Task :jmh
# JMH version: 1.35
# VM version: JDK 18.0.1.1, OpenJDK 64-Bit Server VM, 18.0.1.1+2-6
# VM invoker: /Users/xxx/Library/Java/JavaVirtualMachines/openjdk-18.0.1.1/Contents/Home/bin/java
# VM options: -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/Users/xxx/Dev/MemTests/build/tmp/jmh -Duser.country=US -Duser.language=en -Duser.variant
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: com.xxx.MemPressureTest.test1
# Run progress: 0.00% complete, ETA 00:16:40
# Fork: 1 of 5
# Warmup Iteration 1: 2183998.556 ops/s
# Warmup Iteration 2: 2281885.941 ops/s
# Warmup Iteration 3: 2239644.018 ops/s
# Warmup Iteration 4: 1608047.994 ops/s
# Warmup Iteration 5: 1992314.001 ops/s
Iteration 1: 2053657.571 ops/s
Iteration 2: 2054957.773 ops/s
Iteration 3: 2051595.233 ops/s
Iteration 4: 2054878.239 ops/s
Iteration 5: 2031111.214 ops/s
# Run progress: 10.00% complete, ETA 00:15:04
# Fork: 2 of 5
# Warmup Iteration 1: 2228594.345 ops/s
# Warmup Iteration 2: 2257983.355 ops/s
# Warmup Iteration 3: 2063130.244 ops/s
# Warmup Iteration 4: 1629084.669 ops/s
# Warmup Iteration 5: 2063018.496 ops/s
Iteration 1: 1939260.937 ops/s
Iteration 2: 1791414.018 ops/s
Iteration 3: 1914987.221 ops/s
Iteration 4: 1969484.898 ops/s
Iteration 5: 1891440.624 ops/s
# Run progress: 20.00% complete, ETA 00:13:23
# Fork: 3 of 5
# Warmup Iteration 1: 2228664.719 ops/s
# Warmup Iteration 2: 2263677.403 ops/s
# Warmup Iteration 3: 2237032.464 ops/s
# Warmup Iteration 4: 2040040.243 ops/s
# Warmup Iteration 5: 2038848.530 ops/s
Iteration 1: 2023934.952 ops/s
Iteration 2: 2041874.437 ops/s
Iteration 3: 2002858.770 ops/s
Iteration 4: 2039727.607 ops/s
Iteration 5: 2045827.670 ops/s
# Run progress: 30.00% complete, ETA 00:11:43
# Fork: 4 of 5
# Warmup Iteration 1: 2105430.688 ops/s
# Warmup Iteration 2: 2279387.762 ops/s
# Warmup Iteration 3: 2228346.691 ops/s
# Warmup Iteration 4: 1438607.183 ops/s
# Warmup Iteration 5: 2059319.745 ops/s
Iteration 1: 1112543.932 ops/s
Iteration 2: 1977077.976 ops/s
Iteration 3: 2040147.355 ops/s
Iteration 4: 1975766.032 ops/s
Iteration 5: 2003532.092 ops/s
# Run progress: 40.00% complete, ETA 00:10:02
# Fork: 5 of 5
# Warmup Iteration 1: 2240203.848 ops/s
# Warmup Iteration 2: 2245673.994 ops/s
# Warmup Iteration 3: 2096257.864 ops/s
# Warmup Iteration 4: 2046527.740 ops/s
# Warmup Iteration 5: 2050379.941 ops/s
Iteration 1: 2050691.989 ops/s
Iteration 2: 2057803.100 ops/s
Iteration 3: 2058634.766 ops/s
Iteration 4: 2060596.595 ops/s
Iteration 5: 2061282.107 ops/s
Result "com.xxx.MemPressureTest.test1":
1972203.484 ±(99.9%) 142904.698 ops/s [Average]
(min, avg, max) = (1112543.932, 1972203.484, 2061282.107), stdev = 190773.683
CI (99.9%): [1829298.786, 2115108.182] (assumes normal distribution)
# JMH version: 1.35
# VM version: JDK 18.0.1.1, OpenJDK 64-Bit Server VM, 18.0.1.1+2-6
# VM invoker: /Users/xxx/Library/Java/JavaVirtualMachines/openjdk-18.0.1.1/Contents/Home/bin/java
# VM options: -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/Users/xxx/Dev/MemTests/build/tmp/jmh -Duser.country=US -Duser.language=en -Duser.variant
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: com.xxx.MemPressureTest.test2
# Run progress: 50.00% complete, ETA 00:08:22
# Fork: 1 of 5
# Warmup Iteration 1: 282751.407 ops/s
# Warmup Iteration 2: 283333.984 ops/s
# Warmup Iteration 3: 293785.079 ops/s
# Warmup Iteration 4: 268403.105 ops/s
# Warmup Iteration 5: 280054.277 ops/s
Iteration 1: 279093.118 ops/s
Iteration 2: 282782.996 ops/s
Iteration 3: 282688.921 ops/s
Iteration 4: 291578.963 ops/s
Iteration 5: 294835.777 ops/s
# Run progress: 60.00% complete, ETA 00:06:41
# Fork: 2 of 5
# Warmup Iteration 1: 283735.550 ops/s
# Warmup Iteration 2: 283536.547 ops/s
# Warmup Iteration 3: 294403.173 ops/s
# Warmup Iteration 4: 284161.042 ops/s
# Warmup Iteration 5: 281719.077 ops/s
Iteration 1: 276838.416 ops/s
Iteration 2: 284063.117 ops/s
Iteration 3: 282361.985 ops/s
Iteration 4: 289125.092 ops/s
Iteration 5: 294236.625 ops/s
# Run progress: 70.00% complete, ETA 00:05:01
# Fork: 3 of 5
# Warmup Iteration 1: 284567.336 ops/s
# Warmup Iteration 2: 283548.713 ops/s
# Warmup Iteration 3: 294317.511 ops/s
# Warmup Iteration 4: 283501.873 ops/s
# Warmup Iteration 5: 283691.306 ops/s
Iteration 1: 283462.749 ops/s
Iteration 2: 284120.587 ops/s
Iteration 3: 264878.952 ops/s
Iteration 4: 292681.168 ops/s
Iteration 5: 295279.759 ops/s
# Run progress: 80.00% complete, ETA 00:03:20
# Fork: 4 of 5
# Warmup Iteration 1: 284823.519 ops/s
# Warmup Iteration 2: 283913.207 ops/s
# Warmup Iteration 3: 294401.483 ops/s
# Warmup Iteration 4: 283998.027 ops/s
# Warmup Iteration 5: 283987.408 ops/s
Iteration 1: 278014.618 ops/s
Iteration 2: 283431.491 ops/s
Iteration 3: 284465.945 ops/s
Iteration 4: 293202.934 ops/s
Iteration 5: 290059.807 ops/s
# Run progress: 90.00% complete, ETA 00:01:40
# Fork: 5 of 5
# Warmup Iteration 1: 285598.809 ops/s
# Warmup Iteration 2: 284434.916 ops/s
# Warmup Iteration 3: 294355.547 ops/s
# Warmup Iteration 4: 284307.860 ops/s
# Warmup Iteration 5: 284297.362 ops/s
Iteration 1: 283676.043 ops/s
Iteration 2: 283609.750 ops/s
Iteration 3: 284575.124 ops/s
Iteration 4: 293564.269 ops/s
Iteration 5: 216267.883 ops/s
Result "com.xxx.MemPressureTest.test2":
282755.844 ±(99.9%) 11599.112 ops/s [Average]
(min, avg, max) = (216267.883, 282755.844, 295279.759), stdev = 15484.483
CI (99.9%): [271156.731, 294354.956] (assumes normal distribution)
The JMH Blackhole should prevent code removal and the fact that JMH is now in charge of running separate iterations should prevent parallelization, right? Shouldn't Blackhole also stop the object from being confined to the stack? Also, wouldn't there be more variation between warmup iterations if hotspot were still doing a significant amount of optimization?
A: Creating a new ByteBuffer right before filling it indeed helps the JIT compiler produce better optimized code when you use relative put methods, and here is why.
*
*The JIT compilation unit is a method. The HotSpot JVM does not perform whole-program optimization, which is quite hard even in theory due to the dynamic nature of Java and the open-world runtime environment.
*When the JVM compiles the test1 method, the buffer instantiation appears in the same compilation scope as the filling:
Bigish tmp = new Bigish();
tmp.setup(blackhole);
tmp.fill((byte)random.nextInt(255));
The JVM knows everything about the created buffer: its exact size and its backing array, and it knows the buffer has not been published yet, so no other thread sees it. Therefore the JVM can aggressively optimize the fill loop: vectorize it using AVX instructions and unroll it to set 512 bytes at a time:
0x000001cdf60c9ae0: mov %r9d,%r8d
0x000001cdf60c9ae3: movslq %r8d,%r9
0x000001cdf60c9ae6: add %r11,%r9
0x000001cdf60c9ae9: vmovdqu %ymm0,0x10(%rcx,%r9,1)
0x000001cdf60c9af0: vmovdqu %ymm0,0x30(%rcx,%r9,1)
0x000001cdf60c9af7: vmovdqu %ymm0,0x50(%rcx,%r9,1)
0x000001cdf60c9afe: vmovdqu %ymm0,0x70(%rcx,%r9,1)
0x000001cdf60c9b05: vmovdqu %ymm0,0x90(%rcx,%r9,1)
0x000001cdf60c9b0f: vmovdqu %ymm0,0xb0(%rcx,%r9,1)
0x000001cdf60c9b19: vmovdqu %ymm0,0xd0(%rcx,%r9,1)
0x000001cdf60c9b23: vmovdqu %ymm0,0xf0(%rcx,%r9,1)
0x000001cdf60c9b2d: vmovdqu %ymm0,0x110(%rcx,%r9,1)
0x000001cdf60c9b37: vmovdqu %ymm0,0x130(%rcx,%r9,1)
0x000001cdf60c9b41: vmovdqu %ymm0,0x150(%rcx,%r9,1)
0x000001cdf60c9b4b: vmovdqu %ymm0,0x170(%rcx,%r9,1)
0x000001cdf60c9b55: vmovdqu %ymm0,0x190(%rcx,%r9,1)
0x000001cdf60c9b5f: vmovdqu %ymm0,0x1b0(%rcx,%r9,1)
0x000001cdf60c9b69: vmovdqu %ymm0,0x1d0(%rcx,%r9,1)
0x000001cdf60c9b73: vmovdqu %ymm0,0x1f0(%rcx,%r9,1)
0x000001cdf60c9b7d: mov %r8d,%r9d
0x000001cdf60c9b80: add $0x200,%r9d
0x000001cdf60c9b87: cmp %r10d,%r9d
0x000001cdf60c9b8a: jl 0x000001cdf60c9ae0
*You use the relative put method. It not only sets a byte in the ByteBuffer, but also updates the position field. Note that the vectorized loop above does not update the position in memory; the JVM sets it just once after the loop, which is allowed as long as nobody can observe an inconsistent state of the buffer.
*Now try to publish the ByteBuffer before filling:
Bigish tmp = new Bigish();
volatileField = tmp; // publish
tmp.setup(blackhole);
tmp.fill((byte)random.nextInt(255));
The loop optimization breaks; now the array bytes are filled one by one, and the position field is incremented accordingly.
0x000001829b18ca5c: nopl 0x0(%rax)
0x000001829b18ca60: cmp %r11d,%esi
0x000001829b18ca63: jge 0x000001829b18ce34 ;*if_icmplt {reexecute=0 rethrow=0 return_oop=0}
; - java.nio.Buffer::nextPutIndex@10 (line 721)
; - java.nio.HeapByteBuffer::put@6 (line 209)
; - bench.MemPressureTest$Bigish::fill@22 (line 33)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca69: mov %esi,%ecx
0x000001829b18ca6b: add %edx,%ecx ;*getfield position {reexecute=0 rethrow=0 return_oop=0}
; - java.nio.Buffer::nextPutIndex@1 (line 720)
; - java.nio.HeapByteBuffer::put@6 (line 209)
; - bench.MemPressureTest$Bigish::fill@22 (line 33)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca6d: mov %esi,%eax
0x000001829b18ca6f: inc %eax ;*iinc {reexecute=0 rethrow=0 return_oop=0}
; - bench.MemPressureTest$Bigish::fill@26 (line 32)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca71: mov %eax,0x18(%r10) ;*putfield position {reexecute=0 rethrow=0 return_oop=0}
; - java.nio.Buffer::nextPutIndex@25 (line 723)
; - java.nio.HeapByteBuffer::put@6 (line 209)
; - bench.MemPressureTest$Bigish::fill@22 (line 33)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca75: cmp %r8d,%ecx
0x000001829b18ca78: jae 0x000001829b18ce14
0x000001829b18ca7e: movslq %esi,%r9
0x000001829b18ca81: add %r14,%r9
0x000001829b18ca84: mov %bl,0x10(%rdi,%r9,1) ;*bastore {reexecute=0 rethrow=0 return_oop=0}
; - java.nio.HeapByteBuffer::put@13 (line 209)
; - bench.MemPressureTest$Bigish::fill@22 (line 33)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca89: cmp $0x1000,%eax
0x000001829b18ca8f: jge 0x000001829b18ca95 ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
; - bench.MemPressureTest$Bigish::fill@14 (line 32)
; - bench.MemPressureTest::test1@28 (line 47)
0x000001829b18ca91: mov %eax,%esi
0x000001829b18ca93: jmp 0x000001829b18ca5c
That's exactly what happens in test2. Since the ByteBuffer object is external to the compilation scope, JIT can't optimize it as freely as a local not-yet-published object.
*Is it possible at all to optimize the fill loop in case of an external buffer?
The good news is that it is possible. Just use the absolute put method instead of the relative one. In this case, the position field remains unchanged, and the JIT can easily vectorize the loop without any risk of breaking ByteBuffer invariants.
for (int i = 0; i < SIZE; ++i) {
b.put(i, bt);
}
With this change, the loop is vectorized in both cases. Even better, test2 now becomes a lot faster than test1, proving that object creation indeed has a performance overhead.
Benchmark Mode Cnt Score Error Units
MemPressureTest.test1 thrpt 10 2447,370 ± 146,804 ops/ms
MemPressureTest.test2 thrpt 10 15677,575 ± 136,075 ops/ms
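For reference, here is a reconstructed sketch of the benchmark class with the absolute put applied - the class shape and the 4096 size are inferred from the disassembly (the 0x1000 loop bound), not copied from the author's source:

```java
import java.nio.ByteBuffer;

class Bigish {
    // 0x1000 in the disassembly's loop bound suggests a 4 KiB buffer
    static final int SIZE = 4096;

    private final ByteBuffer b = ByteBuffer.allocate(SIZE);

    void fill(byte bt) {
        // Absolute put: the position field is never read or written, so the
        // JIT can vectorize this loop even for an already-published buffer.
        for (int i = 0; i < SIZE; ++i) {
            b.put(i, bt);
        }
    }

    // accessor added for this sketch only
    byte get(int i) {
        return b.get(i);
    }
}
```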
Conclusion
*
*The counterintuitive performance difference was caused by the JVM's inability to vectorize the fill loop when the ByteBuffer object creation is not in the compilation scope.
*Prefer absolute get/put methods to relative ones where possible. Absolute methods are usually much faster, since they do not update the internal state of the ByteBuffer, and the JIT can apply more aggressive optimizations.
*Object creation indeed has an overhead, as the modified benchmark shows.
A: Disclaimer
The following are just theories; they might be completely wrong. I am neither a JIT nor a GC expert.
code removal
I think that the JIT just optimized away (some of) your code. If that is the case, it detected that you are not actually using the stored values and just removed the code allocating/filling the object. Things like JMH's black hole might help you with that.
Parallelization
It could also be the case that it parallelized your code. Since different loop iterations are independent of each other, it is possible to execute multiple iterations in parallel.
Stack allocations
Another possibility is that it detected that the object is confined to a very narrow scope and never escapes. Because of that, it might have moved your object to the stack, where it can be allocated/pushed and deallocated/popped quickly.
Closing thoughts
The JIT might always do unexpected things. Don't optimize prematurely and don't guess where your bottlenecks are. Measure your performance before and after any changes you make. Performance might not get lost where you expect it to.
This is also the case in other languages but especially in Java.
And, as apangin mentioned in the comments, you should really use JMH.
A: Your original question and the edited JMH version are actually slightly different.
In the edited version, as @apangin mentioned, it's the reference stored in a static field that prevents the code from being optimized.
In your original question, it's because you forgot to warm up. Here's a modified version:
public static void main(String[] args) {
var t1 = System.currentTimeMillis();
var warmup = Integer.parseInt(args[0]);
for (int i = 0; i < warmup; i++) { test(1); } // magic!!!
test(1000000);
var t2 = System.currentTimeMillis();
System.out.println(t2 - t1);
}
private static void test(int n) {
Random random = new Random(1);
Bigish tmp = new Bigish();
for (int i = 0; i < n; ++i) {
tmp.fill((byte) random.nextInt(255));
}
}
It takes an int argument warmup to help the JVM decide which methods should be inlined.
On my machine, which is OpenJDK Runtime Environment Zulu17.28+13-CA (build 17+35-LTS) on Windows, when warmup is 8000, the output is unpredictable. It usually takes ~2.7 seconds, but occasionally it only takes 110 milliseconds.
When warmup is set to 8500, it almost always completes in 110~120 milliseconds.
You may also run with the -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining options to see how the JVM inlines the methods. If everything is fully inlined, you should be able to see something like
@ 24 A$Bigish::<init> (11 bytes) inline (hot)
@ 4 java.nio.ByteBuffer::allocate (20 bytes) inline (hot)
@ 16 java.nio.HeapByteBuffer::<init> (21 bytes) inline (hot)
@ 10 java.nio.ByteBuffer::<init> (47 bytes) inline (hot)
@ 8 java.nio.Buffer::<init> (105 bytes) inline (hot)
@ 1 java.lang.Object::<init> (1 bytes) inline (hot)
@ 39 java.nio.ByteBuffer::limit (6 bytes) inline (hot)
@ 2 java.nio.ByteBuffer::limit (8 bytes) inline (hot)
@ 2 java.nio.Buffer::limit (65 bytes) inline (hot)
@ 45 java.nio.ByteBuffer::position (6 bytes) inline (hot)
@ 2 java.nio.ByteBuffer::position (8 bytes) inline (hot)
@ 2 java.nio.Buffer::position (52 bytes) inline (hot)
@ 17 java.nio.ByteOrder::nativeOrder (4 bytes) inline (hot)
@ 7 A$Bigish::<init> (10 bytes) inline (hot)
@ 1 java.lang.Object::<init> (1 bytes) inline (hot)
close to the bottom of the output.
Notice that only when the constructors of Bigish and ByteBuffer are fully inlined can the JVM prove that the underlying buffer will never be visible to another thread, which allows writes to the buffer to be safely vectorized.
BTW, this is yet another case that shows how tricky benchmarking is. Without digging into the details, it's difficult to tell which part is the real performance bottleneck. Even JMH can be misleading.
| |
doc_23537960
|
The structure of my file includes: the name of the configuration, the number of neurons, an array of neurons (each neuron has a strict number of receptors and synapses, which are also represented by arrays) and the coefficient values for each of them.
I need to get these values.
I have this JSON file:
{
"Task config name": "Test",
"Configuration": {
"NeuronsCount": 2,
"Neurons": [
{
"ReceptorsCount": 3,
"Receptors": [
{
"coef1": 17.32,
"coef2": 11.992,
"coef3": 2.314
},
{
"coef1": 12.982,
"coef2": 96.148,
"coef3": -1.899
},
{
"coef1": 49.11,
"coef2": 35.001,
"coef3": -643.52
}
],
"SynapsysCount": 4,
"Synapses": [
{
"coef1": 13.22,
"coef2": 31.992,
"coef3": 22.314
},
{
"coef1": 12.81,
"coef2": 36.8,
"coef3": -53.189
},
{
"coef1": 1.11,
"coef2": 44.261,
"coef3": -23.12
},
{
"coef1": 642.86,
"coef2": 24.24,
"coef3": 95.009
}
]
},
{
"ReceptorsCount": 3,
"Receptors": [
{
"coef1": 6.32,
"coef2": 64.992,
"coef3": 98.314
},
{
"coef1": 42.982,
"coef2": 11.148,
"coef3": -12.899
},
{
"coef1": 1.11,
"coef2": 752.001,
"coef3": -3.82
}
],
"SynapsysCount": 4,
"Synapses": [
{
"coef1": 19.82,
"coef2": 1.592,
"coef3": 75.384
},
{
"coef1": 89.81,
"coef2": 65.8,
"coef3": -13.189
},
{
"coef1": 18.11,
"coef2": 11.261,
"coef3": -211.12
},
{
"coef1": 2.86,
"coef2": 8.24,
"coef3": 6.009
}
]
}
]
}
}
How can I receive the values of coef# for each "Receptor" and "Synapse"?
I tried this, but it returns 0. How do I read such a file?
QByteArray data = jsonFile.readAll();
QJsonDocument document;
document = document.fromJson(data);
QJsonObject jsonObject = document.object();
QJsonArray neuronsArray = jsonObject.value("Neurons").toArray();
qDebug() << "Size = " << neuronsArray.size();
A: You have to iterate through the JSON document. Note that "Neurons" is nested inside the "Configuration" object, which is why looking it up on the root object gave you an empty array:
QJsonDocument doc = QJsonDocument::fromJson(file.readAll());
QJsonObject root = doc.object();
QJsonObject conf = root.value("Configuration").toObject();
//this gives you the neurons array, in there you have objects which you can access just like above
QJsonArray arr = conf.value("Neurons").toArray();
| |
doc_23537961
|
Here is a jsFiddle page with what I am trying to do. Thanks!
http://jsfiddle.net/qDmhV/722/
A: .html() returns an element's innerHTML including leading and trailing whitespace, which drawSvg() chokes on.
Try this (from your fiddle):
ctx.drawSvg($.trim($("#test2").html()), 0 , 0 , 500, 500);
$.trim will remove that whitespace for you.
| |
doc_23537962
|
Like the pattern fill here. https://www.ablebits.com/office-addins-blog/2012/03/28/excel-charts-tips/
A: The closest thing to what you want is probably gradients.
In Graphviz's Node, Edge and Graph Attributes page, it supports gradients when specifying color lists.
You can use them by referring to these examples.
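For instance, a gradient fill on a node can be written with a colon-separated color list (a sketch; exact rendering depends on the output format and Graphviz version):

```dot
digraph {
  node [style=filled]
  a [fillcolor="yellow:red"]                  // linear gradient
  b [fillcolor="yellow:red" gradientangle=90] // rotate the gradient
}
```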
Another alternative, which would be a bigger hassle, is to use the shapefile attribute and build a shapefile per pattern, although this feature is deprecated.
A third option, if you are outputting in SVG format, is to do some post-processing on the generated SVG to apply texture patterns to the nodes.
| |
doc_23537963
|
My first version is this
Promise.all = function(promiseArray) {
return new Promise((resolve, reject) => {
try {
let resultArray = []
const length = promiseArray.length
for (let i = 0; i <length; i++) {
promiseArray[i].then(data => {
resultArray.push(data)
if (resultArray.length === length) {
resolve(resultArray)
}
}, reject)
}
}
catch(e) {
reject(e)
}
})
}
However, the native Promise.all accepts not just arrays, but also any iterables, which means any object that has Symbol.iterator, such as an array or a Set or Map.
So something like this is going to work with the native Promise.all but not with my current implementation
function resolveTimeout(value, delay) {
return new Promise((resolve) => setTimeout(resolve, delay, value))
}
const requests = new Map()
requests.set('reqA', resolveTimeout(['potatoes', 'tomatoes'], 1000))
requests.set('reqB', resolveTimeout(['oranges', 'apples'], 100))
Promise.all(requests.values()).then(console.log); // it works
I modified my Promise.all by first adding a check to see if it has Symbol.iterator to make sure it is an iterable.
Promise.all = function(promiseArray) {
if (!promiseArray[Symbol.iterator]) {
throw new TypeError('The arguments should be an iterable!')
}
return new Promise((resolve, reject) => {
try {
But the challenge is how to iterate through the iterable. The current implementation gets its length and uses a for loop via const length = promiseArray.length; however, only arrays have a length property, and other iterables or iterators like Set or Map.values() will not have that property available.
How can I tweak my implementation to support other types of iterables like the native Promise.all does?
A:
But the challenge is with how to iterate through the iterable.
Just use for/of to iterate the iterable.
The current implementation gets its length and uses a for loop via const length = promiseArray.length; however, only arrays have a length property, and other iterables or iterators like Set or Map.values() will not have that property available.
Just count how many items you get inside the for/of iteration.
Here's a more detailed explanation of the implementation below
If you look at the spec for Promise.all() it accepts an iterable as its argument. That means that it must have the appropriate iterator such that you can iterate it with for/of. If you're implementing to the spec, you don't have to check if it is an iterable. If not, the implementation will throw and thus reject with the appropriate error (which is what it should do). The Promise executor already catches exceptions and rejects for you.
As has been mentioned elsewhere, you could use Array.from() on the iterable to produce an actual array from it, but that seems a bit wasteful because we don't really need a copy of that data, we just need to iterate it. And, we're going to iterate it synchronously so we don't have to worry about it changing during iteration.
So, it seems like it would be most efficient to use for/of.
It would be nice to iterate the iterable with .entries() because that would give us both index and value and we could then use the index to know where to put the result for this entry into the resultArray, but it does not appear that the spec requires support of .entries() on the iterable so this implementation makes do with just a simple iterable with for (const p of promiseIterable). The code uses its own counter to generate its own index to use for storing the result.
Likewise, we need to produce as the resolved value of the promise we're returning an array of results that are in the same order as the original promises.
And, the iterable does not have to be all promises - it can also contain plain values (that just get passed through in the result array) so we need to make sure we work with regular values too. The value we get from the original array is wrapped in Promise.resolve() to handle the case of a plain value (not a promise) being passed in the source array.
Then, lastly since we are not guaranteed to have a .length property, there is no efficient way to know in advance how many items are in the iterable. I work around that in two ways. First, I count the items as we go in the cntr variable so when we're done with the for/of loop, we know how many total items there were. Then, as the .then() results come in, we can decrement the counter and see when it hits zero to know when we're done.
Promise.all = function(promiseIterable) {
return new Promise((resolve, reject) => {
const resultArray = [];
let cntr = 0;
for (const p of promiseIterable) {
// keep our own local index so we know where this result goes in the
// result array
let i = cntr;
// keep track of total number of items in the iterable
++cntr;
// track each promise - cast to a promise if it's not one
Promise.resolve(p).then(val => {
// store result in the proper order in the result array
resultArray[i] = val;
// if we have all results now, we are done and can resolve
--cntr;
if (cntr === 0) {
resolve(resultArray);
}
}).catch(reject);
}
// if the promiseIterable is empty, we need to resolve with an empty array
// as we could not have executed any of the body of the above for loop
if (cntr === 0) {
resolve(resultArray);
}
});
}
FYI, you can see some of this logic in the ECMAScript specification for Promise.all() here. Also, make sure to look at PerformPromiseAll.
A: The easiest and most practical tweak would be to copy your iterated results into an array you control. Array.from is a good match, as on MDN (emphasis mine):
The Array.from() static method creates a new, shallow-copied Array instance from an array-like or iterable object.
[...]
Array.from() lets you create Arrays from:
*
*array-like objects (objects with a length property and indexed elements); or
*iterable objects (objects such as Map and Set).
This might also help by giving your inputs clear indices, which (as derpirsher mentions in the comments) will be necessary to ensure that your all implementation returns results in order even if they may complete out of order.
Of course, as you're doing this to better prepare for interviews, you may choose to avoid helpful functions like Array.from and favor a homemade implementation based on for ( ... of ... ) to check your understanding, as Mike 'Pomax' Kamermans mentions in the comments. That said, once in a real development role, I hope you would use Array.from (and the built-in Promise.all) as much as possible to reduce duplication and edge cases.
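As a sketch of that approach (written as a standalone promiseAll helper rather than overwriting the global, and simplified from a full spec-compliant implementation):

```javascript
function promiseAll(iterable) {
  // Array.from normalizes any iterable (Array, Set, Map.values(), ...)
  // into a real array with indices and a length.
  const items = Array.from(iterable);
  return new Promise((resolve, reject) => {
    const results = new Array(items.length);
    let remaining = items.length;
    if (remaining === 0) {
      resolve(results); // native Promise.all resolves [] for empty input
      return;
    }
    items.forEach((item, i) => {
      // Promise.resolve lets plain (non-promise) values pass through.
      Promise.resolve(item).then(value => {
        results[i] = value; // the index pins the result to input order
        if (--remaining === 0) resolve(results);
      }, reject);
    });
  });
}
```

Because Array.from materializes the iterable up front, the length is known immediately, and the captured indices keep the results in input order even when the promises settle out of order.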
| |
doc_23537964
|
vector <pair <int, int>> vp = {{1, 2}, {4, 4}, {2, 3}};
Now I want to sort this container in acsending order using sort function:
sort(vp.begin(), vp.end());
Output:
{{1, 2}, {2, 3}, {4, 4}}
Now my question is how the sort function works in depth.
A: It sorts in accordance with the ordering of std::pair<int, int> class, which compares the first elements, and if they are equivalent, then compares the second elements. What algorithm is actually used to sort the vector is implementation-defined. Typically it is a mixture of a number of algorithms to adapt to different situations (number of elements, etc.).
A: std::sort uses the elements' operator< when no other comparator is given. std::sort may use any sorting algorithm that meets the specification; most importantly, the number of comparisons is O(N·log(N)), where N = std::distance(first, last).
std::pair<T1,T2>::operator< compares first and only if they are equivalent compares their second.
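To illustrate both points from the answers - the default lexicographic pair ordering and the option to pass a custom comparator - here is a small sketch (the helper names are mine, not from the question):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Sorts ascending using std::pair's built-in operator<:
// compare .first, and only if those are equivalent, compare .second.
inline void sortPairs(std::vector<std::pair<int, int>>& vp) {
    std::sort(vp.begin(), vp.end());
}

// The same call with a custom comparator: order by .second descending.
inline void sortBySecondDesc(std::vector<std::pair<int, int>>& vp) {
    std::sort(vp.begin(), vp.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
}
```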
| |
doc_23537965
|
*
*Upgrade my mac to Maverick
*Installed Mac Ports For Maverick
Recently, my MongoDB stopped working because there was an error with libboost. I tried to do an update but the update always fails when trying to install ghostscript:
Error: Failed to configure ghostscript, consult /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_print_ghostscript/ghostscript/work/ghostscript-9.10/config.log
Error: org.macports.configure for port ghostscript returned: configure failure: command execution failed
Please see the log file for port ghostscript for details:
/opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_print_ghostscript/ghostscript/main.log
Now going into the config.log, I get this:
#define PACKAGE_NAME ""
#define PACKAGE_TARNAME ""
#define PACKAGE_VERSION ""
#define PACKAGE_STRING ""
#define PACKAGE_BUGREPORT ""
#define PACKAGE_URL ""
configure: exit 77
What does this mean and how can it be fixed?
| |
doc_23537966
|
I have the impression that it should not be a problem, but I'm not finding a lot of examples/caveats/best practices regarding issues such as
*
*Custom validation functions that are automatically called on save() to evaluate if field contents are valid;
*Automatic generation of the identifier on save(), based on the hash of the contents of a field;
I think I need to override the save() method, so that I can call my custom logic, but the lack of examples leads me to believe that that may be a wrong approach...
Any examples, or references to high-quality codebases using mongoEngine, are welcome.
A: You could also override the validate method on Document, but you'd need to swallow the superclass Document errors so you can add your errors to them
This unfortunately relies on the internal implementation details in MongoEngine, so who knows if it will break in the future.
class MyDoc(Document):
def validate(self):
errors = {}
try:
super(MyDoc, self).validate()
except ValidationError as e:
errors = e.errors
# Your custom validation here...
# Unfortunately this might swallow any other errors on 'myfield'
if self.something_is_wrong():
errors['myfield'] = ValidationError("this field is wrong!", field_name='myfield')
if errors:
raise ValidationError('ValidationError', errors=errors)
Also, there is proper signal support now in MongoEngine for handling other kinds of hooks (such as the identifier generation you mentioned in the question).
http://mongoengine.readthedocs.io/en/latest/guide/signals.html
A: Custom validation should now be done by implementing the clean() method on a model.
class Essay(Document):
status = StringField(choices=('Published', 'Draft'), required=True)
pub_date = DateTimeField()
def clean(self):
"""
Ensures that only published essays have a `pub_date` and
automatically sets the pub_date if published and not set.
"""
if self.status == 'Draft' and self.pub_date is not None:
msg = 'Draft entries should not have a publication date.'
raise ValidationError(msg)
# Set the pub_date for published items if not set.
if self.status == 'Published' and self.pub_date is None:
self.pub_date = datetime.now()
Edit: That said, you have to be careful using clean() as it is called from validate() prior to validating the model based on the rules set in your model definition.
A: You can override save(), with the usual caveat that you must call the parent class's method.
If you find that you want to add validation hooks to all your models, you might consider creating a custom child class of Document something like:
class MyDocument(mongoengine.Document):
def save(self, *args, **kwargs):
for hook in self._pre_save_hooks:
# the callable can raise an exception if
# it determines that it is inappropriate
# to save this instance; or it can modify
# the instance before it is saved
hook(self):
super(MyDocument, self).save(*args, **kwargs)
You can then define hooks for a given model class in a fairly natural way:
class SomeModel(MyDocument):
# fields...
_pre_save_hooks = [
some_callable,
another_callable
]
| |
doc_23537967
|
Here's the code:
<form class="row form-inline">
<div class="form-group col-xs-6">
<div class="input-group">
<input name="search" type="text" class="form-control input-sm">
<span class="input-group-btn">
<button class="btn btn-default btn-sm" type="submit">
<span class="glyphicon glyphicon-search"></span> Search
</button>
</span>
</div>
</div>
<div class="form-group col-xs-6">
<a class="btn btn-default btn-sm pull-right" href="#">
<span class="glyphicon glyphicon-user"></span> Create
</a>
</div>
</form>
The problem is that the width of the text input is too small at the 'lg' and 'md' browser sizes, and only increases to something that looks OK when the browser is at smaller sizes. What's the best way to increase the width of the search input at larger browser sizes?
A: Why don't you set the width on the input using Bootstrap classes? e.g.
<input name="search" type="text" class="form-control col-lg-10 col-md-8 col-sm-12 col-xs-12">
A: Try this as well
<form class="row form-inline">
<div class="form-group col-md-8">
<div class="input-group">
<input name="search" type="text" class="form-control col-md-6">
<span class="input-group-btn col-md-2">
<button class="btn btn-default" type="submit">
<span class="glyphicon glyphicon-search"></span> Search
</button>
</span>
</div>
</div>
<div class="form-group col-md-4">
<a class="btn btn-default pull-right" href="#">
<span class="glyphicon glyphicon-user"></span> Create
</a>
</div>
</form>
A: As you can see in the Bootstrap docs, form-inline requires setting a width for input components:
http://getbootstrap.com/css/#forms-inline
To solve that problem I have custom classes for that:
.input-width-X {width: X}
.input-width-Y {width: Y}
...
and use media queries for these classes so that they only apply when you are not in XS mode, because Bootstrap sets a 100% width there.
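A sketch of one such class with its media query - the class name, width, and 768px breakpoint are placeholders to adapt:

```css
/* Only widen the input above Bootstrap's XS range,
   where form-inline stops stacking the controls */
@media (min-width: 768px) {
  .input-width-search {
    width: 300px;
  }
}
```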
Sorry about my english :)
| |
doc_23537968
|
I'm using very simple code:
private static void SetAppUniqueId()
{
string guid;
var appSettings = IsolatedStorageSettings.ApplicationSettings;
if (appSettings.Contains("GUID"))
{
guid = appSettings["GUID"].ToString();
}
else
{
guid = Guid.NewGuid().ToString("N");
appSettings["GUID"] = guid;
appSettings.Save();
}
App.UniqueId = guid;
}
And when it is first run, it creates a new GUID. Then if I do not shut the emulator down, but simply stop and restart my project, the GUID is still in app settings.
But, if I shut the emulator down, then restart my project, the GUID is re-created again.
Am I doing anything wrong, or is this expected behavior?
A: It's normal behaviour, because each time you restart the emulator you create a new instance of it, so its isolated storage starts empty!
| |
doc_23537969
|
I setup the project using Gradle in order to use the dependencies.
My project hierarchy is as follows:
.gradle
build
gradle
src
-main
-java
-Main.java
-MyAmazingBot.java
build.gradle
gradlew
gradlew.bat
This is the guide I used to set up Gradle. I used the Gradle Wrapper to get my build running.
However, I get the following warning:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/C:/Users/addis/.gradle/caches/modules-2/files-2.1/com.google.inject/guice/4.1.0/eeb69005da379a10071aa4948c48d89250febb07/guice-4.1.0.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Yet the bot runs fine (it echoes my messages back).
1) Should this message be a cause for concern?
2) Is it possible to run the jar file using java -jar? I get a message no main manifest attribute, in .\build\libs\fsc2.jar
3) Is it possible to run ./gradlew run without using Gradle's wrapper?
A: This is apparently due to an incompatibility between Guice and Java 9. See issue link below.
There is no fix just yet. However
*
*this is just a warning, and
*there is a workaround in the issue comments to turn off all of these illegal access warnings.
Issue link:
*
*https://github.com/google/guice/issues/1133
I don't think Gradle is actually at fault here. It seems that the problem is in Telegram / Guice / Cglib.
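On question 2, the no main manifest attribute error means the jar was built without a Main-Class manifest entry. A minimal build.gradle fragment to add one could look like this (assuming your entry point is the Main class from src/main/java; adjust the name if it lives in a package):

```groovy
jar {
    manifest {
        // hypothetical entry point - use your fully-qualified main class
        attributes 'Main-Class': 'Main'
    }
}
```

After re-running ./gradlew jar, java -jar build/libs/fsc2.jar should find the entry point (dependencies still need to be on the classpath or bundled).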
| |
doc_23537970
|
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import org.springframework.stereotype.Component;
@ManagedBean
@SessionScoped
@Component
public class EpgBean {...}
The problem is that the session is shared between users! If a user does some stuff and another user from another computer connects, he sees the SessionScoped data of the other user.
Is it due to the spring @Component which would force the bean to be a singleton? What is a correct approach to this matter?
A: I solved the problem using the Spring scope annotation @Scope("session") instead of JSF's @SessionScoped. I guess that since Spring is configured as the Faces EL resolver, it is the Spring scope that matters, while the JSF scope is ignored.
A: The approach I use is to keep the managed beans inside JSF container, and inject the Spring beans into them via EL on a managed property. See related question.
To do that, set up SpringBeanFacesELResolver in faces-config.xml, so JSF EL can resolve Spring beans:
<application>
...
<el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
...
</application>
After that, you can inject Spring beans in your @ManagedBean annotated beans like this:
@ManagedBean
@ViewScoped
public class SomeMB {
// this will inject Spring bean with id someSpringService
@ManagedProperty("#{someSpringService}")
private SomeSpringService someSpringService;
// getter and setter for managed-property
public void setSomeSpringService(SomeSpringService s){
this.someSpringService = s;
}
public SomeSpringService getSomeSpringService(){
return this.someSpringService;
}
}
There may be better approachs than this, but this is what I've been using lately.
| |
doc_23537971
|
Below is my div:
.statistics .progress-bar {
background-color: black;
border-radius: 10px;
line-height: 20px;
text-align: center;
transition: width 0.6s ease 0s;
width: 0;
}
<div class="statistics" ng-repeat="item in data">
<div class="progress-bar" role="progressbar" aria-valuenow="70"
aria-valuemin="0" aria-valuemax="100" ng-style="{'width' : ( item.statistics + '%' )}">
</div>
</div>
Condition is like below :
If statistics > 100
backgroundcolor=red;
else
backgroundcolor=black;
A: You can do it using a simple expression
ng-style="<condition> ? { <true-value> } : { <false-value> }"
Output
Code:
<!DOCTYPE html>
<html ng-app="myApp">
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.8/angular.min.js"></script>
</head>
<body ng-controller="myCtrl">
<div ng-style="item.statistics > 100 ? { 'background-color':'red', 'width': item.statistics + '%' }: { 'background-color':'yellow', 'width': item.statistics + '%'}">
<h2>$scope.statistics = {{statistics}}</h2>
</div>
<div ng-style="item.statistics2 > 100 ? { 'background-color':'red', 'width': item.statistics2 + '%' } : { 'background-color':'yellow', 'width': item.statistics2 + '%'}">
<h2>$scope.statistics2 = {{statistics2}}</h2>
</div>
<script>
var myApp = angular.module("myApp", []);
myApp.controller("myCtrl", ['$scope', function($scope) {
$scope.item = {};
$scope.item.statistics = 30;
$scope.item.statistics2 = 101;
}]);
</script>
</body>
</html>
A: You can use ng-class to set CSS classes dynamically
.statistics .progress-bar {
background-color: black;
border-radius: 10px;
line-height: 20px;
text-align: center;
transition: width 0.6s ease 0s;
width: 0;
color: black
}
.red {
color: red
}
<div class="progress-bar" role="progressbar"
aria-valuenow="70"
aria-valuemin="0" aria-valuemax="100"
ng-style="{'width' : ( item.statistics + '%' )}"
ng-class="{ red: item.statistics > 100 }"
>
If you don't want to create extra classes, you can use ng-style:
<div class="progress-bar" role="progressbar"
aria-valuenow="70"
aria-valuemin="0" aria-valuemax="100"
ng-style="getItemStyle(item)">
Then in your controller, you have to create the getItemStyle function :
$scope.getItemStyle = function(item) {
// determine the color
var itemColor = item.statistics > 100 ? 'red' : 'black';
// return object containing the css props
return {
'width': item.statistics + '%',
'color': itemColor
};
};
A: You can use ng-style for changing the background property
or ng-class for class manipulation.
Do not forget to use object notation like this
data-ng-class = {'test-class': condition}
| |
doc_23537972
|
My problem is that from 2013 onward, the data for one .jsp page will be different, and the current database table schema needs to be modified, but backwards compatibility for the 2012 and before years needs to be maintained.
Currently (2012 and before), the relevant database table displays two columns, "continuing students" & "new starts" that is displayed by a single .jsp. For 2013 and onward, 4 columns need to be displayed. The original two columns are being split into two subcategories each, undergrad and graduate. So I can't simply add those new columns to the existing table because that would violate third normal form.
What do you think the best way to manage this situation? How do I display the new data while still maintaining backwards compatibility to display the data for older years?
A: Some options:
*
*Introduce the fields and allow for nulls for older data. I think you rejected this idea.
*Create new table structures to store the new data going forward. It's an least an option if you don't want (1). You could easily create a view that queries from both tables and presents a unified set of data. You could also handle this in the UI and call two separate stored procedures depending on the year queried.
*Create a new table with the new attributes and then optionally join back to your original table. This keeps the old table the same and the new table is just an extension of the old data. You would write a stored procedure potentially to take in the year and then return the appropriate data.
One of the things to really consider is that the old data is now inactive. If you aren't writing to it anymore, it's just historical data that can be "archived" mentally. In that case I think it's ok to freeze the schema and the data and let it live by itself.
Also consider if your customers are likely to change the schema yet again. If so, then maybe (1) is the best.
| |
doc_23537973
|
However, in some cases, I want to work on a stream in memory instead. I use open_memstream for this, but seeking to the end pads the buffer with zeros and it ends up being twice as big as it should be.
An example just to demonstrate the effect of the fseek to the end of the stream is below. In the actual code, we also fseek to different parts of the stream, patching and editing bits of it, etc., as the stream is processed. Note also that writing the file at the end to the filesystem is just for demonstration to show the contents of the buffer – otherwise I wouldn't need the memory stream.
#include <stdio.h>
#include <stdlib.h>
#if (defined(BSD) || __APPLE__)
#include "open_memstream.h"
#endif
int main(void) {
FILE *stream;
FILE *outfile;
char *buf;
size_t buf_len;
int i;
stream = open_memstream(&buf, &buf_len);
for(i = 0; i < 1000; i++) {
fprintf(stream, "%d\n", i);
}
fseeko(stream, 0, SEEK_END);
fclose(stream);
outfile = fopen("out.txt", "w");
fwrite(buf, buf_len, 1, outfile);
fclose(outfile);
return 0;
}
I was testing this out on Mac OS X with this implementation of open_memstream and it worked as I expected, but when I run this on Linux the file is twice the size with zeros at the end.
What's the best way to deal with this? I'm not sure if it's reliable to divide the buffer length by two and truncate it.
A: I've just run into the same problem on Linux.
// It seems that SEEK_END does not work with open_memstream()
fseek(stream, 0, SEEK_END);
I've ended up doing this:
off_t o = ftell(stream);
/* do some things with the stream */
fseek(stream, o, SEEK_SET);
| |
doc_23537974
|
I wish to convert a given date from the format mm/dd/yyyy to the format Wyy"weeknumber"
For example, 4/10/2017 would become W1715, since it is week 15 of 2017.
The image below shows the Excel table I am working on. I want to convert the dates in the column "LT Verification - Planned Date" to the week-number format mentioned above, in the column "LT Verification - Planned Week Numbers".
Edit: Because this is part of a larger VBA process, I need it to be in VBA, not a cell formula.
I have written the following code:
Public Sub WeekNumbers()
Dim lastRow As Integer
lastRow = Range("A1:AZ1").Find("*", , , , xlByRows, xlPrevious).Row
Dim myRange As Range
Set myRange = Range("A1:AZ1" & lastRow)
Dim myCell As Range
For Each myCell In myRange
myCell.Offset(0, 1).Value = "W" & Right(Year(myCell.Value), 2) & Application.WorksheetFunction.WeekNum(myCell.Value)
Next myCell
End Sub
This code gives me error at myCell.Offset(0, 1).Value = "W" & Right(Year(myCell.Value), 2) & Application.WorksheetFunction.WeekNum(myCell.Value)
Here I have an Excel workbook that is updated every week. Each time it is updated, it runs a macro to import data from another file, perform the week-number conversion, and create a pivot table.
So, the sheet name changes every week. Also, the column headers may be in different columns in different weeks. Also, the number of rows may also change every week.
So, I need to specify column & row range dynamically based on that weeks data.
And have the week numbers in the column based on the column header rather than the column name (A or B or Z...)
A: This can be achieved easily with a cell formula:
="W" & RIGHT(YEAR(A1),2) & WEEKNUM(A1)
Where A1 can be replaced by the cell containing the date.
In VBA this is equivalent to
With Thisworkbook.Sheets("Sheet1")
.Range("A2").Value = "W" & Right(Year(.Range("A1").Value), 2) & Application.WorksheetFunction.WeekNum(.Range("A1").Value)
End With
Edit:
To fill an entire range, you could loop over the cells, apply the VBA calculation as above.
Dim myRange as Range
Set myRange = Thisworkbook.Sheets("Sheet1").Range("A1:A10")
Dim myCell as Range
For Each myCell in myRange
myCell.Offset(0,1).Value = "W" & Right(Year(myCell.Value), 2) & Application.WorksheetFunction.WeekNum(myCell.Value)
Next myCell
There are many methods for finding the last row in a range, so I'll leave that to you if you don't know your range.
Edit 2: in response to your error edit.
You have used the following line to define your range:
Set myRange = Range("A1:AZ1" & lastRow)
Let's imagine you have lastRow = 20; you now have
myRange.Address = "A1:AZ120"
which is clearly wrong: you shouldn't have the 1 after the AZ. Also, I don't know why you've gone to column AZ; if all of your date data is in column A, you should use
Set myRange = Range("A1:A" & lastRow)
The loop you've implemented uses an offset, so the values in column B are changed to reflect those in column A. You can't then set column C according to column B!
A: In VBA, you can get your string by using the Format function. "\Wyyww" is the format you are looking for, where \ is used to escape the interpretation of the first W character and to take it as a literal.
myCell.Offset(0,1).Value = Format(myCell.Value, "\Wyyww")
More
You have to set up the range for your loop correctly. If your dates are in some column with the header "LT Verification - Planned Date", you can try this:
Dim ws As Worksheet
Set ws = ActiveSheet ' <-- you can change this into something explicit like Sheets(someIndex)...
Dim myCell As Range
Set myCell = ws.Rows(1).Find("LT Verification - Planned Date")
For Each myCell In ws.Range(myCell.Offset(1), ws.Cells(ws.Rows.Count, myCell.Column).End(xlUp))
If IsDate(myCell.value) Then myCell.Offset(, 1).value = Format(myCell.value, "\Wyyww")
Next myCell
A: I don't think you need VBA for this, try this formula:
=RIGHT(YEAR(A1),2)&WEEKNUM(A1)&"W"
Of course, if you insist on VBA, you can always turn Excel Formulas into VBA code. In this case:
Dim rngInput As Range
Dim rngOutput As Range
With Application.WorksheetFunction
rngOutput.Value = .Right(.Year(rngInput.Value), 2) & .Weeknum(rngInput.Value) & "W"
End With
Or you may even set the Formula, and Insert the Value, like this
Dim rngInput As Range
Dim rngOutput As Range
rngOutput.Formula = "=RIGHT(YEAR(" & rngInput.Address(False, False) & "),2)&WEEKNUM(" & rngInput.Address(False, False) & ")&""W"""
rngOutput.Value = rngOutput.Value
| |
doc_23537975
|
The load time is over 30 seconds. However, 25 seconds of this seems to be Adobe Reader doing, well, who knows what? The flow as described by Adobe seems to be.
Here is my self-created log file (in the entries below, the times are MM:SS:milliseconds):
28:07:350 First Initialization

* Triggered by an 'initialize' event tied to the first field element that gets called (determined through trial and error)
* For these four seconds I do some initialization and walk the object hierarchy tree

28:11:597 Form initializations starting

* Done with my initial stuff
* For the next 25 seconds I have zero, I mean no, initialization calls tied to objects in the object hierarchy
* What is Reader doing?

28:36:531 Form validation occurring

* Triggered by the first 'validation' event - so now initialization is over
* Turns out this is real quick

28:36:575 Form initializations complete

* 'Form:ready' -- ready to run
Thanks for any and all ideas!!!
(Btw, I have another, similar form I'm creating at 420KB/1500 lines of Javascript that fully loads in under 5 seconds!)
*A neat trick I haven't seen before. I do a lot of hiding and showing of subforms (e.g., a tab bar, radio button sensitive showing of subforms, etc.) but want to keep the native validation working. Turns out that a field in a hidden subform that is mandatory for validation will still be triggered. So you have to turn off that validation check when you hide a field. More work, but the default validation now works!
| |
doc_23537976
|
A: With version 3.3 of Alfresco, your choice is either CMIS, out-of-the-box web scripts, or custom web scripts.
You should definitely consider upgrading as you are running WAY behind the current release.
A: You could be interested in going through this custom web script implementation
| |
doc_23537977
|
I simplified my code as much as possible. When we start the application there are 200 rows, each with a label (empty by default) and a button. When I click a button, my label changes its value.
So here is the opened activity. As you can see, no label is displayed.
Let's click the button in the first row.
As expected, the label appeared at the correct position. Let's scroll now.
We can see that the label also appeared at the 11th position, the 21st, etc. Any ideas why?
My Adapter:
@NonNull
@Override
public RecyclerView.ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
LayoutInflater layoutInflater = LayoutInflater.from(parent.getContext());
ListLearnItemBinding itemBinding = ListLearnItemBinding.inflate(layoutInflater, parent, false);
return new LearnItemViewHolder(itemBinding);
}
@Override
public void onBindViewHolder(@NonNull RecyclerView.ViewHolder holder, int position) {
final LearnItemViewHolder viewHolder = (LearnItemViewHolder) holder;
// init binding
if(viewHolder.getBinding().getItem2() == null){
TestBinding t = new TestBinding();
t.setRating("");
viewHolder.getBinding().setItem2(t);
}
viewHolder.getBinding().test.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
TestBinding item2 = new TestBinding();
item2.setRating("Label");
viewHolder.bind(item2);
}
});
}
@Override
public int getItemCount() {
return 200;
}
ViewHolder:
public class LearnItemViewHolder extends RecyclerView.ViewHolder {
public ListLearnItemBinding getBinding() {
return binding;
}
private final ListLearnItemBinding binding;
public LearnItemViewHolder(ListLearnItemBinding binding) {
super(binding.getRoot());
this.binding = binding;
}
public void bind(TestBinding testItem) {
binding.setItem2(testItem);
binding.executePendingBindings();
}
}
Binding model
public class TestBinding {
public String getRating() {
return rating;
}
public void setRating(String rating) {
this.rating = rating;
}
private String rating;
}
OnCreate of my view
binding.recyclerView.setLayoutManager(new LinearLayoutManager(getApplicationContext()));
binding.recyclerView.setItemAnimator(new DefaultItemAnimator());
LearnAdapter mAdapter = new LearnAdapter(this, data);
binding.recyclerView.setAdapter(mAdapter);
Layout
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@{String.valueOf(item2.rating)}" />
<Button
android:id="@+id/test"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_marginTop="20dp"
/>
A: This happens because, when you're scrolling, old views are being reused for optimization.
You need something like this :
private List<String> mList;
@Override
public void onBindViewHolder(@NonNull RecyclerView.ViewHolder holder, int position) {
final LearnItemViewHolder viewHolder = (LearnItemViewHolder) holder;
TestBinding t = new TestBinding();
t.setRating(mList.get(position));
viewHolder.bind(t);
viewHolder.getBinding().test.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
TestBinding item2 = new TestBinding();
String label ="Label";
item2.setRating(label);
mList.set(position, label);
viewHolder.bind(item2);
}
});
}
@Override
public int getItemCount() {
return mList.size();
}
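The answer's point, that any per-row state must live in the data model and be rebound on every `onBindViewHolder` call, can be simulated outside Android with plain Java (the 10-holder pool size and all names are illustrative):

```java
import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        String[] model = new String[20];     // per-row state: the fix
        Arrays.fill(model, "");
        model[0] = "Label";                  // "click" on row 0

        String[] holder = new String[10];    // 10 holders recycled over 20 rows
        Arrays.fill(holder, "");

        // Buggy bind (like the original onBindViewHolder guard): only
        // initialize the holder when it looks empty -- the recycled
        // holder's old state leaks into a new row.
        for (int pos = 0; pos < 20; pos++) {
            int h = pos % 10;
            if (holder[h].isEmpty()) holder[h] = model[pos];
        }
        System.out.println("buggy row 10 shows [" + holder[0] + "]");

        // Correct bind: always overwrite the holder from the model.
        Arrays.fill(holder, "");
        for (int pos = 0; pos < 20; pos++) {
            holder[pos % 10] = model[pos];
        }
        System.out.println("fixed row 10 shows [" + holder[0] + "]");
    }
}
```

Row 10 reuses row 0's holder, so with the buggy bind it still shows "Label", while the correct bind shows the empty string stored for row 10.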
| |
doc_23537978
|
I have this DataFrame:
df = pd.DataFrame({"val": [1, 2, 3, 5], "signal": [0, 1, 0, 0]})
df
val signal
0 1 0
1 2 1
2 3 0
3 5 0
Then I do:
df["target"] = np.where(df.signal, df.val + 3, np.nan)
df["target"] = df.target.ffill()
df["hit"] = df.val >= df.target
df
val signal target hit
0 1 0 NaN False
1 2 1 5.0 False
2 3 0 5.0 False
3 5 0 5.0 True
To see if my target has been hit.
Here's my issue: let's say that the starting DataFrame was this:
val signal
0 1 0
1 2 1
2 3 0
3 5 1 # <-- new signal
4 6 0 # <-- new row
If I do the same operations as before I get:
df["target"] = np.where(df.signal, df.val + 3, np.nan)
df["target"] = df.target.ffill()
df["hit"] = df.val >= df.target
df
val signal target hit
0 1 0 NaN False
1 2 1 5.0 False
2 3 0 5.0 False
3 5 1 7.0 False
4 6 0 7.0 False
Now I lost the hit on index 3, as target has been replaced by the second signal.
What I would like is for signal to not create a new target if the previous target has not been hit yet.
Desired output (example 1):
val signal target hit
0 1 0 NaN False
1 2 1 5.0 False
2 3 0 5.0 False
3 5 1 5.0 True
4 6 0 NaN False
Desired output (example 2):
val signal target hit
0 1 0 NaN False
1 2 1 5.0 False
2 3 1 5.0 False
3 5 0 5.0 True
4 6 0 NaN False
Desired output (example 3):
val signal target hit
0 1 1 4.0 False
1 4 0 4.0 True
2 3 0 NaN False
3 4 1 7.0 False
4 7 0 7.0 True
Desired output (example 4):
val signal target hit
0 5 0 NaN False
1 3 1 6.0 False
2 6 1 6.0 True
3 2 1 5.0 False
4 7 0 5.0 True
P.S. Ideally, this needs to be done with vectorization as I'm going to perform this operation for millions of rows.
EDIT: Just so the logic is clearer, here's the "loopy" version of the algorithm:
def loopy_way(vals: list, signals: list) -> list:
active_trgt = None
hits = []
for val, signal in zip(vals, signals):
if active_trgt:
if val >= active_trgt: # Arbitrary logic
hits.append(True)
active_trgt = None
continue
# There's an active target, so ignore signal
hits.append(False)
continue
if signal:
active_trgt = val + 3 # Arbitrary condition
hits.append(False) # Couldn't be otherwise
continue
# No signal and no active target
hits.append(False)
return hits
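For reference, the loop above can be checked directly against the examples; here is a self-contained restatement of the same logic (using `is not None` so a zero target isn't mistaken for "no active target"):

```python
def loopy_way(vals, signals):
    """Reference implementation: at most one active target at a time."""
    active_trgt = None
    hits = []
    for val, signal in zip(vals, signals):
        if active_trgt is not None:
            hit = val >= active_trgt
            hits.append(hit)
            if hit:
                active_trgt = None   # target consumed; signal ignored this row
        elif signal:
            active_trgt = val + 3    # arm a new target
            hits.append(False)
        else:
            hits.append(False)
    return hits

# Desired outputs from examples 1 and 4
print(loopy_way([1, 2, 3, 5, 6], [0, 1, 0, 1, 0]))  # hit at index 3
print(loopy_way([5, 3, 6, 2, 7], [0, 1, 1, 1, 0]))  # hits at indices 2 and 4
```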
A: You can look at both the new target and the previous target at each signal point using the .shift method in pandas.
Tracking both will allow you to signal if either we are over the current or the previous target.
Additionally, you want to track the largest historical value you have seen in the previous signal window. You can enumerate signal windows with df.signal.cumsum() and then group by that window enumeration to get the cummax just per signal window with df.groupby(df.signal_window).val.cummax().shift(1).
As an additional condition for non-monotonic data, you can accept the candidate target in signal rows if it is less than the previous target.
Combining these, you can get your desired output.
I calculate and store these as intermediate columns below to show
how the logic works, but you don't have to store and then drop them in your code.
Note: All of this said, it may not be worth vectorizing this calculation. Using numba or similar you could get a very fast implementation in a loop with more readable/maintainable code and substantial runtime memory savings since you don't have to do all your intermediate calculations for every row at once.
import numpy as np
import pandas as pd
df1 = pd.DataFrame({
"val": [1, 2, 3, 5, 6], "signal": [0, 1, 0, 1, 0],
})
df2 = pd.DataFrame({
"val": [1, 2, 3, 5, 6], "signal": [0, 1, 1, 0, 0],
})
df3 = pd.DataFrame({
"val": [1, 4, 3, 4, 7], "signal": [1, 0, 0, 1, 0],
})
df4 = pd.DataFrame({
"val": [5, 3, 6, 2, 7], "signal": [0, 1, 1, 1, 0],
})
for df in [df1, df2, df3, df4]:
# add candidate target at signal times
df["candidate_target"] = np.where(df.signal, df.val + 3, np.nan)
# track previous target at signal times
df["prev_target"] = np.where(
df.signal,
df.candidate_target.ffill().shift(1),
np.nan
)
# enumerate the signal windows with cumsum
df["signal_window"] = df.signal.cumsum()
# track max value we have seen in previous signal window
df["max_to_date"] = df.groupby(df.signal_window).val.cummax().shift(1)
# for signal rows, actual target is candidate if previous has been exceeded, else previous
df["signal_target"] = np.where(
(df.max_to_date >= df.prev_target) | df.prev_target.isnull() | (df.prev_target > df.candidate_target),
df.candidate_target,
df.prev_target
)
# for non-signal rows, add target only if it has not been hit
df["non_signal_target"] = np.where(
(df.signal == 0) & (df.max_to_date < df.signal_target.ffill()),
df.signal_target.ffill(),
np.nan,
)
# combine signal target and non-signal target rows
df["target"] = df.signal_target.fillna(df.non_signal_target)
# hit is where value exceeds or equal to target
df["hit"] = df.val >= df.target
# drop intermediate calculations
df.drop(["max_to_date", "signal_target", "signal_window", "non_signal_target", "candidate_target", "prev_target"], axis=1, inplace=True)
print(df)
#> val signal target hit
#> 0 1 0 NaN False
#> 1 2 1 5.0 False
#> 2 3 0 5.0 False
#> 3 5 1 5.0 True
#> 4 6 0 NaN False
#> val signal target hit
#> 0 1 0 NaN False
#> 1 2 1 5.0 False
#> 2 3 1 5.0 False
#> 3 5 0 5.0 True
#> 4 6 0 NaN False
#> val signal target hit
#> 0 1 1 4.0 False
#> 1 4 0 4.0 True
#> 2 3 0 NaN False
#> 3 4 1 7.0 False
#> 4 7 0 7.0 True
#> val signal target hit
#> 0 5 0 NaN False
#> 1 3 1 6.0 False
#> 2 6 1 6.0 True
#> 3 2 1 5.0 False
#> 4 7 0 5.0 True
A: I think the difficulty here comes from the fact that the triggers are all in one column.
To make things easier, it's always best to organize all the data needed for a conditional test into one row.
To do this here we have to think about what value we need to test for a hit for each signal.
Here I calculated the 'minimum future value below the current row'. I did this by running the min function as an accumulator from the end to the beginning of the df.val column.
# Example data 1
df = pd.DataFrame({"val": [1, 2, 3, 5, 6], "signal": [0, 1, 0, 0, 0]})
from itertools import accumulate
# Calculate minimum future values
df['mf_val'] = np.fromiter(accumulate(df.val.values[::-1], min), dtype=int)[::-1]
df['hit'] = (df['val'] + 3 >= df['mf_val']).where(df.signal.astype(bool), False)
print(df)
Output in example 1:
val signal mf_val hit
0 1 0 1 False
1 2 1 2 True
2 3 0 3 False
3 5 0 5 False
4 6 0 6 False
Output in example 2:
val signal mf_val hit
0 1 0 1 False
1 2 1 2 True
2 3 1 3 True
3 5 0 5 False
4 6 0 6 False
Output in example 3:
val signal mf_val hit
0 1 1 1 True
1 4 0 3 False
2 3 0 3 False
3 4 1 4 True
4 7 0 7 False
This is not exactly the same as your desired values because it shows all hits and the hits are indicated in the same row as the corresponding signal. But at least it doesn't 'erase' the first hit. If you only want the first hit, use df.hit.tolist().index(True).
UPDATE
I think this does what you want:
# Example data 4
df = pd.DataFrame({"val": [5, 3, 4, 2, 7], "signal": [0, 1, 1, 1, 0]})
df['target'] = np.minimum.accumulate((df.val + 3).where(df.signal.astype(bool), np.inf))
df['hit'] = df.val >= df.target
print(df)
Output in example 4:
val signal target hit
0 5 0 inf False
1 3 1 6.0 False
2 4 1 6.0 False
3 2 1 5.0 False
4 7 0 5.0 True
A: If I understand correctly, this is the logic you want to implement:
def transition(value, signal, prev_target, prev_hit):
"""Calculate target and hit in current time step"""
if prev_hit:
prev_target = np.nan
if signal == 1:
new_target = value + 3
target = new_target if np.isnan(prev_target) else min(prev_target, new_target)
else:
target = prev_target
hit = True if value >= target else False
return target, hit
(PLEASE CONFIRM)
This works on the examples you provided so far (ignoring some values which I think are errors in your examples).
For example:
# Example data 3
df = pd.DataFrame({"val": [1, 4, 3, 4, 7], "signal": [1, 0, 0, 1, 0]})
# Prepare empty columns
df['target'] = None
df['hit'] = False
# Initial assumptions
target, hit = (np.nan, False)
for i, row in df.iterrows():
target, hit = transition(row.val, row.signal, target, hit)
df.loc[i, ['target', 'hit']] = target, hit
print(df)
Produces:
val signal target hit
0 1 1 4 False
1 4 0 4 True
2 3 0 NaN False
3 4 1 7 False
4 7 0 7 True
However, I think this requires a recursive solution due to the fact that a signal 'expires' after a hit. If I'm right, then I don't think this is vectorizable.
| |
doc_23537979
|
http://dl.dropbox.com/u/24708866/labs/jquery-multi-open-accordion/index.html
I want to add this to a Wordpress Site Page.
And want to load only to a particular page, so the jquery-ui-1.8.13.custom.min.js and jQuery.multi-accordion-1.5.3.js will not load to other post or pages.
I do not want to use any plugins is this possible?
A: Hare Krishna,
Do like this:
<?php wp_enqueue_script('jquery-1.3.2.min', 'wp-content/themes/xyz/js/jquery-1.3.2.min.js'); ?>
<?php wp_enqueue_script('custom', 'wp-content/themes/xyz/js/accordion.js'); ?>
<?php get_header(); ?>
<?php get_sidebar(); ?>
or e.g. you can do like this
<?php if( is_page('x')) { ?>
// YOUR SCRIPT STUFF
<?php } ?>
Also check your theme's functions.php file; if these exist, then you're done:
function scripts() {
if ( !is_admin() ) { // this if statement ensures the following code only gets added to your wp site and not the admin pages, because your code has no business in the admin pages unless that's your intention
// jquery
wp_deregister_script('jquery'); // this deregisters the current jquery included in wordpress
wp_register_script('jquery', ("http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"), false); // this registers the replacement jquery
wp_enqueue_script('jquery'); // you can either let wp insert this for you or just delete this and add it directly to your template
// your own script
wp_register_script('yourscript', ( get_bloginfo('template_url') . '/yourscript.js'), false); //first register your custom script
wp_enqueue_script('yourscript'); // then let wp insert it for you or just delete this and add it directly to your template
// just in case your also interested
wp_register_script('yourJqueryScript', ( get_bloginfo('template_url') . '/yourJquery.js'), array('jquery')); // this last part ( array('jquery') ) is added in case your script needs to be included after jquery
wp_enqueue_script('yourJqueryScript'); // then print. it will be added after jquery is added
}
}
add_action( 'wp_print_scripts', 'scripts'); // now just run the function
So finally you can do like this
<?php
function load_index_page(){
wp_enqueue_script('jquery-1.3.2.min', 'wp-content/themes/xyz/js/jquery-1.3.2.min.js');
wp_enqueue_script('easySlider1.5', 'wp-content/themes/xyz/js/accordion.js');
}?>
<?php if (is_home()){
add_action('init', 'load_index_page');
} ?>
<?php wp_head(); ?>
A: 1 - You should call your jQuery scripts in header.php
2 - Then check for the page ID in that same PHP file. Call your functions if the page ID is correct.
3 - Put the correct HTML code in the content of the page
Note: make sure your HTML code is loaded before you call your jQuery function.
Et Voilà !
| |
doc_23537980
|
1)
AIC_BRIDGE_API AIC_ERROR_CODE aic2_set_cb_function (
void (*cb2_start_dsts) (AIC2_DSTS_START_STOP),
void (*cb2_stop_dsts) (AIC2_DSTS_START_STOP),
void (*cb2_dsts_rcvd_ex) (unsigned int, unsigned long *, char *, AIC2_DSTS_STO),
void (*cb2_log) (const char *, int, const char *, int)
);
2)
aic2_set_cb_function (NULL, NULL, cb2_dsts_rcvd_ex, NULL);
The dll runs in many apps with C/C++ and .Net code.
My code in Java is this:
1)
public interface ReadCallbackInt extends Callback {
void invoke(int iNumDv, Pointer pMicDv, String pcARName, AIC2_DSTS_STO.ByValue sto);
}
2)
public void aic2_set_cb_function(StartDSCallbackInt fn1,
StopDSCallbackInt fn2,
ReadCallbackInt fnReadCB,
LogCallbackInt fn4);
3)
TestLib.ReadCallbackInt fnReadCB = new TestLib.ReadCallbackInt() {
long[] IntArray;
@Override
public void invoke(int iNumDv, Pointer pMicDv, String pcARName, AIC2_DSTS_STO.ByValue sto) {
if (sto.bTsNamePres > 0) {
System.out.println("iNumDv: " + iNumDv);
System.out.println("pMicDv: " + pMicDv);
System.out.println("pcARName: " + pcARName);
System.out.println("sto: " + sto.TsName);
if (pMicDv!=null) {
IntArray = new long[iNumDv];
IntArray = pMicDv.getLongArray(0, iNumDv);
if (IntArray != null) {
System.out.println("IntArray: " + IntArray +" First El. " + IntArray[0]);
}
}
}
}
}
...............
...............
4)
TestLib.INSTANCE.aic2_set_cb_function(null, null, fnReadCB, null);
The issue is that in IntArray I get all elements as zero. Can you help me?
5) Original C code:
void cb2_dsts_rcvd_ex (unsigned int iNumDv, unsigned long *piMicDv, char *pcARName, AIC2_DSTS_STO sto)
{
unsigned int i;
AIC2_DV *pdv;
printf ("\nRemote %s: ", pcARName);
if (sto.bTsNamePres)
printf ("Report (%s/%s) received\n", sto.TsName.pcDomainName, sto.TsName.pcName);
else
printf ("Report (unknown) received\n");
for (i = 0; i < iNumDv; i++)
if (piMicDv[i] != AIC_ID_DV_INVALID)
{
pdv = aic2_get_dv_info (piMicDv[i]);
log_data_values_aic2 (pdv, stdout);
if (pdv) aic2_free_dv_info(pdv);
}
}
| |
doc_23537981
|
I Would like to remove the home link so the breadcrumb trail starts with "Shop" as the first link.
Thanks!
A: add_filter('woocommerce_breadcrumb_defaults', function( $defaults ) {
unset($defaults['home']); //removes home link.
return $defaults; //returns rest of links
});
The above code goes in your functions.php.
A: You can also directly pass an empty string to the woocommerce_breadcrumb function.
<?php woocommerce_breadcrumb( array( 'home' => '' ) ); ?>
The function takes a home argument, and if this argument is empty, the home link isn't added; see: https://github.com/woocommerce/woocommerce/blob/master/includes/wc-template-functions.php#L2237
| |
doc_23537982
|
{
private:
int numOfX;
int numOfY;
int numOfZ;
int numOfSpc;
int itemMatrix [numOfZ][numOfY][numOfX];
public:
void build (Space spc, Item item)
{
numOfX = item.getX()/spc.getX(); //number of space requirement for X origin
numOfY = item.getY()/spc.getY(); //number of space requirement for Y origin
numOfZ = item.getZ()/spc.getZ(); //number of space requirement for Z origin
for (int layer=1; layer<=numOfZ; layer++) // stating layers of item through Z origin
{
for (int orgY=1; orgY<=numOfY; orgY++) // stating origin Y of a layer
{
for (int orgX=1; orgX<=numOfX; orgX++) // stating origin X
{
itemMatrix[layer][orgY][orgX]=0;
}
}
}
}
};
Hi, I'm very new to coding in C++. I'm trying to build a 3D item for allocation in a domain. First, I get the "item.get" and "spc.get" values from other classes. When trying to set the units to 0 in itemMatrix, I got an error about the non-static state of the private variables. How would I set the space units of the matrix?
Please correct my codes with proper one
Thanks
A: The problem is here:
int itemMatrix[numOfZ][numOfY][numOfX];
C++ does not allow you to use values of member variables in declaring other members.
The process of creating a 3D matrix from arrays is a lot simpler if you use nested vectors:
std::vector<std::vector<std::vector<int>>> itemMatrix;
Then you can initialize it in the constructor as follows:
Itembuilder(int numOfX, int numOfY, int numOfZ)
: itemMatrix(numOfX, std::vector<std::vector<int>>(numOfY, std::vector<int>(numOfZ))) {
}
Is there any other way to initialize vector instead of constructor?
The vector needs to be initialized in the constructor in order to make the object consistent upon construction. However, it does not mean that you don't have an option to re-assign the vector once the constructor has finished. If you later need to change the matrix, for example, to change its size, you can re-assign the vector:
void changeSize(int numOfX, int numOfY, int numOfZ) {
itemMatrix = std::vector<std::vector<std::vector<int>>>(
numOfX
, std::vector<std::vector<int>>(numOfY, std::vector<int>(numOfZ))
);
}
| |
doc_23537983
|
I have to use a Java 8 Future to build an object in parallel so that the code block is more performant.
The code looks below -
public CustomRequest getCustomRequest(Member member,
Address address, Contact contact){
CustomRequest customRequest = new CustomRequest();
CompletableFuture.runAsync(() -> {
populateAddress(address, customRequest);
populatecontact(contact, customRequest);
populateMemberDetails(member, customRequest);
});
return customRequest;
}
Currently I am getting "No values set inside of customRequest object" (I have set some values inside populateAddress, populatecontact and populateMemberDetails on the customRequest object) as the return of the method call. Do I need to put a wait on the CompletableFuture, or is the use of Futures itself wrong?
A: The problem is that you return the customRequest before it has been populated, because you return the object while it's still being populated in a different thread. If you want the customRequest object to be completely populated before returning it, you need to wait for the CompletableFuture to finish, e.g. by calling CompletableFuture.join() (get() does the same but throws checked exceptions), like this:
public CustomRequest getCustomRequest(Member member,
Address address, Contact contact){
CustomRequest customRequest = new CustomRequest();
CompletableFuture.runAsync(() -> {
populateAddress(address, customRequest);
populatecontact(contact, customRequest);
populateMemberDetails(member, customRequest);
}).join(); // EDIT: wait here for the execution (join() avoids get()'s checked exceptions)
return customRequest;
}
But this use of CompletableFuture doesn't really make much sense (except that the population is done in another thread). It will still be a blocking call and you'll have to wait for the object to be populated.
You could try to use the java 8 Future framework like this:
public CompletableFuture<CustomRequest> getCustomRequest(Member member, Address address, Contact contact){
return CompletableFuture.supplyAsync(() -> {
CustomRequest customRequest = new CustomRequest();
populateAddress(address, customRequest);
populatecontact(contact, customRequest);
populateMemberDetails(member, customRequest);
return customRequest;
});
}
This way you can create method calls like this (only an example):
getCustomRequest(aMember, anAddress, aContact).thenAccept(populatedCustomRequest -> populatedCustomRequest.doSomethingUsefull());
Using e.g. the method thenAccept(Consumer) of the class CompletableFuture. This would cause the method doSomethingUsefull() of the class CustomRequest to be executed on the completely populated CustomRequest object as soon as it is populated.
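For reference, the supplyAsync/thenAccept pattern can be sketched self-contained (a String stands in for the hypothetical CustomRequest; the trailing join() exists only to keep the demo process alive until the pipeline finishes):

```java
import java.util.concurrent.CompletableFuture;

public class Main {
    public static void main(String[] args) {
        // supplyAsync builds the object off the calling thread
        CompletableFuture<String> request = CompletableFuture.supplyAsync(() -> {
            return "populated";   // stand-in for the three populate* calls
        });

        // thenAccept runs once the value is ready, without blocking the caller
        request.thenAccept(r -> System.out.println("Got: " + r)).join();
    }
}
```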
| |
doc_23537984
|
A: If you are working on an emulator like Genymotion, you can try this code to register with parse.com:
Parse.initialize(this, "YOUR API KEY", "YOUR APP KEY");
PushService.subscribe(this, "CHANNELNAME", YOURCLASSNAME.class);
PushService.setDefaultPushCallback(this, YOURCLASSNAME.class);
ParseInstallation.getCurrentInstallation().saveInBackground();
ParseAnalytics.trackAppOpened(getIntent());
I hope it will work.
| |
doc_23537985
|
from ctypes import *
import numpy as np
import matplotlib.pyplot as plt
I am locating the .dll file with:
rsa300 = WinDLL("RSA300API.dll")
The error occurs when executing the search function:
longArray = c_long*10
deviceIDs = longArray()
deviceSerial = c_wchar_p('')
numFound = c_int(0)
serialNum = c_char_p('')
nomenclature = c_char_p('')
header = IQHeader()
rsa300.Search(byref(deviceIDs), byref(deviceSerial), byref(numFound))
if numFound.value == 1:
rsa300.Connect(deviceIDs[0])
else:
print('Unexpected number of instruments found.')
exit()
When running the following error messages appear:
C:\Anaconda2\python.exe C:/Tektronix/RSA_API/lib/x64/trial
<WinDLL 'RSA300API.dll', handle e47b0000 at 3ae4e80>
Traceback (most recent call last):
File "C:/Tektronix/RSA_API/lib/x64/trial", line 44, in <module>
rsa300.Search(byref(deviceIDs), byref(deviceSerial), byref(numFound))
File "C:\Anaconda2\lib\ctypes\__init__.py", line 376, in __getattr__
func = self.__getitem__(name)
File "C:\Anaconda2\lib\ctypes\__init__.py", line 381, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'Search' not found
The issue that I am having is that the 'Search' function is not found. What would be the solution to this problem?
A: Tektronix application engineer here.
The problem here is a mismatch of API versions. Your code is referencing an old version of the API (RSA300API.dll) and the error message is referencing a newer version of the API (RSA_API.dll). Make sure you have installed the most current version of the API and that you reference the correct dll in your code.
Here is a link to download the latest version of the RSA API (as of 11/1/16):
http://www.tek.com/model/rsa306-software
Here is a link to download the API documentation (as of 11/1/16). There is an Excel spreadsheet attached to this document that outlines the differences between old functions and new functions:
http://www.tek.com/spectrum-analyzer/rsa306-manual-6
Function names were changed in the new version for the sake of clarity and consistency. The old version of the API didn't have prefixes for most functions, and it was unclear which functions were grouped together just from reading the function names. The new version of the API applies prefixes to all functions and it is now much easier to tell what functional group a given function is in just by reading its declaration. For example, the old search and connect functions were simply called Search() and Connect(), and the new versions of the functions are called DEVICE_Search() and DEVICE_Connect().
Note: I use cdll.LoadLibrary("RSA_API.dll") to load the dll rather than WinDLL().
DEVICE_Search() has slightly different arguments than Search(). Due to different argument data types, the new DEVICE_Search() function doesn't play as well with ctypes as the old Search() function does, but I've found a method that works (see code below).
Here is the search_connect() function I use at the beginning of my RSA control scripts:
from ctypes import *
import os
"""
################################################################
C:\Tektronix\RSA_API\lib\x64 needs to be added to the
PATH system environment variable
################################################################
"""
os.chdir("C:\\Tektronix\\RSA_API\\lib\\x64")
rsa = cdll.LoadLibrary("RSA_API.dll")
"""#################CLASSES AND FUNCTIONS#################"""
def search_connect():
#search/connect variables
numFound = c_int(0)
intArray = c_int*10
deviceIDs = intArray()
#this is absolutely asinine, but it works
deviceSerial = c_char_p('longer than the longest serial number')
deviceType = c_char_p('longer than the longest device type')
apiVersion = c_char_p('api')
#get API version
rsa.DEVICE_GetAPIVersion(apiVersion)
print('API Version {}'.format(apiVersion.value))
#search
ret = rsa.DEVICE_Search(byref(numFound), deviceIDs,
deviceSerial, deviceType)
if ret != 0:
print('Error in Search: ' + str(ret))
exit()
if numFound.value < 1:
print('No instruments found. Exiting script.')
exit()
elif numFound.value == 1:
print('One device found.')
print('Device type: {}'.format(deviceType.value))
print('Device serial number: {}'.format(deviceSerial.value))
ret = rsa.DEVICE_Connect(deviceIDs[0])
if ret != 0:
print('Error in Connect: ' + str(ret))
exit()
else:
print('Unexpected number of devices found, exiting script.')
exit()
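The 'longer than the longest serial number' trick above works, but ctypes has a cleaner way to pre-allocate a writable byte buffer for an out-parameter: create_string_buffer(). A minimal sketch (the buffer size of 100 is an arbitrary choice for illustration, not something the RSA API mandates):

```python
from ctypes import create_string_buffer

# Allocate a 100-byte, zero-initialized buffer that a DLL can write into,
# e.g. rsa.DEVICE_GetAPIVersion(api_version)
api_version = create_string_buffer(100)

# .value reads up to the first NUL terminator; it is empty before the call
print(api_version.value)  # b''
```

This avoids relying on the length of a placeholder string literal.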
| |
doc_23537986
|
I have a very simple class which has a LocalDateTime variable.
I have created a MySQL table where I want to store the object containing this variable. For the LocalDateTime variable I've tried DateTime and TimeStamp types.
As far as I read, Hibernate 5 is supposed to support java.time.LocalDateTime. As I said, I've tried the timestamp and date types, as well as timestamp and datetime as MySQL column types.
Always the same error.
This is a new project I am starting and I want to start using new Java 8 DateTime.
Here I attach all the classes and configuration files.
This is the Fecha.java, the object I want to map on my Mysql Table.
@Entity
@Table(name = "Fecha", catalog = "qtx590", uniqueConstraints = { @UniqueConstraint(columnNames = { "_id" }) })
public class Fecha implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "_id", nullable = false, unique = true)
private int _id;
@Column(name = "_idEmpresa", nullable = false)
private int _idEmpresa;
@Column(name = "_idTurno", nullable = false)
private int _idTurno;
@Column(name = "Momento", nullable = false)
@Temporal(TemporalType.TIMESTAMP)
private LocalDateTime momento;
public Fecha() {
this._id = 99999;
this._idEmpresa = 99999;
this._idTurno = 99999;
this.momento = LocalDateTime.now();
}
public Fecha(int _id, int _idEmpresa, int idTurno, LocalDateTime momento) {
this._id = _id;
this._idEmpresa = _idEmpresa;
this._idTurno = idTurno;
this.momento = momento;
}
public String getMomentoString() {
DateTimeFormatter format = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
return this.momento.format(format);
}
public int get_id() {
return _id;
}
public void set_id(int _id) {
this._id = _id;
}
public int get_idEmpresa() {
return _idEmpresa;
}
public void set_idEmpresa(int _idEmpresa) {
this._idEmpresa = _idEmpresa;
}
public int get_idTurno() {
return _idTurno;
}
public void set_idTurno(int idTurno) {
this._idTurno = idTurno;
}
public LocalDateTime getmomento() {
return momento;
}
public void setmomento(LocalDateTime momento) {
this.momento = momento;
}
}
This is the HibernateConnectorClass
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
public class HibernateConnector {
private static HibernateConnector me;
private Configuration cfg;
private SessionFactory sessionFactory;
private HibernateConnector() throws HibernateException {
cfg = new Configuration();
sessionFactory = cfg.configure().buildSessionFactory();
}
public static synchronized HibernateConnector getInstance() throws HibernateException {
if (me == null) {
me = new HibernateConnector();
}
return me;
}
public Session getSession() throws HibernateException {
Session session = sessionFactory.openSession();
if (!session.isConnected()) {
this.reconnect();
}
return session;
}
private void reconnect() throws HibernateException {
this.sessionFactory = cfg.buildSessionFactory();
}
}
This is the test class to test it:
public class Prueba {
public static void main(String[] args) {
// TODO Auto-generated method stub
FechaDAO fechaDAO = new FechaDAO();
Fecha f = new Fecha();
System.out.println(f.getmomento());
fechaDAO.insertar(f);
System.out.println("FIN");
}
}
The hibernate.cfg.xml is working fine (I know because I tested it with another entity/table).
This is the specific for Fecha Object
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
<!-- Generated 31-ago-2015 12:58:20 by Hibernate Tools 3.4.0.CR1 -->
<hibernate-mapping>
<class name="Fecha" table="Fecha">
<id name="_id" type="java.lang.Integer" access="field">
<column name="_id" />
<generator class="increment" />
</id>
<property name="_idEmpresa" type="java.lang.Integer" access="field">
<column name="_idEmpresa" />
</property>
<property name="_idTurno" type="java.lang.Integer">
<column name="_idTurno" />
</property>
<property name="momento" type="java.time.LocalDateTime">
<column name="Momento" />
</property>
</class>
</hibernate-mapping>
And this the error:
2015-09-02T13:47:16.719
sep 02, 2015 1:47:16 PM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {5.0.0.Final}
sep 02, 2015 1:47:16 PM org.hibernate.cfg.Environment <clinit>
INFO: HHH000206: hibernate.properties not found
sep 02, 2015 1:47:16 PM org.hibernate.cfg.Environment buildBytecodeProvider
INFO: HHH000021: Bytecode provider name : javassist
sep 02, 2015 1:47:16 PM org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {5.0.0.Final}
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
WARN: HHH000402: Using Hibernate built-in connection pool (not for production use!)
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH000401: using driver [com.mysql.jdbc.Driver] at URL [jdbc:mysql://qtx590.li-bra.es:3306/qtx590]
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH000046: Connection properties: {user=qtx590, password=****}
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH000006: Autocommit mode: false
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
INFO: HHH000115: Hibernate connection pool size: 20 (min=1)
sep 02, 2015 1:47:17 PM org.hibernate.dialect.Dialect <init>
INFO: HHH000400: Using dialect: org.hibernate.dialect.MySQLDialect
sep 02, 2015 1:47:17 PM org.hibernate.engine.jdbc.env.internal.LobCreatorBuilderImpl useContextualLobCreation
INFO: HHH000423: Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4
Hibernate: select max(_id) from Fecha
Hibernate: insert into Fecha (_idEmpresa, _idTurno, Momento, _id) values (?, ?, ?, ?)
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
WARN: SQL Error: 0, SQLState: 22001
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
ERROR: Data truncation: Incorrect datetime value: '’' for column 'Momento' at row 1
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.batch.internal.AbstractBatchImpl release
INFO: HHH000010: On release of batch it still contained JDBC statements
sep 02, 2015 1:47:18 PM org.hibernate.internal.SessionImpl$5 mapManagedFlushFailure
ERROR: HHH000346: Error during managed flush [could not execute statement]
org.hibernate.exception.DataException: could not execute statement
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:52)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:207)
at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:45)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2823)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3323)
at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:89)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:447)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:333)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:335)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1224)
at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:464)
at org.hibernate.internal.SessionImpl.flushBeforeTransactionCompletion(SessionImpl.java:2890)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:2266)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.beforeTransactionCompletion(JdbcCoordinatorImpl.java:485)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.beforeCompletionCallback(JdbcResourceLocalTransactionCoordinatorImpl.java:146)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.access$100(JdbcResourceLocalTransactionCoordinatorImpl.java:38)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:230)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:65)
at FechaDAO.insertar(FechaDAO.java:26)
at Prueba.main(Prueba.java:16)
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect datetime value: '’' for column 'Momento' at row 1
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2983)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)
at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1604)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1519)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1504)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:204)
... 19 more
Hibernate: insert into Fecha (_idEmpresa, _idTurno, Momento, _id) values (?, ?, ?, ?)
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
WARN: SQL Error: 0, SQLState: 22001
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
ERROR: Data truncation: Incorrect datetime value: '’' for column 'Momento' at row 1
sep 02, 2015 1:47:18 PM org.hibernate.engine.jdbc.batch.internal.AbstractBatchImpl release
INFO: HHH000010: On release of batch it still contained JDBC statements
Exception in thread "main" org.hibernate.exception.DataException: could not execute statement
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:52)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:207)
at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:45)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2823)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3323)
at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:89)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:447)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:333)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:335)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:39)
at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1224)
at FechaDAO.insertar(FechaDAO.java:32)
at Prueba.main(Prueba.java:16)
Caused by: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect datetime value: '’' for column 'Momento' at row 1
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2983)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)
at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1604)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1519)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1504)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:204)
... 11 more
A: IIRC, for Hibernate 5 to map java.time.* types, no @Temporal annotation is needed (or accepted). Hibernate 5 has sufficient information to infer the mapping from the type of the property. This is stated in their documentation.
A: Java 8 date/time types are supported in Hibernate 5, but this is not portable to other JPA implementations. Add the dependency below and no further configuration is needed:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-java8</artifactId>
<version>5.1.0.Final</version>
</dependency>
For further reading : Hibernate 5: How to persist LocalDateTime & Co with Hibernate
| |
doc_23537987
|
so a list with a single list item will match but empty lists or lists with more than one list item will not.
Is this possible?
A: $('li:only-child').parent();
?
Here is a demo
A: $("ul").filter(function(){return $(this).children().length == 1; })
.addClass("someClass");
here is the fiddle http://jsfiddle.net/JT5N2/
A: Try this
$("ul").filter(function(){ $(this).children().length == 1; });
| |
doc_23537988
|
Here is my code for converting into CSV:
...auditlogs.reduce((rows, data) => {
const newRows = []
const newRow = {}
console.log('data: ', data);
console.log('data._id: ', data._id);
console.log('data.actor: ', data.actor);
newRow[(null, 'Actor')] = data.actor;
newRow[(null, 'Date')] = data.date;
newRow[(null, 'Action')] = data.action;
newRow[(null, 'Data')] = data.object;
newRow[(null, 'Description')] = data.description;
newRows.push(Object.assign({}, rowProto, newRow))
const newRowsNoHeaders = newRows.map(row => Object.values(row))
return [...rows, ...newRowsNoHeaders]
}
result of console.log('data: ', data) is
data = { _id: 5ae01fa9dc9e47001a92abd4,
actor: 'Toan',
date: 2018-04-25T06:26:49.057Z,
origin: '',
action: 'Add',
label: '',
object:
'{ _id: 5ae01fa9dc9e47001a92abd3,\n name: \'Welcome\', __v: 0 }',
description: '',
__v: 0 }
But when I try to get the value it shows undefined:
data.actor = undefined
Only data._id gives the correct value.
This is a screenshot
Can someone tell me a solution?
A: Btw
JSON.parse(JSON.stringify(log))
works for me.
| |
doc_23537989
|
That way I am intercepting the Opengl32, glu32, and glut32 libraries. The intercepted APIs then call the respective APIs from the system folder, i.e. the usual DLLs.
These DLLs have all the APIs defined in their respective libraries.
The problem is that wglMakeCurrent returns 0 (fails) with a GLUT demo app like abgr.exe on an NVIDIA GTX 480 under Windows 7 x64. It doesn't work with other demo apps either.
It does work if I change the compatibility settings of abgr.exe or of the GLIntercept executable which invokes DetourCreateProcessWithDll, i.e. right-click on the exe -> Properties -> Compatibility -> check "Run in 256 colors".
However, this looks like a workaround, and the display doesn't look all that good. How can I avoid having to do that and still get a good value from wglMakeCurrent?
wglMakeCurrent is called from within glutCreateWindow.
This is not a problem on the AMD Fusion GPU; there wglMakeCurrent returns a good value. The problem is only on the NVIDIA GTX 480. Does anyone know how to fix this? I have updated the driver; it shows version 4.3.790.0 in the NVIDIA Control Panel Help->About.
wglCreateContext returns 0x00010000
| |
doc_23537990
|
A: That's a feature of the operating system which you don't really have much control over.
| |
doc_23537991
|
while(true)
{
cout << "Enter a character: ";
cin.ignore(3, '\n');
ch = cin.get(); // ch is char type
cout << "char: ch: " << ch << endl;
}
Actually, cin.ignore(3, '\n') ignores the first three characters and then gets the next immediate character. Up to that point it's fine. Since I kept this in a while loop, I wanted to check the behavior of ignore() and get(). This is the output I observed:
Enter a character: abcd
char: ch: d
Enter a character: efgh
char: ch: e
Enter a character: ijkl
char: ch: i
Enter a character: mnopq
char: ch: m
Enter a character: char: ch: q
Enter a character:
Just to check the buffering, I intentionally entered 4 characters instead of 1. The first case is fine and I got the expected character. From the second iteration on, ignore doesn't seem to work. When I entered 5 characters, I didn't get the expected behavior either.
Need explanation on this. :)
A: According to documentation of std::cin.ignore(streamsize n = 1, int delim = EOF):
Extracts characters from the input sequence and discards them, until either n characters have been extracted, or one compares equal to delim.
http://www.cplusplus.com/reference/istream/istream/ignore/
You are putting abcd\n onto stdin. Your first ignore(3,'\n') removes abc and your get() fetches d. \n remains in the buffer.
Then you add efgh\n to the buffer which now contains \nefgh\n. Your next ignore() reads either 3 characters or a newline, whatever comes first. Since your newline is first in the buffer, only the newline is ignored.
You probably want to empty the stdin buffer before asking for more input. You can achieve this either by modifying your get() call, or by adding a second ignore() call before asking for more input.
A: cin.ignore(3, '\n') ignores up to three characters, stopping after it finds the end of a line (i.e. a \n character).
After the first line of input, the buffer will contain 5 characters, abcd\n. So ignore ignores abc, and get gets d, leaving \n.
After the second line, it contains \nefgh\n. So ignore just ignores the end-of-line character, and get returns e.
If you want to discard the rest of line after extracting the character, then use ignore again:
cin.ignore(numeric_limits<streamsize>::max(), '\n');
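The buffer mechanics described in both answers can be sketched with a small Python simulation of the stream (an illustrative model only, not the C++ API: ignore() discards up to n characters, stopping after the delimiter):

```python
import io

def ignore(stream, n, delim='\n'):
    # Discard up to n characters, stopping after the delimiter,
    # mirroring std::istream::ignore(n, delim).
    for _ in range(n):
        c = stream.read(1)
        if c == '' or c == delim:
            break

buf = io.StringIO("abcd\nefgh\n")  # what the two input lines leave in stdin
ignore(buf, 3)        # discards 'abc'
first = buf.read(1)   # 'd'; the '\n' is still buffered
ignore(buf, 3)        # hits the leftover '\n' first and stops there
second = buf.read(1)  # 'e', not 'f'
print(first, second)  # d e
```

This reproduces exactly the d-then-e sequence seen in the question's output.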
| |
doc_23537992
|
int main() {
double d1 = 10000000000.0;
const double d2 = 10000000000.0;
cout << static_cast<int>(d1) << endl;
cout << static_cast<int>(d2) << endl;
cout << static_cast<int>(10000000000.0) << endl;
}
The output is:
-2147483648
2147483647
2147483647
This surprised me greatly. Why would a positive double sometimes get cast to a negative int?
I'm using g++: GCC version 4.4.3 (Ubuntu 4.4.3-4ubuntu5).
A: From the C standard (1999):
6.3.1.4 Real floating and integer
1 When a finite value of real floating type is converted to an integer type other than _Bool,
the fractional part is discarded (i.e., the value is truncated toward zero). If the value of
the integral part cannot be represented by the integer type, the behavior is undefined.
From the C++ standard (2003):
4.9 Floating-integral conversions [conv.fpint]
1 An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates;
that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be
represented in the destination type. [Note: If the destination type is bool, see 4.12. ]
Most likely your double is too big to be converted correctly to int.
A: Casting a double to an int when int isn't big enough to hold the value yields undefined behaviour.
[n3290: 4.9/1]: A prvalue of a floating point type can be converted
to a prvalue of an integer type. The conversion truncates; that is,
the fractional part is discarded. The behavior is undefined if the
truncated value cannot be represented in the destination type.
This behaviour is derived from C:
[C99: 6.3.1.4/1]: When a finite value of real floating type is
converted to an integer type other than _Bool, the fractional part is
discarded (i.e., the value is truncated toward zero). If the value of
the integral part cannot be represented by the integer type, the
behavior is undefined.
For you, int clearly isn't big enough.
*
*And, in the first case, for you this just so happens to result in the sign bit being set.
*In the second and third cases, again for you, it's probably optimisations that happen to result in different behaviour.
But don't rely on either (or, indeed, any) behaviour in this code.
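Since the behavior is undefined, portable code should check the range before converting. A minimal sketch of such a guard, written in Python for illustration (the constants mirror a 32-bit int; the function name is invented, not part of either standard):

```python
# Bounds of a 32-bit signed int, as on the questioner's platform.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_int32(x):
    truncated = int(x)  # truncates toward zero, like the C++ conversion
    if not (INT32_MIN <= truncated <= INT32_MAX):
        raise OverflowError("%r does not fit in a 32-bit int" % x)
    return truncated
```

With this guard, 10000000000.0 raises an error instead of silently producing an arbitrary value.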
| |
doc_23537993
|
Now I want to start doing analytics with Apache Hive on the Twitter data. On the web I found the following example from Cloudera:
https://github.com/cloudera/cdh-twitter-example
But now, by creating the table, hive returns the following error message:
java.net.URISyntaxException: Relative path in absolute URI: text:STRING, Query returned non-zero code: 1,
cause: java.net.URISyntaxException: Relative path in absolute URI: text:STRING,
On the web I didn't find anything about this (only about starting Hive); maybe someone here can help me!
Thanks!
A: Okay, I solved the first problem myself: I forgot a semicolon at the end of the command. Sorry for that.
But now I get another error message after starting jobs over Hive. All query jobs on Hive abort after a few seconds. In the log I found only this:
2015-03-25 14:47:40,680 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Container complete event for unknown container id container_1427105751169_0006_01_000030
Any Ideas here?
| |
doc_23537994
|
<sessionState mode="InProc" cookieless="UseUri" />
That way each tab generates a new unique session ID in the URL with the format like this :
http://www.domain.com/(S(kbusd155dhzflbur53vafs45))/default.aspx
It worked, but when I copy the URL and paste it into another tab, the previous session value is inherited. How can I solve this issue? Is there any other method to solve it?
A: If the user pastes a URL containing an existing session token into a new tab, your application cannot possibly know that this is a new tab and not an existing tab. I'm afraid that short of some hacky browser plugin there isn't much you can do about this.
A: A possible solution to this situation would be to issue a ticket (a GUID or something like that) in each response you write to the client. The client would send this ticket with each request and the server would 1) check that it is valid and 2) invalidate it so that only one request (the original one) can be made with it. This way your user wouldn't be able to take advantage of new tabs or even copy/paste of URLs.
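The one-time-ticket idea can be sketched as follows (illustrative only; the names issue_ticket/redeem are invented, and a real server would need per-user, thread-safe storage rather than a module-level set):

```python
import uuid

# In-memory ticket store; a real app would keep this in server-side session state.
_issued = set()

def issue_ticket():
    """Generate a ticket and remember it as valid for exactly one use."""
    ticket = uuid.uuid4().hex
    _issued.add(ticket)
    return ticket

def redeem(ticket):
    """Return True and invalidate the ticket on first use; False afterwards."""
    if ticket in _issued:
        _issued.discard(ticket)
        return True
    return False
```

A pasted URL would carry an already-redeemed ticket, so the copied tab's request is rejected.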
| |
doc_23537995
|
But how do I record the details of the exception in a log file on my server?
Does kestrel log exceptions and errors anywhere by default or do I have to do this manually?
Are there any examples or documentation available?
A: Yes, ASP.NET Core has built-in logging, but it does not provide the ability to log to a file directly. Instead, you need to use third-party libraries.
The good news is that the three most common logging libraries are already available for .NET Core as NuGet packages: NLog, Serilog and log4net.
This article has a good overview over them and provides how-to-use examples: ASP.NET Core Logging Tutorial – What Still Works and What Changed?.
Then I suggest looking into the samples in the aspnet/Diagnostics repo. They show how to use custom or built-in error middleware and ExceptionHandler.
A: There's a package called Serilog.Extensions.Logging.File that adds file logging with a single line of code (AddFile()):
public void Configure(IApplicationBuilder app,
IHostingEnvironment env,
ILoggerFactory loggerFactory)
{
loggerFactory.AddFile("Logs/myapp-{Date}.txt");
From this article.
| |
doc_23537996
|
here's my code so far in calling the html file:
WebView webview = (WebView) findViewById(R.id.mapView);
MyJavaScriptInterface myJavaScriptInterface= new MyJavaScriptInterface(this);
webview.addJavascriptInterface(myJavaScriptInterface, "AndroidFunction");
webview.getSettings().setJavaScriptEnabled(true);
webview.setWebViewClient(new WebViewClient());
webview.loadUrl("file:///android_asset/index.html");
PC address: C:\Users\User\AndroidStudioProjects\SWEEPx\app\src\main\asset
Here's the output:
any idea?
A: try to move to another folder.
WebView.loadUrl("file:///Android_res/raw/test.HTML");//from raw folder
WebView.loadUrl("file:///Android_asset/test.HTML");//from asset folder
A: Try to setting this for your webview:
public void setAllowFileAccess (boolean allow)
Enables or disables file access within WebView. File access is enabled by default. Note that this enables or disables file system access only. Assets and resources are still accessible using file:///android_asset and file:///android_res.
You may also look at this, hope it help.
| |
doc_23537997
|
Changing grepformat from the default %f:%l:%m to %l:%m removes the filename at the beginning of each line in the location list but without the name it doesn't know to look in the current file so I can't jump to the different functions.
Looking through the errorformat and quickfix documentation doesn't reveal any options for changing the quickfix/location list display pattern, as far as I can tell.
This provides a keybinding for a functional location list but bad formatting:
grepformat=%f:%l:%m
nnoremap <buffer> <leader>l :silent lgrep! function %<CR>:lopen<CR>
This provides a better formatted but a non-functional location list:
grepformat=%l:%m
nnoremap <buffer> <leader>l :silent lgrep! -h function %<CR>:lopen<CR>
Notice the -h grep option suppress the filename in the output
The raw grep output is almost exactly how I want the code formatted:
1:function actigraphyCalculator(dirname)
69: function [checkedFiles, metadata] = readQcData
75: function fileContents = openFile(name, filePaths)
80: function fileContents = qcprocessing(name, fileContents, metadata)
90: function fileContents = removeBadDays(name, fileContents, metadata)
106: function path = createSavePath(filepath)
The only issue is the indenting is inconsistent and the different number lengths cause the messages to not line up perfectly.
The current output for the location list of the same file is:
calcActigraphy/actigraphyCalculator.m|1| function actigraphyCalculator(dirname)
calcActigraphy/actigraphyCalculator.m|69| function [checkedFiles, metadata] = readQcData
calcActigraphy/actigraphyCalculator.m|75| function fileContents = openFile(name, filePaths)
calcActigraphy/actigraphyCalculator.m|80| function fileContents = qcprocessing(name, fileContents, metadata)
calcActigraphy/actigraphyCalculator.m|90| function fileContents = removeBadDays(name, fileContents, metadata)
Notice the lack of indentation at the beginning of the message.
A: You can use :help :syn-conceal to hide the filename from the quickfix list. It's still there physically (so navigation still works), it's just not displayed any longer.
I've found the basic idea in how to format vim quickfix entry; here's a mapping that I use for it (to be put into ~/.vim/ftplugin/qf_conceal.vim):
function! s:ToggleLocation()
if ! v:count && &l:conceallevel != 0
setlocal conceallevel=0
silent! syntax clear qfLocation
else
setlocal concealcursor=nc
silent! syntax clear qfLocation
if v:count == 1
" Hide file paths only.
setlocal conceallevel=1
" XXX: Couldn't find a way to integrate the concealment with the
" existing "qfFileName" definition, and had to replace it. This will
" persist when toggling off; only a new :setf qf will fix that.
syntax match qfLocation /^\%([^/\\|]*[/\\]\)\+/ transparent conceal cchar=‥ nextgroup=qfFileName
syntax clear qfFileName
syntax match qfFileName /[^|]\+/ contained
elseif v:count == 2
" Hide entire filespec.
setlocal conceallevel=2
syntax match qfLocation /^[^|]*/ transparent conceal
else
" Hide filespec and location.
setlocal conceallevel=2
syntax match qfLocation /^[^|]*|[^|]*| / transparent conceal
endif
endif
endfunction
"[N]<LocalLeader>tf Toggle filespec and location to allow focusing on the
" error text.
" [N] = 1: Hide file paths only.
" [N] = 2: Hide entire filespec.
" [N] = 3: Hide filespec and location.
nnoremap <buffer> <silent> <LocalLeader>tf :<C-u>call <SID>ToggleLocation()<CR>
| |
doc_23537998
|
import clubs from "./clubs.js";
class DataSource {
static searchClub(keyword) {
fetch(
`http://www.omdbapi.com/?apikey=dd08fe3c&s=${keyword}`
)
.then(response => {
response.json()
})
.then(responseJson => {
const movies = responseJson.Search;
let cards = '';
movies.forEach(m => cards += showCards(m));
const cardMovie = document.querySelector('.card-movie');
cardMovie.innerHTML = cards;
});
}
}
export default DataSource;
note: data-source.js
A: I think you are not getting any response from the API. You have null in the responseJson variable and you are trying to access Search on undefined; that's why you are getting this error.
Try
console.log(responseJson);
and see whether there is any value. If not, maybe something is wrong with the API endpoint.
| |
doc_23537999
|
line 61 col 25 This function's cyclomatic complexity is too high. (10)
line 101 col 22 This function's cyclomatic complexity is too high. (10)
How could I reduce the cyclomatic complexity in this case? My functions aren't that complex.
first error
remove: function(line, row, type) {
var spreadSelected = (row.spreadSelected && type === 'spread'),
totalSelected = (row.totalSelected && type === 'total'),
moneyLineSelected = (row.moneyLineSelected && type === 'moneyline'),
lineValue;
if (spreadSelected || totalSelected || moneyLineSelected) {
switch (type) {
case 'spread':
lineValue = row.spread.line;
break;
case 'total':
lineValue = row.total.line;
break;
case 'moneyline':
lineValue = row.moneyLineId;
break;
default:
break;
}
AuthFactory.getCustomer().then(function(credentials) {
betSlipSelectionRequest('/betSlip/removeSelection', {
customerId: credentials.customer,
game: row.game,
pair: row.pair,
line: lineValue
});
});
if (spreadSelected) {
row.spreadSelected = false;
}
if (totalSelected) {
row.totalSelected = false;
}
if (moneyLineSelected) {
row.moneyLineSelected = false;
}
}
}...
and then the 2nd error function
add: function(line, row, type) {
var spreadSelected = (row.spreadSelected && type === 'spread'),
totalSelected = (row.totalSelected && type === 'total'),
moneyLineSelected = (row.moneyLineSelected && type === 'moneyline'),
lineValue;
if (!(spreadSelected || totalSelected || moneyLineSelected)) {
switch (type) {
case 'spread':
lineValue = row.spread.line;
break;
case 'total':
lineValue = row.total.line;
break;
case 'moneyline':
lineValue = row.moneyLineId;
break;
default:
break;
}
AuthFactory.getCustomer().then(function(credentials) {
betSlipSelectionRequest('/betSlip/addSelection', {
customerId: credentials.customer,
game: row.game,
pair: row.pair,
line: lineValue
});
});
switch (type) {
case 'spread':
row.spreadSelected = true;
break;
case 'total':
row.totalSelected = true;
break;
case 'moneyline':
row.moneyLineSelected = true;
break;
}
}
}
The weird thing is that this error only happens for me: my co-workers open the same files, run grunt, and get no errors in their terminals.
A: The way to reduce cyclomatic complexity of a function is to split it into several smaller functions and distribute that complexity into easy-to-understand chunks. For instance, you can extract your switch-case statement, resulting in something like this:
remove: function(line, row, type) {
var spreadSelected = (row.spreadSelected && type === 'spread'),
totalSelected = (row.totalSelected && type === 'total'),
moneyLineSelected = (row.moneyLineSelected && type === 'moneyline'),
lineValue;
    if (spreadSelected || totalSelected || moneyLineSelected) {
lineValue = getLineValue(row, type);
}
// ... and so on, in reasonable chunks.
}
function getLineValue(row, type) {
    var lineValue;
    switch (type) {
        case 'spread':
            lineValue = row.spread.line;
            break;
        case 'total':
            lineValue = row.total.line;
            break;
        case 'moneyline':
            lineValue = row.moneyLineId;
            break;
        default:
            break;
    }
    return lineValue;
}
Then you'll find that you can reuse the getLineValue function in your second block as well:
add: function(line, row, type) {
var spreadSelected = (row.spreadSelected && type === 'spread'),
totalSelected = (row.totalSelected && type === 'total'),
moneyLineSelected = (row.moneyLineSelected && type === 'moneyline'),
lineValue;
if (!(spreadSelected || totalSelected || moneyLineSelected)) {
lineValue = getLineValue(row, type);
}
// ... and so on
}
So with this one change, you've moved part of the complexity into a small, easy-to-test helper, and eliminated some of it outright by removing the duplicated switch.
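If the linter's threshold is still a problem, a switch over a fixed set of keys can also be replaced with a plain lookup object, which removes those branches from the complexity count entirely. A sketch, assuming the row shape from the question (spread.line, total.line, moneyLineId):

```javascript
// Map each selection type to a function that extracts its line value.
var lineValueGetters = {
  spread: function (row) { return row.spread.line; },
  total: function (row) { return row.total.line; },
  moneyline: function (row) { return row.moneyLineId; }
};

function getLineValue(row, type) {
  var getter = lineValueGetters[type];
  // Unknown types behave like the switch's default branch: undefined.
  return getter ? getter(row) : undefined;
}
```

The same table-driven trick works for the second switch in add (setting row.spreadSelected / row.totalSelected / row.moneyLineSelected), since it branches on the same type values.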
|