MacOS 12.6 Numbers version 12.1 (7034.0.86) Day function in vlookup function
I am attempting to use the calendar spreadsheet provided as a template with Numbers. In the month section there are dates in the even rows and large cells in the corresponding odd rows for whatever text you might want to enter.
In cell C10 is the number 25 which is the result of a formula C8+DURATION(1) where C8 has 18 derived from a similar formula referencing the week prior.
If I put the formula =C10 in cell D10, the result is as expected "25".
If I use it like this: =VLOOKUP(C10,Table 1::F9:G13,2,) the result is from one row lower than it should be. In other words, the returned value is "Pack", not "Travel". The lookup range (in Table 1) looks like this:
     F    G
9    25   Travel
10   26   Pack
11   27   Pack
12   28   Move
I've tried using the T and String functions to convert C10, but that doesn't work.
I've tried =VLOOKUP(INDIRECT("C10"),Table 1::F9:G13,2,) with the same result.
=VLOOKUP(25,Table 1::F9:G13,2,) returns the correct value, i.e. "Travel".
Can someone explain a way to sort out this formula or change it to work as I expect?
I know the calendar template in Numbers, but it is still difficult to comprehend the structure and other details of your spreadsheet despite the descriptions you have provided. Perhaps one or more screenshots might help. I'm not 100% sure this would help either, but have you tried using exact match (0) as the match type in the VLOOKUP()s in the spreadsheet?
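For reference, the exact-match suggestion above means filling in VLOOKUP's fourth argument, which the formulas in the question leave empty (the trailing comma). A sketch, using the same range:

```
=VLOOKUP(C10, Table 1::F9:G13, 2, 0)
```

With close match (the default), VLOOKUP may return an adjacent row when the lookup value doesn't compare exactly equal; exact match either returns the right row or an error, which makes any underlying value mismatch in C10 visible.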
Geotools vector grid not showing
I am working on a Maps application using GeoTools, but recently my grid layer stopped showing for no apparent reason!
There is a WMS map representing the earth, and a FeatureLayer representing a grid with 10° squares (Geographic WGS84).
Here is my code to initialize my MapContent. For the grid, I tried to set a grid over Australia as in the GeoTools vector grid tutorial:
public void initMap(int w, int h)
{
mapContent = new MapContent();
mapContent.getViewport().setMatchingAspectRatio(true);
mapContent.getViewport().setFixedBoundsOnResize(false);
// WMS
wmsLayer = new WMSLayer(wmsConnection, selectedLayer);
ReferencedEnvelope dataBounds = wmsLayer.getBounds();
CoordinateReferenceSystem dataCrs = dataBounds.getCoordinateReferenceSystem();
mapContent.addLayer(wmsLayer);
mapContent.getViewport().setScreenArea(new Rectangle(w, h));
mapContent.getViewport().setBounds(dataBounds);
mapContent.getViewport().setCoordinateReferenceSystem(dataCrs);
// Grid
ReferencedEnvelope gridBounds = new ReferencedEnvelope(110, 150, -45, -8, DefaultGeographicCRS.WGS84);
FeatureSource<SimpleFeatureType, SimpleFeature> lonLatGrid = Grids.createSquareGrid(gridBounds, 10);
ReprojectingFeatureCollection rfc;
try {
SimpleFeatureCollection sfc = (SimpleFeatureCollection)lonLatGrid.getFeatures();
rfc = new ReprojectingFeatureCollection(sfc, mapContent.getViewport().getCoordinateReferenceSystem());
} catch (IOException e)
{
throw new RuntimeException(e);
}
Style style = SLD.createSimpleStyle(rfc.getSchema());
FeatureLayer grid = new FeatureLayer(rfc, style);
mapContent.addLayer(grid);
}
I am then rendering the map on a JavaFX Canvas with StreamingRenderer. The WMS layer is rendered properly, but the FeatureLayer isn't showing at all.
Here is the log corresponding to GeoTools (the envelope now corresponds to the whole earth and not just Australia):
2023-02-16 14:50:26.739 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : Computed scale denominator: 7.687380703318137E7
2023-02-16 14:50:26.740 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : creating rules for scale denominator - 76,873,807.033
2023-02-16 14:50:26.740 DEBUG 133118 --- [lication Thread] org.geotools.styling : creating defaultMark
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.styling : creating defaultMark
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : CRSConverterFactory can be applied from Strings to CRS only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : InterpolationConverterFactory can be applied from Strings to Interpolation only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : CRSConverterFactory can be applied from Strings to CRS only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : InterpolationConverterFactory can be applied from Strings to Interpolation only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : CRSConverterFactory can be applied from Strings to CRS only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.data.util : InterpolationConverterFactory can be applied from Strings to Interpolation only.
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : Processing 1 stylers for http://www.opengis.net/gml:grid
2023-02-16 14:50:26.741 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : Expanding rendering area by 2 pixels to consider stroke width
2023-02-16 14:50:26.742 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : Querying layer http://www.opengis.net/gml:grid with bbox: ReferencedEnvelope[-139.28905476171872 :<PHONE_NUMBER>453125, -60.03515721679681 :<PHONE_NUMBER>0195317]
2023-02-16 14:50:26.743 DEBUG 133118 --- [lication Thread] org.geotools.styling : creating defaultMark
2023-02-16 14:50:26.765 DEBUG 133118 --- [lication Thread] org.geotools.renderer.label : TOTAL LINE LABELS : 0
2023-02-16 14:50:26.765 DEBUG 133118 --- [lication Thread] org.geotools.renderer.label : PAINTED LINE LABELS : 0
2023-02-16 14:50:26.765 DEBUG 133118 --- [lication Thread] org.geotools.renderer.label : REMAINING LINE LABELS : 0
2023-02-16 14:50:26.765 DEBUG 133118 --- [lication Thread] org.geotools.renderer.lite : Style cache hit ratio: 0.9977777777777778 , hits 449, requests 450
Thank you for your time!
You need to turn up the logging level to developer in the global settings page, and then make the request again. Then [edit] your question with the relevant part of the log file.
Done! Sorry if I included too much or too little information; I struggle to figure out what is relevant.
@IanTurton Do you have any idea how I should interpret those logs? I have no clue where the problem is.
You need to find the log that relates to drawing your grid (which I don't think is in the part you posted). I would also attach a debugger and check if your CRS is correct and that your bounding box has the correct axis order for the CRS.
@IanTurton I just cropped the logs so they only show the grid drawing part. I'll try to verify my CRS, but I think the only projection I use is the default WGS84.
ReferencedEnvelope(110, 150, -45, -8, ...) looks like you are using lon/lat, while I would expect most of GeoTools to be expecting lat/lon - but that can depend on other settings you may have used
@IanTurton the problem is still there when I initialize gridBounds with dataBounds, (that I reproject to DefaultGeographicCRS.WGS84) knowing that I'm sure that dataBounds are valid (I can request a WMS Map from those bounds)
in the logs your request is for ReferencedEnvelope[-139.28905476171872 :<PHONE_NUMBER>453125, -60.03515721679681 :<PHONE_NUMBER>0195317], which does overlap but is quite different
@IanTurton I've done some panning/zooming before taking the logs. Sorry, I didn't mention that!
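One knob worth trying for the axis-order suspicion discussed above is GeoTools' forceXY system property. This is a sketch, not a confirmed fix for this grid problem; the property must be set before any GeoTools referencing class loads, and it makes decoded CRSs assume lon/lat (x=east, y=north) order:

```java
public class ForceXYDemo {
    public static void main(String[] args) {
        // Set this before the first GeoTools referencing class is touched;
        // it has no effect on CRS factories that are already initialised.
        System.setProperty("org.geotools.referencing.forceXY", "true");
        System.out.println(System.getProperty("org.geotools.referencing.forceXY"));
    }
}
```

If the grid appears once axis order is forced, the original envelopes were being interpreted with swapped axes.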
HTML5 check if audio is playing?
What's the javascript api for checking if an html5 audio element is currently playing?
See related: http://stackoverflow.com/questions/6877403/how-to-tell-if-a-video-element-is-currently-playing
function isPlaying(audioEl) {
return !audioEl.paused;
}
The Audio tag has a paused property. If it is not paused, then it's playing.
Since there is no stop() method, this is a clean solution.
Then paused is true and so isPlaying() returns false.
This is the wrong answer. I am working with it right now, and before the first start .paused is false.
@Tom you have a jsbin? I tried this and it seems like the correct answer.
@Harry I was playing around with it more, and it seems that in some browsers it works and in some it doesn't. I guess it depends on the implementation. But for all the latest updates of Chrome and Firefox it seems to work, so this answer is correct for the latest implementations.
You can also use the ended listener so that you do not restart the audio when it finishes playing.
listen for events! addEventListener('playing', func) and addEventListener('pause', func).
According to both the HTML Living Standard and the W3C HTML5 spec, "The paused attribute represents whether the media element is paused or not. The attribute must initially be true". Maybe @Harry and @Tomas found this not to be the case in 2012 because the specs were not clear at that point. BUT ... I couldn't find anything in the specs (not to say it's not there!) that says that the paused attribute must be true when playback ends. Any views?
You can check the duration. It is playing if the duration is more than 0 seconds and it is not paused.
var myAudio = document.getElementById('myAudioID');
if (myAudio.duration > 0 && !myAudio.paused) {
//It's playing...do your job
} else {
//Not playing...maybe paused, stopped or never played.
}
Personally I believe that !myAudio.paused||!myAudio.currentTime would do a better job.
@m93a don't you mean !myAudio.paused || myAudio.currentTime ?
Yes, should be !myAudio.paused || myAudio.currentTime. Never replied back it seems...
@MalcolmOcean ty from 2023 <3
@MalcolmOcean Do you know why a || is better than &&?
heads up—duration can be NaN, so some boolean checks on duration may behave erratically
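The NaN caveat above can be made explicit. A small sketch (isAudiblyPlaying is a made-up name, not from any answer); because it only reads standard media-element properties, it can be exercised against plain objects:

```javascript
// duration is NaN until metadata loads; NaN > 0 happens to be false, but
// guarding explicitly keeps the intent readable and avoids surprises if
// the expression is later refactored.
function isAudiblyPlaying(el) {
  return Number.isFinite(el.duration) &&
         el.duration > 0 &&
         !el.paused &&
         !el.ended;
}
```

Any object with duration, paused, and ended properties can stand in for the element, which makes the helper easy to unit-test without a DOM.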
While I am really late to this thread, I use this implementation to figure out if the sound is playing:
service.currentAudio = new Audio();
var isPlaying = function () {
return service.currentAudio
&& service.currentAudio.currentTime > 0
&& !service.currentAudio.paused
&& !service.currentAudio.ended
&& service.currentAudio.readyState > 2;
}
I think most of the flags on the audio element are obvious apart from the ready state which you can read about here: MDN HTMLMediaElement.readyState.
Doesn't work if audio was paused automatically by iOS when locked.
document.getElementsByTagName('audio')[0].addEventListener('playing', function() { myfunction(); }, false);
Should do the trick.
getElementsByTagName('audio')[0] because it returns collection, not one element
Try this function!
play() will not be executed if the position is at the very beginning or at the end:
function togglePause() {
if (myAudio.paused && myAudio.currentTime > 0 && !myAudio.ended) {
myAudio.play();
} else {
myAudio.pause();
}
}
I wondered if this code would work, and it amazingly did:
if (a.paused == false) {
/*do something*/
}
or you could write it as:
if (!a.paused == true) {
/*do something*/
}
if you want your IDE to annoy you in JSFiddle.
Maybe you could also replace "paused" with "running" and it would still work
Or you can just omit the "== true" part, and your IDE won't bug you.
You can use the onplay event.
var audio = document.querySelector('audio');
audio.onplay = function() { /* do something */};
or
var audio = document.querySelector('audio');
audio.addEventListener('play', function() { /* do something */ });
The second method seems to work best. Firefox, Linux Mint, in footer script.
Try Out This Code:
var myAudio = document.getElementById("audioFile");
function playAudio() {
//audio.play();
//console.log(audio.play())
    if (myAudio.paused && myAudio.currentTime >= 0 && !myAudio.ended) {
myAudio.play();
console.log("started");
} else {
myAudio.pause();
}
}
Hello and welcome to SO! While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. Please read the tour, and How do I write a good answer?
To check if audio has really started playing, especially if you have a stream, you need to check that audio.played.length is 1. It will be 1 only if the audio has really started sounding; otherwise it will be 0. It's more of a hack, but it still works even in mobile browsers, like Safari and Chrome.
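That played.length check can be wrapped the same way (hasEverSounded is an illustrative name). played is a TimeRanges object, and only its length is read here, which also makes the helper testable against plain objects:

```javascript
// played gains a range only once the element has actually produced output,
// so this distinguishes "play() was called" from "sound really started".
function hasEverSounded(el) {
  return !!el.played && el.played.length > 0;
}
```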
I do it this way:
<button id="play">Play Sound</button>
<script>
var sound = new Audio("path_to_sound.mp3")
sound.loop = false;
var play = document.getElementById("play")
play.addEventListener("click", () => {
if (!isPlaying()) {
sound.play()
} else {
//sound.pause()
}
})
function isPlaying() {
var infoPlaying = false
var currentTime = sound.currentTime == 0 ? true : false
var paused = sound.paused ? true : false
var ended = !sound.ended ? true : false
var readyState = sound.readyState == 0 ? true : false
if (currentTime && paused && ended && readyState) {
infoPlaying = true
} else if (!currentTime && !paused && ended && !readyState) {
infoPlaying = true
}
return infoPlaying
}
</script>
Building on the answer by @AllanRibas, in my opinion the function should rather look like this:
let plr = document.querySelector("#player-audio-element");
if(isPlaying(plr) === true) {
// do stuff ...
}
export function isPlaying(plr) {
let atStart = plr.currentTime == 0 ? true : false
if ((atStart && plr.paused) || (plr.ended && plr.readyState == 0))
return false
return true
}
I use this for a play/pause audio button:
var audio = $("audio").get(0);
if (audio.paused || audio.currentTime == 0 || audio.currentTime==audio.duration){
//audio paused,ended or not started
audio.play();
} else {
//audio is playing
audio.pause();
}
I was able to solve the problem using a singleton:
let instance: HTMLAudioElement;
export class SingletoneAudio {
public audioPath: string;
constructor(audioPath: string) {
this.audioPath = audioPath;
if (!instance) {
instance = new Audio(audioPath);
}
}
getInstance() {
return instance;
}
}
const audioInstance = new SingletoneAudio('your-path');
const audio = audioInstance.getInstance();
audio.loop = true;
I don't know why this could be wrong:
var isPlaying = false;
audio = document.getElementById('audioID');
audio.addEventListener('playing',function() { isPlaying = true; },false);
audio.addEventListener('pause', function() { isPlaying = false; },false);
audio.addEventListener('ended', function() { isPlaying = false; },false);
I am using this jQuery code with two buttons, play and stop; the play button is a play/pause toggle:
const help_p = new Audio("audio/help.mp3");//Set Help Audio Name
$('#help_play').click(function() {//pause-Play
if (help_p.paused == false) {
help_p.pause();//pause if playing
} else {
help_p.play();//Play If Pausing
}
});
$('#help_stop').click(function() {//Stop Button
help_p.pause();//pause
help_p.currentTime = 0; //Set Time 0
});
While there is no method called isPlaying or something similar, there are a few ways to accomplish this.
This method gets the % of progress as audio is playing:
function getPercentProg() {
var myVideo = document.getElementById('myVideo');
var endBuf = myVideo.buffered.end(0);
var soFar = parseInt((endBuf / myVideo.duration) * 100);
document.getElementById('loadStatus').innerHTML = soFar + '%';
}
If percent is greater than 0 and less than 100, it is playing, else it is stopped.
Erm, what if I pause the audio in the middle?
Then it would not be playing.
This only detects the amount buffered. If, for instance, the audio element has begun buffering data but has not started playing because it doesn't have enough data, this method can still return true. Or if the element has buffered data as part of playback and subsequently was paused.
getElementsByTagName('myVideo')? I think it should be getElementById('myVideo')
Em... why the downvotes? A great idea, badly written though. How can you check if realtime audio (WebRTC) is playing with .paused? Only with buffer checking. The downvotes are not deserved. I upvote.
Wtf?! Why double-parenthesized parseInt?
I lost the down arrow icon for the "Next Change" in my VSCode toolbar -- how do I get it back?
My git diff window in VSCode used to show an up-arrow icon (Previous Change) and a down-arrow icon (Next Change) in the toolbar.
I somehow recently lost just the down-arrow, and I can't figure out how to get it back. The "Next Change" action now instead shows up only in the "More Actions..." menu (see screenshot).
Any suggestions how to fix this? I have scoured the VSCode settings but have not been able to find anything related.
This screenshot below shows where the arrow used to be:
Can you show the example when they're missing?
Yes it is in the screenshot above. You can see the up-arrow there in the toolbar but not the down-arrow (which used to be immediately to the right of it).
So right - this has been bugging me because I accidentally "removed" the button by dragging it off the toolbar - I searched the settings high and low to no avail. Certainly not intuitive. Well done on finding the solution; I would never have thought of it.
It was weirdly hard to figure this out, but the answer turned out to be embarrassingly simple. Right-clicking on any icon in the toolbar displays a list of icons to show or hide.
The issue is caused by enabling too many options while the default toolbar displays only a limited number of option icons. I don't know how to make it display more icons, but if you temporarily disable some options that you don't need, the remaining enabled icons can be shown, just as @Tomoyuki Aota said below.
In my case, the Next Change button appeared after disabling other buttons.
Firstly, the buttons are like this:
Right-click any icon in the toolbar. Then, the list of available buttons appear like this:
Uncheck Print button. Then, the Next Change button appears.
do.call rbind of data.table depends on location of NA
Consider this
do.call(rbind, list(data.table(x=1, b='x'),data.table(x=1, b=NA)))
returns
x b
1: 1 x
2: 1 NA
but
do.call(rbind, list(data.table(x=1, b=NA),data.table(x=1, b='x')))
returns
x b
1: 1 NA
2: 1 NA
How can I force the first behavior, without reordering the contents of the list?
data.table is really much faster in MapReduce jobs (calling data.table ~10*3MM times across 55 nodes, data.table is many, many times faster than data.frame), so I want this to work ...
Regards
saptarshi
I'm guessing this happens because NA is logical and as.logical('x')=NA, so when rbind decides that that column is logical (based on its first argument), it coerces subsequent items to match. do.call(rbind, list(data.table(x=1, b=as(NA,'character')),data.table(x=1, b='x'))) works...
By the way, there is an "optimized do.call(rbind,...)" for data.tables called rbindlist. There are a few q's about it on this site, e.g., http://stackoverflow.com/questions/15673550/why-is-rbindlist-better-than-rbind/15673654#15673654
@Frank -- Very helpful comments. I've added a reference to rbindlist to my answer.
As noted by Frank, the problem is that there are (somewhat invisibly) several different types of NA. The one produced when you type NA at the command line is of class "logical", but there are also NA_integer_, NA_real_, NA_character_, and NA_complex_.
In your first example, the initial data.table sets the class of column b to "character", and the NA in the second data.table is then coerced to an NA_character_. In the second example, though, the NA in the first data.table sets column b's class to "logical", and, when the same column in the second data.table is coerced to "logical", it's converted to a logical NA. (Try as.logical("x") to see why.)
That's all fairly complicated (to articulate, at least), but there is a reasonably simple solution. Just create a 1-row template data.table, and prepend it to each list of data.table's you want to rbind(). It will establish the class of each column to be what you want, regardless of what data.table's follow it in the list passed to rbind(), and can be trimmed off once everything else is bound together.
library(data.table)
## The two lists of data.tables from the OP
A <- list(data.table(x=1, b='x'),data.table(x=1, b=NA))
B <- list(data.table(x=1, b=NA),data.table(x=1, b='x'))
## A 1-row template, used to set the column types (and then removed)
DT <- data.table(x=numeric(1), b=character(1))
## Test it out
do.call(rbind, c(list(DT), A))[-1,]
# x b
# 1: 1 x
# 2: 1 NA
do.call(rbind, c(list(DT), B))[-1,]
# x b
# 1: 1 NA
# 2: 1 x
## Finally, as _also_ noted by Frank, rbindlist will likely be more efficient
rbindlist(c(list(DT), B))[-1,]
Of course that would presumably slow the rbinding down somewhat in all cases. On the other hand, it might not be too hard to add a second 'colClasses' argument to rbindlist(), allowing users to pass in either a character vector of class names or a list with elements of the desired classes.
Auth::attempt not working in Laravel 5.2
I want to check login with Laravel 5.2 Authentication.
In versions before 5.2 this worked much more easily, but now it is not working...
here is my code...
I created the tables manually, without going through a migration...
The values are inserted like below...
Controller...
public function DoLogin(){
$userdata = [
'email' =>Input::get('email'),
'pwd' =>Input::get('pwd')
];
if(Auth::attempt($userdata)){
return "Success";
}else{
return "Error";
}
}
public function insert(){
$data = [
'email' =>'loremlipsum',
'pwd' =>Hash::make('12345'),
'created_at'=>date('Y-m-d H:i:s'),
'updated_at'=>date('Y-m-d H:i:s')
];
DB::table('users')->insert($data);
}
Every time I get the "Error" message.
id          int(11)                            AUTO_INCREMENT
email       varchar(400)   latin1_swedish_ci
pwd         varchar(400)   latin1_swedish_ci
created_at  timestamp      default CURRENT_TIMESTAMP, ON UPDATE CURRENT_TIMESTAMP
updated_at  timestamp      default 0000-00-00 00:00:00
Form....
<form action="{{URL::to(route('do_login'))}}" method="post">
<input type="text" name="email"><br/><br/><br/>
<input type="password" name="pwd"><br/><br/><br/>
<input type="submit" value="Login">
</form>
Remember one thing: I am using single-table authentication as in 5.1 and below...
Thanks
Could you check if the data inside $userdata is correct?
yes... here is the response.. array(2) { ["email"]=> string(11) "loremlipsum" ["pwd"]=> string(5) "12345" }
Maybe you'll need to rename the pwd field to password.
paste your error or response here.
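The rename suggestion points at the likely root cause: Auth::attempt() compares the stored hash against the credential passed under the key password, so a pwd key is never checked. A hedged sketch of a fix that keeps the pwd column (assuming the default Eloquent User model backs authentication):

```php
// Controller: pass the plain-text password under the 'password' key.
if (Auth::attempt(['email' => Input::get('email'),
                   'password' => Input::get('pwd')])) {
    return "Success";
}
return "Error";

// App\User: tell the auth layer which column holds the bcrypt hash.
public function getAuthPassword()
{
    return $this->pwd;
}
```

Hash::make() in the insert() method is already the right way to store the hash, since attempt() verifies it with Hash::check().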
How are you monitoring in version 0.11 of Kafka?
I registered the issue on GitHub, but I could not get the answer I wanted, so I am asking here.
( https://github.com/Morningstar/kafka-offset-monitor/issues/12 )
If you are running Kafka version 0.11, I am wondering how you are monitoring it.
I don't want to monitor the system part so much as the general parts like producer, consumer, lag, etc. KafkaOffsetMonitor (https://github.com/quantifind/KafkaOffsetMonitor) gives a "Consumers" error.
If you have faced a similar problem, please help me.
Have you looked at https://www.confluent.io/product/control-center/ ?
We're using Grafana to monitor. It's easy to setup and free.
How to solve json/json.h: No such file or directory?
I've this problem:
when I try to compile my program, doing make command, I see the following error:
fatal error: json/json.h: No such file or directory
21 | #include <json/json.h>
I installed the json library through the command sudo apt-get install -y libjson-c-dev and it was installed correctly. After that, I simply compiled the program adding -ljson-c to the makefile (I also tried with -lcjson and other names; none of them works). Here is my makefile:
ifndef MYDIR
MYDIR=../../
endif
TARGET=$(shell basename $(shell ls -1 *.c | head -n 1) .c)
TARGET2=$(shell basename $(shell ls -r -1 *.c | head -n 1) .c)
CFLAGS += -I$(MYDIR)/include
LDFLAGS += -L$(MYDIR)/lib
LIBS += -litsStatic -lpthread -lm -lrt -lconfig -lgps -ljson-c
all: $(TARGET)
$(TARGET): $(TARGET).c
$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS) $(LIBS)
clean:
rm -f $(TARGET)
rm -f $(TARGET)/.o
.PHONY:clean
all: $(TARGET2)
$(TARGET2): $(TARGET2).c
$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS) $(LIBS)
clean:
rm -f $(TARGET2)
rm -f $(TARGET2)/.o
.PHONY:clean
Where is the problem? Thank you for your help!
"and it was installed correctly." Where is json/json.h located in your file system? From where do you run your makefile?
"(I also tried with -lcjson and other names, none of them works)" You confuse linker with compiler. A missing header is a compiler warning. Option -l tells your linker to include a certain library. That is only relevant after compilation was successful which is not the case here. You should rather check whether you need to add another option -Ixy to your CFLAGS
my library is installed in /usr/include/json-c; how can I add it to the makefile?
You add your folder to CFLAGS += -I$(MYDIR)/include. In your case the missing file should be in /usr/include/json-c/json/json.h if you add /usr/include/json-c
Please [edit] your question to add clarification or requested information, don't use comments for this purpose. Is the exact location of the include file /usr/include/json-c/json.h? (see https://packages.debian.org/de/sid/amd64/libjson-c-dev/filelist) Then you could use #include <json-c/json.h>
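Summarising the comments: libjson-c-dev installs the header as /usr/include/json-c/json.h, with no json/ subdirectory, so the include line itself has to change; the -ljson-c flag only matters at link time and cannot fix a missing header. A sketch, assuming that Debian layout:

```c
/* Use the json-c/ prefix, which resolves against the default
 * /usr/include search path; no extra -I flag is needed. */
#include <json-c/json.h>   /* instead of <json/json.h> */
```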
Are email addresses ending with .com different from those ending with .com.[country_code]
I recently realised a change in the email address of someone I have been emailing for ages. Her email address used to be like <EMAIL_ADDRESS>. Now Gmail cannot deliver the email, and the email address should be <EMAIL_ADDRESS>. She said she never changed anything.
So, are these two email addresses different, i.e. can they co-exist? And why would it work before?
The email addresses are different - unless the same company owns both domains and forwards them to the same accounts.
Firestore calls the data Twice
I am trying to load documents and apply them to a RecyclerView, but in the RecyclerView the items are doubled. I have checked other questions as well, but none of them seems related to mine.
Here is the code
private void loadData() {
productList.clear();
firestore.collection("Products").orderBy("order")
.addSnapshotListener(new EventListener<QuerySnapshot>() {
@SuppressLint("NotifyDataSetChanged")
@Override
public void onEvent(@Nullable QuerySnapshot value, @Nullable FirebaseFirestoreException error) {
if(error != null) {
if(progressDialog.isShowing()){
progressDialog.dismiss();
Log.e("Firestore gives server error", error.getMessage());
new Flashbar.Builder(requireActivity()).gravity(Flashbar.Gravity.BOTTOM).duration(500).backgroundColorRes(com.margsapp.iosdialog.R.color.red).messageColorRes(R.color.white).showOverlay().showIcon().message("Something went wrong please try again later.").build();
}
}
assert value != null;
for (DocumentChange documentChange : value.getDocumentChanges()){
if(documentChange.getType() == DocumentChange.Type.ADDED){
productList.add(documentChange.getDocument().toObject(Product.class));
}
}
productAdapter.notifyDataSetChanged();
if(progressDialog.isShowing()){
progressDialog.dismiss();
}
}
        });
}
As I understand it, you already have productList. After receiving a callback in the snapshot listener you are adding the changed items to the product list again; that is the reason it duplicates the items.
You need to replace the available products or get all the products again.
Have you tried to clear the list before adding new objects?
Previous commenters: OP is processing only if(documentChange.getType() == DocumentChange.Type.ADDED){, which only contains the added documents on so-called delta snapshots. I'm not sure where the duplication comes from, but processing only Type.ADDED seems correct.
@FrankvanPuffelen thats why am confused
@AdityaNandardhane All the products again
@AlexMamo Yes I tried not working
@AdityaNandardhane Can you please elaborate more
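Given the discussion above, a commonly suggested fix (a sketch of a fragment for the existing onEvent(), not a verified diagnosis) is to rebuild the list from the full snapshot on every event instead of appending only ADDED changes. That way, a listener that ends up attached twice, for example if loadData() runs in both onCreate() and onResume(), cannot double the rows:

```java
// Replacement for the getDocumentChanges() loop inside onEvent():
productList.clear();                        // drop what the previous event built
for (QueryDocumentSnapshot doc : value) {   // value is the complete result set
    productList.add(doc.toObject(Product.class));
}
productAdapter.notifyDataSetChanged();
```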
TFS express edition - installing on a server or local machine?
This is how we have TFS Express edition set up among 5 developers. We have our lead programmer install TFS 2012 Express edition on his machine. That comes with an install of SQL Server Express 2012. The remaining 4 developers also install TFS Express and SQL Server Express 2012 on their respective machines. From inside Visual Studio on their individual machines, the 4 developers connect to the lead programmer's TFS code path. Is this the right setup? I am thinking, if the lead developer turns off his machine, the code DB also goes down, and hence the other developers can no longer access the source repository? Is that correct? To avoid this happening, do I need to install TFS 2012 Express edition on its own dedicated server box and have all 5 developers connect to it, so at least the server will be accessible all the time? Am I thinking correctly? Please advise.
TFS is the server software, and Visual Studio is the client software.
To use TFS, you would typically install the TFS server (including SQL etc) on one computer, and then all your developers would connect to it from their installs of Visual Studio. The developers should not install TFS on their own PCs.
If you turn off the TFS computer, then the server will not be running, so none of the developers will be able to access it - they will not be able to use source control, report bugs, etc without it. However, they can work "offline" until the server is turned back on - as long as they have the code they need on their PC, they do not need the server running.
Most people would recommend using a dedicated PC as the TFS server - it's not really a good idea to use the server as a development PC. For 5 users the load on the server will be very low, so it will not need to be a particularly powerful PC in order to run SQL and TFS, as long as it has plenty of disk space for its source control databases (preferably with redundant RAID and/or a decent backup solution so you won't lose all your source code if the server fails).
I suggest you do some more reading up on TFS to get a better idea about how it works before you start installing it - it's a serious/complex bit of software and you'll need to follow the installation instructions carefully.
I have two small questions. First, can we install TFS on our development machine, or is it compulsory to have Windows Server to install TFS? Secondly, how does TFS web work? Do we need to install any app on IIS for this?
@AnkushJain: Yes, you can run a TFS server on the same PC you use for Visual Studio & development, but it is preferable to use separate PCs (if your "team" is a single developer with a single PC then it may be convenient, but if you ever want to scale things up it isn't ideal). The TFS server provides a web-based UI - the TFS installer ensures that its dependencies (e.g. SQL server and IIS) are installed and configured.
Problems importing packages on Circle Ci Golang
I am using Circle CI to test my project. The project is a simple Go application consisting of a few packages and a main.go file. When referencing packages within my project I simply import them as "projectName/packageName" in the code. This works fine locally, however, when I push to git and it gets built on Circle CI I get the following errors.
package crypto-compare-go/handlers: unrecognized import path
"crypto-compare-go/handlers" (import path does not begin with
hostname)
I fixed this by prepending github.com/myGitUsername/projectName to my local package imports. This means that when I'm developing locally, if I change one of the packages within my project, I have to push to git and then pull to be able to use them, even though they are all under the same parent project folder. This seems like a slow, very inefficient process.
Has anyone had this problem with Circle CI before?
Your dependencies must be resolvable. This means go get must work, or you can use vendoring. There's not really a third option.
go get works fine for me locally; I would have thought that, as the packages are all within the same project, it would resolve them fine.
I'm not sure what you mean "within the same project". go get has no concept of a "project". When you try to fetch these packages, what errors do you get?
Your local filesystem should match the proper Go import path. Set it correctly locally and you'll be able to do the same on CircleCI and any other environment.
I fixed this by prepending github.com/myGitUsername/projectName to my local package imports. This means that when I'm developing locally, if I change one of the packages within my project, I have to push to git, then pull, to be able to use them even though they are all under the same parent project folder. This seems like a slow, very inefficient process.
Nope. You've got this wrong. Your go tool will use the local $GOPATH/src/github.com/myGitUsername/projectName dir to compile. You access github.com only if you run go get -u <package path>. It is documented in How to Write Go Code.
Note that you don't need to publish your code to a remote repository
before you can build it. It's just a good habit to organize your code
as if you will publish it someday. In practice you can choose any
arbitrary path name, as long as it is unique to the standard library
and greater Go ecosystem.
Can you add a child component while within a JSF Renderer
I would like to add a child component while inside of the encodeBegin
public void encodeBegin(FacesContext context, UIComponent component)
        throws IOException {
    XspInputText xip = new XspInputText();
    ViewPickList vplComponent = (ViewPickList) component;
    ResponseWriter writer = context.getResponseWriter();
    String viewName = vplComponent.getViewName();
    if (StringUtil.isNotEmpty(viewName)) {
        xip.setId(vplComponent.getId() + "_InputText");
        xip.setValue("Value");
        vplComponent.getChildren().add(xip);
        super.encodeBegin(context, vplComponent);
    }
}
This doesn't appear to work, but I am trying to add the child component inside and have it render. Can anyone suggest a better way of doing this?
Why do you want to add child components during the rendering phase?
I'm trying to find a way to add child components based on the values of the parent component. It doesn't have to happen at render, just not sure where to put it.
You should call your newly added component's encodeBegin and encodeEnd methods to render them too.
Try this:
xip.encodeBegin(context);
xip.encodeEnd(context);
Also take a look at This link.
Please give me feed back if it works or not!
Thanks, I already found a better way of handling this issue, but looking at my code you are right.
Could you please give a reference or explain your way? Actually I'm doing something like this now and want to know if there is a better or easier way ;) My use case is a little different: I want to edit an existing component, not add a new component, and I don't know whether doing this in this phase (render response) is right or not. Thanks
Well, what I found was that the way you mentioned is pretty standard for RichFaces and PrimeFaces, however I am using an IBM implementation of JSF called XPages. IBM has built in an interface called FacesComponent that, if implemented, allows building children components inside of the component. The interface is: com.ibm.xsp.component.FacesComponent.
PHP Image Resizing Does NOT Resize In Internet Explorer
I am having a very interesting problem. The script I wrote below works, but it doesn't work in Internet Explorer. The MAX_WIDTH variable is set to 450 and it still uploads the image with the original dimensions of the image, not 450 by whatever the conversion factor is. Any suggestions? It works and resizes in Chrome, Firefox, and Safari. Also, the version of IE I am testing on is IE 8 64-bit version. Thanks.
private function checkForResize() {
$fileTypeArray = array('image/gif', 'image/jpeg', 'image/png');
$origType = $this->_uploadType;
if (in_array($origType, $fileTypeArray)) {
$origImage = $_FILES[$this->_uploadInputField]['tmp_name'];
$imageWidth = getimagesize($origImage);
if ($imageWidth[0] > MAX_WIDTH) {
// Resize here
if ($origType == 'image/gif') {
$imageSrc = imagecreatefromgif($origImage);
} else if ($origType == 'image/jpeg') {
$imageSrc = imagecreatefromjpeg($origImage);
} else if ($origType == 'image/png') {
$imageSrc = imagecreatefrompng($origImage);
} else {
return false;
}
$width = $imageWidth[0];
$height = $imageWidth[1];
$newHeight = ($height / $width) * MAX_WIDTH;
$tmpImage = imagecreatetruecolor(MAX_WIDTH, $newHeight);
$this->setTransparency($tmpImage, $imageSrc);
imagecopyresampled($tmpImage, $imageSrc, 0, 0, 0, 0, MAX_WIDTH, $newHeight, $width, $height);
imagejpeg($tmpImage, UPLOAD_DIR.DS.$this->_uploadSafeName, 100);
imagedestroy($imageSrc);
imagedestroy($tmpImage);
return true;
}
}
return false;
}
php runs server side, it does not know what a browser is
You'll have to debug a little here. Is the image really not being resized? Maybe it's just a problem in how it's displayed in the browser. Put some debugging statements in to see if all of that code is executed as expected. If the image is uploaded and read correctly by imagecreatefrom*, there should be no difference based on the browser.
But see, that's the thing. I can upload the same image in Chrome and turn around and upload the same exact image in IE, but one gets resized down to 450 in Chrome and not in IE. I am looking directly at the files themselves and in a browser.
Did you check the HTML form code? Can you paste it here?
@mani My form code is perfectly fine. Everything processes fine. It's just a resizing issue, and the code is posted in the original topic.
I was asking because you posted your PHP script, which has nothing to do with browser issues. Like you said, image resizing works fine when you are using Chrome, which means your PHP script is fine. The problem may be in your HTML code, where maybe you are missing some tags. Firefox/Chrome ignore some HTML issues, but IE always causes problems with client-side script. You must double-check it.
Maybe IE doesn't send the correct mimetype of the images?
@tktutorials IE sometimes sends an image/pjpeg or image/x-png mime types, that might be an issue
@DamienPirsy that was the issue. THANK YOU SO MUCH FOR TELLING ME THAT! Everything works great now.
Good. I wrote that as an answer, so that people seeing this question in the future would understand what the issue was without having to read all comments. Glad to have helped.
Converting my comment to an answer:
The browser has nothing to do with server-side scripts going wrong, as it's on the client side.
What can be wrong, though, is the fact that the MIME type is unreliable information, since it's the browser that detects and sends the MIME type.
And IE sometimes sends an image/pjpeg or an image/x-png MIME type when dealing with jpgs or pngs, so you need to check those also when validating.
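The resulting validation idea, sketched here in Python for brevity (the original code is PHP; the alias list is exactly the MIME types discussed above):

```python
# IE historically reports JPEGs as image/pjpeg and PNGs as image/x-png,
# so validation must accept those aliases alongside the standard types.
ACCEPTED = {
    "image/gif",
    "image/jpeg", "image/pjpeg",
    "image/png", "image/x-png",
}

def is_allowed_image(mime_type):
    # Normalize case before checking against the accepted set.
    return mime_type.lower() in ACCEPTED

print(is_allowed_image("image/pjpeg"))  # True
print(is_allowed_image("image/bmp"))    # False
```

The same set-membership check translated back to PHP would simply mean extending the `$fileTypeArray` in the question with the two IE aliases.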
How to copy a custom function from an Excel file
I have an Excel file and it has a custom function to do some calculation. I want to know how I can get that custom function or copy it to another Excel file.
If it helps, this is the excel file in question. It has a function tastytrade_price_call. I want to copy it to another file.
Thank you.
You need to contact the author of the file and get the password, then use the code editor to get the code. However, I think its password protected to stop you copying the code.
So I can't copy the function. In that case, I will have to make a copy of this excel file and copy my data in this file for bulk calculation. Can I do that?
Yes, adding a new worksheet to the workbook will allow you to use the functions without any issue.
Thanks a lot @NickSlash appreciate your help.
Debugging what has been submitted in a form
I would like to be able to print out what has been submitted in my form. How can I achieve this?
Is this a form created yourself, or do you want to hook into a form defined in another module? Is this for debugging purposes, or do you actually want the site to show this data to the end user?
a form that I created for debugging process
You can display the submitted values in your hook_form_submit() and set it like this (example):
function form_example_tutorial_7_submit($form, &$form_state) {
drupal_set_message(t('The form has been submitted. name="@first @last", year of birth=@year_of_birth', array(
'@first' => $form_state['values']['first'],
'@last' => $form_state['values']['last'],
'@year_of_birth' => $form_state['values']['year_of_birth'],
)));
}
You can also use dpm function from devel module. It prints using Krumo all kind of variables, complex arrays included. So you can just dpm the $form_state in the submit handler as Djouuuuh suggests.
OKaaayyy! I didn't know it was for debugging process. Of course use dpm with Devel module in the hook I suggested you. It's way easier.
I really recommend spending some time setting up XDebug and using that. Here are some links. If you use a good IDE, like PHPStorm, you get great integration with XDebug and no need to mess up your code with debugging statements. :) http://code.tutsplus.com/tutorials/xdebug-professional-php-debugging--net-34396 http://xdebug.org/docs/install https://www.youtube.com/watch?v=LUTolQw8K9A
It's better to install the devel module and use the dpm function.
function MYMODULE_FORMID_submit($form, &$form_state) {
dpm($form_state);
}
Instead of printing stuff in the page output, I would recommend using an IDE such as PhpStorm, Eclipse or Netbeans. Combined with Xdebug, you can set a breakpoint in the form submit function and inspect all variables available at that point. You'll find the submitted values in $form_state['values'].
SSIS Connection to Excel via ACE.OLEDB as Service Account
We have a process which needs to work with a series of Excel (sigh) files.
The setup is:
SQL agent job run as a SSIS proxy account.
Calls SSIS package on a share on the server.
Which then starts accessing these excel files using the ACE driver.
The process will work under my credentials.
The process will work under other people's credentials.
The process will work in debug mode (although this is not a fair test
as that would use my local machine's driver)
The process will not work using the SSIS proxy account.
The process WILL work if I make the SSIS proxy account an
administrator on the server.
I have ruled out the following:
access to the files share. The account can load text files from
there.
32bit/64bit issues. The account CAN run given sufficient
permissions.
My opinion is that the service account needs some sort of level of permission to be able to use the driver. I can't work out what though.
I have tried LOCAL SECURITY POLICY option "Load and unload device drivers" with no success. ( I did think this had done it, but then realised that I had left the account in the admin group :-( )
Finally, the error message in question:
SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.
The AcquireConnection method call to the connection manager
"TPR_ReadReportsExcelConnection" failed with error code 0xC0202009.
There may be error messages posted before this with more information
on why the AcquireConnection method call failed.
Trying to understand what you've ruled out and why. You make the proxy account a member of the Administrators group on the machine and everything works. Take them out of the role and things go belly up? Is the Proxy account a service account or a user account? If service, does it have the ability to interact with the desktop? If you think it's a permissions thing, why not test by granting explicit access to the ACE install path?
There may be error messages posted before this with more information on why the AcquireConnection method call failed - what other error messages were there?
Sorry no other messages before this coming from SSIS. The account is a service account. How would I know it has the ability to interact with the desktop? I will try giving the account access to the installer path, thank you.
This seems to be beyond the supported scope depending on how you've set up your SSIS proxy account. See Additional Information section here. Not enough points to post an image so here is the important sentence:
provided the SSIS jobs run in the context of a logged-on user with a valid HKEY_CURRENT_USER registry hive
This seems like a promising route to explore. How would one allow the service account to function in this way? As an administrator it is able to function this way.
Well let's clarify, given my answer has been down voted, I may have the wrong end of the stick. Is the SSIS proxy account logged on as one of the built in user accounts: 'Network Service','Local System','Local Service' or is it logged on as something like DOMAIN\user?
I'm not an expert when it comes to these kinds of things obviously. As far as I can tell, the account does not logon to the server. It logs into SQL Server instance, there it has sufficient permission to execute SSIS packages via agent job which reside on the sever's fileshare. The name of the account is indeed domain\user.
Ok. So my mistake with that answer, I hope you didn't waste any time going down that route. So I think I'm right in saying you have confirmed the proxy account set up on the SQL Server is correct because as long as you put the account in the Administrators group it will run the job. So have you verified the DCOM config is correct? The proxy account should have permission to launch/ activate the DCOM object as in this link
Passing the id via req.params.id Vs req.body.id
So I started learning backend development recently, with Node/Express.
Im building a small REST API for learning purposes, with the following routes:
GET /api/products >> to display a list of products
GET /api/cart >> to display list of items inside the cart
DELETE /api/cart/:id >> to delete an item from the cart
Now, I want to be able to add an item to the cart, with the POST method of course. Should it be:
/api/cart >> and pass the item id in the body, so req.body.id
or
/api/cart/:id >> and pass the item id with req.params.id ??
I understand that both work, but I've been told that with the POST method it's better to pass it via the body, so I would like to understand why, since in this specific case I am not creating a whole new item (in that case I would pass the data via the body of course), but I already have the item in the products list, so I just want to retrieve it and add it to the cart.
Is it that if I create the frontend side for this API, one of them will work better? Or how does the whole thing work? What's the preferred method?
Thank you
Your patterns are correct. In the second case, which is the update of a record, we normally use the PUT method by convention. But with POST it would also work.
You can fit more (diverse) data in the body than in the url. You can pass any string (special characters) in the body, while encoding them in the url would get you vulnerable to status 414 (Request-URI Too Long). And it's a lot easier to use the body when passing arrays and complex objects :)
Why can't wildcards be used in generic class & method declarations?
Declaration like this :
class A<X extends Number & List> { }
is allowed, whereas a declaration like this is not allowed:
class A<? extends Number & List> { }
Is there any logical explanation of why Java restricts us from doing that?
And what's the actual difference between
<T extends Number>
& <? extends Number>?
Do you actually have a class that extends Number and implements List? Or does it extend Number and implement List<Number>? or does X extend Number and implement List<X>? How would you even write that last one without declaring the type X somewhere?
Good point -- Number and List aren't really types you'd expect to coexist like that.
@LouisWasserman: Even if they do coexist like that, I was just pointing out that you might need the generic definition to specify what your list is holding. Either way, it is probably better to avoid the raw type.
@PaulHanbury: They are tested & compiled successfully. Number & List are taken just as an example. You can replace them with any custom class name or interface name. The point here is why the wildcard is not allowed in a generic class/method declaration.
@DebadyutiMaiti: By your logic, I should be happy with the following code because it compiles and passes its tests: public class Math {public static int multiply(int x,int y){int p = 0;for(int i=0;i<x;i++) for(int j=0;j<y;j++)p++;return p;}} I am not happy with it. Similarly, I don't think that using raw lists are a good idea either... especially in a templated class.
@DebadyutiMaiti: My point, however was that you need to explicitly name your type (i.e., "X", not "?") so that you can refer to it within the rest of your code.
The whole point of a type parameter like T is so that you can use it as a type inside the class. What would a wildcard there even mean? If you can't use it anywhere, why have a type parameter at all?
If you used <? extends Number & List>, then you wouldn't be able to do anything with the type parameter. It'd be completely useless.
Similarly, ? extends Number lets you deal with the special case when you don't need to refer to the type that extends number, and you don't need to give it a name.
Generic class and interface declarations want type parameters, such as T or U. ? is a wildcard, better used for method parameters that are themselves generic:
class Foo<T extends Number & List> {
void doStuff(List<T> items) {
// ...
}
void doMoreStuff(List<? extends OutputStream> streams) {
// ...
}
}
doStuff() indicates that it wants to operate on a List<T> where T is the type parameter on class Foo. So:
class Weird extends Number implements List {
//
}
Foo<Weird> f = new Foo<Weird>();
f.doStuff(...); // wants a List<Weird>
If we called doMoreStuff() on f, we could hand it something of type List<OutputStream>, List<FilterOutputStream>, List<ByteArrayOutputStream>, etc.
Shouldn't you be calling doStuff() on Foo, not weird?
Notation for Conditional Expectation
I am kind of confused by the notations of conditional expectation.
Let $(\Omega, \mathcal F,\mathbb P)$ be the given probability space, $X$ be a random variable and $A \in \mathcal F$.
I am confused with the notation $E[X|A]=\frac{\int_AXd\mathbb P}{\mathbb P(A)}$ since $E[X|A] \neq E[X|\sigma A]$.
But if so, why the notation like $E[X|A]$ is widely used? It is so confusing.
Can anyone explain the reason for using this notation? Thanks in advance.
Unfortunately, standard notations for conditional expectations are a bit confusing. What is true is $E[X|\sigma (A)]$ is the random variable which has the value $E[X|A]$ on $A$ and $E[X|A^{c}]$ on $A^{c}$. The concept of conditional expectation given a sigma algebra came later and things like $E[X|A]$ existed earlier and the mess has somehow been created. Let us live with it :-)
Yeah I guess I have to embrace it anyway. Thank you for your kind response.
$\mathbb E[X\mid A]$ can be interpreted as the answer to the question: "what is the expectation of $X$ under the extra condition that event $A$ occurs?"
You write $\sigma A$ but formally that should be $\sigma(\{A\})$ which is the smallest $\sigma$-algebra on $\Omega$ that contains $\{A\}$ as subcollection (or equivalently contains $A$ as element).
It is evident that: $$\sigma(\{A\})=\{\varnothing,A,A^{\complement},\Omega\}$$
Further $\mathbb E[X\mid\sigma(\{A\})]$ is formally a random variable that is measurable wrt $\sigma(\{A\})$ and satisfies the condition:$$\int_BX(\omega)\mathbb P(d\omega)=\int_BE[X\mid\sigma(\{A\})](\omega)\mathbb P(d\omega)\text{ for every }B\in\sigma(\{A\})\tag1$$
The fact that $\mathbb E[X\mid\sigma(\{A\})]$ is measurable wrt $\sigma(\{A\})$ reveals that we must have constants $c, d$ with:$$\mathbb E[X\mid\sigma(\{A\})]=c\mathbf1_A+d\mathbf1_{A^{\complement}}$$
Then substituting in $(1)$ for $B$ the $4$ elements of $\sigma(\{A\})$ we find the conditions:
$0=0$
$\int_AX(\omega)\mathbb P(d\omega)=c\mathbb P(A)$
$\int_{A^{\complement}}X(\omega)\mathbb P(d\omega)=d\mathbb P(A^{\complement})$
$\mathbb EX=c\mathbb P(A)+d\mathbb P(A^{\complement})$
Based on that we find $c=\mathbb E[X\mid A]$ as defined in your question and $d=\mathbb E[X\mid A^{\complement}]$ so that:$$\mathbb E[X\mid\sigma(\{A\})]=\mathbb E[X\mid A]\mathbf1_A+\mathbb E[X\mid A^{\complement}]\mathbf1_{A^{\complement}}$$
The last bullet also arises if we take expectation on both sides:$$\mathbb EX=\mathbb E[X\mid A]\mathbb P(A)+\mathbb E[X\mid A^{\complement}]\mathbb P(A^{\complement})$$
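As a quick numeric sanity check (a made-up example, not from the question): roll a fair die, let $X$ be the outcome and $A=\{X\text{ is even}\}$, so $\mathbb P(A)=\tfrac12$. Then $$\mathbb E[X\mid A]=\frac{\int_A X\,d\mathbb P}{\mathbb P(A)}=\frac{(2+4+6)/6}{1/2}=4,\qquad \mathbb E[X\mid A^{\complement}]=\frac{(1+3+5)/6}{1/2}=3,$$ so $\mathbb E[X\mid\sigma(\{A\})]=4\cdot\mathbf1_A+3\cdot\mathbf1_{A^{\complement}}$, and indeed $\mathbb EX=4\cdot\tfrac12+3\cdot\tfrac12=3.5$.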
I hope this makes things more clear for you and also provides you a link between conditional expectation $\mathbb E[X\mid A]$ (a real number) and $\mathbb E[X\mid\sigma(\{A\})]$ (a random variable, which is actually the Radon-Nikodym derivative of $X$ wrt $\sigma$-algebra $\sigma(\{A\})$).
Sometimes WebDriver finds the element, but sometimes for the same XPath it gives a NoSuchElementException
I created a test script in which I want to click an edit button and then close the window. Most of the time the code works as expected, but sometimes for the same code I get a NoSuchElementException. I went through the similar questions asked in the forum, but none of the solutions worked for me. Below I am putting the Java code along with the HTML code in the hope of finding a solution.
for(String newwindow : window.getWindowHandles()){
    //switching to the new pop up using window.switchTo().window(passing newwindow as argument)
    window.switchTo().window(newwindow);
}
//getting title of new window using getTitle() method
System.out.println("NewWindow Title"+ window.getTitle());
window.findElement(By.xpath(".//*[@id='edit_resume_section3_open' and not(@disabled)]")).click();
window.manage().timeouts().implicitlyWait(40, TimeUnit.SECONDS);
System.out.println(window.findElement(By.xpath("//input[@name='title']")).getAttribute("value"));
window.manage().timeouts().implicitlyWait(40, TimeUnit.SECONDS);
window.findElement(By.id("update")).click();
window.manage().timeouts().implicitlyWait(40, TimeUnit.SECONDS);
window.close();
Html Code for the webelement-
<head>
<body class="bg_lightgreen" marginwidth="0" marginheight="0" onload="" leftmargin="0" topmargin="0" style="">
<iframe id="_yuiResizeMonitor" style="position: absolute; visibility: visible; width: 10em; height: 10em; top: -120px; left: -120px; border-width: 0px;"/>
<div id="fade_nation_mismatch" class="black_overlay"/>
<div id="fade_visual_resume" class="black_overlay"/>
<div id="show_visual_resume" class="white_content" style="position: absolute; top: 25%;">
<table class="bg_purple" width="100%" cellspacing="0" cellpadding="0" border="0">
<table class="bg_white" width="998" cellspacing="0" cellpadding="20" border="0" align="center">
<table class="bg_white" width="998" cellspacing="0" cellpadding="20" border="0" align="center">
<tbody>
<tr>
<td class="font_15 txt_purple bold" width="203" style="padding-bottom:5px; width:250px; word-wrap: break-word;">1.6YR Experiance/BSC(CS)</td>
<td width="715" style="padding-bottom:5px;">
<a id="edit_resume_section3_open" class="thickbox" title="Professional Details" style="cursor:pointer" onclick="showEditSection(3);">[Edit]</a>
</td>
</tr>
First ensure your xpath expression (or whatever you use to find the element) is correct in every case. For dynamically generated elements, your xpath expression may not be valid in every case.
If it is correct, it means you have a race condition. The element you're looking for is sometimes not yet present. It might depend on your browser, which could take more time to render the page at that time, or the HTTP server, which might take more time to serve your page. There could be many (stupid) reasons.
In order to fix it you have to use polling. Check for the presence of your element. If present, proceed. If not present, wait and try again. The class 'com.thoughtworks.selenium.Wait' fits that purpose.
You will find examples by using 'com.thoughtworks.selenium.Wait example' as a keyword in Google.
I would suggest checking if the element is present in a loop before trying to click on it
The element is present in the loop and sometime webdriver finds it and sometime it fails to find the same element
I would suggest you use the ID for finding the element. Since it has an ID, you can use that. When doing automation, using xpath is not preferable if you have other options; an xpath may change.
I think the below line is used for opening the link,
window.findElement(By.id("edit_resume_section3_open")).click();
Also, there is no need to use implicit waits before every line. Once declared, it will be in effect for all lines in your test.
If you want to use wait for every line, then try explicit waits.
How do I get rid of buzzing sound while watching Youtube?
So, I'm running Ubuntu 12.04.1 on my iMac G5 and I think I have Gnash installed as a Flash player. The videos on YouTube load fine, even though Mozilla tells me from time to time that I need a Flash plugin. I get the video image just fine, but the sound is all just scratchy, buzzing static. Sound otherwise works well, e.g. when testing it or listening to music.
Please help!
Well, I just realized that the Gnash plugin for Mozilla wasn't even installed, and Mozilla was trying to play the YouTube videos using HTML5, and that's what was causing the buzzing. I ended up using Epiphany, as both Flash apps for Ubuntu worked very slowly in Mozilla; I tried the latest version of Gnash and Lightspark. Imagine that, Mozilla not being able to play HTML5 from YouTube. And it was version 15.1. So it wasn't a sound driver issue after all, as I feared...
Google Nearby not working on Android Things by default
I'm developing an Android Things application on the iMX7D development board and I have implemented Google's Nearby services. The issue I have is that I get an error (sometimes) when I begin advertising the device. Here is the error:
com.google.android.gms.common.api.ApiException: 17: API: Nearby.CONNECTIONS_API is not available on this device.
I have managed to fix the error by following the instructions on https://stackoverflow.com/a/51428433/6377151, and that allows the code to run fine. The error gets fixed if I run the ADB command
adb shell am force-stop com.android.iotlauncher.ota
And then run the application, but that only works for the one time. As soon as the device is rebooted, the issue comes back. I'm aware that this is because the default launcher is already advertising the device, but I'm not sure how to fix this issue in code automatically when my application runs. But I need a way to either do this automatically on startup or to overcome the error in another way.
My Android Things device is running Android Things 1.0.10. Can anyone assist?
Disclaimer: I work on Nearby.
We have a release ready to allow multiple apps to advertise/scan at the same time. It's code-complete, but code pushes are slow at Google. It'll be a while before it's public. Note: Android Things boards might need to be reflashed to get the update. That was the case in development, but is hopefully not the case for release builds.
In the meantime, you'll unfortunately have to either install another launcher, or force stop the existing one. We treat clients as first-come-first-serve.
That sounds great. How long do you think it'll take to become public? Is it normally weeks or months?
Looking at the schedule, roughly mid-May.
Do you have any suggestions on workarounds until this?
Installing another Android launcher app is my best suggestion.
Python3 - "ValueError: not enough values to unpack (expected 3, got 1)"
I'm very new to Python and programming overall, so if I seem to struggle to understand you, please bear with me.
I'm reading "Learn Python 3 the Hard Way", and I'm having trouble with exercise 23.
I copied the code to my text editor and ended up with this:
import sys
script, input_encoding, error = sys.argv
def main(language_file, encoding, errors):
line = language_file.readline()
if line:
print_line(line, encoding, errors)
return main(language_file, encoding, errors)
def print_line(line, encoding, errors):
next_lang = line.strip()
raw_bytes = next_lang.encode(encoding, errors=errors)
cooked_string = raw_bytes.decode(encoding, errors=errors)
print(raw_bytes, "<====>", cooked_string)
languages = open("languages.txt", encoding = "utf-8")
main(languages, input_encoding, error)
When I tried to run it I got the following error message:
Traceback (most recent call last):
File "pag78.py", line 3, in <module>
script, input_encoding, error = sys.argv
ValueError: not enough values to unpack (expected 3, got 1)
which I am having difficulties understanding in this context.
I googled the exercise, to compare it something other than the book page and, if I'm not missing something, I copied it correctly. For example, see this code here for the same exercise.
Obviously something is wrong with this code, and I'm not capable to identify what it is.
Any help would be greatly appreciated.
That line works only when you run the script with two additional positional arguments, but you ran it without any. Hence you only have one value instead of three to unpack into the three variables.
Thank you Ondrej. I finally figured out what I was missing. I had no idea of what I was copying.
When you run the program, you have to enter your arguments into the command line. So run the program like this:
python ex23.py utf-8 strict
Copy and paste all of that into your terminal to run the code. This exercise uses argv like others do. It says this in the chapter, just a little bit later. I think you jumped the gun on running the code before you got to the explanation.
Let's record this in an answer for sake of posterity. In short, the immediate problem described lies not as much in the script itself, but rather in how it's being called. No positional argument was given, but two were expected to be assigned to input_encoding and error.
This line:
script, input_encoding, error = sys.argv
This takes the list of arguments passed to the script (sys.argv) and unpacks it, that is, assigns its items' values to the variables on the left. This assumes the number of variables to unpack corresponds to the number of items in the list on the right.
sys.argv contains the name of the script called plus the additional arguments passed to it, one item each.
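A minimal standalone illustration of that unpacking (using a plain list in place of sys.argv):

```python
argv = ["ex23.py"]  # what sys.argv looks like when no extra arguments are given
try:
    script, input_encoding, error = argv
except ValueError as exc:
    msg = str(exc)
    print(msg)  # not enough values to unpack (expected 3, got 1)

argv = ["ex23.py", "utf-8", "strict"]  # script name plus two arguments
script, input_encoding, error = argv
print(input_encoding, error)  # utf-8 strict
```

The first attempt reproduces exactly the traceback from the question; the second matches how the book expects the script to be invoked.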
This construct is actually a very simple way to ensure the correct number of expected arguments is provided, even though as such the resulting error is perhaps not the most obvious.
Later on, you certainly should check out argparse for handling of passed arguments. It is comfortable and quite powerful.
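A rough sketch of the argparse equivalent (argument names chosen to mirror the exercise; not from the book):

```python
import argparse

parser = argparse.ArgumentParser(description="LPTHW exercise 23")
parser.add_argument("input_encoding", help="e.g. utf-8")
parser.add_argument("error", help="e.g. strict")

# parse_args() normally reads sys.argv[1:]; a list is passed here for demonstration.
args = parser.parse_args(["utf-8", "strict"])
print(args.input_encoding, args.error)  # utf-8 strict
```

With argparse, calling the script without the required arguments produces a readable usage message instead of an unpacking ValueError.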
I started reading LPTHW a couple of weeks ago. I got the same error as 'micaldras'. The error arises because you have probably clicked the file-link and opened an IEExplorer window. From there, (I guess), you have copied the text into a notepad file and saved. it.
I did that as well and got the same errors. I then downloaded the file directly from the indicated link (right click on the file and choose Save Target As). That saves the file literally as Zed intended, and the program now runs.
Django template decoupling: can't access static files outside of the Django project
My app is decoupled between APIs and templates. I want to know the way to connect the APIs to the templates.
I know I can pull all the static files into the API repo by python manage.py collectstatic, but I don't want to couple the APIs and templates together into one folder.
I am trying to access, from my Django app, static files that are outside of the Django app. I want to access the files in UIUX/static from DjangoRepo/myapp/static. It looks like below:
UIUX/
- static
- staticfile...
- templates
- index.html
DjangoRepo/
- myapp
- static
- templates
my view.py
class IndexViewTemplateView(View):
    # trying to access the index.html file that is outside of the Django project
    template_name = '../../../UIUX/templates/index.html'

    def get(self, request, *args, **kwargs):
        return render(request, self.template_name)

    def post(self, request, *args, **kwargs):
        return render(request, self.template_name)
I did not edit my template at all for this.
Is there any other template setting I need to change to achieve this?
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(PROJECT_ROOT, 'templates')],
# 'DIRS':[],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
# 'debug': DEBUG,
},
},
]
change this line
from - 'DIRS': [os.path.join(PROJECT_ROOT, 'templates')],
to - 'DIRS': [os.path.join(PROJECT_ROOT,'templates'),os.path.join('C:/Users/kevin/workspace/UIUX/static')].
The os.path module is always the path module suitable for the operating system Python is running on, and is therefore usable for local paths. join joins one or more path components. Using this we can reference local directory files.
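As a minimal sketch of the idea (the literal paths are the asker's examples; PROJECT_ROOT would normally come from os.path.dirname(os.path.abspath(__file__))):

```python
import os

# Point Django's template loader at both the in-repo templates and the
# external UIUX templates. These literal paths are examples only.
PROJECT_ROOT = "C:/Users/kevin/workspace/DjangoRepo/myapp"
EXTERNAL_UIUX = "C:/Users/kevin/workspace/UIUX"

# This list is what would go into TEMPLATES[0]['DIRS'] in settings.py.
TEMPLATE_DIRS = [
    os.path.join(PROJECT_ROOT, "templates"),
    os.path.join(EXTERNAL_UIUX, "templates"),
]
print(TEMPLATE_DIRS)
```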
Let me know your PROJECT_ROOT and STATICFILES_DIRS.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
STATICFILES_DIRS = (
#os.path.join(PROJECT_ROOT, 'static/'),
"C:/Users/kevin/workspace/UIUX/static",
)
Do you want to get files from templates or from static?
Both. I want access to the templates, which in turn have access to the static files.
For static files you can use "python manage.py collectstatic". For the templates directory you can hard-code the path. Try it.
right, I know I can. But I want to decouple the APIs from UIUX
DirectX3D 11 enumerating devices
I would like to write a program that lists all DirectX3D applications running on a Windows System and displaying the resources they use. I know how to enumerate the adapters using the IDXGIFactory interface and the adapter outputs via the IDXGIAdapter interface. I also know how to find out the adapter used by an ID3D11Device using QueryInterface and GetParent. Though, I need the reverse: an enumeration of ID3D11Device which operate on a given adapter. You need to specify an IDXGIAdapter creating an ID3D11Device via D3D11CreateDevice. So there must be a connection. Are there any ideas how to get such an enumeration?
Thanks a lot.
You can try searching for processes that have loaded the DirectX DLL: finding loaded dll
OR
You can place a global hook on D3D11CreateDevice and whenever some application calls it enlist that application.
I placed the question here because I hoped to be able to avoid hooking. In the meantime I see that my doubts concerning hooking were unfounded. I tried your second proposal and it works quite well. Thanks for the hint.
Size-related point in QGIS
I want to map the absolute increase/decrease of the population of a city. The amount of people that increased/decreased is represented by a size-related point. If it's negative I want to have it in blue, if it's positive I want to have it in red.
Does someone know how I should do this? It should look like this:
Welcome to GIS.SE. Could you tell us, which kind of data you have (point, polygon) and whether you want this in the print composers legend or in your map itself.
Thanks for your help! I have polygons of municipalities in which I want to put a point describing the relative increase or decrease in population over the last two years, with the point size related to the amount, inside the municipality polygons. The points showing an increase should be red, the ones showing a decrease blue. And I just want to have 6 points in my legend, like the picture above.
This isn't really technical, but in my mind, red isn't the best color to represent an increase in value. I'd switch the colors, and probably change blue to green.
I think red and blue are fine. Nowadays we learn that red/green is to be avoided because it's hard to see for the colorblind.
Please, do not forget about "What should I do when someone answers my question?"
There is also a possibility to achieve the desired output with a QGIS plugin Proportional circles.
Proportional symbols are used for showing a quantity, for example the
population of cities or countries. This plugin generates layers of
proportional circles or sectors as a rose diagram and a legend. It is
also possible to generate a legend without an analysis. Requires
Memory layer Saver to save the memory layers.
References:
Proportional Circles Documentation (in French)
If it's point data just set the style to 'Graduated' and then choose/design a colour ramp that fits your specification. Quick and dirty solution for size would be to manually change the point size for the graduation brackets one at a time.
But as the comment suggests, its hard to advise without more information on your dataset.
Adding the circles to your legend will be some manual work, but adding them to your map is rather easy.
Open the properties of your polygon layer and go to the symbology tab. Make sure you have single symbols switched on, not classified or anything else. You will have some symbology for your area already. From a cartographic point of view it is recommended to use only a rather thick outline (e.g. 0.7 to 1 mm) but not to fill the area itself - this avoids informational overload.
Then use the green plus to add another style and switch this one to centered- this adds a point to the center of your polygons. Go to the menu which allows you to adapt the marker as much as possible, then click on the rectangular button (with the two triangles) to the far right next to the size row. There you may choose a column on which to base the size of your point, e.g. your population data. In the next step you need to do some fiddling with how you symbolise it exactly, e.g. multiply or divide your data by some factor, and choose the right units for you points (e.g. you could use pixels as units and base the point-size on "population-column"/1000).
As for the colour of the points, this is rather tricky. I couldn't choose the colour based on a conditional clause, therefore you go back to the marker size and expand the formula to if("population-column">0,"population-column"/1000,0). This leaves all markers for population changes below zero invisible. Choose e.g. green as a colour for these markers. Then add another marker, use if("population-column"<0,"population-column"/1000,0) as formula and choose e.g. red.
If you feel really stylish today, you could choose a triangle instead of a circle for your marker - and let it point up or down, based on the development of the population in each area. Formula for this is similar to the ones above: if("population-column"<0,180,0). Enter this via edit next to the rotation row.
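For reference, the three expressions used above collected in one place (the column name "population-column" is a placeholder for your own field):

```
marker size, increase layer:  if("population-column" > 0, "population-column" / 1000, 0)
marker size, decrease layer:  if("population-column" < 0, "population-column" / 1000, 0)
triangle rotation (optional): if("population-column" < 0, 180, 0)
```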
Here's an image for your orientation (sorry for the wrong language).
Another word of cartographic warning: If your data is quite diverse, ranging from say 1,000 to 1,000,000, you should rather choose classified symbology, otherwise you end up with almost invisible points next to ones filling the whole screen.
How to efficiently find and parse the last line of text from a log file via PowerShell?
I have a very large log file. I need to find the last "WARN" line in that file efficiently (i.e., reading from the end), parse it, and return it as an object with a "Date" field (DateTime type), a "Level" field, and a "Description" field.
Any suggestions?
Here's what the file looks like
[Mon Dec 14 14:57:53 2015] [notice] Child 6180: Acquired the start mutex.
[Mon Dec 14 14:57:53 2015] [notice] Child 6180: Starting 150 worker threads.
[Mon Dec 14 15:04:43 2015] [warn] pid file C:/Program Files (x86)/Citrix/XTE/logs/xte.pid overwritten -- Unclean shutdown of previous Apache run?
[Mon Dec 14 15:04:43 2015] [notice] Server built: May 27 2011 16:04:42
[Mon Dec 14 15:04:43 2015] [notice] Parent: Created child process 5608
EDIT: This command must look inside the file, find the last matching line by search criteria, return that line, and "stop". The possible duplicate question is different in a number of ways: my script cannot simply sit there and wait for a line to appear - it needs to run, get the line as quickly as possible, and get out. Furthermore, it needs to search for it by substring, and lastly it needs to return a DateTime and the other fields broken up. Thanks for not voting to close this question.
Possible duplicate of Unix tail equivalent command in Windows Powershell
It is not equivalent by any means. I need to find the last matching line based on a search criteria, not get the last lines. Also, need to parse out DateTime. Please do not vote to close
In general, SO is a place to get help with code you've written that isn't working. It is not a place to ask for a script to be written for you. I'm surprised that someone with 10,000+ reputation would post this!
The only substantial difference with the other question is that you need to filter based on contents -- breaking up the fields is a trivial exercise compared to quickly scanning the file in reverse. Nevertheless, that is a substantial difference.
@TonyHinkle Appreciate the comment. I made feeble attempts at writing a PowerShell script; they all failed, and since I don't know PowerShell at all, I was too embarrassed to post those ;)
Open the file as a raw Stream, seek a "decent" block size from the end (say 1 MB), then search the resulting bytes for the binary representation of "warn" until you've found the last instance (I'm assuming you know the encoding in advance). If you find it, scan for the line terminators. If you don't find it, seek back 1 + 1 MB and go again. Repeat until you seek to the beginning.
If there is no "warn" in the entire file, this will be slower than just reading the file sequentially, but if you're certain there's a line of the kind you want near the end, this can terminate pretty quickly. The essential thing to do is not read the file as text with a StreamReader, since you lose the ability to seek arbitrarily.
Actually getting the code for this idea right is more involved. The difficulty of this operation is not due to anything in PowerShell -- there is no simple way to do this in any language, because reading a file in reverse is not an efficient operation in any file system I know of.
I'd approach that this way:
get-content $file -ReadCount 3000 |
    ForEach-Object {
        if ($_ -like '*warn*')
            {$Lastfound = $_}
    }
($Lastfound -like '*warn*')[-1]
It's certainly not going to be efficient. Everything in PowerShell and C# (and everything else) is built around reading forwards, not backwards. Given that and the fact that you don't even know where the last line might be, I don't see any way to avoid processing the whole file unless you want to spend several hours writing your own ReverseStreamReader.
Assuming the file is bigger than RAM -- which makes Get-Content impractical, IMO -- I'd probably do something like:
$LineNumber = [uint64]0;
$StreamReader = New-Object System.IO.StreamReader -ArgumentList "C:\LogFile.log"
$SearchPattern = [Regex]::Escape('[warn]');
while ($Line = $StreamReader.ReadLine()) {
    $LineNumber++;
    if ($Line -match $SearchPattern) {
        $LastLineNumber = $LineNumber;
        $LastLineMatch = $Line;
    }
}
$StreamReader.Close()
$LastLineNumber
$LastLineMatch
Parsing the line is probably going to involve a lot of String.IndexOf() and String.Substring(). Turning the date into a DateTime should be done like so:
[datetime]::ParseExact('Mon Dec 14 15:04:43 2015','ddd MMM dd HH:mm:ss yyyy',[System.Globalization.CultureInfo]::InvariantCulture,[System.Globalization.DateTimeStyles]::None);
I chose -match over -like because as far as I can tell it actually performs better. That might be just my system, however.
Why does collision detection work in editor but not in android build? (AR Foundation)
I am building an augmented reality app using AR Foundation. I need to detect a collision between two cubes. The cubes both have a box collider and a rigidbody attached to them. When I run the scene in the editor everything works fine, but when I build it for Android and then test, it won't detect any collisions. Could it be because one of the cubes is already touching the other one when it is spawned?
I'm pretty sure it's an issue with Unity and not my code, but here is some of it just in case.
I have also posted on Unity Answers here
void OnCollisionEnter(Collision collision)
{
    Debug.Log(collision.gameObject.name);
    if (collision.gameObject.tag == col_tag)
    {
        if (collision.gameObject != first && first != null)
        {
            //stuff
        }
        else
        {
            point = collision.contacts[0].point;
            first = collision.gameObject;
        }
    }
}
Try using OnTriggerEnter. On the box collider, activate Is Trigger and try this script:
void OnTriggerEnter(Collider collision)
{
    Debug.Log(collision.gameObject.name);
    if (collision.gameObject.tag == "col_tag")
    {
        if (collision.gameObject != first && first != null)
        {
            //stuff
        }
        else
        {
            point = collision.contacts[0].point;
            first = collision.gameObject;
        }
    }
}
Thanks for the answer! Can two triggers trigger each other? I thought one of them had to not be a trigger? Also, I don't think colliders have contact points but maybe I could use a raycast.
Maybe it's a problem with 2020(because what isn't?), I'll switch to 2019 and see if it works.
I'm using 2020, and not face such an issue
How to convert the CSV into JSON?
I need to convert the CSV data into a JSON value.
My CSV data looks like this:
aa cc dd ee ff
cc dd ff gg hh ll mm nn oo pp
H1 "null" H3 "null" H5 H6 H7
c1 c2 c3
c4 c5 c6 c7 c8 c9 c10 c11 c12
I need to get only the "H1" row data; it may contain some null columns, as in the CSV file.
How can I extract the values of that particular row and convert only those into a JSON value?
I have used the SplitText, ExtractText and ReplaceText processors, but they don't reach the "H1" row due to some empty columns present in the previous rows.
And the processor only converts the "aa" row into a JSON value. After that it doesn't read the rows below.
Can anyone please help me solve this?
That doesn't look like CSV data to me.
What should the resulting JSON look like?
{"header":"Value of h1","header3":"value of the h3"}
Note: there are multiple rows in the file which start with "H1", but we need to get only the row having 7 columns. - @James
Using an ExtractText processor, add a regex expression to get the particular rows from the CSV file.
Using a ReplaceText processor, add some value to the null columns.
Add a new property to the ExtractText processor named "Columndata" with the regex expression (.+),(.+),(.+),(.+), which splits the data by commas.
Finally, form the JSON data with a ReplaceText processor: add the expression {"Column1":${Columndata.1},"Column2":${Columndata.2},"Column3":${Columndata.3}} as the replacement value.
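Outside NiFi, the same selection logic can be sketched in Python (the key names and sample rows are illustrative):

```python
import json

# Keep only the row whose first field is "H1" and which has exactly
# 7 columns, then turn it into a JSON object, mapping "null" to null.
rows = [
    ["aa", "cc", "dd", "ee", "ff"],
    ["cc", "dd", "ff", "gg", "hh", "ll", "mm", "nn", "oo", "pp"],
    ["H1", "null", "H3", "null", "H5", "H6", "H7"],
    ["c1", "c2", "c3"],
]

target = next(r for r in rows if r[0] == "H1" and len(r) == 7)
record = {f"Column{i + 1}": (None if v == "null" else v)
          for i, v in enumerate(target)}
print(json.dumps(record))
```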
How do I use SIFT in OpenCV 3.0 with c++?
I have OpenCV 3.0, and I have compiled & installed it with the opencv_contrib module, so that's not a problem. Unfortunately the examples from previous versions do not work with the current one, and so although this question has already been asked more than once I would like a more current example that I can actually work with. Even the official examples don't work in this version (feature detection works, but not the other feature examples), and they use SURF anyway.
So, how do I use OpenCV SIFT on C++? I want to grab the keypoints in two images and match them, similar to this example, but even just getting the points and descriptors would be enough help. Help!
get the opencv_contrib repo
take your time with the readme there, add it to your main opencv cmake settings
rerun cmake /make / install in the main opencv repo
then:
#include "opencv2/xfeatures2d.hpp"
//
// now, you can no more create an instance on the 'stack', like in the tutorial
// (yea, noticed for a fix/pr).
// you will have to use cv::Ptr all the way down:
//
cv::Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
//cv::Ptr<Feature2D> f2d = xfeatures2d::SURF::create();
//cv::Ptr<Feature2D> f2d = ORB::create();
// you get the picture, i hope..
//-- Step 1: Detect the keypoints:
std::vector<KeyPoint> keypoints_1, keypoints_2;
f2d->detect( img_1, keypoints_1 );
f2d->detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_1, descriptors_2;
f2d->compute( img_1, keypoints_1, descriptors_1 );
f2d->compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using BFMatcher :
BFMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
also, don't forget to link opencv_xfeatures2d !
In the documentation detect and compute only appears as python functions. Is the documentation incomplete? Is the SIFT class better documented anywhere? And is the SIFT::operator() unusable or am I just doing something horribly wrong? Edit: the create(...) function isn't anywhere that I've looked in the online documentation... I guess I'm either looking in all the wrong places or I'm just going to have to do without the dox.
operator() is still there, but using that from a pointer, would make it something like f2d->operator(...). better use: f2d->detectAndCompute(...) then. (and yea, the docs need a fix urgently...)
The edit helps. After a bit of research I'd like to add that you can also use detectAndCompute(img, mask, keypoints, descriptors) and that you can use noArray() on that if you don't want a mask.
Also for future reference, you can use drawMatches() to actually see the matches.
@berak How do you link opencv_xfeature2d?
There are useful answers, but I'll add my version (for OpenCV 3.X) just in case the above ones aren't clear (tested and tried):
Clone opencv from https://github.com/opencv/opencv to home dir
Clone opencv_contrib from https://github.com/opencv/opencv_contrib to home dir
Inside opencv, create a folder named build
Use this CMake command, to activate non-free modules: cmake -DOPENCV_EXTRA_MODULES_PATH=/home/YOURUSERNAME/opencv_contrib/modules -DOPENCV_ENABLE_NONFREE:BOOL=ON .. (Please notice that we showed where the contrib modules resides and also activated the nonfree modules)
Do make and make install afterwards
The above steps should work out for OpenCV 3.X
After that, you may run the below code using g++ with the appropriate flags:
g++ -std=c++11 main.cpp `pkg-config --libs --cflags opencv` -lutil -lboost_iostreams -lboost_system -lboost_filesystem -lopencv_xfeatures2d -o surftestexecutable
The important thing not to forget is to link the xfeatures2D library with -lopencv_xfeatures2d as shown on the command. And the main.cpp file is:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/xfeatures2d/nonfree.hpp"
using namespace cv;
using namespace std;
int main(int argc, const char* argv[])
{
    const cv::Mat input = cv::imread("surf_test_input_image.png", 0); // Load as grayscale

    Ptr<cv::xfeatures2d::SURF> surf = xfeatures2d::SURF::create();
    std::vector<cv::KeyPoint> keypoints;
    surf->detect(input, keypoints);

    // Add results to image and save.
    cv::Mat output;
    cv::drawKeypoints(input, keypoints, output);
    cv::imwrite("surf_result.jpg", output);
    return 0;
}
This should create and save an image with surf keypoints.
Azure redirects to azure test site
my website which is hosted on Azure, redirects to the Azure test site link when I click a link.
For example, my website is http://www.williampross.com. When I click on a link it goes to the Azure test site like: http://wpr.azurewebsites.net/programming-books/
I want it to go to main URL rather than the azurewebsites URL instead (which is also a valid link).
I looked at the source code and all the links have the wpr.azurewebsites.net address, and that's why users are redirected to that site. Can you share how you're creating these links?
In Azure under "Configure" -> "domain names" there are three things listed: www.williampross.com, williampross.com, wpr.azurewebsites.net
I also went to Namecheap where I bought there URL and there are 5 entries under advanced DNS:
Type          Host           Value
A Record      @              IP address (filled in)
CNAME Record  @              wpr.azurewebsites.net
CNAME Record  awverify       awverify.wpr.azurewebsites.net
CNAME Record  awverify.www   awverify.wpr.azurewebsites.net
CNAME Record  www            wpr.azurewebsites.net
Everything looks fine with the A record and CNAME records, which suggests that the Azure part of the setup is fine.
I believe that your issue is related to the WordPress setup. Log into your Dashboard, go to Settings > General and change the WordPress Address (URL) and Site Address (URL) to http://www.williampross.com (goes without saying... make sure you back up your database before you start tweaking).
Another quick point - once the above works, go back into the Azure Portal and add williampross.com to the domain list (you have already done www.williampross.com) and add the CNAME records to the DNS server. Otherwise, the site will load at www.williampross.com but not at williampross.com. There's no need to alter anything in WordPress beyond the original answer.
The WordPress solution worked! You were right that I needed to change those WordPress Address and Site Address fields. So not really an Azure issue after all. Thanks.
No probs. Can you mark the answer as correct (press the tick)?
Yea sure, clicked the click, still new to the site.
| common-pile/stackexchange_filtered |
Trigger tab with data-toggle and href
I have a tab-based page with the first tab open when the page is loaded.
The problem is that I want to create a link that opens and scrolls to one of the other tabs.
Here is the Tab code:
<li><a href="#advantages" class="tab-link" role="tab" data-toggle="tab">ADVANTAGES</a></li>
It works when you add #something at the end of the URL, but the problem is that you cannot trigger the tab opening from a button or link on the same page.
Example: if you want to lead someone from the page to open a tab other than the first.
The question is: is there a way to trigger and open a tab other than the first one?
Please create a working example of the problem you are facing
I do not think it is needed. I just want to know: can I trigger this kind of tab with jQuery or JavaScript?
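For what it's worth, a common approach with Bootstrap 3's tab plugin (assuming jQuery and Bootstrap's JS are loaded, as the data-toggle markup suggests) is to call .tab('show') on the nav link from your own click handler; the trigger link below is hypothetical:

```html
<!-- Hypothetical trigger link somewhere else on the page -->
<a href="#advantages" class="open-advantages">See the advantages</a>

<script>
  $('.open-advantages').on('click', function (e) {
    e.preventDefault();
    // Bootstrap's tab plugin: activates the pane the matching nav link points at
    $('.tab-link[href="#advantages"]').tab('show');
  });
</script>
```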
Invert a List of Character Vectors in R
What is an efficient way of inverting a list of character vectors, as shown below?
Input
lov <- list(v1=c("a", "b"), v2=c("a", "c"), v3=c("a"))
Expected
list(a=c("v1", "v2", "v3"), b=c("v1"), c=c("v2"))
Similar to Revert list structure, but involving vectors:
We can either convert the list to a data.frame (using stack or melt from library(reshape2)) and then split the 'ind' column by the 'values' in 'd1'.
d1 <- stack(lov)
split(as.character(d1$ind), d1$values)
Or if the above method is slow, we can replicate (rep) the names of 'lov' by the length of each list element (lengths gives a vector output of the length of each element) and split it by unlisting the 'lov'.
split(rep(names(lov), lengths(lov)), unlist(lov))
The second one is preferable, since it does not involve added packages.
@cannin The first one also doesn't need any packages. I was referring to melt which comes from reshape2. Otherwise, stack is a base R function.
Maven Build failure with Cannot find Symbol: class error
I had a working project when I uploaded it to GitHub, but now when I import it on another system it shows a build failure.
The project was uploaded from Win10 and now I am importing it on a Mac. It is a very basic Maven project, and I know there is some very basic mistake that I am failing to figure out.
Please find the dependencies here
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>UI</groupId>
<artifactId>Automation</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>Automation</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/io.cucumber/cucumber-java -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>6.9.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.cucumber/cucumber-junit -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-junit</artifactId>
<version>6.9.1</version>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.seleniumhq.selenium/selenium-java -->
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>3.141.59</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
After the import it does not recognize the Hooks class that I am trying to extend in other classes. And I get a "cannot find symbol: class" error in the build.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/nehasingh/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_<IP_ADDRESS>00610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [file:/Users/nehasingh/eclipse/java-2021-032/Eclipse.app/Contents/Eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/nehasingh/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_<IP_ADDRESS>00610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [file:/Users/nehasingh/eclipse/java-2021-032/Eclipse.app/Contents/Eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------------------< UI:Automation >----------------------------
[INFO] Building Automation 0.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ Automation ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/nehasingh/Downloads/AutomationUI-main/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.5.1:compile (default-compile) @ Automation ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 3 source files to /Users/nehasingh/Downloads/AutomationUI-main/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[10,29] package UI.Automation.stepdfn does not exist
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[14,31] cannot find symbol
symbol: class Hooks
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[3,29] package UI.Automation.stepdfn does not exist
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[5,32] cannot find symbol
symbol: class Hooks
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[19,49] cannot find symbol
symbol: variable driver
location: class UI.Automation.pageobjects.HomePage
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[32,35] cannot find symbol
symbol: variable driver
location: class UI.Automation.pageobjects.HomePage
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[11,17] cannot find symbol
symbol: variable driver
location: class UI.Automation.pageobjects.loginPage
[INFO] 7 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.837 s
[INFO] Finished at: 2021-04-17T00:03:28-03:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.5.1:compile (default-compile) on project Automation: Compilation failure: Compilation failure:
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[10,29] package UI.Automation.stepdfn does not exist
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[14,31] cannot find symbol
[ERROR] symbol: class Hooks
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[3,29] package UI.Automation.stepdfn does not exist
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[5,32] cannot find symbol
[ERROR] symbol: class Hooks
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[19,49] cannot find symbol
[ERROR] symbol: variable driver
[ERROR] location: class UI.Automation.pageobjects.HomePage
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/HomePage.java:[32,35] cannot find symbol
[ERROR] symbol: variable driver
[ERROR] location: class UI.Automation.pageobjects.HomePage
[ERROR] /Users/nehasingh/Downloads/AutomationUI-main/src/main/java/UI/Automation/pageobjects/loginPage.java:[11,17] cannot find symbol
[ERROR] symbol: variable driver
[ERROR] location: class UI.Automation.pageobjects.loginPage
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
This was working completely fine in Win10 but not working in MacOS.
Versions:
JAVA:16
Maven: 3.8.1
Please add the pom file and error message as code in your question.
Based on the error message package UI.Automation.stepdfn does not exist please make sure that the UI.Automation.stepdfn package exist in your application.
As @HelloPutra pointed out, looks like you have unresolved packages and classes in your code, make sure you have imported complete project and have not missed out any manual jars added.
I have all classes and packages in the system. This project worked on Windows 10, and this package exists.
select OnChange not working within a form
I'm still relatively new to all this so please bear with my hack-job code. I'm attempting to use JavaScript to toggle the visibility of a div on my webpage once the value of a dropdown menu is changed. For some odd reason, it is not working and I have exhausted every solution I have come up with. Google doesn't help too much, mainly because I don't know the correct terms to search for and what to look for.
Could anyone help me with this? Here is the code I am having issues with:
<script type="text/javascript">
function toggle_visibility(options) {
var e = document.getElementsById(options);
if(e.style.display == 'block')
e.style.display = 'none';
else
e.style.display = 'block';
}
</script>
<div class="form-group">
<label for="Services">Please select a service</label>
<select class="form-control" name="Services" onchange="toggle_visibility(options);">
<option>Please select...</option>
<option value="1">Regular Clean</option>
<option value="2">One-off Clean</option>
<option value="3">Spring Clean</option>
</select>
</div>
<div id="options" style="display: none;">
Thanks in advance!!
Open the console (hit F12) and look at your errors. Put a console.log("options", options) right before you try to get elements by id. You will see you already have the select DOM element, and that is what throws an error when you try document.getElementById. You should really look into "JavaScript debugging techniques"; it will definitely make your development life a little bit easier.
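To illustrate that logging step with a runnable sketch (the helper name is made up for this illustration): inside an inline handler like onchange="toggle_visibility(options)", the bare name options resolves through the element's own scope (a select has an options collection), so the function receives an object rather than the string id you meant to look up.

```javascript
// Hypothetical helper showing what a console.log would reveal: whether the
// handler received a string id (usable with getElementById) or already an
// object (no id lookup needed).
function describeArg(arg) {
  return typeof arg === 'string'
    ? 'a string id: safe to pass to getElementById'
    : 'already an object: no id lookup needed';
}

console.log(describeArg('options'));     // the case the question intended
console.log(describeArg({ length: 4 })); // stand-in for a select's options collection
```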
You have getElementsById, which isn't valid JavaScript, it should be getElementById. Also, your parameter and argument names are the same, so here's what I'd do:
<script type="text/javascript">
function toggle_visibility() {
var e = document.getElementById("options");
if(e.style.display == 'block')
e.style.display = 'none';
else
e.style.display = 'block';
}
</script>
<div class="form-group">
<label for="Services">Please select a service</label>
<select class="form-control" name="Services" onchange="toggle_visibility();">
<option>Please select...</option>
<option value="1">Regular Clean</option>
<option value="2">One-off Clean</option>
<option value="3">Spring Clean</option>
</select>
</div>
<div id="options" style="display: none;">
Or keep the param and just skip the entire document.getElementById line as they are already passing the triggering element. The second dom select is unnecessary
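A minimal sketch of that idea, accepting the element as a parameter instead of querying the DOM again (the pure helper and the stub element are illustrative, not from the thread):

```javascript
// The display-flipping rule is kept as a pure function so it can be
// demonstrated without a browser.
function nextDisplay(current) {
  return current === 'block' ? 'none' : 'block';
}

// The handler just receives the element to toggle.
function toggleVisibility(el) {
  el.style.display = nextDisplay(el.style.display);
}

// Stand-in for the <div id="options">; in markup this could pair with
// onchange="toggleVisibility(document.getElementById('options'))".
const stubDiv = { style: { display: 'none' } };
toggleVisibility(stubDiv); // display becomes 'block'
toggleVisibility(stubDiv); // display back to 'none'
```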
Thank you!!! This immediately fixed my problem. I've been trying to fix this for the past hour and a half; if only I'd known it was this simple, ahahah. Thank you again!
If you're seeking to alternate visibility per menu change, you have the correct pattern but are missing a few details:
getElementsById is not a function (it's getElementById, singular, since each id is unique).
options is an undefined variable, not a parameter like you think it is.
There is no content in the <div> element you're showing/hiding, so it's hard to verify that anything is working.
Your <div> has no closing tag.
Use camelCase instead of snake_case in JS (cosmetic but important).
function toggleVisibility() {
var e = document.getElementById("options");
e.style.display = e.style.display === 'block' ? 'none' : 'block';
}
<div class="form-group">
<label for="Services">Please select a service</label>
<select class="form-control" name="Services" onchange="toggleVisibility();">
<option>Please select...</option>
<option value="1">Regular Clean</option>
<option value="2">One-off Clean</option>
<option value="3">Spring Clean</option>
</select>
</div>
<div id="options" style="display: none;">Hello world!</div>
You may also consider researching event listeners and avoiding inline CSS in the interests of keeping your behavior, style and markup separate.
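As a hedged sketch of that suggestion (the selector string is an assumption, since the original markup gives the select no id), the same toggle can be wired with addEventListener instead of an onchange attribute:

```javascript
// Flip an element's display between 'none' and 'block'.
function toggleDisplay(el) {
  el.style.display = el.style.display === 'block' ? 'none' : 'block';
}

// Attach the handler from script rather than inline markup. Guarded so the
// snippet also runs (and can be demonstrated) outside a browser.
if (typeof document !== 'undefined') {
  const select = document.querySelector('select[name="Services"]'); // assumed selector
  const panel = document.getElementById('options');
  if (select && panel) {
    select.addEventListener('change', () => toggleDisplay(panel));
  }
}

// Demonstration against a plain object standing in for the hidden <div>:
const fakePanel = { style: { display: 'none' } };
toggleDisplay(fakePanel);
console.log(fakePanel.style.display); // 'block'
```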
The first problem is that options is meaningless in your onchange attribute. You can just pass that as a string to getElementById (Element, not Elements). If you are just showing/hiding the div any time the dropdown is changed, it's relatively straightforward:
<script type="text/javascript">
function toggle_visibility() {
var e = document.getElementById("options");
if (e.style.display == 'block') {
e.style.display = 'none';
}
else {
e.style.display = 'block';
}
}
</script>
<div class="form-group">
<label for="Services">Please select a service</label>
<select class="form-control" name="Services" onchange="toggle_visibility();">
<option>Please select...</option>
<option value="1">Regular Clean</option>
<option value="2">One-off Clean</option>
<option value="3">Spring Clean</option>
</select>
</div>
<div id="options" style="display: none;">
How to set up the start of the document like this? (noob here)
noob here.
I would like to ask you how to make the opening page of the document look like the following picture? I think everything else would be doable for me without any help, as there are only sections, bolds/italics and so forth, the basic stuff. I started writing the most basic LaTeX documents and already I feel pain when thinking I have to use a WYSIWYG editor.
Can you please give me the input? It's things like this that are the biggest hurdle for me to overcome, as I wish to fully switch to it once I'm ready; it would be great to do all of my work with it.
Thank you kindly for checking this out.
P.S. The border around the page could be inserted but is unnecessary. If it were to be made, can you please give me the command to make it appear on all the pages?
"I have to use a WYSIWYG editor." You mean non-WYSIWYG? Either way it's not really TeX's fault; the "ugly" way (i.e. repeatedly creating manual newlines or spaces to "force" the content into the correct position) is very easy (although it's almost the only possible way in e.g. Word, so you can choose to be smarter in TeX).
For the ugly way, you do need to know spacing - How can I force a \hspace at the beginning of a line? - TeX - LaTeX Stack Exchange and (optionally, to avoid warning) line breaking - How to put two newlines in LaTeX - TeX - LaTeX Stack Exchange though.
This should be quite easy. But please help us to help you by posting an MWE, that is to say a Minimal (non-)Working Example (starting with \documentclass and ending with \end{document}), showing what you have already tried, so that we can copy-paste to start with a compilable document which at least contains the text!
Here is a basic template for your setup:
\documentclass{article}
\usepackage{newtxtext}
\usepackage{graphicx,eso-pic}
\usepackage[margin=1in]{geometry}
\usepackage{fancyhdr}
\fancyhf{}% Clear header/footer
\renewcommand{\headrulewidth}{0pt}% Remove header rule
\fancyfoot[R]{\thepage}
\begin{document}
\thispagestyle{fancy}
% Add page border
\AddToShipoutPictureFG*{%
% Left rule
\AtPageLowerLeft{\hspace{2em}\rule[2em]{1.5pt}{\dimexpr\paperheight-4em}}%
% Right rule
\AtPageLowerLeft{\hspace{\dimexpr\paperwidth-2em-1.5pt}\rule[2em]{1.5pt}{\dimexpr\paperheight-4em}}%
% Bottom rule
\AtPageLowerLeft{\hspace{2em}\rule[2em]{\dimexpr\paperwidth-4em}{1.5pt}}%
% Top rule
\AtPageLowerLeft{\hspace{2em}\rule[\dimexpr\paperheight-2em]{\dimexpr\paperwidth-4em}{1.5pt}}%
}
\begin{center}
\large
Univerzitet u Ni\v{s}u
\bigskip
Filozofski fakultet
\bigskip
Departman za psihologiju
\vfill
\includegraphics[height=7\baselineskip]{example-image}
\vspace{3\bigskipamount}
\textbf{\Huge Anamneza}
\bigskip
{\Large Psihopatologija posebni deo\par}
\vfill
Mentor: Nikola \'{C}irovi\'{c}\hfill
Student: Danica Lezi\'{c}, 2256
\vfill
Avgust, 2021, godine
\end{center}
\end{document}
A Tikz solution:
The border is a separate picture, going into the footer, so appears on every page.
MWE
\documentclass{article}
\usepackage{graphicx}
\usepackage[margin=1in]{geometry}
\usepackage{fontspec}
\setmainfont{Noto Serif}
\usepackage{tikz}
\usetikzlibrary {positioning}
\newcommand{\gborder}{\tikz[remember picture,overlay]
\draw [black, line width=0.8mm]
([xshift=2em,yshift=2em]current page.south west)
rectangle
([xshift=-2em,yshift=-2em]current page.north east)
;}
\usepackage{fancyhdr}
\fancyhf{}% Clear header/footer
\renewcommand{\headrulewidth}{0pt}% Remove header rule
\fancyfoot[R]{\gborder\thepage}
\begin{document}
\pagestyle{fancy}
\begin{tikzpicture}[remember picture,overlay]
\node (uniname) at ([yshift={\dimexpr0.5\paperheight-8em}]current page.center) { \large Univerzitet u Nišu};
\node [below=of uniname,yshift=2em] (facname) {\large Filozofski fakultet};
\node [below=of facname,yshift=2em] (deptname) {\large Departman za psihologiju};
\node (cenpage) at (current page.center) {};
\node [above=of cenpage] (image) at (current page.center) {\includegraphics[height=7\baselineskip]{example-image}};
\node [below=of cenpage] (title) {\textbf{\Huge Anamneza}};
\node [below=of title] (subtitle) {\Large Psihopatologija posebni deo};
\node [below=of subtitle,xshift={\dimexpr-0.5\paperwidth+12em},yshift=-5em] (mentor) {\large Mentor: Nikola Ćirović};
\node [below=of subtitle,xshift={\dimexpr0.5\paperwidth-12em},yshift=-5em] (mentor) {\large Student: Danica Lezić, 2256};
\node (dateline) at ([yshift={\dimexpr-0.5\paperheight+8em}]current page.center) {Avgust, 2021, godine};
\end{tikzpicture}
\newpage
x
\newpage
x
\end{document}
How to get value of a button type button with jQuery?
I use <button type="button"> instead of <input type="button">.
I simply wish to get the value/content of the selected/active button, with jQuery, on button click and on page load.
These are my buttons:
<button type="button" class="green test">Button 1</button>
<button type="button" class="green active test">Button 2</button>
I know I must use the $(".test:button") selector, but I don't know how to get the button content.
Hi guys,
the problem is I have more buttons with the same class and I need to know the value of the SELECTED button.
In my example I have 'Button 2' with class="green active test".
Is there a way to know the value of a selected button OR a button whose class contains "active"?
check my edit on how to get the button containing active
Hi @Anton,
the problem is that I have more buttons with the same "active" class, e.g.:
Button 1
Button 2
Button 3
Button 4
So, I cannot use only the .active selector
So you want to select a button which has green or red and active?
yes! I wish to select by multiple classes, e.g. 'green' AND 'active'
I'll update my answer
Use .text()
$(document).ready(function(){
alert($('.test[type="button"]').text());
$('.test[type="button"]').on('click',function(){
alert($(this).text());
});
});
DEMO
Edit
var text = $('button.active').text();
Edit
var redsText = $('button.red.active').text();
var greensText = $('button.green.active').text();
$('button').text();
or
$('button').html();
Try
$('button.test').click(function() {
alert($(this).text());
});
WORKING DEMO
<head>
<script type="text/JavaScript" src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
</head>
<button type="button" class="green test">Button 1</button>
<button type="button" class="green active test">Button2</button>
<script>
$(function(){
$(".green").click(function(){
$(this).addClass('active');
$(this).siblings().removeClass("active")
alert($(this).text());
})
})
</script>
JBoss AS 7: Log4j in web application does not change log level
Here is content of files
classes/log4j.properties
log4j.rootCategory=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - <%m>%n
# Enable web flow logging
log4j.category.org.springframework.webflow=DEBUG
log4j.category.org.springframework.faces=DEBUG
log4j.category.org.springframework.binding=DEBUG
log4j.category.org.springframework.transaction=DEBUG
pom.xml
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${org.springframework-version}</version>
<exclusions>
<exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
For each dependency that contains commons-logging or SLF4J, an exclusion is made.
WEB-INF/jboss-deployment-structure.xml
<jboss-deployment-structure>
<deployment>
<exclusions>
<module name="org.apache.log4j" />
</exclusions>
</deployment>
</jboss-deployment-structure>
When I start the server in Eclipse, on the console I still see only INFO and WARN logs. I need DEBUG logging for the Spring framework. What is wrong with this configuration?
Nothing much is clear about it. Seems like a bug in AS7.
To have an app-specific logging level you need to
configure it in your app in WEB-INF/classes/logging.properties, log4j.properties or log4j.xml.
Refer to Ondrej Ziska's comment at AS7-514 for detailed info.
On JBoss 7.1.1.Final this solution does not work. Tomorrow I'll try it on the 7.1.0 and 7.1.2 versions of JBoss, which are mentioned in the middleware blog post referenced by Ondrej Ziska in his post.
@BartekM a piece of advice: use AS7 nightly builds, just click on the last successful artifact here: https://ci.jboss.org/jenkins/job/JBoss-AS-7.x-latest/ . The AS 7.1.1 that was released is continuously getting a lot of bug fixes, so it's advisable.
Thanks for the advice. I downloaded the latest version, jboss-as-7.2.0.Alpha1-SNAPSHOT, from Jenkins; then I encountered a problem with Hibernate. At the moment I run my application on jboss-as-web-7.0.2.Final and there logging in the application works correctly.
Great, so logging is resolved. What sort of Hibernate issue? I suppose it should be some configuration mismatch.
When I configure only log4j in the application, it looks like logging is off. The console shows only a few logs from JBoss at INFO level. So I added META-INF/jboss-deployment-structure.xml with a structure like in the question to get the configuration from the application, and then I get log4j:ERROR A "org.apache.log4j.ConsoleAppender" object is not assignable to a "org.apache.log4j.Appender" variable and log4j:ERROR Could not instantiate appender named "stdout" when persistence.xml is read. So I found this could be caused by a duplicate log4j library. Everywhere I have only one copy of the library, and the module is excluded from the deployment.
@BartekM, log4j is the default logger for JBoss, so 1) you don't need its JARs in your application (unless there are specific version requirements); 2) just use log4j in your application: you don't need to place any JARs or configure jboss-deployment-structure.xml;
3) or you could be looking for something like this
From the JBoss wiki page on configuring your application using log4j at https://docs.jboss.org/author/display/AS71/How+To:
I see in step 2, Include the log4j.properties or log4j.xml file in the lib/ directory in your deployment.
You had your log4j.properties in the classes/ directory. Either your log4j properties file is not correctly picked up (a configuration error) or the documentation is incorrect.
If it does not work from the lib/ directory, the JBoss documentation needs to be fixed.
lib/log4j.properties doesn't work; looks like the JBoss documentation should be fixed.
The configuration is included in the logging subsystem of AS7, e.g. in domain.xml or standalone.xml depending on the profile and mode.
You need to add a log category for org.spring and increase the log level threshold of the console handler.
<subsystem xmlns="urn:jboss:domain:logging:1.1">
<console-handler name="CONSOLE">
<level name="DEBUG"/>
...
</console-handler>
...
<logger category="org.spring">
<level name="DEBUG"/>
</logger>
...
</subsystem>
I know about defining logging in standalone.xml; there is also logging.properties, which is used when the logging subsystem is not defined in standalone.xml (assuming that I am using the standalone JBoss config). But the problem is when I want to configure logging in the deployed application. I use jboss-deployment-structure.xml included in WEB-INF; the task of this file is excluding JBoss modules. I found out today that the default logging module in JBoss is org.jboss.as.logging, but when I add it to the exclusions nothing happens.
Dedicated NVidia GPU is not running when playing games on Ubuntu 20.04
So I've got myself a new ASUS TUF Gaming FX705DT and absolutely want to avoid installing Windows 10. I installed Ubuntu 20.04, had a minor problem with the Wi-Fi driver, then installed Steam and War Thunder through it. I ran the game with nouveau and it crashed, so I had to install an NVidia proprietary driver. I tried 418, 430, 440, 450 and 455. None of them make the game run on the NVidia GPU; the game runs on the integrated GPU (giving 10-29 FPS). The first 3 versions (418, 430 and 440) give a blank NVIDIA X Server Settings program. The other 2 (450 and 455) give a short menu with no PRIME. When running nvidia-settings the message says PRIME: is it supported? no.
I have spent more than 10 hours trying to fix this but no luck so far. Can anyone help me, please?
Here are my specs:
System:
Host: ASUS-FX705DT Kernel: 5.4.0-52-generic x86_64 bits: 64
Desktop: Gnome 3.36.4 Distro: Ubuntu 20.04.1 LTS (Focal Fossa)
Machine:
Type: Laptop System: ASUSTeK product: TUF Gaming FX705DT_FX705DT v: 1.0
serial: <superuser/root required>
Mobo: ASUSTeK model: FX705DT v: 1.0 serial: <superuser/root required>
UEFI: American Megatrends v: FX705DT.310 date: 12/24/2019
Battery:
ID-1: BAT0 charge: 64.3 Wh condition: 64.0/64.1 Wh (100%)
CPU:
Topology: Quad Core model: AMD Ryzen 7 3750H with Radeon Vega Mobile Gfx
bits: 64 type: MT MCP L2 cache: 2048 KiB
Speed: 1249 MHz min/max: 1400/2300 MHz Core speeds (MHz): 1: 1229 2: 1279
3: 1252 4: 1250 5: 1225 6: 1227 7: 1225 8: 1224
Graphics:
Device-1: NVIDIA TU117M [GeForce GTX 1650 Mobile / Max-Q] driver: nvidia
v: 450.80.02
Device-2: AMD Picasso driver: amdgpu v: kernel
Display: x11 server: X.Org 1.20.8 driver: amdgpu,ati,nvidia
unloaded: fbdev,modesetting,nouveau,vesa resolution: 1920x1080~120Hz
OpenGL: renderer: AMD RAVEN (DRM 3.35.0 5.4.0-52-generic LLVM 10.0.0)
v: 4.6 Mesa 20.0.8
Audio:
Device-1: AMD Family 17h HD Audio driver: snd_hda_intel
Sound Server: ALSA v: k5.4.0-52-generic
Network:
Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet
driver: r8169
IF: enp2s0 state: down mac: d4:5d:64:64:1b:03
Device-2: Realtek RTL8821CE 802.11ac PCIe Wireless Network Adapter
driver: rtl8821ce
IF: wlp4s0 state: up mac: d8:c0:a6:30:5b:c7
Drives:
Local Storage: total: 476.94 GiB used: 58.89 GiB (12.3%)
ID-1: /dev/nvme0n1 vendor: Micron model: 2200V MTFDHBA512TCK
size: 476.94 GiB
Partition:
ID-1: / size: 467.96 GiB used: 58.88 GiB (12.6%) fs: ext4
dev: /dev/nvme0n1p2
Sensors:
System Temperatures: cpu: 52.1 C mobo: N/A gpu: amdgpu temp: 52 C
Fan Speeds (RPM): cpu: 2400
Info:
Processes: 315 Uptime: 11m Memory: 7.65 GiB used: 1.99 GiB (26.0%)
Shell: bash inxi: 3.0.38
nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 1650 Off | 00000000:01:00.0 Off | N/A |
| N/A 46C P8 1W / N/A | 11MiB / 3911MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 837 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 1410 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
nvidia-settings output:
(nvidia-settings:6231): GLib-GObject-CRITICAL **: 23:55:47.314: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
ERROR: nvidia-settings could not find the registry key file. This file should
have been installed along with this driver at
/usr/share/nvidia/nvidia-application-profiles-key-documentation. The
application profiles will continue to work, but values cannot be
prepopulated or validated, and will not be listed in the help text.
Please see the README for possible values and descriptions.
** Message: 23:55:47.367: PRIME: No offloading required. Abort
** Message: 23:55:47.367: PRIME: is it supported? no
I really wish you the best of luck on this; using your NVidia GPU on Ubuntu can be quite challenging. Could you give us the output of nvidia-smi?
@StatisticDean I couldn't insert the output into this comment so I edited my main post and put it at the bottom. Please check.
Could you edit in the output of nvidia-settings as well?
The nvidia-smi output looks like you are running on the NVidia 450 driver, so look in Steam settings for the GPU to use, or maybe just reinstall Steam now that the NVidia driver is in place.
@StatisticDean Ok, I have added the output. Sorry it took so long.
@ubfan1 Yes, I am now using __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia %command% in SET LAUNCH OPTIONS. The video card is used now but only in "Ultra Low" for War Thunder. The FPS is much higher (70-90) but that's still not good.
Manipulate Object with JS
I have this object:
const data = {
Jan: [{product: 'Shirt', num: '13'}, {product: 'Shoes', num: '15'}],
Feb: [{product: 'Shirt', num: '22'}, {product: 'Shoes', num: '1'}],
Mar: [{product: 'Shirt', num: '15'}, {product: 'Shoes', num: '25'}]
}
I need to create another object that looks like this:
const data = {
labels: ['Jan', 'Feb', 'Mar'],
datasets: [
{
label: 'Shirt',
data: [13, 22, 15]
},
{
label: 'Shoes',
data: [15, 1, 25]
}
]
}
The object above is for Chart.js. The data array in each object of the datasets array corresponds to the values for that product per month.
Thank you in advance
Where do the colours come from? Where's your effort so far?
The colors are irrelevant. I will delete them
The labels are Object.keys(data). The datasets are grouped. You can find dozens of duplicates.
What have you tried and what didn't work as expected? I imagine you could get your labels from the keys in the object you have. And you could loop over the data you have (maybe more than once) to build your datasets and the data within them.
Yes, the labels are fine with Object.keys(). I have tried looping with a for...in, but I am failing to see how to build up the datasets.
For the datasets you can use Javascript group objects by property with Object.values(data).flat()
You can build the dataset using Array.prototype.reduce and then assemble the new data object.
Note that you have to flatten the array, as Object.values(data) gives you an array of arrays.
const data = {
Jan: [{product: 'Shirt', num: '13'}, {product: 'Shoes', num: '15'}],
Feb: [{product: 'Shirt', num: '22'}, {product: 'Shoes', num: '1'}],
Mar: [{product: 'Shirt', num: '15'}, {product: 'Shoes', num: '25'}]
};
// Iterate through the data object's value and flatten this
// For example the Object.values(data) will provide you-
// [[{product: 'shirt', num: '13'}, {product: 'Shoes', num: '15'}], [{product: 'Shirt', num: '22'}, {product: 'Shoes', num: '1'}]] so on and so forth
// Need to make this a linear array using flat(1)
const dataset = Object.values(data).flat(1).reduce((acc, curr) => {
// Check if the product exists on the accumulator or not. If not then create a new
// object for the product.
// For example, this will create - {'Shirt': {label: 'Shirt', data: [13]}}
if (acc[curr.product] === undefined) {
acc[curr.product] = {
label: curr.product,
data: [Number(curr.num)]
}
} else {
// If the product already exists then push the num to the data array
acc[curr.product].data.push(Number(curr.num))
}
return acc;
}, {});
const newData = {
labels: Object.keys(data), // Set the keys as the labels
dataset: Object.values(dataset) // dataset is an object, just extract the values as an array
}
console.log(newData);
.as-console-wrapper{ min-height: 100vh!important; top: 0;}
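If reduce feels dense, the same grouping can be sketched with plain loops (same input shape as the question; the variable names are just illustrative):

```javascript
const data = {
  Jan: [{ product: 'Shirt', num: '13' }, { product: 'Shoes', num: '15' }],
  Feb: [{ product: 'Shirt', num: '22' }, { product: 'Shoes', num: '1' }],
  Mar: [{ product: 'Shirt', num: '15' }, { product: 'Shoes', num: '25' }],
};

// Group the numbers by product, one pass over each month's entries.
const byProduct = {};
for (const month of Object.keys(data)) {
  for (const { product, num } of data[month]) {
    if (!byProduct[product]) byProduct[product] = { label: product, data: [] };
    byProduct[product].data.push(Number(num));
  }
}

const chartData = {
  labels: Object.keys(data),          // ['Jan', 'Feb', 'Mar']
  datasets: Object.values(byProduct), // [{label: 'Shirt', data: [13, 22, 15]}, ...]
};

console.log(JSON.stringify(chartData, null, 2));
```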
Thank you @Sajeeb. I will spend all the time I need to understand this, and learn from it.
I've just written some comments so that you can understand it easily.
Should I use NFC, RFID or something else?
I'm a web developer - so IoT is not my speciality at all - and I've been asked to find the cheapest and most efficient way (in this order of priority) to build a gizmo for a sport event (can't be more specific).
This is how it should work :
A competitor wears a wristband carrying his unique ID.
At one place there is a terminal which will scan the wristband once in contact, so organizers will know at what time the competitor arrived at this terminal via a web app.
The competitor must stay 3 seconds at the terminal and can't just extend their arms forward; they must be at the terminal.
The competitor is notified that his wristband has been successfully scanned and can now move to the next terminal.
And so on
So my question is: what should I use for the wristband and the terminal, knowing that the bracelets are throwaways?
EDIT - More details :
Competitors can't have their phone nor any device with them during the event.
There will be between 40 and 50 terminals max
I've been asked for the cheapest solution but I don't have a min/max cost and I'm not limited by dev time (must be reasonable though)
Seems like you're going to need to find an off-the-shelf solution - and you're looking to confirm the type of technology?
@SeanHoulihane What do you mean by "off-the-shelf"? I'm aware that there is no cheap solution 100% matching my needs; that's why I want to know what such a project would take and whether it's possible to assemble it myself based on an Arduino/Raspberry Pi/[any microcontroller] solution.
@AdrienXL I think Sean's asking if it might be easier and cheaper to buy a ready-made solution such as these (£95 for 100 RFID wristbands), and then you'd only need to develop a reader of some description. Many modern phones have RFID reading capabilities, which might be good enough for your use case. Would something like this be acceptable, or do you have a specific price/complexity constraint?
@Aurora0001 oh sorry if I misunderstood Sean. I've been googling around and I've concluded that those kinds of wristbands are indeed the cheapest solution. However, competitors can't have their phones with them, and I need the "terminals" to be the RFID readers and only them. The best solution I've found so far is an RFID reader + 433 MHz emitter on an Arduino board + a computer with a receiver. But I'm afraid of the cost and the time it will take if I have to assemble the Arduinos myself (I need 50 terminals).
13.56 MHz seems to be a pretty good standard RFID frequency to target. There are readers like this one to be had for super cheap that can interface with whatever MCU (Arduino, ESP8266, Teensy, etc...) or micro computer (Ras-Pi, CHIP, etc..) you can get your hands on and program. Security and prevention of cheating is up to you and your skillz.
It wasn't clear quite where this question fits in the product domain, how much developer time you have to spend (vs. cash), nor what reliability and scale of deployment you need. The use case seems defined well enough. Guessing you're not developing a product in its own right, more a one-off?
@SeanHoulihane Indeed, I'm trying to make a proof of concept first, but close enough to reality regarding costs and time. If the project goes ahead, it will only be used by my client and won't be available for sale.
How far away from your base computer do the terminals need to be? Does it need to be a relatively real-time system or can the check-ins be cached for a few seconds?
If you could get away with the range of WiFi, and the potential latency of an MQTT message (a good protocol if you need QoS), I think an ESP8266 microcontroller with one of these RFID readers would be a nearly ideal setup.
(I personally have a couple of Wemos D1 Minis; note this is not the cheapest they can be found, but I try not to promote knock-offs.)
I've primarily used the NodeMCU firmware, but there's no baked-in library for PN532 RFID chips, so you'd have to read/write I2C/SPI registers manually. Adafruit has a library for the Arduino IDE, but it only works with I2C (it seems under-tested / under-developed for the ESP8266).
One of the benefits of a setup like this is that you could quite easily make these battery powered with a USB battery bank (watch out, because some turn off if they don't sense enough current draw).
If I were to build these with parts from aliexpress (super cheap) this would be my shopping list:
project box: ~$1-$2
esp8266 module: ~$4
pn532 RFID module: ~$5
battery bank: ~$11
misc (hot glue, solder, blood, sweat, tears): ~$5
total: <$30 per terminal (very rough estimate)
Then for deployment, you'd need some sort of decent WiFi access point that can handle a bunch of lightweight connections (some have a cap on the number of connections) and probably a laptop running the MQTT host and your web app server.
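To make the web-app side concrete, here is a hypothetical sketch of the check-in message each terminal could publish over MQTT once a wristband has been held at the reader long enough; the topic layout, field names and the 3-second check are illustrative assumptions, not a real protocol:

```javascript
// Build the message a terminal would publish for one scan, or null if the
// competitor did not stay at the terminal for the required 3 seconds.
// terminalId, tagUid and the topic scheme are made-up examples.
function buildCheckin(terminalId, tagUid, heldMs, now = Date.now()) {
  if (heldMs < 3000) return null; // not held long enough: no check-in
  return {
    topic: `event/checkin/${terminalId}`,
    payload: JSON.stringify({ terminal: terminalId, tag: tagUid, at: now }),
  };
}

console.log(buildCheckin('T07', '04:a2:3b:1c', 3200, 1500000000000));
console.log(buildCheckin('T07', '04:a2:3b:1c', 1200)); // null: not counted
```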
Any view on RFID over NFC? I don't see much driving the choice other than component availability...
@SeanHoulihane the comment section led me to spec a system around these wristbands.
The definition of NFC vs. RFID is somewhat blurred... you really just have to pick a target protocol (in this case MIFARE Classic 1K). Also, component availability is not something that can ever be ignored; I would say it's actually one of the most important factors.
This is awesome @Aaron (even though you have underestimated tears a lot :P). Thank you! The whole area will be covered by WiFi and there's no need for real real-time, so it seems that your answer fits my needs perfectly. (And thank you Sean too!)
@AdrienXL hopefully you can prove yourself to this client of yours and charge lots for your tears xD
that and where difference
Would you mind explaining to me which one is correct in the following sentences?
The place where we held the dinner last time was small and smelly.
The place that we held the dinner last time was small and smelly.
Generally, with places use "where". With things: "The food that they served there was really bad." And with people: "The person who ran the place could not cook anything well."
What about "the place that I called home"? Should it be "the place where I called home"?
home is tricky because it has several meanings in context, for example "the physical house" vs. "the heart and soul of your family", etc. You might add another example sentence using "home" to your question so it can be addressed in answers.
1. The place where we had dinner last time was small and smelly.
2. This is the place which/that I call "home".
Whenever you want to find out the proper conjunction, try to ask questions.
1. We had dinner. Where did we have dinner?
2. I call "home". What do I call "home"?
I hope you see the difference.
No Voltage on IR21844S HIGH OUTPUT (HO) pin
I need help, please.
My IR21844S was working. When SD (ShutDown, active low) is driven with a high logic level (5V) and Vin with a low logic level (0V), the IR21844S Low Side (LO) = Vcc and the High Side (HO) = 0V, which "was" correct. And...
When both SD (ShutDown, active low) and Vin are driven with a high logic level (5V), the IR21844S Low Side (LO) = 0V and the High Side (HO) = Vcc, which "was" also correct.
Currently, when SD (ShutDown, active low) is driven with a high logic level (5V) and Vin with a low logic level (0V), the Low Side (LO) = Vcc and the High Side (HO) = 0V, which is correct.
But when both SD (ShutDown, active low) and Vin are driven with a high logic level (5V), the Low Side (LO) = 0V and the High Side (HO) = 0.790V - 0.897V, varying with the Vin PWM signal, which is not what it is supposed to be.
In a nutshell, the IR21844S High Side (HO) no longer outputs its voltage when both SD (ShutDown, active low) and Vin are driven with a high logic level (5V), while the Low Side (LO) = 0V.
What I noticed I did wrong was that I did not connect the SD(ShutDown active Low) to any Logic Level(High or Low) which I know it shouldn't affect it because it's automatically connected to the GND according to its datasheet Functional Block Diagram but I only connected the Vin to High Logic Level(5V).
Secondly, I used a 5 V 4.5 Ah (20 hr) lead-acid battery as my logic-level supply.
The logic could be 3.3 V or 5 V.
The 5 V logic level is applied to the SD (active-low) pin to enable the chip, and to the Vin (PWM) pin to switch the half-bridge from its initial state LO = 1 & HO = 0 to LO = 0 & HO = 1, so that the motor spins clockwise or counterclockwise according to how the chip is driven.
Now, if the enable pin is HIGH (5 V) and either the PWM1 pin or the PWM2 pin is HIGH (with a PWM voltage), there should be movement, right? Yes, which is equivalent to LO = 0 & HO = 1.
But my own case is different: if the enable pin is HIGH (5 V) and either PWM1 or PWM2 is HIGH (with a PWM voltage), there is no movement. I traced where the problem was and found that it comes from the driver high output (HO) sitting at 0.790 V, which cannot drive the gate of the transistor; so the state is now LO = 0 & HO = 0.
I hope you clearly understand the problem now.
Please help me out...
Looking forward to your responses.
Add a schematic and lose a lot of the words.
Does Breeze js care what version of Json OData you are using?
I was wondering if Breeze js requires the use of JSON verbose (OData version 2.0) or JSON light (version 3.0), or if it can accept both of these OData JSON formats. Is one safer to use than the other? Also, is it possible to consume Atom OData (just curious, not using it in my application)? Thanks
I'm guessing light is really all Breeze would need, since it really only cares about the metadata from what I've seen. Just want to be sure before I fully commit, though.
Breeze currently uses the datajs library to provide support for any OData services. So we are constrained by whatever formats this library understands. (and I'm not sure how this library handles Atom OData either)
Note that this is only true for OData services, for any Web Api controller provided services JSON.NET provides a serialization format that is very comparable to the OData light format.
We have looked at providing a processing option that bypasses the datajs library and processes OData payloads directly but we aren't there yet.
An issue when authorizing Facebook pages in Bot Framework
The issue originates in the bot framework itself. When I add a new Facebook channel for the first time, everything works as expected. What I am trying to do is connect multiple FB pages to the same bot; I read somewhere that you can do this by re-entering the page info and clicking "resubmit". The problem is that clicking "resubmit" without first clicking "Deauthorize" causes a problem: when I analyzed the request with the browser's inspector, it seems that the EnableChannelForBot method throws an error.
Also, we are developing a service where users can register and link their FB pages to the bot, just like ChatFuel or any other famous bot platform. The main problem is that the bot framework asks for a specific page id and access token per FB bot, and you must enter them manually through the bot framework dashboard. Can we have an easy way to register the bot with multiple FB pages without having to do so manually, through an API or something similar? Please work with us to provide a solution for this as soon as you can; Bot Framework is vital to our work, and migrating to another SDK would be very costly and time-consuming.
I don't think that connecting multiple Facebook pages to the same bot is supported. Where did you read that?
Also, there is no API currently you can leverage on to register your bot/enable Facebook channel.
It was mentioned here:
https://github.com/Microsoft/BotBuilder/issues/1495
Scroll to the solution provided by Vrety
Also, a more elegant way to handle the exception would be expected from the BF devs, not just a disabled button and a console error!
Is a particular meaning of "to decline" missing a noun?
When I check with the net about to decline, I see a number of verbs with various meanings. At the same place, I can also see a number of nouns corresponding to them.
to decline ("to grow smaller") has a decline ("a change toward something smaller or lower")
However, #9 and #10 seem not to have a noun "a decline" or "a declination" listed next to them. It seems so random, and I want to verify that it's not a case of a sloppy database or a PICNIC issue.
to decline: to show unwillingness towards
to decline: to refuse to accept
No, it's right.
Language is what it is, not what somebody thinks it ought to be. There is no established nominalisation of decline in those senses.
Looking at declination in the iWeb corpus, in the first 100 examples, over 90 are the technical astronomical or related senses; five or six represent your last two meanings (and several of those are in legal contexts). And one is talking about grammar, and clearly means declension.
Interesting. What would be a stylistically proper noun'ization of the following: click here to decline the invitation, which is then followed by confirmation of your ???. I've considered rejection and refusal as well as dismissal, but all of them sound so negatively charged and formal. For certain reasons we're required to use a noun instead of a verb, and ing-ing it won't do, so I can't use of your rejecting nor of your declining.
@KonradViltersten: I think I would use your decline in that case. There are a few instances of your decline in that sense in the iWeb corpus, and several more where decline has that sense and is used as a modifier, where it must be parsed as a noun and not a verb (e.g. your decline letter).
iPhone Objective C - error: pointer value used where a floating point value was expected
I do not understand why I am getting this error. Here is the related code:
Photo.h
#import <CoreData/CoreData.h>
@class Person;
@interface Photo : NSManagedObject
{
}
@property (nonatomic, retain) NSData * imageData;
@property (nonatomic, retain) NSNumber * Latitude;
@property (nonatomic, retain) NSString * ImageName;
@property (nonatomic, retain) NSString * ImagePath;
@property (nonatomic, retain) NSNumber * Longitude;
@property (nonatomic, retain) Person * PhotoToPerson;
@end
Photo.m
#import "Photo.h"
#import "Person.h"
@implementation Photo
@dynamic imageData;
@dynamic Latitude;
@dynamic ImageName;
@dynamic ImagePath;
@dynamic Longitude;
@dynamic PhotoToPerson;
@end
This is in a mapViewController.m class I have created. If I run this, the CLLocationDegrees CLLat and CLLong lines:
CLLocationDegrees CLLat = (CLLocationDegrees)photo.Latitude;
CLLocationDegrees CLLong = (CLLocationDegrees)photo.Longitude;
give me the error : pointer value used where a floating point value was expected.
for (int i = 0; i < iPerson; i++)
{
    // get the person that corresponds to the row indexPath that is currently being rendered and set the text
    Person *person = (Person *)[myArrayPerson objectAtIndex:i];
    // get the photos associated with the person
    NSArray *PhotoArray = [person.PersonToPhoto allObjects];
    int iPhoto = [PhotoArray count];
    for (int j = 0; j < iPhoto; j++)
    {
        // get the first photo (all people will have at least 1 photo, else they will not exist). Set the image
        Photo *photo = (Photo *)[PhotoArray objectAtIndex:j];
        if (photo.Latitude != nil && photo.Longitude != nil)
        {
            MyAnnotation *ann = [[MyAnnotation alloc] init];
            ann.title = photo.ImageName;
            ann.subtitle = photo.ImageName;
            CLLocationCoordinate2D cord;
            CLLocationDegrees CLLat = (CLLocationDegrees)photo.Latitude;
            CLLocationDegrees CLLong = (CLLocationDegrees)photo.Longitude;
            cord.latitude = CLLat;
            cord.longitude = CLLong;
            ann.coordinate = cord;
            [mkMapView addAnnotation:ann];
        }
    }
}
Please show the compiler message, including the line of the error.
The actual compiler message is "error: pointer value used where a floating point value was expected"; I edited the original post and added the lines above the 3rd code block.
NSNumber is not a float type, but a pointer, so you need to do this to convert it:
CLLocationDegrees CLLat = (CLLocationDegrees)[photo.Latitude doubleValue];
CLLocationDegrees CLLong = (CLLocationDegrees)[photo.Longitude doubleValue];
CLLocationDegrees is a double. You should use the -doubleValue method.
This works, thanks. Am I not able to do (double)photo.Latitude; because I am trying to convert an NSNumber pointer to a value by casting? Thanks!
Yes, casting won't work here because like in c, it will just reinterpret the address the pointer is pointing to as a double, which will give you a nonsensical value (although it might let you compile.) This is a place to be thankful for the error.
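For readers coming from other managed languages, here is a hedged analogy in Java (not the asker's Objective-C): NSNumber is a boxed object much like java.lang.Number, and the accepted fix calls the identically named doubleValue method. One difference worth noting is that Java will auto-unbox in a cast, while Objective-C never does, which is exactly why the cast in the question is a compile error there.

```java
public class BoxedValueSketch {
    public static void main(String[] args) {
        // Stand-in for photo.Latitude: a boxed number, i.e. a pointer to an object.
        Number latitude = Double.valueOf(52.5);

        // Stand-in for [photo.Latitude doubleValue]: extract the primitive
        // through a method call rather than a pointer cast.
        double lat = latitude.doubleValue();

        System.out.println(lat); // prints 52.5
    }
}
```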
How to crop UIImage with fixed dimensions?
I have a UIImage with dimensions (0, 0, 320, 460).
How can I crop this image to the dimensions (10, 30, 300, 300)?
Set the bounds to 10, 30, 300, 300 + clipToBounds.
Target size in my code is always set to the full screen size of the device (so you have to change it).
@implementation UIImage (Extras)
#pragma mark -
#pragma mark Scale and crop image
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
    UIImage *sourceImage = self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit width; height overflows
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit height; width overflows
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else
        {
            if (widthFactor < heightFactor)
            {
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
            }
        }
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop

    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];

    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }

    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
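The scale-and-center arithmetic in that category can be checked in isolation. The following is a small Java transcription of just the math (names are mine, not UIKit's; unlike the Objective-C version it computes both offsets, one of which will simply be zero):

```java
import java.util.Arrays;

// Pure-math sketch of the aspect-fill ("scale and crop") step above: scale
// by the LARGER of the two factors so the image covers the target, then
// center the dimension that overflows the target rectangle.
public class AspectFillSketch {

    /** Returns {scaledWidth, scaledHeight, offsetX, offsetY}. */
    static double[] aspectFill(double w, double h, double targetW, double targetH) {
        double widthFactor = targetW / w;
        double heightFactor = targetH / h;
        double scale = Math.max(widthFactor, heightFactor);
        double scaledW = w * scale;
        double scaledH = h * scale;
        // One offset is 0; the other centers the overflowing dimension.
        double offX = (targetW - scaledW) * 0.5;
        double offY = (targetH - scaledH) * 0.5;
        return new double[] { scaledW, scaledH, offX, offY };
    }

    public static void main(String[] args) {
        // The asker's 320x460 image into a 300x300 crop: the width fits
        // exactly, the height overflows and is centered vertically.
        System.out.println(Arrays.toString(aspectFill(320, 460, 300, 300)));
        // prints [300.0, 431.25, 0.0, -65.625]
    }
}
```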
Hibernate can't read many to many mapping
I have a many-to-many mapping between users and rooms, as follows:
User.java
@Entity
@Table(name="users", indexes = {
@Index(unique=true, columnList="uid"),
@Index(unique=true, columnList="backendAccessToken"),
@Index(columnList="backendAccessToken", name="idx_backend")
})
public class User extends BasePersistable {
private static final long serialVersionUID =<PHONE_NUMBER>821424305L;
@Column(nullable=false, unique=true)
private String nickname;
@Column(nullable=false)
private Integer uid;
@Column(nullable=false)
private String backendAccessToken;
@Column
private String name;
@Column
@JsonIgnore
private String email;
@Column
private String location;
@Column
private String company;
@Column
private String avatar;
@Column
@JsonIgnore
private String accessToken;
@CreationTimestamp
private Date memberSince;
@ManyToMany(targetEntity=Room.class, cascade={ CascadeType.PERSIST, CascadeType.MERGE })
@JoinTable(name="room_users",
joinColumns={ @JoinColumn(name="user_id") },
inverseJoinColumns={ @JoinColumn(name="room_id") })
private List<Room> rooms = new ArrayList<>();
Room.java
@Entity
@Table(name="rooms", indexes = {
@Index(unique=true, columnList="uid"),
@Index(columnList="uid"),
@Index(columnList="fullName")
})
public class Room extends BasePersistable {
private static final long serialVersionUID = 1L;
@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
private Long id;
@Column(nullable=false)
private Integer uid;
@Column(nullable=false)
private String name;
@Column(nullable=true)
private String fullName;
@Column(nullable=false)
@Lob
private String description;
@Column(nullable=true)
private String homepage;
@Column(nullable=false)
private String owner;
@ManyToOne
private Organization organization;
@OneToMany(mappedBy="room")
@JsonIgnore
private List<Message> messages = new ArrayList<>();
@ManyToMany(mappedBy="rooms", targetEntity=User.class)
@JsonIgnore
private List<User> users = new ArrayList<>();
@Column(name="private")
private Boolean _private = false;
And when I try to create the schema, I'm seeing this error:
A Foreign key refering com.models.Room from com.models.User has the wrong number of column. should be 2
I did some research and tried to use the JoinColumns annotation on User, but it didn't work.
EDIT
I found the error: the class BasePersistable already defined the id. So I removed the id from Room and it worked. Thanks for the tip, Mateusz Korwel and kakashihatake.
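That diagnosis fits the "should be 2" in the error message: with an id declared in both BasePersistable and Room, Hibernate presumably derived a two-column identifier for Room. Here is a plain-Java sketch (no JPA on the classpath; the class names are reused purely for illustration) showing that re-declaring a field in a subclass does not replace the inherited one, so the class genuinely carries two id fields:

```java
import java.lang.reflect.Field;

public class DuplicateIdSketch {

    // Stand-ins for the entities in the question (no JPA annotations here).
    static class BasePersistable { protected Long id; }
    static class Room extends BasePersistable { private Long id; } // re-declared!
    static class FixedRoom extends BasePersistable { }             // after the fix

    /** Counts fields named "id" across the whole class hierarchy. */
    static int countIdFields(Class<?> c) {
        int n = 0;
        for (Class<?> k = c; k != null; k = k.getSuperclass()) {
            for (Field f : k.getDeclaredFields()) {
                if (f.getName().equals("id")) n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countIdFields(Room.class));      // prints 2
        System.out.println(countIdFields(FixedRoom.class)); // prints 1
    }
}
```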
Could you put full code of User and Room entities?
Your join column names need to refer to the classes' primary-key ids, so your Room class's id column name must be "room_id" and your User class's must be "user_id".
Room entity contains primary key consisting of the two fields?
@MateuszKorwel no, the pkey is id
@kakashihatake Sorry, I don't think so; since I'm actually creating the table via JoinTable, I can specify the field names.
Your User class does not include an id field; if BasePersistable has the id field, then this is wrong, because your Room class would end up with two id fields.
@LuizE. I suppose the id is declared inside BasePersistable, because User contains no field annotated with @Id. Or did I miss something?
Edited and added comments; Thanks guys!
How to get autocomplete and suggestions in Jupyter Notebook in VS Code?
I'm trying to code in a Jupyter Notebook in VS Code, but there are no code suggestions or syntax highlighting!
I am currently running python 3.10.
Install the Jupyter extension: search for ms-toolsai.jupyter in the Extensions panel.
If suggestions do not show, try pressing Ctrl+Space.
Hey, thanks for the answer. The extension is already installed. The weird thing is that it was working perfectly fine until a few days ago; it stopped showing suggestions just today.
@endi__edi Try reinstalling the extension.
How to properly translate the key phrase of Erdoğan's 2016 letter to Putin, "kusura bakmasınlar," to Russian
My question is this: What is the proper or most usual/standard way to translate the Turkish common phrase "kusura bakmasınlar" to Russian?
Let me now explain why I got interested as well as the meaning and usage of that Turkish phrase and the research I have done.
Russian media and the Russian presidential website claim that Turkey's President Erdoğan apologized for shooting down a Russian military plane in his 2016 letter addressed to Putin (link1),(link2), but a central Turkish news agency, Anadolu, claims that the letter "did not include any words of 'apology'" (link3).
According to Erdoğan's spokesman İbrahim Kalın, the exact original expression in the letter is as follows:
kusura bakmasınlar diyorum
"Diyorum" means "I say," and "kusura bakmasınlar" is a form of the common Turkish expression "kusura bakma." "Kusura" is the dative case of the noun "kusur," which is defined in Wiktionary as "defect, fault, flaw," whilst "bakma" is the second-person singular negative imperative of the verb "bakmak," which is the Turkish verb for "to look," so the literal meaning of "kusura bakma" is "do not look at the fault." Here is how a native Turkish speaker explains the usage of "kusura bakma":
For example, you are at your friend's house, you dropped the glass and it got broken, you would say: Kusura bakma, or Kusuruma bakma lit. please neglect my fault. Or perhaps, you met a friend of yours after a long time, he used to be married but not anymore, however you don't know that. You might say: "So, are you two still dreaming about having twins?". He would tell you that he got divorced, to which you could say: "Aah! Kusura bakma ya...". (Source)
Another native Turkish speaker told me via Facebook that the purpose of saying "kusura bakma" is to inform that the fault was unintentional. "Bakmasınlar" is the third-person plural negative imperative of the same verb "bakmak" (link) and literally means "they should not look," so "kusura bakmasınlar" literally means "the fault should not be looked at." This sounds different from the official Kremin translation ("извините") and seems to confirm the words of the Anadolu news agency that there was no apology.
What I am looking for is how "kusura bakmasınlar" is usually translated to Russian in, e.g., books or movies. The key question for me is whether the official Kremlin translation is different from the standard practice of translating that phrase to Russian.
I did my own research, but it has not been fruitful. In Reverso there are no Turkish-Russian translations. Reverso has a few Turkish-English translations of that phrase, but they are conflicting - "regrets" vs "apologies." On the Internet I found conflicting statements as to what Russian translation is correct, with the suggested variants being "не взыщите," "не обессудьте / не судите строго за ошибку," and "извините." I was unable to find any examples of translation of that phrase in books or movies.
краткость - сестра таланта ("brevity is the sister of talent")
@shabunc : Yes, I know, sorry.. I made a short version of the question at the very top. I don't know how to better explain the research I have done. It seems I am not talented...
I've closed this one because you're basically asking for translation without showing any prior research effort. You've spent too much time explaining what the Turkish phrase means but haven't made it clear why exactly you are still not satisfied.
@shabunc : I was afraid to add this, because it would make my question even larger, but now I will do that.
Honestly, in this particular case your "here's the question and here's some additional context for those who are interested" is actually cheating - you cannot expect the audience in general to know what the phrase means.
I gotta warn you that there's a very high probability it will be heavily edited in any case.
I have an answer; it took me less than a minute, but it's slang. In my opinion the question should be unblocked; it's fully valid and ON topic.
Let's give it a shot then, OK - that means that at least one user got the question.
Google translates kusura bakmasınlar as I'm sorry and its analogs into all the languages it has. Do you think the KGB edited it?
@shabunc : I just did the heavy editing myself by removing a lot of non-essential information and adding the research I have done. Please have a look whether the post is fine now. If it is not, please kindly edit it if you can.
@БаянКупи-ка it's reopened
@Mitsuko I really appreciate that - honestly, now it's way, way clearer - thank you for your effort!
The case reminds me of a somewhat similar incident between China and the US quite a few years ago. In the end the Americans said "We are sorry!" and the Chinese claimed "Yes! They apologized!"
Here is what good Russian-Turkish dictionaries printed before 2015 write about kusura bakma.
Юсипова Р.Р., Турецко-русский словарь, Москва,
Русский язык — Медиа, 2005. Около 80 000 слов и выражений. Страница 376:
The next one is by a famous Soviet turkologist Nikolai Baskakov:
Баскаков Н.А., Турецко-русский словарь, Москва, Русский язык, 1977. 48 000 слов. Страница 576:
As you can see, both dictionaries translate the phrase as простите, извините, and also as не взыщите which I like the most, since the Turkish expression really does not contain Turkish verbs for прощать or извинять. Не взыщите actually means the same, but it is constructed more like the Turkish phrase, with a negation, and it can be roughly translated as "don't make me pay [for it]; don't prosecute [me], do not hold it against me".
Взыскать on Wiktionary.
Thanks a lot, this is really an enlightening answer. So it is not uncommon to translate it as "извините" despite that "не взыщите" is closer.
My answer may sound flippant, but based on what I've managed to gather from your explanation, semantically (not stylistically) suitable Russian equivalents may be the slangy imperatives забей and не заморачивайся.
In more decent terms не обращай внимания, не бери (дурное) в голову.
The first 2 variants you listed are good literary alternatives.
My suggestions stem from my understanding of the phrase's connotation in the exchange between Erdogan and Putin. In the situation of a broken glass, the slangy and colloquial expressions I mentioned would sound inconsiderate and rude, just as they would in diplomatic correspondence. And they aren't suitable semantically at all in the situation with a divorced friend. In Russian in such a situation one would either say а, извини, не знал (women: ой, извини, не знала) or, without an apology, simply а, не знал (ой, не знала), because there's essentially nothing to apologize for. To give a semblance of an apology, сожалею can be said.
Thanks a lot, I now know more Russian phrases. But my question is about the actual practice. I want to compare Kremlin's translation with how the same phrase is usually translated in books and movies. The key question is whether the Kremlin altered the standard translation for political motives.
Do you really think the expressions you mentioned are really suitable in the situations with the glass and the divorced friend Mitsuko mentioned? As for me, your expressions are used not by the party that did something wrong, but by those who'd like to soothe the one who did wrong.
By the way, if I were to translate "kusura bakmasınlar" to Russian, I would say something like "не стоит придавать значение этому недосмотру." Maybe even "мы не нарочно это сделали"? I am really curious how this phrase (or at least "kusura bakma") is ACTUALLY translated to Russian in any Turkish sources other than that letter by Erdogan.
@Mitsuko you'd have to ask Russians versed in Turkish
@БаянКупи-ка : Aren't there any Turkish books translated to Russian, with the original text available? Then I could search for kusura bakma(sınlar) there and compare with the Russian translation.
@Mitsuko I'm almost sure there are, but unfortunately I am not familiar with any.
@Yellow Sky I don't disagree, but this is how the explanation has come across to me: if the guilty party says this, they will come off as rude.
I just read your update. Yes, kusura bakma is like sorry without formal apologies. After all, the friend who did not know about the divorce has nothing to apologize for, because he was not informed. The meaning is along the line, "there was no bad intent, so you should not draw any conclusions from the fault."
@Mitsuko kusura bakmasınlar may mean exactly the same thing but in a formal register, due to avoidance of direct address: by resorting to the 3rd-person imperative it aims to sound neutral.
There are actually THREE forms: (1) kusura bakma (second-person singular imperative, i.e., не смотри на ошибку), (2) kusura bakmayın (second-person plural imperative, also used as a polite form to address a single person just like in Russian, i.e., не смотрите на ошибку), (3) kusura bakmasınlar (third-person plural imperative; there is no direct grammatical Russian analogy, but it is similar to не надо смотреть на ошибку).
StackOverflow error in Spring Integration LoggingHandler
I'm facing a weird StackOverflowError, with not many details, thrown by org.springframework.integration.handler.LoggingHandler. This happens just after the flow terminates.
Does anyone have any idea what could be causing it?
2022-11-14T21:56:22,176 WARN [HikariPool-1 housekeeper] com.zaxxer.hikari.pool.HikariPool: HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=1h40m40s250ms340µs500ns).
ERROR [pool-19-thread-1] org.springframework.integration.handler.LoggingHandler: java.lang.StackOverflowError
at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:86)
at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:45)
at org.springframework.messaging.core.AbstractDestinationResolvingMessagingTemplate.resolveDestination(AbstractDestinationResolvingMessagingTemplate.java:78)
at org.springframework.messaging.core.AbstractDestinationResolvingMessagingTemplate.send(AbstractDestinationResolvingMessagingTemplate.java:71)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:465)
at org.springframework.integration.handler.AbstractMessageProducingHandler.doProduceOutput(AbstractMessageProducingHandler.java:325)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:268)
This is happening just after the flow terminates at savingLionProcessingRequestToDB.
I checked that the input channel and the output channel are not the same (which would cause a cycle).
I've debugged the flow: after the data is persisted in the termination step, i.e. in the DSL's .to(savingLionProcessingRequestToDB()) method, the thread goes into the starvation phase mentioned in the logs and throws the StackOverflowError.
Please find the scatter gather flow below:
@MessagingGateway
public interface LionGateway {
    @Gateway(requestChannel = "flow.input", replyChannel = "lionReplyChannel")
    Object processlionRequest(
            @Payload Message lionRequest,
            @Header("dbID") Long dbID,
            @Header(value = "reqNumber", required = false) String requestNumber);
}
// Main flow
@Bean
public IntegrationFlow flow() {
    return flow ->
            flow.handle(validatorService, "validateRequest")
                .split()
                .channel(c -> c.executor(Executors.newCachedThreadPool()))
                .scatterGather(
                    scatterer ->
                        scatterer
                            .applySequence(true)
                            .recipientFlow(saveLionDB())
                            .recipientFlow(preparingRequestFromLionRequest()),
                    gatherer -> gatherer.outputProcessor(prepareLionRequest()))
                .to(sampleFlow1());
}
@Bean
public IntegrationFlow sampleFlow1() {
    return flow ->
            flow.enrichHeaders(h -> h.replyChannel("responseChannel", true))
                .handle(
                    Http.outboundGateway(serviceURL, restTemplateConfig.restTemplate())
                        .mappedRequestHeaders("type", "name")
                        .httpMethod(HttpMethod.POST)
                        .expectedResponseType(response.class))
                .handle(lionsService, "lionService1")
                .logAndReply("Response");
}
Another flow being leveraged further in processing :-
@Bean
@BridgeFrom("responseChannel")
public MessageChannel lionChannel() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow lionSampleFlow() {
    return IntegrationFlows.from(lionChannel())
            .split()
            .channel(c -> c.executor(Executors.newCachedThreadPool()))
            .log("payload")
            .scatterGather(
                scatterer ->
                    scatterer
                        .applySequence(true)
                        .recipientFlow(getLionResponse())
                        .recipientFlow(prepareResponse()),
                gatherer -> gatherer.outputProcessor(sendLionResponse()))
            .to(savingLionProcessingRequestToDB());
}
@Bean
public IntegrationFlow savingLionProcessingRequestToDB() {
    return flow ->
            flow.filter(
                    conditionToPersistDBData,
                    f -> f.discardFlow(BaseIntegrationFlowDefinition::bridge))
                .channel(c -> c.executor(Executors.newCachedThreadPool()))
                .enrichHeaders(h -> h.errorChannel("lionErrorChannel1", true))
                .enrichHeaders(h -> h.replyChannel(lionReplyChannel(), true)) // defined at the gateway layer as the replyChannel
                .handle(
                    (payload, header) ->
                        lionsService.saveLionResponse(
                            header.get("dbID", Long.class),
                            header.get("reqNumber", String.class),
                            header.get("createdAt", Timestamp.class)))
                .logAndReply("persist_response");
}

public LionModel saveLionResponse(
        long dbId,
        String requestNumber,
        Timestamp createdAt) {
    // return lionProcessingRepository.save(lionModel);
}
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
Please find edited query
Please learn how to ask questions over here: https://stackoverflow.com/help/how-to-ask. You really need to be sure about what tags you choose for your problem. I know it is easy to create tags, but it doesn't mean that all of them are monitored here.
Ah, I missed it, thanks for adding that tag. Could you please share your input on the question?
Please share what you do in that savingLionProcessingRequestToDB(). According to the current stack trace, it looks like you are looping your flow in some way, since part of the problem is producing a reply from the handler. What is the input channel? Which channel is causing the cycle? You need to share more info about your problem.
Please find the edited question above:
The input channel for this flow is lionChannel, which gets a response from the main flow.
The cycle happens at the termination step in lionSampleFlow, after the data is persisted and the response is returned through the savingLionProcessingRequestToDB() method.
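To see why a reply routed back into its own flow ends in exactly this error, here is a minimal plain-Java sketch (no Spring involved; the names are mine). A "handler" whose output is fed straight back to its input channel never reaches a base case, so the JVM throws the StackOverflowError that LoggingHandler reports above:

```java
public class ReplyLoopSketch {

    // "Channel": delivering a message invokes the subscribed handler.
    static void send(String payload) {
        handle(payload);
    }

    // "Handler": produces a reply that is routed back to the same channel.
    static void handle(String payload) {
        send(payload); // reply channel == input channel -> infinite recursion
    }

    /** Returns true if the loop blows the stack (it always does). */
    static boolean loopOverflows() {
        try {
            send("lion-request");
            return false; // unreachable
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("reply loop overflows: " + loopOverflows());
        // prints: reply loop overflows: true
    }
}
```

In a real flow the loop is hidden behind header-based reply-channel resolution, which is why checking where the replyChannel header points (here, lionReplyChannel) is the first thing to verify.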
turn off Cocos2D verbose logging
I just upgraded to cocos 2.1, and am seeing a ridiculous amount of logging to the console, such as:
2013-09-18 23:15:38.120 Notes and Clefs[842:907] cocos2d: deallocing <CCSprite = 0x1182aa0 | Rect = (816.00,640.00,32.00,64.00) | tag = -1 | atlasIndex = -1>
2013-09-18 23:15:38.121 Notes and Clefs[842:907] cocos2d: deallocing <CCSprite = 0x1182600 | Rect = (816.00,128.00,32.00,64.00) | tag = -1 | atlasIndex = -1>
2013-09-18 23:15:38.122 Notes and Clefs[842:907] cocos2d: deallocing <CCArray = 0x1161e00> = ( <CCSprite = 0x1182790 | Rect = (816.00,640.00,32.00,64.00) | tag = -1 | atlasIndex = -1>, )
etc..
From looking at the code, I see:
#if !defined(COCOS2D_DEBUG) || COCOS2D_DEBUG == 0
#define CCLOG(...) do {} while (0)
#define CCLOGWARN(...) do {} while (0)
#define CCLOGINFO(...) do {} while (0)
#elif COCOS2D_DEBUG == 1
#define CCLOG(...) __CCLOG(__VA_ARGS__)
#define CCLOGWARN(...) __CCLOGWITHFUNCTION(__VA_ARGS__)
#define CCLOGINFO(...) do {} while (0)
#elif COCOS2D_DEBUG > 1
#define CCLOG(...) __CCLOG(__VA_ARGS__)
#define CCLOGWARN(...) __CCLOGWITHFUNCTION(__VA_ARGS__)
#define CCLOGINFO(...) __CCLOG(__VA_ARGS__)
#endif // COCOS2D_DEBUG
And I set COCOS2D_DEBUG = 0, but I still get the same verbose logging...
I have Cocos2D in my project as a static library (.a file). Is it possible that this .a already has the macro/constant defined at level 2 or something, and that's why I'm seeing it not make any difference?
Can anyone recommend a way to turn this off?
Yes, when the static library is compiled using the Debug scheme, it'll print all these debug messages out. Try recompiling the static library with the COCOS2D_DEBUG preprocessor macro set to 1.
Why are you adding it as a static .a library though? I just add the cocos2d-ios.xcodeproj to my own project and add libcocos2d.a to Build Phases under Link Binary with Libraries. That way it'll automatically recompile cocos2d whenever a change occurs.
A static library is a ".a" file, no? But yeah, I am doing it exactly as you described: I have the cocos2d-ios.xcodeproj in my project, and the .a file is added to the "Link Binary With Libraries" option. I have a COCOS2D_DEBUG macro set to 0 and have cleaned and verified that the .a product is removed, yet when I build, I still get all the verbose logging output.
The preprocessor macro is in the cocos2d-ios project under the cocos2d target, not your project.
ahaaaa.. now that makes sense!
| common-pile/stackexchange_filtered |
Why add vodka to batter for frying fish?
At 4:46 in this video, Heston Blumenthal adds vodka to the batter for fried fish. This article in Robb Report also describes the process:
The star chef begins his experiment in an elevated way. He’s not using cod or halibut. No, no, no. This is a three-Michelin-star chef, so he’s going right for the whole turbot that he butchers himself to ensure maximum freshness and the correct portion size.
For the batter he mixes flour, rice flour, honey, vodka, and a beer. That’s all pretty standard, until he puts them in a CO2 cannister to make the batter even airier. Once the fish is dredged in flour and coated in batter from the soda cannister, he fries it. For one last step to make the batter crispy and thick, he drizzles more of it onto the fish while it’s in the frying oil.
What’s the purpose of the vodka?
@Cascabel I added the chef's name. I can't remember where on Stack Exchange, but I'm pretty sure a moderator told someone to remove names or brands because they look like advertising. Does anyone know what I mean?
Given that vodka is literally "water + ethanol", your real question is, "why add water + ethanol to batter", which then raises the question: what does ethanol do besides get us drunk?
@PCilliterate There is no policy here against mentioning names or brands, especially when they're relevant. If every one of your posts is about the same brand, or the same food blog, sure, you're going to attract some "is this a spammer?" attention, but if you just include whatever details you think are relevant (people or brands or otherwise), that's fine.
According to this article about Blumenthal's method, which also explains the other ingredient/method choices: https://www.nytimes.com/2007/03/07/dining/07curious.html
The key to the Fat Duck batter is the alcohol, which does a couple of very useful things. It dissolves some of the gluten proteins in the wheat flour, so no elastic network forms and the crust doesn’t get tough. (You’ll notice when you combine the ingredients that the mix becomes mushy rather than sticky.) Alcohol also reduces the amount of water that the starch granules can absorb, and boils off faster than water, so the batter dries out, crisps and browns quickly, before the delicate fish inside overcooks. The crispness lasts through the meal, and revives well the next day in a hot oven.
Same principle as vodka for pie dough.
Sounds like (more commonly owned) cooking wine would do the same job then.
...and of course beer
@T.E.D. But both wine and beer (a) have a strong flavour, which vodka doesn't (b) have much higher water content (85–95% vs 60% for vodka), so I very much doubt they would do the same job nearly as well. The Blumenthal recipe already includes beer (for its flavour and carbonation) so clearly the vodka is included to achieve a further effect.
Arrange values within a specific group
I'm trying to arrange values in decreasing order within a specific group in a nested dataframe. My input data looks like this. I've got two grouping variables (group1 and group2) and three values (i.e. id, value2, value3).
library(tidyverse)
set.seed(1234)
df <- tibble(group1 = c(rep(LETTERS[1:3], 4)),
group2 = c(rep(0, 6), rep(2, 6)),
value2 = rnorm(12, 20, sd = 10),
value3 = rnorm(12, 20, sd = 50)) %>%
group_by(group1) %>%
mutate(id = c(1:4)) %>%
ungroup()
I decided to group them by group1 and group2 and then nest():
df_nested <- df %>%
group_by(group1, group2) %>%
nest()
# A tibble: 6 x 3
# Groups: group1, group2 [6]
group1 group2 data
<chr> <dbl> <list>
1 A 0 <tibble [2 x 3]>
2 B 0 <tibble [2 x 3]>
3 C 0 <tibble [2 x 3]>
4 A 2 <tibble [2 x 3]>
5 B 2 <tibble [2 x 3]>
6 C 2 <tibble [2 x 3]>
Perfect. Now I need to sort only those data which group2 is equal to 2 by id. However I'm receiving a following error:
df_nested %>%
mutate(data = map2_df(.x = data, .y = group2,
~ifelse(.y == 2, arrange(-.x$id),
.x)))
Error: Argument 1 must have names
Can you use set.seed to generate random data and show expected output? Do you want to arrange by all value columns? Also I don't think arrange(-.x) does what you are expecting it to do. Check arrange(-mtcars), it just multiplies the data by -1.
You need to define by which variables and in which order you want to arrange your nested dataframes. You can't simply arrange.
Ok, I understand. I've added set.seed and id column. I need to arrange by id
You could do :
library(dplyr)
library(purrr)
df_nested$data <- map2(df_nested$data, df_nested$group2,
                       ~ if (.y == 2) arrange(.x, -.x$id) else .x)
So data where group2 is not equal to 2 is not sorted
df_nested$data[[1]]
# A tibble: 2 x 3
# value2 value3 id
# <dbl> <dbl> <int>
#1 13.1 -89.0 1
#2 9.76 -3.29 2
and where group2 is 2 is sorted.
df_nested$data[[4]]
# A tibble: 2 x 3
#value2 value3 id
# <dbl> <dbl> <int>
#1 15.0 -28.4 4
#2 31.0 -22.8 3
If you want to combine them do :
map2_df(df_nested$data, df_nested$group2,~if(.y == 2) arrange(.x, -.x$id) else .x)
I would suggest creating an additional variable id_ which will be equal to the original id variable when group2 == 2 and NA otherwise. This way if we use it in sorting it'll make no effect when group2 != 2.
df %>%
mutate(id_ = if_else(group2 == 2, id, NA_integer_)) %>%
arrange(group1, group2, -id_)
#> # A tibble: 12 x 6
#> group1 group2 value2 value3 id id_
#> <chr> <dbl> <dbl> <dbl> <int> <int>
#> 1 A 0 17.6 50.2 1 NA
#> 2 A 0 33.8 -14.4 2 NA
#> 3 A 2 23.1 22.6 4 4
#> 4 A 2 13.7 50.2 3 3
#> 5 B 0 15.4 49.9 1 NA
#> 6 B 0 16.2 63.7 2 NA
#> 7 B 2 41.7 -2.90 4 4
#> 8 B 2 16.6 46.7 3 3
#> 9 C 0 19.9 -64.3 1 NA
#> 10 C 0 19.9 59.7 2 NA
#> 11 C 2 34.1 48.5 4 4
#> 12 C 2 32.3 23.1 3 3
Then if needed we can group and nest the result.
SCJP exam: Handle Exceptions
I'm studying for the Java Programmer Certification (SCJP) exam. A question about exceptions: when handling exceptions, is it best to handle a specific exception like NumberFormatException, or to catch all exceptions using the parent Exception class?
Based on my course, unchecked exceptions are basically RuntimeExceptions, which are mostly the result of a program bug. Does this mean that when I throw an exception manually I should rather use:
new Exception("... my message...")
and that I shouldn't handle RuntimeException?
Only handle checked exceptions?
You should handle as specific Exceptions as possible. If you know when a RuntimeException may be thrown, you should usually fix your program so it doesn't throw that Exception.
As far as catching checked Exceptions go, you should handle as specific an Exception as you can so:
try {
} catch (FileNotFoundException fnfe){
} catch (IOException ioe){
} catch (Exception e){
}
When you are throwing Exceptions, you should almost never use throw new Exception().
Throw an exception that can give someone who sees it some more information about what happened when it was thrown. (i.e. IndexOutOfBounds or NullPointerExceptions. They give specific info w/out even looking at the message or stacktrace.)
If you want to throw an Exception, and you don't think the Java API has one that fits your case, it is best to subclass or extend the Exception giving it a very informative name.
TinyMCE 4 and image url's
I'm storing all of my images / videos above the webroot, which means that the image is not found in the editor and it won't render. This is a problem. To display an image from the webroot, I have a function that is accessed like this: http://mytestsite.com/getImage/k=generated_hash&photo=myphoto.jpg
When I add a photo to TinyMCE's WYSIWYG, I need the source URL changed from /home/media/myphoto.jpg to the example above. If I paste the link as the image source, it works flawlessly - I am just unsure how to make it happen automatically.
Any ideas?
How to configure Ebean to generate SQLite code
I am using Play Framework 2.3.4 with Ebean and I am trying to use SQLite to persist the data on my application. When I try to run the application, I get this error:
[SQLITE_ERROR] SQL error or missing database (table address already exists)
[ERROR:1, SQLSTATE:null], while trying to run this SQL script:
...
alter table book add constraint fk_book_publisher_1 foreign key (publisher_name) references publisher (name);
...
After searching a while, I found out that SQLite does not allow the addition of a foreign key after the table is created. So, is there anyway to specify Ebean to generate SQLite code?
It looks like this known issue: https://github.com/ebean-orm/avaje-ebeanorm/issues/100
Thank you, Rob. As I was in the beginning of my project, I decided to go ahead with MySQL instead of SQLite. It would be a nice fix, though.
How to get --color output when using webpack-dev-middleware?
I have an express API I'm using in my app so I am using webpack-dev-middleware and webpack-hot-middleware.
I'm trying to figure out how to get the webpack --color option when I use webpack through the API.
This is what I have right now:
const webpack = require('webpack')
const webpackConfig = require('../../webpack.config')
const compiler = webpack(webpackConfig)
const webpackDevMiddleware = require('webpack-dev-middleware')(compiler, {
noInfo: true
})
const webpackHotMiddleware = require('webpack-hot-middleware')(compiler)
app.use(webpackDevMiddleware)
app.use(webpackHotMiddleware)
I am currently using<EMAIL_ADDRESS>
Here, add
stats: {
colors: true
}
to your options, like:
const webpackDevMiddleware = require('webpack-dev-middleware')(compiler, {
noInfo: true,
stats: {
colors: true
}
})
Check out the usage section.
Tested with webpack 2.2.1 and webpack-dev-middleware 1.10.1.
DateTime.Compare how to check if a date is less than 30 days old?
I'm trying to work out if an account expires in less than 30 days. Am I using DateTime Compare correctly?
if (DateTime.Compare(expiryDate, now) < 30)
{
matchFound = true;
}
Am I using DateTime Compare correctly?
No. Compare only offers information about the relative position of two dates: less, equal or greater. What you want is something like this:
if ((expiryDate - DateTime.Now).TotalDays < 30)
matchFound = true;
This subtracts two DateTimes. The result is a TimeSpan object which has a TotalDays property.
Additionally, the conditional can be written directly as:
bool matchFound = (expiryDate - DateTime.Now).TotalDays < 30;
No if needed.
Alternatively, you can avoid naked numbers by using TimeSpan.FromDays:
bool matchFound = (expiryDate - DateTime.Now) < TimeSpan.FromDays(30);
This is slightly more verbose but I generally recommend using the appropriate types, and the appropriate type in this case is a TimeSpan, not an int.
Should be allowed to give you 2+ ;) one for the answer and one for the short way to express it
Uh … I just made my answer longer so feel free to subtract one imaginary vote. ;-)
Please use TotalDays instead of days.
@João Why? It makes no difference in this case.
It is conceptually more accurate. It makes no difference because Days is the biggest component of TimeSpan. People reading this can extrapolate that to think that the Seconds property works the same way.
@João Hmm. Agreed. Will change.
Adding to the point João Portela made, even Days itself can be wrong too. Days and TotalDays are the same here only because the condition is < 30, but there would be an obvious difference if it was <= 30, because TotalDays may returns something like 30.421 while Days still returns 30.
should be
matchFound = (expiryDate - DateTime.Now).TotalDays < 30;
note the total days
otherwise you'll get weird behaviour
this answer was over a year after the last edit to accepted answer!
@Mitch - This is the correct answer, notice he is using TotalDays rather than Days.
The accepted answer is correct. TotalDays returns a fractional portion as well, which is redundant when comparing to an integer.
@MitchWheat TotalDays is conceptually correct field to use. In practice they give the same result but only because Days is the biggest component of TimeSpan, had there been a Months or Years component and this would have been a different story. Just try with Hours, Seconds or Milliseconds to see how they work.
Well I would do it like this instead:
TimeSpan diff = expiryDate - DateTime.Today;
if (diff.Days > 30)
matchFound = true;
Compare only responds with an integer indicating whether the first is earlier, the same, or later...
Try this instead
if ( (expiryDate - DateTime.Now ).TotalDays < 30 ) {
matchFound = true;
}
Hmm, you need to either invert the order of your dates or take the absolute value, unless the expiration date has already passed.
Compare returns 1, 0, -1 for greater than, equal to, less than, respectively.
You want:
if (DateTime.Compare(expiryDate, DateTime.Now.AddDays(30)) <= 0)
{
bool matchFound = true;
}
This will give you accurate result :
if ((expiryDate.Date - DateTime.Now.Date).Days < 30)
matchFound = true;
Actually what happens here is, e.g., expiryDate is 28/4/2011. If you write (expiryDate - DateTime.Now) it will take the time as well (28/4/2011 12:00:00 AM - 26/4/2011 11:47:00 AM), while the above code takes the value as 28/4/2011 12:00:00 AM - 26/4/2011 12:00:00 AM, which gives the accurate difference.
Assuming you want to assign false (if applicable) to matchtime, a simpler way of writing it would be..
matchtime = ((expiryDate - DateTime.Now).TotalDays < 30);
The ternary operator here is completely redundant as ((expiryDate - DateTime.Now).TotalDays < 30) already returns a boolean.
@Fabio Thanks buddy removed them to assign the Boolean value via the return type.
What you want to do is subtract the two DateTimes (expiryDate and DateTime.Now). This will return an object of type TimeSpan. The TimeSpan has a property "Days". Compare that number to 30 for your answer.
No it's not correct, try this :
DateTime expiryDate = DateTime.Now.AddDays(-31);
if (DateTime.Compare(expiryDate, DateTime.Now.AddDays(-30)) < 1)
{
matchFound = true;
}
No, the Compare function will return either 1, 0, or -1. 0 when the two values are equal, -1 and 1 mean less than and greater than, I believe in that order, but I often mix them up.
No you are not using it correctly.
See here for details.
DateTime t1 = new DateTime(100);
DateTime t2 = new DateTime(20);
if (DateTime.Compare(t1, t2) > 0) Console.WriteLine("t1 > t2");
if (DateTime.Compare(t1, t2) == 0) Console.WriteLine("t1 == t2");
if (DateTime.Compare(t1, t2) < 0) Console.WriteLine("t1 < t2");
Actually none of these answers worked for me. I solved it by doing like this:
if ((expireDate.Date - DateTime.Now).Days > -30)
{
matchFound = true;
}
When I tried doing this:
matchFound = (expiryDate - DateTime.Now).Days < 30;
With today being 2011-11-14 and my expiryDate 2011-10-17, the Days value was -28 instead of 28, so I inverted the last check.
// this isn't set up for good processing.
//I don't know what data set has the expiration
//dates of your accounts. I assume a list.
// matchfound is a single variablethat returns true if any 1 record is expired.
bool matchFound = false;
DateTime dateOfExpiration = DateTime.Today.AddDays(-30);
List<DateTime> accountExpireDates = new List<DateTime>();
foreach (DateTime date in accountExpireDates)
{
if (DateTime.Compare(dateOfExpiration, date) != -1)
{
matchFound = true;
}
}
Isn't that a bit complicated?
Where is the mention of accountExpireDates in the question? You copy pasted a bad solution. matchFound almost sounds like you're mixing Pattern or RegEx. Btw, you need to break when a match is found or it continues to loop. Also what if it is -2? MSDN does not say the possible values are -1, 0 and 1.
You can try to do like this:
var daysPassed = (DateTime.UtcNow - expiryDate).Days;
if (daysPassed > 30)
{
// ...
}
Please try to be more descriptive in your explanation.
Compare is unnecessary, Days / TotalDays are unnecessary.
All you need is
if (expireDate < DateTime.Now) {
// has expired
} else {
// not expired
}
note this will work if you decide to use minutes or months or even years as your expiry criteria.
Not a great answer because now you are also factoring in hours, minutes and seconds. DateTime.Today would be more correct for the OPs situation.
How to find an element with the minimum number of steps
How can I count the minimum steps to get to a specific index counting forward and backward?
The list is:
content = [1, 2, 3, 4, 5]
If I start from index 0 and want to know how many steps to get to the number 4 iterating forwards I'll get 3, because:
#0 #1 #2 #3
[1, 2, 3, 4, 5]
But I also want to know it backward, like:
#0 #2 #1
[1, 2, 3, 4, 5]
In the example above the minimum amount of steps is 2
Another example
content = ["A", "B", "C", "D", "E"]
start_index = 4 # "E"
to_find = "D"
#1 #2 #3 #4 #0
["A", "B", "C", "D", "E"]
# Moving forward I'll start from "A" again until I reach "D"
#1 #0
["A", "B", "C", "D", "E"] # Moving backwards...
In the example above the minimum amount of steps is 1
Note: the target element is unique.
Simply decrement your index instead of incrementing it. Python lists support negative indexing
I don't understand how the problem involves counting or explicitly iterating. You know how to ask Python where the 4 is in the list (its index), right? If you are starting from index 0, can you think of a mathematical rule that tells you the distance to the other index?
This question is really confusing. Is your list [1, 2, 3, 4] or [1, 2, 3, 4, 5]? Also, Are you searching for 2 like the last example, or 4 as in the second?
@Mark I'm trying to know the minimum steps to get to 4
@RafaAcioly the element you're trying to find in the list is unique? Or duplicates exist?
@AbhinavMathur yes, they are unique
It seems to me that this is a modulus problem:
content = ["A", "B", "C", "D", "E"]
start_index = 4 # "E"
to_find = "D"
length = len(content)
difference = content.index(to_find) - start_index
print(min(difference % length, -difference % length))
Note that we only search content once, unlike some solutions, including the currently accepted one, which search content twice!
This is basically the same as my answer. I didn't use the modulus operator but the idea is the same.
@Olivier, I came at this from a modular approach, first checking whether math.fmod() alone would give me what I wanted when % didn't (the two vary when it comes to negative numbers) but finally settled on the above.
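The `%` vs `math.fmod()` distinction mentioned in this comment can be sketched quickly; Python's `%` takes the sign of the divisor, which is exactly what makes the modulus approach in the answer above work for negative differences:

```python
import math

# Python's % follows the sign of the divisor, so a negative
# difference still maps into the range [0, length):
print(-1 % 5)            # 4
print(math.fmod(-1, 5))  # -1.0 (fmod follows the sign of the dividend)

# Applied to the example above: start at "E" (index 4), find "D" (index 3)
length = 5
difference = 3 - 4               # index(to_find) - start_index
forward = difference % length    # 4 steps forward
backward = -difference % length  # 1 step backward
print(min(forward, backward))    # 1
```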
You could do:
len(content) - content.index(4)
Because content.index(4) finds the "forward" index, and then the "backward" index equals the number of elements from the element 4 to the end of the list, which is the same as all of them minus the first content.index(4).
As noted in the comments, this finds the index of the first occurrence in the list.
In order to find that of the last (i.e. first from the end), you might do:
content[::-1].index(4) + 1
Example:
>>> content = ['a', 'b', 'c', 'b']
>>> len(content) - content.index('b')
3
>>> content[::-1].index('b') + 1
1
I believe there is a risk in this option → in case there is more than one value 4 in the list, it will return according to the first and not the last. I think it might be interesting to increase the answer by warning about this!
Give then OP's input this would return 3 when searching for the value 2. But in that case the forward search is shorter.
@Mark No, it would return 4: len(content) - content.index(2) = 5 - 1 = 4.
@Anakhand — the question is pretty unclear. It looks like you answered the question title. The body of the question seems to involve finding a minimum.
@Anakhand I just add another example
def minimal_steps(lst: list, num: int, start: int = 0):
pos = lst.index(num)
return min(abs(pos - start), len(lst) - abs(pos - start))
EDIT: update answer since the question updated.
Maybe not. Your parameters don't match the variables in your function so the code doesn't run. Even if I fix that, it doesn't appear to get the OP's first example correct (off by 1).
No for loops solution:
content = ["A", "B", "C", "D", "E"]
start_index = 4 # "E"
to_find = "D"
c2 = content*2
forward = c2[start_index:].index(to_find)
backward = c2[::-1][len(content)-start_index-1:].index(to_find)
print('minimum steps:', min(forward, backward))
Assuming the target element is always present in the array:
content = ["A", "B", "C", "D", "E"]
start_index = 4 # "E"
to_find = "D"
end_index = content.index(to_find)
d = end_index - start_index
forward = d if d>=0 else len(content) + d
backward = -d if d<=0 else len(content) - d
print(min(forward, backward))
Here is a simple method that offsets the list first and uses the start list.index() method:
content = [1, 2, 3, 4, 5]
start_index = 0
to_find = 4
temp = content[start_index:] + content[:start_index]
print(temp.index(to_find)) # Forward
print(temp[::-1].index(to_find) + 1) # Backward
Output:
3
2
For example #2:
content = ["A", "B", "C", "D", "E"]
start_index = 4
to_find = "D"
temp = content[start_index:] + content[:start_index]
print(temp.index(to_find)) # Forward
print(temp[::-1].index(to_find) + 1) # Backward
Output:
4
1
Simply decrement your index instead of incrementing it. Python lists support negative indexing
content = [1, 2, 3, 4, 5]
end_element = 4
i = 0
count = 0
while content[i] != end_element:
i -= 1
count += 1
print(count) # 2
Of course, this leaves the possibility of an IndexError: list index out of range when the end_element is not in your list, but you can handle that error pretty easily.
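As a sketch of that error handling (one option among several, assuming a full lap means the element is absent), you can bound the walk instead of catching the `IndexError`:

```python
content = [1, 2, 3, 4, 5]
end_element = 4

i = 0
count = 0
found = True
while content[i] != end_element:
    i -= 1
    count += 1
    if count >= len(content):  # walked a full lap without a match
        found = False
        break

print(found, count)  # True 2
```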
ASP.NET runtime error: Could not load file assembly 'System.Security.Cryptography.Algorithms
Full error:
Severity Code Description Project File Line Suppression State
Warning C:\Users\jmatson\Source\Repos\Web-IT-Spark-CICD\SparkIdeaGenerator\Main.aspx:
ASP.NET runtime error: Could not load file or assembly
'System.Security.Cryptography.Algorithms, Version=<IP_ADDRESS>,
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its
dependencies. The located assembly's manifest definition does not
match the assembly reference. (Exception from HRESULT:
0x80131040) SparkIdeaGenerator C:\Users\jmatson\Source\Repos\Web-IT-Spark-CICD\SparkIdeaGenerator\Main.aspx 1
I had a similar issue with System.Net.Http which I think I solved by specifying the version in the Web.Config of my ASP.NET project, (non .NET Core, just standard .NET 4.6.1)... but now I get this error trying to build? It's so frustrating and I have no idea how to solve it.
The error indicates that the DLLs present in the application directory don't match the ones referenced by the project. Try re-adding the problematic references.
Possible duplicate of The located assembly's manifest definition does not match the assembly reference
New Cisco SG350X changing config of SG500 switches
We have about 20 Cisco SG500 Switches in our network configured with various VLANs, trunks, ports etc.
We recently needed to install additional network capacity so purchased some SG350X switches as the SG500s are end-of-life.
Whilst getting the SG350X configured and ready to deploy, we noticed some weird network issues on our existing network; specifically, Wifi access points and IP phones were broken and no longer working.
Our investigation revealed that the configuration of the broken ports had been changed and we suspect that the new SG350X may have been the culprit. We did change the admin password of the switch to match that of our other switches, so does the SG350X have the ability to manage other switches? And if not, any other ideas of how it could have happened? Is there an audit of configuration changes?
An example of a change to a port's configuration is from:
interface gigabitethernet1/1/19
loopback-detection enable
dot1x guest-vlan enable
dot1x port-control auto
description 115
switchport trunk native vlan 210
To
interface gigabitethernet1/1/19
loopback-detection enable
dot1x guest-vlan enable
dot1x port-control auto
description 115
storm-control broadcast enable
storm-control broadcast level 10
storm-control include-multicast
port security max 10
port security mode max-addresses
spanning-tree portfast
macro description ip_phone_desktop
!next command is internal.
macro auto smartport dynamic_type unknown
OK, looks like it is due to "Auto Smartports", although it's surprising we've not been affected before
https://www.cisco.com/c/en/us/td/docs/switches/lan/auto_smartports/12-2_55_se/configuration/guide/asp_cg/concepts.html
iOS access file with data protection after user locks a device
I have a thread that keeps running in background after the device is locked.
I have some files on the application and they are protected with NSFileProtectionComplete
How can I access the files after the user locks the device(after the 10 secs)?
Not the solution I wanted, but it works. I changed the protection to NSFileProtectionCompleteUntilFirstUserAuthentication
According to app docs with NSFileProtectionCompleteUntilFirstUserAuthentication "The file is stored in an encrypted format on disk and cannot be accessed until after the device has booted. After the user unlocks the device for the first time, your app can access the file and continue to access it even if the user subsequently locks the device."
Client moves only when im the host
I have a Player (the client) with a Network Identity, Network Transform and Network Rigidbody 2D all checked with client authority.
I want to apply force to the rigidbody on the server, but the command (CmdAddForce) only works when I'm the host; when I'm the client the command doesn't execute and I can't move.
This is the code:
using UnityEngine;
using Mirror;
public class Player : NetworkBehaviour
{
private Rigidbody2D rb;
private float force = 12;
private void Awake()
{
rb = GetComponent<Rigidbody2D>();
}
private void FixedUpdate()
{
// Only apply the code locally
if (!isLocalPlayer)
return;
CmdAddForce(new Vector2(Input.GetAxisRaw("Horizontal"), Input.GetAxisRaw("Vertical") * force));
}
[Command]
void CmdAddForce(Vector2 force)
{
rb.AddForce(force);
}
}
You are applying the force on the server. But you checked the client authority on Network Rigidbody component; try to disable it, so it will be synced from server to clients.
Installed Ubuntu and now eclipse wont work
I have just installed Ubuntu on my laptop as a dual boot. I also installed Eclipse but can't get any of my Java programs to work. The error message is below. Any help would be appreciated.
Exception in thread "main" java.lang.UnsupportedClassVersionError: SquareRootTest : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class: SquareRootTest. Program will exit.
You appear to have a different version of Java installed. Did you rebuild your projects with the compiler you have installed in Ubuntu?
Version 51.0 is Java 7, I guess. Can anyone confirm?
I have built them again using Eclipse but still the same issue. Not sure if I am doing it properly.
There's absolutely no evidence, that installing ubuntu in a dual-boot setup is the cause of this. In fact it's a java version issue. Please consider removing the ubuntu tag, as it is misleading.
The reason is that the SquareRootTest class was compiled with Java 7 (class version 51.0) and you are trying to run it with a JVM from an older generation. Try running java -version to see what JVM you use; also look in the Eclipse preferences. You should also clean your project to be sure it is recompiled...
It seems you've copied your project folder from your previous OS's workspace, and it contains compiled .class files, which I think is the problem in your case.
Anyway, what you can try here is: just delete all your project-related .class files from the workspace and Eclipse will automatically recompile all your .java files.
You need to install java 7 on your ubuntu. Also ensure you make java 7 as the default on your ubuntu desktop. Follow this link in order to do that:
http://www.devsniper.com/ubuntu-12-04-install-sun-jdk-6-7/
Double Click Event - Multiple Ranges
I'm looking for the best way to code in multiple ranges for my double click event.
Private Sub Worksheet_BeforeDoubleClick(ByVal Target As Range, Cancel As Boolean)
If Not Intersect(Target, Range("A3:A25")) Is Nothing Then
'code
End If
End Sub
As you see above, when A3 to A25 is clicked, the double click event takes place. But I also have other sections throughout the sheet that I want to include to set off the event. A29:A40, F3:F37, K3:K40, P3:P40.
What is the best way to code that without adding new 'If' blocks?
Or is adding the new 'If' blocks (and calling a subroutine) the best way?
Use this one:
Private Sub Worksheet_BeforeDoubleClick(ByVal Target As Range, Cancel As Boolean)
If Not Intersect(Target, Range("A3:A25, A29:A40, F3:F37, K3:K40, P3:P40")) Is Nothing Then
'code
End If
End Sub
Wow! That's simple enough. Still learning VBA, I didn't know you could do ranges like that. Thanks a lot, simoco!!
Train and test split data with features
Load the Iris dataset from sklearn. Split the dataset into training and testing parts. Pick 2 of the 4 features.
I write this code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X, y = iris.data, iris.target
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.33,random_state=42)
But I didn't understand "Pick 2 of the 4 features". Does that mean test_size and random_state? Or is it something different?
What do you mean with "Pick 2 of the 4 features" ? Where were you asked to pick two? Also, I'm no expert, but it may be referring to the four features of this Iris dataset (length, width, sepals, petals)
The Iris dataset has petal length, petal width, sepal length, and sepal width as its 4 features.
"Pick 2 of the 4 features" means take two of those 4 features in your training model.
I don't know why you want to do that, as using all four features makes the model more accurate.
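For what it's worth, here is a minimal sketch of that reading of the assignment, keeping only the first two columns of `iris.data` (an arbitrary choice of two features) before splitting:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X = iris.data[:, :2]  # pick 2 of the 4 feature columns (sepal length, sepal width)
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)

print(X_train.shape, X_test.shape)  # each split keeps only the 2 chosen features
```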
I want to estimate and graph densities for existing classes using the Gaussian Mixture Model. Thanks for answer
The test size is a ratio. For example if the test_size = 0.33, then 33% of the data will be the test data and the remaining 67% will be the train data.
The random state is the seed used when randomly sampling the test and training data from the whole data set (a fixed value like 42 is used when you need to ensure repeatability). You can study in detail the theory behind how a seed leads to random number generation in computer science.
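Both points can be demonstrated in a few lines (a sketch; the 33/67 split and the repeatable result follow from `test_size` and `random_state` as described above):

```python
from sklearn.model_selection import train_test_split

data = list(range(100))

a_train, a_test = train_test_split(data, test_size=0.33, random_state=42)
b_train, b_test = train_test_split(data, test_size=0.33, random_state=42)

print(len(a_train), len(a_test))  # 67 33 -- test_size is a ratio of the whole
print(a_test == b_test)           # True -- same seed, identical split
```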
petset on GKE: could not find the requested resource
I want to experiment with PetSet on GKE.
I have a 1.3.5 Kubernetes cluster on GKE, but PetSet does not seem to be activated.
> kubectl get petset
Unable to list "petsets": the server could not find the requested resource
Do I need to activate v1alpha1 feature on GKE ?
Dupe of http://stackoverflow.com/questions/39163685/kubernetes-petset-on-google-cloud/39170065
I'm using PetSet in zone europe-west1-d but got the error you're seeing when I tried in zone europe-west1-c.
Update:
Today, September 1, I got an email from Google Cloud Platform announcing that PetSet was "accidentally enabled" and will be disabled on September 30.
Dear Google Container Engine customer,
Google Container Engine clusters running Kubernetes 1.3.x versions accidentally enabled Kubernetes alpha features (e.g. PetSet), which are not production ready. Access to alpha features has already been disabled for clusters not using them, but cannot be safely disabled in clusters that are currently using alpha resources. The following clusters in projects owned by you have been identified as running alpha resources:
Please delete the alpha resources from your cluster. Continued usage of these features after September 30th may result in an unstable or broken cluster, as access to alpha features will be disabled.
The full list of unsupported alpha resources that are currently enabled (and will be disabled) is below:
Resource API Group
petset apps/v1alpha1
clusterrolebindings rbac.authorization.k8s.io/v1alpha1
clusterroles rbac.authorization.k8s.io/v1alpha1
rolebindings rbac.authorization.k8s.io/v1alpha1
roles rbac.authorization.k8s.io/v1alpha1
poddisruptionbudgets policy/v1alpha1
Now officially supported on "temporary clusters": https://cloud.google.com/container-engine/docs/alpha-clusters
| common-pile/stackexchange_filtered |
Cannot open ip camera with opencv4.1.0-openvino on respberry pi 4
I need to access video stream from AXIS M1125 Network Camera with my raspberry pi 4.
I have code that works on my laptop and on a Raspberry Pi 3. I use OpenCV 4.1, which comes in the OpenVINO distribution for Raspbian.
camera = cv2.VideoCapture('http://<IP_ADDRESS>/axis-cgi/jpg/image.cgi')
When I run the code and debug OPENCV_VIDEOCAPTURE_DEBUG the output is:
[ WARN:0] VIDEOIO(FFMPEG): trying capture filename='http://<IP_ADDRESS>/mjpg/1/video.mjpg' ...
[ WARN:0] VIDEOIO(FFMPEG): backend is not available (plugin is missing, or can't be loaded due dependencies or it is not compatible)
[ WARN:0] VIDEOIO(GSTREAMER): trying capture filename='http://<IP_ADDRESS>/mjpg/1/video.mjpg' ...
(python3:6939): GStreamer-CRITICAL **: 14:32:38.521:
Trying to dispose element appsink0, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
...
[ WARN:0] VIDEOIO(GSTREAMER): can't create capture
[ WARN:0] VIDEOIO(V4L2): trying capture filename='http://<IP_ADDRESS>/mjpg/1/video.mjpg' ...
[ WARN:0] VIDEOIO(V4L2): can't create capture
[ WARN:0] VIDEOIO(CV_IMAGES): trying capture filename='http://<IP_ADDRESS>/mjpg/1/video.mjpg' ...
[ WARN:0] VIDEOIO(CV_IMAGES): created, isOpened=0
[ WARN:0] VIDEOIO(CV_MJPEG): trying capture filename='http://<IP_ADDRESS>/mjpg/1/video.mjpg' ...
[ WARN:0] VIDEOIO(CV_MJPEG): can't create capture
The output from cv2.getBuildInformation():
Platform:
Timestamp: 2019-03-19T16:11:44Z
Host: Linux 4.13.0-45-generic x86_64
Target: Linux 1 arm
CMake: 3.7.2
CMake generator: Ninja
CMake build tool: /usr/bin/ninja
Configuration: Release
Video I/O:
FFMPEG: YES
avcodec: YES (57.64.101)
avformat: YES (57.56.101)
avutil: YES (55.34.101)
swscale: YES (4.2.100)
avresample: NO
GStreamer: YES (1.10.4)
v4l/v4l2: YES (linux/videodev2.h)
Have you completed installing all the dependencies? under /opt/intel/openvino/install_dependencies. I also suggest to check running the demo application if it is running and you still have the problem.
Regardless of OpenVINO's own functionality, the stream can be read with plain OpenCV.
import cv2

cap = cv2.VideoCapture("http://username:password@xxx.xxx.xxx.xxx:port/xxxx")
#cap = cv2.VideoCapture("http://username:password@xxx.xxx.xxx.xxx:port")
while True:
    ret, frame = cap.read()
    if not ret:  # stop if the stream returns no frame
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Please refer https://software.intel.com/en-us/forums/computer-vision/topic/801714
Hope this helps.
| common-pile/stackexchange_filtered |
How to solve the heat equation for compound materials with different heat conductivities numerically?
I'm solving the heat equation with time dependent boundary conditions numerically in a 2D system using the ADI scheme. For the purpose of this question, let's assume a constant heat conductivity and assume a 1D system, so
$$
\rho c_p \frac{\partial T}{\partial t} = \lambda \frac{\partial^2 T}{\partial x^2}.
$$
This works very well, but now I'm trying to introduce a second material. This one differs slightly in heat capacity and density but has a very different heat conductivity and is connected to the other material by a sharp interface, i.e. a stepwise change in $\lambda$.
How should this be treated numerically in the ADI scheme? I can think of different approaches:
Treat the two materials as independent domains and connect them by a boundary condition calculating the heat flow in and out of the interface in terms of temperature on the other side of the interface in the last time step. Use a simple forward difference for that on both sides of the interface.
Treat it as one domain and use a very fine discretization close to the interface as compared to the homogeneous material. Use a scheme like
$$
\lambda_{left} \frac{T_i - T_{i-1}}{\Delta x_i} = \lambda_{right} \frac{T_j - T_{j+1}}{\Delta x_j},
$$
where $i$ and $j$ are the points left and right of the interface, instead of the standard ADI for those points.
Drop the assumption of constant heat conductivity and use
$$
\rho c_p \frac{\partial T}{\partial t} = \frac{\partial \lambda}{\partial x}\frac{\partial T}{\partial x} + \lambda \frac{\partial^2 T}{\partial x^2}.
$$
But in order to do so one needs to approximate the derivative of lambda at the step position, i.e. introduce an unknown characteristic width $s$ of the sharp interface. I assume that the (more or less arbitrary) choice of this width will significantly influence the system's behaviour.
Any advice?
The generalized heat equation is,
$$
\frac{\partial T}{\partial t}=\nabla\cdot\left(\alpha\nabla T\right)
$$
If $\alpha$ is spatially independent, then we can pull it out of the differential operator and obtain your 1st equation. If $\alpha$ is spatially dependent, then numerically in one dimension, you have
$$
\frac{T^{n+1}_i-T^n_i}{dt}=\frac1{dx^2}\left[\alpha_{i+\frac12}\left(T^n_{i+1}-T^n_{i}\right)-\alpha_{i-\frac12}\left(T^n_i-T^n_{i-1}\right)\right]
$$
where
$$
\alpha_{i+\frac12}=\frac12\left(\alpha_{i+1}+\alpha_{i}\right)
$$
and all other terms take their normal meaning. Extension to 2D or implicit methods should be trivial from here.
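A minimal explicit-Euler sketch of this face-averaged scheme in 1D (the grid size, material layout, and Dirichlet boundary conditions below are made up for illustration):

```python
import numpy as np

def step_heat_1d(T, alpha, dt, dx):
    """One explicit Euler step of dT/dt = d/dx(alpha * dT/dx).

    alpha is sampled at the grid points; the face value alpha_{i+1/2}
    is the arithmetic mean of its two neighbours, as in the scheme above.
    Boundary values T[0] and T[-1] are held fixed (Dirichlet).
    """
    a_face = 0.5 * (alpha[1:] + alpha[:-1])   # alpha at faces i+1/2
    flux = a_face * (T[1:] - T[:-1])          # alpha_{i+1/2} (T_{i+1} - T_i)
    T_new = T.copy()
    T_new[1:-1] += dt / dx**2 * (flux[1:] - flux[:-1])
    return T_new

# Two materials joined sharply at the midpoint, hot left boundary.
n = 101
dx = 1.0 / (n - 1)
T = np.zeros(n)
T[0] = 1.0
alpha = np.where(np.arange(n) < n // 2, 1.0, 0.1)  # step in conductivity
dt = 0.4 * dx**2 / alpha.max()                     # explicit stability limit
for _ in range(1000):
    T = step_heat_1d(T, alpha, dt, dx)
```

An implicit or ADI version replaces the explicit update with a tridiagonal solve per direction, but the face-averaged coefficients $\alpha_{i\pm\frac12}$ are assembled in the same way.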
| common-pile/stackexchange_filtered |
CSS: td tag removed from table, a little space remains
I've been trying to mess around with my MAL (MyAnimeList) CSS to my liking. I've encountered this weird behavior in WebKit browsers like Chrome and Opera where an extra space is left when I reposition a td element.
This is what I'm getting:
I made a jsfiddle for the project, where I basically copy pasted some html code from my anime list and slapped in my CSS code.
http://jsfiddle.net/qwwtt/1/
So what I exactly did was I removed a td element from where it's supposed to be, then repositioned it and added position:fixed and display:block so I could freely move it around. Now the thing is, it leaves a space in its table location when viewed with Chrome or Opera. Here's the CSS code of the table rows and cells. (td:nth-of-type(6) is the one I repositioned)
tr [class^=td] {
background-color: rgba(64,64,64,0);
border-top: 1px solid rgba(255,255,255,0);
border-bottom: 1px solid rgba(255,255,255,0);
-webkit-transition: background-color 2s ease, border-color 1s ease;
-moz-transition: background-color 2s ease, border-color 1s ease;
-o-transition: background-color 2s ease, border-color 1s ease;
transition: background-color 2s ease, border-color 1s ease;
}
tr:hover [class^=td] {
background-color: rgba(255,255,255,0.25);
border-top: 1px solid rgba(255,255,255,0.5);
border-bottom: 1px solid rgba(255,255,255,0.5);
-webkit-transition: background-color 0.25s ease, border-color 0.25s ease;
-moz-transition: background-color 0.25s ease, border-color 0.25s ease;
-o-transition: background-color 0.25s ease, border-color 0.25s ease;
transition: background-color 0.25s ease, border-color 0.25s ease;
}
tr td[class^=td] {
padding: 5px 0;
}
/**
*
* Per-column CSS
*
**/
tr td:nth-of-type(1)[class^=td], tr td:nth-of-type(1)[class=table_header] {
width: 4%;
min-width: 35px;
max-width: 100px;
text-align: right;
padding-right: 5px;
}
tr td:nth-of-type(2)[class^=td], tr td:nth-of-type(2)[class=table_header] {
min-width: 400px;
text-align: center;
}
tr td:nth-of-type(3)[class^=td], tr td:nth-of-type(3)[class=table_header],
tr td:nth-of-type(4)[class^=td], tr td:nth-of-type(4)[class=table_header],
tr td:nth-of-type(5)[class^=td], tr td:nth-of-type(5)[class=table_header],
tr td:nth-of-type(7)[class^=td], tr td:nth-of-type(7)[class=table_header],
tr td:nth-of-type(8)[class^=td], tr td:nth-of-type(8)[class=table_header],
tr td:nth-of-type(9)[class^=td], tr td:nth-of-type(9)[class=table_header],
tr td:nth-of-type(10)[class^=td], tr td:nth-of-type(10)[class=table_header] {
width: 8%;
min-width: 75px;
}
tr td:nth-of-type(6)[class^=td], tr td:nth-of-type(6)[class=table_header] {
-webkit-backface-visibility: hidden;
pointer-events: none;
position: fixed;
bottom: 10px;
left: 2%;
width: 40%;
opacity: 0;
-webkit-transform: translate(0px,50px);
-moz-transform: translate(0px,50px);
-ms-transform: translate(0px,50px);
transform: translate(0px,50px);
-moz-transition: 1s ease-out;
-webkit-transition: 1s ease-out;
-o-transition: 1s ease-out;
transition: 1s ease-out;
padding: 5px 5px;
background-color: #000;
border: 2px solid rgba(255,255,255,0.75);
border-radius: 5px;
box-shadow: 0 0 5px rgba(255,255,255,0.5);
text-align: justify;
}
tr:hover td:nth-of-type(6)[class^=td] {
opacity: 1;
-webkit-transform: translate(0px,0px);
-moz-transform: translate(0px,0px);
-ms-transform: translate(0px,0px);
transform: translate(0px,0px);
-moz-transition: 0.25s ease-out 0.25s;
-webkit-transition: 0.25s ease-out 0.25s;
-o-transition: 0.25s ease-out 0.25s;
transition: 0.25s ease-out 0.25s;
}
Unrelated- but I weirdly find myself quite liking what you've produced here design-wise
"weirdly"? xD haha thanks, but it's just a copy paste from my animelist so I didn't really do anything but the css codes :D
| common-pile/stackexchange_filtered |
Maven Tycho compilation failure
I am using Maven and the Tycho plug-in to build an Eclipse RCP application. I use some Java 8 features like lambda expressions, but it could not build correctly because of a compilation failure.
[ERROR] Failed to execute goal org.eclipse.tycho:tycho-compiler-plugin:1.0.0:compile (default-compile) on project ***.***: Compilation failure: Compilation failure:
[ERROR] .filter(Objects::nonNull)
[ERROR] ^^^^^^^^^^^^^^^^
[ERROR] Method references are allowed only at source level 1.8 or above
[ERROR] 2 problems (2 errors)
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
My questions are:
Do you think the files .settings/org.eclipse.jdt.core.prefs and .classpath (I guess not for this one) are necessary for the build of maven tycho? Do I have to explicitly define java 8 in those files?
Do I have to specify something else in the pom files?
As mentioned in the tycho-compiler-plugin documentation, the information in file .settings/org.eclipse.jdt.core.prefs will be passed to the compiler.
It is then necessary to update the file as follows:
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
org.eclipse.jdt.core.compiler.compliance=1.8
org.eclipse.jdt.core.compiler.source=1.8
How about removing those lines from .settings/org.eclipse.jdt.core.prefs and just using the maven.compiler.* properties from the pom.xml? That way you avoid misleading double configuration.
As long as it works, it's good. It's just a question of choice.
You have to specify the source level in your pom.xml file:
<project>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
</project>
This has nothing to do with tycho, it is the compiler plugin.
You can alternatively configure the plugin itself.
Please refer to this page.
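For the plugin-level alternative, configuring the Tycho compiler directly might look like the sketch below; check the tycho-compiler-plugin documentation for your Tycho version, since the parameter names here are assumed and the version should match the one in your build:

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-compiler-plugin</artifactId>
  <version>1.0.0</version>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>
```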
Before using lambda expression of java 8, it compiled and built normally, even without specifying the option you mentioned. Do you think it is the original cause?
Adding method references is enough reason for Maven to complain if the source level is set to a level less than Java 8. Method references and lambdas are new features of Java 8.
Ok. I'll try to add these two properties in my root pom. However concerning the first question, do you think maven/tycho has anything to do with the file .settings/org.eclipse.jdt.core.prefs? I just changed the file to have
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
org.eclipse.jdt.core.compiler.compliance=1.8
org.eclipse.jdt.core.compiler.source=1.8, and it seems compiled. Do you have any idea?
I don't think that Tycho looks inside your Eclipse project files. AFAIK, it can compile any OSGi bundle, not necessarily an Eclipse RCP. So ideally, it would perfectly compile without the .settings directory and .classpath file. These files are usually not tracked by your revision control system.
Yes, that's what I think too. It should not be dependent on a specify IDE.
You can try by renaming the .settings dir, and give it a compile (mvn clean install from the command line). You may want to close Eclipse in the meantime, not to recreate a .settings dir.
I just found out that the settings in file .settings/org.eclipse.jdt.core.prefs are passed to tycho compiler. Check it out here
I had the same problem and the solution was to set JAVA_HOME to the right JDK.
| common-pile/stackexchange_filtered |
How to connect to an Oracle database using odbcDriverConnect
I need to use odbcDriverConnect to connect to an Oracle database and I cannot use odbcConnect, so I have to provide a Driver parameter, but I am not sure what it should be. Here is my current call to the odbcDriverConnect() function:
odbcDriverConnect("Driver=??????;server=myServer;database=mydatabase;uid=me;pwd=mypassword;port=1234")
What driver should I put in so I can query the database?
Thank you.
Driver and dsn are specific to your machine and your db. There's no way anyone not sitting at your computer would know.
But can you give an example or tell me where I can find out on my machine what the driver should be?
I'm not going to be able to provide any information that isn't in the (excellent) RODBC vignette, especially since you haven't even said what platform you're on.
My Driver is OraClient12Home1
which vignette is that? I on windows 7
You can find the vignette on this page.
| common-pile/stackexchange_filtered |
Minimum difference between numbers in a file
I'm learning Java and I created a simple solution for the following problem:
Given a file [filename].txt such that each line has the format string, double, find the minimum possible difference between any two values and identify whether it is less than an arbitrary value minDist
package com.company;
import java.io.*;
import java.util.*;
public class Main {
public static void main(String[] args) {
    try {
Scanner sc = new Scanner(System.in);
System.out.print("Please enter the name of the input file: ");
Scanner inFile = new Scanner(new File(sc.nextLine()));
sc.close();
List<Double> l = new ArrayList<>();
while (inFile.hasNextLine()) {
String[] parts = inFile.nextLine().split(",");
l.add(Double.parseDouble(parts[1]));
}
Collections.sort(l);
double minDist = 5;
double diff;
double minDiff = Double.MAX_VALUE;
for (int i = 1; i < l.size(); i++) {
diff = l.get(i) - l.get(i - 1);
if (diff < minDiff) minDiff = diff;
}
if (minDiff < minDist) System.out.println("The satellites are not in safe orbits.");
else System.out.println("The satellites are in safe orbits.");
if (l.size()!=1) System.out.println("The minimum distance between orbits (km): " + minDiff);
inFile.close();
} catch (NumberFormatException | FileNotFoundException | ArrayIndexOutOfBoundsException e) {
System.err.println(e);
}
}
}
I am looking on advice on if there are any other possible errors / exceptions I may have missed, in addition to any way of increasing the efficiency / simplicity of my code.
I posted a Meta question about an edit to this question and some other cosmetic issues. This should not block people from answering, as any proposed edits should not invalidate answers based on the code.
You need to make sure you always close resources that can be closed. That means either a try-with-resources block or a try-finally block. You can’t just leave close outside one of these constructs because if an exception gets thrown the close method might not get called.
Arguably, parsing the distances is easier to read using the stream API. You might not have learned that part of the language yet, in which case there’s nothing wrong with your loop except that inFile is never closed. Oh, no, it is, but waaaay too late. Keep the scope of the scanner as tight as possible.
Likewise, the code for finding the minimum distance can be written as a stream. Credit to @AnkitSont.
Variable names are really, really important. They should describe the information being pointed to. l is a terrible variable name because it doesn’t tell us anything. list would be a slight improvement, but distances would be much better.
Along the same lines, there’s no reason to cut short variable name lengths. sc instead of scanner doesn’t save you anything, and it makes it harder on the reader because they have to go dig up what an sc is. minDist, diff and minDiff can and should all be expanded out, and minDist might be better named safeDistance.
Declare variables where they’re first used, and limit their scope as much as possible. This reduces the cognitive load on the reader. minDist can be declared right before your output. It could arguably also be a constant in your class (private static final double declared at the class level). diff can be declared inside the for loop.
Even though the language allows curly braces to be optional for one-line blocks, it is widely regarded best practice that you always include curly braces. To do otherwise is inviting problems later when your code is modified.
Be careful with conditional checks. All of your code will work correctly if there are zero distances in the file until you hit if (l.size() != 1). You really want to check if (l.size() > 1), right?
Your catch block isn’t really doing anything. Uncaught exceptions would effectively be handled the same way - execution would terminate and the stack trace would be logged to System.err.
It's nice to use final when you can to indicate that a variable isn't going to be reassigned. This also reduces the cognitive load of the reader.
If you were to apply all these changes to your code, it might look something like:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;
import java.util.stream.IntStream;
public final class Main {
public static void main(final String[] args)
throws IOException {
final String filename;
System.out.print("Please enter the name of the input file: ");
try (final Scanner scanner = new Scanner(System.in)) {
filename = scanner.nextLine();
}
final double[] distances =
Files
.lines(Paths.get(filename))
.mapToDouble((line) -> Double.parseDouble(line.split(",")[1]))
.sorted()
.toArray();
final double minimumDifference =
IntStream
.range(1, distances.length)
.mapToDouble(i -> distances[i] - distances[i - 1])
.min()
.getAsDouble();
final double safeDistance = 5;
if (minimumDifference < safeDistance) {
System.out.println("The satellites are not in safe orbits.");
} else {
System.out.println("The satellites are in safe orbits.");
}
if (distances.length > 1) {
System.out.println("The minimum distance between orbits (km): " + minimumDifference);
}
}
}
You can divide your main into smaller methods.
The whole code shouldn't be wrapped inside a try/catch block. You should minimize the scope of your try block only where expect an Exception.
Calculation of the minimum distance can be done using Stream.
Try to have parentheses {} for if/else statements even when there's only one statement in it.
Scanner should be closed as soon as its work is done, instead of closing at the very end. Or use try with resources.
5 can be extracted as a constant.
class Main {
private static final double MIN_DISTANCE = 5;
public static void main(String[] args) {
List<Double> satelliteDistances = getSatelliteDistancesFromFile();
double minDiff = getMinimumDistanceBetweenSatellites(satelliteDistances);
displayResults(satelliteDistances, minDiff);
}
private static double getMinimumDistanceBetweenSatellites(List<Double> satelliteDistances) {
return IntStream.range(1, satelliteDistances.size())
.mapToDouble(i -> satelliteDistances.get(i) - satelliteDistances.get(i - 1))
.min().getAsDouble();
}
private static List<Double> getSatelliteDistancesFromFile() {
List<Double> satelliteDistances = new ArrayList<>();
try (Scanner fileScanner = getFileScanner()) {
while (fileScanner.hasNextLine()) {
String distanceAsString = getDistanceAsString(fileScanner);
satelliteDistances.add(Double.parseDouble(distanceAsString));
}
}
Collections.sort(satelliteDistances);
return satelliteDistances;
}
private static String getDistanceAsString(Scanner fileScanner) {
return fileScanner.nextLine().split(",")[1];
}
private static Scanner getFileScanner() {
Scanner sc = new Scanner(System.in);
System.out.print("Please enter the name of the input file: ");
Scanner inFile = null;
try {
inFile = new Scanner(new File(sc.nextLine()));
} catch (FileNotFoundException e) {
e.printStackTrace();
}
sc.close();
return inFile;
}
private static void displayResults(List<Double> satelliteDistances, double minDiff) {
String safeOrbitResult = minDiff < MIN_DISTANCE ? "The satellites are not in safe orbits." : "The satellites are in safe orbits.";
System.out.println(safeOrbitResult);
if (satelliteDistances.size() != 1) {
System.out.println("The minimum distance between orbits (km): " + minDiff);
}
}
}
| common-pile/stackexchange_filtered |
Possible to extend methods of Array in ruby?
I have an array of instances of e.g. class Result, I want to join names of results rather than results with a ',' like below:
@results=[result1, result2...]
results.join(", ") do |r|
r.name
end
The results.join method should be an extension method of Array; I want it available to all arrays in my program.
Possible?
#join is already an array method... what are you asking for exactly?
Why should it be available to all arrays if it only works on arrays of objects that respond to "name"? Besides, you can already collect or map and get the same effect without polluting a general-purpose class with message-specific functionality. Yes, it's possible.
Yes, join is already a method of Array; what I want is a join which can take a block, like def join(delim); map { |x| yield(x) }.join(delim); end
The code you posted doesn't even work because join does not take a block. You need to use collect and then join.
Yes, this is possible.
class Array
def join_names
collect(&:name).join(", ")
end
end
But this makes it more likely that you code will have namespace collisions with other libraries that add methods to the Array class.
Thanks. What I want should be like this, which means I want to override/overload? the existing join function of Array, possible and any problem?
class Array
def join(delim)
if block_given?
self.map { |item| yield(item) }.join(delim)
else
self.join(delim)
end
end
end
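Note that the else branch in the override above calls the redefined join again, so a plain results.join(", ") would recurse forever. If you do want a block-aware join without losing the built-in behaviour, one option is Module#prepend, which keeps the original method reachable via super — a sketch (the module name is made up):

```ruby
module JoinWithBlock
  def join(sep = "")
    if block_given?
      # Map each element through the block, then join the results.
      map { |item| yield(item) }.join(sep)
    else
      super  # falls through to the built-in Array#join
    end
  end
end

Array.prepend(JoinWithBlock)

result = [1, 2, 3].join(", ") { |x| x * 10 }  # => "10, 20, 30"
```

prepend puts the module before Array in the method lookup chain, so super from inside the module reaches the built-in join instead of recursing.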
Overriding the built-in function isn't a good idea. Extend Array and define a new method, like join_names, as in the answer here.
| common-pile/stackexchange_filtered |
How can I remove symmetry from an armature?
I have a model with an armature that I obtained from Turbosquid. As you can see in this screen shot, there are bones for both front legs at each joint.
I want to move only the right front leg. However, when I try to pose the right leg in pose mode, all movements are mirrored onto the left front leg, and the bones of the left front leg don't move at all.
When I try to move the bones of the left leg, they don't affect the position of the leg at all.
There is no mirror modifier applied, and the x axis symmetry option is unchecked. I can't find anything to indicate how the bones of the right leg are linked to the left in order to unlink them. Can anyone point me in the right direction for this? I'd rather not have to build a whole new armature for this.
File: https://drive.google.com/file/d/1Ykw7LA0aLLTbxYVrQb23CEBUxrnu2g-0/view?usp=sharing
hi could you please share your file
@Ribbit12 just added to the bottom of the post!
The weights aren't right. The solution for this isn't particularly easy. You need to reassign everything on the right side of the model to bones with the .R suffix. The easiest way to do this would be separate the mesh object into 2 half objects, rename vertex groups on the right side, then rejoin and weld the two sides. I'm not sure that it's worth the trouble. This is pretty shoddy work on Turbosquid; if you've done something to the original, leading to these problems, it might be better to revert.
Thanks so much for taking a look @Nathan. I did make some adjustments to the geometry of the legs and used Mesh -> Symmetrize in edit mode to apply the changes to the left side. Do you think that could have been the culprit? Since I only need to move one leg, I guess just ditching the armature and making new bones just for the leg is the way to go.
The easy way to see if your changes were the culprit is to load the original and see if it works.
| common-pile/stackexchange_filtered |
Work Item creation not bound to Pull Request bi-directionally
When creating a new Work Item of a specific type, and trying to link it to a PR at the same time, the Work Item is not shown on the Pull Request summary page.
So I'm making a fairly simple POST request to the /_apis/wit/workitems/{type} endpoint to create a new Work Item. It's working as expected for the basic properties, and since I'm adding a relation to a PR, that is also working when viewing the Work Item, but not the other way round (viewing the Pull Request shows no work item present).
My Patch Request looks like this (with the appropriate IDs in the ArtifactLink of course):
[
{
"from":null,
"value":"Test Title",
"op":"add",
"path":"/fields/System.Title"
},
{
"from":null,
"value":"Some\\Area\\Path",
"op":"add",
"path":"/fields/System.AreaPath"
},
{
"from":null,
"value":"Iteration 1",
"op":"add",
"path":"/fields/System.IterationPath"
},
{
"from":null,
"value":"",
"op":"add",
"path":"/fields/System.Description"
},
{
"from":null,
"value":{
"rel":"ArtifactLink",
"url":"vstfs:///Git/PullRequestId/{id}%2F{id}%2F13811",
"attributes":{
"name":"Pull Request"
}
},
"op":"add",
"path":"/relations/-"
}
]
| common-pile/stackexchange_filtered |
Is it possible that I just saw Jupiter's moons?
Today at about 18:00 I was looking for Venus near the moon and I saw a short bright line. I thought that maybe I was seeing Venus' crescent but it was perpendicular to the crescent of the moon. I then noticed Venus very close to the moon and realised that the line is Jupiter. I can confirm the location was Jupiter in Stellarium, and I can confirm in Stellarium that the moons are in fact aligned in the direction that I saw the line. Is it possible that I caught a glimpse of the Jupiter-moons system as a line and not a point source?
Only if your eyes are a heck of a lot better than mine. But see http://en.wikipedia.org/wiki/Galilean_moons#Visibility
Maybe.
The Galilean moons are (barely) bright enough to be seen with the naked eye, but they're so close to the much brighter Jupiter that seeing them is at best very difficult (but easy with even low-powered binoculars). Jupiter is not currently at opposition (the closest it gets to Earth), so that's not ideal.
I've never seen them without binoculars or a telescope, but someone with very good eyes in excellent seeing conditions might be able to make them out. And the OP didn't see them individually, just "a bright line".
Try again if you get another opportunity. If you see the "bright line", try rotating your head. If the line rotates along with your head, you're seeing some other phenomenon, perhaps an irregularity in your eyes, or a reflection in your glasses or contact lenses if you wear them. If it doesn't rotate, I'd say there's a good chance you're really seeing the Galilean moons.
Another test would be to repeat the observation when all the moons are on the same side of Jupiter.
Just to gauge your eyesight, how many Pleiades are you able to see without binoculars or a telescope?
(And in case anyone is wondering, Jupiter's rings are not visible with the naked eye; they weren't even discovered until the Voyager 1 flyby in 1979.)
References:
http://en.wikipedia.org/wiki/Galilean_moons#Visibility
http://denisdutton.com/jupiter_moons.htm
Thank you. Even before I read this comment, I took my daughters out today at 17:00 and sure enough, when the clouds parted my five year old, myself, and a 9 year old neighbour were able to see Jupiter as a line. The moons are in a completely different configuration now, but it is a phenomenon that is reproducible. The 9 year old described the angle of the line exactly as I saw it, which corroborates Stellarium's angle for the moons. My own eyesight is not perfect, slightly nearsighted, and I can make out maybe 5 or so individual points of light in the Pleiades.
@dotancohen: No offense, but I'm still a bit skeptical, given that three different people were able to see them, that your own eyesight doesn't appear to be remarkably good, and that (depending on your location) the sky shouldn't be all that dark at 17:00, and there were some clouds in the sky. Were you in an extremely dark location? Did you try tilting your head? Do you see the line as asymmetric, and if so does the asymmetry correspond to the actual positions of the moons? You make a somewhat remarkable statement, and it's worth being very sure.
I am as sceptical as you are! The location was not extremely dark, however myself and my daughters often try to find Jupiter and Venus during those hours. I did succeed a few weeks ago to see Venus at 16:25 and we have seen Venus at 17:00 quite a few times already. That part I am sure about! But this line is hard for me to believe. The older neighbour certainly see it in the same direction as I see it, and tilting the head confirms that it is not an ocular phenomenon. The only explanation that satisfies me is that we are seeing them blurred together, not individual points of light.
I don't think so - Jupiter itself is easily visible to the naked eye, but the moons aren't.
Not sure what you would have seen though.
Did you read @Keith's comment: 'Only if your eyes are a heck of a lot better than mine. But see http://en.wikipedia.org/wiki/Galilean_moons#Visibility'. To me it looks like he may possibly have seen the moons if he has excellent eye sight and a good seeing condition.
| common-pile/stackexchange_filtered |
plotting graph with sound in asp.net
User A will record his voice and I want to plot a graph according to his voice... is there any voice chart available?
I have use the Bass library from un4seen.com
http://www.un4seen.com/
they have an asp.net port.
Similar question: asp.net create waveform image from mp3
I downloaded the sample... but I am unable to find the code to achieve such a task... can you please help?
| common-pile/stackexchange_filtered |
QuickLook set app to display preview
Is it possible to set the app that displays the QuickLook preview in OS X?
I'm developing iOS apps and I have Xcode installed, but when I open the QuickLook preview of a source-code file, the preview uses iA Writer, which I have also installed, even though the "Open with" button in the top right corner shows Xcode.
Info: I'm using a MacBook Air, Late 2010, OS X 10.9.2
If I uninstall iA Writer it looks like this
Question:
How do I tell the Quick Look NOT to use the iA Writer (without having to uninstall it)?
What is the question here? Do you want to change the "Open with" setting, or something else?
The question is if it is possible to set the program used for the preview in QuickLook
So QuickLook correctly identifies the file and displays it using the QuickLook plugin for it. You do not like the display? What does iA Writer have to do with it? Which app do you want QuickLook to use?
I want it not to use iA Writer for the preview. If I uninstall iA Writer, it looks like this: https://dl.dropboxusercontent.com/s/ejo7ifvdvospqto/Screenshot%202014-04-18%2011.48.28.png
When you right-click on "Open with", what options do you see?
I see this: https://www.dropbox.com/s/clpszxhh1nkc92f/Screenshot%202014-04-18%2012.44.53.png
qlmanage
Use the command line tool qlmanage to investigate your QuickLook set up, including the default generator for each file format.
qlmanage -- Quick Look Server debug and management tool
Apple's QuickLook developer documentation provides a good overview of how QuickLook works and how to test specific plugins.
Editing the iA Writer QuickLook Plugin
You may want to edit the iA Writer QuickLook plugin to disassociate it from your .h and .m files. See QuickLook file associations for more information about achieving this.
This answer is coming a lot later, but I had to figure this out for myself.
Go to where Writer is located: /Applications/
Right click and show package contents
Enter Library
Delete QuickLook
Restart your computer, or run qlmanage -r from the command line
Try this
select the file and press cmd+I.
Then under "Open with" select the application you want to open in QuickLook.
And then press the "Change All..." button
This only changes the application that opens the file when you doubleclick on it, it doesn't affect the QuickLook Plugin.
This does not change the quicklook, as asked. Furthermore, it changes the application that opens the extension, which was not asked for.
| common-pile/stackexchange_filtered |
Calculate the distance from Plickford to Murbell
Attached is my question. Please provide an explanation for how I could calculate the distance from Plickford to Murbell.
HINT :
$\angle{MBP}=57^\circ+(180^\circ-155^\circ)=82^\circ$.
Now use the law of cosines.
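Spelling the hint out (the side lengths $MB$ and $BP$ must be read off the attached figure, which is not reproduced here):

```latex
% Law of cosines on triangle MBP, with the included angle from the hint:
MP^2 = MB^2 + BP^2 - 2 \cdot MB \cdot BP \cdot \cos 82^\circ
\qquad\Longrightarrow\qquad
MP = \sqrt{MB^2 + BP^2 - 2 \cdot MB \cdot BP \cdot \cos 82^\circ}
```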
| common-pile/stackexchange_filtered |
Android retrieve image from parse.com and use it
Actually, I want to use this library "Aphid-FlipView-Library" to do a flip animation.
The images I want to flip are from Parse.com.
But I got problems when doing this.
Please give me some instructions. Thanks!
First, I create a class to set and get image's bitmap value.
package com.example.bookard;
import android.graphics.Bitmap;
public class UseForFlip {
private Bitmap Photo;
public Bitmap getPhoto(){return Photo;}
public void setPhoto(Bitmap photo){Photo = photo;}
}
And, the code below is the first half of the rotate Activity. I want to retrieve an image from Parse.com and add it into an arraylist called "notes". This arraylist is used in the second half of the rotate Activity.
public class rotate extends Activity {
UseForFlip forflip;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.rotate);
forflip = new UseForFlip();
ParseQuery<ParseObject> query = new ParseQuery<ParseObject>("Card");
query.getInBackground("wxYBHhRlhZ", new GetCallback<ParseObject>() {
@Override
public void done(ParseObject object, ParseException e) {
// TODO Auto-generated method stub
ParseFile fileObject1 = (ParseFile) object.get("Photoin1");
fileObject1.getDataInBackground(new GetDataCallback(){
@Override
public void done(byte[] data, ParseException e) {
// TODO Auto-generated method stub
forflip.setPhoto(BitmapFactory.decodeByteArray(data, 0,data.length));
}
});
}
});
ArrayList<Bitmap> notes = new ArrayList<Bitmap>();
notes.add(forflip.getPhoto());
FlipViewController flipView = new FlipViewController(this, FlipViewController.HORIZONTAL);
flipView.setAdapter(new NoteViewAdapter(this, notes));
setContentView(flipView);
}
The below code is the second half of the rotate Activity.
public class NoteViewAdapter extends BaseAdapter {
private LayoutInflater inflater;
private ArrayList<Bitmap> notes;
public NoteViewAdapter(Context currentContext, ArrayList<Bitmap> allNotes) {
inflater = LayoutInflater.from(currentContext);
notes = allNotes;
}
@Override
public int getCount() {
return notes.size();
}
@Override
public Object getItem(int position) {
return position;
}
@Override
public long getItemId(int position) {
return position;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View layout = convertView;
if (convertView == null) {
layout = inflater.inflate(R.layout.rotate, null);
}
Bitmap note = notes.get(position);
ImageView tView = (ImageView) layout.findViewById(R.id.picture);
tView.setImageBitmap(note);
return layout;
}
}
I run the code, but there's nothing in the tView.
The only thing I can't figure out is how I can put the image from Parse.com into the arraylist
You need to move that part inside the callback.
ParseQuery<ParseObject> query = new ParseQuery<ParseObject>("Card");
query.getInBackground("wxYBHhRlhZ", new GetCallback<ParseObject>() {
@Override
public void done(ParseObject object, ParseException e) {
// TODO Auto-generated method stub
ParseFile fileObject1 = (ParseFile) object.get("Photoin1");
fileObject1.getDataInBackground(new GetDataCallback(){
@Override
public void done(byte[] data, ParseException e) {
// TODO Auto-generated method stub
if(e == null){
Bitmap bmp1 = BitmapFactory.decodeByteArray(data, 0,data.length);
ddd.setPhoto(bmp1);
ImageView img=(ImageView)findViewById(R.id.xxx);
img.setImageBitmap(ddd.getPhoto());
}else{
Log.d("test","There was a problem downloading the data.");
}
}
});
}
});
}
As you can see, there's no need to use this ddd class.
Alternative solution: use a <com.parse.ParseImageView>. The usage is pretty straightforward, i.e.:
ParseImageView view = (ParseImageView) findViewById(R.id.parse);
ParseFile fileObject1 = ... //file you get from the query
view.setParseFile(fileObject1);
view.loadInBackground();
Based on your edit, I'd suggest:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.rotate);
FlipViewController flipView = new FlipViewController(this, FlipViewController.HORIZONTAL);
NoteViewAdapter mAdapter = new NoteViewAdapter(this);
flipView.setAdapter(mAdapter);
setContentView(flipView); //why are you calling setContentView twice?
ArrayList<Bitmap> notes = new ArrayList<Bitmap>();
ParseQuery<ParseObject> query = new ParseQuery<ParseObject>("Card");
query.getInBackground("wxYBHhRlhZ", new GetCallback<ParseObject>() {
@Override
public void done(ParseObject object, ParseException e) {
ParseFile fileObject1 = (ParseFile) object.get("Photoin1");
fileObject1.getDataInBackground(new GetDataCallback(){
@Override
public void done(byte[] data, ParseException e) {
notes.add(BitmapFactory.decodeByteArray(data,0,data.length));
mAdapter.setNotes(notes);
}
});
}
});
}
Then you create this method inside the adapter:
public void setNotes(ArrayList notes) {
this.notes = notes;
notifyDataSetChanged();
}
But I really need to use it out of the code. Is there any solution? Thanks!
What do you mean? You clearly can't use the bitmap before it has been downloaded. In the code you posted the error is that getPhoto is called before setPhoto. It's not a matter of in/out, but of before/after.
I update the question just to make it clear. thanks!
You should create an empty adapter with a simple constructor (NoteAdapter a = new NoteAdapter(this), without notes) and later you perform the query, you create your notes object and pass it to the adapter (a.setNotes(notes), with a public method).
It shows me that ArrayList<Bitmap> notes = new ArrayList<Bitmap>(); and NoteViewAdapter mAdapter = new NoteViewAdapter(this); should be defined as "final". After I ran the code, an error occurred. Here comes the error message
08-18 08:57:05.688: E/AndroidRuntime(3743): Caused by: java.lang.NullPointerException 08-18 08:57:05.688: E/AndroidRuntime(3743): at com.example.bookard.rotate$NoteViewAdapter.getCount(rotate.java:72) 08-18 08:57:05.688: E/AndroidRuntime(3743): at com.aphidmobile.flip.FlipViewController.setAdapter(FlipViewController.java:291) 08-18 08:57:05.688: E/AndroidRuntime(3743): at com.aphidmobile.flip.FlipViewController.setAdapter(FlipViewController.java:280) 08-18 08:57:05.688: E/AndroidRuntime(3743): at com.example.bookard.rotate.onCreate(rotate.java:31)
08-18 08:57:05.688: E/AndroidRuntime(3743): at android.app.Activity.performCreate(Activity.java:5133) 08-18 08:57:05.688: E/AndroidRuntime(3743): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1087) 08-18 08:57:05.688: E/AndroidRuntime(3743): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2175) 08-18 08:57:05.688: E/AndroidRuntime(3743): ... 11 more
Modify your getCount method with return notes == null ? 0 : notes.size()
It works, thank you very much! But I got another question. I have many rows in a parseobject called "Card", and each row has 4 columns that are used to store images. So I want to flip these 4 images, not just 1. How should I do that?
Use query.findInBackground, not query.getInBackground (used to retrieve a single row). You will get your four Cards object, then you can set up a for loop and populate your notes array.
But the 4 images are in the same row. And I just want that 4 images, is there any solution? Thanks.
Oh. Well, you need to get those four parse files like you did here.
But mAdapter.setNotes(notes); can only set once, where to set it?
You can transform it into an addNote(Bitmap b) , and call it four times.
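A minimal, Android-free sketch of that suggestion (plain Java, class names hypothetical, so the accumulation logic is visible without any Android dependencies; in the real adapter notifyDataSetChanged() would follow each add, and the null-safe getCount from earlier in this thread still applies):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the adapter's note bookkeeping only (no Android classes).
class NoteStore {
    private List<String> notes; // Bitmap in the real code; String here

    // Called once per downloaded image, e.g. four times for four columns.
    void addNote(String note) {
        if (notes == null) notes = new ArrayList<>();
        notes.add(note);
        // the adapter would call notifyDataSetChanged() here
    }

    // Null-safe count, as suggested in the comments above.
    int getCount() {
        return notes == null ? 0 : notes.size();
    }
}

public class Main {
    public static void main(String[] args) {
        NoteStore store = new NoteStore();
        System.out.println(store.getCount()); // 0 before any download finishes
        for (String col : new String[]{"Photoin1", "Photoin2", "Photoin3", "Photoin4"}) {
            store.addNote(col); // one call per downloaded ParseFile
        }
        System.out.println(store.getCount()); // 4 after all four callbacks
    }
}
```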
I really appreciate your help! It works very well! This is my first time developing an app, so the information you provided is very useful for me! Thanks a lot!
So sorry to bother you again.
I want to add 4 textview in the same adapter and I use exactly the same code you told me yesterday, but it didn't work.
texts.add(object.getString("Textin1"));
mAdapter.setTexts(texts);
texts.add(object.getString("Textin2"));
mAdapter.setTexts(texts);
//2 more left
public void setTexts(ArrayList texts) {
this.texts = texts;
notifyDataSetChanged();
}
How can I set those textview with the text I get from Parse?
Thank you so much!!!!
I suggest you ask another question, you'll get more visibility than just me :-)
Thanks for your suggestion. I've already asked another question. Here is the web link [thanks]: http://stackoverflow.com/questions/32141798/android-retreiving-image-and-text-from-parse-com-and-using-it-in-an-adapter. Please help me solve the problem if you are free, thanks.
| common-pile/stackexchange_filtered |
Flutter dual sim -Send SMS
I have a dual SIM Android phone. I want to send SMS by selecting SIM slots through Flutter/Dart programmatically. When I tried, by default it goes from the first SIM slot. Is there any solution in Flutter, like SubscriptionManager() in Android?
You can create a native method and then connect it with Flutter with the help of platform channels.
Thank you. But is there any plugin like Android's SubscriptionManager()?
I don't know about that but as far as I know, flutter does not support core features like that.
I tried the below code in Android native and then bridged it to Flutter with platform channels, and it worked for me.
SubscriptionManager localSubscriptionManager = (SubscriptionManager)getSystemService(Context.TELEPHONY_SUBSCRIPTION_SERVICE);
if (localSubscriptionManager.getActiveSubscriptionInfoCount() > 1) {
List localList = localSubscriptionManager.getActiveSubscriptionInfoList();
SubscriptionInfo simInfo = (SubscriptionInfo) localList.get(simSlot);
SmsManager.getSmsManagerForSubscriptionId(simInfo.getSubscriptionId()).sendTextMessage(phone, null, smsContent, sentPI, deliveredPI);
}else{
List localList = localSubscriptionManager.getActiveSubscriptionInfoList();
SubscriptionInfo simInfo = (SubscriptionInfo) localList.get(simSlot);
sms = SmsManager.getDefault();
sms.sendTextMessage(phone, null, smsContent, sentPI, deliveredPI);
}
Can I read incoming SMS on a dual SIM phone using Flutter?
| common-pile/stackexchange_filtered |
Waydroid ABI error when installing APK
I bought a new laptop, installed Ubuntu 24.04, downloaded from apkmirror.com the APKs I want to use (including APKMirror Installer) to play games, copied these APKs to my home directory's .local/share/waydroid/data/media/0/Download (I could then see the APKs in Waydroid), and installed APKMirror in Waydroid with no problem. But when I try to install any game APK I get the following error:
"The system failed to install the package because its packaged native code did not match any of the ABIs supported by the system"
My Waydroid Parameters "About phone" shows "WayDroid x86_64 Device". Help!
Do you know for sure the apps you tried to install are known to work with Ubuntu 24.04? You may want to visit the web site and find out.
APKs are for Android OS. Ubuntu is Linux. They don't run natively in Ubuntu. Edit your question and add exactly each step of what you installed.
I found the cause of my problem this morning. To my horror, I discovered that most applications for cell phones or tablets support either the ARM or x86 architecture, but not both. Very few, maybe 10% from what I've seen, support both: they are qualified as Universal. Applications being dependent on the underlying hardware is beyond me. Sorry to have bothered you guys.
| common-pile/stackexchange_filtered |
How to combine current and target values for many edit controls?
The software needs to display numeric values of many (say 20 or more) control variables. It needs to show the current value which may get updated by the program every so often. Additionally, the user may want to set target values for some or all of these variables.
This could be done by two columns (first column showing current values, second column showing desired values) but I fear that this will clutter the whole UI unnecessary, doubling the number of display elements.
What would be alternative designs for such an UI? If possible I would like to combine both, the current and the target value in one element and make the target editable as well as indicate if a target value has been set.
The software is for controling a machine, so the users are used to some complexity and a more technical presentation.
Example:
"It needs to show the current value which may get updated every so often", the image you share has an input for both columns, are both editable in this view?
The first column showing the current value would not be editable. The value would get updated by the program regularly.
You'll cut the number of fields in half just by displaying the non-editable values without a text field. Another benefit would be that the non-editable information will look non-editable.
@KenMohnkern That's true, but the number of displayed elements will stay the same. Please note, that I edited the question a bit in that regard before you commented. I guess the question is really about how to hide/show target values depending upon if they are needed/ not needed. The existing answers go some way along that line and are definitely helpful but they are not that convincing that I can mark them as solution yet.
Considering the current values are system generated and not editable. I think this will help:
Depending on the type of users of your product, you can even think of displaying the set link on mouse-over. You can use a horizontal scroll for the values columns, and keep the Current and Target columns fixed to the viewport when the number of columns increases.
You exchanged rows and columns here and decided to replace a target value not set by a set link. While this is already nice it still suffers from the doubled number of elements/cells in the table. Is there a typically used, nice looking way to combine both rows (columns in my question)?
@Trilarion Additionally, the user may want to set target values for some or all of these variables - meaning it's optional. In which case, it's best to let the user choose if he wants to set the target value. You've to repeat something, either the Target title or the set link. The approach above saves vertical space.
You can use one of the images below for reference (just cooked up something; it still requires a lot of work, but you get the basic idea). It is one way to show a range-type present/target UI. It visually represents the current value and the goal. You can put a textbox for the target field if you want, but placing the editable icon is important to show that the field is editable.
| common-pile/stackexchange_filtered |
When I click the "Tour"-button on English Language Learners-site the right edge of the text is partially cut off
When I click the "Tour" button, the right edge of the text is partially cut off. I have to guess what's meant there. How can this fault be fixed, if at all?
If there are animations, where can I find them on the ELL site, please? I can't find them, actually.
Seems to be the case here too: https://meta.stackexchange.com/tour . If it's by design, it is rather confusing.
No repro, appears fine to me. Screenshot and browser details please? (/cc @Journeyman)
https://i.sstatic.net/6TvA4.png right edge of the examples. I'm running vivaldi, a chrome fork, but reproduced it in chrome
@JourneymanGeek Animations are missing in the Tour page - the same issue you are referring?
Ah, potentially. In this case though, one can imagine the temporary solution is rather confusing, It could be by design, or something that perhaps should be reviewed in due course
Well, those are just example posts, meant to show the voting arrows and the general concept; putting a whole screenshot would just clog the screen. Animation was nice to have, but without it, I guess this is the "least bad" way to show it.
Depends. If you're a new user seeing all the half-cut-off examples, it's possible you might be confused. The OP was! I haven't even seen the tour in years and didn't realise it was intentional.
| common-pile/stackexchange_filtered |
Would it be useful to convert NTFS to EXT4 with this system setup?
Consider the following system:
Distribution: Devuan Beowulf.
DRAM Memory: 16 GB.
Linux kernel version: 5.10.0.
Secondary storage: One SDD and one HDD.
Used as a desktop, including software development which can be taxing on system resources in itself.
The SSD has a boot partition P1, a large partition P2 mounted as /.
The HDD has a single large partition P3, with different paths in / symlinked to places within it (e.g. /home or /home/someuser, and perhaps some paths under /var etc.). It is in heavy use.
How important/beneficial is it for partition P3 to be ext4 rather than NTFS?
Bonus question: Should another filesystem be considered for P3?
@muru: Actually, I'm not sure - since the answer there was written 6 years ago.
I don't think NTFS-3G has been re-rewritten to be in-kernel in the past 6 years,
You should only use NTFS if you have Windows, or at least a Windows repair flash drive: it may need chkdsk and defrag, which you can only run from Windows. NTFS does not have Linux ownership and permissions. You should not symlink any system partitions. Use NTFS just for data when also using Windows; if Linux-only, use a Linux format. I use ext4, but I see others being used for data, though not for the system.
| common-pile/stackexchange_filtered |
Send an email that one recipient doesn't receive
:)
I searched for any solution but I found nothing. I'd send an email like as follows:
To: <EMAIL_ADDRESS> CC: <EMAIL_ADDRESS>
but I want Recipient1 to NOT receive the email, while Recipient2 does. In this way, Recipient2 thinks I sent the email also to Recipient1 but, actually, Recipient1 received nothing.
Is it possible?
Thank you very much.
That's not possible from the client side. If you add an address to to or cc or bcc the server will try to send to that address. If you have control over the Server, you may be able to block it there.
Try adding special characters to the first email address, like a non-breaking space or a similar-looking character from another character set; you may find a way to generate a failing address while not really displaying it!
I know it works that way with URL and some spam e-mails...
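To make the lookalike-character idea concrete (a sketch only; whether the receiving server actually bounces such an address depends entirely on its validation, and the address here is a placeholder):

```python
# Replace a Latin "a" (U+0061) with the visually similar Cyrillic "а"
# (U+0430): the string displays almost identically but is a different,
# almost certainly undeliverable, address.
real = "recipient1@example.com"   # placeholder address
decoy = real.replace("a", "\u0430", 1)

print(real == decoy)   # False -- they differ at one code point
print(repr(decoy))
```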
| common-pile/stackexchange_filtered |
UIImage size doubled when converted to CGImage?
I am trying to crop part of an image taken with the iPhone's camera via the cropping(to:) method on a CGImage but I am encountering a weird phenomenon where my UIImage's dimensions are doubled when converted with .cgImage which, obviously, prevents me from doing what I want.
The flow is:
Picture is taken with the camera and goes into a full-screen imageContainerView
A "screenshot" of this imageContainerView is made with a UIView extension, effectively resizing the image to the container's dimensions
imageContainerView's .image is set to now be the "screenshot"
let croppedImage = imageContainerView.renderToImage()
imageContainerView.image = croppedImage
print(imageContainerView.image!.size) //yields (320.0, 568.0)
print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) //yields (640, 1136) ??
extension UIView {
func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
let rendererFormat = UIGraphicsImageRendererFormat.default()
rendererFormat.opaque = isOpaque
let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
let snapshotImage = renderer.image { _ in
drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
}
return snapshotImage
}
}
I have been wandering around here with no success so far and would gladly appreciate a pointer or a suggestion on how/why the image size is suddenly doubled.
Thanks in advance.
Just a thought because I don't know for sure.... but why would some sort of rendering - in your case to a imageViewContainer - result in "effectively resizing the image"? Sure, it may scale down to what you want for rendering. But resize a UIImage.size? That doesn't quite make sense to me.
This is because print(imageContainerView.image!.size) prints the size of the image object in points and print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) print the size of the actual image in pixels.
On the iPhone you are using there are 2 pixels for every point, both horizontally and vertically. The UIImage scale property will give you the factor, which in your case will be 2.
See this link iPhone Resolutions
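The conversion the answer describes is plain arithmetic (numbers from the question; Python used only because the relationship itself is not iOS-specific):

```python
# UIImage.size reports points; CGImage width/height report pixels.
# On a 2x Retina device, pixels = points * scale with scale == 2.
points = (320.0, 568.0)  # imageContainerView.image.size from the question
scale = 2.0              # UIImage.scale on that device

pixels = tuple(int(p * scale) for p in points)
print(pixels)  # -> (640, 1136), the cgImage dimensions the asker saw
```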
| common-pile/stackexchange_filtered |
Singing and hearing quarter tones
I’ve just received a choir piece that contains some (rare) quarter tones on long held notes. For example, a B which slides into a Bᴓ (that's a half flat).
I’ve tried singing the part where it happens. Not for very long, I’ll admit, but with little success. It seems that my brain tends to put the pitch into a category it knows well: either I’m hearing a (low) B, or a (high) B♭. I’m never sure I’m singing the "right"1 note.
How can I ensure that I’m actually singing this "right" note? Do any of you know of some tools out there that could help me train my ear?
I do realise that "right" here can actually have different meanings. Let's say 24-TET. Quite frankly, if I manage to consistently sing something that is not quite a B, nor quite a B♭, I’ll be satisfied anyway.
You could down load an app that is a tuner. InsTuner is a good one as it will show you how close you are to the note.
Be interesting to see what your choirmaster has to say about it.
Out of curiosity, what piece is it?
You say long-held notes but you also say slide. I think you need to practice the slide. Sing up an ordinary scale, "doh re mi .." When you get to the semitones, slow down and slide gradually from the E to the F and the B to the C. If these are really slides then you don't have to settle on an intermediate tone, just slide through it.
@chaslyfromUK It doesn't slide through the half tone. It's a portamento from a B to a B half-flat that is then held for a few measures.
Probably indeed there are two technical difficulties: 1. slide, 2. quarter tones. Practice them independently. Concerning quarter tones, a practice technique I know is to play B and Bb on an instrument (e.g. piano, or generate them on a computer), sing one, sing another, then try to sing the note in the middle, and repeat. After a while half tone starts to feel like a huge interval.
Start by using two regular keyboards which can be tuned, assuming you are a keyboardist. I would recommend two of these to get started:
https://www.sweetwater.com/store/detail/SA76--casio-sa-76-portable-arranger-keyboard
These were designed for use in music education in India; which in short means they can be tune freely up or down 100 cents (100 division of a half step).
Try regular tuning & 50 cents sharp for 1/4 tones first. Try singing quarter tones and half steps so you don't temporarily forget what half-steps 'sound like'. When I first started I was singing quarter tones so much I put them in place of half steps.
Here are two example files to get you started. Here is 4 octaves of quarter-tones.
Here is something I have been calling 1/3 tone major. Basically 1/3 tones are 1/3 of a whole step so 3 equal 2 half steps. So this one goes: C, C-1/3#, D-1/3b, D, D 1/3#, E, F, F-1/3#, G-1/3b, G, G-1/3#, A-1/3b, A, A-1/3# B-1/3b, B, & C. Basically a major scale where you replace whole-steps with third-steps but keep the half-steps.
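The cent arithmetic used above (50 cents = a quarter tone, 100 cents = an equal-tempered semitone) reduces to the ratio 2^(cents/1200). A minimal sketch, with the 440 Hz reference and the note choice being illustrative assumptions only:

```python
# Frequency after detuning by a number of cents: f * 2**(cents / 1200).
# A quarter tone is 50 cents; an equal-tempered semitone is 100 cents.

def detune(freq_hz, cents):
    """Return freq_hz shifted by the given number of cents."""
    return freq_hz * 2 ** (cents / 1200)

a4 = 440.0                      # reference pitch (illustrative)
b4 = detune(a4, 200)            # B4: two semitones above A4, ~493.88 Hz
b4_half_flat = detune(b4, -50)  # a quarter tone below B4, ~479.82 Hz

print(round(b4, 2), round(b4_half_flat, 2))
```

Generating those two frequencies as sine tones and toggling between them is one way to drill the interval until the quarter tone stops collapsing into B or B-flat.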
These links might break... not sure if changing user name does that... will update in a comment if so.
| common-pile/stackexchange_filtered |
Can't start MediaRecorder in JavaScript, getting unknown error
I am using MediaRecorder (to use with ffmpeg in Electron). When stopping and then starting the recording again, I am getting
this error.
I am starting the recording with timeslice = 0.
const sourcesMediaStream = new MediaStream()
navigator.mediaDevices.getUserMedia({ video: false, audio: { deviceId: { exact: "default" } } }).then(stream => {
sourcesMediaStream.addTrack(stream.getAudioTracks()[0])
})
videoStream = canvas.captureStream(15).getVideoTracks()[0]
const recorder = new MediaRecorder(sourcesMediaStream, {
audioBitsPerSecond: 128000,
videoBitsPerSecond: 2500000,
})
btn.addEventListener("click", () => {
if (!streaming) {
recorder.start(0)
} else {
recorder.stop()
}
})
recorder.ondataavailable = async function (e) {
ipcRenderer.send("ytStreamBuffer", new Uint8Array(await e.data.arrayBuffer()))
}
Do share more of your code.
You construct and start the recorder immediately, but sourcesMediaStream isn't fully set up until the getUserMedia promise resolves. You need to wait until your .then block to set up the recorder. Not sure if more is going on here, since per the other comment there seems to be some code missing.
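The ordering problem that comment describes is visible without any browser APIs; in this sketch (all names hypothetical) a fake async call stands in for getUserMedia:

```javascript
// Code placed after a .then(...) call runs before the .then callback does,
// so in the question the recorder is built from a stream with no tracks yet.
const order = [];

function fakeGetUserMedia() {
  // Stands in for navigator.mediaDevices.getUserMedia(...)
  return Promise.resolve("stream").then(() => {
    order.push("track added");          // real code adds the audio track here
  });
}

fakeGetUserMedia();
order.push("recorder constructed");     // real code builds MediaRecorder here

// The callback only runs after the synchronous code has finished:
setTimeout(() => console.log(order.join(" -> ")), 0);
// -> "recorder constructed -> track added"  (the wrong order)
```

Moving the recorder construction (and start) inside the .then callback, or awaiting the promise, removes that race.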
@kdau the problem is not with starting the recorder the first time but with the second time
Apparently I had an old Chrome version, so it was the wrong version. Thanks to everyone who helped.
| common-pile/stackexchange_filtered |
Variable zoom on 3D rayshader rgl plot
How can I enable scroll wheel zoom on rgl rayshader plots?
Older (2019) rayshader plots seemed to have this functionality by default
(I can still open the old html files and they retain the functionality.)
However, plots generated today (2022) do not have scroll wheel zooming available.
Example from rayshader vignette
library(rayshader)
#Here, I load a map with the raster package.
loadzip = tempfile()
download.file("https://tylermw.com/data/dem_01.tif.zip", loadzip)
localtif = raster::raster(unzip(loadzip, "dem_01.tif"))
unlink(loadzip)
#And convert it to a matrix:
elmat = raster_to_matrix(localtif)
elmat %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(elmat), color = "desert") %>%
plot_3d(elmat, zscale = 10, fov = 0, theta = 135, zoom = 0.75, phi = 45, windowsize = c(1000, 800))
htmlwidgets::saveWidget(rgl::rglwidget(), "rayshader ex1.html")
If you run a simple plot in rgl and look at the mouse mode, you'll see this:
par3d("mouseMode")
#> none left right middle wheel
#> "none" "trackball" "zoom" "fov" "pull"
If you do the same after running your sample code, you'll see this:
rgl::par3d("mouseMode")
#> none left right middle wheel
#> "none" "polar" "fov" "zoom" "none"
So something in the rayshader package has changed the setting for the mouse wheel. There's a ton of code there so I don't know where this happened (though I'm guessing it was unintentional), but the way to undo it would be to set the mouse wheel mode yourself after running your example:
rgl::par3d(mouseMode = c("none", "polar", "fov", "zoom", "pull"))
You have to do this before the saveWidget call, or you'll save the bad setting.
| common-pile/stackexchange_filtered |
What is a “proper systematic order listing”?
I have a paper under review at a journal, and in it I have included a table of all of the species which exhibit a particular behaviour. Species are grouped by Order, and then family, and then species. The table is in alphabetical order by Order, and then within each order alphabetically by Family, and then within each family alphabetically by species. A reviewer gave very few comments, and the most substantial was as follows:
follow a proper systematic order listing the species
What does this mean? That I order the table to have more basal Orders at the top? That seems faintly ridiculous, but I can’t think of anything else. Or is there some official way to list taxa in tables that isn’t alphabetical?
There are natural and unnatural groupings of species, genera, families, orders etc among each other. Multiple orderings may make equal sense but alphabetical is not an appropriate ordering.
Remember that all of these are just labels for different branches of a tree, but the tree is still there. So if you have species where A and B have a more recent common ancestor with each other than with C, ordering them as A B C or B A C is appropriate but A C B or B C A is not.
I'd interpret the reviewer comment as asking you to use biology and common ancestry to sort rather than the alphabet. It's possible they would prefer you even abandon entirely the concepts of Order, Family, etc, as these are now a bit outdated. But even if you are going to use them, you should sort by the relationships rather than alphabetically.
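As a toy illustration of that constraint (species names hypothetical): a depth-first walk of the phylogeny always lists each clade's members contiguously, so sister taxa such as A and B are never separated by C.

```python
# Toy phylogeny as nested tuples: A and B share a more recent common
# ancestor with each other than either does with C.
tree = (("A", "B"), "C")

def flatten(node):
    """Depth-first traversal: members of every clade come out contiguous."""
    if isinstance(node, tuple):
        for child in node:
            yield from flatten(child)
    else:
        yield node

print(list(flatten(tree)))  # -> ['A', 'B', 'C']; ['B', 'A', 'C'] would also
                            # be valid, but ['A', 'C', 'B'] never occurs
```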
So if you have species where A and B have a more recent common ancestor with each other than with C, ordering them as A B C or B A C is appropriate but A C B or B C A is not. This makes no sense without using parentheses to denote the relationships! See my answer.
@Cheery It does, in fact, make sense. You should not place C between two more closely related species/groups. That's it. No parentheses needed.
| common-pile/stackexchange_filtered |
User location won't come up in simulator in Xcode
I'm new to app development and I'm trying to get the user location to come up in the simulator, but I keep getting the "Use of unresolved identifier" error. I have looked at other questions, which are very specific to their own projects, and have tried to approach my own app in a similar way but to no avail. Any help, por favor? Here's my code, and a link to a screenshot of Xcode.
import UIKit
import MapKit
import CoreLocation
class SecondViewController: UIViewController, CLLocationManagerDelegate {
//Map
@IBOutlet weak var map: MKMapView!
let manager = CLLocationManager()
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation])
{ let location = locations[0]
let span:MKCoordinateSpan = MKCoordinateSpanMake(0.01, 0.01)
let myLocation:CLLocationCoordinate2D = CLLocationCoordinate2DMake(location.coordinate.latitude, location.coordinate.longitude)
let region:MKCoordinateRegion = MKCoordinateRegionMake(myLocation, span)
self.map.showsUserLocation = true
}
override func viewDidLoad()
{
super.viewDidLoad()
manager.delegate = self
manager.desiredAccuracy = kCLLocationAccuracyBest
manager.requestWhenInUseAuthorization()
manager.startUpdatingLocation()
//This is where I get an error
map.setRegion(region, animated: true)
    }
}
Error: Use of unresolved identifier
You need to turn on user location in Xcode.
Have you set a custom location in the simulator?
If not: Debug -> Location -> Custom Location.
Set one and try again.
The region is set before the device has obtained a location. You need to set it after the latitude/longitude have been delivered by the device.
Your let region:MKCoordinateRegion is a local identifier inside your delegate method:
func locationManager(_:didUpdateLocations) {...}
This is why you are getting Use of unresolved identifier. Make this identifier accessible throughout the class, the error will be gone. Like:
class SecondViewController: UIViewController, CLLocationManagerDelegate {
//Map
@IBOutlet weak var map: MKMapView!
var region = MKCoordinateRegion()
.......
.......
}
N.B.: this alone won't let you see anything on the map. The best way to see any output from your MapView is to put map.setRegion(region, animated: true) inside your delegate method:
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
let location = locations.first
let span = MKCoordinateSpanMake(0.01, 0.01)
let myLocation = CLLocationCoordinate2DMake((location?.coordinate.latitude)!, (location?.coordinate.longitude)!)
region = MKCoordinateRegionMake(myLocation, span)
map.setRegion(region, animated: true)
}
Declare the MKCoordinateRegion instance as a property, before the viewDidLoad() method.
You can follow the code below for reference.
class Test: UIViewController {
var region = MKCoordinateRegion()
override func viewDidLoad() {
super.viewDidLoad()
region = MKCoordinateRegionMake(myLocation, span)
}
}
You are getting the error because the region variable is declared locally inside the locationManager(_:didUpdateLocations:) method, so the code inside viewDidLoad() has no access to it:
//This is where I get an error
map.setRegion(region, animated: true)
Solution :
Make the region variable a property of the class so that it can be accessed from anywhere you want.
Instead of this line inside func locationManager
let region:MKCoordinateRegion = MKCoordinateRegionMake(myLocation,span)
Use
self.region = MKCoordinateRegionMake(myLocation,span)
Here is the complete code that you can use.
import UIKit
import MapKit
import CoreLocation
class SecondViewController: UIViewController, CLLocationManagerDelegate {
//Map
@IBOutlet weak var map: MKMapView!
//global variable
var region:MKCoordinateRegion = MKCoordinateRegion()
let manager = CLLocationManager()
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation])
{ let location = locations[0]
let span:MKCoordinateSpan = MKCoordinateSpanMake(0.01, 0.01)
let myLocation:CLLocationCoordinate2D = CLLocationCoordinate2DMake(location.coordinate.latitude, location.coordinate.longitude)
self.region = MKCoordinateRegionMake(myLocation, span)
self.map.showsUserLocation = true
}
override func viewDidLoad()
{
super.viewDidLoad()
manager.delegate = self
manager.desiredAccuracy = kCLLocationAccuracyBest
manager.requestWhenInUseAuthorization()
manager.startUpdatingLocation()
//region is now a class property, so this line no longer errors
map.setRegion(region, animated: true)
}
//
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
}
More importantly, if you are using the simulator you have to enable location simulation.
How is this answer different from my answer?
Linq condition in list to get children elements of specific element
Can you help me with a lambda expression equivalent for this:
var q = from c in result
from s in result
where (c.parent == s.title)
where s.title=="myvalue"
select c;
Unsure how to do a lambda when two tables are involved, even if it's sort of a self join
I've tried this:
var newl = res.Where(yy => yy.title == "x").SelectMany(vv => vv.parent = y.title).ToList(); //wrong syntax
where "my value" is what the parent has in the title field, but I want to compute "my value" on the same line of code. There can actually be multiple possible parents (say, 3 elements qualify as a parent), so I want to get 3 different results. I know I can use Skip(x).First() or something like that to take the parent of rank x, where x is a number.
Can you show your code to see what have you tried so far?
will cancel down vote after the question is explained in more detail
Did you meant something like myList.Where(x=>x.Field1 == x.Field2)?
For better readability you should write a LINQ query with a self join rather than a lambda expression in this case.
I still don't get what those fields are. A property of a class? A string? The parent of an XML node? That's why they want your code.
Maybe you trying to tell something like this:
myList.Where(x=>x.Field1 == x.Field2 && x.Field2 == x.Field3);
Thank you, GBursali.
I tried something like this: var nl = result.Where(v => v.parent == result.Where(vv => vv.title == "parent").Skip(0).Take(1)); // there are several parents; parents are identified by the keyword "parent" in the title field. But the syntax here is wrong. (This should give me the kids of the first parent; then I would play with Skip and Take to take the kids of the second, and so on.)
You are introducing a cross join and without any sample data I can't really understand your purpose. Anyway in lambda syntax it would be like:
var q = result.SelectMany(r => result
.Where(re => re.title=="myvalue" && re.title == r.parent),
(c,s) => c);
EDIT: I based my answer on your code. Reading your explanations on what you are trying to do, it looks like your current Linq code is not right? Maybe you meant something like this:
List<Dummy> result = new List<Dummy> {
new Dummy { Id=1, parent=0, title="myvalue"},
new Dummy { Id=2, parent=0, title="myvalue"},
new Dummy { Id=3, parent=0, title="myvalue"},
new Dummy { Id=4, parent=1, title="p4"},
new Dummy { Id=5, parent=1, title="p5"},
new Dummy { Id=6, parent=2, title="p6"},
new Dummy { Id=7, parent=3, title="myvalue"},
new Dummy { Id=8, parent=0, title="value"},
new Dummy { Id=9, parent=0, title="myvalue"},
};
var q = from c in result
from s in result
where (c.parent == s.Id && s.title == "myvalue")
select c;
result.SelectMany(p => result.Where(c => c.parent == p.Id && p.title=="myvalue"),
(prnt,child) => child);
This is the way with self join.
result.Join(result, r1 => r1.parent, r2 => r2.title, (r1, r2) => new { r1, r2 })
      .Where(m => m.r2.title == "myvalue")
      .Select(d => d.r1);
How to create an anti-diagonal identity matrix (where the diagonal is flipped left to right) in numpy
How can I create an anti-diagonal matrix in numpy? I can surely do it manually, but I'm curious whether there is a function for it.
I am looking for a Matrix with the ones going from the bottom left to the upper right and zeros everywhere else.
@jpp Thank you, but ones should be going from bottom left to upper right and zeros everywhere else
Then np.eye(5)[::-1] ? Not sure you can get much better than this.
Use np.eye(n)[::-1] which will produce:
array([[ 0., 0., 0., 0., 1.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 0., 0., 0.],
[ 1., 0., 0., 0., 0.]])
for n=5
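As a quick sanity check (my own sketch, not part of the original answer): reversing the rows of np.eye(n) with [::-1] and reversing its columns with np.fliplr() produce the same array, because the identity matrix is symmetric:

```python
import numpy as np

# Both spellings build the same "flipped identity": [::-1] reverses the
# rows of eye(n), fliplr() reverses its columns; since eye(n) is symmetric
# the two results are identical.
n = 5
a = np.eye(n)[::-1]
b = np.fliplr(np.eye(n))
same = np.array_equal(a, b)
```

Note that both [::-1] and np.fliplr() return views, so neither copies the underlying data.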
One way is to flip the matrix, calculate the diagonal and then flip it once again.
The np.diag() function in numpy either extracts the diagonal from a matrix, or builds a diagonal matrix from an array. You can use it twice to get the diagonal matrix.
So you would have something like this:
import numpy as np
a = np.arange(25).reshape(5,5)
>>> a
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
b = np.fliplr(np.diag(np.diag(np.fliplr(a))))
>>> b
[[ 0 0 0 0 4]
[ 0 0 0 8 0]
[ 0 0 12 0 0]
[ 0 16 0 0 0]
[20 0 0 0 0]]
I'm not sure how efficient doing all this will be though.
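Regarding efficiency, one alternative sketch (my own addition, assuming a square input matrix): instead of flipping twice, you can index the anti-diagonal directly with an integer index array, which avoids building the intermediate flipped copies:

```python
import numpy as np

a = np.arange(25).reshape(5, 5)
n = a.shape[0]  # assumes a square matrix

b = np.zeros_like(a)
idx = np.arange(n)
# a[idx, n - 1 - idx] picks the anti-diagonal (top-right to bottom-left)
b[idx, n - 1 - idx] = a[idx, n - 1 - idx]
```

For the 5x5 a above this reproduces the same b as the double-flip version.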
This makes an anti diagonal matrix, not a flipped version of the identity matrix.
If you wanted a flipped version of the identity matrix, you could simply call np.fliplr() on the output of np.eye(n). For example:
>>> np.fliplr(np.eye(5))
array([[ 0., 0., 0., 0., 1.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 1., 0., 0., 0.],
[ 1., 0., 0., 0., 0.]])
I'll leave this up for people who may need an anti-diagonal matrix in numpy, but @jpp answered the question that is actually being asked
You could simply use np.fliplr(np.eye(n)) to answer the question.
@pault well that's a flipped version of the identity matrix. The original question, before the OP edited it, was how do you make an anti-diagonal matrix. An anti-diagonal matrix is not guaranteed to be ones along the diagonal, so I left mine up for posterity's sake.
@pault Well it is an anti-diagonal matrix, but it's simply a subset. The flipped identity matrix is the anti-diagonal version of the identity matrix, whereas the solution I initially provided works for any matrix, whether or not it is already a diagonal matrix.
Visitor returns first item of a list as null
I want to implement a Visitor for my class MyList. The list itself holds elements of the type MyEntry. Those entries hold a generic value and a reference to the next entry in the list.
public class MyEntry<E> implements Visitable {
MyEntry<E> next;
E o;
MyEntry() {
this(null, null);
}
MyEntry(E o) {
this(o, null);
}
MyEntry(E o, MyEntry<E> e) {
this.o = o;
this.next = e;
}
public void accept(Visitor visitor) {
}
}
The List class
public class MyList<E> implements Visitable {
private MyEntry<E> begin;
private MyEntry<E> pos;
public MyList() {
pos = begin = new MyEntry<E>();
}
public void add(E x) {
MyEntry<E> newone = new MyEntry<E>(x, pos.next);
pos.next = newone;
}
public void advance() {
if (endpos()) {
throw new NoSuchElementException("Already at the end of this List");
}
pos = pos.next;
}
public void delete() {
if (endpos()) {
throw new NoSuchElementException("Already at the end of this List");
}
pos.next = pos.next.next;
}
public E elem() {
if (endpos()) {
throw new NoSuchElementException("Already at the end of this List");
}
return pos.next.o;
}
public boolean empty() {
return begin.next == null;
}
public boolean endpos() {
return pos.next == null;
}
public void reset() {
pos = begin;
}
@Override
public void accept(Visitor visitor) {
begin = pos;
while(pos.next != null && visitor.visit(pos.o)) {
//Move one Item forward
pos = pos.next;
}
    }
}
I have implemented the Visitor interface to my ListVisitor class
public class ListVisitor implements Visitor {
public ListVisitor() {
}
@Override
public boolean visit(Object o) {
return false;
}
}
The Interface Visitor that will be implemented in every new Visitor that i want to create, for now i only have the ListVisitor
public interface Visitor<E> {
boolean visit(Object o);
}
And the Interface Visitable, that is implemented in every class i want to visit, in this case the MyList class and the MyEntry class.
public interface Visitable {
public void accept(Visitor visitor);
}
To test the implementation of the Visitor pattern I made a test class that creates a new MyList, puts some Strings in it, and then creates a new Visitor that visits the list.
import org.junit.Test;
public class MyListTest {
@Test
public void MyListTest() {
MyList s = new MyList();
s.add("Hello");
s.add("World");
s.add("!");
Visitor v = new Visitor() {
@Override
public boolean visit(Object o) {
System.out.println(o);
return true;
}
};
s.accept(v);
}
}
Now when I run MyListTest the output is:
null
!
World
My question now is: why does the first element that the Visitor visits hold a null reference? When I add more items to my list before creating a Visitor, the output always extends accordingly, except for the first item that was inserted into the list — the first visited value is always null.
what is the implementation of increaseModCount?
That was an old Implementation of an Fail-Fast Iterator, its not needed anymore
The immediate cause of the null is visible in your MyList code: the constructor creates a dummy head entry (new MyEntry<E>(), whose o is null), and accept() starts by visiting pos.o at that dummy, so the first visited value is always null. The loop condition pos.next != null also stops before visiting the last entry (which is why the first item you inserted, sitting at the end of the list, never appears), and begin = pos; at the top of accept() should almost certainly be pos = begin; instead. More generally, the Visitor pattern will not change the current code structure. It is used to add new functionality to existing code. Classes which hold the new behaviors are commonly known as Visitors.
The classes and objects participating in this pattern are:
Visitor: It declares a Visit operation for each class of ConcreteElement in the object structure. The operation's name and
signature identify the class.
ConcreteVisitor: It implements each operation declared by the Visitor.
Element: It defines an Accept operation that takes a visitor as an argument.
ConcreteElement: It implements an Accept operation that takes a visitor as an argument.
ObjectStructure: It holds all the elements of the data structure as a collection, list of something which can be enumerated and used by
the visitors.
public abstract class Food {
    private final String name;
    private double price;
    private final int count;

    public Food(String name, double price, int count) {
        this.name = name;
        this.price = price;
        this.count = count;
    }

    public String getName() { return name; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
    public int getCount() { return count; }

    public abstract void accept(Visitor visitor);
}

public class Pizza extends Food {
    public Pizza(String name, double price, int count) {
        super(name, price, count);
    }

    @Override
    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}

public class Pasta extends Food {
    public Pasta(String name, double price, int count) {
        super(name, price, count);
    }

    @Override
    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}

public abstract class Visitor {
    public abstract void visit(Pasta pasta);
    public abstract void visit(Pizza pizza);
}

public class DiscountVisitor extends Visitor {
    @Override
    public void visit(Pasta pasta) {
        double totalPrice = pasta.getPrice() * pasta.getCount();
        // use doubles here: integer division such as (10 / 100) would be 0
        double discount = 10.0;
        pasta.setPrice(totalPrice - totalPrice * discount / 100.0);
    }

    @Override
    public void visit(Pizza pizza) {
        double discount;
        if (pizza.getCount() < 5) {
            discount = 10.0;
        } else if (pizza.getCount() < 20) {
            discount = 20.0;
        } else {
            discount = 30.0;
        }
        double totalPrice = pizza.getPrice() * pizza.getCount();
        pizza.setPrice(totalPrice - totalPrice * discount / 100.0);
    }
}

import java.util.ArrayList;

// OrderList plays the ObjectStructure role: it holds the elements and
// passes the visitor to each of them.
public class OrderList extends ArrayList<Food> {
    public void attach(Food element) { add(element); }
    public void detach(Food element) { remove(element); }

    public void accept(Visitor visitor) {
        this.forEach(x -> x.accept(visitor));
    }

    public void printBill() {
        this.forEach(this::print);
    }

    private void print(Food food) {
        System.out.printf("FoodName: %s, Price: %.2f, Count: %d%n",
                food.getName(), food.getPrice(), food.getCount());
    }
}

OrderList orders = new OrderList();
orders.add(new Pizza("pizza", 45000, 2));
orders.add(new Pasta("pasta", 30000, 1));
DiscountVisitor visitor = new DiscountVisitor();
orders.accept(visitor);
orders.printBill();
Question with several subquestions
Suppose one question posting contains several questions. If I can answer one question, but cannot answer all of them, is it OK to post my answer as an answer, and not just as a comment?