Duplicate symbol '_hasListeners' react native
I'm currently working on a project that requires both BLE and iWatch integration but for some reason when I install libraries to do both, I get this error
duplicate symbol '_hasListeners' in:
/Users/wilcox/Library/Developer/Xcode/DerivedData/project-id/Build/Products/Debug-iphonesimulator/RNAppleHealthKit/libRNAppleHealthKit.a(RCTAppleHealthKit.o)
/Users/wilcox/Library/Developer/Xcode/DerivedData/project-id/Build/Products/Debug-iphonesimulator/react-native-ble-manager/libreact-native-ble-manager.a(BleManager.o)
I'm using the libraries https://github.com/innoveit/react-native-ble-manager and https://github.com/agencyenterprise/react-native-health
I've tried updating these libraries to the newest versions, cleaning the build folder and deleting derived data without any success. Is there anything I can do except hope the devs fix it?
Did you manage to fix this?
In Xcode, find Pods > Development Pods > RNAppleHealthKit > RCTAppleHealthKit.m
And change:
bool hasListeners;
to:
static bool hasListeners;
Then rebuild the project; that will fix it.
How to set Acks=all in properties file spring boot
I want to set acks=all property for my producer in my spring cloud stream kafka application.
I had tried like this :
spring.cloud.stream.kafka.binder.requiredAcks=all
and
spring.cloud.stream.kafka.streams.binder.configuration=all
and
spring.cloud.stream.kafka.streams.bindings.<channel>.producer.configuration.requiredAcks=all
Unfortunately nothing works for me.
Can you please explain how to set these kinds of properties at the application level, or for a specific producer/consumer?
Just to be sure, are you using the Kafka binder or Kafka Streams binder ?
The configuration below is only used by the Kafka binder (not the Kafka Streams one). It is used to set the acks property of a producer instance.
spring.cloud.stream.kafka.binder.requiredAcks
To configure a Kafka Streams instance, properties must be prefixed with spring.cloud.stream.kafka.streams.binder (see the Spring Cloud Stream configuration reference).
Within Kafka Streams, producer properties can be overridden by adding the "producer." prefix (see Configuring a Streams Application). So to configure producer acks you should define the following property:
spring.cloud.stream.kafka.streams.binder.configuration.producer.acks=all
Note, that if you are building a stateful Kafka Streams application it's highly recommended to enable the exactly_once semantic.
This semantic can be configured with :
spring.cloud.stream.kafka.streams.binder.configuration.processing.guarantee=exactly_once
Thanks for the reply. I took the example from a demo by Josh Long. Here is the link to the properties file: https://github.com/joshlong/spring-cloud-stream-kafka-streams/blob/master/analytics/src/main/resources/application.properties . Here we have two producers and I want to customize properties for each of them. How can I add any required configuration for a specific producer or consumer, or at the application level?
spring.cloud.stream.kafka.binder.configuration.acks: all
Can I transit through Germany with an unused Schengen visa issued by Spain?
I am an Indian citizen, traveling to Mexico via Frankfurt (from Mumbai) with a multiple-entry Spanish Schengen visa. (The return journey would be Mexico-Munich-Mumbai.)
As per official Mexican Visa authorities (link doesn't work anymore, but here's an unofficial copy):
...Holders of any valid U.S.A., Canada, United Kingdom and Schengen
Visas (any nationality) DO NOT require a visa to enter Mexico on
tourist, business and transit purposes only....
And that was confirmed to me by the Mexican embassy also.
I have applied for and gotten the Schengen visa, for the first time.
But my query is:
I am not sure, since I have never entered a Schengen country: would the German authorities consider Germany my first point of entry and deny me entry, given that my Schengen visa was issued by the Spanish authorities and, on top of that, I am travelling to Mexico?
I have edited your post to improve clarity, please review the edit and make sure it is true to your intentions. If it's not, roll it back or edit further.
As an Indian national you would ordinarily need an Airport Transit Visa in order to make an airside transit at a German airport. However, holding a valid type C visa from any Schengen state exempts you from this requirement.
See the Schengen Visa Code, article 3(5)(a).
Whether you have the right paperwork for the transit will be checked by the airline rather than the German authorities. The airline is not competent to judge whether the reasons for issuing the visa still apply (which is what could create a problem at the border if you tried to enter the Schengen area on a visa that clearly doesn't match your stated plans); they merely check that you have a visa of the right kind.
And even so, logically the exemption from airport transit visas that comes with a type C visa will practically always be something you use on a different trip than the one your visa was issued for -- because on the trip your visa was issued for you won't need to transit anyway but to enter the Schengen area. So you're not using the exemption in a way it wasn't intended to be used.
Thank you for the response. Just need to clarify one thing. You have mentioned an unused Schengen visa issued by Spain. I would like to add: it is an unused Schengen visa issued for the first time! I have never been issued a Schengen visa before this. Also, on my return journey from Mexico (Mexico-Munich-Mumbai), can I enter Germany and stay there? Would that create a problem?
@lizonceramicexport: It shouldn't matter that the visa is unused or whether you have had such visas before -- the airline doesn't know about that and have no reason to care.
@lizonceramicexport: As for making a trip into Germany on an unrelated (and unused-for-its-original-purpose) visa, it's a bit of a grey area. See Relaxed's answer here for discussion.
Zend setRawHeader
I have IndexController. I need to set raw header in indexAction.
I try to make
function indexAction(){
$this->getResponse()->setRawHeader('HTTP/1.1 404 Not Found');
}
But in Google Chrome I see status 200 OK.
How do I set a raw header?
To set a 404, use:
$this->getResponse()->setHttpResponseCode(404)
->setRawHeader('HTTP/1.1 404 Not Found'); // optional
If you don't explicitly set an HTTP response code via setHttpResponseCode, ZF will automatically send a 200 response. Once it has sent all the headers it checks whether a response code was set, and if not, sends a 200 regardless of your rawHeader.
I modified your code and it works: $this->getResponse()->setHttpResponseCode(405)->setRawHeader('HTTP/1.1 405 Method Not Allowed')->sendHeaders(); If you do not call sendHeaders(), the headers are not set. Thanks for the help.
Connecting to SQL Server LocalDB in Delphi
I have a .mdf database file, and I want to connect to this file with ADOConnection using SQL Server LocalDB as the provider.
My connection string looks like this :
Data Source=(localdb)\v11.0;Integrated Security=SSPI;AttachDbFileName="MyMDFFileAddress";
But when I try to connect, this error is shown:
An attempt to attach an auto-named database for file "MDF File"
failed. A database with the same name exists, or specified file
I have tried many ways, but the error above is always shown!
I have installed SQLLocalDB and SQL Server Native Client 11.0.
On my own machine I can connect to the instance I created on LocalDB and to my database, but when I try to connect to this file on another machine, using the default instance and AttachDbFileName, that error is shown.
I copied the .mdf file to the default instance folder of LocalDB and tried to connect, but the same error appeared.
I searched a lot but found no correct answer!
I'm using Delphi XE 6
Did you try this?
Data Source=(localdb)\v11.0;Integrated Security=True;AttachDbFileName=|DataDirectory|\"MyMDFFileAddress.mdf";Initial Catalog=YourDataBaseName;providerName="System.Data.SqlClient"
Is there a database instance (v11.0) on the other machine?
Open a command prompt and type this to verify:
sqllocaldb info
How Android renders layouts
I am wondering: in Android, when you call setContentView(R.layout.some_layout), one thing to note from the docs:
This view is placed directly into the activity's view hierarchy
in what way does Android render all the views in R.layout.some_layout?
How does Android calculate all the dimensions of the views?
I am asking because I know it is common practice to wait until all views are rendered before trying to get the desired dimensions of views.
Question popped up during a small discussion under this answer, when user @AndroidDev stated the following:
Android knows already BEFORE it creates your view the dimension of the
area to create the view
And I started thinking that it actually might be true. Either Android already knows all the measurements in advance, or it runs a recursive measurement function after each and every new Layout or UI component is appended to the Activity's root view, which updates some sort of temporary pointers for where to insert the next view (that is how I used to imagine it).
have you ever looked at hierarchyviewer, which resides in the \android-sdk\tools directory?
@ChintanRathod no, doesn't it show the hierarchy only after the views are created?
Isn't my answer helpful to you?
@ChintanRathod let me watch the video and read all the documents, and then I will know whether your answer was helpful to me or not
@ChintanRathod but it seems legit :)
The Hierarchy Viewer is a visual tool that can be used to inspect your application user interfaces in ways that allow you to identify and improve your layout designs.
Check articles here and here.
Developer's site has provided Optimizing Your UI
Edit
Read How Android Draws Views to learn how Android draws views.
Here is video by Google I/O 2011: Accelerated Android Rendering.
Hierarchy Viewer does provide lots of information, but it does not explain how Android renders views.
For example, in View Properties in Hierarchy Viewer, why does mMeasuredHeight have the value 53 (in my case)? Where does this value come from? How is this value measured?
Matching biblatex versions on two machines
I have a LaTeX document that I have to compile on two different computers throughout the day. On one of them I have Arch Linux and on the other I have Linux Mint, so the packages are different since Arch is much more up-to-date.
The problem comes when doing the references. I was using biblatex and biber, and I wasn't able to compile the document on both machines, apparently because the versions were different. I still have to use biblatex, but I changed the backend to bibtex to make things easier.
What I have:
Arch:
Biblatex v3.8
Bibtex v0.99d (TeX Live 2017/Arch Linux)
Linux Mint:
Biblatex v3.4
Bibtex v0.99d (TeX Live 2015/Debian)
My main computer is Arch, so I'd like to keep it as it is. When I compile with xelatex on Mint I get this error:
! Use of \blx@bbl@verbadd@i doesn't match its definition.
l.378 \verb
And then many other undefined control sequences that are apparently generated because of the versions being different. So here are my questions.
How can I make it work on both computers? (I'm trying to avoid upgrading biblatex on Mint because that would mean upgrading Perl.)
Why does biblatex care what version it is? The versions are not so far apart.
Cheers
If you remove all the aux files (i.e., .aux and all the biblatex related aux files) on the Mint machine and recompile do you still get the error?
Have a look at: https://tex.stackexchange.com/q/311426/35864 this was a temporary bug in biblatex that has been resolved. If you delete the temporary files before you compile on the other machine there should be no problem using Biber (if the versions of biblatex and Biber on the same machine are compatible, of course)
@AlanMunn No, but then I'd have to do that every time I change machines, which is just too much.
@TomCho Seriously? Deleting those files is a single keystroke for me and I assume you're not switching machines many times a day. This hardly seems like a big deal especially since you don't want to upgrade
@AlanMunn I change machines many times a day (not physically or even manually), and this is a script that has to compile fast. So it's not so much deleting everything that is the problem, but the longer compilation time that comes from having to run latex, bibtex, latex, latex at every compilation.
@TomCho Well then you're between a rock and a hard place.
Well, this question has some time already, but nobody mentioned one very natural alternative. That would be for you to install "vanilla" TeX Live on both your computers, which would be more up-to-date than your current Arch install, and they would both be in equal standing. (TL 2018 should be available soon, if you like the idea, it's better to wait a tad longer).
I'm afraid there is no way to make things work using the same set of auxiliary files on both machines. If you are OK with deleting the .aux, .bcf, .bbl, ... files and starting afresh on one of the machines, things should work, though.
biblatex cares about the format of the .bbl files (and similarly about that of the .bcf), because that is the medium biblatex and Biber or BibTeX use to communicate. Biber or BibTeX reads the .bib file and prepares the read data in a format that biblatex can consume easily in the .bbl file. So naturally the version of your .bbl files must be compatible with the version of biblatex that should read them. The .bbl version does not increase with every release of biblatex. It is only stepped up if necessary, i.e. if internal changes require it. Similarly for the .bcf file which is written by biblatex to pass commands to Biber.
In your case there is about a year and a half between v3.4 and v3.8; the changes for BibTeX's .bbls can be estimated from this diff of biblatex.bst. There are a few changes, but let me pick out the most important:
extrayear was renamed to extradate. This became necessary because the field has a wider scope now.
refcontext has taken over more responsibilities. So the purely internal .bbl macro \sortlist became \datalist.
Initials of name parts are now indicated with an appended i and not _i. Use of _ required category code changes for _ that caused troubles with other packages.
All of these changes mean that a file for biblatex 3.4 cannot work properly with biblatex 3.8 and vice versa.
While biblatex aims at keeping the user interface and also the interface for style developers stable (the former more so than the latter), the requirement for stability is not felt as much for the temporary files as they can be easily deleted and recreated anyway. Development of new features and fixes for bugs can always require changes to the .bbl file structure.
Excel: how to remove leading characters of a string with a function?
I have a text column in Excel full of hexadecimal values like this:
0000000000000c10
00000000000036f0
00000000000274da
00000000000379e0
Which function can help me strip the leading 0's up until the first non-0 character?
Desired output:
c10
36f0
274da
379e0
This would help me feed the HEX2DEC function.
Ok, found it.
Assuming the values are in a table, and its column is named "Column1":
=REPLACE([@Column1],1,FIND(LEFT(SUBSTITUTE([@Column1],"0","")),[@Column1])-1,"")
This will produce the desired output as shown in the question.
The core of the solution is handled by this formula, which counts the 0's before the text value we're looking for.
=FIND(LEFT(SUBSTITUTE([@Column1],"0","")),[@Column1])-1
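For comparison only (this is not an Excel formula), the same leading-zero stripping and hex conversion can be sketched in JavaScript; the function name and sample values below are just for illustration:

```javascript
// Strip leading zeros from a hex string, keeping a single "0" for all-zero input.
function stripLeadingZeros(hex) {
  const stripped = hex.replace(/^0+/, "");
  return stripped === "" ? "0" : stripped;
}

console.log(stripLeadingZeros("0000000000000c10")); // "c10"
console.log(parseInt("00000000000036f0", 16));      // 14064, what HEX2DEC("36f0") returns
```

One edge case worth noting: this sketch collapses an all-zero input to a single "0", whereas the FIND-based formula would behave differently for such input.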
ggplot spatial data and ggsave unique id's in a for loop
I have a dataset with watersheds across 3 provinces. Each watershed has a corresponding shapefile and a score from 0 to 1 for ecosystem disruption under 3 rcp scenarios (climate projection models).
Here is the head of my dataframe:
ws rcp ecodisrup geometry
ANNAPOLIS_sp rcp26 0.09090909 MULTIPOLYGON (((-64.86239 4...
BARRINGTON_C rcp26 0.00000000 MULTIPOLYGON (((-65.4722 43...
CHETICAMP_RI rcp26 0.09090909 MULTIPOLYGON (((-60.59559 4...
CLAM_HRB_ST. rcp26 0.27272727 MULTIPOLYGON (((-61.38909 4...
COUNTRY_HARB rcp26 0.09090909 MULTIPOLYGON (((-61.96444 4...
EAST_INDIAN_ rcp26 0.09090909 MULTIPOLYGON (((-63.94016 4...
The watersheds and their shapefiles repeat for each rcp (26, 45, 85), eg:
ws rcp ecodisrup geometry
ANNAPOLIS_sp rcp26 0.09090909 MULTIPOLYGON (((-64.86239 4...
... rcp26 0.00000000 MULTIPOLYGON (((-65.4722 43...
46th ws rcp26 0.09090909 MULTIPOLYGON (((-60.59559 4...
ANNAPOLIS_sp rcp45 0.27272727 MULTIPOLYGON (((-64.86239 4...
... rcp45 0.09090909 MULTIPOLYGON (((-65.4722 43...
46th ws rcp45 0.09090909 MULTIPOLYGON (((-60.59559 4...
ANNAPOLIS_sp rcp85 0.09090909 MULTIPOLYGON (((-64.86239 4...
... rcp85 0.00000000 MULTIPOLYGON (((-65.4722 43...
46th ws rcp85 0.09090909 MULTIPOLYGON (((-60.59559 4...
I'd like to map the watersheds, coloured by ecodisrup, under each of the three rcp scenarios by writing a for loop. The problem I run into is including a ggsave function in the for loop, and also the loop I'm writing is just producing the same map three times.
Later on in my analysis I'll need the same type of for loop for mapping, but I'll add in a species column (produce a map coloured by ecodisrup for each species across all watersheds, under 3 rcp scenarios).
Here is the code I tried, expecting an output of three maps (one for each rcp). Instead I got three maps, but they all had the exact same colouring (not sure if they were all coloured by the last one, rcp85?).
# make empty list to fill in with plots
ecodis_rcp_plots = list()
# define the different rcps
ecodis_rcp = unique(ecodis$rcp)
# begin for loop
for (rcp in ecodis_rcp){
ecodis_rcp_plots[[rcp]] = ggplot(data = ecodis, aes(geometry = geometry)) +
geom_sf(aes(fill = ecodisrup)) +
scale_fill_viridis_c(option = "viridis", limits = c(0, 1))
ggsave(paste("C:/Users/myname/Desktop/EcoDis", rcp, ".png"),ecodis_rcp_plots[[rcp]],width=8,height=8,units="in",dpi=300)
}
Any insight would be greatly appreciated, thank you!
It's the same ggplot(data = ecodis, aes(geometry = geometry)) + geom_sf(aes(fill = ecodisrup)) in every iteration of that loop, with no filtering / subsetting of the data. Same input data >> same output plot.
Oh I see that now, thank you! How would I go about changing the input data on each iteration?
One option is to split the dataset first and then cycle through the list of resulting sf objects. Following is based on a nc dataset from sf package for a more generic example:
library(ggplot2)
library(sf)
#> Linking to GEOS 3.9.3, GDAL 3.5.2, PROJ 8.2.1; sf_use_s2() is TRUE
# prepare some example data
nc <- st_read(system.file("shape/nc.shp", package="sf"), quiet = TRUE)
nc$sample_group <- paste("Group", rep(1:5, each = 20))
nc$area <- scales::rescale(nc$AREA)
nc[,c("NAME", "sample_group", "area")]
#> Simple feature collection with 100 features and 3 fields
#> Geometry type: MULTIPOLYGON
#> Dimension: XY
#> Bounding box: xmin: -84.32385 ymin: 33.88199 xmax: -75.45698 ymax: 36.58965
#> Geodetic CRS: NAD27
#> First 10 features:
#> NAME sample_group area geometry
#> 1 Ashe Group 1 0.36180905 MULTIPOLYGON (((-81.47276 3...
#> 2 Alleghany Group 1 0.09547739 MULTIPOLYGON (((-81.23989 3...
#> 3 Surry Group 1 0.50753769 MULTIPOLYGON (((-80.45634 3...
#> 4 Currituck Group 1 0.14070352 MULTIPOLYGON (((-76.00897 3...
#> 5 Northampton Group 1 0.55778894 MULTIPOLYGON (((-77.21767 3...
#> 6 Hertford Group 1 0.27638191 MULTIPOLYGON (((-76.74506 3...
#> 7 Camden Group 1 0.10050251 MULTIPOLYGON (((-76.00897 3...
#> 8 Gates Group 1 0.24623116 MULTIPOLYGON (((-76.56251 3...
#> 9 Warren Group 1 0.38190955 MULTIPOLYGON (((-78.30876 3...
#> 10 Stokes Group 1 0.41206030 MULTIPOLYGON (((-80.02567 3...
# split by some factor, resulting list is named according to factor levels:
nc_split <- split(nc, ~sample_group)
# create named list of nc_split names for lapply(), this helps us to get
# a named list from lapply() while providing access to sample_group values
nc_groups <- setNames(names(nc_split), names(nc_split))
nc_groups
#> Group 1 Group 2 Group 3 Group 4 Group 5
#> "Group 1" "Group 2" "Group 3" "Group 4" "Group 5"
# cycle though the list of sf objects and generate list of plots
plots <- lapply(nc_groups, \(grp_name){
ggplot(nc_split[[grp_name]], aes(fill = area)) +
geom_sf() +
scale_fill_viridis_c(limits = c(0, 1)) +
ggtitle(grp_name) +
theme_void() +
theme(legend.position = "none")
})
# resulting list of plots, named:
str(plots, max.level = 1)
#> List of 5
#> $ Group 1:List of 9
#> ..- attr(*, "class")= chr [1:2] "gg" "ggplot"
#> $ Group 2:List of 9
#> ..- attr(*, "class")= chr [1:2] "gg" "ggplot"
#> $ Group 3:List of 9
#> ..- attr(*, "class")= chr [1:2] "gg" "ggplot"
#> $ Group 4:List of 9
#> ..- attr(*, "class")= chr [1:2] "gg" "ggplot"
#> $ Group 5:List of 9
#> ..- attr(*, "class")= chr [1:2] "gg" "ggplot"
# visualise all plots:
patchwork::wrap_plots(plots, nrow = 3)
# save all
for (plot_name in names(plots)) {
ggsave(paste0("nc_plot ", plot_name, ".png"),plots[[plot_name]])
}
#> Saving 7 x 5 in image
#> Saving 7 x 5 in image
#> Saving 7 x 5 in image
#> Saving 7 x 5 in image
#> Saving 7 x 5 in image
# check resulting files
fs::dir_info(glob = "nc_plot*")[,1:3]
#> # A tibble: 5 × 3
#> path type size
#> <fs::path> <fct> <fs::bytes>
#> 1 nc_plot Group 1.png file 84.1K
#> 2 nc_plot Group 2.png file 96.7K
#> 3 nc_plot Group 3.png file 96.3K
#> 4 nc_plot Group 4.png file 88.9K
#> 5 nc_plot Group 5.png file 93.9K
Created on 2023-06-16 with reprex v2.0.2
Android Aidl Compile Error: couldn't find import for class
(I know there're multiple questions on stackoverflow and elsewhere (like google group) about adding parcelable for NetworkInfo but this is not about that.)
My work is under $(AOSP_ROOT)/device/ and involves multiple aidl files. one of it is like,
package com.example;
parcelable SomeRequest;
And another aidl is like,
package com.example;
import com.example.SomeRequest;
interface SomeService {
SomeRequest getRequest();
}
And I'll get compile errors like,
device/somedevice/sdk/libs/aidl/com/example/SomeService.aidl:9: couldn't find import for class com.example.SomeRequest
I'm wondering if it is the order of processing the aidl files. My Android.mk looks like this:
LOCAL_SRC_FILES := $(call all-java-files-under, src) $(call all-Iaidl-files-under, aidl)
LOCAL_AIDL_INCLUDES := $(call all-Iaidl-files-under, aidl)
This build error appeared after I moved the aidl files from the src/ folder to the aidl/ folder (for some reason I had to do so). It worked before, but now even after moving them back to the src/ folder it doesn't work anymore. I tried to clean up $(AOSP_ROOT)/out/device/target but it's not helping.
Ideas?
Finally I got it resolved.
If you dig into /build/core/base_rules.mk, you'll find that LOCAL_AIDL_INCLUDES is actually the list of folders to be included during the AIDL compilation phase, in addition to default folders like framework.
$(aidl_java_sources): PRIVATE_AIDL_FLAGS := -b $(addprefix -p,$(aidl_preprocess_import)) -I$(LOCAL_PATH) -I$(LOCAL_PATH)/src $(addprefix -I,$(LOCAL_AIDL_INCLUDES))
In this specific case, what you want is actually,
LOCAL_SRC_FILES := $(call all-java-files-under, src) $(call all-Iaidl-files-under, aidl)
LOCAL_AIDL_INCLUDES := $(LOCAL_PATH)/aidl
What if my *.aidl file is defined in a library and is not in the current local path (like ../LibraryProject/aidl)? How do I fix that?
The all-Iaidl-files-under function returns paths relative to $(LOCAL_PATH). You can specify the library path directly from your root project. That is:
LOCAL_AIDL_INCLUDES := LibraryProject/aidl
@m0skit0, it's referring to e.g., https://android.googlesource.com/platform/build/+/master/core/base_rules.mk
This no longer seems to be valid. Running Android Studio 3.4, there is no base_rules.mk file, and no AIDL include path in any configuration.
It's moved to build/make/core/java.mk in AOSP12+
It still gives the same error. ERROR: lineage-sdk/sdk/src/java/lineageos/app/IProfileManager.aidl: Couldn't find import for class android.app.NotificationGroup
So I looked around - the solution is to put the AIDL files only into their own directory called aidl, which I think some others have suggested.
It turns out that Android Studio looks there, but doesn't quite know what to do about .aidl files put in the src directory.
I don't get it. I just don't get it. AIDL imports worked perfectly in Eclipse. Android Studio is not production ready. This is a major step backwards.
I was having trouble getting the AIDL compiler to find imports as well, and I'm thinking there has to be a better way than modifying base_rules.mk, which the platform manages.
Arg!
I'm using Android Studio 2.1 and did pretty much what you suggested: put the 2 .aidl files in the aidl folder. However, Android Studio prevented me from creating the second .aidl (i.e., SomeRequest.aidl based on the OP). It turns out you need to create this file somewhat by brute force: create an .aidl with a random name and then rename it to the correct name. Otherwise Android Studio is not happy.
I already have the aidl files in a separate aidl/ folder. I even tried a subfolder in src/ and a root folder. Still the same problem.
Ok, so I was struggling for days trying to figure out this same problem. Clean/rebuilds didn't work for me. Eventually, I realized a really simple mistake I was making in Android Studio: when I right-clicked to create a new "AIDL", I wasn't clicking in the same folder as my original model classes.
For example, my model was in a Java package called com.example.model, but I wasn't right-clicking in that folder when I created my AIDL. So when Android Studio generated the AIDL folder for me, it referenced the wrong folder, and I ended up with a compile error. I wish the Android documentation made it clear that we need to right-click in the appropriate folder before generating the AIDL files. I know, it sounds silly, but I hope this answer applies to some of you!
You just need to name the .aidl file the same as the .java file!
As of now, currently accepted answer is no longer an option. No base_rules.mk file in Android Studio 3.4, none that I could find anyway.
To solve this, I took an example from IPackageStatsObserver which uses a parcelable PackageStats.
All imports must be declared in a .aidl file, some kind of .h if we were in C.
Here is the content of the PackageStats.aidl file, reflecting the existence of a class with the same name (on the java side):
package android.content.pm;
parcelable PackageStats;
So I declared all my parcelable in .aidl, for every matching .java I need to use in my aidl interface, and voila, it compiles.
Got to see if it actually works now ;)
Is there a verbose debug.log file for aidl.exe ?
To resolve "couldn't find import for class" for a parcelable class, put the .aidl file and the .java file in the correct package and folder:
// MyObject.aidl
package com.example.androidaidlserver.model;
parcelable MyObject;
// ILocalLogService.aidl
package com.example.androidaidlserver;
import com.example.androidaidlserver.model.MyObject;
interface ILocalLogService {
void writeObject(in MyObject log);
}
// MyObject.java
package com.example.androidaidlserver.model;
import android.os.Parcel;
import android.os.Parcelable;
public class MyObject implements Parcelable {
...
Had no idea I needed to create the package hierarchy for aidl files too. Thanks, man, your screenshot gave me a clue!
Thank you. This was very helpful. But I also needed to update the Gradle wrapper from 7.6.3 to 8.0.1 to build the project successfully.
For anyone who's still encountering this issue as of 2023, just put your AIDL files directly inside the aidl folder, without any package hierarchy, but do not change the package declaration inside the aidl file itself.
Make sure each AIDL file has its counterpart java file, for example, if you have a LicenseValidator.aidl, you have to have a LicenseValidator.java, because they work together.
If nothing else worked, try to get your code to compile WITHOUT the aidl files (that is, without anything depending on them). There is this weird bug in Android Studio Iguana that won't detect aidl files no matter what, until at least one successful build happens...?
How can I change calendar view code from d3 v3 to v5?
I'm having trouble changing the calendar view code (https://github.com/mohans-ca/d3js-heatmap) from d3 v3 to v5.
I changed some parts of the code, like d3.timeFormat and d3.scaleLinear, but there are also some missing pieces.
The code I could not properly convert to v5 is shown below (my current v5 attempt):
var data = d3.nest()
.key(function(d) { return d.Date; })
.rollup(function(d) { return Math.sqrt(d[0].Comparison_Type / Comparison_Type_Max); })
.entries(calenderData);
console.log(data);
On my console it looks like the following:
But in v3 it appears as an object; the v3 code is below:
var data = d3.nest()
.key(function(d) { return d.Date; })
.rollup(function(d) { return Math.sqrt(d[0].Comparison_Type / Comparison_Type_Max); })
.map(calenderData);
and it appears and works as expected in v3:
How can I change my v3 code so that v5 gives me the data as an object?
Is there any special reason to use v5 instead of v6?
Hi, thanks for your reply. I will use the calendar view visualization in my other v5 JS code, but it would be possible to use v6 instead of v3.
Convert your data with this simple routine:
const objData = data.reduce((obj, {key, value}) => ({...obj, [key]: value}), {});
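A minimal runnable sketch of that conversion, using made-up sample data shaped like the output of d3.nest().entries():

```javascript
// Array of {key, value} pairs, as produced by d3.nest().key(...).rollup(...).entries(...)
const data = [
  { key: "2014-01-01", value: 0.3 },
  { key: "2014-01-02", value: 0.5 }
];

// Fold the pairs into a single plain object keyed by date,
// mimicking what .map(calenderData) returned in d3 v3.
const objData = data.reduce((obj, { key, value }) => ({ ...obj, [key]: value }), {});

console.log(objData); // { "2014-01-01": 0.3, "2014-01-02": 0.5 }
```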
Hi, I tried your code and got an error; you can see my edited code below, as you suggested. Where should I change it? Thank you
See my comment on your answer. Also, please note that the usual StackOverflow practice is to reply to answers with comments, not to post other answers - It's confusing. Hope it helps :)
Hi, thank you for the advice :) I edited my code but now the object shows as undefined in my console: var objData=calenderData.reduce((obj,{key,value})=>({...obj,[key]:value}),{});
What do you see in your console?
'Object undefined: undefined __proto__: Object' is all I see, just this undefined object. By the way, I don't know how to insert an image in a comment line, so I pasted the console output.
Well, I think we have a mess here... Let's go back to the beginning. You say var data = d3.nest(... returns an array of 87 {key, value} objects (the first screenshot). Then, you need to convert that array to an object like in the second screenshot. Am I right?
Yes, I have calenderData, which is an array of 87 {key, value} pairs: Date and Comparison_Type (i.e., a value for every Date) for my calendar view. In v3, calenderData works well with d3.nest().rollup().map(); with that chain my calenderData appears as an Object of 87 entries. But when I run this code with v5, d3.nest().rollup().map() doesn't work; I need to get my calenderData in the same Object format as in v3.
So my answer does exactly what you need: convert an array data to an object objData. Or we just don't understand each other and you need the opposite: convert an object to an array... Please check everything again very carefully, print out your data at every step and try to figure out what's going on
Hi, I posted some code in my answer section. You can find my reply there. Thanks
Hi, I solved my problem now. Thanks for your support
Glad to help :) Please mark my answer as correct.
Macro to remove ASCII code
Any idea if there is an Excel macro to remove ASCII codes such as &#148; and so on?
Is there a function like CLEAN (which handles non-printable characters) that can do it?
Please clarify your question. Do you want to remove all #148 characters from a cell, or from the entire worksheet / workbook? Please also share what you have already tried.
Do you mean something like:
Sub kleanup()
Cells.Replace what:=Chr(148), replacement:=""
End Sub
You can search a given cell for the char and replace it with whatever you want.
Dim iIndex As Integer
Dim szValue As String

szValue = Range("A1").Value
iIndex = InStr(szValue, Chr(148))
If iIndex <> 0 Then
    Mid(szValue, iIndex, 1) = " "
End If
Range("A1").Value = szValue
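For comparison, if you ever need the same cleanup outside Excel (say, on an exported text file), an equivalent in Python might look like this; chr(148) matches the VBA Chr(148) above, and the printable filter plays a role similar to CLEAN:

```python
def clean_text(s: str) -> str:
    """Remove chr(148) and any other non-printable control characters,
    keeping tabs and newlines (roughly what Excel's CLEAN does)."""
    cleaned = s.replace(chr(148), "")
    return "".join(ch for ch in cleaned if ch.isprintable() or ch in "\t\n")

print(clean_text("price" + chr(148) + "\x07list"))  # pricelist
```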
Thanks. In fact I would like to apply it to all cells of my Excel file.
Win7 shortcut targets not changing
I have Win7 installed on an HDD (C drive) that has all my stuff on it. Recently I bought an SSD (E drive) and today I have been migrating things over. I changed the registry entries so that C is my default installation drive using this guide. My problem is that some of my desktop shortcuts' targets point to C:.... for applications that are installed in E:.... The "Start in" field is still correct, but the target is wrong. When I try to change the target, I am able to edit it to E:.... and click Apply, but when I close the properties window it goes back to C:.... Why is this and how can I fix it?
Edit: The applications themselves are fine, it's just the shortcuts that are messed up. If I navigate to their folder, everything works.
Sorry, your link is broken. Which values in your reg did you modify?
I just formatted my computer to install my OS on my SSD, and apps in my HDD. I've also modified my reg keys (CommonFilesDir and ProgramFilesDir) to point to my HDD but I noticed the 32-bit software doesn't install there and sometimes Win7 has trouble finding them.
The link is fixed.
I'm trying to add a command line parameter to a Windows 7 shortcut, that starts with an at symbol (@). Although it seems to save properly when hitting [Apply], it actually doesn't. Changing it to (/@) works...but is not what the application I'm using requires: http://jpsoft.com/help/cmdlineopts.htm :(
If I understand you right, your programs are working but the shortcuts on your desktop are all messed up. If these are simply shortcuts on your desktop, just delete them and recreate them: go to the item in your program menu, drag it to your desktop with the RIGHT mouse button, and when you release it, select "Create shortcut".
Yeah... I know I could do that, but it's (a) not interesting and (b) doesn't fix the underlying problem. While C is my default drive, I still install a bunch of stuff to E. I don't want to have to recreate a shortcut EVERY time I install something to E.
Are you having to change the icon now when you install something new? It doesn't point to E now if you install it on E?
Yes, the icon also points incorrectly to C and I need to reset it to its actual location in E. And yes, this only happens when I install to E. Things installed to C work fine.
How to model a one-to-one relationship in JPA when the "parent" table has a composite PK?
While there is plenty of information around on how to model, in JPA (2), a one-to-one relationship OR an entity having a natural key, I haven't been able to find a clear / simple answer to how to model the situation where we have both, i.e. a one-to-one relationship where the parent table has a natural key. It could obviously be that I might have missed such a tutorial; if so, pointing me to one could also be the answer.
And, as is often the case with JPA and noobs such as me, the moment one needs a bit more than the most basic model, one can quickly hit the wall.
Hence, considering the following DB model:
What would be the corresponding JPA-annotated object model? (I'm sparing you guys of the things I've tried since I don't want to influence the answer...)
Performance recommendations are also welcome (e.g. "a one-to-many could perform faster", etc.)!
Thanks,
The parent object has a composite PK, and so has @IdClass. The Child object has a single field annotated with @Id and a Parent reference. No idea how that is "non-standard". Who knows what "T_CHILD_C_PK" is. You put the FK at one side or the other, so I have to assume it is P_NK_1 and P_NK_2 in the Child
Indeed, the previous diagram was less than clear, thanks. I've updated the post with a (hopefully) clearer image. The parent table (T_PARENT) has a natural key formed of 2 fields, (PARENT_PK1, PARENT_PK2) and is being referenced by the child table T_CHILD.
Your schema is clearer now, but the fact remains if you just have a @OneToOne in the Child back to the Parent then it maps simply (and the "parent" relation field in the Child gives the 2 FK columns in the CHILD table).
The composite identifier is built out of two numerical columns so the mapping looks like this:
@Embeddable
public class EmployeeId implements Serializable {
private Long companyId;
private Long employeeId;
public EmployeeId() {
}
public EmployeeId(Long companyId, Long employeeId) {
this.companyId = companyId;
this.employeeId = employeeId;
}
public Long getCompanyId() {
return companyId;
}
public Long getEmployeeId() {
return employeeId;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof EmployeeId)) return false;
EmployeeId that = (EmployeeId) o;
return Objects.equals(getCompanyId(), that.getCompanyId()) &&
Objects.equals(getEmployeeId(), that.getEmployeeId());
}
@Override
public int hashCode() {
return Objects.hash(getCompanyId(), getEmployeeId());
}
}
The parent class, looks as follows:
@Entity(name = "Employee")
public static class Employee {
@EmbeddedId
private EmployeeId id;
private String name;
@OneToOne(mappedBy = "employee")
private EmployeeDetails details;
public EmployeeId getId() {
return id;
}
public void setId(EmployeeId id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public EmployeeDetails getDetails() {
return details;
}
public void setDetails(EmployeeDetails details) {
this.details = details;
}
}
And the child like this:
@Entity(name = "EmployeeDetails")
public static class EmployeeDetails {
@EmbeddedId
private EmployeeId id;
@MapsId
@OneToOne
private Employee employee;
private String details;
public EmployeeId getId() {
return id;
}
public void setId(EmployeeId id) {
this.id = id;
}
public Employee getEmployee() {
return employee;
}
public void setEmployee(Employee employee) {
this.employee = employee;
this.id = employee.getId();
}
public String getDetails() {
return details;
}
public void setDetails(String details) {
this.details = details;
}
}
And everything works just fine:
doInJPA(entityManager -> {
Employee employee = new Employee();
employee.setId(new EmployeeId(1L, 100L));
employee.setName("Vlad Mihalcea");
entityManager.persist(employee);
});
doInJPA(entityManager -> {
Employee employee = entityManager.find(Employee.class, new EmployeeId(1L, 100L));
EmployeeDetails employeeDetails = new EmployeeDetails();
employeeDetails.setEmployee(employee);
employeeDetails.setDetails("High-Performance Java Persistence");
entityManager.persist(employeeDetails);
});
doInJPA(entityManager -> {
EmployeeDetails employeeDetails = entityManager.find(EmployeeDetails.class, new EmployeeId(1L, 100L));
assertNotNull(employeeDetails);
});
doInJPA(entityManager -> {
Phone phone = entityManager.find(Phone.class, "012-345-6789");
assertNotNull(phone);
assertEquals(new EmployeeId(1L, 100L), phone.getEmployee().getId());
});
Code available on GitHub.
Thanks Vlad! What I kinda' wanted to obtain was a member / property in the "parent" class (Employee in your example) giving me access to the child (EmployeeDetails). Something like: public class Employee { ... public EmployeeDetails getDetails() { ... } ...
Because this uses Objects class which was introduced in Java 7, it does not compile in Java 6.
Java 6 is very old. Maybe it's time to upgrade it.
And, if truly stuck for whatever reason with versions prior to 7, you could create your own replicas, eventually by simply copying the code from OpenJDK (although implementing on one's own should not be hard :) ) -- see http://hg.openjdk.java.net/jdk10/jdk10/jdk/file/72f33dbfcf3b/src/java.base/share/classes/java/util/Objects.java
How to send file in Android with Asmack+Openfire?
Could you please send me the code to send a file using aSmack + Openfire?
I tried, but I am getting an error like error code="503" type="cancel".
Please help me.
I did everything as you said; still it's not working.
Ok check this http://stackoverflow.com/questions/21721605/asmack-file-sending-error-503-type-cancel-with-openfire
You have to send a fully-qualified Jabber ID in the "sentTo"; it consists of a node, a domain, and a resource (user@domain/resource). What are you actually sending?
Yes, you are correct. Now I am able to send a file. Thanks vzamanillo, thanks a lot :)
But I'm still getting a problem while receiving the file in Spark chat. I mean, there was an error during the file transfer.
You have to send a fully-qualified Jabber ID as the destination user ID when you create the OutgoingFileTransfer; it consists of a node, a domain, and a resource (user@domain/resource), as I said before in the comments. You are actually sending a2@aaa, which is not correct.
<iq id="SU8c1-17" to="a2@aaa" from="a1@aaa/Smack" type="set">
<si xmlns="http://jabber.org/protocol/si" id="jsi_2427513438410796738" profile="http://jabber.org/protocol/si/profile/file-transfer">
<file xmlns="http://jabber.org/protocol/si/profile/file-transfer" name="user.json" size="379">
<desc>test_file</desc>
</file>
<feature xmlns="http://jabber.org/protocol/feature-neg">
<x xmlns="jabber:x:data" type="form">
<field var="stream-method" type="list-multi">
<option>
<value>http://jabber.org/protocol/bytestreams</value>
</option>
<option>
<value>http://jabber.org/protocol/ibb</value>
</option>
</field>
</x>
</feature>
</si>
</iq>
So, your sentTo variable should be
String sentTo = "user@domain/resource";
OutgoingFileTransfer transfer = manager.createOutgoingFileTransfer(sentTo)
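To make the expected format concrete, here is a small Python sketch (my own helper, not part of aSmack) that splits a full JID into its three parts and rejects the bare user@domain form that caused the 503 error:

```python
def split_jid(jid: str):
    """Split a full JID 'node@domain/resource' into (node, domain, resource).
    Raises ValueError when the resource is missing, which is the mistake
    discussed above (sending just 'a2@aaa')."""
    local, slash, resource = jid.partition("/")
    node, at, domain = local.partition("@")
    if not (at and slash and node and domain and resource):
        raise ValueError("expected node@domain/resource, got %r" % jid)
    return node, domain, resource

print(split_jid("a2@aaa/Spark"))  # ('a2', 'aaa', 'Spark')
```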
<iq type="error" id="U45Co-6" from="shailesh@apps-shailesh/Smack" to="admin@apps-shailesh/Smack"
I am getting this even after putting in the values.
I think you should send a Jabber ID with the node, domain and resource. It should go in the sentTo variable.
Kii not working on unity android build
I want to use Kii(Cloud) as backend for my Unity Android game.
Everything works fine on my computer. I added some users, a few global buckets and used queries to filter information from them. The problems start when I try to start the .apk on my phone.
The app itself starts, but it never gets past a grey screen.
The unity scene never loads.
Investigating it further, even an empty unity project with the unity-cloud-sdk-3.2.10.unitypackage imported does not seem to load the first scene. Just a grey screen.
I am using Unity 2018.1.0b13 Personal Edition.
Tested on physical OnePlus One with Android Cyanogen Mod
and emulated Nexus 5 with standard Android.
Does anyone know what is wrong here?
I posted this exact same question on their own forum, but the last post there is from september 2017.
Copied from the KiiCloud Forum
Hi,
I confirmed that unity-cloud-sdk-3.2.10.unitypackage works properly on the android real device.
My env is:
Unity Personal Edition Version 2017.4.0f1
Galaxy S8 (Android7.0)
ASUS Zenfone3 (Android7.0)
Did you get any error messages?
Please ensure that your project is configured properly.
http://docs.kii.com/en/guides/cloudsdk/unity/quickstart/install-sdk/
Best regards,
So basically, Unity 2018 does not work properly if you try to build for Android.
http://community.kii.com/t/kii-not-working-on-unity-android-build/1678
How to get max values for each unique value in a different column in Google Sheets?
I have two columns, the first column (A) has names and the second column (B) has values.
| A      | B  |
|--------|----|
| apple  | 10 |
| orange | 12 |
| orange | 14 |
| apple  | 8  |
Is there a way to get only rows with unique names from A AND max values from B?
So the result should look like this:
| A      | B  |
|--------|----|
| apple  | 10 |
| orange | 14 |
I tried using different combinations of FILTER, QUERY and UNIQUE, but so far no luck. Note that the actual dataset I'm using is much larger than this, but the idea is the same.
use:
=SORTN(SORT(A:B; 2; 0); 9^9; 2; 1; 1)
Should be doable directly within a query as well:
=query(A:B,"select A,max(B) where A is not null group by A")
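To double-check the expected output outside Sheets, the same group-by-max is only a few lines of Python:

```python
def max_per_name(rows):
    """Return the maximum B value for each unique A value,
    mirroring the QUERY/SORTN formulas above."""
    best = {}
    for name, value in rows:
        if name not in best or value > best[name]:
            best[name] = value
    return best

rows = [("apple", 10), ("orange", 12), ("orange", 14), ("apple", 8)]
print(max_per_name(rows))  # {'apple': 10, 'orange': 14}
```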
Create Superuser who can access more than one Schema in oracle 11G
I have two schemas, Schema-1 and Schema-2. I want to create one superuser who can access both schemas (Schema-1 and Schema-2).
I want to create the user with a command in Oracle 11g. Is it possible?
See https://stackoverflow.com/questions/9447492/how-to-create-a-user-in-oracle-11g-and-grant-permissions
Such a user already exists; it is called SYS, who owns the database. Though, it is not a very good idea to use it for daily jobs - you'd rather (as you wanted) create your own "superuser" who is capable of doing such things. For example:
SQL> connect sys as sysdba
Enter password:
Connected.
SQL> create user superuser identified by superman;
User created.
SQL> grant dba to superuser;
Grant succeeded.
OK, let's try it:
SQL> connect superuser/superman
Connected.
SQL> select count(*) From scott.emp;
COUNT(*)
----------
14
SQL> select table_name from dba_tables where owner = 'MIKE';
TABLE_NAME
------------------------------
EMP
DEPT
BONUS
SALGRADE
DUMMY
ABC
6 rows selected.
SQL> select * from mike.abc;
KEY ID SEQ THINGS DESCR
---------- ---------- ---------- ---------- ----------
1 1 0 Food Chicken
2 1 1 Cars BMW
3 1 2 Sport Soccer
4 2 0 Food Mutton
5 2 1 Cars Ford
6 2 2 Sport Tennis
6 rows selected.
SQL>
Now, whether DBA is the right role for that user, I can't tell. Maybe it is not, so perhaps you'd rather grant only the required set of privileges. Which set that is, I can't tell either.
Maybe it would be enough to grant e.g. select privileges to superuser for both schema1 and schema2 users' tables. Though, you can't do that in a single command - you'd have to do it separately for each user and for each of their tables (which means a lot of grant select statements). Let's try it:
SQL> connect sys as sysdba
Enter password:
Connected.
SQL> revoke dba from superuser;
Revoke succeeded.
SQL>
It is a boring job writing statement-by-statement, so I'll write code to write code for me:
SQL> select 'grant select on ' || owner ||'.' ||table_name || ' to superuser;' str
2 from dba_tables
3 where owner in ('SCOTT', 'MIKE')
4 order by owner, table_name;
STR
--------------------------------------------------------------------------------
grant select on MIKE.ABC to superuser;
grant select on MIKE.BONUS to superuser;
grant select on MIKE.DEPT to superuser;
<snip>
grant select on SCOTT.TEST_B to superuser;
grant select on SCOTT.TEST_D to superuser;
26 rows selected.
SQL>
OK; now copy/paste the above grant statements and run them.
SQL> grant select on MIKE.ABC to superuser;
Grant succeeded.
SQL> grant select on MIKE.BONUS to superuser;
Grant succeeded.
SQL> grant select on MIKE.DEPT to superuser;
Grant succeeded.
<snip>
SQL> grant select on SCOTT.TEST_B to superuser;
Grant succeeded.
SQL> grant select on SCOTT.TEST_D to superuser;
Grant succeeded.
SQL>
Does it work?
SQL> connect superuser/superman
ERROR:
ORA-01045: user SUPERUSER lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.
SQL>
Aha! Not just yet! Revoking DBA revoked a large set of privileges, so superuser now exists as a user but can't do anything. So, let's let it connect to the database:
SQL> connect sys as sysdba
Enter password:
Connected.
SQL> grant create session to superuser;
Grant succeeded.
SQL> connect superuser/superman
Connected.
SQL> select * From scott.dept;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SQL> select * From mike.abc;
KEY ID SEQ THINGS DESCR
---------- ---------- ---------- ---------- ----------
1 1 0 Food Chicken
2 1 1 Cars BMW
3 1 2 Sport Soccer
4 2 0 Food Mutton
5 2 1 Cars Ford
6 2 2 Sport Tennis
6 rows selected.
SQL>
Right; much better. That's what I meant by saying "grant only required set of privileges"; don't grant more privileges than someone really needs.
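If you prefer scripting this outside SQL*Plus, the same code-that-writes-code step can be sketched in Python (a hypothetical helper of mine; feed it the owner/table pairs from dba_tables however you like):

```python
def grant_statements(tables, grantee="superuser"):
    """Generate one GRANT SELECT statement per (owner, table) pair,
    mirroring the SELECT ... FROM dba_tables trick used above."""
    return [
        "grant select on %s.%s to %s;" % (owner, table, grantee)
        for owner, table in sorted(tables)
    ]

for stmt in grant_statements([("SCOTT", "EMP"), ("MIKE", "ABC")]):
    print(stmt)
# grant select on MIKE.ABC to superuser;
# grant select on SCOTT.EMP to superuser;
```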
Spring Security - No visible WebSecurityExpressionHandler instance could be found in the application context
I am having trouble displaying a logout link in a JSP page only if the user is authenticated. Here is the exception I have at this line of the JSP page:
<sec:authorize access="isAuthenticated()">
Exception:
Stacktrace:
....
root cause
javax.servlet.jsp.JspException: No visible WebSecurityExpressionHandler instance could be found in the application context. There must be at least one in order to support expressions in JSP 'authorize' tags.
org.springframework.security.taglibs.authz.AuthorizeTag.getExpressionHandler(AuthorizeTag.java:100)
org.springframework.security.taglibs.authz.AuthorizeTag.authorizeUsingAccessExpression(AuthorizeTag.java:58)
Here is my application-context-Security.xml:
<http auto-config='true' >
<intercept-url pattern="/user/**" access="ROLE_User" />
<logout logout-success-url="/hello.htm" />
</http>
<beans:bean id="daoAuthenticationProvider"
class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
<beans:property name="userDetailsService" ref="userDetailsService" />
</beans:bean>
<beans:bean id="authenticationManager"
class="org.springframework.security.authentication.ProviderManager">
<beans:property name="providers">
<beans:list>
<beans:ref local="daoAuthenticationProvider" />
</beans:list>
</beans:property>
</beans:bean>
<authentication-manager>
<authentication-provider user-service-ref="userDetailsService">
<password-encoder hash="plaintext" />
</authentication-provider>
</authentication-manager>
I understand that I could use use-expressions="true" in the http tag, but that means I would have to use expressions in the intercept-url tags and in the Java code. Is there a workaround?
An unrelated observation. The daoAuthenticationProvider and authenticationManager in your configuration aren't being used.
You can just add one to your application context
<bean id="webexpressionHandler" class="org.springframework.security.web.access.expression.DefaultWebSecurityExpressionHandler" />
but the easiest way is just to enable expressions in your <http> configuration, and one will be added for you. This only means that you have to use expressions within that block, not in Java code such as method @Secured annotations.
You are right, I just had to use expressions within the block. The problem is solved.
and to be absolutely explicit, your tag should include a "use-expressions" attribute set to true, e.g.
I share what worked for me:
Add bean
<b:bean id="jspExpresionHandler" class="org.springframework.security.web.access.expression.DefaultWebSecurityExpressionHandler"> </b:bean>
Add use-expressions in 'http'
<http auto-config="true" use-expressions="true" pattern="/**">
Add global-method-security after
<global-method-security pre-post-annotations="enabled"> <expression-handler ref="jspExpresionHandler"/> </global-method-security>
How to send iMessage on AppleScript?
I want to send an iMessage using AppleScript and Python. The code compiles, but no messages are being sent.
script = """tell application "Messages"
send "TEST TEXT" to buddy "1xxxxxxx" of service<EMAIL_ADDRESS>
end tell"""
def script_run(script):
applescript.run(script)
script_run(script)
I don't know whether this will be a factor for you, but in case you aren't aware, you will only be able to send messages to existing conversations in Messages. You cannot send an iMessage with AppleScript as a brand new conversation.
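For reference, from Python the common route is to build the AppleScript source and run it through osascript with subprocess. The sketch below only constructs (and escapes) the script text; the buddy/service phrasing is the usual idiom but your Messages setup may differ, and as noted above, actually sending requires macOS and an existing conversation:

```python
def imessage_script(recipient: str, text: str) -> str:
    """Build AppleScript source for sending an iMessage. On macOS you
    would run it with: subprocess.run(["osascript", "-e", script])."""
    esc = text.replace("\\", "\\\\").replace('"', '\\"')  # escape for AppleScript
    template = (
        'tell application "Messages"\n'
        '    send "{msg}" to buddy "{to}" of (service 1 whose service type is iMessage)\n'
        'end tell'
    )
    return template.format(msg=esc, to=recipient)

print(imessage_script("+15551234567", 'say "hi"'))
```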
Does MySQL view always do full table scan?
I'm trying to optimize a query which uses a view in MySQL 5.1. It seems that even if I select 1 column from the view it always does a full table scan. Is that the expected behaviour?
The view is just a SELECT "All Columns From These Tables - NOT *" for the tables I have specified in the first query below.
This is my explain output from when I select the indexed column PromotionID in the query which makes up the view. As you can see, it is vastly different from the output on the view.
EXPLAIN SELECT pb.PromotionID FROM PromotionBase pb INNER JOIN PromotionCart pct ON pb.PromotionID = pct.PromotionID INNER JOIN PromotionCode pc ON pb.PromotionID = pc.PromotionID WHERE pc.PromotionCode = '5TAFF312C0NT'\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: pc
type: const
possible_keys: PRIMARY,fk_pc_pb
key: PRIMARY
key_len: 302
ref: const
rows: 1
Extra:
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: pb
type: const
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: const
rows: 1
Extra: Using index
*************************** 3. row ***************************
id: 1
select_type: SIMPLE
table: pct
type: const
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: const
rows: 1
Extra: Using index
3 rows in set (0.00 sec)
The output when I select the same thing, but from the view:
EXPLAIN SELECT vpc.PromotionID FROM vw_PromotionCode vpc WHERE vpc.PromotionCode = '5TAFF312C0NT'\G;
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: <derived2>
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 5830
Extra: Using where
*************************** 2. row ***************************
id: 2
select_type: DERIVED
table: pcart
type: index
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 33
Extra: Using index
*************************** 3. row ***************************
id: 2
select_type: DERIVED
table: pb
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: readyinteractive.pcart.PromotionID
rows: 1
Extra:
*************************** 4. row ***************************
id: 2
select_type: DERIVED
table: pc
type: ref
possible_keys: fk_pc_pb
key: fk_pc_pb
key_len: 4
ref: readyinteractive.pb.PromotionID
rows: 249
Extra: Using where
*************************** 5. row ***************************
id: 3
select_type: UNION
table: pp
type: index
possible_keys: PRIMARY
key: pp_p
key_len: 4
ref: NULL
rows: 1
Extra: Using index
*************************** 6. row ***************************
id: 3
select_type: UNION
table: pb
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: readyinteractive.pp.PromotionID
rows: 1
Extra:
*************************** 7. row ***************************
id: 3
select_type: UNION
table: pc
type: ref
possible_keys: fk_pc_pb
key: fk_pc_pb
key_len: 4
ref: readyinteractive.pb.PromotionID
rows: 249
Extra: Using where
*************************** 8. row ***************************
id: 4
select_type: UNION
table: pcp
type: index
possible_keys: PRIMARY
key: pcp_cp
key_len: 4
ref: NULL
rows: 1
Extra: Using index
*************************** 9. row ***************************
id: 4
select_type: UNION
table: pb
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: readyinteractive.pcp.PromotionID
rows: 1
Extra:
*************************** 10. row ***************************
id: 4
select_type: UNION
table: pc
type: ref
possible_keys: fk_pc_pb
key: fk_pc_pb
key_len: 4
ref: readyinteractive.pb.PromotionID
rows: 249
Extra: Using where
*************************** 11. row ***************************
id: 5
select_type: UNION
table: ppc
type: index
possible_keys: PRIMARY
key: ppc_pc
key_len: 4
ref: NULL
rows: 1
Extra: Using index
*************************** 12. row ***************************
id: 5
select_type: UNION
table: pb
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: readyinteractive.ppc.PromotionID
rows: 1
Extra:
*************************** 13. row ***************************
id: 5
select_type: UNION
table: pc
type: ref
possible_keys: fk_pc_pb
key: fk_pc_pb
key_len: 4
ref: readyinteractive.pb.PromotionID
rows: 249
Extra: Using where
*************************** 14. row ***************************
id: 6
select_type: UNION
table: ppt
type: index
possible_keys: PRIMARY
key: ppt_pt
key_len: 4
ref: NULL
rows: 1
Extra: Using index
*************************** 15. row ***************************
id: 6
select_type: UNION
table: pb
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: readyinteractive.ppt.PromotionID
rows: 1
Extra:
*************************** 16. row ***************************
id: 6
select_type: UNION
table: pc
type: ref
possible_keys: fk_pc_pb
key: fk_pc_pb
key_len: 4
ref: readyinteractive.pb.PromotionID
rows: 249
Extra: Using where
*************************** 17. row ***************************
id: NULL
select_type: UNION RESULT
table: <union2,3,4,5,6>
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
Extra:
17 rows in set (0.18 sec)
Views in MySQL are not indexed so by their very nature require a full scan each time they are accessed. Generally speaking this makes Views really only useful for situations where you have a fairly complex static query that returns a small result set and you plan to grab the entire result set every time.
Edit: Of course Views will use the indexes on the underlying tables so that the View itself is optimized (otherwise they wouldn't make any sense at all to use) but because there are no indexes on a View it is not possible for a WHERE query on the View to be optimized.
Constructing indexes for Views would be expensive anyway because while I've not tried to profile any Views, I'm fairly certain that a temp table is constructed behind the scenes and then the result set returned. It already takes plenty of time to construct the temp table, I wouldn't want a view that also tries to guess what indexes are needed. Which brings up the second point which is that MySQL does not currently offer a method to specify what indexes to use for a View so how does it know what fields need to be indexed? Does it guess based on your query?
You might consider using a Temporary Table because then you can specify indexes on fields in the temporary table. However, from experience this tends to be really, really slow.
If all this view contains is a SELECT ALL FROM table1, table2, table3; then I would have to ask why this query needs to be in a View at all? If for some reason it's absolutely necessary, you might want to use a stored procedure to encapsulate the query, as you'll then be able to get optimized performance while maintaining the benefit of a simpler call to the database for the result set.
It says here http://dev.mysql.com/doc/refman/5.0/en/view-restrictions.html that the view will use the indexes of the underlying tables.
I am pretty sure you are right - queries on MySQL views can use indexes from its source tables. You just can't have an index on the view itself and hence cannot have a single index containing columns from more than one of those tables.
Unfortunately that doesn't explain why that second query is so poorly optimised by MySQL. All I can think of is maybe that view isn't exactly joining in the way it is intended to. Don't know though.
If you use a WHERE statement on a View then a full table scan has to be performed to find the values that match the WHERE clause because the View itself is not indexed.
I've looked deeper into it and I've missed a key point of information :( My view query actually has a union with another table. This is causing the view to use the TEMPORARY TABLE algorithm instead of the MERGE algorithm.
The TEMPORARY TABLE algorithm doesn't allow the use of indexes in the underlying tables.
This seems to be a bug in MySQL and was reported way back in 2006 but doesn't look like it has been solved in 2009! http://forums.mysql.com/read.php?100,56681,56681
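Conceptually (this is an analogy in Python, not MySQL internals), the difference between the two view algorithms is whether the WHERE predicate can reach the base tables, where an index behaves like a direct lookup, or only sees a materialized copy of the whole result:

```python
# Toy data standing in for (PromotionCode, PromotionID) rows.
rows = [("CODE%04d" % i, i) for i in range(5830)]
index = {code: pid for code, pid in rows}  # plays the role of an index on PromotionCode

# TEMPTABLE algorithm: materialize the view first, then filter -> linear scan.
temptable_hits = [pid for code, pid in rows if code == "CODE0042"]

# MERGE algorithm: the predicate is pushed into the base query -> index lookup.
merge_hit = index["CODE0042"]

print(temptable_hits, merge_hit)  # [42] 42
```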
Looks like I'm just going to have to re-write the query as an outer join.
How to stop auto complete in Visual Studio 2013 C#
I'm having trouble with the auto completion in Visual Studio (C#), because it keeps completing things I don't want it to.
For example, if I type int x = z; then when I press z it suggests "DivideByZeroException", and if I press ; it accepts this completion. I only want to accept the highlighted auto completion with Enter. How can I do this?
This is not possible in C#. You can do it in C++ like this.
You'll have to press Esc to close the AutoComplete box before pressing ;.
BTW I think if you had Resharper you could do it.
As you mentioned you do have Resharper, you can configure your completion characters from
ReSharper | Options | Environment | IntelliSense | Completion Characters
as documented here.
I have respharper actually.
Go to Tools > Options > Text Editor > General and uncheck what you don't want. You can also check the All Languages section.
I've looked there, but I can't find any setting that turns it off.
In SharpDevelop it's in Tools > Options > Text Editor > Code Completion.
AddForce changes behavior based on location
I'm trying to make an enemy jump toward its target to attack using AddForce, but just how much force is added seems to depend greatly on where in the scene the entities are (see video).
What could the issue be?
void Start(){
ani = GetComponent<Animation>();
agent = GetComponent<NavMeshAgent>();
rig = GetComponent<Rigidbody>();
state = states.idle;
}
private void Attack(){
state = states.attack;
agent.enabled = false;
ani.Play("attack3");
Vector3 force = transform.forward * 300f;
force.y = 750f;
rig.AddForce(force);
}
I am thinking it has to do with how you call the Attack method. Most likely, in the first case of your video the method is called once, while in the second case it happens to be called over several frames. You should provide the whole AI script, maybe via Dropbox or similar.
Agreed, can you paste the script that calls the attack method please
Without seeing more code it's hard to say, but there's one thing that does stand out to me: if your entity is rotated around the x-axis (i.e. could be angled upward or downward) then you'd see the behaviour you describe from the code you have. (It's not clear from your video whether this is the case or not, but it does seem that your ground has some slope to it which could cause this if the models were aligned to the floor.)
Consider: if your agent is facing directly along the Z axis then its transform.forward will have a y of zero. The initial value of force will have a magnitude of 300 (and will still have a y of zero). You will then set that y to 750 giving an overall force of (0, 750, 300).
Now consider that your agent is facing almost completely upward. transform.forward is now (0, 0.99, 0.01) and you multiply by 300, but then set y = 750. Your resulting force is now (0, 750, 3) for an agent that's jumping in the same direction.
You could fix this by modifying the end of Attack like so:
Vector3 horizontalDirection = new Vector3(transform.forward.x, 0, transform.forward.z);
Vector3 force = horizontalDirection.normalized * 300f;
force.y = 750f;
rig.AddForce(force);
The above code guarantees the magnitude of force will be consistent no matter which direction your character is facing.
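As a quick numeric check of the argument above (plain arithmetic, not the Unity API), compare the horizontal component of the force for a level agent and a pitched-up agent under both versions:

```python
import math

def raw_force(forward, scale=300.0, jump=750.0):
    """force = forward * scale with y overwritten (the original Attack() code)."""
    return (forward[0] * scale, jump, forward[2] * scale)

def fixed_force(forward, scale=300.0, jump=750.0):
    """Normalize the horizontal part of forward first (the corrected code)."""
    h = math.hypot(forward[0], forward[2])
    return (forward[0] / h * scale, jump, forward[2] / h * scale)

level = (0.0, 0.0, 1.0)        # facing flat along +Z
pitched = (0.0, 0.9999, 0.01)  # facing almost straight up

print(round(raw_force(level)[2], 6))      # 300.0: full push
print(round(raw_force(pitched)[2], 6))    # 3.0: almost no push
print(round(fixed_force(pitched)[2], 6))  # 300.0: consistent again
```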
(The other issue mentioned in comment by @fafase is a very plausible cause also, and it could be one or both of these to blame!)
remove all anchors except it's contents?
I found that sometimes this code returns NotFoundError: An attempt was made to reference a Node in a context where it does not exist.
$('a').replaceWith(function() {
return $.text([this]);
});
Is it complaining about a case where the anchor does not have any text node? Perhaps a child HTML element?
What parts are you trying to keep? href="this" or this?
what is your target markup and the desired output
I think you should use return $.text(this);
@LJ-C I am trying to remove the anchor tag entirely
@C-link still returns the error
@ArunPJohny I am targeting all anchors, so anything like <a href...>anything here text or html</a>, and it should return anything here text or html
what is the desired output you are looking for
Try this:
$('a').contents().unwrap()
DEMO
then please update demo with some real world html for comparison. It's not entirely clear what the issue is
@charlietfl I've provided the real world html which your code and my code brings about different results. http://jsfiddle.net/KLq5w/3/
there's script in your html that throws errors, please remove. What are you wanting, text only? No images? Would help to scale your demo html down to just a few links...and show what output you want as well
basically with my code it seems to remove the empty spacings beside the title of costumes. with your code it seems to retain the empty spacings. I don't know what is causing this.
mine just unwraps what's inside, which is what the subject title of your post suggested. If you get text() of an element that has many text nodes in different children, children can be separated by css... no guarantee that text() won't cause some merging. Still not clear what output you want... and it's been asked numerous times
Disable double at(@) sign in exim
How do I disable double at sign in message-ID?
because of the double at sign, my e-mails are landing in recipients' spam folders:
001d01d2d093$7154f9d0$53feed70$ @ USER @ domain.tld
in the log, I see the above message header, and the mail-tester.com, notes this issue
"Message-ID contains multiple '@' characters"
Any idea how I can disable the double at sign in exim?
Detailed log output:
<EMAIL_ADDRESS>F= R=lookuphost T=remote_smtp S=16179 H=OFFICE365domain-COM.mail.protection.outlook.com [<IP_ADDRESS>] X=TLSv1.2:ECDHE-RSA-AES256-SHA384:256 C="250 2.6.0 <001d01d2d093$7154f9d0$53feed70$ @ USER @ domain.tld> [InternalId=2070174243006, Hostname=VI1PR0601MB2605.eurprd06.prod.outlook.com] 23799 bytes in 0.280, 82.980 KB/sec Queued mail for delivery"
http://www.exim.org/exim-html-current/doc/html/spec_html/ch-message_processing.html says that the message-id includes "@ and the primary host name. "
Your log message above seems to indicate that <EMAIL_ADDRESS> is the name of your host.
If that's true, then you need to give your host a name that does not include an @.
You might also check your exim configuration:
Additional information can be included in this header line by setting the message_id_header_text and/or message_id_header_domain options.
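If you do go down that route, a minimal sketch of the relevant exim main-configuration options (the option names come from the exim specification; the host and domain values below are placeholders, not taken from the question):

```
# Sketch of exim main configuration - values are placeholders.
# Make sure the host name itself contains no '@':
primary_hostname = mail.example.com
# Optionally force the domain part of generated Message-ID headers:
message_id_header_domain = example.com
```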
Sorry for the late reply, but the issue [I think] is 001d01d2d093$7154f9d0$53feed70$ @ USER @ domain.tld. And in the previous log I had some things wrong, so it is now edited. I checked everything else, and there is a rule in SpamAssassin that flags double at signs: https://wiki.apache.org/spamassassin/Rules/MSGID_MULTIPLE_AT
React useEffect repeats an action several times
I have a form with next fields:
A drop-down list that allows choosing a branch, plus a city that is automatically selected according to the chosen branch.
A calendar component that allows selecting a date, plus details about the date that display automatically, such as whether it is a work day or a holiday.
Fields with temperature details that display automatically, depending on the selected branch's city and the selected date. They update whenever either of them changes, and both are validated.
The check that the first two fields are valid, the API call that fetches the temperature, and the update of the temperature state (via useState) are all done in useEffect. The problem is that every time there is a change in the first two fields, the API call is made two or three times.
I deleted the <React.StrictMode> and it doesn't help.
import Calendar from "../components/Calendar";
import Languages from "../components/Languages/Languages.json";
import React, { useContext, useEffect, useRef, useState } from "react";
import { ContextCities } from "../Contexts/ContextCities";
import { ContextLanguages } from "../Contexts/ContextLanguages";
import { loadBranches } from "../utils/branchesHandelApi";
import { loadWeather } from "../utils/weatherHandelApi";
export default function Transactions() {
const { cities } = useContext(ContextCities);
const { selectedLanguage } = useContext(ContextLanguages);
const [branch, setBranch] = useState([]);
const [selectedBranchIndex, setSelectedBranchIndex] = useState(0);
const [selectedDate, setSelectedDate] = useState();
const [selectedDateData, setSelectedDateData] = useState();
const [weather, setWeather] = useState(0);
const handleSubmitTransaction = async (e) => {
e.preventDefault();
// saveTransactions(transaction);
};
useEffect(() => {
loadBranches(setBranch);
}, []);
const isInitialMount = useRef(true);
useEffect(() => {
if (isInitialMount.current) {
isInitialMount.current = false;
} else {
if (
cities !== "undefined" &&
selectedBranchIndex !== 0 &&
selectedDate
) {
let station_id = cities.cities.find(
(itm) => itm.Name === branch[selectedBranchIndex - 1].city
).meteorological_station.id;
console.log("station_id", station_id);
loadWeather(station_id, selectedDate, setWeather);
} else setWeather(0);
}
});
return (
<>
<h1>
<span style={{ fontSize: "2rem", fontFamily: "Noto Color Emoji" }}>
{"\ud83d\udcb3"}
</span>{" "}
{Languages.transactions[selectedLanguage]}
</h1>
<form
autoComplete="off"
className="App"
id="form"
onSubmit={(e) => handleSubmitTransaction(e)}
>
<fieldset>
<legend>
<h2>{Languages.manual_update[selectedLanguage]}</h2>
</legend>
<h2>{Languages.branches[selectedLanguage]}</h2>
<h3>{Languages.name[selectedLanguage]}:</h3>{" "}
<select
defaultValue=""
name="branch"
required
onChange={(e) => setSelectedBranchIndex(e.target.selectedIndex)}
>
<option hidden value="">
{Languages.name[selectedLanguage]}
</option>
{branch.map((itm, index) => (
<option key={index}>{itm.name}</option>
))}
</select>
 
<h3>{Languages.city[selectedLanguage]}:</h3>{" "}
<input
disabled="disabled"
readOnly
value={
selectedBranchIndex === 0
? ""
: branch[selectedBranchIndex - 1].city
}
/>
<br />
<br />
<h2>{Languages.date[selectedLanguage]}</h2>
<Calendar
setSelectedDate={setSelectedDate}
setSelectedDateData={setSelectedDateData}
/>
<br />
<br />
<h2>{Languages.weather[selectedLanguage]}</h2>
<h3>{Languages.meteorological_station_id[selectedLanguage]}:</h3>{" "}
<input
readOnly
disabled="disabled"
value={
selectedBranchIndex === 0 || cities === "undefined"
? ""
: cities.cities.find(
(itm) =>
itm.Name === branch[selectedBranchIndex - 1].city
).meteorological_station.id
}
/>
 
<h3>
{Languages.meteorological_station_location[selectedLanguage]}:
</h3>{" "}
<input
readOnly
disabled="disabled"
value={
selectedBranchIndex === 0 || cities === "undefined"
? ""
: cities.cities.find(
(itm) =>
itm.Name === branch[selectedBranchIndex - 1].city
).meteorological_station.name
}
/>
 
<h3>{Languages.temperature[selectedLanguage]}:</h3>{" "}
<input readOnly disabled="disabled" value={weather} />
{" \u2103"}
</fieldset>
</form>
</>
);
}
Please add some code so we can help diagnose and explain the issue. It is likely that you have a dependency in the useEffect dependency array that is either changing a lot, or is "complex" in some way (e.g it is a (JSON) Object).
show some code please
Your second useEffect doesn't have a dependency array
Thanks Harrison and Raziel, I added the code. Thanks Konrad. When I add empty dependency array, it completely cancels the call to the API temperature.
You need to add the dependency array to your second useEffect(), otherwise it'll keep executing every time the component re-renders. If you define it empty, it'll run once at the first render. If you pass some values, it'll run again when one of those values changes. Here is an example with a dependency array containing some variables that are used inside the hook:
useEffect(() => {
if (
cities !== "undefined" &&
selectedBranchIndex !== 0 &&
selectedDate
) {
let station_id = cities.cities.find(
(itm) => itm.Name === branch[selectedBranchIndex - 1].city
).meteorological_station.id;
console.log("station_id", station_id);
loadWeather(station_id, selectedDate, setWeather);
} else {
setWeather(0);
}
}, [cities, selectedBranchIndex, selectedDate]);
As you asked in the comments section (and it was mentioned there), you need to pass the variables whose changes you want to "listen" for. From the code you posted, I suppose they are cities, selectedBranchIndex, selectedDate, so you just need to add them there. Trim that based on your code/needs. You can read more about the useEffect() hook, its dependency array, and how it works in the official React docs.
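For intuition about when the hook re-runs: React compares each entry of the dependency array to its value from the previous render with a shallow Object.is check. A simplified sketch of that decision (not React's actual source):

```javascript
// Simplified sketch of how React decides whether an effect re-runs.
// `prev` is the dependency array from the previous render (null on mount).
function depsChanged(prev, next) {
  if (prev === null) return true;               // first render: always run
  if (prev.length !== next.length) return true; // array shape changed
  return next.some(function (dep, i) {
    return !Object.is(dep, prev[i]);            // shallow comparison only
  });
}
```

Because the comparison is shallow, passing a freshly created object or array as a dependency will make the effect run on every render, which is another common source of "runs several times" surprises.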
When I add empty dependency array, it completely cancels the call to the API temperature.
When it was mentioned "If you pass some values, it'll run again when one of those values is changed", that's what you need to do. I also updated the answer with some ideas and the React Hooks Docs.
Thanks @Ilê Caian, you solved my problem. branch should also be added: [branch, cities, selectedBranchIndex, selectedDate]
Error related to scriptDatabaseOptions when generating scwdp using Azure Toolkit
Sitecore Azure Toolkit 2.7.0 and 2.8.0
Trying to convert a module package into an scwdp package using the following command:
$package = "C:\temp\MyModulePackage.zip"
ConvertTo-SCModuleWebDeployPackage -Path $package -Destination $destination -DisableDacPacOptions * -Force
Provider 'dbDacFx' does not support setting 'scriptDatabaseOptions'.
If I remove the parameter DisableDacPacOptions then it complains:
ConvertTo-SCModuleWebDeployPackage : The SQL provider cannot run with dacpac option because of a missing dependency.
Please make sure that DacFx is installed. Learn more at:
https://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DACFX_NEEDED_FOR_SQL_PROVIDER.
What could be the possible cause for the scriptDatabaseOptions error?
I did notice that in the Sitecore documentation (https://doc.sitecore.com/xp/en/developers/sat/28/sitecore-azure-toolkit/web-deploy-packages-for-a-module.html) the * character is wrapped in single quotes, like -DisableDacPacOptions '*'.
I was able to reproduce both errors as you mentioned with my local environment using Sitecore Azure Toolkit 2.7.0.
As part of the prerequisites for using the Sitecore Azure Toolkit, it requires installing the Data-Tier Application Framework (https://doc.sitecore.com/xp/en/developers/sat/27/sitecore-azure-toolkit/getting-started-with-the-sitecore-azure-toolkit.html)
It gets resolved by the following steps:
I installed Microsoft SQL Server Data-Tier Application Framework (DacFX) for SQL server 2012 or later.
Restart the system after installation.
Either of these commands then works perfectly fine.
ConvertTo-SCModuleWebDeployPackage -Path $package -Destination $destination -DisableDacPacOptions * -Force
ConvertTo-SCModuleWebDeployPackage -Path $package -Destination $destination -Force
Hope it helps!
D3 SVG Tree Graph with different symbols for nodes
So, I want to render different symbols for node points on a tree graph. Which isn't too bad, I have the following code that can do that:
nodeEnter.append("path")
    .attr("d", d3.svg.symbol()
        .type(function(d) {
            if (d.type == "cross") { return "cross"; }
            else if (d.type == "rectangle") { return "rect"; }
            // etc...
        }));
The issue I have is, if you use append with a specific shape, for example append("circle"), you can specify the width, height, etc. With d3.svg.symbol, you can only specify the size. How can I dynamically use something like this:
nodeEnter.append("rect")
.attr("width", rectW)
.attr("height", rectH)
.attr("stroke", "black")
.attr("stroke-width", 1)
.style("fill", function (d) {
return d._children ? "lightsteelblue" : "#fff";
});
But also do it with dynamic shapes based on the node type attribute?
I tried something like:
nodeEnter.append(function(d) {
    if (d.type == "rectangle") { return "rect"; }
});
However, this throws an error of:
TypeError: Argument 1 of Node.appendChild is not an object.
Most of the examples I have found while searching this don't bother trying to modify the symbol as long as they are all unique. Again, I want to be able to do something more complex.
Did not get any responses for this, but I was able to work something out. The answer is to use a raw input for the 'd' attribute and skip d3.svg.symbol altogether:
nodeEnter.append("path")
    .attr("d", function(d) {
        if (d.type == "circle") { return "M-40,0a40,40 0 1,0 80,0a40,40 0 1,0 -80,0"; }
    });
The caveat is, you need to draw your shapes manually with path.
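If you need several sizes or shapes, the raw path string itself can be generated rather than hand-written. A small sketch of such a helper (a hypothetical function, not part of D3):

```javascript
// Hypothetical helper: builds the same two-arc SVG circle path used
// above, parameterised by radius, so each size isn't hand-written.
function circlePath(r) {
  return "M-" + r + ",0" +
         "a" + r + "," + r + " 0 1,0 " + (2 * r) + ",0" +
         "a" + r + "," + r + " 0 1,0 " + (-2 * r) + ",0";
}
```

Calling circlePath(40) reproduces exactly the hard-coded string shown above; similar helpers can be written for rectangles or crosses.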
Custom project template with xcode 4.5
I have been searching for topics related to custom project template creation with Xcode 4.5. None have been appropriate and concise. Firstly, I can't locate the default templates folder with Xcode 4.5 and OS X 10.8 (I guess it is hidden by default), and secondly, the steps to create a custom template from scratch are unavailable, or I haven't searched effectively.
Kindly help, with my situation.
With this link you can download a good document that explains how to create custom templates with Xcode.
Your post is a duplicate of this post
To add your own template you need to go :
~/Library/Developer/Xcode/Templates/
and create the folder Templates if doesn't exist.
Thank you for the link, I'm looking into it and it has been very helpful. But I'm not able to locate the project templates folder where the default templates like Single View Application and Master-Detail Application live in Xcode 4.5. Any idea about its location?
Got the project template folder. Right-clicking on Xcode under Applications and selecting Show Package Contents takes me there.
so copy/paste the folder "Templates" here and it's good :)
ios - simulator started opening in horizontal view. How to get it to go back to default portrait view?
I have a strange thing that recently happened. The simulator keeps starting in horizontal view. Is there a setting to make it start in portrait view? I keep digging around for that setting but can't seem to locate it.
Thanks and sorry for such a simple question.
Yes there is. If you go to your project in Xcode, where you have "boxes" for your iPhone orientation, click them in order: portrait up, portrait down, and then landscape up/down. That should do it.
Try rotating it with CMD + left arrow, that might do the trick.
I am able to rotate it once without problems. The issue was to make the emulator start in portrait view as a default.
The order of the 'Supported interface orientation' items of your project-info.plist file is key.
Check if the "Portrait (bottom home button)" is on the very first position in the list. If not, just move it up.
(from this answer)
just checked and my portrait view is the default. But nice idea!
How to modify the links that appear on Asset Publisher portlet?
The requirement is as follows,
When a new web content(corresponding to a particular structure, say A) is published, it should automatically get updated on the Asset Publisher portlet (default functionality of Asset Publisher).
By default the Title of the web content is what appears as a link on the Asset Publisher for different web contents.
Instead of this I want the content of an element (say name) of structure A to appear as a link. Clicking on this link should open an Alloy UI Popup containing the corresponding Web content.
For this to happen I created a new 'display style' jsp using hooks (tweaked the abstracts.jsp).
Wrote this scriptlet in the .jsp:
<%
String personName=null;
JournalArticle journalArticle=null;
String myContent=null;
Document document = null;
Node node=null;
Node node1=null;
Node node2=null;
Node node3=null;
int noOfWords=0;
String pic=null;
String aboutMe=null;
double version=0;
try {
version=JournalArticleLocalServiceUtil.getLatestVersion(assetRenderer.getGroupId(), "14405");
journalArticle = JournalArticleLocalServiceUtil.getArticle(assetRenderer.getGroupId() , "14405",version);
myContent = journalArticle.getContent();
document = SAXReaderUtil.read(new StringReader(myContent));
node = document.selectSingleNode("/root/dynamic-element[@name='personName']/dynamic-content");
if (node.getText().length() > 0) {
personName = node.getText();
}
node1 = document.selectSingleNode("/root/dynamic-element[@name='pic']/dynamic-content");
if (node1.getText().length() > 0) {
pic = node1.getText();
}
node2 = document.selectSingleNode("/root/dynamic-element[@name='noOfWords']/dynamic-content");
if (node2.getText().length() > 0) {
noOfWords = Integer.parseInt(node2.getText());
}
node3 = document.selectSingleNode("/root/dynamic-element[@name='aboutMe']/dynamic-content");
if (node3.getText().length() > 0) {
aboutMe = node3.getText().substring(0, noOfWords) + "....";
}
} catch (PortalException e) {
e.printStackTrace();
} catch (DocumentException e) {
e.printStackTrace();
}
%>
But here the articleId needs to be hard coded.
I want to fetch the articleId here as and when a new web content is published i.e. dynamically.
Which API should be used here?
Any help is appreciated.
Thanks.
This method works for me on the latest version of Liferay - Liferay 6.1.1 CE GA2, but I think it should works without any changes on previous versions too.
Briefly, you could use getClassPK() method of the AssetEntry instance.
In all of the display jsps you get asset entry as request attribute:
AssetEntry assetEntry = (AssetEntry)request.getAttribute("view.jsp-assetEntry");
And then to get latest version of journal article that's associated with asset entry instead of using:
double version =
JournalArticleLocalServiceUtil.getLatestVersion(assetRenderer.getGroupId(),
articleId);
JournalArticle journalArticle =
JournalArticleLocalServiceUtil.getArticle(assetRenderer.getGroupId(),
articleId, version);
you could just write:
JournalArticle journalArticle =
JournalArticleLocalServiceUtil.getLatestArticle(assetEntry.getClassPK());
Hope this helps.
Thanks a lot for the reply, really helpful! It solved one purpose of mine, but the other is still unsolved. On clicking the link I need the corresponding web content/article to open up, so in the URL I would still need groupId, articleId, and version. How do I get these?
Once you've got an instance of JournalArticle you've also got all the getters to obtain what you need: journalArticle.getGroupId(), journalArticle.getArticleId(), journalArticle.getVersion(). Try to use them for your task. In case you don't succeed, please create another, more specific question. It would be difficult to answer in comments.
Thanks again, and yep, I will be posting another question regarding this. Hope you can help me out there too.
How to properly use a single LAMBDA formula instead of a copied-down formula?
I try to use MAKEARRAY/LAMBDA so I don't have to use a copied-down formula, but I struggle.
In the example below I try to retrieve an array of countries in H2:H corresponding to the dishes in G2:G. Good to know: the columns of countries (B2:E2) may extend, and the rows of dishes (B3:E) may also extend.
I first tried to build a formula with BYCOL/LAMBDA to copy it down from H2, and it worked :
LET(
Γ;B$2:E;
Δ;G2;
Σ;XMATCH(TRUE;BYCOL(Γ;LAMBDA(col;REGEXMATCH(CONCATENATE(col);Δ))));
IF(LEN(Δ);INDEX(Γ; 1; Σ);)
)
And tried in I2 to build a unique formula with MAKEARRAY/LAMBDA (not working):
LET(
Γ;B$2:E;
Δ;G2:G;
Σ;XMATCH(TRUE;BYCOL(Γ;LAMBDA(col;REGEXMATCH(CONCATENATE(col);Δ))));
MAKEARRAY(COUNTA(Δ);1;LAMBDA(r;c;IF(LEN(Δ);INDEX(Γ; 1; Σ);)))
)
Two questions then :
How can I make this last formula work?
Is my way of thinking (BYCOL-> REGEXMATCH -> CONCATENATE) the best (optimal) way to work out my problem?
You may try:
=map(G2:G;lambda(Σ;if(Σ="";;torow(index(if(B3:E=Σ;B2:E2;));1))))
Wow, really helpful! TOROW, another keyword to learn. I can see you've set B2:E2. Will this work if I add a column "UK", for example, after column E?
then the ranges within the formula ought to be B3:F & B2:F2
sure. But what I wanted to know is, is there a way to make this more dynamic, something like a B2:2? This way we don't have to edit the formula
pandas.Series inducing KeyError in pyplot.hist
I generate a Dataframe. I pull a Series of floats out of it, and plot it in a histogram. Works fine.
But when I generate a sub-series of that data, using either of the two descriptions:
u83 = results['Wilks'][results['Weight Class'] == 83]
u83 = results[results['Weight Class'] == 83]['Wilks']
pyplot.hist throws a KeyError on that Series.
#this works fine
plt.hist(results['Wilks'], bins=bins)
# type is <class 'pandas.core.series.Series'>
print(type(results['Wilks']))
# type is <type 'numpy.float64'>
print(type(results['Wilks'][0]))
#this histogram fails with a KeyError for both of these selectors:
u83 = results['Wilks'][results['Weight Class'] == 83]
u83 = results[results['Weight Class'] == 83]['Wilks']
print u83
#type is <class 'pandas.core.series.Series'>
print(type(u83))
#plt.hist(u83) fails with a KeyError
plt.hist(u83)
I just started messing with Pandas. Perhaps I'm not grokking the right way to do the SQL equivalent of 'select * from table where WeightClass = 83', etc?
Oh, solved it.... pass the Series with its values attribute.
plt.hist(u83.values)
Sort of weird.
In retrospect, any of my sub-selection methods worked. It was simply that I was passing plt.hist(u83) instead of plt.hist(u83.values)... Sort of lame.
You probably want something like this:
u83 = results.loc[results['Weight Class'] == 83, 'Wilks']
plt.hist(u83)
And you might want to read these docs on indexing...
yeah, afraid this gives the same error. The Series itself looks normal to me. [In] u83
Out[15]:
3 410.370158
5 379.581755
14 361.213206
23 349.908011
29 335.061991
50 308.511000
51 306.523497
53 304.869996
57 302.155009
68 289.432998
69 288.466504
Name: Wilks, dtype: float64
Figured out the answer.
What's the prediction algorithm behind websites like farecast.com (bing travel)?
I think everything is in the title of the question: What's the prediction algorithm behind farecast.com (bing travel) ?
The website http://www.bing.com/travel/, originally named http://farecast.com before it was bought by Bing, is a website that predicts airfares to help you purchase tickets when they are the cheapest.
I know farecast algorithm is based on historical prices. They used a huge database of airfare observations to build the predictions.
But like options in finance (calls/puts), there are formulas to calculate plane ticket prices, so there must be more than just simple data mining behind their algorithm (for example, using historical data to find the different parameters in a generic formula for pricing tickets, like finding the implied volatility from historical option prices).
Can someone tell me what is the theory behind these kind of prediction?
I believe the theory is pretty new since the idea came up in 2003, only 8 years ago.
Hope my question is clear,
Thanks in advance
EDIT
A very quick edit to answer yi_H comment:
I'm looking for recent papers on forecasting algorithm based on hitorical prices and pricing calculation.
Such algorithm may exist in Financial engineering, and farecast might have used quantitative finance algorithm to predict price of options to help them predict airfares.
if by chance someone knows the algorithm farecast uses, it would be great.
Thanks again
you could probably ask how to forecast prices, but asking how a specific site does it is a bit tough.. you need a huge database of previous ticket prices and also have to harvest what they forecast and find correlation between the two data. I doubt anyone has the time or resources to do this
@yi_H, I updated my question based on your comment. thx
This would be under the field of Actuarials http://en.wikipedia.org/wiki/Actuarial_science
@Tony Wu, can you develop a little bit if you believe this can help. Not sure I understand what to look at in this wikipedia article. Thx in advance.
Actually the prices are not based on historic data.
A few years ago I heard a talk from a guy who works for ITA software (who built the system that is used by orbitz and was recently bought by google).
Here are some slides by a founder:
http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/img9.html
The airlines maintain a database with the air fares that is propagated to those airfare optimizers.
However, the airfare system is overly complicated and it is very hard to find optimal prices.
In the talk, that I heard, the speaker said, they were working with a Canadian airline to get rid of the old database stuff and replace it with something more efficient.
Google Analytics Dynamic event value. What's wrong with this code?
Any suggestions as to why this dynamic value will not report in GA?
To start:
I have created a way to split the URL parameters up so that I can insert the value from the URL that I want into the Google Analytics event onclick tracking.
This is an example of my URL:
<http://www.example.org/sweden/se/stod-oss/gava/info/?view=DDM&price=118>
The price in the url is a dynamic amount.
This is how I successfully split the URL up in the head:
<script type="text/javascript">
var params = {};
if (location.search) {
var parts = location.search.substring(1).split('&');
for (var i = 0; i < parts.length; i++) {
var nv = parts[i].split('=');
if (!nv[0]) continue;
params[nv[0]] = nv[1] || true;
}
}
</script>
So that works correctly, and when I insert params.price into the button submit it works fine when placed in the label position, like so:
<button type="submit" onClick="_gaq.push(['SE._trackEvent', 'se_donationpages', 'submitinfo', params.price,, false])" class="btn btn-gp btn-gp-special">Next<i class="icon-arrow-right icon-white"></i></button>
Google Analytics registers this fine in the reports.
But, this is not where I want this. I would like the price value to be inserted in the value section, like so:
<button type="submit" onClick="_gaq.push(['SE._trackEvent', 'se_donationpages', 'submitinfo', 'payment',params.price, false])" class="btn btn-gp btn-gp-special">Nästa <i class="icon-arrow-right icon-white"></i></button>
So, when I do this one above, Google Analytics does not register the event at all.
I thought there might be a problem with the value being a string, so I converted the price parameter to an integer like so in the head:
<script type="text/javascript">
var params = {};
if (location.search) {
var parts = location.search.substring(1).split('&');
for (var i = 0; i < parts.length; i++) {
var nv = parts[i].split('=');
if (!nv[0]) continue;
params[nv[0]] = nv[1] || true;
}
}
var price_param = params.price;
var view_param = params.view;
var price_param_int = parseInt(price_param)
</script>
and inserted the variable into the button code like so:
<button type="submit" onClick="_gaq.push(['SE._trackEvent', 'se_donationpages', 'submitinfo', 'payment',price_param_int, false])" class="btn btn-gp btn-gp-special">Next<i class="icon-arrow-right icon-white"></i></button>
...but this doesn't report in GA :(
Any suggestions as to why this dynamic value will not report in GA?
It's boggling my mind!
You are right that it must be an integer variable type. I don't know why GA doesn't just convert it automatically..
perhaps you simply typoed while posting, but in your code, you assign the integer-converted value to price_param_int (notice the lack of "s" on "param") but in your GA code you reference price_params_int
edit
Okay you mentioned in comment that it was just a typo when posting.. well I tested your code and it works fine. So here's another dumb question: are you sure you are going to your page with the price parameter actually in the URL? e.g.
http://www.yoursite.com/page.html?price=123
If you are and it's still not working then.. you must have something else going on that's affecting your code, because when I just have on a test page GA code and that button and the query param grabbing code you posted, it works fine.
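Since the thread establishes that the event value must be a number, one defensive option is to coerce the parameter once, up front. A sketch (the helper name is made up, not part of the GA API):

```javascript
// Hypothetical helper: coerce the query parameter to an integer and
// fall back to 0, so the value passed to _trackEvent is never a
// string or NaN (either of which keeps the event from registering).
function toEventValue(raw) {
  var n = parseInt(raw, 10);
  return isNaN(n) ? 0 : n;
}
```

Then the onclick becomes _gaq.push(['SE._trackEvent', 'se_donationpages', 'submitinfo', 'payment', toEventValue(params.price), false]), which also survives visits where the price parameter is missing from the URL.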
@NicholasKnight sweet! if the question is solved then can you click the checkbox next to the answer? :D
Crayon, no sorry I meant the typo was just a typo in this particular post but the problem still remains as is.
@NicholasKnight Aaah okay.. well see my edited post.
Configure maven-surefire-report-plugin to read from multiple source directories
I have configured surefire to use different output directories based on CPU count:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<forkCount>3C</forkCount>
<reportsDirectory>target/surefire-reports-${surefire.forkNumber}</reportsDirectory>
</configuration>
</plugin>
I can see directories target/surefire-reports-[1..36] (could be more or less depending on machine build runs). How can I make the report plugin pick up the data from there?
I tried:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-report-plugin</artifactId>
<version>3.0.0-M7</version>
<configuration>
<reportsDirectories>
<reportsDirectory>target/surefire-reports*</reportsDirectory>
</reportsDirectories>
</configuration>
</plugin>
But nothing was picked up.
Looking at this pretty quickly, it doesn't seem like the wildcard works. Can you look into the maven exec plugin or https://stackoverflow.com/questions/17963222/assign-output-of-exec-maven-plugin-to-variable to define a variable for the folder that exists on the machine you run on, by looking at the directory contents?
Or your other option is to list all target/surefire_reports_1, etc till 36 under reportDirectories
Thx for the quick reply. Tried the 36 dirs too. Didn’t seem to work
Can you explain the reason why you need different output directories for different CPU counts?
@stwissel what do you mean with 36 dirs, that it is not working? Are you getting a FileNotFound error, or something else? Please show an update of what you tried.
iOS - singleton is not working as supposed in delegate
Currently I'm working on an app that uses four protocols for communication between classes. Three are working fine, but one is still not working. I've set it up same as the others but the delegate is always losing its ID. I'm quite new to Objective-C so I can't get to the bottom of it. Here is what I did:
I have a MainViewController.h with the delegate
@property (weak, nonatomic) id <PlayerProtocol> player;
and a MainViewController.m
- (void)viewDidLoad {
[super viewDidLoad];
[[Interface sharedInstance] Init];
NSLog(@"Player ID: %@", _player);
NSLog(@"viewDidLoad: %@", self);
}
- (void)sedPlayer:(id) pointer{ //sed is no typo!
_player = pointer;
NSLog(@"sedPlayer ID: %@", _player);
NSLog(@"sedPlayer: %@", self);
}
+ (instancetype)sharedInstance {
static dispatch_once_t once;
static id sharedInstance;
dispatch_once(&once, ^{
sharedInstance = [[self alloc] init];
});
return sharedInstance;
}
In the Interface.m (NSObject)
- (void)Init {
[[MainViewController sharedInstance] sedPlayer:self];
}
And of course a protocol.h but this is not of interest as the delegate does the trouble! When I run the code I get the following output on the console:
sedPlayer ID: <Interface: 0x1700ab2e0>
sedPlayer: <MainViewController: 0x100406e30>
Player ID: (null)
viewDidLoad: <MainViewController: 0x100409550>
So it is obvious that the singleton is not working as the instance of the MainViewcontroller is different. For the singleton I'm using the dispatch_once standard method as I do with the other protocols that work fine. ARC is turned on. Does anyone has a clue what is wrong here and why the singleton is not working?
First, you should use assign in place of strong when you are using a delegate: @property (strong, nonatomic) id <PlayerProtocol> player. Second, please post your whole code in your question, so we can find where the issue is.
^ No, use weak instead of assign. assign will require you to nil out references to that object when it gets deallocated or else you'll get a crash when you send a message to the (now deallocated) object.
@Hecot are you sure [MainViewController sharedInstance] is the same instance of MainViewController as viewDidLoad is called from?
@aircraft: I've used weak before, but also assign makes no difference.
@AdamPro13 I've added two NSLog outputs and found that the instance is different, and it seems the singleton is not working as expected. I added some more code above, maybe you can help?
using singletons is a hint that your application architecture is flawed, using view controller singletons sounds outright wrong to me.
Here's how I think you ended up with two instances of the MainViewController. The first one, I assume, is created when navigating to the screen associated with MainViewController. The second one is created when you call [MainViewController sharedInstance] in Interface.m.
As the ViewController view is lazy loaded ("View controllers load their views lazily. Accessing the view property for the first time loads or creates the view controller’s views." from the Apple docs under ViewManagement), you see the viewDidLoad: <MainViewController: 0x100409550> log only once, when the first MainViewController gets navigated to and loads up the view.
Here's my suggestion:
Since you do the Interface initializing in the - (void)viewDidLoad, you might as well set self.player = [Interface sharedInstance].
The code would look something like this:
- (void)viewDidLoad {
[super viewDidLoad];
self.player = [Interface sharedInstance];
NSLog(@"Player ID: %@", _player);
NSLog(@"viewDidLoad: %@", self);
}
You should also get rid of - (void)sedPlayer:(id) pointer and + (instancetype)sharedInstance in your MainViewController. It is never a good idea to have a ViewController singleton, since you might end up messing up the navigation or having multiple states of it.
For a more in-depth article on avoiding singletons, you can check objc.io Avoiding Singleton Abuse
how to change value of html element by classname using javascript
Here is the code I am using to change the value of an HTML element:
<a class="classname" href="Vtech.com"> This text to be chnage</a>
<script type="text/javascript">
document.getElementsByClassName("classname")[0].innerHTML = "aaaaaaqwerty";
</script>
How can I change this text instantly on page load?
It's working, see here.
Seems you need to add DOMContentLoaded or put your script before </body>
Native JavaScript solution
document.addEventListener("DOMContentLoaded", function(event) {
document.getElementsByClassName("classname")[0].innerHTML = "qwerty";
});
Add your script before </body>
Version with jQuery
$(function(){
$(".classname:first").text("qwerty");
});
You can use css selector, but it can be not safe, because this method return first occurance:
document.querySelector(".classname");
By the way, almost all developers use some js framework: jQuery, prototype, extJs, etc
$(document).ready(function(){
$(".classname").text("aaaaaaqwerty");
});
Using jquery (I refered to jquery, since you have tagged it in your question), you could achieve this like below:
$(function(){
$("a .classname")[0].text("aaaaaaqwerty");
});
No, $(".classname")[0] will retrieve the DOM node from the jQuery collection, which has no text() method. You could use $(".classname").eq(0).text('...') though.
@DavidThomas you are correct! Thank you very much. I think that my edit also does the same thing that you referred to. Correct? I mean the selector is correct now. Thank you in advance for your response.
You could use this
document.addEventListener("DOMContentLoaded", function(event) {
document.getElementsByClassName("classname")[0].innerHTML = "some texts";
});
Using jQuery
$(document).ready(function(){
$(".classname").eq(0).val("aaaaaaqwerty");
});
No, $(".classname")[0] will retrieve the DOM node from the jQuery collection, which has no val() method. You could use $(".classname").eq(0).val('...') though. Or you could, if the <a> element had a val() method.
@DavidThomas. Thanks for your valuable comment. I've edited my answer.
Merge or Concat Dataframes using Pandas
Simplified. I have 2 dataframes that I would like to merge/concatenate/join together into one using the following scenario as framework.
df1 looks like
C1 C2 C3
0<PHONE_NUMBER>.0<PHONE_NUMBER>.0 YQHDK
1<PHONE_NUMBER>.0<PHONE_NUMBER>.0 YQHJW
2 846369000.0 846369000.0 YQHMF
3 508287000.0 508287000.0 YQHRV
4 878002000.0 878002000.0 YQHVT
5 NaN<PHONE_NUMBER>.0 YQHRM
While df2 looks like
C3 C1
0 YQHRM<PHONE_NUMBER>.0
What I desire is to fill in the NaN value as follows:
C1 C2 C3
0<PHONE_NUMBER>.0<PHONE_NUMBER>.0 YQHDK
1<PHONE_NUMBER>.0<PHONE_NUMBER>.0 YQHJW
2 846369000.0 846369000.0 YQHMF
3 508287000.0 508287000.0 YQHRV
4 878002000.0 878002000.0 YQHVT
5<PHONE_NUMBER>.0<PHONE_NUMBER>.0 YQHRM
I've tried using df1.merge(df2, how='left', on='C3'), but this creates two C1 columns, C1_x and C1_y.
I've also tried using pd.concat([df1, df2]), but this results in two rows for 'YQHRM'.
What am I missing here?
Take a look at combine_first():
df1 = df1.set_index('C3')
df2 = df2.set_index('C3')
df2.combine_first(df1)
C1 C2
C3
YQHDK 1.659712e+09<PHONE_NUMBER>
YQHJW 5.797862e+09<PHONE_NUMBER>
YQHMF 8.463690e+08 846369000
YQHRM 2.362463e+09<PHONE_NUMBER>
YQHRV 5.082870e+08 508287000
YQHVT 8.780020e+08 878002000
df2.reset_index() will revert the index back to column.
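To make the approach concrete, here is a small runnable sketch of combine_first on data shaped like the question's (the numeric values are made up, since the originals were redacted):

```python
import pandas as pd

# Frames shaped like the question's, with made-up numbers in place of the
# redacted values. df1 is missing C1 for the YQHRM row; df2 supplies it.
df1 = pd.DataFrame({
    "C1": [846369000.0, 508287000.0, None],
    "C2": [846369000.0, 508287000.0, 111.0],
    "C3": ["YQHMF", "YQHRV", "YQHRM"],
})
df2 = pd.DataFrame({"C3": ["YQHRM"], "C1": [111.0]})

# Index both frames on the shared key so rows align by C3; df2's values win
# where present, and everything missing falls back to df1.
filled = df2.set_index("C3").combine_first(df1.set_index("C3")).reset_index()
print(filled)
```

Unlike merge, this never produces C1_x/C1_y suffix columns, and unlike concat it keeps a single row per key.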
window.gapi.auth2.getAuthInstance().signIn doesn't send callback in Safari
We embedded Google OAuth button inside iframe. When user loads a page with this iframe, clicks Google button from that iframe, window.gapi.auth2.getAuthInstance().signIn fires and new window appears. User enters email, password and submits form. Window closes but there is no callback to this function.
It only reproduces when main window domain is different to iframe domain!
You can try here - https://sparklejobs.com/localhost-universal/
Main window domain - sparklejobs.com
Iframe window with Google button - staging-web.good.co
My exact issue. I'm still trying to implement a workaround so I can't "Answer" this question, but I can give you some hints after reading lots of stuff about this issue. First of all, disabling (oddly enough) "Allow Cross-site tracking" seems to be an option. Also take a look at these: https://github.com/google/google-api-javascript-client/issues/503#issuecomment-687741866 and https://github.com/google/google-api-javascript-client/issues/260#issuecomment-675055603
How to block access to all VSTS accounts except our work VSTS account
My company want to get VSTS. But they do not want users to be able to access their personal VSTS accounts at work. They are concerned that users will upload source code to their personal VSTS accounts and download it at home. Or worse, they are worried that users can upload a virus to their personal VSTS account, come into work and download it. I'm sure everyone will understand why they want to do this.
Without getting into ethical reasons about how companies need to trust employees etc......They want to stop this or reduce this as much as possible.
Is there any guidance on how to achieve this?
One solution might be to block *.visualstudio.com but whitelist only our company VSTS account. This is messy because there will be a bunch of other visualstudio.com URLs that we will need to access, such as {accountname}.vsrm.visualstudio.com.
There is no way for us to know all the urls that we will need to allow access if we block *.visualstudio.com
any advice is appreciated!
Wrong place for this. This is a programming q&a site, and this question has nothing to do with programming. And definitely not the place to post about how you and your company distrust the programmers who work for you, or make assumptions about how everybody will understand why you're doing it. Also please do not abuse tags (such as ms-access).
it's a legitimate question. Just asking for advice. No need to get so defensive.
@DavidMakogon Stack Overflow is for questions about programming and tools commonly used by programmers, which includes VSTS. That said, this really boils down to an issue of network configuration and belongs elsewhere for that reason.
@DanielMann - This question has nothing to do with how to work with a programming tool. It's about user access blocking for a specific online service (well outside the scope of the tool) - which is precisely why it is off-topic here. But... the topic of not trusting employees? Completely superfluous and shouldn't be included, whether posted here, SuperUser, or elsewhere.
@DavidMakogon If you read the question clearly, I specifically say "Without getting into ethical reasons". i.e. not talk about that subject.
I don't think you can achieve it. There are many other ways to store/download source code besides VSTS, such as GitHub. Users can also upload/download the source code through email. So you can't prevent them from doing it unless you block internet access entirely.
The simple way is that:
Build an intranet network
Clone VSTS repository to a shared folder
Others work with that repository (commit changes to that repository)
Push changes to VSTS yourself, or build an app to track the repository and push commits automatically.
Longest path in a graph with special property
I have a special graph with only two types of edges, say type 0 and type 1. Now I have to find a longest path in the graph such that it starts at a vertex and follows as many type-0 edges as it can, and then again starts at a vertex and follows as many type-1 edges as it can. The length of the longest path will be the number of distinct vertices in both paths. (If some vertex coincides, count it once.)
Note: The graph is undirected, contains many cycles, and has up to 10^6 vertices. So I would need an O(n) algorithm.
P.S : Sorry forgot to give the more important information, for every vertex there are 0 or 2 edges of each type always.
Finding longest paths is famously NP-hard, so no, there is no known polynomial, let alone linear time algorithm. I don't see how having two types of edges helps any. (The size of your single graph of interest is irrelevant for that. However, it's not too large; maybe simple algorithms are fast enough?) By the way, do you want your graphs to be simple?
As Raphael pointed out, this problem is also NP-complete. There is an easy reduction from the longest path problem to this problem. Just take the graph from the longest path problem and label all of the edges type 0. There won't be any type-1 edges. This labeled graph will have a longest path of length $k$ iff the original graph has a longest path of length $k$.
This proves the NP-hardness of the problem.
It is easy to see that the problem is in NP; the NDTM for the problem will be similar to the one for the longest path problem. Together these make the problem NP-complete.
But if you are happy with any kind of approximation, then you can refer to Approximation to longest path and modify the algorithm to suit your purpose. However, be prepared that the modification may not be trivial.
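The reduction above is mechanical enough to write down as code; here is a tiny sketch (the edge-list representation is my own choice, not from the post):

```python
# Reduce an instance of plain Longest Path to the two-edge-type problem by
# labelling every edge type 0, so the graph contains no type-1 edges at all.
def label_all_type0(edges):
    """Map (u, v) pairs to (u, v, label) triples with label 0 everywhere."""
    return [(u, v, 0) for (u, v) in edges]

# With no type-1 edges, the type-1 phase of any two-phase path is empty, so
# the labelled graph has a path of length k iff the original graph does.
print(label_all_type0([(1, 2), (2, 3), (3, 1)]))  # [(1, 2, 0), (2, 3, 0), (3, 1, 0)]
```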
I have modified the question, see P.S below.
Even the longest path problem for graphs whose nodes all have even degree is NP-complete. Just replace every edge with two paths of length 2.
And do you mean exactly 0 or 2 edges, or at least 0 or 2 edges?
I mean exactly 0 or 2 edges.
I have also slightly modified the problem statement.
Delete newly added files when shelving them
When shelving a set of changes, I uncheck "Preserve pending changes locally". The changed files revert to their previous version, but the newly added files remain in my project directory.
To keep working on the application without these new changes, I then have to manually delete all the new files that I have added.
Is this normal behavior or am I doing something wrong?
If this is normal, is there a way of getting rid of newly added files when shelving them?
1) This is normal. Shelve /move is implemented as regular Shelve + Undo. While it's always safe to clean uncontrolled files from your local disk after they're stored on the server, the call to Undo() doesn't know this, so it leaves pending adds behind like it always does. There have been feature suggestions since v1.0 to track this behavior more granularly so that files can be cleaned up when appropriate, and/or to use a platform specific "safe delete" (read: move to Recycle Bin if you're on Windows). But AFAIK none has been implemented to date.
2) The latest TFS Power Tools added a feature 'tfpt unshelve /undo' that handles this more intelligently. It's also trivial with a Powershell pipeline: tfshelve <parameters> | tfundo | tfprop | del
Changes to your project files shouldn't affect this behavior one way or another. Though if you do add the file to a project and fail to undo that edit, you'll see an orphan file reference in Solution Explorer.
I just tried this using VS-2008 with power tools installed on TFS 2008 and it deleted the new file from my workspace.
Are you shelving your project at the same time so the file add is removed from the project as well?
Currently the newly added files do get removed when I shelve them, whether or not I shelve the project file, and I am unable to reproduce my previous problem. It must be some weird combination of steps. But still, thanks!
How to pass javascript prompt value to a view
Is there a way to pass a JS popup value directly to a Python/Django view, or is it necessary to capture this value with a JavaScript function and then make an AJAX call?
For example:
<form>
<input type="text" name="name"/>
<input type="submit" id="submit"/>
</form>
<script type="text/javascript">
function myFunction() {
$('#submit').click(function() {
var name=prompt("Please enter your name","");
});
}
</script>
If it can be done directly, how would this be done?
You would have to do something along the lines of:
$('#submit').click(function() {
var name = prompt("Please enter your name", "");
$.post('/url/to/django/handler', {'name': name});
});
to get the value the user filled out in the prompt back to your django app.
If you want to modify the current view, you can use Javascript to modify the DOM, as in:
$('#myDiv').html(name)
Which will replace the contents of the div with id "myDiv" with the value you captured in "name".
If you're talking about a different view entirely, then you're always going to have to do a server call (via AJAX or a form submission) to get the value from the client to the server!
Simple yet so hard: Jquery and scroll
Okay, so I am trying to do a couple of things:
Create a list with an id and underlying paragraphs
Assign text that is found in the DOM to each of the list paragraphs
Assign an id to some of the h2 paragraphs in the DOM
use that id to automatically scroll down the page (yes, I know this can be done in many different ways)
But, I ran into one problem which I have been stuck on for 3 hours now.
Whenever I do step nr. 3 my page automatically scrolls down it self. And yes I know it is possible to make the page scroll back up, but that is unnecessary code that I would like to avoid.
GIF: http://im.ezgif.com/tmp/ezgif-1640909552.gif
Here is my code
JS
window.onload = function () {
$("<ul id='list'> <li></li> <li></li> <li></li> <li></li> </ul>").insertAfter(".content h1");
for(i = 1; i <= 4; i++) {
$("#list li:nth-child("+i+")").html("<a>" + $(".content h2:nth-of-type(" +i+ ")").text() + "</a>");
$(".content h2:nth-of-type("+i+")").attr("id", "h2nr"+i); //this little sh#t makes my page scroll down >:(
$("#list li:nth-of-type("+i+") a").attr("href", "#h2nr"+i);
}
}; //end of ONLOAD
HTML
This page is only made for training and is not intended for any
commercial use and may therefor have a couple of faults;
<!DOCTYPE html>
<html>
<head>
<title>Specials | The Landon Hotel</title>
<link rel="stylesheet" href="style/style.css">
<script src="http://code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="challenge.js"></script>
</head>
<body>
<div class="container">
<div class="header"><img src="img/header.png"></div>
<div id="hero">
<div class="current"><img src="img/HomePageImages/Paris.jpg"></div>
</div>
<nav class="main-navigation" role="navigation">
<div>
<ul id="menu-main-menu" class="nav-menu">
<li><a href="index.html">Home</a></li>
<li><a href="restaurant-to-room-service.html">Room Service</a></li>
<li><a href="specials.html">Specials</a></li>
<li><a href="reservations.html">Reservations</a></li>
<li><a href="meetings-events.html">Meetings & Events</a></li>
<li><a href="news.html">News</a></li>
<li><a href="contact.html">Contact</a></li>
</ul></div>
</nav>
<div class="content">
<h1>Specials</h1>
<h2>San Francisco, Bernal Heights</h2>
<h3>Military Family Deal:</h3>
<p>Active and retired military, and their families, save 20% when booking a three or more day stay at the Landon Hotel in lovely Bernal Heights, San Francisco. Book by August 1st, 2015.</p>
<h3>Bring Fido:</h3>
<p>Bring your travel-loving canine to our pet-friendly Bernal Heights Landon Hotel, and see why San Francisco is a dog's paradise, with endless activities and locations that cater to canines. You'll save 10% just for bringing Fido, and there are no hidden pet fees. Book by April 30th, 2014.</p>
<h3>Meeting Mondays:</h3>
<p>The new Bernal Heights conference room is just the place for your corporate meetings, and if you book for three or more consecutive days, that include a Monday, you'll receive Monday free. Book by September 15th, 2014.</p>
<hr/>
<h2>London, West End</h2>
<h3>Theatre Package:</h3>
<p>Theatre lovers can enjoy two free tickets to a West End theater production of their choice, when booking a weekend stay at the West End Landon. Tickets are mezzanine level and are limited to available productions at the time of booking. Book by August 1st, 2015.</p>
<h3>Shopper's Paradise:</h3>
<p>Oxford, Regent, and Bond Streets have some of the best shopping in the world, and all are just a tube stop away when you stay at the West End Landon. And, if you book a minimum of five days, you'll get a bonus gift certificate worth $125 to use in the boutique of your choice, based on participating vendors at time of booking. Book by November 2015.</p>
<hr/>
<h2>Hong Kong, Kwun Tong</h2>
<h3>Spa Holiday:</h3>
<p>The Hong Kong is home to a half-dozen world-renowned spas, some tucked away in skyscrapers, others in beachside retreats. You can have your pick of a one-day Spa Holiday if you book a five-consecutive night stay during the months of February through April. Book by November 1, 2014.</p>
<h3>Leisure and Luxury:</h3>
<p>Stay at the Landon Hotel in the Kwun Tong District and you'll have both leisure and luxury at your fingertips. Play a complimentary round of golf and enjoy a complimentary seaweed body wrap and massage, if you book a weekend stay by August 1st , 2015.</p>
<hr/>
<h2>Paris, Latin Quarter</h2>
<h3>Sweet Deal:</h3>
<p>Paris is renowned for its delectable pastries and other dessert creations by the most highly skilled chefs in the world. If you book a weekend stay by February 28th, 2015 you'll receive a complimentary dessert tray every night of your stay. Be prepared for a sweet feast!</p>
<h3>Spiritual Walk:</h3>
<p>The Latin Quarter is the place to tour some of the world's oldest churches and monasteries. You can enjoy a complimentary church walking tour for two, guided by an entertaining and enlightening guide, if you book a weekend stay by March 1, 2015.</p>
<h3>Holiday Package:</h3>
<p>Spend the winter holidays in Paris and enjoy festivity and fine food under a star-filled winter sky. You'll receive 15% off your hotel accommodations, if you reserve for 7 consecutive nights in December 2014 or January 2015. Book by October 30th, 2014.</p>
<hr/>
</div>
</body>
</html>
This sounds to me like a css issue as well. Can you throw all of this in a jsfiddle?
@J4G I have never used it before but I have seen some examples of it so it may not be perfect. NOTE: some of the pics are missing
https://jsfiddle.net/StanlyHV/mvfbuo1z/2/
I modified the Javascript to do what you wanted. I used the each function so it automatically selects all the H2 elements and builds the list afterwards to do the scrolling. You can also use jquery's scroll plugin to make it a smooth scroll.
window.onload = function() {
var items = $('.content h2');
var list = $('<ul id="list"></ul>');
items.each(function(i, el) {
var e = $(el);
var a = $('<a name="h2nr' + i + '"></a>'),
listA = $('<a href="#h2nr' + i + '">' + e.html() + '</a>');
e.append(a);
var li = $('<li></li>');
li.append(listA);
list.append(li);
});
list.insertBefore('.content');
var loc = location.href;
if (loc.indexOf('#') > -1) {
loc = loc.substring(0, loc.indexOf('#'));
}
loc+="#";//Scroll to top...
location.href = loc;
}; //end of ONLOAD
<!DOCTYPE html>
<html>
<head>
<title>Specials | The Landon Hotel</title>
<link rel="stylesheet" href="style/style.css">
<script src="http://code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="challenge.js"></script>
</head>
<body>
<div class="container">
<div class="header">
<img src="img/header.png">
</div>
<div id="hero">
<div class="current">
<img src="img/HomePageImages/Paris.jpg">
</div>
</div>
<nav class="main-navigation" role="navigation">
<div>
<ul id="menu-main-menu" class="nav-menu">
<li><a href="index.html">Home</a>
</li>
<li><a href="restaurant-to-room-service.html">Room Service</a>
</li>
<li><a href="specials.html">Specials</a>
</li>
<li><a href="reservations.html">Reservations</a>
</li>
<li><a href="meetings-events.html">Meetings & Events</a>
</li>
<li><a href="news.html">News</a>
</li>
<li><a href="contact.html">Contact</a>
</li>
</ul>
</div>
</nav>
<div class="content">
<h1>Specials</h1>
<h2>San Francisco, Bernal Heights</h2>
<h3>Military Family Deal:</h3>
<p>Active and retired military, and their families, save 20% when booking a three or more day stay at the Landon Hotel in lovely Bernal Heights, San Francisco. Book by August 1st, 2015.</p>
<h3>Bring Fido:</h3>
<p>Bring your travel-loving canine to our pet-friendly Bernal Heights Landon Hotel, and see why San Francisco is a dog's paradise, with endless activities and locations that cater to canines. You'll save 10% just for bringing Fido, and there are no
hidden pet fees. Book by April 30th, 2014.</p>
<h3>Meeting Mondays:</h3>
<p>The new Bernal Heights conference room is just the place for your corporate meetings, and if you book for three or more consecutive days, that include a Monday, you'll receive Monday free. Book by September 15th, 2014.</p>
<hr/>
<h2>London, West End</h2>
<h3>Theatre Package:</h3>
<p>Theatre lovers can enjoy two free tickets to a West End theater production of their choice, when booking a weekend stay at the West End Landon. Tickets are mezzanine level and are limited to available productions at the time of booking. Book by
August 1st, 2015.</p>
<h3>Shopper's Paradise:</h3>
<p>Oxford, Regent, and Bond Streets have some of the best shopping in the world, and all are just a tube stop away when you stay at the West End Landon. And, if you book a minimum of five days, you'll get a bonus gift certificate worth $125 to use
in the boutique of your choice, based on participating vendors at time of booking. Book by November 2015.</p>
<hr/>
<h2>Hong Kong, Kwun Tong</h2>
<h3>Spa Holiday:</h3>
<p>The Hong Kong is home to a half-dozen world-renowned spas, some tucked away in skyscrapers, others in beachside retreats. You can have your pick of a one-day Spa Holiday if you book a five-consecutive night stay during the months of February through
April. Book by November 1, 2014.</p>
<h3>Leisure and Luxury:</h3>
<p>Stay at the Landon Hotel in the Kwun Tong District and you'll have both leisure and luxury at your fingertips. Play a complimentary round of golf and enjoy a complimentary seaweed body wrap and massage, if you book a weekend stay by August 1st ,
2015.
</p>
<hr/>
<h2>Paris, Latin Quarter</h2>
<h3>Sweet Deal:</h3>
<p>Paris is renowned for its delectable pastries and other dessert creations by the most highly skilled chefs in the world. If you book a weekend stay by February 28th, 2015 you'll receive a complimentary dessert tray every night of your stay. Be prepared
for a sweet feast!</p>
<h3>Spiritual Walk:</h3>
<p>The Latin Quarter is the place to tour some of the world's oldest churches and monasteries. You can enjoy a complimentary church walking tour for two, guided by an entertaining and enlightening guide, if you book a weekend stay by March 1, 2015.</p>
<h3>Holiday Package:</h3>
<p>Spend the winter holidays in Paris and enjoy festivity and fine food under a star-filled winter sky. You'll receive 15% off your hotel accommodations, if you reserve for 7 consecutive nights in December 2014 or January 2015. Book by October 30th,
2014.
</p>
<hr/>
</div>
</body>
</html>
Thanks for the input, but when I tried your code it automatically scrolled itself down to the bottom of the page, and the list was also placed at the bottom. (Did not try the HTML as I assume it is the same as the one I already have.)
You confused me then :) You want the list at the top? That's easy. What do you mean that it automatically scrolled to the bottom of the page?
Yep! Sorry if I confused you. My goal is to keep the HTML and possibly the JS as is, except that it shouldn't scroll down to h2nr3 by itself whenever the page loads.
I think you mean the way it autoscrolls if you already had #h2nr3 in the url? That is a browser feature, but you may be able to remove it after the fact...
I think the changes I made should fix your issue.
You're right! It's probably because of the URL! Thanks for the help! Not that it is a problem any more, but how would you suggest removing #h2nr3 from the URL after the link is clicked?
I added the changes to the code in the snippet. I just remove the #h2nr3 and replace it with #, so it just goes to the top.
That seems to do the trick! again thank you for your help! it is greatly appreciated!
How to create a log of each function which was called while my server was running? for debugging purposes
How can I create a log of each function that was called while my server was running, for debugging purposes?
I am trying to understand and debug the source code of a node library (specifically y-websocket).
I am running the library locally. After closing my server I want to get a log of each function that was called, in the order they were called.
Thank you!
I have tried using node --inspect, but the node debugger requires me to set breakpoints.
Update statement not working within Cursor & While block
I am trying to execute an Update statement on a table. This statement is placed within Cursor and While block. I have checked in debugger and the values are coming into the statements and variable, still the update is not putting values into the table fields. Please advise what am I doing wrong here.
ALTER PROCEDURE SP_PO1 @P1 int
AS
BEGIN
SET NOCOUNT ON;
DECLARE @DOC AS INT;
DECLARE @CASH AS FLOAT;
DECLARE @TENTYPE AS VARCHAR(100);
DECLARE @UDF AS VARCHAR(100);
DECLARE @COUNTER AS INT;
DECLARE @SQL AS VARCHAR(500);
SELECT @DOC=DOCTYPE FROM InvNum WHERE AutoIndex = @P1;
IF @DOC = 6
BEGIN
SET @COUNTER = 1;
DECLARE Cur_Tender CURSOR FOR
SELECT Tender.TenderNo FROM Tender;
OPEN CUR_TENDER;
FETCH NEXT FROM CUR_TENDER INTO @TENTYPE;
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @CASH = ISNULL(_btblPOSTenderTx.fTxAmount,0) FROM _btblPOSTenderTx INNER JOIN Tender ON _btblPOSTenderTx.iTenderID = Tender.IdTender INNER JOIN _btblPOSXZTable ON _btblPOSTenderTx.iPOSXZTableID = _btblPOSXZTable.IDPOSXZTable WHERE (_btblPOSXZTable.iTillTxType = 7) and (_btblPOSXZTable.IDPOSXZTable = (select Max(IDPOSXZTable) from [dbo].[_btblPOSXZTable])) AND (TenderNo = @TENTYPE);
SET @UDF = 'ufIDPOSInvTENDER' + CONVERT(VARCHAR(2),@COUNTER);
UPDATE InvNum SET @UDF=@CASH WHERE AutoIndex = @P1;
SET @COUNTER = @COUNTER + 1;
FETCH NEXT FROM CUR_TENDER INTO @TENTYPE;
END
END
CLOSE CUR_TENDER
DEALLOCATE CUR_TENDER
END
GO
You are updating a variable here:
UPDATE InvNum SET @UDF=@CASH WHERE AutoIndex = @P1;
To update a table column, you need:
UPDATE InvNum SET <column>=@CASH WHERE AutoIndex = @P1;
if you want a dynamic column name - use dynamic sql
EXEC('UPDATE InvNum SET ' + @UDF + ' = ' + CAST(@CASH AS VARCHAR(50)) + ' WHERE AutoIndex = ' + CAST(@P1 AS VARCHAR(10)));
The column name is dynamically generated. There are 5 Columns as follows:
ufIDPOSInvTENDER1, ufIDPOSInvTENDER2, ufIDPOSInvTENDER3, ufIDPOSInvTENDER4, ufIDPOSInvTENDER5
Hi,
I got what you meant. This statement is actually assigning a value to another variable. In this case, can you explain a solution?
generate sql string and execute
JSON data not well-formed error
JSON data is not binding to the table.
controller code
$http(
{
method: 'post',
url: 'Service.asmx/WPGetDS',
data: $.param({ as_sql: "select * from testtab", strConKey: "Etech" }),
dataType: 'json',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
}).success(function (data, status, headers, config) {
var myjson = JSON.parse(data);
$scope.dtDioSearch = myjson;
console.log(myjson);
}).error(function (data, status, headers, config) {
console.log(data);
});
Web Service Code
Public Sub WPGetDS(ByVal as_sql As String, ByVal strConKey As String)
Dim dt As New DataTable()
Dim conGlobal As New SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings(strConKey).ConnectionString)
Dim a(0) As String
Dim dr As DataRow
Dim dtDataTable As DataTable
If conGlobal.State = ConnectionState.Closed Then conGlobal.Open()
Dim SDA = New SqlDataAdapter(as_sql, conGlobal)
Dim DS As DataSet = New DataSet()
Dim data As New WPData
Dim js As New JavaScriptSerializer()
Dim lCmdSql, lCmdErr As New SqlCommand
Try
dtDataTable = New DataTable("Table")
Dim dcolSrNo As DataColumn
dcolSrNo = New DataColumn("SlNo")
dcolSrNo.AutoIncrement = True
dcolSrNo.AutoIncrementSeed = 1
dcolSrNo.AutoIncrementStep = 1
dtDataTable.Columns.Add(dcolSrNo)
DS.Tables.Add(dtDataTable)
SDA.Fill(DS, ("Table"))
SDA.Dispose()
data.Message = ConvertDataTableTojSonString(DS.Tables(0))
Context.Response.Write(js.Serialize(data.Message))
Catch ex As Exception
dt.Columns.Clear()
dt.Columns.Add("Error")
dr = dt.NewRow
dr.Item("Error") = ex.Message.Trim
dt.Rows.Add(dr)
DS.Tables.Add(dt)
conGlobal.Close()
data.Message = ConvertDataTableTojSonString(DS.Tables(0))
Context.Response.Write(js.Serialize(data.Message))
Finally
If conGlobal.State = ConnectionState.Open Then conGlobal.Close()
End Try
End Sub
HTML Code
<div class="table-responisive">
<table class="table">
<thead>
<tr>
<th>#</th>
<th>Test</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="erdata in dtDioSearch track by $index">
<td>{{erdata.SlNo}}</td>
<td>{{erdata.Test}}</td>
</tr>
</tbody>
</table>
</div>
Console Json data
[{"SlNo":1,"test":"test"},{"SlNo":2,"test":"test"},{"SlNo":3,"test":"test"},{"SlNo":4,"test":"test"},{"SlNo":5,"test":"test"},{"SlNo":6,"test":"test"},{"SlNo":7,"test":"test"},{"SlNo":8,"test":"test"},{"SlNo":9,"test":"test"},{"SlNo":10,"test":"test"},{"SlNo":11,"test":"test"},{"SlNo":12,"test":"test"},{"SlNo":13,"test":"test"},{"SlNo":14,"test":"test"},{"SlNo":15,"test":"test"},{"SlNo":16,"test":"test"},{"SlNo":17,"test":"test"},{"SlNo":18,"test":"test"},{"SlNo":19,"test":"test"},{"SlNo":20,"test":"test"},{"SlNo":21,"test":"test"},{"SlNo":22,"test":"test"}]
My problem is that the JSON data does not bind to the HTML table. In Firefox, a "not well-formed" error is shown in the console. Please help...
The first argument of your success callback will be a JavaScript object containing many properties including a data property whose value is the parsed JavaScript object based on the JSON returned by your API. Trying to parse a JavaScript object will result in error.
Try modifying the success method to:
.success(function (response, status, headers, config) {
var myjson = response.data;
$scope.dtDioSearch = myjson;
});
An error occurs in the console: undefined
not well-formed
}).success(function (response, status, headers, config) {
var myjson = response.data;
$scope.dtDioSearch = myjson;
console.log(myjson);
@VinodJohn if the JSON is exactly what you posted it is not likely to throw an error. Or maybe you posted a sample, but the real JSON might be containing weird invisible characters... try validating actual response in an online json validator. If not this error is coming from somewhere else in your application
@T J I just validated the JSON and the result is valid JSON. Then why is the data not binding to the HTML table?
@VinodJohn can you use the unminified version of Angular and post the exact error? Else try replicating the issue in a [mcve] because the shared code doesn't even contain an ng-app directive, so obviously it won't bind
@t j }).success(function (data, status, headers, config) {
$scope.dtDioSearch = [{"id": "1", "test": "this is a description"}];
The above code executed successfully. That means the JSON returned by the web service is not an array.
@VinodJohn well you posted a JSON array in question... We only know what you share.
Public Function GetJSon(ByVal dt As DataTable) As List(Of Dictionary(Of String, Object))
Dim rows As New List(Of Dictionary(Of String, Object))
Dim row As Dictionary(Of String, Object)
'Return JsonConvert.SerializeObject(dt).ToList
'Return JSONString
For Each dr As DataRow In dt.Rows
row = New Dictionary(Of String, Object)
For Each col As DataColumn In dt.Columns
If col.DataType = GetType(Date) Then
Dim dtt As DateTime = DateTime.Parse(dr(col).ToString())
row.Add(col.ColumnName, dtt.ToString("dd-MM-yyyy hh:mm:ss"))
Else
row.Add(col.ColumnName, dr(col))
End If
Next
rows.Add(row)
Next
Return rows
End Function
@tj thank you for your support. The problem was that the JSON was returned as a string, and I changed it to an array list.
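For future readers: the server code above serializes twice (ConvertDataTableTojSonString already produces a JSON string, which js.Serialize then serializes again), so the client receives a JSON string whose content is itself JSON rather than an array. A hedged client-side sketch of detecting and unwrapping that (the normalize helper is made up for illustration):

```javascript
// Detect a double-encoded response and parse it a second time before binding.
function normalize(data) {
  return (typeof data === 'string') ? JSON.parse(data) : data;
}

// Simulate a server that serializes an already-serialized JSON string.
const doubleEncoded = JSON.stringify(JSON.stringify([{ SlNo: 1, test: 'test' }]));
const firstParse = JSON.parse(doubleEncoded); // still a string, not an array
const rows = normalize(firstParse);           // now a real array

console.log(Array.isArray(firstParse)); // false
console.log(Array.isArray(rows));       // true
```

The cleaner fix, as the asker found, is to return a real array/object from the service so the body is serialized exactly once.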
| common-pile/stackexchange_filtered |
How do I get the name of a subclass when annotating the method of a parent?
I wrote a method decorator to log some information about a class. It mostly works as expected but has one issue. I want it to log the name of the effectively running class, but it logs the name of the abstract parent class.
Here is the decorator code:
const logClassName: MethodDecorator = (
target: any,
_propertyKey: string,
_descriptor: PropertyDescriptor,
) => {
console.log(`class name from decorator: ${target.constructor.name}`);
};
Here are the two class definitions: one abstract, and a second that extends the first, thus creating a concrete class:
abstract class AbstractClass {
@logClassName
logClassName() {
console.log(`class name from function : ${this.constructor.name}`)
}
}
class ConcreteClass extends AbstractClass {}
And here is the sample execution:
const concrete = new ConcreteClass();
concrete.logClassName();
The output:
class name from decorator: AbstractClass
class name from function : ConcreteClass
I would expect both logs to contain ConcreteClass, or at least to contain the same thing.
What I want to know is how to get the decorator log to print the concrete class.
Here is a fiddle with a working example
As it turns out, the issue was that decorators in TypeScript are evaluated once, when the class is defined. This means that when the @logClassName decorator is applied to the AbstractClass method, it is evaluated with AbstractClass as its target.
To capture the effective running class, I had to delay accessing the constructor's name until the method was called. This can be achieved by wrapping the original method with a new function that logs the name of the current instance's constructor:
const logClassName: MethodDecorator = (
target: any,
propertyKey: string,
descriptor: PropertyDescriptor,
) => {
const originalMethod = descriptor.value;
descriptor.value = function(...args: any[]) {
console.log(`class name from decorator: ${this.constructor.name}`);
return originalMethod.apply(this, args);
}
return descriptor;
};
Here's the updated fiddle
| common-pile/stackexchange_filtered |
systemd start service after another one stopped issue
I have two services that I need to start.
The first service runs download jobs required by the second service.
First service
[Unit]
Description=First
After=network.target
Second service
[Unit]
Description=Second
After=First
The problem is that they both start at the same time; I need the second service to wait until the first one is dead.
I don't want to use sleep because the download jobs can be big.
Thank you.
In your first service add
ExecStopPost = /bin/systemctl start Second
What this does: when the first service terminates, the option above is activated and the second service is started.
This particular option (ExecStopPost=) allows you to execute commands that run after the service has stopped. This includes cases where the commands configured in ExecStop= were used, where the service does not have any ExecStop= defined, or where the service exited unexpectedly.
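As an aside, a common alternative is to make the first unit Type=oneshot, so systemd only considers it started once its job has exited, and order the second unit after it. Note that After= expects a full unit name (first.service, not First), and on its own it only orders start-up — it does not wait for a long-running service to exit. The unit names and script paths below are placeholders:

```ini
# first.service -- "started" only when ExecStart has finished
[Unit]
Description=First

[Service]
Type=oneshot
ExecStart=/usr/local/bin/download-jobs.sh
RemainAfterExit=yes

# second.service -- pulled in and started only after first has completed
[Unit]
Description=Second
After=first.service
Requires=first.service

[Service]
ExecStart=/usr/local/bin/second-daemon.sh
```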
| common-pile/stackexchange_filtered |
Modifying rewrite rules in subfolder when RewriteBase / is enforced
I'm trying to add watermark to images on my site.
It worked for me on my local test PC, but on the hosting, they enforce
RewriteBase /
which seems to complicate things a bit.
The .htaccess placed in the /images/stories/virtuemart/product/ folder does not work any more as expected, and I cannot figure out a way to fix it.
My latest version is:
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^images/stories/virtuemart/product/(.*)\.(jpeg|jpg)$ /watermark/watermark.php [QSA,NC]
I also tried to put the same in the root .htaccess
Original version that works on my test server without RewriteBase / in the root:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule \.(gif|jpeg|jpg)$ ../../../watermark/watermark.php [QSA,NC]
I suspect that the issue might be with relating the URI to watermark.php, or maybe something needs to be changed in php receiving this request, e.g. this part:
$path = $_SERVER['DOCUMENT_ROOT'].$_SERVER['REQUEST_URI'];
preg_match('@\.(gif|jpg|jpeg|png)$@i',$_SERVER['REQUEST_URI'],$m);
$ext = strtolower($m[1]);
$generated = CACHE.md5($_SERVER['REQUEST_URI']);
or maybe the rules should pass some more parameters, like:
RewriteCond %{REQUEST_URI} ...
Any help is greatly appreciated.
Where is the watermark directory relative to the root directory? Is it in /images or in the root directory?
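One hedged sketch, assuming /watermark/watermark.php lives in the document root: in a per-directory .htaccess, mod_rewrite strips the directory prefix before matching, so the pattern should match only the remaining file name, and the substitution can be a root-relative URL (which sidesteps the enforced RewriteBase /):

```apache
# In /images/stories/virtuemart/product/.htaccess (sketch only)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -f
# The per-directory prefix is already stripped, so match just the file name
RewriteRule \.(gif|jpe?g)$ /watermark/watermark.php [QSA,NC,L]
```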
| common-pile/stackexchange_filtered |
activemq start suppresses stdout/stderr
When using AMQ 5.6 and starting the broker using ./activemq start, where does the stdout/stderr go?
I expected it to show up in the /data/activemq.log file, but it doesn't...is there are way around this with a tweak to the log4j or JavaServiceWrapper config perhaps?
When I start in console mode using ./activemq console, the stdout/stderr messages are displayed as expected. In particular, I need to get output from e.printStackTrace() to show up in the logs when running in this mode.
Apparently the stderr/out is simply redirected to /dev/null based on the startup script... strange that it's not just sent to a log file by default.
It seems to just get redirected to /dev/null... I changed the /bin/activemq script to redirect to ../data/start.log instead, and sure enough, the stdout/err are there... not sure why this isn't the default behavior, to be honest.
If I remember correctly, there is another file called wrapper.log. Look for it in the same directory where wrapper.conf is.
| common-pile/stackexchange_filtered |
Scheme in the header: logo , slogan, image, etc, Should I use containers or not?
I want to create a schema like this:
I'm wondering whether it is better to create a div container for the logo and slogan or not.
I'm also wondering whether it is better to create a div container for the telephone icon, phone1, phone2 and the Facebook button or not.
Someone told me that it is not necessary, and that this way I could save some bits.
What is your opinion? containers or not?
Honestly, this is the best solution I can think of; it strikes a balance between minimal CSS and HTML coding: http://jsfiddle.net/fkrQa/.
Hope this helps.
| common-pile/stackexchange_filtered |
Use Companion object in kotlin android is a good practice?
I have always used Java for Android programming, but a few weeks ago I started to learn Kotlin. When I use Java, I try to follow an object-oriented approach and use as few static objects or instances as possible. So when I looked at some materials on the internet about consuming web services in Kotlin, I saw something like this:
/*call of method from activity*/
val message = WebServiceTask.getWebService(getString(R.string.url_service))
/*Class to do the call to webservice*/
class WebServiceTask {
companion object {
fun getWebService(url: String): WebServiceResponse {
val call =
RetrofitInstance.getRetrofit(url).create(ApiService::class.java).getList()
.execute()
val webServiceResponse = call.body() as WebServiceResponse
return webServiceResponse
}
}
}
/*Class to get Retrofit instance*/
class RetrofitInstance {
companion object{
fun getRetrofit(url: String): Retrofit {
return Retrofit.Builder()
.baseUrl(url)
.addConverterFactory(GsonConverterFactory.create())
.build()
}
}
}
As you can see, I use a companion object in two classes, and according to what I have read, a companion object is equivalent to a static instance in Java. So my question is:
Is this code following object-oriented programming? Is this approach recommended? If the answer is no, what is the best object-oriented implementation for this code?
You should perhaps look into dependency injection like Kodein (https://github.com/Kodein-Framework/Kodein-DI) or Dagger (https://github.com/google/dagger) if you don't want to use companion objects. Generally they aren't going to affect you much if you do use them, but it would be better not to.
companion object is how you define static variables/methods in Kotlin. You are not supposed to create a new instance of Retrofit / ApiService each time you execute a request, however.
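Building on that comment, one common sketch (not necessarily the author's approach) is a Kotlin object declaration holding a single lazily-built Retrofit instance, so it is created once and reused for every request; BASE_URL is a placeholder:

```kotlin
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory

// Sketch: one Retrofit instance for the whole app, built on first access.
object RetrofitProvider {
    private const val BASE_URL = "https://example.com/" // placeholder

    val retrofit: Retrofit by lazy {
        Retrofit.Builder()
            .baseUrl(BASE_URL)
            .addConverterFactory(GsonConverterFactory.create())
            .build()
    }
}
```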
Yes, companion object is Kotlin's equivalent of static members in Java. Everything that applies to static, applies to companion object as well.
The use of companion object depends on how it interacts with the state of class's object.
If you are using methods which are pure functions, or some final values that you need to make accessible outside the class itself, then using a companion object makes total sense.
It is recommended for the above conditions because it does not interfere with the state of the class's object.
So, for the given code snippet it is a valid use of companion object.
Observe that the methods inside the companion object do not interact with anything that is not passed to them as parameters. Everything you see is created/initialized or used inside the methods only; just the result comes out.
Note:
However, if your companion object members (values or functions) interfere with the state of the object, it will cause leaks, which will lead you to troubles you have never faced before.
@Joshi: Yes, companion object is Kotlin's equivalent of static members in Java.
Careful: a companion object can inherit a class or interfaces — something that is not viable in Java static members. So, if you need Java-interoperable code, the solution is @JvmStatic functions and @JvmStatic properties. By annotating a companion object’s members with @JvmStatic, you will gain better Java interoperability.
Yes, it is equivalent to static. No, it is not recommended, as it leads to problems with mocking for testing, for example.
| common-pile/stackexchange_filtered |
MongoDb direct client connections
I have a C# client application which connects directly to a MongoDB server with the 10gen C# driver. In terms of the pure number of connections that can be held, is it sensible to have clients connect directly to the DB? Could too many clients swamp the DB and crash it? Is it more sensible to always use an app server to process DB read/write requests?
As the wiki says:
The C# driver has a connection pool to use connections to the server
efficiently. There is no need to call Connect or Disconnect; just let
the driver take care of the connections (calling Connect is harmless,
but calling Disconnect is bad because it closes all the connections in
the connection pool).
| common-pile/stackexchange_filtered |
How std::shared_ptr is deallocated?
When does memory deallocation occur in the code below?
#include <memory>
int main()
{
auto p = std::make_shared<int>(5);
std::weak_ptr<int> wp = p;
p = nullptr;
return wp.lock() == nullptr ? 0 : 1;
}
As follows from this post, std::make_shared performs one heap allocation. Does this mean that as long as at least one std::weak_ptr is alive, the memory can't be deallocated?
After all shared_ptr are destructed or reset, the weak_ptr will keep the control block in memory "alive" until all weak pointers have destructed. Using make_shared will allocate the control block and space for the object in one allocation, hence weak_ptr will keep the memory for the control block and the memory for the destructed object until all weak_ptr are destructed.
@Eljay I expected some kind of a miracle to happen, but you disappointed me... :)
Alas, there are no miracles in C++. There are lots of incredibly impressive clever things, though! I've heard of attempts to incorporate garbage collection with C++, but I've not tried that kind of memory management miracle myself.
(Had to edit the answer since I have not read the question properly).
Yes, the memory itself will be around in your snippet, since you have allocated a single block for both control block and the object via make_shared call.
@Dmitriano I did misread the question initially, so edited my answer now.
Usually, the std::make_shared is a good thing to have both control block and space for the object. But some scenarios (like lots of long lived weak pointers, and shorter lived strong shared pointers) where the overhead of fallow control blocks is acceptable, but fallow space for the objects is not, then intentionally not using std::make_shared and doing the two-step approach is a reasonable alternative.
@Eljay I am yet to see a design where weak_ptr makes code better and easier to reason about. Even shared_ptr alone is a thing which is rarely (albeit sometimes!) is approriate, but coupled with weak_ptr it becomes a code maintainer nightmare.
Why isn't this solved by having shared_ptr realloc just the control block in such a case? Is this seen as too unworthy a niche?
@bloody I do not see how realloc can be used in such scenario. All objects which hold a pointer to control block would need to somehow be notified about the change of pointer to control block, and there are no mechanisms to do so. Logistically, it would be the same thing as just allocating the new control block - existing objects would have no idea it happened.
Aftermath of such is obvious, I just missed that there are no means for preserving the same address / selective memory deallocation. Thanks.
I've not run into that scenario, in real life, either. I've found shared_ptr itself to be something I have little interest in actually using, because a shared_ptr is effectively the same as a global variable (well, unless the object it holds is genuinely const). Once you have federated ownership, the owning objects (plural) cannot ensure their own invariant. And then you get spooky action at a distance sorts of bugs.
std::make_shared<T>() allocates a control block containing a constructed T instance, and then returns a std::shared_ptr that refers to that block. The T instance is destructed when no more std::shared_ptrs refer to the control block, but the control block itself is not freed until there are no more std::shared_ptrs or std::weak_ptrs referring to it. Which, in this example, is when both wp and p go out of scope when main() exits:
#include <memory>
int main()
{
auto p = std::make_shared<int>(5);
std::weak_ptr<int> wp = p;
p = nullptr; // <-- the int is destroyed here
return wp.lock() == nullptr ? 0 : 1;
} // <-- the control block is freed here when p and wp are destroyed
| common-pile/stackexchange_filtered |
TFS PowerTools Proxy Authentication Required HTTP code 407
I am behind a corporate proxy server, and when attempting to use TFS Power Tools 2015 to view history (or indeed perform any operation from the Explorer extension) I get an "HTTP code 407: Proxy Authorization Required" error when connecting to Visual Studio Team Services online @ xyz.visualstudio.com.
Encountered a similar issue with VS2015 but resolved it by using the default proxy setting as mention in an answer in this question Visual Studio Error: (407: Proxy Authentication Required)
Tried applying the same setting to TF.exe.config and TFPT.EXE.config but got no joy, the error still occurs. Any hints or tips as to how to resolve would be appreciated!
<system.net>
<settings>
<ipv6 enabled="true"/>
<servicePointManager expect100Continue="false" />
</settings>
<defaultProxy useDefaultCredentials="true" enabled="true">
<proxy usesystemdefault="True" />
</defaultProxy>
</system.net>
Since you just apply the setting to TF.exe.config and TFPT.EXE.config file, this setting should be only applicable to TF.exe and TFPT.exe. Call them from Command Line to check if the setting works.
When you view history or perform any other operation from the Windows Explorer extension, it calls "TfsCommandRunnerSvr.exe", which is located in the TFS Power Tools installation folder. Try creating a "TfsCommandRunnerSvr.exe.config" file in that folder and applying the setting to it, to see if that works.
| common-pile/stackexchange_filtered |
Why is only one of my two modals showing?
I have two modals on the same page. One of them shows just fine. The second one, however, doesn't show. Only the dark "opacity" screen shows, but not the modal itself. I've read other posts and nothing seems to work for me.
Here's my codepen
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<button type="button" class="btn btn-primary mt-4 ml-2 mb-4" data-toggle="modal" data-target="#modalPublishWithAccount">Usuario registrado quiere publicar</button>
<button type="button" class="btn btn-success mt-4 ml-2 mb-4" data-toggle="modal" data-target="#solicitarArriendoWithAccountModal">Usuario registrado quiere publicar</button>
<!-- MODAL THAT WORKS -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="modalPublishWithAccount">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that works
</div>
</div>
</div>
</div>
<!-- /MODAL THAT WORKS -->
<!-- MODAL THAT DOESN'T WORK -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="solicitarArriendoWithAccountModal" style="z-index:10;">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that doesn't work
</div>
</div>
</div>
</div>
<!-- /MODAL THAT DOESN'T WORK -->
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>
If you change the inline z-index to 10000, or something larger than 10, you will be able to interact with the modal.
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<button type="button" class="btn btn-primary mt-4 ml-2 mb-4" data-toggle="modal" data-target="#modalPublishWithAccount">Usuario registrado quiere publicar</button>
<button type="button" class="btn btn-success mt-4 ml-2 mb-4" data-toggle="modal" data-target="#solicitarArriendoWithAccountModal">Usuario registrado quiere publicar</button>
<!-- MODAL THAT WORKS -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="modalPublishWithAccount">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that works
</div>
</div>
</div>
</div>
<!-- /MODAL THAT WORKS -->
<!-- MODAL THAT DOESN'T WORK -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="solicitarArriendoWithAccountModal" style="z-index:10000;">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that doesn't work
</div>
</div>
</div>
</div>
<!-- /MODAL THAT DOESN'T WORK -->
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>
This works with the code I shared. However, my site has many modals, about 5, and when I change the z-index as you mention, it still won't show.
Here's the code that doesn't work: https://codepen.io/paulamourad/pen/mvGJqV?editors=1000
After many attempts, it turned out I hadn't closed one </div> tag. That was it!
You have an inline style using a z-index. Just remove the z-index completely; Bootstrap handles the levels for you.
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<button type="button" class="btn btn-primary mt-4 ml-2 mb-4" data-toggle="modal" data-target="#modalPublishWithAccount">Usuario registrado quiere publicar</button>
<button type="button" class="btn btn-success mt-4 ml-2 mb-4" data-toggle="modal" data-target="#solicitarArriendoWithAccountModal">Usuario registrado quiere publicar</button>
<!-- MODAL THAT WORKS -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="modalPublishWithAccount">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that works
</div>
</div>
</div>
</div>
<!-- /MODAL THAT WORKS -->
<!-- MODAL THAT DOESN'T WORK -->
<div class="modal fade bd-example-modal-lg registration-modal" tabindex="-1" role="dialog" aria-labelledby="myLargeModalLabel" aria-hidden="true" id="solicitarArriendoWithAccountModal">
<div class="modal-dialog modal-lg modal-dialog-centered" role="document">
<div class="modal-content registration-modal-content">
<div class="modal-header registration-modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body modal-body-img registration-modal-body text-center col-lg-8 col-md-8 pb-5 pt-0 mx-auto">
I'm the modal that doesn't work
</div>
</div>
</div>
</div>
<!-- /MODAL THAT DOESN'T WORK -->
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>
Won't work in my code. Only the dark screen comes up. All the other modals work (I have about 5).
Check that the number of divs opened matches the number of divs closed.
| common-pile/stackexchange_filtered |
Nginx - Proxy subdirectory to remote server
I have two servers (server1 and server2) listening for the same domain name. Let's say www.example.com.
server1 acts as the main one, where the domain itself is pointed to.
What I'm trying to do is proxy all requests to a specific subdirectory on server1 to server2
This is my current configuration on server1, where xx.xxx.x.xxx is the IP of server2:
server
{
listen 80;
server_name www.example.com;
# proxy to port 81 on server1
location /
{
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
proxy_pass http://<IP_ADDRESS>:81;
}
# proxy to server2
location /subdirectory
{
proxy_pass http://xx.xxx.x.xxx:80;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host www.example.com;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Right now I'm getting 504 Gateway Time-out
This setup is not the best approach; the extra proxy hop adds latency.
Anyway, you need to set the timeouts correctly (the default is 60s):
proxy_connect_timeout 90s;
proxy_read_timeout 90s;
proxy_send_timeout 90s;
Ref:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html
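Putting that together, the directives belong inside the proxied location on server1; a sketch (upstream address and timeout values are illustrative):

```nginx
location /subdirectory
{
    proxy_pass http://xx.xxx.x.xxx:80;
    proxy_connect_timeout 90s;
    proxy_read_timeout    90s;
    proxy_send_timeout    90s;
    proxy_set_header Host www.example.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```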
| common-pile/stackexchange_filtered |
How to get to Release web.config file
How do I get to release web.config ?
I am using Visual Studio 2012 and I cannot seem to find how to get to it.
Right click on your web.config file, then click "add config transforms" to create the configuration transformations for each of your solution configurations.
Then to preview the files, right click on it (ie: web.release.config) and select "preview transform".
Note: if you do not have the mentioned options in the context menu, consider installing the "Web Essentials" extension. It will make your life easier and save a leprechaun from a terrible death.
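For reference, a transform file is the base web.config plus xdt attributes describing what to change for that build configuration. A minimal hedged sketch of a Web.Release.config (the ApiUrl key is an example; the compilation transform matches the default template):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Replace the value of the matching key from the base web.config -->
    <add key="ApiUrl" value="https://prod.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
  <system.web>
    <!-- Strip the debug attribute for Release builds -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```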
Thanks, but "add config transforms" is disabled on my machine.
Then the configurations are already created. Try activating the "show all files" option at the top of the Solution Explorer. Then see if there is an expand button (the "triangle" thingy) next to your web.config. If there is one, expand it, and right click on a file to preview the transform.
| common-pile/stackexchange_filtered |
How to concatenate fields using lightning:dataTable?
I would like to combine the Account Name and Industry fields and display the values. Can someone please help?
component.set('v.mycolumns', [
{label: 'Account Name', fieldName: 'Name', type: 'text'},
{label: 'Industry', fieldName: 'Industry', type: 'text'},
{label: 'Phone', fieldName: 'Phone', type: 'phone'},
{label: 'Website', fieldName: 'Website', type: 'url'}
]);
I have tried {label: 'Account Name', fieldName: 'Name', type: 'text'} + {label: 'Industry', fieldName: 'Industry', type: 'text'} and all sorts of combinations, but nothing seems to work. Can someone help here?
I.e. I want to display something like "Burlington Textile -> Textile".
Thanks in advance.
Hi @HarishSridhar. Why not use standard features like formulas?
Hello @MartinLezer - I am getting these details by hitting an endpoint, and my values are not stored anywhere; I am just displaying them on the fly when the user clicks a button. The above is just an example I gave, but this is my real scenario.
No, you can't create formulas in a data table. You would need to calculate a value based on your criteria, such as:
dataRows.forEach(row => row.nameAndIndustry = row.Name + ' ' + row.Industry);
lightning-datatable and lightning:datatable do not otherwise support expressions for fields.
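Expanding that one-liner into a runnable sketch (column labels and row data are illustrative): compute the combined field on copies of the rows before assigning them to the table, and point the column at the computed field name.

```javascript
// The column references a computed field rather than a real record field.
const columns = [
  { label: 'Name / Industry', fieldName: 'nameAndIndustry', type: 'text' },
  { label: 'Phone', fieldName: 'Phone', type: 'phone' }
];

// Return new row objects instead of mutating the server response.
function withCombinedField(records) {
  return records.map(row => ({
    ...row,
    nameAndIndustry: row.Name + ' -> ' + row.Industry
  }));
}

const rows = withCombinedField([
  { Name: 'Burlington Textile', Industry: 'Textile', Phone: '555-0100' }
]);
console.log(rows[0].nameAndIndustry); // "Burlington Textile -> Textile"
```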
Instead of a lightning datatable, for this specific requirement you can use the markup from SLDS.
https://www.lightningdesignsystem.com/components/data-tables/
Yes - I've done it using HTML tables - thanks.
| common-pile/stackexchange_filtered |
Understanding function pointer declaration
I have to describe the following code:
char *(**f[][]) ();
I understand the "char *" at the beginning and the "()" at the end: it's a function which doesn't have arguments and returns a pointer to char. But what does "(**f[][])" mean?
Can anyone help me please? Thanks =D
It's not a function which doesn't have arguments, but a function whose arguments are not specified. A function with no arguments has void between the ()s.
There's a very useful website you might want to know about: http://cdecl.org/
declare f as array of array of pointer to pointer to function returning pointer to char
In slightly easier-to-read English: f is a 2D array of pointers to function pointers that return strings (or pointers to char).
Quite why you'd need that I have no idea.
| common-pile/stackexchange_filtered |
Custom theme typography in mat-toolbar
I'm theming my app by defining colors and typography, but it's not working for the heading in my app header component, which is based on mat-toolbar. My theme is overridden by the default theme CSS rules for .mat-toolbar h1.
index.html
<html>
...
<body class="mat-typography">
<app-root></app-root>
</body>
</html>
_theme.scss file:
@import '~@angular/material/theming';
@import 'utils/palette';
// Plus imports for other components in your app.
// Define the palettes for your theme using the Material Design palettes available in palette.scss
// (imported above). For each palette, you can optionally specify a default, lighter, and darker
// hue. Available color palettes: https://material.io/design/color/
$fem-theme-primary: mat-palette($fem-palette-primary);
$fem-theme-accent: mat-palette(
$fem-palette-primary
); // NOT USED, same as $fem-theme-primary!
$fem-theme-warn: mat-palette($fem-palette-warn);
// Create the theme object (a Sass map containing all of the palettes).
$fem-theme: mat-light-theme(
$fem-theme-primary,
$fem-theme-accent,
$fem-theme-warn
);
// Include theme styles for core and each component used in your app.
// Alternatively, you can import and @include the theme mixins for each component
// that you are using.
@include angular-material-theme($fem-theme);
// Define a custom typography config that overrides the font-family as well as the
// `headlines` and `body-1` levels.
$fem-typography: mat-typography-config(
$font-family: $font-family,
$headline: mat-typography-level(32px, 48px, 700),
);
@include angular-material-typography($fem-typography);
// Include the common styles for Angular Material. We include this here so that you only
// have to load a single css file for Angular Material in your app.
// Be sure that you only ever include this mixin once!
@include mat-core($fem-typography);
topbar.component.html
<div class="topbar">
<mat-toolbar>
<h1 class="topbar__logo mat-headline">App Title</h1>
<mat-form-field
class="topbar__search"
appearance="outline"
color="primary"
>
<input
matInput
data-e2e="topbar-search-input"
class="topbar__search-field"
placeholder="Søg"
(input)="handleSearchChange($event)"
/>
<mat-icon matSuffix inline="true">search</mat-icon>
</mat-form-field>
<div>
<span class="topbar__current-user">{{ currentUser.name }}</span>
<a
mat-button
class="topbar__log-out"
href=""
data-e2e="btn-logout"
(click)="logout()"
>Log ud</a
>
</div>
</mat-toolbar>
</div>
Font family is working... But I expect the <h1 class="topbar__logo mat-headline">App Title</h1> to have css:
font-size: 32px;
line-height: 48px;
font-weight: 700;
Instead it has the default styling:
css:
font-size: 20px;
line-height: 32px;
font-weight: 500;
How can I make ALL Angular Material components (including mat-toolbar) use my own theme?
Did you ever find an answer to this? I'm facing the same problem : (
Component mat-toolbar deliberately overrides all heading tags (h1 through h6) to "title" typography, which maps to h2.
You can make it show "regular" typography for h1 with:
@import '~@angular/material/theming';
.mat-toolbar h1 {
@include mat-typography-level-to-styles($fem-typography, headline); // where headline maps to h1
}
Here are all the mappings, if you'd like to apply the above for other heading sizes:
headline: h1
title: h2
subheading-2: h3
subheading-1: h4
caption: h5
| common-pile/stackexchange_filtered |
Show that $\alpha^2 + \alpha - 1$ is a zero divisor in $R$
Studying for my algebra exam and looking through old exam exercises I came across the following problem
Let $f = X^4 + 1$, $g = X^2 + X - 1 \in \mathbb{F}_3[X]$ and $\alpha = X + \langle f \rangle \in \mathbb{F}_3[X]/\langle f \rangle$.
a) Find a polynomial $h \in \mathbb{F}_3[X]$ such that $f = gh$ and show that $g$ and $h$ are irreducible.
b) What is the size of $\mathbb{F}_3[X]/\langle f \rangle$? Show that $\alpha^2 + \alpha - 1$ is a zero divisor in $\mathbb{F}_3[X]/\langle f \rangle$
I've already solved a and found $\lvert \mathbb{F}_3[X]/\langle f \rangle \rvert = 81$ for part b, but I'm not sure how to show that $\alpha^2 + \alpha - 1$ is a zero divisor
Have you done part (a)? How might it be helpful in showing that $\alpha^2+\alpha-1$ is a zero divisor?
Yes I've found that $X^4 + 1 = (X^2 + X - 1)(X^2 - X + 2) - 3X + 3$ so $h = X^2 - X + 2$, but I'm unsure of how this helps me in finding a polynomial $k$ such that $(\alpha^2 + \alpha - 1)k = 0$
Check the answer below for a very strong hint.
This situation is analogous to considering the quotient ring $\Bbb Z/(a\cdot b)$ where $a$ and $b$ are primes (irreducibles). At the canonical homomorphism $\Bbb Z\to \Bbb Z/(a\cdot b)$ we know that exactly $a\cdot b$ and its multiples go to zero. In particular, neither $a$ or $b$ does, so in other words, in the quotient ring we have $[a],[b]\ne 0$ but $[ab]=0$. (E.g. $2\cdot 3=0 \pmod6$)
Exactly the same happens here: in $\Bbb F_3[X]/(gh)$ we have $[g],\,[h]\ne 0$ but $[gh]=0$.
Hint: $g(\alpha)h(\alpha)=f(\alpha)$
To reformulate Berci's answer in more pedagogical terms:
The point of the quotient ring $\mathbb{Z}/\langle 6\rangle$ is that 6 becomes 0. Since 6 factors as $6=2\cdot 3$ in $\mathbb{Z}$, this means that in the quotient $\mathbb{Z}/\langle 6\rangle$, $2$ and $3$ become zero-divisors.
The point of the quotient ring $\mathbb{F}_3[X]/\langle f\rangle$ is that $f$ becomes 0...
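To make the hint concrete: the remainder found in part (a) is $-3X + 3$, which is the zero polynomial in $\mathbb{F}_3[X]$, so the division is exact and

```latex
f = g h \quad \text{in } \mathbb{F}_3[X]
\qquad \Longrightarrow \qquad
(\alpha^2 + \alpha - 1)(\alpha^2 - \alpha + 2) \;=\; g(\alpha)\,h(\alpha) \;=\; f(\alpha) \;=\; 0
\quad \text{in } \mathbb{F}_3[X]/\langle f \rangle .
```

Neither factor is zero in the quotient, since $g$ and $h$ have degree $2 < 4 = \deg f$ and hence do not lie in $\langle f \rangle$.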
| common-pile/stackexchange_filtered |
Integer Literals in WebAssembly Binary
I am using this demo to understand how WebAssembly text format is compiled into binary.
One thing I don't understand is the integer literals used in i32.const and i64.const.
For instance, the code for i32.const -1 is as follows:
0000024: 41 ; i32.const
0000025: 7f ; i32 literal
How exactly does 0x7f (127 in decimal) relate to -1?
Here is the code for i32.const 1234:
0000024: 41 ; i32.const
0000025: d209 ; i32 literal
I know that WASM is little-endian, so the first byte in the WASM hexadecimal representation (d2) corresponds to the last byte in normal binary notation. The binary representation of 1234 in 32 bits is 00000000 00000000 00000100 11010010. The last byte is 210, which is d2 in hexadecimal. That matches what I see in the WASM code. But the byte before that is 4. Where does 09 come from in the WASM code?
There are many other examples that don't make sense to me. Where are these integer literals coming from?
0x7f is the 7-bit two's complement representation of -1.
0x41 is the opcode for i32.const.
d209 (0x09d2) is the Unsigned LEB128 of 1234 (0x04d2)
Unsigned LEB128
ULEB128 is the answer! 0x7f isn't the two's complement of 1 by itself, but it's the ULEB128 encoding of the two's complement of 1. Wikipedia's page on ULEB128 includes working JavaScript code for those who are interested in trying it out: https://en.wikipedia.org/wiki/LEB128#JavaScript_code.
@grandinero -1 is a signed value. That means using Signed LEB128. For one byte signed numbers, the result is the same as two's complement.
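Both literals from this thread can be reproduced in a few lines. Below is a sketch of unsigned and signed LEB128 encoders (helper names are mine; the algorithm follows the LEB128 article linked above). For 1234 the signed and unsigned encodings happen to produce the same bytes:

```python
def uleb128(n):
    """Encode a non-negative integer as unsigned LEB128 bytes."""
    out = []
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow: set continuation bit
        else:
            out.append(byte)
            return bytes(out)

def sleb128(n):
    """Encode a signed integer as signed LEB128 bytes."""
    out = []
    while True:
        byte = n & 0x7F
        n >>= 7  # arithmetic shift: preserves the sign
        # done when the remaining value agrees with the sign bit of this byte
        if (n == 0 and not (byte & 0x40)) or (n == -1 and (byte & 0x40)):
            out.append(byte)
            return bytes(out)
        out.append(byte | 0x80)

print(uleb128(1234).hex())  # d209, as in the wasm dump
print(sleb128(-1).hex())    # 7f
```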
| common-pile/stackexchange_filtered |
How to get the list of applications suitable for routing from the user’s device?
Is there any way to check if user has a certain app (google.maps, yandex.maps or native maps) on his/her device to make a list of apps suitable for routing?
No, you can't do this! Imagine if this were possible: apps could use this information as a unique identifier for your iOS device and could sell this data to make money from ads and the like. Conclusion: luckily, for privacy reasons, this is not possible.
Possible duplicate of iphone - Check if an app is installed
You can check if the user has these apps on the device (google.maps, yandex.maps or native maps) using a URL scheme.
for Google maps:
let appURL = URL(string: "comgooglemaps://")
if UIApplication.shared.canOpenURL(appURL!) {
// code for open URL
print("Can Open URL")
}
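One hedged caveat to add to this answer: since iOS 9, canOpenURL(_:) returns true only for schemes the app declares under LSApplicationQueriesSchemes in its Info.plist; otherwise it returns false even if the app is installed. The scheme names below are an assumption based on the apps in the question:

```xml
<key>LSApplicationQueriesSchemes</key>
<array>
    <string>comgooglemaps</string>
    <string>yandexmaps</string>
</array>
```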
| common-pile/stackexchange_filtered |
Cluster reconciliation in the event of node loss
I have a cluster of 3 nodes that I'd like to recover fast after a single node loss. By recovering I mean that I resume communication with my service after a reasonable amount of time (preferably configurable).
Following are various details:
k8s version:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.7", GitCommit:"8eb75a5810cba92ccad845ca360cf924f2385881", GitTreeState:"clean", BuildDate:"2017-04-27T10:00:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.7", GitCommit:"8eb75a5810cba92ccad845ca360cf924f2385881", GitTreeState:"clean", BuildDate:"2017-04-27T09:42:05Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I have a service distributed over all 3 nodes. With one node failing I observe the following behavior:
api server fails over to another node and kubernetes service endpoint shows the correct IP address (custom fail-over).
api server is not responding on <IP_ADDRESS> (its cluster IP)
after some time, all relevant service endpoints are cleared (e.g. in kubectl get ep --namespace=kube-system shows no ready addresses for all endpoints)
the service in question is not available on the service IP (due to the above)
The service has both readiness/liveness probes and only a single instance is ready at any given time with all being live. I've checked that the instance that is supposed to be available is also available - i.e. both ready/live.
This continues for more than 15min before the service Pod that was running on the lost node receives a NodeLost status, at which point the endpoints are re-populated, and I can access the service as usual.
I have tried fiddling with pod-eviction-timeout, node-monitor-grace-period settings to no avail - the time is always roughly the same.
Hence, my questions:
Where can I read up on the behavior of the key k8s components in case of a node loss in detail?
What would be the combination of parameters to reduce the time it takes the cluster to reconcile since this is supposed to be used in a test?
Can you clarify a few things. Are your master components (kube-apiserver, kube-controller-manager) on all 3 nodes? What instructions did you use to setup replicated apiserver and controller manager? Where is etcd? On all 3 nodes too? Which service are you talking about not responding? IIRC, the kube-apiserver is not behind a Kubernetes service in an HA configuration, but behind some other kind of load balancer. I'm not sure what <IP_ADDRESS> means in your setup.
Yes, these services are on all 3 nodes but only a single node is running the api server subset (apiserver/controller-manager/scheduler) at a time.
The replication is custom and uses etcd-backed leader election to choose which node runs the apiserver.
Etcd is also distributed over these 3 nodes.
The non-responding service is a user service but the point is that all endpoints are considered not ready and I guess, hence, the service itself is not available (e.g. no Pod can be reached using the service IP).
Yes, sorry. <IP_ADDRESS> is the kubernetes service cluster IP (from <IP_ADDRESS>/16)
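Regarding question 2: the two settings mentioned in the question are flags of the kube-controller-manager, and the kubelet's status-update frequency on each node needs to be several times shorter than the grace period for the tuning to take effect. The values below are an illustrative sketch for a test cluster, not recommendations:

```
# on the master (kube-controller-manager)
kube-controller-manager \
  --node-monitor-period=2s \
  --node-monitor-grace-period=16s \
  --pod-eviction-timeout=30s

# on every node (kubelet)
kubelet --node-status-update-frequency=4s
```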
| common-pile/stackexchange_filtered |
Identify consecutive occurrences and merge two data frames
This question relates to one posted earlier.
I have 2 data.frames that I would like to merge. The two data.frames have different sizes (e.g. dim(df1) = 16533, 580 and dim(df2) = 2820, 675).
The records were made on different days by different person/group of persons.
Variables from df1
Index = the group of person who made the record (eg. it can represent 1 person or 2 or more)
id1 = the person from the group who made the recording (eg. 12 1 =group 12 person 1; 12 2 = group 12 person 2, etc. )
id2 = the first or the second day when the record was made (eg. 12 1 1 = group 12, person 1, 1 day; 12 1 2 = group 12, person 1, 2 day;)
Day = the weekday when the diary record was made (eg. 12 1 1 Wednesday = group 12, person 1, day 1, weekday Wednesday; 12 1 2 Sunday = group 12, person 1, day 2, weekday Sunday)
These variables are followed by 24h observations
obs1_1-obs1_144=primary observation
obs2_1-obs2_144=secondary observations
obs3_1-obs3_144=tertiary observations
obs4_1-obs4_144=quaternary observations
Example of
df1
index id1 id2 Day obs1_1...obs1_144....obs2_1...obs2_144...obs3_1...obs3_144...obs4_1...obs4_144
12 1 1 Wednesday 1 11 12
12 1 2 Sunday 2 0 0
123 1 1 Tuesday 1 0 1
123 1 2 Saturday 3 0 3
123 2 1 Monday 2 2 4
123 2 2 Saturday 1 0 8
In df2 observations were recorded just based on index and id1. There is just one observation per person. Similarly here there is also a Day variable that records when the recordings started (eg. not the day of the recordings). For example here id 12 1 Tuesday would suggest that group 12 person 1 started to record observations from Tuesday.
The week is divided as:
Monday = 95 variables starting from day11-day196
(in the actual data t0400_0415_d1-t0345_0400_d1)
Tuesday = 95 variables starting day21-day296
(in the actual data t0400_0415_d2-t0345_0400_d2)
Wednesday = 95 variables starting day31-day396
(in the actual data t0400_0415_d3-t0345_0400_d3)
Thursday = 95 variables starting day41-day496
(in the actual data t0400_0415_d4-t0345_0400_d4)
Friday = 95 variables starting day51-day596
(in the actual data t0400_0415_d5-t0345_0400_d5)
Saturday = 95 variables starting day61-day696
(in the actual data t0400_0415_d6-t0345_0400_d6)
Sunday = 95 variables starting day71-day796
(in the actual data t0400_0415_d7-t0345_0400_d7)
Example of df2
index id1 Day day11 day12 day13 day14 day15 day16 day17 .....day196......day796
12 1 Tuesday 2 1 2 1 1 3 1
123 1 Friday 0 3 0 3 3 0 3
I would like to identify the observations from df2 that were recorded on the same day as in df1.
What I aim for:
In df2, identify consecutive records (no gap between the daily records). For example, a consecutive record would be: recording started on Tuesday and there are records on Wednesday, Thursday and Friday. This is called a three-day consecutive record. A non-consecutive record would be a record starting on Tuesday with records on Wednesday and Friday; as there is a gap day, this is a non-consecutive recording.
In df1, I would like to identify the index and id1 of the person who made the consecutive records, as well as the position of the record within the consecutive observation (e.g. in a 3-day consecutive observation the record could fall on day 1, 2 or 3).
Outcome:
index id1 id2 obs1 obs2 obs3
12 1 1 1 11 12
12 1 2 2 0 0
123 1 2 3 0 3
123 2 2 1 0 8
Sample data
df1:
structure(list(index = c(12, 12, 123, 123, 123, 123), id1 = c(1,
1, 1, 1, 2, 2), id2 = c(1, 2, 1, 2, 1, 2), Day = structure(c(5L,
3L, 4L, 2L, 1L, 2L), .Label = c("Monday", "Saturday", "Sunday",
"Tuesday", "Wednesday"), class = "factor"), obs1 = c(1, 2, 1,
3, 2, 1), obs2 = c(11, 0, 0, 0, 2, 0), obs3 = c(12, 0, 1, 3,
4, 8)), class = "data.frame", row.names = c(NA, -6L))
df2:
structure(list(index = c(12, 123), id1 = c(1, 1), Day = structure(2:1, .Label = c("Friday",
"Tuesday"), class = "factor"), day1 = c(2, 0), day2 = c(1, 3),
day3 = c(2, 0), day4 = c(1, 3), day5 = c(1, 3), day6 = c(3,
0), day7 = c(1, 3)), class = "data.frame", row.names = c(NA,
-2L))
for day1 to day7, what would be the values
We can do this with Map to create a key/value named vector and then do the matching with the column names
lst1 <- Map(`:`, seq(11, 71, by = 10), seq(196, 796, by = 100))
names(lst1) <- c('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday')
out <- stack(lst1)[2:1]
out$values <- paste0('day', out$values)
-checking
setNames(as.character(out$ind), out$values)[c('day41', 'day182', 'day242', 'day724')]
# day41 day182 day242 day724
# "Monday" "Monday" "Tuesday" "Sunday"
@user11964604 I guess instead of 'weekday', you may need the output from setNames(....)[daycolumnnamevector]. If I understand correctly, the 'weekday' was a fixed set, and now with the day variable names changed to 'weekday', it can be matched.
| common-pile/stackexchange_filtered |
Scala: default value in case class constructor doesn't work
I'm creating a case class with default-valued constructor:
abstract class Interaction extends Action
case class Visit(val url: String)(val timer: Boolean = false) extends Interaction
But I cannot create any of its instance without using all of its parameters, for example. If I write:
Visit("https://www.linkedin.com/")
The compiler will complain:
missing arguments for method apply in object Visit;
follow this method with `_' if you want to treat it as a partially applied function
[ERROR] Visit("http://www.google.com")
What do I need to do to fix it?
Is there a reason that you are defining it using currying? If not, as suggested below, you could define all your parameters in the first set, and your default will work as requested. If you need the currying/second set of arguments, then the extra set of brackets will cause the default for that set of arguments to be applied.
You need to tell the compiler that this is not a partially applied function, but that you want the default values for the second set of parameters. Just open and close parentheses...
scala> Visit("https://www.linkedin.com/")()
res1: Visit = Visit(https://www.linkedin.com/)
scala> res1.timer
res2: Boolean = false
EDIT to explain @tribbloid comment.
If you use _, instead of creating a visit you are creating a partially applied function which then can be use to create a Visit object:
val a = Visit("asdsa")_ // a is a function that receives a boolean and creates a Visit
a: Boolean => Visit = <function1>
scala> val b = a(true) // this is equivalent to val b = Visit("asdsa")(true)
b: Visit = Visit(asdsa)
Thanks a lot! I speculated that a partially applied function should always have a _ in case of ambiguity. Looks like I'm wrong, and this is the only option.
An interesting feature of Scala is that in case of an ambiguity between an old feature and a new feature, the old feature always wins and the new one is sacrificed. The infix methods as operators on tuples declared by parentheses never work, because the compiler thinks those parentheses are priority brackets :)
@tribbloid, see my edit. I hope it clarifies your doubt.
I've seen this rule somewhere else before, that's why I'm confused: if this is the only way to define a PAF, then Visit("Something") will have no ambiguity. Scala has some weird rules to judge if you need that _ or not
One more question: I can overcome this by using an implicit modifier on the second parameter, but would this have any side effect?
This is actually an interesting idea. The implicit parameter with a default value might work (to drop the extra parentheses). The only side effect I can think of is that if you have implicit values of the same type in some calling context, the default will be overridden (some might consider this a feature not a bug, allowing to override a default value in a context).
Please correct the syntax of specifying the optional field in your case class as follows
case class Visit(val url: String, val timer: Boolean = false) extends Interaction
| common-pile/stackexchange_filtered |
Query is taking lot of time... how to fix it?
I have a table in my SQL Server database that has 6 columns as below
CREATE TABLE Table1
(
VersionID int NOT NULL,
EventNum int NOT NULL,
LossLevelID int NOT NULL,
PerspCode char(2) NOT NULL,
Loss float NOT NULL
)
Here first 4 columns are the composite primary key.
I don't have any indexes yet.
The below query is taking forever. How to improve the performance?
SELECT TOP 100
T1.EventNum,
SUM(CASE WHEN T1.PERSPCODE = 'GR' THEN LOSS END) Gross
FROM
ART.[LA].[Table1] T1 WITH (NOLOCK)
WHERE
EXISTS (SELECT EventNum
FROM Axis_Accumulation.dbo.AIREventSet
WHERE RegionPerilId = 27)
AND EventNum IN (110000002, 110000003, 110000016, 110000019, 110000034, 110000066, 110000086, 110000116, 110000118, 110000136)
GROUP BY
T1.EventNum
HAVING
SUM(CASE WHEN T1.PERSPCODE = 'GR' THEN LOSS END) > + CAST(0 AS VARCHAR(10))
ORDER BY
EventNum DESC
Have you tried the query in SQL Server Performance Monitor? You should also look at the query plan in Management Studio and add indexes or change the query to eliminate table scans.
please show a query plan. But the obvious thing is to create indexes on AIREventSet regionPerilID and Table1 eventnum
That exists clause is not doing anything, remove it or join it to T1 in the where clause. 2. Why NOLOCK, if you are not sure then do not use it. 3. To profile a query you need to start by looking at the generated Query Plan, do this from within SSMS.
I don't have any indexes yet. Well, add an index. Also, your EXISTS doesn't make sense; it is a constant value.
Well what does the actual execution plan tell you??
I see only 5 columns in your table?, and 4 of them are the primary key?
SUM(...) > CAST(0 AS VARCHAR(10))?, what could possibly be the point of that?
This is too long for a comment. The EXISTS expression makes no sense. Normally, this would be correlated. So, I'm guessing that you intend:
EXISTS (SELECT 1
FROM Axis_Accumulation.dbo.AIREventSet aes
WHERE aes.RegionPerilId = 27 and aes.EventNum = t1.EventNum
)
Second, the HAVING is awkward to say the least. Why would you compare a numeric SUM() to a character? Instead:
HAVING SUM(CASE WHEN T1.PERSPCODE = 'GR' THEN LOSS END) > 0
The + is a no-op. The expression would be converted back to a number anyway because of the comparison.
Then the first index I would go for (assuming the above EXISTS is correct) is: AIREventSet(EventNum, RegionPerilId).
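Expressed as DDL, the suggested index could look like this (schema and index names are assumed, not from the question):

```sql
CREATE INDEX IX_AIREventSet_EventNum_RegionPerilId
    ON dbo.AIREventSet (EventNum, RegionPerilId);

-- Table1 can also be covered for the grouping/filtering columns:
CREATE INDEX IX_Table1_EventNum
    ON LA.Table1 (EventNum)
    INCLUDE (PerspCode, Loss);
```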
| common-pile/stackexchange_filtered |
Is there a way to determine whether a representation of a finite group is faithful?
I want to determine whether a representation of a finite group is faithful from the character table.
Is there such a general way, or a specific way for finite groups of small order?
Thank you for your answer. I understand the answer you refered.
It seems that your question has been settled here:
Faithful representations and character tables
In short, a finite dimensional irreducible representation over $\mathbb{C}$, say $\rho : G \to \operatorname{GL}(V)$, is faithful if and only if $\chi_\rho(g) = \dim V$ only when $g = 1$.
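The criterion can be justified by a short eigenvalue argument: since $g \in G$ has finite order, $\rho(g)$ is diagonalizable and its eigenvalues $\lambda_1, \dots, \lambda_n$ (with $n = \dim V$) are roots of unity, hence

```latex
\chi_\rho(g) \;=\; \sum_{i=1}^{n} \lambda_i ,
\qquad
|\chi_\rho(g)| \;\le\; \sum_{i=1}^{n} |\lambda_i| \;=\; n ,
```

with equality $\chi_\rho(g) = n$ forcing $\lambda_1 = \cdots = \lambda_n = 1$, i.e. $\rho(g) = \operatorname{id}_V$. So $\{\, g : \chi_\rho(g) = \dim V \,\} = \ker \rho$, and $\rho$ is faithful exactly when this set is $\{1\}$.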
Thank you for your answer. The answer is quite easy and simple.
| common-pile/stackexchange_filtered |
How to set up a plotting loop correctly?
I'm still a rookie struggling with the setup of a plotting loop (*.png files).
I get "opening device failed" and honestly, I don't know how to handle that.
My approach:
names =list(Pic1,Pic2,Pic3,Pic4,Pic5,Pic6,Pic7,Pic8,Pic9,P10)
for (i in 1:10){
mypath <- file.path("C:","Users",paste("myplot_",names[i],".png"))
png(file=mypath)
mytitle = paste("Training PIC", names[i])
par(mfrow=c(3,1), oma=c(2,2,4,2))
boxplot(ERRORS.train.pic[[i]], outline=F, ylab="RMSE(-)", xlab="K-No")
abline(h = 0, col = "red")
plot(sapply(ERRORS.train.pic[[i]], median), ylab="MEDIAN-RMSE(-)", xlab="K-No",type="l", col="blue")
plot(sapply(ERRORS.train.pic[[i]], mean), ylab="MEAN-RMSE (-)", col ="red")
title(main= mytitle, outer=T)
dev.off()
}
I receive the following error code:
Error in png(file = mypath) : unable to start png() device
In addition: Warning messages:
1: In png(file = mypath) :
unable to open file 'C:/Users/myplot_ A .png' for writing
2: In png(file = mypath) : opening device failed
I'd highly appreciate some hints on that issue. Thanks in advance,
Olli
apparently the file.path function was the problem. I went another way and fixed that.
my solution:
dir <- "C:\\Users\\"
names =list(Pic1,Pic2,Pic3,Pic4,Pic5,Pic6,Pic7,Pic8,Pic9,P10)
for (i in 1:10){
mypath <- paste0(dir,"Training-Result",names[i],".png")
png(file=mypath)
mytitle = paste("Training PIC", names[i])
par(mfrow=c(3,1), oma=c(2,2,4,2))
boxplot(ERRORS.train.pic[[i]], outline=F, ylab="RMSE(-)", xlab="K-No")
abline(h = 0, col = "red")
plot(sapply(ERRORS.train.pic[[i]], median), ylab="MEDIAN-RMSE(-)", xlab="K-No",type="l", col="blue")
plot(sapply(ERRORS.train.pic[[i]], mean), ylab="MEAN-RMSE (-)", col ="red")
title(main= mytitle, outer=T)
dev.off()
If anyone could present a way to use file.path, I'd still appreciate that. Cheers!
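To answer that follow-up: the original failure was caused by paste(), whose default sep = " " puts spaces into the file name (visible as 'myplot_ A .png' in the warning). file.path itself works fine as long as the name component is glued together with paste0 (or paste with sep = "") first. A sketch, with the directory purely illustrative:

```r
for (i in 1:10) {
  mypath <- file.path("C:", "Users", paste0("Training-Result", names[i], ".png"))
  png(file = mypath)
  # ... same plotting code as above ...
  dev.off()
}
```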
| common-pile/stackexchange_filtered |
How can I use `pgffor` to iterate through a database of keys and properly retrieve them
I would like to create a command that stores a database. I then want to iterate over the database and print out the results to the document. This MWE shows two different approaches I've taken. But neither works.
\documentclass{article}
\usepackage{pgffor,pgfkeys}
\pgfkeys{/ae/breakfast/menu/.cd,
fruit/.initial = grape fruit,
bread/.initial = English muffin,
eggs/.initial = hard boiled,
}
\def\allrecords{%%'
{ fruit=apple,
bread=bagel,
eggs=scrambled
},
{ fruit=orange,
bread=toast,
eggs=fried
}
}
\def\aeget#1{\pgfkeysvalueof{/ae/breakfast/menu/#1}}
\def\whatIordered{ I ordered \aeget{fruit}, \aeget{bread}, and \aeget{eggs}.}
\pagestyle{empty}
\begin{document}
\whatIordered
%<approach 1>% this fails: "whitespace" getting in the way
%%\foreach \x in \allrecords {\pgfkeys{/ae/breakfast/menu/.cd,\x}\whatIordered\newline}
%<approach 2>% this also fails: keys are misread
%%\foreach \x in \allrecords {\foreach \y in \x {\pgfkeys{/ae/breakfast/menu/\y}} \whatIordered\newline}
\end{document}
Is there a means of using pgfkeys and then iterating over a database?
Although this may be a moot point, database management of this kind is well-handled by datatool.
This is simply an expansion issue.
If you define
\pgfkeys{style/.style={#1}}
you can use /style/.expanded=\something so that \something gets expanded before pgfkeys parses it.
Also using /.estyle makes it easier and gives a slightly better control over expansion context
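Applied to the MWE, the fix might look like this (a hedged sketch: the style name is mine; .expanded forces \x to be expanded to its key=value list before pgfkeys parses it):

```latex
\pgfkeys{/ae/breakfast/menu/set/.style={/ae/breakfast/menu/.cd,#1}}
\foreach \x in \allrecords {%
  \pgfkeys{/ae/breakfast/menu/set/.expanded=\x}%
  \whatIordered\newline
}
```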
| common-pile/stackexchange_filtered |
SyntaxError for Flask apscheduler
I am trying to launch a Flask App on AWS EC2 and I am trying to use flask-apscheduler to enable background threading. However, when importing the library, my code breaks.
/etc/httpd/logs/error_log:
mod_wsgi (pid=29266): Target WSGI script '/var/www/html/flaskapp/flaskapp.wsgi' cannot be loaded as Python module., referer: http:/$
mod_wsgi (pid=29266): Exception occurred processing WSGI script '/var/www/html/flaskapp/flaskapp.wsgi'., referer: http:/$
Traceback (most recent call last):, referer: http://example.com/map-day
File "/var/www/html/flaskapp/flaskapp.wsgi", line 6, in <module>, referer: http://example.com/map-day
from flaskapp import app as application, referer: http://example.com/map-day
File "/var/www/html/flaskapp/flaskapp.py", line 3, in <module>, referer: http://example.com/map-day
from flask_apscheduler import APScheduler, referer: http://example.com/map-day
File "/usr/local/lib/python2.7/site-packages/flask_apscheduler/__init__.py", line 17, in <module>, referer: http://example.com$
from apscheduler.schedulers.base import STATE_PAUSED, STATE_RUNNING, STATE_STOPPED, referer: http://example.com/map-day
File "/usr/local/lib64/python2.7/site-packages/apscheduler/schedulers/base.py", line 19, in <module>, referer: http:/$
from apscheduler.jobstores.memory import MemoryJobStore, referer: http://example.com/map-day
File "/usr/local/lib64/python2.7/site-packages/apscheduler/jobstores/memory.py", line 4, in <module>, referer: http:/$
from apscheduler.util import datetime_to_utc_timestamp, referer: http://example.com/map-day
File "/usr/local/lib64/python2.7/site-packages/apscheduler/util.py", line 141, referer: http://example.com/map-day
values = {k: int(v or 0) for k, v in values.items()}, referer: http://example.com/map-day
^, referer: http://example.com/map-day
SyntaxError: invalid syntax, referer: http://example.com/map-day
A quick search on the Internet reveals few results of similar problems and are experienced by users using Python 2.6 while importing other libraries. These users were recommended to upgrade to Python 2.7 and their issue seemed to be resolved by doing so thereafter.
I am, however, using Python 2.7.14 and Flask-APScheduler 1.10.1. While I can surely use another library to do background threading, I am curious to find out if I am missing something - the issue was with dictionary comprehension that cannot be done using Python 2.6, yet I am experiencing the same issue using 2.7. Am I missing something?
@DeepSpace what do you mean?
Yeo Ignore that, my bad, I was thrown off by Flask's debugging logs. However, are you sure you execute the code with a Python 2.7 interpreter?
The line contains a dictionary comprehension, a feature added in Python 2.7. Running it on Python 2.6 will result in exactly the syntax error you are showing us. So I guess you should double check your Python version.
After diving further, I found out that specifying the python version in my flask wsgi configuration does not affect which python is used to execute my flask app.
Running this gives the system default for my RHEL VM which is Python 2.6:
$ which python
/usr/bin/python
While I can change the python default version or create a venv and specify the python to be used, I have switched to another distribution using Python 2.7 as the system default due to this among other reasons. Hope this will help anyone who is experiencing a similar problem.
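The failing line is a dict comprehension, which only exists from Python 2.7 on; under a 2.6 interpreter it is a SyntaxError no matter which package versions are installed. A quick sanity check to run with whatever interpreter mod_wsgi actually uses (the sample dict here is made up):

```python
import sys

print(sys.version)  # confirm which interpreter is really executing

# The construct from apscheduler/util.py, on toy data:
values = {"hour": "4", "minute": None}
parsed = {k: int(v or 0) for k, v in values.items()}
print(parsed)  # {'hour': 4, 'minute': 0}
```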
| common-pile/stackexchange_filtered |
Should my flatmate remove her pets?
I am living in a shared house in Northern Ireland, UK, with five people. One of my flatmates has had eight pet rats in her room for three months now. The rest of my flatmates and I have concluded that our health is at risk.
According to my contract and the rest of my flatmates contract, no pets are allowed.
We all want the rats to be removed from our house, is there anything we can do about it?
In addition to seeking legal advice it might be worth checking in with https://interpersonal.stackexchange.com/ as they may have suggestions about how to manage the communication with this roommate regarding their rats.
Your health is not at risk; domestic rats are no more hazardous to humans than domestic cats or dogs. She still has no right to have them, though.
We all want the rats to be removed from our house, is there anything
we can do about it?
From a purely legal point of view, probably not. While the pet owner is clearly violating her rental contract (assuming it forbids pets), the rental contract is between her and the flat owner. That means only the owner can enforce the terms. If they choose to ignore her contract violation, legally there is no way you can force them to enforce the contract.
The only legal options I can imagine would be if your personal rights are somehow violated by the pets - for example, if they smell excessively, make a lot of noise, spread contagious diseases or similar. However, for well-kept pet rats I don't see how you could make that case.
So, practically speaking, the only option is to ask nicely (either the flatmate directly, or the flat owner). If they do not want to do something, you'll have to live with the rats (or move out).
| common-pile/stackexchange_filtered |
CodeMirror - Check if cursor is at end of line
I set readonly lines in my editor this way:
editor.on('beforeChange', function(cm, change) {
if (~readOnlyLines.indexOf(change.from.line)) {
change.cancel();
}
}
Where readOnlyLines is an array containing numbers of the lines to be readonly.
The problem is that when I am on an editable row with a readonly one below, if I press "Del" the readonly row below is merged up into my line and I can edit it.
The same doesn't work if I have a readonly row above and I press "BackSpace".
I think I should add an if that checks if at the same time:
Del is pressed (I used a catch event)
The line below is readonly (I did it the same way I did with the if in the code above)
The cursor is at the end of line (Does a specific function exist?)
The cursor is at the end of line (Does a specific function exist?)
if (cm.doc.getLine(change.from.line).length == change.from.ch) {
If the readOnlyLines array is a range of contigous lines you may do something like:
$(function () {
var editor = CodeMirror.fromTextArea(document.getElementById('txtArea'), {
lineNumbers: true
});
var readOnlyLines = [1,2,3];
editor.on('beforeChange', function(cm, change) {
if (~readOnlyLines.indexOf(change.from.line)) {
change.cancel();
} else {
// if you are deleting on the row before the next readonly one
if ((change.origin == '+delete') && ~readOnlyLines.indexOf(1+change.from.line)) {
// when you press DEL at the end of current line
if (cm.doc.getLine(change.from.line).length == change.from.ch) {
change.cancel();
}
// if you are deleting the whole line
if (cm.doc.getSelection() == cm.doc.getLine(change.from.line)) {
change.cancel();
}
// if the line is empty
if (cm.doc.getLine(change.from.line).trim().length == 0) {
change.cancel();
}
}
}
});
});
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.16.0/codemirror.css" rel="stylesheet">
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.16.0/codemirror.js"></script>
<textarea id="txtArea">
1111
2222 READ ONLY
3333 READ ONLY
4444 READ ONLY
5555
6666
7777
</textarea>
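The cancellation logic in the answer above is CodeMirror-specific, but the three checks themselves are easy to state in any language. Here is a hedged sketch in plain Python (should_cancel_delete is a hypothetical helper, not part of the CodeMirror API):

```python
# Hedged sketch (plain Python, not CodeMirror): the three guard conditions
# from the answer above, as applied when Del is pressed on the line
# directly above a read-only line.
def should_cancel_delete(line_text, cursor_ch, selection):
    at_end_of_line = len(line_text) == cursor_ch       # getLine(...).length == change.from.ch
    whole_line_selected = selection == line_text       # getSelection() == getLine(...)
    line_is_empty = line_text.strip() == ""            # getLine(...).trim().length == 0
    return at_end_of_line or whole_line_selected or line_is_empty
```

Any one of the three conditions being true is enough to cancel the change, which matches the three separate if blocks in the answer.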
WinSock Send Multiple Request & Shutdown Issue
I need to send multiple requests (commands) to a server at port 55005.
My first request gets processed successfully and I receive output as well.
But for the second request it gives an error (WSAESHUTDOWN - error 10058).
After first request I call
shutdown(ConnectSocket, SD_SEND);
Only then does the server process the first request and send me the output.
Now, can I re-open the socket to process the next request?
How can I process multiple requests after shutdown(ConnectSocket, SD_SEND)?
Thanks in Advance for your suggestions.
I send the first request , wait for first reply. After first request reply, I can send next request. This is business logic requirement. I don't want to open a new connection for each request.
Code snapshot starts here:
// This for loop sends multiple requests to the server.
for(it = CommandList.begin(); it != CommandList.end() ; it++ )
{
//get each command request & send it to server.
std::string sendBuf; // = (*it);
sendBuf= *it;
int length = (int)strlen(sendBuf.c_str());
//----------------------
// Send an initial buffer
iResult = send( ConnectSocket, (char*)sendBuf.c_str(), length, 0 );
if (iResult == SOCKET_ERROR) {
wprintf(L"send failed with error: %d\n", WSAGetLastError());
closesocket(ConnectSocket);
WSACleanup();
return 1;
}
printf("Bytes Sent: %d\n", iResult);
// shutdown the connection since no more data will be sent
iResult = shutdown(ConnectSocket, SD_SEND);
if (iResult == SOCKET_ERROR) {
wprintf(L"shutdown failed with error: %d\n", WSAGetLastError());
closesocket(ConnectSocket);
WSACleanup();
return 1;
}
// Receive until the peer closes the connection
do {
iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
if ( iResult > 0 )
wprintf(L"Bytes received: %d\n", iResult);
else if ( iResult == 0 )
wprintf(L"Connection closed\n");
else
wprintf(L"recv failed with error: %d\n", WSAGetLastError());
} while( iResult > 0 );
}
Why would you shutdown the socket for sending when you have more data to send? It doesn't make any sense.
And why do you use strlen to get the length of a std::string?
I send the first request , wait for first reply. After first request reply, I can send next request. This is business logic requirement. I don't want to open a new connection for each request.
And so what? You still have more requests to send, so don't shutdown. It's as simple as that. You should not do it unless you have absolutely no more to send. You can still receive data, all sockets are fully duplex, i.e. can send and receive at the same time.
I don't want to open a new connection for each request.
shutdown tells the server the first request is complete, so the server can process it and reply. Up to that point it looks good. But after the shutdown, how can I send my next request?
Once you shutdown you have to open again. You should think about modifying the protocol to include some kind of end-of-request marker, or to include the length of the message.
As Joachim said, don't call shutdown until you are done with the connection. This TCP socket is a "stream"... stuff can go back and forth at will. You as a programmer have to determine where "requests" and "responses" begin and end... don't use shutdown for that.
Since you seem to be sending zero-terminated strings, use a zero as your request/response delimiter. Instead of sending length, send sendBuf.length() + 1. Delete the shutdown code (or move it to after the loop). In your receive loop, keep appending to a std::string until you receive a 0 in the data.
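The length-prefix alternative mentioned above can be sketched as follows. This is an illustrative Python sketch, not the asker's actual protocol: each message travels as a 4-byte big-endian length followed by the payload, so neither side ever needs shutdown() to mark a message boundary.

```python
import struct

def frame(payload):
    # Prepend a 4-byte big-endian length so the receiver knows where this
    # message ends without needing a connection shutdown.
    return struct.pack(">I", len(payload)) + payload

def unframe(buffer):
    # Split one complete framed message off the front of a receive buffer;
    # returns (message, remaining_bytes).
    (length,) = struct.unpack(">I", buffer[:4])
    return buffer[4:4 + length], buffer[4 + length:]
```

The receive loop then keeps appending to a buffer and calls unframe whenever at least 4 + length bytes are available, which keeps the socket fully duplex for the whole session.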
Error while writing Live Transcribing Phone Calls using Twilio Media Streams and Google Speech-to-Text
We followed this link - twilio.com/blog/live-transcribing-phone-calls-using-twilio-media-streams-and-google-speech-text - in which the part of the code below gives us an error:
//Create Stream to the Google Speech to Text API
recognizeStream = client
.streamingRecognize(request)
.on("error", console.error)
.on("data", data => {
console.log(data.results[0].alternatives[0].transcript);
wss.clients.forEach( client => {
if (client.readyState === WebSocket.OPEN) {
client.send(
JSON.stringify({
event: "interim-transcription",
text: data.results[0].alternatives[0].transcript
})
);
}
});
});
break;
case "start":
console.log(`Starting Media Stream ${msg.streamSid}`);
break;
case "media":
// Write Media Packets to the recognize stream
recognizeStream.write(msg.media.payload);
break;
case "stop":
console.log(`Call Has Ended`);
recognizeStream.destroy();
break;
}
});
});
Error:
recognizeStream.write(msg.media.payload);
TypeError: Cannot read property 'write' of undefined
at WebSocket.incoming (C:\Users\Administrator\Documents\COE\Augular-ALP\route\routes.js:210:31)
at WebSocket.emit (events.js:315:20)
at Receiver.receiverOnMessage (C:\Users\Administrator\Documents\COE\Augular-
ALP\node_modules\ws\lib\websocket.js:789:20)
Please guide us in solving this error!
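The TypeError above means recognizeStream is still undefined at the moment the first "media" event arrives (the branch that creates it has not run yet, or the variable is scoped too narrowly). A language-agnostic sketch of the guard, in Python with hypothetical names:

```python
# Sketch: drop (or queue) media packets until the recognize stream exists,
# instead of calling .write() on an undefined value.
class MediaHandler:
    def __init__(self):
        self.recognize_stream = None   # created later, when the stream starts

    def on_media(self, payload, sink):
        if self.recognize_stream is None:
            return False               # stream not ready yet; skip this packet
        sink.append(payload)           # stands in for recognizeStream.write(...)
        return True
```

The same check in the original JavaScript would be a simple `if (recognizeStream)` before the write, with recognizeStream declared in a scope visible to every case of the switch.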
I need some help formatting a file in an organized way
I'm writing a program to update, delete and list all tools from a hardware store in a file. I'm able to update and list all the tools in the file, but they are in a very unorganized format on the screen. Here is my output code. Could someone give me an idea on how to format this so the headers in the file line up more evenly? Thanks.
#include <iostream>
#include <fstream>
#include <string>
#include "HardWare.h"
#include <iomanip>
using namespace std;
int main()
{
int ch = 0, count=0, rNo, qty;
string fileName, h1, h2, h3, h4,hName;
double c;
ifstream inFile;
ofstream outFile;
HardwareData hwd[10];
cout<<endl<<endl<<"Enter 1 for opening data file."<<endl;
cout<<"Enter 2 to list all records."<<endl;
cout<<"Enter 3 to add a record."<<endl;
cout << "Enter 4 to delete an entry."<<endl;
cout<<"Enter 5 to exit the program."<<endl;
cout<<"Choice: ";
cin>>ch;
while(ch!=5)
{
switch(ch)
{
case 1:
inFile.open("Hardware.dat");
if(!inFile)
break;
case 2:
{
while(!inFile.eof())
{
inFile>>h1>>h2>>h3>>h4;
cout<<h1<<"\t"<<h2<< "\t"<<h3<<"\t"<<h4 <<endl;
}
}
break;
This is the Hardware.dat file:
Record_num Tool_name Quantity Cost
3 Electric Sander 7 57.98
17 Hammer 76 11.99
24 Jig Saw 21 11.00
39 Lawn Mower 3 79.50
56 Power Saw 18 99.99
68 Screwdriver 106 6.99
77 Sledge Hammer 11 21.50
83 Wrench 34 7.50
Welcome to Stack Overflow. Could you give us an example of Hardware.dat, so that we can see what you're talking about? (You can hit edit at the bottom of your question and add the new text.)
Look at std::setw(), it's part of iomanip
If you use setw(n) to adjust the space taken up by each field, it will space them correctly, provided n is greater than the maximum number of characters in that field.
I believe the output is auto aligned left, but if not you will need to add in std::left as well (or just left since you are using the namespace std).
while(!inFile.eof())
{
inFile>>h1>>h2>>h3>>h4;
cout << setw(10) << h1;
cout << setw(10) << h2;
cout << setw(10) << h3;
cout << setw(10) << h4;
cout << endl;
// cout <<h1<<"\t"<<h2<< "\t"<<h3<<"\t"<<h4 <<endl;
}
Thanks! That fixed the format issue, but for some reason, my output now is not correct. It seems completely random. The tool names are not under the tool name header etc. Would you suggest a better way of reading the data in from the file?
@PerrinHawver Could you add what your new output appears like to the question?
@PerrinHawver It is either because of one or both of the following. The value of n in setw(n) is too small ("Electric Sander") is 15 characters so for that field your value of n needs to be at least 15. Or it could be that the window you are printing the output to is too small and so the output overflows onto the next line, make the window larger before printing to it.
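Two things are worth noting here. First, the stream extraction inFile>>h1>>h2>>h3>>h4 splits on whitespace, so a multi-word name like "Electric Sander" consumes two fields and shifts every later column, which would also explain the "completely random" output described in the comments. Second, the setw(n) advice generalizes: derive each column's width from the longest value instead of guessing n. A hedged Python sketch of that width computation (sample rows taken from Hardware.dat):

```python
# Sketch: compute each column's width from the data, the dynamic equivalent
# of choosing a large-enough n for setw(n).
rows = [
    ["Record_num", "Tool_name",       "Quantity", "Cost"],
    ["3",          "Electric Sander", "7",        "57.98"],
    ["17",         "Hammer",          "76",       "11.99"],
]
widths = [max(len(row[col]) for row in rows) for col in range(len(rows[0]))]
for row in rows:
    print("  ".join(cell.ljust(width) for cell, width in zip(row, widths)))
```

In the C++ version this corresponds to a first pass over the file to find the widest entry per column, then a second pass printing with setw(width).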
Visual Studio 2010 Code Coverage - Cannot find the back up file, created by instrumentation utility
I am trying to run code coverage in VS 2010 and I am running into the following error. No coverage information is generated.
Code coverage in-place instrumentation: Cannot fully backup the binary 'MyProject.dll'. Cannot find the back up file, created by instrumentation utility: 'MyProject.dll.orig'.
Check which artifacts are selected for code coverage for your test run config - it's quite well hidden:
Under menu: Test -> Edit Test Settings select your active test setting
In the Test settings Dialog select the Data and Diagnostics tab
Select the Code Coverage item, and then hit the Configure button above
This opens up the Code Coverage Detail window. Check the MyProject.dll that is giving you problems: Does the dll exist? Is it a debug build? Is it signed? Do you have any tests which hits this project?
According to SP's answer to a similar question on MSDN, this error can occur if the project, or a file in the project, does not contain instrumentable code - e.g., interfaces only, or resources only. Is that the case in your project?
Binding TabItem Visibility
I'm trying to have my TabItem Collapsed or Hidden. I've tried many solutions and none have worked; the TabItem still remains visible.
If I may get some guidance please.
one solution I've Tried
<TabItem >
<TabItem.Header>
<StackPanel Visibility="Collapsed">
<TextBlock Text="Transactions" />
</StackPanel>
</TabItem.Header>
<panes:Transactions />
</TabItem>
private Visibility statementVisibility;
public Visibility StatementVisibility { get { return statementVisibility; } set { statementVisibility = value; OnPropertyChanged("StatementVisibility"); } }
Changed "Collapsed" to StatementVisibility and still nothing.
UPDATE:
After poking around, I've found a link to the TabItems that I think may play a factor.
Generic.xaml
<ListBox Foreground="#FFF" Name="TabSelector" Grid.Row="2" ItemsSource="{Binding Path=Items, ElementName=Tabs}">
<ListBox.Background>
<SolidColorBrush Color="#333"/>
</ListBox.Background>
<ListBox.ItemTemplate>
<DataTemplate>
<Border BorderThickness="0 0 0 1" SnapsToDevicePixels="False" BorderBrush="#22000000">
<TextBlock FontSize="14" Height="30" VerticalAlignment="Center" Margin="0" Padding="6" Text="{Binding Header}"/>
</Border>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
<Border Grid.Column="1" Grid.Row="2" Background="White" BorderThickness="0">
<ContentPresenter Name="PART_TabbedFormPresenter"
Content="{Binding TabbedForm, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type shell:ActionScreenControl}}}"
DataContext="{Binding DataContext, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type shell:ActionScreenControl}}}">
<ContentPresenter.Resources>
<Style TargetType="TabItem">
<Setter Property="Visibility" Value="Collapsed"/>
<Setter Property="BorderThickness" Value="0"/>
</Style>
</ContentPresenter.Resources>
</ContentPresenter>
</Border>
Loanview.xaml.cs
<shell:ActionScreenControl.TabbedForm>
<TabControl>
<TabItem......./>
<TabItem......./>
<TabItem >
<TabItem.Header>
<StackPanel Visibility="Collapsed">
<TextBlock Text="Transactions" />
</StackPanel>
</TabItem.Header>
<panes:Transactions />
</TabItem>
</TabControl>
</shell:ActionScreenControl.TabbedForm>
Create another tab with nothing but a simple TextBlock and test.
Hi @Blam it shows "System.Windows.Controls.TextBlock"
Then you are doing something wrong it works from me. Post a simple but complete example to reproduce the problem
@Blam Updated code btw if you didn't notice, thanks for trying though. I Got it to collapse. Had to set DataTrigger for the ListBox.ItemContainerStyle
This is from production code and it works
<TabItem Visibility="{Binding Path=MyGabeLib.CurUser.DisplayTSQL, Converter={StaticResource bvc}}">
<TabItem.Header>
<TextBlock Style="{StaticResource HeaderTextBlockStyle}">TSQL</TextBlock>
</TabItem.Header>
<ScrollViewer VerticalScrollBarVisibility="Visible">
<TextBox Text="{Binding Path=MyGabeLib.Search.CurrentTSQL, Mode=OneWay}" IsReadOnly="True"
TextWrapping="Wrap" FontFamily="Courier New"/>
</ScrollViewer>
</TabItem>
If you are returning Visibility then you would not need a converter
Try with a simple TextBlock - I suspect you have a datacontext problem
Try setting the Visibility property on the actual TabItem itself:
<TabControl>
<TabItem Visibility="Collapsed">
<TabItem.Header>
<StackPanel>
<TextBlock Text="Transactions" />
</StackPanel>
</TabItem.Header>
<panes:Transactions />
</TabItem>
</TabControl>
Ahhhh... you want to data bind. Then you'll need to use a BooleanToVisibilityConverter element and a bool property:
<TabItem Visibility="{Binding YourBoolProperty,
Converter={StaticResource BooleanToVisibilityConverter}">
<TabItem.Header>
<StackPanel>
<TextBlock Text="Transactions" />
</StackPanel>
</TabItem.Header>
<panes:Transactions />
</TabItem>
See the IValueConverter Interface page on MSDN to see how to use a converter.
Tried that and still appearing but instead it says System.Windows.Controls.Stackpanel
Try using it in a new project... yours sounds messed up. In a new project, you won't see this TabItem... of course you need to put it into a TabControl first.
I've tried Boolean to Visibility Converter also. I just set Collapsed to see if it'll actually collapse before setting bindings.
Hi Sheridan, your comment made me very curious if there was more to my tabItem and it looks like it's encapsulated within some other stuff.
I think edits have to be made to the ContentPresenter, but I'm unsure how to do that for the visibilities.
How to design an architecture of a system as described by Uncle Bob?
As per the "Software Architecture" explained by Uncle Bob, your architecture should be able to defer the framework and DB related decisions as much as possible.
Consider an example of a Payroll system to be developed in Java. I assume that the core application will be a standalone jar file. The "delivery mechanism" will be over the web, a separate war file. The DB will again be a separate jar file.
The webapp as well as the DB project should be dependent on the core application.
I am a bit confused here, how to organize different projects? The webapp will have the core application as a dependency. So, what about the DB project?
The organization of your project is entirely up to you. Your project's organization doesn't really have much to do with your software architecture, except that good organization tends to reflect the architectural perspective. Make sure you understand why Bob's principles apply in specific situations.
Your core application should go into a (static) library - or possible multiple libraries, one for the entities, one or more others for use cases, etc. This depends on the size of your application. For simplicity, I'll assume that the entities and use cases are all in a core library.
The core library contains interfaces for communicating with the rest of the world.
Now you can have additional infrastructure libraries:
a library for the web-interface
a library for the DB
Both of these implement some of the core library's interfaces, so they obviously have a dependency on the core library (but not the other way around). Therefore, you can replace any of them later with a different implementation.
The core library classes expect to be given implementations of the interfaces. This is where you "plug in" the actual implementations from your infrastructure libraries. For this purpose, you will have a separate project that ties all of your libraries together via dependency injection, and produces your final executable that you can deploy.
Typically, your DB and webapp will not be literally plugins in the sense that they are dynamically loaded at runtime. Rather, they are kept separately only until the final executable is created.
Of course, you can have the DB, etc. as dynamically loaded libraries, but this only makes sense if you have some reason for extending the functionality after deployment.
Final notes: nothing forces you to put things into separate libraries, but I recommend it. Also, I'm still waiting for the clean architecture book, but I'm assuming it will basically follow this one.
Extending my answer to address your comment
Maybe I can clarify with a more specific example:
If you use an IDE, and you create a java project, it will probably ask you whether you want to create a library, or an executable. First, you create a library, called "core".
In that library, you implement your controllers (e.g. ReportController, EmployeeManager), your entities (e.g. Report, Employee, Department, etc.) and interfaces for things like database access - e.g. EmployeeRepository. The EmployeeManager takes a reference to an EmployeeRepository in its constructor that it can use to add and remove employees. (See dependency injection).
Now you might create a new library project and call it "mysqlDB". You make it reference your core library and create a class MysqlEmployeeRepository that implements EmployeeRepository. Separating this out allows you to later replace it with an OracleEmployeeRepository without having to touch the core library at all. This is what's meant by the DB being a plug in to the application.
However, usually (for moderately sized projects), there is no need to allow for the DB to be plugged in at runtime. Instead, you often just want to end up with a single executable that you can deploy. So as a final step, you add a new project called "MyPayrollSystem" that references your core library, your DB library and the rest.
In this project, all you do is
create an instance of your MysqlEmployeeRepository
create an instance of your EmployeeManager and give it your MysqlEmployeeRepository instance.
do similar things to set up all other parts of your system, e.g. the web interface
What you end up with is a complete self-contained executable, but your individual components are only put together (plugged in) in the last part of your compilation.
Again, I want to stress that this is not the only way to do it. Sometimes, you may want to defer loading the correct library even more (i.e. to runtime), so you may load the DB library dynamically depending on a config file or command line parameters. But this is a step you should only take if your really need it.
I got the overall idea now. I am still a bit unsure how to achieve the "you will have a separate project that ties all of your libraries together via dependency injection" thing in a real life project, but still, I get the high-level picture.
I have edited my answer. I hope this clears up how the parts are eventually assembled and why.
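The composition-root wiring described in the answer above can be condensed into a short sketch. Python stands in for Java here, and the names (EmployeeRepository, InMemoryEmployeeRepository) are illustrative stand-ins for the answer's examples:

```python
from abc import ABC, abstractmethod

# Core "library": use cases plus the interfaces it owns.
class EmployeeRepository(ABC):
    @abstractmethod
    def add(self, name): ...
    @abstractmethod
    def all(self): ...

class EmployeeManager:
    def __init__(self, repo):      # constructor (dependency) injection
        self._repo = repo
    def hire(self, name):
        self._repo.add(name)

# Infrastructure "library": depends on core, never the other way around.
class InMemoryEmployeeRepository(EmployeeRepository):
    def __init__(self):
        self._names = []
    def add(self, name):
        self._names.append(name)
    def all(self):
        return list(self._names)

# Composition root ("MyPayrollSystem"): the only place that knows both sides.
repo = InMemoryEmployeeRepository()
manager = EmployeeManager(repo)
manager.hire("Ada")
```

Swapping InMemoryEmployeeRepository for a database-backed implementation touches only the composition root, which is exactly the "DB as a plugin" property being discussed.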
As Uncle Bob says, your application should be the group of use cases that define it, as well as the business rules that you have.
Regardless of your project's organization, you should be able to test your business rules (application) without any Web app or DB. The data persistence must be a plugin to your application.
Any additional component must be a plugin to your application, and in production you can resolve all dependencies in the initialization (eg: use Dependency Injection to make your application rely on abstractions, and in the 'main' part of your actual application, you resolve all dependencies using some DI framework).
If you can achieve the above, the your project's organization will reflect its architecture naturally.
Exactly, the GUI (webapp) and the DB should be plugins. This means that the webapp depends upon the application, and the DB depends upon the application. How do you achieve such a structure with Java/Maven? I am sure you can add the dependencies as jar files, but how will you deploy it?
Run multiple instances of gtkmm applications under linux
I'm working on a C++ project that uses the gtkmm libraries. Each application is assigned an application ID during startup; this is required by Gtk::Application.
Because of that, I'm not able to run a second instance.
How can I run multiple instances?
You mean a second instance of Gtk::Application within the same process?
No, I mean a second process: when I click the executable to run it, it does not run if one is already running (open).
So two instances of the same executable? Have you tried explicitly running it from the command line twice? I don't think this is a Gtk limitation.
thanks, I tried that, and the second instance just terminates with no output to the console.
Run the first instance, and then step through the second instance using a debugger to see what happens. For instance, an error code might be thrown by the gtkmm framework before it terminates, if that is in fact what is happening.
You have to pass the APPLICATION_NON_UNIQUE value from ApplicationFlags using the set_flags() method of your application.
ffmpeg converting m4s to mp4
I'm working on DASH, trying to optimize QoE for the end user.
I had a video and encoded it using ffmpeg into different bitrates and everything is fine, and the video is playable using dash.
What I want is to combine the received segments from the users into one m4s and convert that m4s to mp4.
I tried a lot of approaches in ffmpeg, but it always gives me this error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x9a9e500] could not find corresponding track id 1
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x9a9e500] could not find corresponding trex
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x9a9e500] error reading header
test2.m4s: Invalid data found when processing input
segment_1.m4s is there.
How can I resolve this issue?
You are missing the initialization fragment at the start of the file.
@szatmary I encoded the videos myself and 38 segments are all I got. They are all combined into that m4s file. Where should the initialization segment be?
This works
Combine the m4s segments together into one file, making sure the file order is correct. E.g.
cat video-0.m4s >> all.m4s
cat video-1.m4s >> all.m4s
cat video-2.m4s >> all.m4s
cat video-3.m4s >> all.m4s
cat video-4.m4s >> all.m4s
cat video-5.m4s >> all.m4s
cat video-6.m4s >> all.m4s
cat video-7.m4s >> all.m4s
cat video-8.m4s >> all.m4s
cat video-9.m4s >> all.m4s
cat video-10.m4s >> all.m4s
And then do all your conversions at once.
ffmpeg -i all.m4s -c copy video.mp4
This doesn't
I get the same issue (could not find corresponding trex) trying the streaming method.
I had all the files I wanted in a all.txt file, which contained
file 'video-0.m4s'
file 'video-1.m4s'
file 'video-2.m4s'
file 'video-3.m4s'
file 'video-4.m4s'
file 'video-5.m4s'
file 'video-6.m4s'
file 'video-7.m4s'
file 'video-8.m4s'
file 'video-9.m4s'
file 'video-10.m4s'
And I tried ffmpeg -f concat -safe 0 -i all.txt -c copy video.mp4, resulting in the same issue.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55fde2d0c520] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1280x720): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55fde2d0c520] Auto-inserting h264_mp4toannexb bitstream filter
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55fde2d0c520] could not find corresponding track id 1
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55fde2d0c520] could not find corresponding trex
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55fde2d0c520] error reading header
[concat @ 0x55fde2cff900] Impossible to open 'rifle-1.m4s'
[concat @ 0x55fde2cff900] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 1280x720): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, concat, from 'all.txt':
[... omitted ...]
all.txt: Input/output error
Yes, same issue; all the files are in the same place but it still gives the error.
What about the init segment? What needs to be done for the init segment?
@user8783065 you concatenate the m4s files after the init segment, I do for x in *.dash *.m4s; do cat $x >> output.mp4; done where the init segment is a .dash file
I agree with the other answers but I found a case that is slightly different.
When concatenating the m4s files I got the same "could not find corresponding trex" error.
I got this m3u8 playlist:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:5
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="/mypath/something.mp4"
#EXTINF:3.000,
something001.m4s
#EXTINF:3.000,
something002.m4s
#EXTINF:3.000,
something003.m4s
#EXTINF:3.000,
something004.m4s
#EXT-X-ENDLIST
And I had to concatenate all of them starting with the mp4 file. The mp4 file was only 1.1kB and had no video but only the headers, I guess.
After concatenating the files it may be playable by some software, but it's not very compatible, so you then have to let ffmpeg fix it for you.
This worked for me:
ffmpeg -i allgluedtogetherinorder.mp4 -vcodec copy -strict -2 video_out.mp4
UPDATE:
If your ffmpeg had the m3u support activated at compile time then you can convert the playlist to mp4 and it will resolve everything and give you a perfect video.
You may need to edit the paths to the files in the m3u files so it can find them.
Then:
ffmpeg -i playlist.m3u -vcodec copy -strict -2 video_out.mp4
The first file to be concatenated is the initialization segment, followed by the other segment files.
windows powershell:
Sort and list the Video files
Get-ChildItem -name v1*.m4s| Sort-Object { [regex]::Replace($_, '\d+', { $args[0].Value.PadLeft(20) }) } > list.txt
Audio files
Get-ChildItem -name a1*.m4s| Sort-Object { [regex]::Replace($_, '\d+', { $args[0].Value.PadLeft(20) }) } > list.txt
( refer : How to sort by file name the same way Windows Explorer does?)
type *init*.mp4 list.txt > Filename.mp4
Linux :
ls -1v v1*.m4s > list.txt
cat *init*.mp4 list.txt > Filename.mp4
Please check that all your data is present in init.mp4 (SPS, PPS, etc.).
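Both commands above go out of their way to get an Explorer-style numeric sort, so that segment 10 does not sort before segment 2. A hedged Python sketch of the same idea (filenames are hypothetical):

```python
import re

def natural_key(name):
    # Split digit runs out so "v1-10.m4s" compares numerically after "v1-2.m4s",
    # mirroring the PowerShell regex-padding trick above.
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", name)]

segments = ["v1-10.m4s", "v1-2.m4s", "v1-1.m4s"]
ordered = ["v1-init.mp4"] + sorted(segments, key=natural_key)
# Byte-level concatenation would then be, e.g.:
# with open("Filename.mp4", "wb") as out:
#     for name in ordered:
#         with open(name, "rb") as part:
#             out.write(part.read())
```

The init segment always goes first, since it carries the headers (SPS/PPS, trex boxes) that every following fragment refers back to.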
Autofac: creating nested scopes per instance on-the-fly
I would like to implement an application-wide container and a (nested) one for each project created by the user. I looked into Owned<T>, but then - as far as I could figure it out - my internal collection of projects would have to be <Owned<Project>> which I do not want and also I failed to inject a project dependency into objects used within the project scope ("circular component dependency"). I considered using a new ContainerBuilder within the project factory, but then the "nested" aspect is missing.
A few exapmles of classes (with the dependencies) I would like to have:
In a global scope: ProjectManager(IProjectFactory)
In each project's scope: Project(IDocumentFactory documentFactory), Document(IProject project, IProjectSettings settings).
So for the project's scope I would register IDocumentFactory, IProjectSettings (and the project itself?).
When a project is closed/disposed all created dependencies should, of course, also be disposed.
If possible, the concrete classes (except for the ProjectFactory) should be Autofac-agnostic.
FYI: The application is a desktop application using C# and Autofac 4.8.
Thanks!
UPDATE: Thanks for your comments, the discussion helped me find my own opinion. Currently I'm settling for something like this in my ProjectFactory:
public Project Create()
{
var scope = _globalScope.BeginLifetimeScope(MyIocHelper.RegisterProjectDependencies);
var p = scope.Resolve<Project>();
_projectScopes.Add(p, scope);
p.Disposing += project_Disposing;
return p;
}
Things to note:
As far as I can tell, using a tag for the lifetime scope is not necessary.
Project raises a Disposing event when its Dispose method is called the first time.
The factory keeps a Dictionary<Project, ILifetimeScope> and cleans it up when the project is disposed.
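The bookkeeping in that factory can be stated language-agnostically. In this hedged Python sketch, FakeScope and the handler list stand in for Autofac's ILifetimeScope and the C# Disposing event; the key point is that Project stays container-agnostic while the factory owns and closes the scope:

```python
class FakeScope:
    # Stand-in for ILifetimeScope.
    def __init__(self):
        self.disposed = False
    def dispose(self):
        self.disposed = True

class Project:
    # Container-agnostic: only announces its own disposal.
    def __init__(self):
        self.on_disposing = []         # stands in for the Disposing event
    def dispose(self):
        for handler in self.on_disposing:
            handler(self)

class ProjectFactory:
    def __init__(self):
        self._scopes = {}              # Dictionary<Project, ILifetimeScope>
    def create(self):
        scope = FakeScope()            # _globalScope.BeginLifetimeScope(...)
        project = Project()            # scope.Resolve<Project>()
        self._scopes[project] = scope
        project.on_disposing.append(self._project_disposing)
        return project
    def _project_disposing(self, project):
        self._scopes.pop(project).dispose()
```

Because the factory both opens and closes every scope, the "only dispose scopes you created" rule from the discussion below is respected without the Project ever referencing the container.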
Actually, it just occurred to me that you probably want to have some collection of documents in the project and use it somewhere, for example, on UI to show doc names to the user. In this case I would suggest against using autofac for managing documents. Container does not have such a function as providing a list of documents currently available in the scope, so you'll have to keep track of them in the project. But now you are introducing conflict of lifecycle management - now Project AND container are concerned about document objects lifetime, and who should be responsible for disposing them?
So, please clarify the very high level functions of this system so that we could understand lifecycles of the entities involved. Better not to provide your thoughts on the implementation because it will not be useful while not having design settled first.
I get your notion about conflicting dispose chains and am considering using autofac for the factories only - also because of the performance hit I'm expecting from resolving the dependencies for every instance (of documents and nested items).
You can accomplish what you are looking for with a combination of named lifetime scopes and instance-per-lifetime-scope registrations.
Documentation here: http://autofac.readthedocs.io/en/latest/lifetime/working-with-scopes.html#tagging-a-lifetime-scope
You need to:
register your ProjectManager as SingleInstance
register Project as this:
builder.RegisterType<Project>()
       .As<IProject>()
       .InstancePerMatchingLifetimeScope("project");
This will guarantee that a Project can be resolved (e.g. by a Document) once per each scope tagged as "project".
Implement an OpenProject (or something along) method in ProjectManager. This method should instantiate a LifetimeScope tagged as "project", register in it the IDocumentFactory, IProjectSettings, so they are resolved only once for each project scope, and attach the scope itself onto the Project instance. This is crucial: you need the scope to be disposed when you dispose the project.
public class ProjectManager : IProjectFactory
{
private readonly ILifetimeScope _scope;
public ProjectManager(ILifetimeScope scope)
{
// this is going to be the global scope.
_scope = scope;
}
public Project OpenProject(IDocumentFactory docFactory, IProjectSettings settings)
{
        var projectScope = _scope.BeginLifetimeScope("project", b =>
        {
            // ILifetimeScope has no RegisterInstance; registrations for a
            // child scope go through the ContainerBuilder passed to
            // BeginLifetimeScope.
            b.RegisterInstance(docFactory).AsImplementedInterfaces();
            b.RegisterInstance(settings).AsImplementedInterfaces();
        });
return projectScope.Resolve<Project>();
}
}
public class ProjectScope : IDisposable
{
    private ILifetimeScope _scope;   // not readonly: Dispose() sets it to null
    public ProjectScope(ILifetimeScope scope)
    {
        // this is going to be the project scope.
        _scope = scope;
    }
public void Dispose() {
if (_scope != null) {
_scope.Dispose();
_scope = null;
}
}
}
public class Project : IDisposable
{
    private ProjectScope _scope;   // not readonly: Dispose() sets it to null
public Project(ProjectScope scope /*, ...*/)
{
_scope = scope;
}
public void Dispose() {
// pay attention that this method will be called 2 times, once by you
// and another time by the underlying LifetimeScope. So this code should
// handle that gracefully (so the _scope == null).
if (_scope != null) {
_scope.Dispose();
_scope = null;
}
}
}
Given all this, you keep "using Autofac" out of every class, with the two exceptions of the global manager and the ProjectScope. You can change some bits of how the scope is handled if you accept a single "using Autofac" in the Project class itself: you can take the ILifetimeScope directly and dispose of it directly.
Hope this helps!
ProjectScope doesn't seem to be correct. It's provided with the external lifetime scope and then this scope gets disposed of explicitly by the ProjectScope object - which is not what it should be doing. Scope is not owned by the ProjectScope object, thus it cannot know when is the correct time to kill the scope. Most likely this will break autofac's scope management and will lead to hard-to-understand errors in the application. But in general I'd say that your answer points to the right direction.
The idea is that disposal of Project cascades to the LifetimeScope containing it. Hence, I dispose of Autofac's LifetimeScope from ProjectScope. I know it's kind of the opposite of what someone would usually do, but I don't think it's wrong. Simply put, disposing of Project would automagically dispose all the other objects contained in the enclosing "project" scope. Am I wrong?
Yes, you are wrong. You absolutely should not dispose lifetime scope that you did not start. In any case. Besides, disposing of Project should mean that it gets done through the killing the scope as well - which will take care of killing everything under it without requiring you to do anything.
if a. chiesa's suggestion is "sub-optimal", what would be a better solution? are my requirements far fetched or heading in the wrong direction?
Well, even if I consider (as a personal opinion) the position of Alexander a little too dogmatic, the points he makes are right. You should ensure that the objects living in the scope of the Project are NOT externally referenced (or you will have references to disposed objects when the Project scope disposes). As an alternative, you could implement the logic of scope instantiation and management directly onto the Project class. In this case, you would give the responsibility of managing the disposal of the LifetimeScope to the same class instantiating it, keeping the code "tighter".
I'm not about being dogmatic. :) It's more of a common sense. If someone gave you some object then you should ONLY dispose it if you are explicitly directed so. It is especially right with autofac. One more time - you should dispose only those scopes that you created manually using BeginLifetimeScope(), otherwise you'll break something in autofac. And if you create project by resolving it from the DI container then you should not dispose project yourself as well. Scope is called "lifetime" for a good reason. ;) You should dispose scope instead and that will kill everything else.
I'm with you on every word. I was suggesting a solution in which the creation of the scope is always performed by one class and the disposal is always performed by another. I agree it could be problematic in some circumstances, but in others it could be acceptable, if other conditions prevent the bad consequences you were mentioning. Great points, still.
@A.Chiesa: Thanks for the detailed answer! This may be a bit of autofac abuse, but sometimes it's best to know when not to follow best practices. And in this case I also think it may be acceptable. If I find any drawbacks, I'll update this post.
Ajax Forms with rails 4, :remote => true and JQuery - What are the Best Practices?
I'm building a multi-part form with ajax in Rails 4, and I'd like to know if I'm doing this the correct way.
Here's the strategy I've used so far. My controller is called ajaxform.
At the top of my ajaxform_controller.rb, inside the class definition but not in a function, I have
respond_to :js
Stage One
The link from another view to this tool is routed with
get 'ajaxform' => 'ajaxform#stage_one'
The primary view, "stage_one.html.erb" contains the first part of the form, defined with a form_tag using the :remote => true option. The controller method of the same name is empty.
<%= form_tag("/stage_one_form", method: 'post', :remote => true) do %>
<%= label_tag(:stage_one_data, "Stage one data: ") %>
<%= select_tag(:stage_one_data, options_for_select([['Option1', :option1], ['Option2', :option2], ['Option3', :option3]], 1)) %>
<%= submit_tag 'Next' %><span id="stage_two_waiting" style="display:none;"></span>
<% end %>
<br />
<span id="stage_two_form" style="display:none;"></span>
Hidden span tags: To the right of the "Next" button, there is a hidden span tag with an id attribute of "stage_two_waiting". When the form submits, routes.rb tells the application to go to stage_two_waiting, which will display a "please wait..." message inside the span tag.
Below it, there's another hidden span tag with an id of "stage_two_form" — this is where stage two of our form will render after we get the data back from a query based on the stage one selection.
Stage Two
post 'stage_one_form' => 'ajaxform#stage_two_waiting'
stage_two_waiting is a partial view with two files: a stage_two_waiting.js.erb and a _stage_two_waiting.html.erb. An underscore at the beginning of the html.erb file indicates that it is a partial, but the .js.erb file of the same name does not have an underscore.
Here is stage_two_waiting.js.erb
$('span#stage_two_waiting').append("<%= escape_javascript (render partial: 'stage_two_waiting') %>");
$('span#stage_two_waiting').slideDown(350);
$("#trigger_stage_two").trigger("submit.rails");
The first two lines cause _stage_two_waiting.html.erb to render inside the span tag on the stage one form with a nifty slideDown effect. The third line triggers submission of a hidden form in _stage_two_waiting.html.erb, which passes the data along to the next form and routes to it:
<%= form_tag("/stage_two_waiting_form", :id => 'trigger_stage_two', method: 'post', :remote => true) do %>
<!-- hidden field to pass the selected information from stage 1 on to stage 2-->
<%= hidden_field_tag(:stage_one_data, params[:stage_one_data])%>
<% end %>
When the JQuery triggers that hidden form submission, routes.rb specifies the location of the actual stage two form itself:
post 'stage_two_waiting_form' => 'ajaxform#stage_two_form'
stage_two_form also has two files, stage_two_form.js.erb and _stage_two_form.html.erb. Before these files can render, the controller needs to place a REST call.
def stage_two_form
stage_one_selected_option = params[:stage_one_data]
rest_response = `curl -k -X GET https://myrestservice.net/v1/myfunction/#{stage_one_selected_option}`
#convert the rest response into an array of select list options for the stage two form
@stage_two_options = Array.new
@stage_two_options.push "parsed information from rest_response would go in here"
respond_with(@stage_two_options) do |format|
format.js
end
end
stage_two_form.js.erb looks like this:
$('span#stage_two_form').append("<%= escape_javascript (render partial: 'stage_two_form') %>");
$('span#stage_two_form').slideDown(350);
The routing doesn't fire off this Javascript until the controller method is completely finished executing. Until then, the user just sees the "please wait..." message. Once the method is done, the javascript renders the form and its instance variable, @stage_two_options
<%= form_tag("/stage_two_form", method: 'post', :remote => true) do %>
<%= label_tag(:stage_two_data, "Available options based on stage one: ") %>
<%= select_tag(:stage_two_data, options_for_select(@stage_two_options.transpose[0].collect)) %>
<%= hidden_field_tag(:stage_one_data, params[:stage_one_data]) %>
<%= submit_tag 'Next' %><span id="stage_three_waiting" style="display:none;"></span>
<% end %>
<br />
<span id="stage_three_form" style="display:none;"></span>
The hidden_field_tag passes along the selection that was made all the way back in stage one, just in case it's needed by stage three. We could also write a second hidden_field_tag to pass the entire array of options that the controller method provided, if it will be needed later.
<%= hidden_field_tag(:stage_two_options, @stage_two_options) %>
The route when posting this form looks like:
post 'stage_two_form' => 'ajaxform#stage_three_waiting'
Further stages
Stage three also has a total of four files, two for the waiting message and two for the form. They are stage_three_waiting.js.erb, _stage_three_waiting.html.erb, stage_three_form.js.erb, and _stage_three_form.html.erb. As before, the stage three waiting partial has a hidden form inside of it to pass along any needed params from stage two. Both stage three partials are rendered inside of the span tags in _stage_two_form.html.erb with JQuery append. This process can continue on for as many stages as needed.
Problems
It seems a little excessive to me that each "stage" of a multi-part form should rely on code located in up to five different places: two files for a "please wait..." message, two more files for the form itself, and code inside the controller method. It also seems odd to need to pass data from one stage to the next as a parameter in hidden fields. It's unwieldy, and the form isn't working in Safari 8 for some reason either, because it keeps routing as HTML and ignoring the jQuery.
Conclusion
I have a suspicion that I'm not doing this in the most graceful way. Any help would be appreciated.
The question is: is it okay to pass data in params from one partial view to the next using those hidden fields, or is there a cleaner way to do this that doesn't involve so many different files and hidden fields?
This kind of question is better suited for http://codereview.stackexchange.com since you are looking for review and general advice about your code. Not answers to a specific problem.
-1 Stackoverflow is not a forum - we don't have threads - we have questions and answers and this question is not very suited to the format.
Perhaps I wasn't clear enough - The specific problem is: What is the best way to do this? Am I doing it the right way?
Global variables
An alternative to passing parameters from partial view to partial view through hidden fields would be to use global variables (beginning with $) in the controller. This makes them available to any of the controller's views and to all the functions in the controller itself.
ajaxform_controller.rb
def stage_two_form
# define a global variable that we expect to use later from a different partial
$global_selected_option = params[:stage_one_data]
...
end
def stage_three_form
# the "stage_three_form" method still has access to the global variable
$stage_three_option = $global_selected_option + ' In stage 3'
end
_stage_three_form.html.erb
<!-- _stage_three_form.html.erb and all other partial views have access to globals -->
<p>The original option was: <%= $global_selected_option %></p>
<%= form_tag("/stage_three_form", method: 'post', :remote => true) do %>
<%= label_tag(:stage_three_selection, "#{$stage_three_option}") %>
...
<% end %>
Would global variables like this work on a PaaS like Heroku? Another option would be placing the variables in the session cookie.
I can't really speak to PaaS and global variables - but global variables usually seem to be frowned upon by the Ruby community.
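To see why controller globals are frowned upon, here is a minimal plain-Ruby sketch (no Rails involved; the method and variable names are made up for illustration): a `$global` set during one request is shared by, and clobbered by, every other request served by the same process, while a per-session store keeps each user's data separate.

```ruby
# Simulate two users hitting the same controller action in one process.
$global_selected_option = nil

def handle_request(session, selection)
  $global_selected_option = selection   # shared: whichever request ran last wins
  session[:stage_one_data] = selection  # scoped to this user's session
end

alice_session = {}
bob_session   = {}
handle_request(alice_session, "option1")
handle_request(bob_session,   "option2")

puts $global_selected_option         # "option2" -- Alice's choice was overwritten
puts alice_session[:stage_one_data]  # "option1"
puts bob_session[:stage_one_data]    # "option2"
```

The same hazard applies per server process, so with concurrent users a global can change between one action and the next; the session (or hidden fields, as in the question) does not have that problem.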
framework opencv2 missing module
I created foo.framework including opencv2.framework with Embed Without Signing option.
I can't use foo.framework in another project, because I get an "opencv2 missing module" error.
The standalone opencv2.framework includes a Modules folder.
But the opencv2.framework embedded inside foo.framework is missing the Modules folder.
How can I include the Modules folder?
Manually copying the opencv2.framework folder into foo.framework works!
But I want building the framework to copy opencv2.framework (with its Modules folder) automatically.
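One way to automate that manual copy is a Run Script build phase. The sketch below uses placeholder paths and fakes a framework layout so it can run anywhere; in a real Xcode build phase you would point SRC at the vendored opencv2.framework and DST at "$BUILT_PRODUCTS_DIR/foo.framework/Frameworks". It only demonstrates that a recursive copy carries the Modules folder along:

```shell
# Placeholder paths; a real build phase would use Xcode's environment variables.
SRC="opencv2.framework"
DST="build/foo.framework/Frameworks"

# Demo scaffolding so this snippet runs as-is: fake a framework with a Modules folder.
mkdir -p "$SRC/Modules"
printf 'framework module opencv2 { }\n' > "$SRC/Modules/module.modulemap"

mkdir -p "$DST"
cp -R "$SRC" "$DST/"
ls "$DST/opencv2.framework/Modules"   # the module map is now inside foo.framework
```
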
To include the opencv2.framework module folder within the foo.framework, you can follow these steps:
Open Xcode and navigate to the project that contains the foo.framework target.
Select the foo.framework target and go to the "Build Phases" tab.
Expand the "Embed Frameworks" section.
Click on the "+" button to add a new framework.
In the file chooser dialog, navigate to the location where the opencv2.framework is located.
Select the opencv2.framework and click on the "Add" button.
This should add the opencv2.framework as an embedded framework within the foo.framework. Now, when you use the foo.framework in other projects, the opencv2 module should be included along with it.
Note: Make sure that the opencv2.framework is properly linked and included as a dependency in the project that creates the foo.framework. Also, ensure that you have the correct version of opencv2.framework that includes the required module.
Why do we need the event StreamSocketListener.ConnectionReceived when we use SocketActivityTrigger?
StreamSocketListener is used in UWP Apps to make the App act like a Server, i.e. listen on a network port and respond when it receives a connection on that port. Thus, we assign a handler to the event ConnectionReceived which is invoked whenever a connection is received.
And as explained here, when we want to make this possible when the app is suspended as well, we configure a background task that is triggered using a SocketActivityTrigger whenever a connection is received. Thus, the process happens in the Run method of the background task in this case. So does this mean, that we don't need to assign a ConnectionReceived event handler anymore when SocketActivityTrigger is used?
Why do we need the event StreamSocketListener.ConnectionReceived when
we use SocketActivityTrigger?
There's a difference between the two. The StreamSocketListener acts like a server: when a client connects to the listener, the StreamSocketListener.ConnectionReceived event is triggered. You can get the connected socket in it and send data to it.
With SocketActivityTrigger, by contrast, the Run method of the background task is triggered when the app receives data on a socket, a keep-alive timer expires, or the socket is closed.
Is a macroscopic pair where I observe one of them quantum entanglement?
Of what little I know/understand about quantum entanglement can somebody confirm if the following experiment is a good analogy to quantum entanglement of pair of particles? PS: please don't laugh as this could be very very lame!
I take an orange and an apple (of similar shape & weight). I put each of them in a non transparent bag separately and seal it off. I put both the bags in a box. I close my eyes and shake it thoroughly such that I no longer know which bag contains what fruit. Then I randomly pick one bag and take a flight to other part of the country.
Now given the above setup, I will not know what fruit is in my bag. In other words the fruit in my bag could be both apple/orange at the same time until I open the bag and see what is inside. As soon as I open it it is determined (similar to wave function collapsing) that I have an orange (for example) and there by making the fruit in the bag that was left in the box an apple (or vice versa).
Does this in anyway come close to what they are doing with quantum entanglement?
Your explanation looks more like the start of the development of Bell's theorem for discrete variables. The best explanation of entanglement that I have read is John Preskill's; it's on page 10 of the following link:
http://www.theory.caltech.edu/~preskill/ph229/notes/chap4_01.pdf
It's very simple and clear to understand; you can explain this to your friend too.
And if you want a video explanation, the Scientific American one is very good too:
http://www.youtube.com/watch?v=xM3GOXaci7w
ohh great. The video really nailed it for me. Thanks for the link.
Someone please correct me if I am wrong, as my understanding in the field is also very limited.
With my current understanding, what the poster was talking about is actually the concept of the uncertainty principle. Without observation, an entity will be in an undetermined state regarding its properties. Once the object is observed, the wave function collapses and a single state is determined. This is the foundation of quantum mechanics, and it's the state of a quantum entanglement pair before observation.
An analogy of quantum entanglement with an orange and apple would be this. Suppose the state of the orange and apple is whether or not they are peeled. Initially you would put an apple and an orange each in a bag and seal it off, without knowing whether they are peeled or not (some super-entity will be peeling and unpeeling both at the same time inside each bag). Then you take a random bag and bring it to the other side of the world with you (a large distance is not required). You open the bag and find the orange or apple peeled or unpeeled, and the other one would be in the opposite state. E.g. if you brought the apple and found that it's unpeeled, then the orange on the other side of the world will be peeled. (This is in the event that an entangled pair has opposite states when observed. However, this is not always the case. In general, observing an entangled entity determines both its own and its entangled pair's associated states.)
Quantum entanglement is essentially a phenomenon of an entangled pair of two entities sharing a set of undetermined properties until one is observed. At that point, the other one's state will also be determined and is no longer uncertain.
I don't understand what the "peeling" analogy is for.
I guess what I was trying to describe with peeling is a description of a state of an entity. That state will be uncertain in both of the entities in the entangled pair until observation. Of course, I should add that the states are not always going to be the opposite of each other (will edit in post). Once again though, my knowledge is limited, and if you or someone have better understanding, please enlighten! Thanks!
Thanks for the answer but I am not sure I fully follow... which is fine given this is way over my head :). However does it not mean that the state of the entangled pair are only uncertain for an outside observer? For the particles itself will they not know what state they are in right?
Thanks for the link zephyr, it was a definitely a good read!
@zephyr, thanks for the link. Too mathematical but gets the point across.
It's a very good analogy to the "spooky action at a distance" experiment.
It's not a good analogy to quantum entanglement. We don't need an analogy to quantum entanglement. "Quantum entanglement" means "correlation" or "information", or something like that.
If you try to do a spooky action at a distance experiment with apples and oranges, it's very difficult, because apples and oranges are so much quantum entangled with the environment.
If you manage to shuffle the fruits so that the environment does not know which is which, then according to quantum mechanics, there must be some spooky action at a distance happening, when you look into a fruit bag.
You're in good company in thinking that this is what quantum entanglement is about - this is what Einstein thought, and he wrote a famous paper along with Boris Podolsky and Nathan Rosen (usually called the "EPR paper", after their initials), which made exactly this argument. However, Bell's theorem (which was discovered later) is generally accepted as showing that this analogy doesn't work - statistically speaking, it seems that looking in one bag does have to physically affect what's in the other one, rather than just changing your knowledge about it.
There are various ways that people have attempted to get around Bell's theorem, so the view you put across is still held by some people in the theoretical physics community (and may ultimately turn out to be correct after all), but Bell's theorem does make it a bit tricky, and the majority of physicists do not currently believe that quantum reality works in the way you describe.
Parameter not valid when I retrieve image from database
I want to retrieve an image from my database, but when application is running, I always get an error
parameter not valid
This is my code:
void showPic()
{
konek.Open();
SqlCommand Show = new SqlCommand("Select Image from [Akun] where Username = '" + username.Text + "'", konek);
SqlDataReader baca = Show.ExecuteReader();
baca.Read();
if (baca.HasRows)
{
byte [] images = (byte [])baca[0]; //this is line error "parameter not valid"
MemoryStream memo = new MemoryStream(images);
pictureBox1.Image = Image.FromStream(memo);
}
konek.Close();
}
WARNING: Your code is dangerous. It is wide open to SQL injection attacks. Always, always, always parametrise your code. Why do we always prefer using parameters in SQL statements?
The following may be helpful: https://stackoverflow.com/a/66616751/10024425
Which line is giving error?
in line 12 error
If anywhere I would expect a Parameter is not valid exception message to be raised from Image.FromStream(memo). That can happen when the stream doesn't contain a supported graphic image format, or because it's running on a Linux/macOS system that doesn't have libgdiplus installed and configured correctly. Please Edit your question to include the full exception message - as text, not screen shot(s).
This error usually occurs when the bytes you hand to Image.FromStream do not form a complete, supported image, or when reading the column data goes wrong.
The direct cast (byte[])baca[0] only works if the Image column is a varbinary holding the raw image file; if the column is stored as a different type, the cast or the decode will fail.
A more defensive approach is to read the bytes explicitly with the GetBytes method of the SqlDataReader, passing the index of the column that contains the image data. GetBytes fills a buffer and returns a length; called with a null buffer it returns the total size of the field:
if (baca.HasRows)
{
    // Ask for the total length first (null buffer), then read the bytes
    long length = baca.GetBytes(0, 0, null, 0, 0);
    byte[] images = new byte[length];
    baca.GetBytes(0, 0, images, 0, (int)length);

    MemoryStream memo = new MemoryStream(images);
    pictureBox1.Image = Image.FromStream(memo);
}
Is it possible to write Visual Studio 2010 macros in C#?
Possible Duplicate:
C# for writing macros in Visual Studio?
Is there a way to write Visual Studio 2010 macros in C#? If not, any idea when this will be possible?
http://stackoverflow.com/questions/1441944/c-for-writing-macros-in-visual-studio
how to get value from another table
I have the following query
select sub1_s,
IIf(([sub1_s]<15)," /math","") AS sub1_end
And the following table (subject_minmum)
english_m | math_m | scince_m |
12 | 15 | 10 |
I changed the select command to get the value 15 like this
select sub1_s,
IIf(([sub1_s]<(select math_m from subject_minmum))," /math","") AS sub1_end
It works okay, but when I do this for many fields, I get this error message:
Too many fields defined
Is there another way to do this?
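One way around a subquery per field (which is what blows past Access's field limit) is to join the single-row subject_minmum table once, so every minimum column becomes available to every conditional in the SELECT. Sketched below with Python's built-in sqlite3 so it runs anywhere; the `marks` table name is made up for illustration, the other names come from the question, and SQLite's CASE WHEN stands in for Access's IIf:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE marks (sub1_s INTEGER, sub2_s INTEGER);
    CREATE TABLE subject_minmum (english_m INTEGER, math_m INTEGER, scince_m INTEGER);
    INSERT INTO marks VALUES (12, 20);
    INSERT INTO subject_minmum VALUES (12, 15, 10);
""")

# subject_minmum has exactly one row, so a cross join attaches the minimums
# to every record in one go -- no subquery per field needed.
rows = cur.execute("""
    SELECT t.sub1_s,
           CASE WHEN t.sub1_s < m.math_m THEN ' /math' ELSE '' END AS sub1_end,
           CASE WHEN t.sub2_s < m.math_m THEN ' /math' ELSE '' END AS sub2_end
    FROM marks AS t, subject_minmum AS m
""").fetchall()
print(rows)
```

In Access the same shape would use IIf([sub1_s]<[math_m]," /math","") with subject_minmum added to the FROM clause.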
What do you mean by "when replace many fields?"
I have many fields (sub1_s, sub2_s, and so on).
Google is your friend. Link
Compact the database. On the Tools menu, point to Database Utilities, and then click Compact and Repair Database
i ask about another way to get the value
Is there a tool to obfuscate my transactions history by funneling it through a bunch of addresses I control?
I occasionally gift ETH to friends and family and I'd like a bit of financial privacy after gifting them. I understand this method is not foolproof for hiding the source of funds, but at least it defeats nosy individuals who would learn how much ETH I have and who I'm paying just by visiting a block explorer page.
I usually funnel my ETH through another account before I send it to them. I do this manually which can get time consuming. Is there a tool that does this for me automatically?
probably not automatic but at least more efficient: google for Ethereum tumblers
Those all require a third-party fee. I'd rather do it myself with a script or something that does it automatically.
How often does Ginter restock?
Near Galaxy Hall, the merchant Ginter occasionally sells a random item at a fixed price. After buying an item, I can't buy from him again until he restocks with a different item.
How often does Ginter restock his items? Will he restock with a different item if I don't buy his current item?
As of v1.1.0, Ginter now offers multiple options when buying items.
According to Serebii:
He will offer you an item for a short time, with it changing after you have captured 20 Pokémon.
I've confirmed myself that Ginter will restock even if you haven't bought his previous item.
Can't insert in a table or create a table in MySQL using Python
I have an Arduino Uno and a TMP36 temperature sensor. I read the sensor value from the Arduino serial monitor with Python. It works!
But I can't insert into the MySQL table I created using Python, and I don't know why. There is no error or warning.
import serial
import time
import MySQLdb

dbhost = 'localhost'
dbname = 'test'
dbuser = 'X'
dbpass = 'Y'

ser = serial.Serial('COM7',9600,timeout=1) # On Ubuntu systems, /dev/ttyACM0 is the default path to the serial device on Arduinos, yours is likely different.

while 1:
    time.sleep(1)
    the_goods = ser.readline()
    str_parts = the_goods.split(' ')
    conn = MySQLdb.connect (host = dbhost,
                            user = dbuser,
                            passwd = dbpass,
                            db = dbname)
    cursor = conn.cursor ()
    sql = "INSERT INTO arduino_temp (temperature) VALUES ('%s');" % (str_parts[0])
    print "Number of rows inserted: %d" % cursor.rowcount
    try:
        cursor.execute(sql)
    except:
        pass
    cursor.close ()
    conn.autocommit=True
    conn.close ()
    print the_goods
Am I doing something wrong?
Those Value in SQL-table need to be plot in php (real time ploting)
Hardware: windows 7, Python 2.7, Python editor:Pyscripter, arduino uno, MySQL Workbench
You need to start with removing the try/except blanket exception handling. Your autocommit change comes too late to apply to the insert statement.
you shouldn't string-format the sql-command, use placeholders instead: cursor.execute('INSERT INTO arduino_temp (temperature) VALUES (%s)', str_parts[0])
Thanks a lot for your response. I'm getting an error: ProgrammingError: "Table test.arduino_temp does not exist." What is the meaning of that error?
If you open a database connection it initially runs with autocommit=False, so after you execute an SQL statement you must commit() it. Also, do not pass on exceptions: that way you will never see problems. Log the exception into a file instead. You also use cursor.rowcount before you execute the INSERT statement; use it after it.
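A minimal sketch of that advice, using Python's built-in sqlite3 as a stand-in for MySQLdb (the DB-API commit/execute flow is the same; note MySQLdb expects %s placeholders where sqlite3 uses ?):

```python
import sqlite3  # stand-in for MySQLdb; same DB-API 2.0 commit/execute flow

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE arduino_temp (temperature REAL)")

reading = "23.5"  # e.g. str_parts[0] parsed from the serial line
# Parameterized insert: the value is never formatted into the SQL text.
cur.execute("INSERT INTO arduino_temp (temperature) VALUES (?)", (float(reading),))
print(cur.rowcount)   # rowcount is meaningful only AFTER execute()
conn.commit()         # without commit() the row is never persisted
```
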
From your comments it seems that you have not defined the arduino_temp table. I do not have a MySQL database, but I have tested this code with PostgreSQL and an ODBC driver. You can use it to test your database and driver:
import MySQLdb
import traceback

def ensure_table(conn):
    cursor = conn.cursor()
    try:
        cursor.execute('SELECT COUNT(*) FROM arduino_temp')
        for txt in cursor.fetchall():
            print('Number of already inserted temperatures: %s' % (txt[0]))
    except:
        s = traceback.format_exc()
        print('Problem while counting records:\n%s\n' % (s))
        print('Creating arduino_temp table ...')
        # table arduino_temp does not exist
        # you have to define table there
        # you will probably have to add some columns as:
        #   test_id SERIAL primary key,
        #   test_date timestamp default CURRENT_TIMESTAMP,
        # and add index on test_date
        cursor.execute('CREATE TABLE arduino_temp (temperature integer)')
        print('table created')
    cursor.close()

def insert_temperature(conn, temperature):
    cursor = conn.cursor()
    #cursor.execute("INSERT INTO arduino_temp (temperature) VALUES (?)", (temperature, ))
    #cursor.execute("INSERT INTO arduino_temp (temperature) VALUES (%s)", (temperature, ))
    cursor.execute("INSERT INTO arduino_temp (temperature) VALUES (%d)" % (int(temperature)))
    print("Number of rows inserted: %d" % (cursor.rowcount))
    conn.commit()
    cursor.close()

def main():
    # ...
    conn = MySQLdb.connect (host = dbhost, user = dbuser, passwd = dbpass, db = dbname)
    #conn = pyodbc.connect('DSN=isof_test;uid=dbuser;pwd=dbpass')
    ensure_table(conn)
    insert_temperature(conn, 10)

main()
If you have problems with the database, create test code and use small functions. Your code mixes reading the temperature from the serial interface with inserting it into the database. Make separate functions for each operation.
Thanks a lot for your response. I'm getting an error: ProgrammingError: "Table test.arduino_temp does not exist." What is the meaning of that error?
It means that in the test database there is no arduino_temp table. Either you connected to the wrong database, misspelled the table name, or you must create the arduino_temp table.
thanks a lot for your answers. but it is not working.
As above: if you have problems with the database, create test code and use small functions, keeping the serial reading and the database insert in separate functions. Show us the error information, and ensure the arduino_temp table is in the database.
Hi, I decided to create a new table in phpMyAdmin. But only the first value is written into the MySQL table, and I don't know why. It doesn't work with Workbench. I have no error in Python or in MySQL.
When I run your code I get a TypeError: must be string or read-only buffer, not tuple.
If it crashes at execute(), note that different DB drivers make prepared statements differently. I have changed the code to use Python's % operator instead of preparing the statement, but you can also test the version with %s instead of ? in the prepared statement. Those prepared versions are in the comments. If you have a problem then show as much of the stack trace as you can.
I found the problem. It was an error in Python's configuration. Thanks very much for your help.
You have to commit on the connection after executing the SQL statement:
cursor.execute(sql)
conn.commit()
or set autocommit=True right after the connection is established.
The OP did try to set autocommit=True but does it too late to matter.
Multiple Rails ORM
We have a Rails 3 application with a PostgreSQL database (with ~10 tables) mapped by activerecord. Everything's working fine.
However, we could also like to use:
a MongoDB database in order to store images (probably with mongoid gem).
a Neo4j database (probably with neo4j-rails gem) instead of PostgreSQL for some tables.
Using a database with one Rails ORM is simple, thanks to database.yml. But when there's more than one ORM, how can we proceed? Is there a good way to do so? For instance, ActiveHash (and ActiveYaml) works well with ActiveRecord. I think there could be a possibility of letting different ORMs work together. Thanks for any tips.
This really depends on the type of ORM. A great way to do this is by using inheritance. For example you can have multiple databases and adapters defined in your database.yml file. You can easily talk to these using the ActiveRecord establish_connection method.
# A typical Active record class
class Account < ActiveRecord::Base
...
end
# A new database connection
class NewConnection < ActiveRecord::Base
self.abstract_class = true
establish_connection "users_database"
end
# A new Active record class using the new connection
class User < NewConnection
...
end
The only downside here is that when you are connecting to multiple Active Record databases, migrations can get a little bit dicey.
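For reference, the "users_database" connection looked up by establish_connection would be defined alongside the default one in config/database.yml. A sketch, where the adapter, database names, and credentials are placeholders:

```yaml
# config/database.yml -- one entry per connection; names are placeholders
development:
  adapter: postgresql
  database: myapp_development
  username: myapp
  password: secret

users_database:
  adapter: postgresql
  database: users_development
  username: myapp
  password: secret
```
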
Mixing ORM's
Mixing ORMS is easy. for example mongodb (with mongoid), simply dont inherit from active record and include the following in the model you want to use mongo:
class Vehicle
include Mongoid::Document
field :type
field :name
has_many :drivers
belongs_to :account
end
ORMs built on top of active model play very nicely together. For example with mongoid you should be able to define relations to ActiveRecord models, this means you can not only have multiple databases but they can easy communicate via active model.
Well, I had the same problem today using the neo4j gem. I added require 'active_graph/railtie' in my application.rb.
So, when I want to generate a model with ActiveGraph I use: rails generate model Mymodel --orm active_graph; with the --orm option you can specify an ORM to use.
Without the --orm option, it will use ActiveRecord by default.
First off, I strongly recommend you do not try to have multiple ORMs in the same app. Inevitably you'll want your Mongoid object to 'relate' to your ActiveRecord object in some way. And there are ways (see below)...but all of them eventually lead to pain.
You're probably doing something wrong if you think you 'need' to do this. Why do you need MongoDB to store images? And if you're using it just as an image store, why would you need Mongoid or some other ORM (or more accurately, ODM)? If you really, really need to add a second data store and a second ORM/ODM, can you spin it off as a separate app and call it as a service from your first one? Think hard about this.
That said, if you really want to go with "polyglot persistence" (not my term), there is a decent gem: https://github.com/jwood/tenacity. It's no longer actively developed, but the maintainer does fix bugs and quickly responds to inquiries and pull requests.
Hi there, I saw your comment. Listen, I was thinking of doing the same: have one DB type for user registration and Neo4j for the business logic. Do you think I could have a mini RoR app that just takes care of the sign-in process and another that just focuses on the business logic of the product? Thanks!
| common-pile/stackexchange_filtered |
Distinction between del Pezzo surfaces and weak del Pezzo surfaces
I am a bit confused about the definition of a weak del Pezzo surface. Can someone give an example of a weak del Pezzo surface that is not a del Pezzo surface?
A surface $S$ is del Pezzo if $-K_S$ is ample. It is weak del Pezzo if $-K_S$ is nef and big.
To get examples of (true) weak del Pezzos, remember that a del Pezzo of degree $d$ is the blowup of $\mathbf P^2$ in $9-d$ general points.
If $S$ is the blowup of $\mathbf P^2$ in points $p_1,\ldots,p_r$, then $-K_S=3H-E_1-\cdots-E_r$ (in the obvious notation).
So the trick is to choose the points so that $-K_S$ is nef and big, but has degree $0$ on some curve. For example, choose 6 points in $\mathbf P^2$ such that 3 of them lie on a line. Then on the blowup, $-K_S \cdot L=0$ where $L$ is the proper transform of the line. However, one can verify that $-K_S$ is still basepoint-free, hence nef, and has a 4-dimensional space of sections, giving a birational map onto the image of $S$ in $\mathbf P^3$, hence is big.
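To make that degree computation explicit, use the standard intersection numbers on the blowup ($H^2=1$, $E_i^2=-1$, $H\cdot E_i = E_i\cdot E_j = 0$ for $i\neq j$), and assume the three collinear points are $p_1,p_2,p_3$:

$$L = H - E_1 - E_2 - E_3, \qquad -K_S \cdot L = \Big(3H - \sum_{i=1}^{6} E_i\Big)\cdot(H - E_1 - E_2 - E_3) = 3 - 3 = 0,$$

while $(-K_S)^2 = 9 - 6 = 3 > 0$, so $-K_S$ is big and nef but not ample: exactly a weak del Pezzo that is not del Pezzo.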
The simplest example is the second Hirzebruch surface $\mathbf F_2$: there $-K$ is nef and big but has degree $0$ on the unique $(-2)$-curve, so it is not ample.
| common-pile/stackexchange_filtered |
Connect to azure services (blob storage/servicebus) via a VPN
We are planning to connect to azure servicebus and blob storage from multiple sites. Is it possible to connect to them through a VPN instead of directly over the Internet to improve the security of the connection?
If it's possible, can anyone advise how?
Yes. Without knowing what kind of VPN you want or what your networking setup is, I would suggest this:
https://azure.microsoft.com/documentation/articles/vpn-gateway-site-to-site-create/
Just put a machine on the Azure network and send whatever bus data you want. You could even set up a proxy machine through that VPN if you are worried about publicly routed traffic.
Assuming they are in the same geo-location, any machine on the Azure network that makes a request to blob storage or a service bus will never go through the internet. It will still use the public domain name, but the request will be sent through the Azure networking fabric.
Without knowing the structure of what you're looking for, I would say the easiest way would be to set up an Azure VM in the same geo-location.
This all being said, all traffic going to and from Azure Storage and Service Bus is encrypted and pretty safe.
Hope this helps.
Thanks for your response. This was one of the possibilities I was considering; however, I was hoping for a solution that didn't require an additional VM (proxy) to live inside Azure when its sole purpose would be to proxy traffic.
We already have a site-to-site IPsec VPN. I was hoping there might be some kind of virtual way of connecting to Azure's services, but I haven't been able to find anything.
The other option is to configure your router to send all traffic to that domain "blank.servicebus.azure.com" through the site to site vpn tunnel. The name will still resolve but will never "go over the internet"
| common-pile/stackexchange_filtered |
WWF service - how do I make a service asynchronous?
I want to create a service which receives a request from the client, adds the request to a database, and then calls another WWF service asynchronously, which does some time-consuming job with the data from the database.
How do I make a service asynchronous in a Windows Workflow service?
I use the second Windows Workflow service as a queue (as there can be only one instance of this service; I set CanCreateInstance to false).
To make a Workflow Service behave asynchronously, create a one-way contract by using a Receive activity without a correlated SendReply.
When another Workflow (or WCF client proxy) calls this service it will not wait for a reply from the service.
As for your comment about only one instance of a service you are mistaken. There is no way to have a singleton workflow service (as there is with WCF services) and CanCreateInstance has no effect on this behavior.
| common-pile/stackexchange_filtered |
SAS: Print to Log AND View Live Log Window Simultaneously
I understand that PROC PRINTTO LOG="C:\TEMP\SAS LOG.TXT" outputs the entire contents of a SAS program log, but this also essentially leaves the log window blank while the program is running, and I am unable to view the 'live' progress of the SAS program, so to speak.
I want to ultimately save the log for further review, but I also want to keep an eye on things as they're happening live when I'm running tests, etc. -- is there a way to print the log and keep the contents of the log live as they're happening simultaneously?
what type of system are you using and which SAS editor?
I'm running SAS 9.3 on Windows 7 64-bit and use the standard SAS editor window for programming purposes -- does that give you what you need? I apologize if my lingo is off.
That's fine. Short answer is no: SAS only allows one stream for the log. I THINK you can script an IDE macro to save the contents of a window. So run your program, watch the log, and when done, hot-key the save. Not 100% sure, and I don't have to figure it out right now. Personally, in these situations I put the log to a file with PRINTTO and watch it in a text editor with periodic refresh.
Thanks for the insight @DomPazz, I was wondering what others might do in this case. I'll go ahead and do that -- I can't recall Notepad++ having a refresh button, so how do you go about that?
I open my program, save my log to a file, using point and click and then run. The log is then saved to a text file and you can see it as it generates as well.
Also, look into the ALTLOG specification. I'm not sure how to call it, but it seems that it should offer that functionality. http://support.sas.com/documentation/cdl/en/hostwin/63047/HTML/default/viewer.htm#n02cl0iq0k1fmxn11p83yirplodk.htm
@Reeza, ALTLOG is a command line option (specify on SAS startup) that send a copy of the log to a file. That is probably the best solution. I didn't know it exists.
Can you throw command lines in SAS code, or do they have to be entered manually in the command bar?
I raised this with SAS support some years ago. The ultimate response was no, it's not possible (in the programmatic / system options sense).
If you are using Enterprise Guide or any of the EBI clients, you could enable logging on the application server. This will give you a copy of the log along with your regular log. Won't work for Base SAS though.
Steps:
Navigate to: [sasconfig]\Lev1\SASApp\WorkspaceServer
Rename logconfig.xml to logconfig.xml.orig
Rename logconfig.trace.xml to logconfig.xml
Restart the object spawner
EDIT: if you were happy to accept sequential - as opposed to simultaneous - logging, I'd recommend the approach outlined in the answer to this question (basically read the external log file back in and print to session log)
I agree with @Reeza's suggestion to try -altlog. Unfortunately, this option needs to be specified when SAS is invoked. One way is to add a line to your SAS config file (mine is in C:\Program Files\SASHome\SASFoundation\9.4\nls\en\sasv9.cfg):
-altlog d:\junk\MySASaltlog.log
Each time you start SAS, it will write to MySASaltlog.log in addition to your log window. MySASaltlog.log is overwritten for each session. You have to jump through some hoops to generate a separate log for each session.
I think it would be great if you could specify altlog on an options statement during a SAS session, e.g.:
options altlog="d:\junk\MySASaltlog_%sysfunc(today(),yymmddn8)";
If you agree, please like / upvote my SAS ballotware idea that proposes this: https://communities.sas.com/t5/SASware-Ballot-Ideas/Allow-ALTLOG-to-be-specified-on-OPTIONS-statement/idi-p/219628
Another approach for PC SAS is to use the DM statement. Submitting the following statement will copy the content of the current log window to MyLog_YYYYMDD.log:
dm "log; file ""d:\junk\MyLog_%sysfunc(today(),yymmddn8).log"" replace;";
You could probably assign that command to a function key as well.
A last thought is to question why you want to save the log from an interactive SAS session. Most folks use interactive sessions to develop code. Then when they are done, they batch submit the program for the final production run. This has the benefit of starting with a clean SAS session, as well as writing a log file automatically. With that approach, it's rarely useful to save a log file from an interactive session.
| common-pile/stackexchange_filtered |