Microsoft translator nb-NO translates fine, but supported language api call does not list as supported language
I would like to check whether a language is supported by Microsoft Translator before sending off the request to translate text.
I make a call to this API:
http://api.microsofttranslator.com/V2/Http.svc/GetLanguagesForTranslate
and it returns a list of languages. One of them is "no" for Norwegian.
My application has support for nb-NO... so my language check essentially comes down to this code:
string language = "nb-NO";
this.cachedSupportedLanguages = new string[] { "no" };
return this.cachedSupportedLanguages.Contains(language);
The issue I'm having is that if I send the request off to this API with nb-NO as the "to" language, the translation falls back to Norwegian:
http://api.microsofttranslator.com/v2/Http.svc/Translate?text=textToTranslate&from=fromLanguage&to=toLanguage ...
...but I cannot find a way of pre-checking whether a language is supported, because even if I do:
new CultureInfo(language)
It doesn't have any knowledge of the language being able to fall back to Norwegian.
Any ideas how I can check this in a better way than an explicit switch?
Edit
The cultures have a hierarchy, such that the parent of a specific
culture is a neutral culture and the parent of a neutral culture is
the InvariantCulture. The Parent property returns the neutral culture
associated with a specific culture.
https://msdn.microsoft.com/en-us/library/system.globalization.cultureinfo(v=vs.71).aspx
If I do this:
CultureInfo cultureInfo = new CultureInfo(language);
// For languages like en-US
if (this.cachedSupportedLanguages.Any(x => x.Equals(cultureInfo.TwoLetterISOLanguageName, StringComparison.OrdinalIgnoreCase)))
{
return true;
}
// For languages like nb-NO where the explicit language is not supported but its parent culture is
if (!string.IsNullOrEmpty(cultureInfo.Parent.ToString()))
{
if (cultureInfo.Parent.IsNeutralCulture)
{
if (!string.IsNullOrEmpty(cultureInfo.Parent.Parent.ToString()))
{
if (!string.IsNullOrEmpty(cultureInfo.Parent.Parent.CompareInfo.ToString()))
{
return this.cachedSupportedLanguages.Any(x => x.Equals(cultureInfo.Parent.Parent.CompareInfo.Name, StringComparison.OrdinalIgnoreCase));
}
}
}
}
I get true... but I do not fully understand whether the Parent is always going to be a safe bet for this information.
There is a Microsoft Translator method that returns the information you need. The data is returned as JSON. You can get supported languages for text translation, speech translation and text-to-speech in one API call. You also get more information.
Learn about it and try it out at: http://docs.microsofttranslator.com/languages.html
GetLanguagesForTranslate returns nb for Norwegian, not no as stated in the post. Go to http://docs.microsofttranslator.com/languages.html, click Try it, and see this in the result:
"nb": {
"name": "Norwegian",
"dir": "ltr"
},
One approach would be to keep a table with the results of GetLanguagesForTranslate. First look up your culture code directly and, if there is no match, then look for a match on the Parent. Parent should be reliable for this purpose. This will let your code work for supported languages like sr-Cyrl, sr-Latn, zh-Hans, and zh-Hant, and fall back to the neutral culture when the qualified name isn't available.
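To illustrate that lookup (a sketch in Python rather than the question's C#, using a naive tag split in place of CultureInfo): check the full tag first, then fall back to the primary subtag, which is what the neutral parent culture gives you for a tag like nb-NO. A real implementation should walk the whole parent chain (e.g. zh-Hans-CN to zh-Hans to zh) rather than jump straight to the first subtag.

```python
def is_supported(tag, supported):
    """Return True if `tag` or its primary-language subtag is supported."""
    supported = {s.lower() for s in supported}
    tag = tag.lower()
    if tag in supported:            # direct match, e.g. "sr-latn"
        return True
    parent = tag.split("-")[0]      # "nb-no" -> "nb" (the neutral culture)
    return parent in supported

print(is_supported("nb-NO", {"nb", "sr-Latn"}))  # True  (parent match)
print(is_supported("nb-NO", {"no"}))             # False (API returns "nb", not "no")
```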
Disassembly of Visual Studio Intel compiler c++ code
Most of my question is mentioned in the title of this question.
I recently wanted to learn (and play :-) ) with the Intel C++ Compiler, so I downloaded it and plugged it into a Visual Studio 2010 C++ (Win32 console) project. I am just curious to look at the disassembly.
Most disassemblers cannot open the produced code, so I'm just asking: does anyone know how to view the compiled code?
Thanks for help.
What about using the appropriate compiler option to produce assembly code? I know it's -S for GCC, but I'm pretty sure there's a similar option available for the MSVC toolchain.
dumpbin /disasm x.obj does not work?
Pass /Fa on the command line to generate assembly language output.
Inside Visual Studio, pull up the properties page for your project, look in the Output Files section under C/C++, and turn on assembly language output:
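Pulling the answers above together, a command-line sketch from a Developer Command Prompt (file names are illustrative; the Intel driver icl accepts the same /c and /Fa switches as cl):

```
rem Compile only and emit an assembly listing (produces main.asm):
cl /c /Fa main.cpp

rem Or disassemble the object file that was already produced:
dumpbin /disasm main.obj
```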
Gridview Sorted Event
I have a very small question that drives me mad :)
I have a GridView (bound from the DB, nothing special there) and I use a small function that runs over the GridView rows and sets .Visible to false when they don't match the search criteria. It works fine, but when I try to sort the GridView (by clicking on the header) all the "hidden" rows show up again.
I tried to use the "GridView_Sorted" event in order to run over the GridView and hide the rows again, but it doesn't seem to do anything. The select statement is a stored procedure, so I can't use filter expressions.
My question is: is there a way to run the hiding function after the sort
(as "Occurs when the hyperlink to sort a column is clicked, but after the GridView control handles the sort operation." {http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.gridview.sorted.aspx} suggests )
The GridView's PreRender event should do the trick.
You could just walk GridView.Rows and apply your logic there... That way it's guaranteed to occur at the right time wether the sort happens or not.
No luck - the PreRender runs and still all the rows are shown
How are you binding the data?
Maybe it would help to only bind the used data (rows) to the grid, because binding data that isn't displayed is kind of an overhead.
I want to iterate through my priority queue in C#
I am implementing a generic priority queue in C#. I used a SortedDictionary for the priority keys and a Queue for the values.
Now I want to iterate through it with a foreach loop, with a condition to skip over queue items with priority 0.
This is my priority queue class
public class PQ<Tpriority, Titem>
{
readonly SortedDictionary<Tpriority, Queue<Titem>> value;
public PQ(IComparer<Tpriority> priorityComparer)
{
value = new SortedDictionary<Tpriority, Queue<Titem>>(priorityComparer);
}
public PQ() : this(Comparer<Tpriority>.Default) { }
public bool Check{ get { return value.Any(); }}
public int Count
{
get { return value.Sum(q => q.Value.Count); }
}
public void Add(Tpriority priority, Titem item)
{
if (!value.ContainsKey(priority))
{
AddQueueOfPriority(priority);
}
value[priority].Enqueue(item);
}
private void AddQueueOfPriority(Tpriority priority)
{
value.Add(priority, new Queue<Titem>());
}
public Titem Next()
{
if (value.Any())
return Next_FHP();
else
throw new InvalidOperationException("The queue is empty");
}
private Titem Next_FHP()
{
KeyValuePair<Tpriority, Queue<Titem>> first = value.First();
Titem nextItem = first.Value.Dequeue();
if (!first.Value.Any())
{
value.Remove(first.Key);
}
return nextItem;
}
public SortedDictionary<Tpriority, Queue<Titem>>.Enumerator GetEnumerator()
{
return value.GetEnumerator();
}
}
public interface IEnumerator<Tpriority, Titem>{}
And this is my main program
I was able to get all my needed actions to work fine except this:
static void Main(string[] args)
{
var first = new PQ<int, string>();
first.Add(1, "random1");
first.Add(0, "random2");
first.Add(2, "random3");
while (first.Check)
{
Console.WriteLine(first.Next());
}
Console.WriteLine(first.Check);
/*it gives the error "Can not convert
type 'System.Collections.Generic.KeyValuePair<int,System.Collections.Generic.Queue<string>>'
to exam.PQ<int, string>"*/
foreach (PQ<int, string> h in first)
{
}
}
Thank you.
The error is clear. Each dictionary item is a KeyValuePair, so you cannot iterate the dictionary items (unless complicating the logic) to do something about each Queue. What have you tried to solve the issue?
I was thinking of using IEnumerator, but now I know I can't iterate through it. I'm not really sure it would work, so I might try another approach to implementing the priority queue. Thank you.
One way to do it would be to add your own method on your PQ class, that returns a collection of Queue items that pass your (any) validation check.
So something like this on your PQ class
public IEnumerable<Queue<Titem>> GetItems(Tpriority priority)
{
var validKeys = value.Keys.Where(x => !x.Equals(priority) );
return value.Where(x => validKeys.Contains(x.Key)).Select(x => x.Value);
}
Then you can iterate over it like this
foreach (var h in first.GetItems(0))
{
}
You might want a better name for the method though, which clearly states what it's going to do.
Update...
To just print out the values, you could do something like this.
public IEnumerable<Titem> GetItems(Tpriority priority)
{
var validKeys = value.Keys.Where(x => !x.Equals(priority) );
return value
.Where(x => validKeys.Contains(x.Key))
.SelectMany(x => x.Value);
}
And here it is broken up into bits with a few comments
public IEnumerable<Titem> GetItems(Tpriority priority)
{
// Find all the keys in the dictionary which do not match the supplied value
var validKeys = value.Keys.Where(x => !x.Equals(priority) );
// Find all the values in the dictionary where the key is in the valid keys list
var validItems = value
.Where(x => validKeys.Contains(x.Key));
// Because an individual queue item is actually a collection of items,
// SelectMany flattens them all into one collection
var result = validItems.SelectMany(x => x.Value);
return result;
}
Thank you, the method and iteration work, but it prints out System.Collections.Generic.Queue`1[System.String] System.Collections.Generic.Queue`1[System.String] instead of the values.
A queue is a collection of items. I'll update my answer to just give you values, if that's all you want
Yes please and Thank you.
If you found it helpful, please feel free to upvote the answer
Thank you very much, it works perfectly; now I just need to understand it.
I've updated it and added a few comments to explain the bits
Vertex label to change from id to another column in igraph
I am creating a graph that takes the ID as vertex names. However, I want to change the vertex labels to another column's values. How can I do that?
The columns in df1 are ID and NAME, and I want the vertex labels to be NAME.
My code:
df1:
ID NAME
1 Ada
2 Cora
3 Louise
df2:
SOURCE TARGET TYPE ID WEIGHT
1 2 DIRECTED 2 2
1 3 DIRECTED 1 2
2 1 DIRECTED 3 1
g = graph.data.frame(d = df2, directed = TRUE, vertices = df1);
V(g)$size<-degree(g)
plot(g,
layout=layout.circle, main="circle",
vertex.label.dist=0.4,
vertex.label=V(g)$id,
vertex.label.cex=1, edge.arrow.size=0)
Thanks
Please use dput(df1) and dput(df2) to share your data in your post.
You can try
g %>%
set_vertex_attr(
name = "name",
value = V(.)$NAME
)
which gives
IGRAPH 19e91db DN-- 3 3 --
+ attr: name (v/c), NAME (v/c), TYPE (e/c), ID (e/n), WEIGHT (e/n)
+ edges from 19e91db (vertex names):
[1] Ada ->Cora Ada ->Louise Cora->Ada
Simplifying array diff code using syntax sugar
Can I simplify this array diff code, using some coffeescript syntax sugar I didn't know about?
first = [{id:1},{id:2},{id:3},{id:4},{id:5}]
second = [{id:3},{id:4},{id:5},{id:6},{id:7}]
first = first.filter (first_element) ->
second = second.filter (second_element) ->
if first_element.id == second_element.id
first_element.remove = second_element.remove = true
return !second_element.remove?
return !first_element.remove?
console.log(first.concat second) # [{id:1},{id:2},{id:6},{id:7}]
Is it necessary to have no remove property on the remaining objects? This wouldn't add syntactic sugar, but gets rid of a few lines. Also, you can get rid of the returns and replace == with is and true with on, but that's not sugar, more icing.
It is not necessary to have no remove property on the remaining objects.
Do you need the remove at all?
No. I just tried to figure out how to diff arrays of objects by property value using as little code as possible without ruining readability, and came to remove as a result. I am actually interested in both "as little code as possible" and "without ruining readability" solutions.
This one translates to exactly the same javascript:
first = [{id:1},{id:2},{id:3},{id:4},{id:5}]
second = [{id:3},{id:4},{id:5},{id:6},{id:7}]
first = first.filter (first_element) ->
second = second.filter (second_element) ->
first_element.remove = second_element.remove = yes if first_element.id is second_element.id
!second_element.remove?
!first_element.remove?
console.log(first.concat second) # [{id:1},{id:2},{id:6},{id:7}]
The explicit returns were removed, and the if was put at the end of the line. true is replaced with yes and == is replaced with is.
While not shorter in lines, this one preserves the objects in the Arrays:
first = first.filter (firstElement) ->
keepFirst = yes
second = second.filter (secondElement) ->
keepFirst = (keepSecond = secondElement.id isnt firstElement.id) and keepFirst
keepSecond
keepFirst
Note that you can't use the and= operator, because it short-circuits.
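For contrast only, in a different language: the operation being golfed is a symmetric difference keyed on id, which set lookups express directly. A Python sketch of that shape (it builds new arrays instead of mutating, and makes no attempt at CoffeeScript brevity):

```python
first = [{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}, {"id": 5}]
second = [{"id": 3}, {"id": 4}, {"id": 5}, {"id": 6}, {"id": 7}]

# Collect the ids on each side once, then keep elements whose id
# does not appear on the other side.
first_ids = {e["id"] for e in first}
second_ids = {e["id"] for e in second}

diff = ([e for e in first if e["id"] not in second_ids]
        + [e for e in second if e["id"] not in first_ids])
print(diff)  # [{'id': 1}, {'id': 2}, {'id': 6}, {'id': 7}]
```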
React router 4 renders wrong route?
I've run into a slight problem. I have a Root component which contains a Switch and two Routes. When the user loads the page I want to render the initial <Route path="/" />. This route contains a component with a Route pointing to the gallery I want to show.
When the user clicks on the images, the route changes and another set of images is shown based on the route change. This works as expected. But when the user goes to the second <Route path="/login" /> inside the Root, the path changes but the initial route is rendered. Even the gallery is not rendered because of the non-matching path, but the Header and HeaderAbout are rendered. How?
class Root extends Component {
render() {
return (
<div>
<Navigation />
<Switch>
<Route component={Home} path="/" />
<Route component={Login} exact path="/login" />
</Switch>
<Footer />
</div>
);
}
}
Route inside initial
class Home extends Component<Props> {
_scrollDown = event => {
event.preventDefault();
this.gallery.scrollIntoView({ behavior: 'smooth' });
};
render() {
return (
<div>
<Header onClick={event => this._scrollDown(event)} />
<HeaderAbout />
<Route
render={props => (
<div
ref={node => {
this.gallery = node;
}}
>
<RelayGallery {...props} />
</div>
)}
path="/home/:galleryRef?"
/>
</div>
);
}
}
Change it like this
<Route component={Login} exact path="/login" />
<Route component={Home} path="/" />
Switch is unique in that it renders a route exclusively. In
contrast, every <Route> that matches the location renders inclusively.
Thanks! I did not know that the order of declaration inside switch matters, but it makes sense now :)
You could keep your route structure and just add the 'exact' prop to your home component route as well. Then the router will only render components that have the exact '/' route.
Is selecting from the same server but different database slower than the same database?
I am thinking about 2 possibilities for my project. There are 2 projects, each with a different database. Project P1 has Database D1 and Table T1. Project P2 has Database D2.
For Project P2, I need to select the data in Table T1 quite often. Is it better to create a replica of T1 inside Database D2 (name it T2) and do SELECT * FROM T2, instead of doing SELECT * FROM [D1].[T1]?
The 2 databases D1 and D2 are in the same server.
Have you already done your testing?
As a side note: you can't parameterise database names when referring to tables by db.schema.tablename so may end up with many references to change if the name of database 2 changes for any reason. Instead of direct references everywhere I suggest creating views in a separate schema in database 1 of the form CREATE VIEW external.sometable AS SELECT * FROM database2.schema.sometable and referring to those in other code instead. That way you can update database names with a lot less work.
@EdgarAllanBayron For some reason, I did not notice any difference when selecting over 400K records for same or different server. But I'm looking to any other insight that I might have missed.
@DavidSpillett Yes, I'm planning to use VIEW for this. Thanks for your info.
To add a bit to TomTom's comment: the optimizer has full access to metadata etc. regardless of whether the data is in one database or several databases. And the optimizer and execution engine are not constrained to one database, as long as it is in the same instance! If you talk about cross-instance queries (using linked servers, for instance), then performance will definitely hurt!
Could you make this a "bit more" of an answer? It's close - maybe some references and/or extra content?
Is it better to create a replicate of T1 inside Database D2
No. Besides using double the disk space, you also use double the memory if you access identical data from 2 copies of a table. And as memory is the most efficient buffer, you at least theoretically reduce system effectiveness. Theoretically, because a proper server may just not care - if you have half a terabyte of RAM and the duplicate table is 100 KB, then all impact is purely theoretical.
Selenium click at text field fails during test, passes single command test
I'm using the Selenium plugin through Firefox. I have a test case set up to do a search through some fields. The problem is that when the page loads to the search screen, it fails the "click at" command on the text field. It then, obviously, fails to edit the text in the field.
This command passes, though, if I test the single command after the test initially fails; then it allows the text to be edited.
Unfortunately I cannot give too much info about the site or the page; it's proprietary. I could provide some screen grabs of the failing commands (this is a lie, I need 10 rep to post images), sorry. Any info that can explain why this is happening would be great though.
the last commands to pass are as follows:
click at xpath=(//a (its a link)
pause for 3000 ms
fails here:
click at id=searchName 85,17
This is the error message: Trying to find id=searchName... Failed:
Implicit Wait timed out after 30000ms
It brings up the link and after the pause it fails. I allowed the pause to give the full page time to load; it doesn't seem to help at all. Again, it will run the command if I test the single command on the page immediately after I stop the test.
Edit: html code for the element being searched
<input class="form-control" placeholder="Enter full name or organization name..."
name="search" id="searchName" value="Mr And Mrs Ronald J Ulrich" autofocus="" type="text">
Edit 2: if I switch the item being searched to name=search I get a different error:
type on name=search with value Ronald J Ulrich... Failed:
Element is not currently interactable and may not be manipulated
It would be awesome, if you show us your code and relevant HTML
Test your xpath in chrome console and see if it finds the element.
The 'code' you gave says nothing. Provide a normal code you used, it is impossible to help James Bond
I added the HTML code for the element being searched for. Also, I'm not using WebDriver but the plugin for Mozilla; there is no code other than the page's HTML, just the display window. And when I enter id=searchName in the console in Firefox and Chrome, it finds the element just fine.
It looks like I was missing a "select window" command before trying to click and edit the element. Being new to this platform, I wasn't taking into consideration any new tabs that were opening. It failed the overall test due to opening a new tab and not having the select window command. Thanks folks.
How to get the Units by day via API - App store connect
Is it possible to get the data for Units downloaded (or installed) by day via the API? The problem is that it's hard to find documentation for it.
The image below shows the data I want to have:
https://i.sstatic.net/P3NDF.png
The only problem is that in this report generated by the API, the "Units" column counts the downloads, the "in-app purchases", "re-downloads" and other things, and this causes a difference from the number of units seen in the overview on App Store Connect, as @CameronPorter mentioned. However, when reading the documentation, I couldn't find a way to get only the downloads (units without the in-app purchases). See the explanations below:
Explanation of the problem cited in the Apple documentation
Link for reference: https://help.apple.com/app-store-connect/#/dev5340bf481
It takes several steps to achieve. First you have to follow these 2 links:
Create keys: https://developer.apple.com/documentation/appstoreconnectapi/creating_api_keys_for_app_store_connect_api
Create and sign JWT Token
https://developer.apple.com/documentation/appstoreconnectapi/generating_tokens_for_api_requests
The important keys to get are:
IssuerId
KeyId
VendorId
PrivateKey
If you are using Python, I would suggest using PyJWT to sign it
from datetime import datetime, timezone
import jwt
def sign_appstore_token(issuer_id, key_id, generated_private_key):
    bin_private_key = generated_private_key.encode()
    current_unix = int(datetime.now(tz=timezone.utc).timestamp())
    token = jwt.encode({
        "iss": issuer_id,
        "iat": current_unix,
        "exp": current_unix + 1000,
        "aud": "appstoreconnect-v1",
    }, key=bin_private_key, algorithm='ES256', headers={
        "alg": "ES256",
        "kid": key_id,
        "typ": "JWT"
    })
    return token
With the generated token, continue by following this link:
https://developer.apple.com/documentation/appstoreconnectapi/download_sales_and_trends_reports
To get the Units, reportType should be SALES. Also note that reportDate and frequency have to be consistent with each other: if you specify filter[frequency]=YEARLY, then filter[reportDate]=2021; if filter[frequency]=MONTHLY, then filter[reportDate]=2021-06. For more details, please refer to the above link.
Sample query here:
https://api.appstoreconnect.apple.com/v1/salesReports?filter[frequency]=YEARLY&filter[reportDate]=2021&filter[reportSubType]=SUMMARY&filter[reportType]=SALES&filter[vendorNumber]=YOUR_VENDOR_ID
Headers: Authorization: Bearer YOUR_ABOVE_TOKEN
You will get a binary response if it is successful, representing a .gz file. Extract the gz to get a .txt file delimited by \t.
Columns:
Provider, Provider Country, SKU, Developer, Title, Version, Product Type Identifier, Units, Developer Proceeds, Begin Date, End Date, Customer Currency, Country Code, Currency of Proceeds, Apple Identifier, Customer Price, Promo Code, Parent Identifier, Subscription, Period, Category, CMB, Device, Supported Platforms, Proceeds Reason, Preserved Pricing, Client, Order Type
The Python script here returns the file content as text; you can take your next step from there (a pandas table, a model, etc.), it is up to you:
import requests
import gzip
def download_appstore_objects(token, vendor_id, frequency, reportDate):
    link = f'https://api.appstoreconnect.apple.com/v1/salesReports?filter[frequency]={frequency}&filter[reportDate]={reportDate}&filter[reportSubType]=SUMMARY&filter[reportType]=SALES&filter[vendorNumber]={vendor_id}'
    response = requests.get(link, headers={'Authorization': f'Bearer {token}'})
    file_content = gzip.decompress(response.content).decode('utf-8')
    return file_content
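Since the report is tab-delimited text with a header row, one extra step (an illustrative sketch on top of the answer, not part of the API itself) turns the returned string into a list of dictionaries keyed by column name:

```python
import csv
import io

def parse_sales_report(file_content):
    """Parse the tab-delimited report text into a list of row dicts."""
    reader = csv.DictReader(io.StringIO(file_content), delimiter="\t")
    return list(reader)

# Tiny fabricated sample in the same shape as the real report:
sample = "Provider\tUnits\nAPPLE\t3\nAPPLE\t5\n"
rows = parse_sales_report(sample)
print(rows[0]["Units"])  # 3
```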
I find that the units returned by this report do not match the units shown on the App Store Connect website.
I did not double-check the Units because I have no rights to access the portal, but I think it could be affected by delay, or an incorrect filter day/report day. As it is the official document, I have no doubt about the units matching. Could you please explore more? @CameronPorter
Swift 3.0 Thread Class Target Does not Implement Selector
I declare a property for my thread like so:
let loggingThread = Thread.init(target: self, selector: Selector(("loggingThreadProcess:")), object: nil)
@objc func loggingThreadProcess(object: AnyObject?) {
}
But I am getting the this error:
[NSThread initWithTarget:selector:object:]: target does not implement
selector (*** -[_SwiftValue loggingThreadProcess:])
Any suggestions?
You should define it as:
let loggingThread = Thread(target: self, selector: #selector(loggingThreadProcess(object:)), object: nil)
Nevermind, I got it. Removed the colon from "loggingThreadProcess:", but effectively you were right @Michael. Also, I was trying to declare/define a thread instance property. Removed that and added code to create the thread when the user actually presses the logging button.
JQuery not summing the values of a set of input boxes in certain situations
I am using the following function, by calling it inside document.Ready(), that is defined inside a file called startScript.js, to sum the values of a set of input boxes.
One example of a call to this function would be autoSumBySelector('.class1', '.outputClass1'), which would sum the values of input boxes with class attribute class1 and display the sum in a div tag with class attribute outputClass1.
The function works fine except that, after inputting values into the input boxes (of, say, class1), if I change the value of one of those input boxes to 0, or remove the value completely to make it blank, the output div tag does not change its value.
Say I input the values 5, 10, 15 in three boxes; the output div correctly displays 30. If I change the value 10 to 1, the sum correctly displays as 21. But if I change any of the three values to 0 or blank, the output div does not change and still displays the previous value (say, 21).
var autoSumBySelector = function (selector, outputselector, func, fixed) {
//Attaches the Event to the selector or array of selector
if (selector != undefined) {
$(selector).change(function (event) {
var validation = true;
if (fixed == undefined) {
fixed = 0;
}
if (typeof (func) == "function") {
validation = func(this, outputselector);
}
if (validation) {
//This is to sum currency value of given selector
var total = 0;
$(selector).each(
function () {
var vl = this.value.replace(/,/g, '');
if (!isNaN(vl) && vl.length != 0) {
total += parseFloat(vl);
}
});
if (outputselector != undefined) {
$(outputselector).val(total).change();
}
else //if the output div is not defined then it returns the value
return total;
}
else {
event.preventDefault();
}
});
}
};
EDIT
This is an ASP.NET MVC app. I have two script files, common.js and startScript.js, that get loaded in the Index.cshtml (a view) file as shown below:
@{
ViewBag.Title = "App title";
}
@section Scripts{
<script type="text/javascript" src="~/Scripts/common.js">
</script>
<script type="text/javascript" src="~/Scripts/startScript.js">
.........
}
<form>
HTML tags here
.....
partial views rendered here
</form>
The function is defined in common.js and the calls are made from startScript.js as follows:
$(document).ready(function () {
$("#tabs").tabs();
.....
.....other calls
autoSumBySelector('.class4', '.outputClass4');
autoSumBySelector('.class3', '.outputClass3');
autoSumBySelector('.class2', '.outputClass2');
autoSumBySelector('.class1', '.outputClass1');
.....other calls
});
A working example on http://jsfiddle.net would be helpful. Also, I hope you didn't capitalize the R in $(document).ready({...}) in your code.
I created a jsfiddle (http://jsfiddle.net/7ZmCq/) and it works perfectly for me. How are you invoking autoSumBySelector?
@Oscar and @Blazemonger To answer your question, I have added more details in the Edit section of my original post. As Oscar shows in his demo the function should work. But for some reason it's not working under two scenarios in my app.
In which browser are you testing this? Maybe it could be a specific browser question.
@Oscar, I am using IE 9, 10 and Chrome 33. It's not working in any of them.
How do I SSH tunnel to a remote server whilst remaining on my machine?
I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a Bastion server in our public subnet. Tunnelling directly through to our cluster is easy. However, we need our deployment machine to establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?
Many thanks,
John
when you say "deployment machine" you meant the bastion server right?
No. Concourse -> Bastion -> Cluster. Our Concourse machine has tools, such as kubectl, to run against the cluster, but needs a tunnel established through the Bastion.
There are probably simpler ways to get it done, but the first solution that comes to my mind is setting up simple SSH port forwarding.
Assuming that you have SSH access to both machines, i.e. Concourse has SSH access to Bastion and Bastion has SSH access to the Cluster, it can be done as follows:
First, set up so-called local SSH port forwarding on Bastion (pretty well described here):
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>
Now you can access your kubernetes api from Bastion by:
curl localhost:<kube-api-server-port>
However, it still isn't what you need. Now you need to forward it to your Concourse machine. On Concourse, run:
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
From now on, you have your Kubernetes API available on localhost of your Concourse machine, so you can e.g. access it with curl:
curl localhost:<kube-api-server-port>
or incorporate it in your .kube/config.
Let me know if it helps.
You can also make such tunnel more persistent. More on that you can find here.
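As an aside, and assuming OpenSSH 7.3+ is available on the machines involved, the two forwards can be collapsed into a single command run on Concourse using a jump host, or made persistent in ~/.ssh/config (host names and users below are placeholders):

```
# Single command on Concourse, jumping through Bastion:
ssh -J ssh-user@<bastion-server-ip-address-or-hostname> \
    -L <kube-api-server-port>:localhost:<kube-api-server-port> \
    ssh-user@<kubernetes-cluster-ip-address-or-hostname>

# Equivalent ~/.ssh/config fragment:
# Host k8s-tunnel
#     HostName <kubernetes-cluster-ip-address-or-hostname>
#     User ssh-user
#     ProxyJump ssh-user@<bastion-server-ip-address-or-hostname>
#     LocalForward <kube-api-server-port> localhost:<kube-api-server-port>
```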
In AWS
Scenario 1
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
If that's the case, you can use kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided. If you don't have the kubeconfig file, follow these steps.
Scenario 2
when you have private cluster endpoint enabled (which seems to be your case)
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
Either you can modify your private endpoint (steps here) OR follow these steps.
Arrow is too low
There used to be an arrow showing what section you are in. The earliest design post shows it pretty clearly in this image under "Questions":
Another more recent example of the correct behavior can be found here.
Actually, this arrow is not completely gone. It's just a little too low now, as you can see on the main site (look carefully; it's a slightly different color than the background):
(Meta has the same problem, but you can't see the arrow because it's the same color as the background. I think that the arrow on main would look better if it matched the color of the background, but that's beside the point right now.)
The problem is in the CSS for this:
<li class="youarehere">…</li>
The padding for this element is 0 0 35px 0;. If you change the padding to 0 0 20px 0;, then the problem is fixed. I suspect that the new top bar height is responsible for messing things up.
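Expressed as a rule, using the class from the element above, the suggested fix is simply:

```css
.youarehere {
    padding: 0 0 20px 0; /* was: 0 0 35px 0 */
}
```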
The arrow no longer exists, so I guess it's status-completed:
Ionic: silently login an "admin" account to Firebase
I'm running an Ionic app on localhost. I need to log in an "admin" account to manage some data. I get the attached error when I run the following code:
firebase.initializeApp({
apiKey: "****",
authDomain: "****.firebaseapp.com",
databaseURL: "https://****.firebaseio.com",
storageBucket: "****.appspot.com",
messagingSenderId: "****"
});
this.db = firebase.database();
this.db.ref('/settings').once('value').then((snap)=>console.log(snap.val()))
I think the problem is that I'm not authenticated (I temporarily changed the Firebase rules to read/write = true and the error disappeared). How can I authenticate? Maybe I have to use a different approach? (I tried a lot using AngularFire, without success.)
Thanks.
I opted for 1) logout then 2) login anonymously
https://firebase.google.com/docs/auth/web/anonymous-auth
Dynamically adding multiple rows to a datatable
I see similar questions have been asked before but I cannot find the answer I'm after.
My question is: how do I add multiple rows of data at the same time? I have a working example below which will slowly add 1000 rows using row.add(), but I cannot for the life of me work out how to add these rows in one batch using rows.add().
$('#addRow').on( 'click', function () {
for (var i = 0; i < 1000; i++) {
r = [i+'.1', i+'.2', i+'.3', i+'.4', i+'.5', i+'.6', i+'.7'];
mytable.row.add( r ).draw( false );
}
});
I have gone through all the examples on the web I can find but all of the examples I have found work using a set amount of predefined data, not how to handle an unknown number of rows.
Any help would be greatly appreciated.
Regards, Chris
Welcome to Stack Overflow. Please provide a Minimal, Reproducible Example: https://stackoverflow.com/help/minimal-reproducible-example It would also be good to take the Tour: https://stackoverflow.com/tour
OK, I've got it working, so if anyone else gets stuck on this same issue, this is my code now:
$('#addRow').on( 'click', function () {
var arrayAll = [];
for (var i = 0; i < 1000; i++) {
var arrayRow = [i+'.1', i+'.2', i+'.3', i+'.4', i+'.5', i+'.6', i+'.7'];
arrayAll = arrayAll.concat([arrayRow]);
}
mytable.rows.add( arrayAll ).draw();
} );
This will add a 1000 rows within a second :)
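The batch-building step itself is framework-free and can be factored into a small helper (push avoids the repeated array allocations that concat incurs; only the final DataTables call needs the plugin):

```javascript
// Build n rows of sample data up front, then hand the whole batch to rows.add()
function buildRows(n) {
  var rows = [];
  for (var i = 0; i < n; i++) {
    rows.push([i + '.1', i + '.2', i + '.3', i + '.4', i + '.5', i + '.6', i + '.7']);
  }
  return rows;
}

// usage (inside the click handler): mytable.rows.add(buildRows(1000)).draw();
```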
Without a proper example, I am unable to test the results. Consider the following code.
function addRows(n, tbl) {
var lastRow = tbl.row(":last");
var index = lastRow.index();
for (var i = (index + 1); i <= (index + n); i++) {
r = [i + '.1', i + '.2', i + '.3', i + '.4', i + '.5', i + '.6', i + '.7'];
tbl.row.add(r).draw(false);
}
}
$('#addRow').on('click', function() {
addRows(1000, mytable);
});
Assuming you have some amount of data in place, you want to find the last row in the table and then build on that. You cannot build rows with an infinite number of rows, but you can use a dynamic variable amount. You can also build an array of rows and then use table.rows.add().
See More:
https://datatables.net/reference/api/rows()
https://datatables.net/examples/api/add_row.html
https://datatables.net/reference/api/rows.add()
Thank you very much Twisty, I will look at your suggestions and also take the tour and return with a minimal/reproducible example or even a working answer if I can work it out myself.
Scala: conforming concrete type to generics with bounds
I've defined a function with generics and I have some trouble understanding the error that the compiler is giving me. The problem can be expressed simply with :
def myfunc[T <: MyClass](param:MyClass):T = param
It gives me this error on param in the body: Expression of type MyClass doesn't conform to the expected type T.
Why? param fits the upper bound of T. How can I make something like this work without resorting to casting param to T?
Ok. Assume, you have the below:
class Animal
class Dog extends Animal
And now lets have your function below:
def myfunc[T <: Animal](param:Animal):T = param
For now, assume the compiler doesn't throw an error. On calling myfunc[Dog](new Animal), it should return a Dog according to the function definition. But in reality, you are just sending back an Animal, which should not be allowed. Hence the error.
Now had it been:
def myfunc[T >: Dog](param:Dog):T = param
Here, on calling myfunc[Animal](new Dog), the return type is Animal, but the function returns a Dog, which is correct since a Dog is an Animal. Hope that clarifies it.
The clearest answer on the whole page. Breath of fresh air given the example with animals rather than Ts and Us.
You have the variance backwards. If T <: MyClass, then you can supply a T where MyClass is expected. But you cannot give a MyClass where T is expected.
To illustrate:
def myfunc(x: Any): String = x
This gives the same error, for the same reason. An Any is not a String, and your MyClass is not a T.
This is what you may have meant:
def myfunc[T >: MyClass](param:MyClass):T = param
Which works fine.
Yes, but no. I don't want T to be a supertype of MyClass. I really want T to be a subtype. However, I realized that the return type cannot be T, since T could be any subtype of MyClass and the runtime type of param also could be a subtype of MyClass, and they don't necessarily match.
Try pretending that Scala does not have subtyping, and figure out the requirement from there. Then reintroduce subtyping if you need it.
Look at this:
class A(); class B extends A;
scala> def myfunc[T <: A,U >: A](param: U):T = param
<console>:8: error: type mismatch;
found : param.type (with underlying type U)
required: T
def myfunc[T <: A,U >: A](param: U):T = param
Your param: MyClass really has a lower bound (>: MyClass), like in Java: it can be any descendant of MyClass. So how could you run myfunc(new B) and return a type that is higher in the class diagram than A? B is considered lower than A in the inheritance diagram; you return a B but say that it is really <: A. Error.
Sometimes it's useful to explicitly declare all those types, like T, U, V etc., and see what the compiler says.
Your assumption that param relates to T because they share a common ancestor is wrong.
For example, a base class A can have two subclasses B and C. If T is of type B and param is of type C, param is not of a type relatable to T.
I hope to help you.
The argument type is T not MyClass.
object StackOverFlow13851352 extends App {
class MyClass
class YourClass extends MyClass
def myfunc[T <: MyClass](param: T): T = param
val clazz: YourClass = myfunc(new YourClass)
}
Scala code runner version 2.10.0-RC2 -- Copyright 2002-2012, LAMP/EPFL
java version "1.7.0_09"
Yes, I know I can do that, but that's not the point. I want to know why the compiler refuse to understand that param's type conforms to T.
Documentation for Proguard
Can anyone point me towards the un-obfuscated version of the Proguard documentation?
I've seen the obfuscated version here: http://proguard.sourceforge.net/#manual/
Too localized?! Because the manual is understandable outside the UK?!
How about http://proguard.sourceforge.net/index.html#manual/examples.html ? There is unfortunately not much more available.
How to Schedule a Pipeline in Azure Data Factory
I followed the MS tutorials and created a pipeline to move data from an on-premises SQL table to Azure Blob storage, and then there's another pipeline to move the blob data to an Azure SQL table. From the document provided, I need to specify the active period (start time and end time) for the pipeline, and everything runs well.
The question is: what can I do if I want the pipeline to be activated every 3 hours, until I manually stop the operation? Currently I need to change the start time and end time in the JSON script and publish it again every day, and I think there should be another way to do it.
I'm new to Azure; any help/comment will be appreciated, thank you.
P.S. I can't do transactional replication since my on-premises SQL Server is SQL 2008.
You can set infinity as the end of the pipeline:
"end": "9999-09-09T12:00:00Z"
and in Your activity, add
"scheduler": {
"frequency": "Hour",
"interval": 3
},
to set the scheduler. I do not see a possibility to manually stop the pipeline (except restarting it).
It's not quite clear what you're trying to do from the description but if the goal is to have it temporarily "not run" you could just pause the Pipeline itself
viz.:
"properties": {
"isPaused": false,
}
Set that to "true" in your pipeline and re-deploy and it's paused. You could then set it back to "false" and unpause it the next day.
Keep in mind that if you have slice activity scheduled during the paused period it will run at the time it's unpaused.
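Combining both answers, here is a hedged sketch of what the pipeline JSON could look like in the ADF (v1) schema; names, dates, and the activity body are placeholders, not a complete definition:

```json
{
    "name": "OnPremToBlobPipeline",
    "properties": {
        "start": "2016-01-01T00:00:00Z",
        "end": "9999-09-09T00:00:00Z",
        "isPaused": false,
        "activities": [
            {
                "name": "CopyOnPremSqlToBlob",
                "type": "Copy",
                "scheduler": {
                    "frequency": "Hour",
                    "interval": 3
                }
            }
        ]
    }
}
```

Flipping isPaused to true and re-deploying suspends it; the far-future end date keeps it running every 3 hours until then.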
how to add formatted string (for multiple filenames etc.) in tchar
Like in Win32 C++, we used to use the following code when we needed to handle multiple files such as
filename0.txt, filename1.txt, ..., filename30.txt etc.:
char fname[20];
for (int i = 0; i <= 30; i++) {
    sprintf(fname, "filename%d.txt", i);
}
How can we do the same with TCHAR, since I need to read multiple files in this case too?
thanks
Kashan
You can specify string literals to use wide chars (16bit) like this L"filename%d.txt"
@πάνταῥεῖ but that is wchar not tchar.
@BartlomiejLewandowski What's tchar at all by means of standards?
_stprintf( fname, TEXT("filename%d.txt"), i);
You will need to define fname as a TCHAR array.
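For illustration, here is a hedged standard-C sketch using swprintf, which is roughly what _stprintf maps to when UNICODE is defined (note the MSVC swprintf variants differ slightly from ISO C, so treat this as a sketch, not the exact Windows call):

```c
#include <assert.h>
#include <stdio.h>
#include <wchar.h>

/* Build the i-th filename into buf (capacity n wide characters).
   Returns the number of characters written, or a negative value on error. */
int make_fname(wchar_t *buf, size_t n, int i) {
    return (int)swprintf(buf, n, L"filename%d.txt", i);
}
```

A loop over i = 0..30 can then call make_fname and open each file with _tfopen (or _wfopen/fopen depending on the build).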
Non-linear system of two equations (Method of Moments for Generalized Pareto)
In a Method of Moments Estimation problem involving the generalized Pareto Distribution, the following system of 2 non-linear equations arise
\begin{align*}
\bar{X} &= \frac{\alpha \beta}{\alpha - 1}. \\
\frac{1}{n}\sum_{i=1}^{n} X^2_i &= \frac{\alpha \beta^2}{\alpha - 2}.
\end{align*}
The problem is to solve the system for $\alpha$ and $\beta$ in terms of $\bar{X}$ and $\frac{1}{n}\sum_{i=1}^{n} X^2_i$. The thing is, it's easy to isolate $\alpha$ or $\beta$ from either of the two equations, but after this has been done you cannot isolate the other parameter in the other equation. Using Mathematica you can get two solutions, but I cannot just say "According to Mathematica, the solution is ..."
$$
\begin{cases}
\frac{a b}{a-1}=x\\
\frac{a b^2}{a-2}=y\\
\end{cases}
$$
multiply the first equation by $b$
$$
\begin{cases}
a b^2=(a-1)bx\\
a b^2=(a-2)y\\
\end{cases}
$$
thus
$$(a-1)bx=(a-2)y\to b=\frac{(a-2) y}{(a-1) x} $$
Now plug this in the first, the very first equation
$$ab=x(a-1)\to a\cdot \frac{(a-2) y}{(a-1) x}=x(a-1)$$
You get a quadratic equation in $a$ which gives
$$a=\frac{x^2-y\pm\sqrt{y^2-x^2 y}}{x^2-y}$$
plug back to get $b$
$$b_1=\frac{y \left(\frac{x^2-y-\sqrt{y^2-x^2 y}}{x^2-y}-2\right)}{x \left(\frac{x^2-y-\sqrt{y^2-x^2 y}}{x^2-y}-1\right)}=\frac{y}{x}\cdot \frac{x^2-y-\sqrt{y \left(y-x^2\right)}}{\sqrt{y \left(y-x^2\right)}}=\frac{y-\sqrt{y \left(y-x^2\right)}}{x}$$
and
$$b_2=\frac{y+\sqrt{y \left(y-x^2\right)}}{x}$$
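To double-check the closed form numerically, here is a hedged Python sketch using the same $x$, $y$ names as above (requires $y > x^2$ for the square root to be real):

```python
import math

def solve_pareto_moments(x, y):
    """Closed-form method-of-moments solutions: the two (alpha, beta) candidate pairs.

    x is the sample mean, y the sample second moment; requires y > x^2.
    """
    d = x * x - y
    s = math.sqrt(y * y - x * x * y)
    return ((d - s) / d, (y - s) / x), ((d + s) / d, (y + s) / x)
```

For example, $\alpha = 3$, $\beta = 2$ gives $x = 3$, $y = 12$, and the first pair recovers $(3, 2)$; plugging either pair back into the original two equations reproduces $x$ and $y$.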
How can I see the parameters in intellisense in a spark file?
I've got Spark IntelliSense working, but when I open the parentheses () of a method I cannot see what is supposed to go in there, and several of the methods have overloads so I can't see what options I have.
For example, with !{Html.Hidden()}, once I open the () I cannot see what parameters I am supposed to pass.
any ideas?
Found the problem
ReSharper was interfering with the IntelliSense.
solution is here http://sparkviewengine.com/usage/intellisense
Retrieving 2 separate portions of a text in Wordpress
I have a page in Wordpress containing a text with 5 paragraphs. Each paragraph is separated by a <!--more-->.
Now I need to retrieve the first 2 paragraphs to display on my front_page.php, each paragraph in a different <section>.
What I am using now is the following:
in functions.php
function split_content() {
global $more;
$more = true;
$content = preg_split('/<span id\=\"(more\-\d+)"><\/span>/', get_the_content('Read more...'), 0);
for($c = 0, $csize = count($content); $c < $csize; $c++) {
$content[$c] = apply_filters('the_content', $content[$c]);
}
return $content;
}
in front_page.php
<div>
<?php
$my_query = new WP_Query( 'page_id=26' );
if ($my_query->have_posts()) : $my_query->the_post();
$content = split_content();
?>
<?php echo array_shift($content); ?>
</div>
<div>
<?php echo array_shift($content); ?>
<?php endif; ?>
</div>
I am getting the right output for the first div, but for the second div I am getting the rest of the text (the total text minus the first paragraph).
Could anyone suggest an explanation please? I am quite new to this.
If I understand your problem, then there is a much easier solution: you can just use the read-more function in the text editor within the post, and display the_content within your loop. Here is a reference: The Content function.
So forget about the split_content function, and within your query literally just use the function the_content, and then in the post set your read-more separator after that second paragraph.
Thank you very much for your great help. How could i retrieve the text before the 2nd <--more--> tag and ignoring the first one?
You need to remove all of those more tags except for one, under the 2nd paragraph. Then to separate them into sections, just use HTML markup in the text editor within the page. Your question is a bit vague so I'm not sure if this fully answers it, but it should work if you're just trying to print 2 of the 5.
I need to print the 1st and the 2nd text before the <--more--> tag, but each in a different place and section.
And you can't do that within the WordPress text editor? If you're doing this within a post or page, you shouldn't have a problem doing it within the text editor. If it's on the home page then you should be using custom fields or custom post types, not a page, or even two separate posts within a new category.
Can mythological names be trademarked and become associated with an entity (eg. a company)?
Can the names of creatures, deities (gods), locations, etc. be trademarked and become associated with a specific entity (so it receives protection against dilution)?
Simple answer: Nike..
I looked up “Thor” (assuming a movie and comic book character might be trademarked) and there are dozens of trademarks in all different areas. So I guess you can’t create a movie character or a power drill named “Thor” but other things will be open. Probably weaker than “Nike”.
Yes. For example, search the USPTO trademark database for "Deimos."
See also "Lamborghini Trademark for 'Deimos' May Point to Future Model".
I cannot believe you missed Nike or Amazon.
I suppose many people won’t even know that “Nike” is a Greek goddess.
Angular 2 - Getting object id from array and displaying data
I currently have a service that gets an array of JSON objects from a JSON file, which displays a list of leads. Each lead has an id, and when a lead within this list is clicked it takes the user to a view that has this id in the URL, i.e. ( /lead/156af71250a941ccbdd65f73e5af2e67 ).
I've been trying to get this object by id through my leads service but just can't get it working. Where am I going wrong?
Also, i'm using two way binding in my html.
SERVICE
leads;
constructor(private http: HttpClient) { }
getAllLeads() {
return this.http.get('../../assets/leads.json').map((response) => response);
}
getLead(id: any) {
const leads = this.getAllLeads();
const lead = this.leads.find(order => order.id === id);
return lead;
}
COMPONENT
lead = {};
constructor(
private leadService: LeadService,
private route: ActivatedRoute) {
const id = this.route.snapshot.paramMap.get('id');
if (id) { this.leadService.getLead(id).take(1).subscribe(lead => this.lead = lead); }
}
JSON
[
{
"LeadId": "156af71250a941ccbdd65f73e5af2e66",
"LeadTime": "2016-03-04T10:53:05+00:00",
"SourceUserName": "Fred Dibnah",
"LeadNumber": "1603041053"
},
{
"LeadId": "156af71250a999ccbdd65f73e5af2e67",
"LeadTime": "2016-03-04T10:53:05+00:00",
"SourceUserName": "Harry Dibnah",
"LeadNumber": "1603021053"
},
{
"LeadId": "156af71250a999ccbdd65f73e5af2e68",
"LeadTime": "2016-03-04T10:53:05+00:00",
"SourceUserName": "John Doe",
"LeadNumber": "1603021053"
}
]
You didn't use the newly created leads array (const leads is not this.leads), so do this:
getLead(id: any) {
return this.getAllLeads().find(order => order.LeadId === id);
}
And change your map to flatMap, because from the server you get an array, but you have to transform it to a stream of its items:
getAllLeads() {
return this.http.get('../../assets/leads.json').flatMap(data => data);
}
Don't forget to import it if you have to: import 'rxjs/add/operator/flatMap';
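Stripped of the Observable machinery, the lookup itself is a plain array find; note the property in the JSON is LeadId, not id (which was part of the original bug):

```javascript
// Find a lead by its LeadId in the array loaded from leads.json
function findLead(leads, id) {
  return leads.find(function (lead) { return lead.LeadId === id; });
}
```

In the service this runs after the array has been emitted from the stream; the helper itself needs nothing from Angular or RxJS.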
I'm getting the error Property 'find' does not exist on type 'Observable'
@DBoi You have to import find: import 'rxjs/add/operator/find';
Thanks Roland. i'm now getting this error on the first 'id' - Property 'id' does not exist on type 'Object' even though my ILead model has an id:string;
@DBoi Even if your ILead model has id, are you sure that the object has? In your JSON file you have LeadId. Try this: order.LeadId === id. If this is not the issue, try to log your object to check what's the problem with it.
Unfortunately i'm still getting the same error using LeadId. Have I mapped it correctly in the getAllLeads function? Its logging the array in the console if I do .map((response) => console.log(response));
I think, I know the problem now: in your getAllLeads method change map to flatMap (with the same parameter). It should work. I update my answer too.
thanks Roland, just in case you need to update your answer... I had to import 'rxjs/add/operator/mergeMap';
Getting the following error: Argument of type '(data: Object) => Object' is not assignable to parameter of type '(value: Object, index: number) => ObservableInput<{}>'.
Type 'Object' is not assignable to type 'ObservableInput<{}>'.
The 'Object' type is assignable to very few other types. Did you mean to use the 'any' type instead?
Type 'Object' is not assignable to type 'ArrayLike<{}>'.
The 'Object' type is assignable to very few other types. Did you mean to use the 'any' type instead?
Property 'length' is missing in type 'Object'.
You should play with console.log in your getAllLeads() too to check the issue. Maybe try something like this: .flatMap((data: any[]) => data).
That worked. Thanks so much for taking the time to help - its much appreciated. Also learnt something new - mergeMap!
You can have getLead at the component level itself, since you are not making any API call to get the information. In your component:
this.lead = this.leads.find(order => order.id === id);
or to make the above service work, just do leads instead of this.leads
const lead = leads.find(order => order.id === id);
im getting the error Property 'find' does not exist on type 'Observable'
in my getAllLeads function?
are you referring to my JSON set up?
Docker AWS access container from Ip
Hey, I am trying to access my Docker container with my AWS public IP, and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04,
where I am using a Docker image of Ubuntu, with an Apache server installed inside the image. I want to access that using the public AWS IP.
For that I have tried docker run -d -p 80:80 kyo, where kyo is my image name. I can do this, but what else do I need to do in order to host this container with AWS? I know it is just a networking thing; I don't know how to achieve this goal.
What do you get when accessing port 80 in the browser? Does it resolve and show some error?
If not, check your AWS security group policies; you may need to whitelist port 80.
Log into the container and check that Apache is up and running. You could check for open ports inside the running container:
netstat -plnt
If above all are cleared, there is no clear idea why you can't access it outside. You could then check for apache logs, if something wrong with your configuration.
I'm not sure if it needs to have the EXPOSE parameter in your Dockerfile, if you have built your own container.
Go through this,
A Brief Primer on Docker Networking Rules: EXPOSE
Edited answer :
You can have a workaround by having ENTRYPOINT s.
Have this in your Dockerfile and build an image from it.
CMD ["apachectl", "-D", "FOREGROUND"]
or
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
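Putting that together, a minimal hedged Dockerfile sketch (the base image and package names are assumptions, not taken from the question):

```dockerfile
FROM ubuntu:16.04

# Install apache inside the image
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*

# Document the port the container listens on
EXPOSE 80

# Keep apache in the foreground so the container stays alive
CMD ["apachectl", "-D", "FOREGROUND"]
```

Build with docker build -t kyo . and run with docker run -d -p 80:80 kyo, then open the EC2 public IP (with port 80 allowed in the security group).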
I have Ubuntu 16.04 and I am using the Docker ubuntu image. I have installed Apache in Docker and port 80 is open. I don't know what else I need to do.
I haven't installed Apache on the AWS Ubuntu host.
Make sure the AWS security group is open for Apache.
Log into the container and check the Apache service status and open ports:
docker exec -it <container-id> bash
It is working when I use docker run -d -p 80:80 kyo /usr/sbin/apache2ctl -D FOREGROUND. Is there an easier way to do this?
Where are the system spellchecking dictionaries?
I would like to use the spellchecking dictionaries bundled with OS X from the command line (hunspell), but can't seem to find them. In /System/Library/Spelling there are only 2 files pl_PL.{aff,dic}, and find / -name '*.dic' revealed nothing.
I know I can dowload dictionaries from OpenOffice etc., but I'd like to find the ones bundled with OS X.
EDIT To clarify, there are at least two kinds of dictionaries in OS X:
Definitions used in Dictionary.app. I'm not interested in those.
Word lists used by the system spellchecker (red dotted lines). I know OS X uses hunspell because the hunspell website says so, and there are numerous posts on how to add new ones (1, 2). Just, I don't want to add new ones but use the English one that obviously comes with the system.
What build of hunspell is in play? (as it's not explicitly included as a command line tool on OS X - adding that detail might help you get a better quality answer.) Another answer here shows that the spell check routines for TextEdit source words from the same place as Dictionary app - so be clear to explain how your problem is different than locating /Library/Dictionary or ~/Library/Dictionary
@bmike libhunspell is in /usr/lib/libhunspell-<IP_ADDRESS>.0.dylib. I'm not sure which of the answers you're referring to, the top-rated answer copied words from Wikipedia? Another answer + comments in this thread however is correct and exactly what I'm after (http://apple.stackexchange.com/a/21446/54379)
Now that you have added information to your question it makes more sense so my answer was wrong and I removed it.
/usr/share/dict: there is a words file in there.
@thipani, libhunspell is a library and not callable directly from the command line, what is hunspell
The spelling dictionaries you are interested in appear to be located in the following location (checked on 10.8.4 and 10.6.8):
/System/Library/Services/AppleSpell.service/Contents/Resources/
The word lists are stored in this directory by language, so U.S. English is in the English.lproj folder.
However, these files are stored in a binary format that I haven't deciphered yet...
Under macOS Catalina...
There are two locations:
/System/Library/Services/AppleSpell.service/Contents/Resources/AppleSpell.8
/Users/${HOME}/Library/Dictionaries/
The first is the system spell check dictionary and the second is the user dictionary that is created/modified when the user adds a learned word. The second one can be edited using any standard text editor.
On macOS Sonoma (14.4), the ~/Library/Spelling/LocalDictionary seems to remain unchanged regardless of the words added to / removed from the system dictionary.
By inspecting the files opened by the AppleSpell process, I was able to identify another file which appears to contain the user dictionary word list:
~/Library/Group Containers/group.com.apple.AppleSpell/Library/Spelling/LocalDictionary
Editing the contents of this file appears to work.
After editing, you will need to either log out / in or restart your Mac. (Simply restarting the AppleSpell process doesn't seem to be enough.)
Tangential note: It's rumored that it's important to maintain the word list in the LocalDictionary file in alphabetical order (the system seems to), but I haven't tested or verified this.
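Since LocalDictionary is a plain newline-separated word list, it can be scripted; here is a hedged Python sketch (the path above is an assumption based on the Sonoma observation, and remember a logout/login is needed for changes to take effect):

```python
from pathlib import Path

def load_words(path):
    """Read a newline-separated word list (the LocalDictionary format)."""
    return Path(path).read_text(encoding="utf-8").split()

def add_word(path, word):
    """Insert a word, keeping the list sorted, as the system seems to do."""
    words = sorted(set(load_words(path) + [word]))
    Path(path).write_text("\n".join(words) + "\n", encoding="utf-8")
```

Point it at ~/Library/Group Containers/group.com.apple.AppleSpell/Library/Spelling/LocalDictionary (or a copy of it) rather than editing live, and keep a backup.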
How bind mouse move to all scale widget?
I am developing a form generator in Tcl/Tk.
I would like to bind a mouse move to all Scale widgets.
I generate a scale widget:
grid [ttk::scale .frm.fgs_$name -length $length -from -100 -to 100 ] -column 2 -row $row -sticky w
I tried binding that doesn't works:
bind Scale <B1-Motion> {puts "Scale: %W"}
(also the <Leave> event doesn't works.
This is an example which worked for me:
bind .c <Motion> {displayXY .info %x %y}
And you need a procedure to bind to. Often you need to rescale the movement of the cursor. In this case I created a cursor that was a horizontal scale line traversing the Y-axis:
proc displayXY {w cx cy} {
set x [expr {floor(double(($cx-$::dx)/50.))}]
set y [expr {double((-$cy+$::dy))}]
set ::cursorPosition [format "x=%.2f y=%.2f" $x $y]
# remove old cursor "line"
.c delete yTrace
....much Cartesian math....
# redraw new cursor "line"
if { $y1 > -250 } {
.c create line $x1 $y1 $x2 $y2 -width 3 -fill grey60 -tags "yTrace"
.c scale yTrace 0 0 1 -1
}
}
In general, binding to a widget class is tricky since it affects all users of that widget, including ones that are in library code. It's not recommended, at least not with existing widget classes. (In contrast, it's a very good idea to do if you're making your own widget class.) For normal user code, it's better to either bind to the widget name:
bind .frm.fgs_$name <B1-Motion> {puts "Scale: %W"}
Or to bind to a new binding tag and install that:
# You're advised to begin custom binding tags with a lower-case letter
bind myScale <B1-Motion> {puts "Scale: %W"}
# Install after the instance-level bindings but before the class-level bindings
bindtags .frm.fgs_$name [linsert [bindtags .frm.fgs_$name] 1 myScale]
Your immediate problem is that ttk::scale uses the class name TScale. It also has an existing <B1-Motion> binding.
Matlab ODE Solvers Providing Incorrect Solutions
I am having a huge headache dealing with MATLAB's ODE solvers. Very few (if any at all) want to help me on the MATLAB forums, because it has been a persistent issue that I have found no solution to in the last 4 months, and there is no error message to provide. Essentially, I simply want to solve the Lorentz force differential equations using one of MATLAB's ODE solvers. What is so strange, however, is that the ODE solvers do not throw any error at me even though the solutions they provide are wrong. I know, based on my initial conditions, what a specific solution should look like. Either that, OR the solver completely ignores half the equation (again without throwing an error) and simply solves the other half as if it's the only thing there. It is so odd, and because there is no error message it has been so difficult to figure out.
Here are the files from my system; the script does not run on my end:
"diffiqtest.m"
v0 = [0 0 0];
p0 = [0 0 0];
s0 = [v0 p0];
tspan = [0 5];
[t,S] = ode15s(@reffun,tspan,s0);
"reffun.m"
function refsolve = reffun(t,s)
Ex = 0;
Ey = 0;
Ez = 0;
persistent Bx By Bz
%Used so that the B-field function is only run once
if isempty(Bx)
[Bx, By, Bz] = B_test();
end
%Reference: s(1) = vx, s(2) = vy, s(3) = vz, s(4) = x, s(5) = y, s(6) = z
ode1 = Ex + s(2).*Bz(s(4),s(5),s(6)) - s(3).*By(s(4),s(5),s(6));
ode2 = Ey + s(3).*Bx(s(4),s(5),s(6)) - s(1).*Bz(s(4),s(5),s(6));
ode3 = Ez + s(1).*By(s(4),s(5),s(6)) - s(2).*Bx(s(4),s(5),s(6));
ode4 = s(1);
ode5 = s(2);
ode6 = s(3);
refsolve = [ode1; ode2; ode3; ode4; ode5; ode6];
end
"B_test.m"
function [Bx, By, Bz] = B_test()
%Bfieldstrength = 0.64; %In (Teslas)
Bfieldstrength = 0;
magvol = 3.218E-6; %In (m)
mu0 = (4*pi)*10^-7;
magnetization = (Bfieldstrength*magvol)/mu0;
syms x y z
m = [0,0,magnetization];
r = [x, y, z];
B = mu0*(((dot(m,r)*r*3)/norm(r)^5) - m/norm(r)^3);
Bx = matlabFunction(B(1),'vars',r);
By = matlabFunction(B(2),'vars',r);
Bz = matlabFunction(B(3),'vars',r);
end
Initially I was getting the error:
Error using symengine>@()0.0
Too many input arguments.
Error in reffun (line 14)
ode1 = Ex + s(2).*Bz(s(4),s(5),s(6)) - s(3).*By(s(4),s(5),s(6));
Error in odearguments (line 90)
f0 = feval(ode,t0,y0,args{:}); % ODE15I sets args{1} to yp0.
Error in ode15s (line 150)
odearguments(FcnHandlesUsed, solver_name, ode, tspan, y0, options, varargin);
Error in diffiqtest (line 7)
[t,S] = ode15s(@reffun,tspan,s0);
However, on the Matlab forums someone recommended that I added another input "vars,r" when calling a matlabFunction. Another oddity here is that even though I applied this (and cleared all persistent variables), the code still did not run at all. I told him this, so he provided me his code and it worked just fine even though the exact same lines are typed out. Using his files I attempted to input initial conditions given that I knew what the solutions should look like, but they were incorrect. Again, his code was EXACTLY the same as mine, copied and pasted and it just happened to work.
This issue occurs in some much longer code I have been working on, however I decided to go back to the simplest form I could to reproduce the issue so that others may not be dissuaded from providing assistance. I am really at a stand still so any constructive advice is appreciated.
What solution were you expecting? From the current code, all fields and forces are zero, so you should get a solution that stays at the origin. // Why are you using symbolic expressions to generate a much more inefficient numeric function? You could just take the code throwing out the symbolic declarations and directly evaluate the field.
That was an error prior to posting. Comment out the assignment "Bfieldstrength = 0;" and un-comment "Bfieldstrength = 0.64;". The issue is the same nonetheless. In regards to the symbolic expression, I did that so that the differential equations can reference the local E and B fields at every time step.
Ok to the first. Still, could you explain exactly what is wrong, perhaps give a test condition for the (partial) correctness of a numerical solution? And to the second, you can achieve the same with a function call to a purely numerical evaluation of the field equations, there is no need for a symbolic pre-processing (such as deriving the dynamical equations from a Hamiltonian).
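To illustrate the comment above: the field can be evaluated purely numerically, with no symbolic preprocessing. This hedged Python/NumPy sketch mirrors the exact formula written in B_test.m (including its omission of the usual 1/(4π) factor):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def dipole_field(m, r):
    """B = mu0 * (3 (m . r) r / |r|^5  -  m / |r|^3), as written in B_test.m."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    return MU0 * (3.0 * np.dot(m, r) * r / rn**5 - m / rn**3)
```

The ODE right-hand side can then call dipole_field(m, [x, y, z]) directly at each step, avoiding matlabFunction handles and the persistent-variable workaround entirely.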
Yes, a condition I have used is as follows:
vperp = -1.3414e+12;
v0 = [vperp/2 vperp/2 0];
p0 = [0.05 0 0];
s0 = [v0 p0];
tspan = [0 10];
Where "vperp" is the perpendicular component of the initial velocity of the electron which is then split into the x and y hat components.The z hat component stays zero so that what should occur is the particle should oscillate in circles in the equatorial plane and around the entire magnet due to a magnetic field gradient affect.
I know the radius (gyro-radius) of the circles the electron will oscillate at will be 1 cm and that the radius of the orbit around the origin will be 5 cm given these initial conditions.
I am also not sure how to go about the other method you described because from what I understand, the equations need to be symbolic themselves in order to be solved and since the reference functions are a subset of them I thought those must be also.
Are you sure about that "which is then split into the x and y hat components" using division by 2? Or should it be cos(45°)*vperp, sin(45°)*vperp so that it has the correct Euclidean length?
I am fairly certain that just splitting the magnitude of the velocity into two different components is the correct way. Even if I did try it by dividing by sqrt(2) I still get a straight line.
It is not unexpected that a particle with a great velocity moves in a straight line. You could discuss the derivation of the exact circular solution on the physics or mathematics stackexchange forum.
How can i maximize an iFrame with jQuery when clicked
Can anyone tell me how I can maximize an iFrame when the Div containing the iFrame is clicked? Say for example I have 3 divs, each holding a different iFrame, on a single page. Now if anyone clicks on the 1st div, I want to maximize that div to the whole browser page, showing the content of that div. I will also display a close option at the top which, when clicked, will exit the fullscreen view and return to the original layout. Can I do this with jQuery, or any other JavaScript library?
$("div.iframewrapper").click(function() {
$(this).addClass("maximized");
});
div.maximized { position:fixed; top:0; left:0; width:100%; height:100%; }
div.maximized iframe { width:100%; height:100%; border:0 none; }
You may also need to set the z-index if you got other positioned content on the page.
Working demo: http://jsfiddle.net/6SvsX/2
+1, but it might be nice to have the div shrink back down, so change addClass to toggleClass
@fudgey The shrinking is done by clicking on a close button as the OP said.
@Parry Here: http://jsfiddle.net/6SvsX/ I placed a button which maximizes the DIV.
Thanks, it is working. Now how can I get back, I mean minimize this thing?
How do I get gvim/vim to open a file in a path with non-ascii characters?
At first I had this path to one of my files in Windows:
C:\mine\NOTES
I renamed it such that the last part of the path, i.e. 'NOTES', had the U+2588 FULL BLOCK character before and after it. I inserted the character using the key combination ALT+219 (alt code), which is an extended-ASCII drawing character. I did this so that the folder would stand out from the 20-plus folders in the directory. At this point I'm quite happy that it looks cool and does stand out. However, when I open the files in the folder using a text editor, all hell breaks loose. These are the results when I try to open the file scheme_notes.scm in different editors:
path="C:\mine\¦ NOTES ¦"
(The solid character is rendered as a ¦ (broken bar) character instead of a solid block.)
GVIM prints:
"path" [New directory]
"path" E212: Can't open file for writing.
Python(IDLE) opens the file but does not display the contents which the file has and prints the following error when you try to run it(I know it's not a python file, I was testing)
IOError: [Errno 2] No such file or directory: 'C:\\mine\\\xa6 NOTES \xa6\\python_notes.py'
(The \xa6 is a ¦ character)
Pspad does not allow you to edit the file:
File has set attributes: Hidden ReadOnly System
Save Changes Made to File?
Scite displays a dialog saying:
Cannot save under this name. Save under different name?
If you click yes, the dialog keeps popping up repeatedly. I had to kill the program using powershell after there were 15-plus dialogs on the window (and it hadn't stopped).
Jedit prints:
The following I/O operation could not be completed:
Cannot save: "path" (The system cannot find the path specified)
Netbeans:
Failed to set current directory to "path" The system cannot find the file
specified.
I want Vim to be able to open the file without resorting to renaming the folder. Does vim have a setting for opening paths with unicode characters?
I'm asking this because python was able to change to that directory when I did this:
import os
os.chdir(u"\u2588 NOTES \u2588")
os.listdir('.')
==> ['scheme_notes.scm','python_struff.py']
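One way to see why the ANSI-only editors mangle the name: U+2588 simply has no representation in a single-byte Windows code page, so non-Unicode file APIs have to substitute a fallback character (the question shows Windows substituting 0xA6, the broken bar ¦). A small Python 3 check illustrating this (an aside, not part of the original question):

```python
name = u"\u2588 NOTES \u2588"

try:
    name.encode("cp1252")  # the typical Windows single-byte "ANSI" code page
except UnicodeEncodeError as exc:
    # U+2588 FULL BLOCK is not representable in cp1252, so legacy
    # (non-Unicode) programs must substitute something else
    print("cannot encode:", exc.reason)

# Unicode-aware APIs handle the name fine:
print(name.encode("utf-8"))  # b'\xe2\x96\x88 NOTES \xe2\x96\x88'
```

Editors that mangle the path are going through the legacy ANSI file APIs; ones that work (like Python with a unicode string) use the wide-character APIs.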
PHP Pdo Help... can't get pdo fetchall to work for me
Trying to migrate to php pdo... can someone please tell me why on earth this bit of code is not working?
$stmt = $db->prepare("SELECT SUM(aw_score) AS awscoreaw, SUM(hm_score) AS awscoreaw_def FROM nfl_new WHERE away=:away AND date<:date AND Season=:season");
$stmt->bindValue(':away', $row['away'], PDO::PARAM_STR);
$stmt->bindValue(':date', $row['date'], PDO::PARAM_STR);
$stmt->bindValue(':season', $row['Season'], PDO::PARAM_STR);
$stmt->execute();
$affected_rows = $stmt->rowCount();
echo $affected_rows.' ';
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo $rows['awscoreaw_def'].' '.$row['away'].'<br />';
What are you getting as a result? Nothing?
away is not in your select list, and since you use fetchAll, you need to iterate over the result.
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
foreach ($rows as $row) {
echo $row['awscoreaw_def'].' '.$row['awscoreaw'].'<br />';
}
Thank you... fetchAll was the issue. Replacing with fetch does the trick.
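For what it's worth, the fetchAll-vs-fetch distinction is not PHP-specific: Python's DB-API behaves the same way, which makes the failure mode easy to see (an illustrative sketch with made-up table data, not the asker's actual database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nfl_new (aw_score INTEGER, hm_score INTEGER)")
conn.executemany("INSERT INTO nfl_new VALUES (?, ?)", [(10, 3), (7, 4)])

cur = conn.execute("SELECT SUM(aw_score), SUM(hm_score) FROM nfl_new")
rows = cur.fetchall()  # always a LIST of rows, even for a single-row aggregate
print(rows)            # [(17, 7)]
print(rows[0][0])      # 17 -- you must index into the first row

# fetchone() is the analogue of PDO's fetch(): it returns one row directly
cur = conn.execute("SELECT SUM(aw_score), SUM(hm_score) FROM nfl_new")
row = cur.fetchone()
print(row[0], row[1])  # 17 7
```

Indexing a column name directly on the fetchAll result (as in the question) fails for exactly this reason: the result is a list of rows, not a single row.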
How to draw a Pascal's triangle?
How to draw a Pascal's triangle, similar to what follows?
Could you show some code that you've tried? Starting from scratch ain't much fun...
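The question doesn't say which tool the triangle should be drawn with (the referenced picture is not included here), but computing and printing the rows is straightforward in any language. A minimal Python sketch, purely as a starting point:

```python
def pascal_rows(n):
    """Return the first n rows of Pascal's triangle as lists of ints."""
    rows = []
    row = [1]
    for _ in range(n):
        rows.append(row)
        # each new row is 1, the pairwise sums of the previous row, then 1
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return rows

def draw(n):
    """Print n rows, centered so the output forms a triangle."""
    rows = pascal_rows(n)
    width = len(" ".join(map(str, rows[-1])))
    for r in rows:
        print(" ".join(map(str, r)).center(width))

draw(5)
```

Centering each row against the width of the last row gives the familiar triangular shape; for a prettier result you would pad each number to a fixed width.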
How can I get HBase to play nicely with sbt's dependency management?
I'm trying to get an sbt project going which uses CDH3's Hadoop and HBase. I'm trying to use a project/build/Project.scala file to declare dependencies on HBase and Hadoop. (I'll admit my grasp of sbt, maven, and ivy is a little weak. Please pardon me if I'm saying or doing something dumb.)
Everything went swimmingly with the Hadoop dependency. Adding the HBase dependency resulted in a dependency on Thrift 0.2.0, for which there doesn't appear to be a repo, or so it sounds from this SO post.
So, really, I have two questions:
1. Honestly, I don't want a dependency on Thrift because I don't want to use HBase's Thrift interface. Is there a way to tell sbt to skip it?
2. Is there some better way to set this up? Should I just dump the HBase jar in the lib directory and move on?
Update This is the sbt 0.10 build.sbt file that accomplished what I wanted:
scalaVersion := "2.9.0-1"
resolvers += "ClouderaRepo" at "https://repository.cloudera.com/content/repositories/releases"
libraryDependencies ++= Seq(
"org.apache.hadoop" % "hadoop-core" % "0.20.2-cdh3u0",
"org.apache.hbase" % "hbase" % "0.90.1-cdh3u0"
)
ivyXML :=
<dependencies>
<exclude module="thrift"/>
</dependencies>
Looking at the HBase POM file, Thrift is in the repo at http://people.apache.org/~rawson/repo. You can add that to your project, and it should find Thrift. I thought that SBT would have figured that out, but this is an intersection of SBT, Ivy and Maven, so who can really say what really should happen.
If you really don't need Thrift, you can exclude dependencies using inline Ivy XML, as documented on the SBT wiki.
override def ivyXML =
<dependencies>
<exclude module="thrift"/>
</dependencies>
Re: dumping the jar in the lib directory, that would be a short term gain, long term loss. It's certainly more expedient, and if this is some proof of concept you're throwing away next week, sure just drop in the jar and forget about it. But for any project that has a lifespan greater than a couple of months, it's worth it to spend the time to get dependency management right.
While all of these tools have their challenges, the benefits are:
Dependency analysis can tell you when your direct dependencies have conflicting transitive dependencies. Before these tools, this usually resulted in weird runtime behavior or method not found exceptions.
Upgrades are super-simple. Just change the version number, update, and you're done.
It avoids having to commit binaries to version control. They can be problematic when it comes time to merge branches.
Unless you have an explicit policy of how you version the binaries in your lib directory, it's easy to lose track of what versions you have.
Thanks, @dave! The ivyXml bit did the trick.
I think the reason that sbt didn't pick up Thrift is because the ~rawson repo wasn't in its list of blessed repos.
I have a very simple example of an sbt project w/ Hadoop on github: https://github.com/deanwampler/scala-hadoop.
Look in project/build/WordCountProject.scala, where I define a variable named ClouderaMavenRepo, which defines the Cloudera repository location, and the variable named hadoopCore, which defines the specific information for the Hadoop jar.
If you go to the Cloudera repo in a browser, you should be able to navigate to the corresponding information for Hive.
Merge profile based on 2 property in Apache-Unomi
I am trying to build custom logic in an action for profile merging. Can anybody suggest how to create a rule where I can merge profiles based on email and phone number? As of now I am able to do it with only one property value, email. You can find the sample rule below:
{
"metadata": {
"id": "exampleLogin",
"name": "Example Login",
"description": "Copy event properties to profile properties on login"
},
"condition": {
"parameterValues": {
"subConditions": [
{
"type": "eventTypeCondition",
"parameterValues": {
"eventTypeId": "click"
}
}
],
"operator": "and"
},
"type": "booleanCondition"
},
"actions": [
{
"parameterValues": {
"mergeProfilePropertyValue": "eventProperty::target.properties(email)",
"mergeProfilePropertyName": "mergeIdentifier"
},
"type": "mergeProfilesOnPropertyAction"
},
{
"parameterValues": {
},
"type": "allEventToProfilePropertiesAction"
}
]
}
In order to be able to merge based on multiple identifiers you would have to extend the default built-in action to support that.
This can be done by creating a module but it will require some Java knowledge since this is how Unomi is implemented.
The code for the default merge action is available here:
https://github.com/apache/unomi/blob/master/plugins/baseplugin/src/main/java/org/apache/unomi/plugins/baseplugin/actions/MergeProfilesOnPropertyAction.java
POST and GET in the same form created by BeginForm method
How to create a form that has a search function (pull data from the database) AND a submit function (add data to the database) at the same time using the BeginForm() method? I am reviewing the overloads on MSDN and I don't seem to find one.
Code:
@using (Html.BeginForm()){
<table>
@*Bunch of textboxes and dropdown lists*@
</table>
<div id=" buttonHolder">
<input id="Search" type="button" value="Search" />
<input id="Reset1" type="reset" value="Reset" />
<input id="Submit1" type="submit" value="Add" />
</div>
}
Are you submitting and searching at the same time?
@drew not the same time. when Search is hit, search, and when Add is hit, submit. I just want both methods in the same form
Possible duplicate of http://stackoverflow.com/questions/442704/how-do-you-handle-multiple-submit-buttons-in-asp-net-mvc-framework/7111222 or http://stackoverflow.com/questions/36555265/asp-net-mvc-core-6-multiple-submit-buttons/36557172
You can use two approaches here:
handle onsubmit and fetch/save data with AJAX (you can do it even with Html.BeginForm but it's easier to go just with regular <form ...)
@using (Html.BeginForm("DoIt", "DoItAction", FormMethod.Post, new { onsubmit = "submitWithAjax(event); return false;" }))
create two separate forms with a different action/controller pair
Can I use Ajax.BeginForm?
Popover on hover vue headlessui
I'm trying to implement the popover from the headlessui Vue package with hover. I tried to use mouseenter and mouseleave and the other mouse events, but nothing changed.
Any solution? Is there a better solution? I searched on the internet and found nothing about this. I also searched the headlessui GitHub discussions, but nothing.
<template>
<div class="fixed top-16 w-full max-w-sm px-4">
<Popover v-slot="{ open }" class="relative">
<PopoverButton
:class="open ? '' : 'text-opacity-90'"
class="group inline-flex items-center rounded-md bg-orange-700 px-3 py-2 text-base font-medium text-white hover:text-opacity-100 focus:outline-none focus-visible:ring-2 focus-visible:ring-white focus-visible:ring-opacity-75"
>
<span>Solutions</span>
<ChevronDownIcon
:class="open ? '' : 'text-opacity-70'"
class="ml-2 h-5 w-5 text-orange-300 transition duration-150 ease-in-out group-hover:text-opacity-80"
aria-hidden="true"
/>
</PopoverButton>
<transition
enter-active-class="transition duration-200 ease-out"
enter-from-class="translate-y-1 opacity-0"
enter-to-class="translate-y-0 opacity-100"
leave-active-class="transition duration-150 ease-in"
leave-from-class="translate-y-0 opacity-100"
leave-to-class="translate-y-1 opacity-0"
>
<PopoverPanel
class="absolute left-1/2 z-10 mt-3 w-screen max-w-sm -translate-x-1/2 transform px-4 sm:px-0 lg:max-w-3xl"
>
<div
class="overflow-hidden rounded-lg shadow-lg ring-1 ring-black ring-opacity-5"
>
<div class="relative grid gap-8 bg-white p-7 lg:grid-cols-2">
<a
v-for="item in solutions"
:key="item.name"
:href="item.href"
class="-m-3 flex items-center rounded-lg p-2 transition duration-150 ease-in-out hover:bg-gray-50 focus:outline-none focus-visible:ring focus-visible:ring-orange-500 focus-visible:ring-opacity-50"
>
<div
class="flex h-10 w-10 shrink-0 items-center justify-center text-white sm:h-12 sm:w-12"
>
<div v-html="item.icon"></div>
</div>
<div class="ml-4">
<p class="text-sm font-medium text-gray-900">
{{ item.name }}
</p>
<p class="text-sm text-gray-500">
{{ item.description }}
</p>
</div>
</a>
</div>
<div class="bg-gray-50 p-4">
<a
href="##"
class="flow-root rounded-md px-2 py-2 transition duration-150 ease-in-out hover:bg-gray-100 focus:outline-none focus-visible:ring focus-visible:ring-orange-500 focus-visible:ring-opacity-50"
>
<span class="flex items-center">
<span class="text-sm font-medium text-gray-900">
Documentation
</span>
</span>
<span class="block text-sm text-gray-500">
Start integrating products and tools
</span>
</a>
</div>
</div>
</PopoverPanel>
</transition>
</Popover>
</div>
</template>
Seems like a common request in the HeadlessUI community.
Another solution: I found this solution on Github and it worked fine for me.
For vanilla Vue 3 with JS, the original solution link: Github Issue
For Nuxt 3 with TypeScript, maintaining accessibility, here's the code ↓
<script setup lang="ts">
import { Popover, PopoverButton, PopoverPanel } from '@headlessui/vue'
interface Props {
label: string
hasHref?: boolean
href?: string
}
const props = defineProps<Props>()
const popoverHover = ref(false)
const popoverTimeout = ref()
const hoverPopover = (e: any, open: boolean): void => {
popoverHover.value = true
if (!open) {
e.target.parentNode.click()
}
}
const closePopover = (close: any): void => {
popoverHover.value = false
if (popoverTimeout.value) clearTimeout(popoverTimeout.value)
popoverTimeout.value = setTimeout(() => {
if (!popoverHover.value) {
close()
}
}, 100)
}
</script>
<template>
<Popover v-slot="{ open, close }" class="relative">
<PopoverButton
:class="[
open ? 'text-primary' : 'text-gray-900',
'group inline-flex items-center rounded-md bg-white text-base font-medium hover:text-primary focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2'
]"
@mouseover="(e) => hoverPopover(e, open)"
@mouseleave="closePopover(close)"
>
<span v-if="!hasHref">{{ props.label }}</span>
<span v-else>
<NuxtLink :to="href">
{{ props.label }}
</NuxtLink>
</span>
<IconsChevronDown
:class="[
open ? 'rotate-180 transform text-primary' : '',
' ml-1 h-5 w-5 text-primary transition-transform group-hover:text-primary'
]"
aria-hidden="true"
/>
</PopoverButton>
<transition
enter-active-class="transition ease-out duration-200"
enter-from-class="opacity-0 translate-y-1"
enter-to-class="opacity-100 translate-y-0"
leave-active-class="transition ease-in duration-150"
leave-from-class="opacity-100 translate-y-0"
leave-to-class="opacity-0 translate-y-1"
>
<PopoverPanel
class="absolute left-1/2 z-10 mt-3 ml-0 w-auto min-w-[15rem] -translate-x-1/2 transform px-2 sm:px-0"
@mouseover.prevent="popoverHover = true"
@mouseleave.prevent="closePopover(close)"
>
<div
class="overflow-hidden rounded-lg shadow-lg ring-1 ring-black ring-opacity-5"
>
<div class="relative grid gap-1 bg-white p-3">
<slot> put content here </slot>
</div>
</div>
</PopoverPanel>
</transition>
</Popover>
</template>
I favor this answer because it does not break accessibility, as mentioned in the link. However, when the mouseover event is triggered by the ChevronDownIcon, the referenced element is wrong. It might be better to reference the element directly.
Hey, I have solved it like this. I had to spend some time on it though, but it solved the issue when focusing the parent. I am using nuxt3 with typescript. I'll modify the response, as it's not letting me here.
Hey @WnaJ, the Nuxt solution code covers most of the requirements of a menu that should be opened on hover/click and could have a link in the parent menu element.
It is in the docs: showing/hiding the popover.
open is an internal state used to determine if the component is shown or hidden. To implement your own functionality, you can remove it and use the static prop to always render the component. Then you can manage the visibility with your own state ref and a v-if/v-show. The mouse handler has to be in the upper scope, so it is not triggered when leaving the component, e.g. moving the mouse from the button to the panel.
Below is a modified example from the API documentation:
<template>
<Popover
@mouseenter="open = true"
@mouseleave="open = false"
@click="open = !open"
>
<PopoverButton @click="open = !open">
Solutions
<ChevronDownIcon :class="{ 'rotate-180 transform': open }" />
</PopoverButton>
<div v-if="open">
<PopoverPanel static>
<a href="/insights">Insights</a>
<a href="/automations">Automations</a>
<a href="/reports">Reports</a>
</PopoverPanel>
</div>
</Popover>
</template>
<script setup>
import { ref } from 'vue';
import { Popover, PopoverButton, PopoverPanel } from '@headlessui/vue';
import { ChevronDownIcon } from '@heroicons/vue/20/solid';
const open = ref(false);
</script>
Your link is wrong
@GʀᴜᴍᴘʏCᴀᴛ This might happen because it's an active project. In the future I will only reference to the docs in general. Anyway I updated the link.
sending SMS from android & eclipse
I'm trying to send sms from one emulator to another emulator,
I have 3 strings; I'm adding these 3 strings into one string and then trying to send an SMS, but it's just sending `null`.
private String fm = mSelectedItemExam+" "+mSelectedItemBoard+" "+mSelectedItemYear;
public void Submit(View view) {
// sendSMS function
sendSMS("5556", fm);
}
private void sendSMS(String phoneNumber, String message)
{
SmsManager sms = SmsManager.getDefault();
sms.sendTextMessage(phoneNumber, null, message, null, null);
}
Check the Screenshot to see what I'm receiving on another emulator
Where is the value of mSelectedItemExam, mSelectedItemBoard, mSelectedItemYear being initialised?
Those are at the top, assigned as strings. If I send only mSelectedItemExam instead of fm, then it sends successfully. I mean, that works fine.
this is working fine
public void Submit(View view) {
// sendSMS("5556", location [index]);
//OR you can also send sms using below code.
sendSMS("5556", mSelectedItemExam);
}
private void sendSMS(String phoneNumber, String message)
{
SmsManager sms = SmsManager.getDefault();
sms.sendTextMessage(phoneNumber, null, message, null, null);
}
Try this:
Intent sendIntent = new Intent(Intent.ACTION_VIEW);
sendIntent.putExtra("sms_body", "default content");
sendIntent.setType("vnd.android-dir/mms-sms");
startActivity(sendIntent);
Try this,
public void Submit(View view) {
String fm = mSelectedItemExam+" "+mSelectedItemBoard+" "+mSelectedItemYear;
// sendSMS function
sendSMS("5556", fm);
}
private void sendSMS(String phoneNumber, String message)
{
SmsManager sms = SmsManager.getDefault();
sms.sendTextMessage(phoneNumber, null, message, null, null);
}
Glad I could help. I've also done an edit to your question. Please accept if its good.
already accepted, you can also vote up this question, so others can get help from it
Log.i("Send SMS", "");
String phoneNo = "1212";
String message = "msms";
SmsManager smsManager = SmsManager.getDefault();
smsManager.sendTextMessage(phoneNo, null, message, null, null);
This is working fine, but my problem is here sendSMS("5556", fm);
it doesn't send the full SMS, http://prntscr.com/79ltpc please check the screenshot
Oracle 12c CLOB data type is not working as expected
I have This Oracle 12c Procedure
CREATE OR REPLACE PROCEDURE LOGINCHECK(SQLQRY IN CLOB)
AS
C INTEGER;
N INTEGER;
RC SYS_REFCURSOR;
stmt clob:= To_Clob('begin ' || sqlqry || '; end;');
BEGIN
C := SYS.DBMS_SQL.OPEN_CURSOR;
SYS.DBMS_SQL.PARSE(C,stmt ,DBMS_SQL.native);
N := SYS.DBMS_SQL.EXECUTE(C);
SYS.DBMS_SQL.GET_NEXT_RESULT(C,RC);
SYS.DBMS_SQL.RETURN_RESULT(RC);
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
when OTHERS then
RAISE;
END LOGINCHECK;
I Call This Procedure in Anonymous Block Like This (Download XML Data from here: Link)
declare stmt clob := 'INWARDPKG.MACHINEINWARD_VALIDATING(XMLDOC => XMLTYPE.CREATEXML(paste xml from link))'; --The parameter value is a xml you can download it from above link
begin
LOGINCHECK(SQLQRY => STMT);
end;
But I am getting Error PLS-00172: string literal too long.
If I reduce the xml size to 40-50 elements, i.e. remove some elements, this works fine.
Not sure I quite understand how you're assigning/passing the document you linked to. But that is rather more than 32k characters, so you can't assign it to a CLOB as a string literal. Your concatenation inside the procedure will fail too. Where is the document coming from? You may need to read it from somewhere in chunks and append it to the CLOB variable.
I call the logincheck(sqlqry => parameter) procedure from my .net code, and the sqlqry parameter is passed from .net along with xml data which is around 5 MB. The "begin procedure_name end;" string is created in .net and passed to the sqlqry parameter, thus I need not add stmt clob:= To_Clob('begin ' || sqlqry || '; end;'); in logincheck(sqlqry => parameter). Also, I cannot create chunks of the xml.
I am sending sqlqry parameter as OracleDbType.Clob from .net code but getting same pls-00172 error
Cross post: http://dba.stackexchange.com/q/80629/1822
sir, if this is treated as a cross post, please let me know which one I should delete, the stackexchange or the stackoverflow post.
Dear Alex, you said "Your concatenation inside the procedure will fail too", but if I send only 40-50 elements in the xml, then it executes successfully. No error occurs.
In your first line declare stmt clob := 'INWARDPKG.MACHINEINWARD_VALIDATING... you are defining your CLOB. Since you are using a string literal to define your CLOB, you are facing the limits of string literals (see Oracle 12c Documenation).
To solve your problem you have to build your CLOB step by step, using the DBMS_LOB package and appending strings not longer than 4000 bytes until your CLOB is complete.
The basic idea:
DECLARE
C CLOB := TO_CLOB('First 4000 bytes');
V VARCHAR2(4000);
BEGIN
V := 'Next 4000 bytes';
DBMS_LOB.WRITEAPPEND(C, LENGTH(V), V);
-- more WRITEAPPEND calls until C is complete
DBMS_OUTPUT.PUT_LINE('CLOB-Length: ' || DBMS_LOB.GETLENGTH(C));
END;
Why did my Ubuntu 22.04 LTS reboot during an xorg upgrade?
My laptop recently had to install some xorg upgrades, and this required a reboot, during which some packages were installed as part of the boot process. Why was that necessary? I have never experienced this with a linux operating system, only with windows. What was done during the boot process which could not be done while the system was booted?
Edit: These are the packages that were updated:
Start-Date: 2022-12-13 10:41:13
Commandline: packagekit role='update-packages'
Upgrade: frr:amd64 (8.1-1ubuntu1.2, 8.1-1ubuntu1.3), tzdata:amd64 (2022f-0ubuntu<IP_ADDRESS>, 2022g-0ubuntu<IP_ADDRESS>), xserver-xorg-core:amd64 (2:21.1.3-2ubuntu2.3, 2:21.1.3-2ubuntu2.4), xserver-xorg-legacy:amd64 (2:21.1.3-2ubuntu2.3, 2:21.1.3-2ubuntu2.4), xserver-common:amd64 (2:21.1.3-2ubuntu2.3, 2:21.1.3-2ubuntu2.4), frr-pythontools:amd64 (8.1-1ubuntu1.2, 8.1-1ubuntu1.3), xserver-xephyr:amd64 (2:21.1.3-2ubuntu2.3, 2:21.1.3-2ubuntu2.4)
End-Date: 2022-12-13 10:41:15
Snap updates sometimes install during reboot or shutdown
You could look in the apt log at /var/log/apt/history.log and figure out what packages were updated at that time. If you provide that information via an edit to your question, it should be possible to figure out why a reboot was necessary. Otherwise we're just guessing since we don't know what "some stuff" is.
I edited the question
How to add a css reference within a server control?
I've built a custom server control that uses custom CSS. The problem that I have is that I have to set a reference to the css file on each page I use the control.
Can I set this reference inside the control ? So that I could just add the control and not worry about the reference.
You need to follow the below steps to add the css/javascript/image in the web control itself.
Modify the AssemblyInfo.cs file, to add the web resource
[assembly: System.Web.UI.WebResource("CustomControls.Styles.GridStyles.css", "text/css", PerformSubstitution = true)]
Add the required files (css/javascript/images) to the custom server control solution. Note that we can add folders in the solution; in the resource name they are separated using '.' (dot)
More importantly, we should change the BuildAction Property from Content to Embedded Resource of the newly added css/javascript/image files.
Further we should load the stored resources from the DLL. Best event for this would be OnPreRender
Below is the sample code rendering css
protected override void OnPreRender(EventArgs e)
{
bool linkIncluded = false;
foreach (Control c in Page.Header.Controls)
{
if (c.ID == "GridStyle")
{
linkIncluded = true;
}
}
if (!linkIncluded)
{
HtmlGenericControl csslink = new HtmlGenericControl("link");
csslink.ID = "GridStyle";
csslink.Attributes.Add("href", Page.ClientScript.GetWebResourceUrl(this.GetType(), "CustomControls.Styles.GridStyles.css"));
csslink.Attributes.Add("type", "text/css");
csslink.Attributes.Add("rel", "stylesheet");
Page.Header.Controls.Add(csslink);
}
}
Similarly for Adding javascript
protected override void OnPreRender(EventArgs e)
{
string resourceName = "CustomControls.GridViewScript.js";
ClientScriptManager cs = this.Page.ClientScript;
cs.RegisterClientScriptResource(this.GetType(), resourceName);
}
Similarly using the Added Image in CSS file. Use the below code
background: url('<%=WebResource("CustomControls.Styles.Cross.png")%>') no-repeat 95% 50%;
Thanks.
Here what I use to add css reference to Page programmatically :
HtmlLink link = new HtmlLink();
link.Href = relativePath;
link.Attributes["type"] = "text/css";
link.Attributes["rel"] = "stylesheet";
Page.Header.Controls.Add(link);
Maybe you should add some code to check if the css file added to the header control.
Where should I embed this in the custom server control?
In which phase of the creation cycle ?
CreateChildControls is suitable, I think.
If you want to build a webcontrol that is reusable and packaged in one assembly with its css, js and other resources, then you can use WebResources
Working with Web Resources in ASP.NET 2.0
You could do it with a ScriptManager - and this will also help you embed the stylesheet in the custom control library's DLL.
Or you could just reference the CSS from your master page. Unless you're packaging a custom control library to sell etc., ScriptManager is a LOT of extra effort vs the Master Page solution
I would think you could add Canavar's code to a base class that would be included with all the classes that need it.
public class myclass : BaseClass
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        var customCSS = customcss();
        Page.Header.Controls.Add(customCSS);
    }
}
and your baseclass:
public class BaseClass : Page
{
public HtmlLink customcss(){
HtmlLink link = new HtmlLink();
link.Href = relativePath;
link.Attributes["type"] = "text/css";
link.Attributes["rel"] = "stylesheet";
return link;
}
}
or you could go down the route of
myObject.Attributes.Add("style","width:10px; height:100px;");
or
myObject.Attributes.Add("style",customStyle());
where this is in your baseclass
public String customStyle()
{
return "width:10px; height:20px;";
}
(customStyle being the function shown above.)
But I would assume that you use CSS for the rest of your site, so maybe a style could just be added to the stylesheet that you use on all pages. With that approach you could use the code below:
myObject.Attributes.Add("class","customControl");
This will then reference the correct CSS style from your main, always included stylesheet.
Unless I am missing something here....
PHPExcel: after the download, remaining code execution stops?
I have a problem: when I download the excel sheet, the remaining code does not execute. Please look at the sample code below.
$data[]=array('EANCODE'=>6161106690015,'ItemDesc'=>'Electrical hammer mill 15hp'
,'UnitDesc'=>'PIECES','qty'=>10);
$object = new PHPExcel();
$object->setActiveSheetIndex(0);
$object->getActiveSheet()->setCellValue('A1', "EANCODE");
$object->getActiveSheet()->setCellValue('B1', "Item Name");
$object->getActiveSheet()->setCellValue('C1', 'Units');
$object->getActiveSheet()->setCellValue('D1', 'Quantity');
$excel_row = 2;
foreach($data as $item)
{
$object->getActiveSheet()->setCellValue('A'.$excel_row, $item['EANCODE']);
$object->getActiveSheet()->setCellValue('B'.$excel_row, $item['ItemDesc']);
$object->getActiveSheet()->setCellValue('C'.$excel_row, $item['UnitDesc']);
$object->getActiveSheet()->setCellValue('D'.$excel_row, $item['qty']);
$excel_row++;
}
$store_name='Insertion Failed Records - '. date('d-m-Y').".xlsx";
header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
header('Content-Disposition: attachment;filename="'.$store_name.'"');
header('Cache-Control: max-age=0');
$objWriter =new PHPExcel_Writer_Excel2007($object, 'Excel2007');
$objWriter->save('php://output');
echo "something else";
Output: the Excel sheet downloads, but the echo statement output is not shown.
To output the XLS sheet to the page you are on, just make sure that page has no other echos, prints, or outputs.
As far as I know, when we create and download or save a file to any directory, all echoed content before the $objWriter->save('php://output'); statement is written into your file, and everything after that statement is skipped.
Right, but I want the remaining code to execute. Is there a solution for that or not?
I think there is no option, because it considers this line the endpoint of your code and stops execution.
There is no solution that suits your needs (outputting to the browser after sending a file to it).
Basically, your headers tell the browser that it should expect a certain file, with a certain filename etc.
From PHP 7.3 you need to change from the PHPExcel library to PhpOffice (PhpSpreadsheet)
Azure VM Core vs vCPU
When comparing two different VM series in Azure, I see that one has Cores and the other one vCPUs. Keeping aside the number of Cores/CPUs, Memory and Processor Type (Intel Xeon E/Platinum etc), what is the advantage of one over the other? I understand that CPU can have multiple cores, but in Azure what is the difference between 4 vCPUs and 4 vCores?
G Series with Core
D Series with vCPU
I'm not super familiar with Azure terminology but I suspect it's the same as in AWS:
"Core" sounds like a real physical CPU core, while "vCPU" typically refers to 1 thread on a hyperthreading-enabled core.
See
Optimizing CPU Options: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html
in all cases, vCPUs is the number of threads per core, which defaults to 2; only 1 or 2 are valid values
This comes down to whether you get a real core or a virtual core.
Older VM SKUs, and some specialised SKUs like the H series, have a 1-to-1 mapping between physical cores in the host machine and cores in the VM, so you are getting a real core dedicated to your VM, with no hyperthreading.
Most recent SKUs, v3 and newer, use hyperthreading, so a core allocated to your VM does not map 1-to-1 to a physical core in the host machine.
Angular stripe add card form allow only one enter button submit
I have an Angular add-Stripe-card form and it is working fine. However, I need to add functionality so the form can be submitted with an Enter key press if all fields are correct. So I decided to add a keyup.enter handler that just calls stripeCard.createToken; this function is also called when clicking the save button with the mouse. But if I press the Enter key three times in quick succession, it adds 3 cards instead of one. How can I stop this from happening? I tried to reset the form right after token creation too, but it just doesn't work. Here is my form code.
P.S. I am just going to include a few lines of the form, as I think that's all that's required.
<div class="add-card-module" (click)="$event.stopPropagation();">
<form [formGroup]="paymentForm" appDebounceSubmit (debounceSubmit)="stripeCard.createToken();" (keyup.enter)="stripeCard.createToken()">
<div class="form-group curnt-plan-box">
<div class="px-3">
<div class="text-left">
<h4 class="mt-4 mb-3">Add Debit or credit Card</h4>
<div class="card-types">
<ul class="d-inline-flex" type="none">
<li class="mr-2"><a><img src="assets/images/visa-img.svg" alt=""></a></li>
<li class="mr-2"><a><img src="assets/images/master-card.svg" alt=""></a></li>
<li class="mr-2"><a><img src="assets/images/amex.svg" alt=""></a></li>
<li class="mr-2"><a><img src="assets/images/discover-network.svg" alt=""></a></li>
</ul>
</div>
</div>
<div class="form-group search-box my-3 position-relative">
<img src="assets/images/user-gray.svg" alt="" class="user-name" />
<input class="form-control" alphabetAndSpaceOnly type="text" formControlName="nameOnCard" placeholder="Name on card" />
<label class="d-none">Name on card</label>
</div>
any help will be appreciated
You need to add code that will prevent the function you're trying to call from firing multiple times. Simplified example:
var formSubmissionInProgress = false;
function submitForm() {
if (formSubmissionInProgress) { return; }
formSubmissionInProgress = true;
// Handle form submission here
// Call stripeCard.createToken(), etc.
// If an error is encountered or you otherwise
// need to allow form submission again set
// formSubmissionInProgress to false again
}
Then call submitForm() in the code you shared in your question instead of stripeCard.createToken() directly.
Select query returning only one result per product?
I'm working with a MySQL database, where I have two tables: a product table with unique products, and a price table that contains several prices for each product.
Today I use the following query, which returns as many rows per product as there are prices for it in the price table:
SELECT product.id as id, price.unit_price as price
FROM product
INNER JOIN price
ON price.product_id=product.id
WHERE category_id = 234
How can I change this query to only return a single row for each product with the lowest price (price.unit_price) present in the price-table and is it possible with only a single query?
Do a GROUP BY. Use MIN().
This is probably the simplest way:
SELECT product.id as id, min(price.unit_price) as price
FROM product
INNER JOIN price
ON price.product_id=product.id
WHERE category_id = 234
group by product.id
Thanks. Initially I missed the GROUP BY, which only returns a single result, but now it seems to work very nicely. Thanks for the answer, all of you.
Worth considering Gordon's answer as well; window functions can look a bit more complex but they're also more flexible. For example, if you added a third column later you'd need to add it to the group by or also aggregate it.
A simple method uses window functions:
SELECT p.id as id, pr.unit_price as price
FROM product p INNER JOIN
(SELECT pr.*,
ROW_NUMBER() OVER (PARTITION BY pr.product_id ORDER BY pr.unit_price ASC) as seqnum
FROM price pr
) pr
ON pr.product_id = p.id AND seqnum = 1
WHERE p.category_id = 234;
This allows you to bring in additional columns from either table. Otherwise, just use aggregation:
SELECT p.id as id, MIN(pr.unit_price) as price
FROM product p INNER JOIN
price pr
ON pr.product_id = p.id
WHERE p.category_id = 234;
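As a quick sanity check of the aggregation approach, here is a self-contained sketch using Python's built-in sqlite3 module. The schema and sample data are made up for illustration; only the table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY, category_id INTEGER);
CREATE TABLE price (product_id INTEGER, unit_price REAL);
INSERT INTO product VALUES (1, 234), (2, 234), (3, 999);
INSERT INTO price VALUES (1, 10.0), (1, 7.5), (2, 3.0), (2, 4.0), (3, 1.0);
""")

# One row per product in category 234, each with its lowest price.
rows = conn.execute("""
    SELECT product.id AS id, MIN(price.unit_price) AS price
    FROM product
    INNER JOIN price ON price.product_id = product.id
    WHERE category_id = 234
    GROUP BY product.id
    ORDER BY product.id
""").fetchall()

print(rows)  # product 3 is excluded by the category filter
```

The window-function variant behaves the same way on MySQL 8+ (and on recent SQLite), and is the one to reach for when you need extra columns from the price row itself.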
Raw socket not picking up ARP requests
I am attempting to write a custom packet sniffer. I am following this tutorial:
http://www.binarytides.com/packet-sniffer-code-in-c-using-linux-sockets-bsd-part-2/
Doing so, I am unable to pick up any ARP request packets. I do successfully pick up all other packets, including ICMP, IP, etc.
Here is an overview of the code. Again, I am reading all other packets (every byte of every other packet), but I am not reading any ARP.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/if_ether.h>

FILE *logfile;

int main()
{
int saddr_size , data_size;
struct sockaddr saddr;
unsigned char *buffer = (unsigned char *) malloc(65536); //Its Big!
logfile = fopen("log.txt", "w");
if(logfile==NULL)
{
printf("Unable to create log.txt file.");
}
printf("Starting...\n");
int sock_raw = socket( AF_PACKET , SOCK_RAW , htons(ETH_P_ALL)) ;
if(sock_raw < 0)
{
perror("Socket Error");
return 1;
}
setsockopt(sock_raw , SOL_SOCKET , SO_BINDTODEVICE , "eth0" , strlen("eth0")+ 1 );
while(1)
{
saddr_size = sizeof saddr;
//Receive a packet
data_size = recvfrom(sock_raw , buffer , 65536 , 0 , &saddr , (socklen_t*)&saddr_size);
if(data_size <0 )
{
printf("Recvfrom error , failed to get packets\n");
return 1;
}
//Now process the packet
ProcessPacket(buffer , data_size);
}
close(sock_raw);
printf("Finished");
return 0;
}
As ARP doesn't use IP packets, you can't use recvfrom, you have to use recv.
See e.g. this example.
Thanks! When I followed that tutorial, I suddenly stopped receiving ICMP packets...
So I tried a hybrid approach and it seems to be working: keeping PF_PACKET instead of AF_PACKET, but still using recvfrom instead of recv.
When will ASP.NET kill a new thread?
I've tried some googling on this subject but I would like to have some more info.
I'm trying to start a new thread inside an ASP.NET app that will take care of some work that takes long time. If I put this in my web.config:
<httpRuntime executionTimeout="5" />
A regular request will time out after 5 seconds. Remember, this is for testing. When I start a new thread from the code:
var testThread = new Thread(new ThreadStart(CustomClass.DoStuffThatTakesLongTime));
testThread.Start();
This thread will run for longer than 5 seconds; that's what I want. BUT: for how long will it run? Let's say this thread takes 5h (just as an example). When will the thread be killed? Will it run until the app pool is recycled? Or is there anything else that kills this thread?
Try it out, see what happens. (Let the new thread write the time to a textfile or so)
EDIT: check if there is a difference between using Thread and Task.
That's what I did, but I would like some more understanding. The thread seems to run, but I would like to know more about what could go wrong and what to look out for.
ASP.NET has no knowledge of the thread that you have created - it will run until the AppPool is recycled or it completes.
Since ASP.Net has no knowledge of this thread however, it could be aborted quite abruptly at any point if the server thinks that it should recycle the AppPool, this would not be a good thing! Phil Haack wrote a blog post on how to subscribe to the 'AppDomainIsGoingDown' event.
In regards to what can cause this, I'd recommend reading this blog post by Tess Ferrandez, but in a nutshell they are:
It has been scheduled to do so
Machine.Config, Web.Config or Global.asax are modified
The bin directory or its contents is modified
The number of re-compilations (aspx, ascx or asax) exceeds the limit specified by the setting in machine.config or web.config (by default this is set to 15)
The physical path of the virtual directory is modified
The CAS policy is modified
The web service is restarted
(2.0 only) Application Sub-Directories are deleted
So that means that this example thread will run for 5h. Will the GC take care of things when the thread completes, or do I need to release resources in any way?
@MarkusKnappenJohansson The thread will run until it completes or the AppPool is shut down. All the resources acquired during the lifetime of the Thread are managed, all necessary GC will still be performed.
So, about the fact that the thread could be aborted by the server: would this happen if the thread is actually working on something? Or just if it's idle? Is there any way to "protect" against this?
@MarkusKnappenJohansson Yes, irrespective of whether the thread is working on something or idle, it could be aborted. In order to inform ASP.Net that your thread is working you'll need to implement IRegisteredObject and call the HostingEnvironment.RegisterObject and UnregisterObject methods during the relevant lifetime of your work. I highly recommend reading Phil Haack's blog post - it describes exactly what you're trying to achieve here.
@MarkusKnappenJohansson I've updated my answer, hopefully this provides the information you were looking for. If not just let me know and I'll see if I can help.
handle exceptions in java
Let's suppose we have a situation where one of the records is missing in the books.txt file. How can we handle such a case, so that the program logs the situation to the console and skips it in further processing?
Another situation: if we input a wrong category, the program should inform the user about this and allow them to enter the correct value.
Any suggestion would be warmly welcome.
books.txt
[Author]
[Title]
[Category]
[Author]
[Title]
[Category]
App.java
import java.util.List;
public class App {
public static void main(String[] args) {
String file = "books.txt";
Parser parser = new Parser();
List<Book> books = parser.parse(file);
Stats stats = new Stats();
stats.countBooksPerCategory(books);
stats.topAuthors(books);
stats.topAuthorsInCategory(books, "category");
}
}
Book.java
import java.util.Objects;
public class Book {
private final String author;
private final String title;
private final String category;
public Book(String author, String title, String category) {
if( author == null || title == null || category == null)
throw new IllegalArgumentException();
this.author = author;
this.title = title;
this.category = category;
}
public String getAuthor() {
return author;
}
public String getTitle() {
return title;
}
public String getCategory() {
return category;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o ==null || getClass() != o.getClass()) return false;
Book book = (Book) o;
return author.equals(book.author) && title.equals(book.title) && category.equals(book.category);
}
@Override
public int hashCode() {
return Objects.hash(author, title, category);
}
@Override
public String toString() {
return "Books{" +
"author='" + author + '\'' +
", title='" + title + '\'' +
", category='" + category + '\'' +
'}';
}
}
Parser.java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedList;
import java.util.List;
public class Parser {
public List<Book> parse(String pathToFile) {
File in = new File(pathToFile);
List<Book> books = new LinkedList<>();
String line = "";
BufferedReader inReader = null ;
try {
inReader = new BufferedReader(new FileReader(in));
int i = 1;
String author = "";
String title = "";
String category = "";
while ((line = inReader.readLine()) != null) {
if (i == 1) {
author = line.trim();
} else if (i == 2) {
title = line.trim();
} else if (i == 3) {
category = line.trim();
} else {
i = 0;
books.add(new Book(author, title, category));
}
i++;
}
} catch (IOException e){
e.printStackTrace();
}
return books;
}
}
Stats.java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class Stats {
public void countBooksPerCategory(List<Book> books) {
Map<String, Integer> counterBookPerCategory = new HashMap<>();
for (Book book : books) {
if(!counterBookPerCategory.containsKey(book.getCategory())){
counterBookPerCategory.put(book.getCategory(),1);
}else{
int counter = counterBookPerCategory.get(book.getCategory())+1;
counterBookPerCategory.put(book.getCategory(), counter);
}
}
for(String k : counterBookPerCategory.keySet()){
System.out.println("Number book in category" + k + " is " + counterBookPerCategory.get(k));
}
}
public void topAuthors(List<Book> books) {
Map<String, Integer> authorStats = new HashMap<>();
for (Book book : books) {
int counter = authorStats.getOrDefault(book.getAuthor(), 0) + 1;
authorStats.put(book.getAuthor(), counter);
}
for (String author : authorStats.keySet()) {
System.out.println(author + " -> " + authorStats.get(author));
}
}
public void topAuthorsInCategory(List<Book> books, String category) {
Map<String, Integer> authorStats = new HashMap<>();
for (Book book : books) {
if (book.getCategory().equals(category)) {
int counter = authorStats.getOrDefault(book.getAuthor(), 0) + 1;
authorStats.put(book.getAuthor(), counter);
}
}
for (String author : authorStats.keySet()) {
System.out.println(author + " -> " + authorStats.get(author));
}
}
}
Generally you would fix this with if statements before you create your book. Of course, you would have to know what a good / bad category is before you can check anything. You could create an enum for that.
Create a custom Exception to alert the console if a book is missing while looping over the file.
For that to work, your input file must have structure. You assume the first line is the author, the second line is the title, and the third line is the category. There's no way to tell that the first line is actually the author; it could be the category, entered by mistake. Maybe you need to include tags to identify each record and each field.
@MaartenBodewes I believe an enum is useful, but not in this particular case. What if we load another file with the same data structure but different categories?
Well, it depends of course how the categories are handled. If they are more of a dynamic system then enums don't make sense.
What does -> ? mean at the end of a linux command?
For example
Path to MySQL shared libraries directory:
/usr/local/mysql/lib -> ?
as seen here: http://www.macports.info/MySQL
That tutorial refers to Webmin. Demo installation is here, username and password are demo. See the screen at Servers » MySQL Database Server » Module Config.
It means that '/usr/local/mysql/lib' is a link that should point to where you have locally installed MySQL and its library folder.
Why would it appear like that in a Webmin MySQL module configuration guide? Wouldn't it suffice to enter /usr/local/mysql/lib?
It is common practice to make a link such as /usr/local/mysql/lib, so when you get a new version of MySQL, you only need to update one link.
So is this not a linux directive, but one specific for Webmin?
Yep. That is correct.
Apache Tomcat 7 on windows 2008: how to make two hosts point to same website
I’m new to Apache Tomcat. I have been asked to point a new domain name to an existing domain name. E.g. we have https://xyz.abc.com, and we want the new domain name http://123.abc.com to point to xyz.abc.com. The Apache Tomcat 7 server is installed on a Windows 2008 R2 server. We have created the DNS entry for 123.abc.com. Can someone tell me how to do this? Do I need to create a virtual host, and if so, how? How do I restart the Tomcat server on Windows 2008 R2?
I read that if I modify server.xml then I need to restart Tomcat. How do I restart Tomcat?
I have the following entry in the conf/server.xml file:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t &quot;%r&quot; %s %b" />
</Host>
Will this work? I want 123.abc.com to go to xyz.abc.com, so 123.abc.com is an alias for xyz.abc.com. Both sites point to the same appBase.
<Host name="www.knowledgefolders.com"
appBase="D:/webpage_demos/akc"
unpackWARs="true"
autoDeploy="true"
xmlValidation="false"
xmlNamespaceAware="false">
<Alias>knowledgefolders.com</Alias>
<Alias>www.knowledgefolders.net</Alias>
<Alias>knowledgefolders.net</Alias>
<Alias>www.knowledgefolders.org</Alias>
<Alias>knowledgefolders.org</Alias>
<Alias>www.satyakomatineni.com</Alias>
<Alias>www.kavithakomatineni.com</Alias>
<Context path="" docBase="D:/webpage_demos/akc"
debug="0" reloadable="false"/>
<Context path="/akc" docBase="D:/webpage_demos/akc"
debug="0" reloadable="false"/>
</Host>
Notice how all of the following host names point to the same web app, akc (which was the previous name for Knowledge Folders).
knowledgefolders.com
www.knowledgefolders.com
knowledgefolders.net
www.knowledgefolders.net
knowledgefolders.org
www.knowledgefolders.org
www.satyakomatineni.com
www.kavithakomatineni.com
You want to add a new valve for the rewrite engine. It's almost the same as for Apache's httpd server's mod_rewrite. Alternatively you could install Apache/Nginx HTTP and redirect on it.
I'm very new with no experience with Tomcat. Can you explain. It's a production server so I do not wish the install another web server. I don't understand what the valve is.
You should create a test box to practice/test on then, not make untested changes on a production box. Running Tomcat behind httpd/nginx is quite common. What are you familiar with?
I know IIS. So you are saying creating an alias like the above won't work? I thought if I add a line similar to <Alias>www.knowledgefolders.net</Alias> inside my Host tag it should work.
JavaFX window resizing when running the application on two different sized screens
I have created a JavaFX window on my laptop using Scene Builder. However, when I try to run the same on a workstation with a larger screen, the JavaFX application does not resize itself and is displayed towards the top left of the screen. Is there any way I can have the application window resize as the laptop/workstation screen size varies?
Welcome to Stack Overflow! To give you a great answer, it might help us if you have a glance at [ask] if you haven't already. It might be also useful if you could provide a [mcve].
When you launch your JavaFX application you can maximize the Stage window using .setMaximized(true). This will make it "fill the screen" no matter what the screensize is. Be sure to use a layout that resizes well with different screen sizes if you do this.
To get the Stage, you can call .getWindow() on the Scene and cast it to the Stage it is. Here is an example using a Node node I have access to in my application:
((Stage) node.getScene().getWindow()).setMaximized(true);
You could set up a ratio. Determine what you want the height ratio and width ratio to be. Then get the width and height of the screen and apply the ratio.
import javafx.application.Application;
import javafx.geometry.Rectangle2D;
import javafx.scene.Scene;
import javafx.scene.layout.StackPane;
import javafx.stage.Screen;
import javafx.stage.Stage;
/**
*
* @author blj0011
*/
public class JavaFXApplication287 extends Application
{
@Override
public void start(Stage primaryStage)
{
double heightRatio = .25;
double widthRatio = .30;
Rectangle2D primaryScreenBounds = Screen.getPrimary().getVisualBounds();
double sceneHeight = primaryScreenBounds.getHeight() * heightRatio;
double sceneWidth = primaryScreenBounds.getWidth() * widthRatio;
StackPane root = new StackPane();
Scene scene = new Scene(root, sceneWidth, sceneHeight);
primaryStage.setTitle("Hello World!");
primaryStage.setScene(scene);
primaryStage.show();
}
/**
* @param args the command line arguments
*/
public static void main(String[] args)
{
launch(args);
}
}
Non-exhaustive patterns in function buildTree' Haskell
I use function that returns tuples. But when I try to run the function it gives me an exception: Non-exhaustive patterns in function.
buildTree' :: String -> Tree -> (String,Tree)
buildTree' (x:xs) currenttree
|null (x:xs) = ("empty", currenttree)
|isDigit x && ( take 1 xs == ['+'] || take 1 xs == ['-'])= buildTree' xs (Node [x] Empty Empty)
|isDigit x = buildTree' newstring1 (snd (buildrecursion (getminiexpr(x:xs)) Empty))
|elem x "+-" = buildTree' newstring (buildTree2 currenttree newtree [x])
where newtree = (snd (buildrecursion (getminiexpr xs) Empty))
newstring = drop (length(getminiexpr xs)) xs
newstring1 = drop (length(getminiexpr (x:xs))) (x:xs)
getminiexpr :: String -> String
getminiexpr input = takeWhile ( \y -> y /= '+' && y /= '-') input
It really looks like you are doing too much in your functions. You should really try to implement simpler functions that each do a very simple thing. You can compile with -Wall and Haskell will show what the problem is. At first sight, it looks like you did not cover buildTree' [].
The null (x:xs) guard doesn't do what I expect you think it does. x:xs is a pattern that can never be matched by an empty list. You need a separate pattern for buildTree' [] currentTree to cover this case. In general, pattern matching is more idiomatic in Haskell than using guards - but guards can do more so you do need them sometimes.
(x:xs) is not an arbitrary list/string, it is a non empty list/string whose head is x and whose tail is xs. Hence
buildTree' (x:xs) currenttree
...
only handles non empty lists. Compiling with -Wall would warn that the empty list case is missing. So, you need:
buildTree' [] currenttree = ...
buildTree' (x:xs) currenttree
...
Adapting your code, we can remove the null guard:
buildTree' [] currenttree = ("empty", currenttree)
buildTree' (x:xs) currenttree
| isDigit x && ( take 1 xs == ['+'] || take 1 xs == ['-'])= buildTree' xs (Node [x] Empty Empty)
...
Similarly, the take checks need more than the head x of the list. You can write, instead:
buildTree' [] currenttree = ("empty", currenttree)
buildTree' (x1:x2:xs) currenttree
| isDigit x1 && ( x2 == '+' || x2 == '-') = buildTree' (x2:xs) (Node [x1] Empty Empty)
...
or even
buildTree' [] currenttree = ("empty", currenttree)
buildTree' (x1:x2:xs) currenttree
| isDigit x1 && (x2 `elem` "+-") = buildTree' (x2:xs) (Node [x1] Empty Empty)
...
I have no idea about whether your logic is correct.
Symfony AbstractToken private properties
I'm extending Symfony's AbstractToken to fit my own needs. However, AbstractToken declares its properties $user, $roles and $authenticated as private. Furthermore, there is no setRoles() function; the roles can only be initialized through the constructor.
This seems to not make sense: why should you provide the roles at the constructor and only then?
My expected control flow would be:
create an unauthenticated token with session info
find the user based on this info and set this in the token
perform some checks and based on that: setAuthenticated(true);
and add the roles the user has to the token
Step 4 isn't possible, so I think I misunderstand something about how tokens should be used, but reading the docs I can't figure out what it is.
Because your flow does not match Symfony's flow, which is creating the token once with all the required information. Take a look at UserAuthenticatedProvider for an example. What kind of needs you have for your token?
It's needed to integrate Symfony with an authsystem already in place.
The thing is, in your workflow, if you authenticate a user, then he should not have an "anonymous token" anymore.
I think you're kind of mixing up responsibilities: an unauthenticated token should be, as its name implies, for an unauthenticated user, and when they authenticate, it should be an AuthenticatedToken or something alike, with its own roles, as you should know the roles of the user that was just authenticated.
But if you really want to have the same token through and through, you may, instead of extending AbstractToken, implement an interface based on TokenInterface...
So the unauthenticated token could/should be a different class too? It just feels so natural to me to give someone a token at the start, and then flip $authenticated to indicate authentication. You're right about implementing it myself. It's just that I'd want to understand why Symfony expects it differently.
You can always change the token (having a AuthenticatedToken instead of the UnauthenticatedToken) in the security context though...
But yeah, implementing yourself a token that changes should be okay though, even though I wouldn't really recommend it
Are passive scanners subset of active scanners?
Assuming:
Passive scanners are those which do not actively send request packets; all they can do is monitor packets (which can be probe replies and beacon frames) passing through the interface and analyze them, followed by some action(s).
Active scanners are those which send broadcasts/probe requests and can process beacon frames and probe replies.
It seems that passive scanners are just active scanners with the sending functionality disabled; is that the correct way to see things?
Did any answer help you? If so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively, you can post and accept your own answer.
Yes and no.
While passive scanning is (mostly) a subset of active scanning, a passive scanner doesn't necessarily have active scanning functions. Note that passive scanning usually requires a network 'tap' to gather significant information while active scanning can work without - so the techniques are somewhat different.
Also, active scanning can be less or more intrusive - probing standard transport-layer ports for common services may already be considered intrusive by many, but massive probing of all possible ports definitely is.
Get ffmpeg version number
I'm trying to get the version number from ffmpeg in a reliable way, hopefully without parsing a string.
I can parse the output of ffmpeg -version; however, I see no guarantee that string won't change in the near future.
$ ffmpeg -version
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
[...]
I'm interested in a slightly more programmatic way to get the version number, like maybe telling ffmpeg to output JSON, or just the version number, without me having to extract it from a body of text.
If you have the companion ffprobe tool, you can run
ffprobe -v 0 -of default=nw=1:nk=1 -show_program_version | head -1
which will print just the version string.
e.g.
N-100221-g18befac5da
You can also emit JSON.
ffprobe -v 0 -of json
gives
{
"program_version": {
"version": "N-100221-g18befac5da",
...
}
}
The ffmpeg binary itself doesn't provide these outputs.
Note that the version string format and composition can vary - it's not standardized and can have custom strings added by the builder. But in general, a git build will have a commit hash (with or without the g prefix) and a release build will have a sequence of the form x.y.z. See my Windows builds for a couple of examples of other forms - I include the commit date as well.
The N count in my example above is the commit count of the master branch from when the binary was built.
Thanks for this, but I simply can't assume ffprobe will exist alongside ffmpeg and that their versions will be the same. At best this is speculative.
You're out of luck then, as ffmpeg doesn't emit a self-contained version string. It would be unusual for ffprobe to be from a different build.
It is what it is. Thanks for your answer, your approach is probably the best precision one you can get right now.
Based on this Stack Overflow answer, this should work:
ffmpeg -version | sed -n "s/ffmpeg version \([-0-9.]*\).*/\1/p;"
Tested on FFmpeg as installed via Homebrew on macOS and I get the following output:
4.3.1
And after reviewing the helpful feedback left in the comments below, I think this might be a better future-proof solution:
ffmpeg -version | awk -F 'ffmpeg version' '{print $2}' | awk 'NR==1{print $1}'
The reason is that the sed command above is mainly looking for a major.minor.patch version number, and that would exclude non-traditional version numbers like the nightly build format of N-100221-g18befac5da and such.
This can be tested by running this command that outputs 4.3.1:
echo 'ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers' | awk -F 'ffmpeg version' '{print $2}' | awk 'NR==1{print $1}'
And this command that deals with those nightly build “numbers”; it will output N-100221-g18befac5da:
echo 'ffmpeg version N-100221-g18befac5da Copyright (c) 2000-2020 the FFmpeg developers' | awk -F 'ffmpeg version' '{print $2}' | awk 'NR==1{print $1}'
So until FFmpeg allows for a clean output of the version number directly from the binary itself, some command line scraping seems to be the best way to go… For now!
But I explicitly asked not to parse strings that may change in the future.
Also, this won't match nightly builds like N-100221-g18befac5da. Not trying to be dismissive; I appreciate your answer, but hope is not really a strategy.
awk would be better than sed here and would get the "nightly" builds too.
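If you'd rather do the scraping in code than in a shell pipeline, the same extraction can be written as a small helper. This is just a sketch in Python (not part of ffmpeg) that pulls the token after "ffmpeg version" and handles both release numbers and nightly build strings:

```python
import re
import subprocess

def ffmpeg_version(banner=None):
    """Return the version token from `ffmpeg -version` output.

    If `banner` is None, run the local ffmpeg binary; otherwise parse
    the given banner text (useful for testing without ffmpeg installed).
    """
    if banner is None:
        banner = subprocess.run(
            ["ffmpeg", "-version"], capture_output=True, text=True, check=True
        ).stdout
    # The first line looks like: "ffmpeg version <token> Copyright ..."
    m = re.search(r"ffmpeg version (\S+)", banner)
    if m is None:
        raise ValueError("could not find a version token in ffmpeg output")
    return m.group(1)

# Works for release builds...
print(ffmpeg_version("ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers"))
# ...and for nightly builds
print(ffmpeg_version("ffmpeg version N-100221-g18befac5da Copyright (c) 2000-2020 the FFmpeg developers"))
```

As with the awk approach, this still parses the banner text, so it shares the same caveat: the "ffmpeg version" prefix is conventional, not guaranteed.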
React Native componentWillMount / componentDidMount not triggered after pop navigating and navigating again
I am new to React Native... just want to ask a newbie question: my app won't trigger componentWillMount and componentDidMount when navigating back to another page.
I use react-native-router-flux for the navigator:
componentWillMount() {
this.props.searchRequest();
}
E.g.: I have a home screen and a categories screen, and from home I want to go to categories. On categories I have that code, but sometimes when I go back to the previous page (the home screen) and then back again to categories, componentWillMount is not triggered.
I think my categories scene is not being unmounted.
Have you tried interaction manager?
Please post relevant code to help us debug your issue.
Just like this issue link; still got nothing... @RRikesh
We can detect prop changes with
componentWillReceiveProps()
but we need to set state to accommodate the new props value.
Solr: Localities & solr.ICUCollationField usage?
I'm learning Solr and have become confused trying to figure out ICUCollation: what it does, what it is for, and how to use it. From here. I haven't found any good explanation of this online. The docs appear to say that I need to use this ICUCollation and imply that it does magical things for me, but do not seem to explain exactly why or exactly what, or how it integrates with anything else.
Say I have a text field in French and I want stopwords removed; accents, punctuation and case ignored; and stemming... how does ICUCollation come into this? Do I set solr.ICUCollationField with locale='fr' and it will do everything else automatically? Or do I set solr.ICUCollationField and then a tokenizer and filters on top of this? Or do I not use solr.ICUCollationField at all, because that's for something completely different? And if so, then what?
Collation is the organisation of written information into an order - ICUCollationField (the API documentation also provides a good description) is meant to enable you to provide locale-aware sorting, as the sort order is defined by cultural norms and specific language properties. This is useful to allow different sorting based on those rules, such as the difference between Norwegian and Swedish, where a Swede would order Å before Æ/Ä and Ø/Ö, while a Norwegian would order them Æ/Ä, Ø/Ö and then Å.
Since you usually don't want to sort by a tokenized field (exception: KeywordTokenizer) or a multivalued field, these fields are usually not processed any more than allowing for the sorting / collation to be performed.
There is a case to be made for collation filters for searching as well, as search in practice is just comparison. This means that if you're aiming to search for two words that would be identical when compared in the locale provided, it would be a hit. The tokens indexed will not make any sense when inspected, but as long as the values are reduced to the same token both when indexing and searching, it would work. There's an example of this on the wiki under UnicodeCollation.
Collation does not affect stopwords (StopFilterFactory), accents (ICUFoldingFilterFactory), punctuation, case (depending on locale - if the locale for sorting is case aware, then it does not) (LowercaseFilterFactory or ICUNormalizer2FilterFactory) or stemming (SnowballPorterFilterFactory). Have a look at the suggested filters for that. Most filters and tokenizers in Solr do very specific tasks, and try to avoid doing "everything and the kitchen sink" in one single filter.
OK, so all this does is decide on the sorting order? Considering that I will be removing accents from characters and using only the Latin alphabet, does that mean that I do not need to use ICUCollationField at all and would be better off just with a standard string field?
As long as you're happy with the way the standard latin1 char values sort, you can just use StrField or a TextField with KeywordTokenizer (if you need to lowercase it on the Solr side or something similar).
Sorry to ask something else, but what does it matter what order the strings sort?
I'm not sure what you're asking .. if you want your hits sorted by a string, the sort order matters. If you don't, then it doesn't matter. :-)
You normally have two or more fields for one text input if you want to do different things like:
search: text analysis
sort: language sensitive / case insensitive sorting
facet: string
For search use something like:
<fieldType name="textFR" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.ICUTokenizerFactory"/>
<filter class="solr.ICUFoldingFilterFactory"/>
<filter class="solr.ElisionFilterFactory"/>
<filter class="solr.KeywordRepeatFilterFactory"/>
<filter class="solr.FrenchLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
For sorting use:
<fieldType name="textSortFR" class="solr.ICUCollationField"
locale="fr"
strength="primary" />
or simply:
<fieldType name="textSort" class="solr.ICUCollationField"
locale=""
strength="primary" />
(If you have to support many languages. Should work fine enough in most cases.)
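To feed the same input into both field types, a copyField is the usual wiring. This is a hypothetical sketch — the field names `title` and `title_sort` are illustrative, not from the question:

```xml
<!-- Illustrative field names; textFR/textSortFR are the types defined above -->
<field name="title" type="textFR" indexed="true" stored="true"/>
<field name="title_sort" type="textSortFR" indexed="true" stored="false"/>
<copyField source="title" dest="title_sort"/>
```

You would then search against `title` and add `sort=title_sort asc` to the query.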
Do make use of the Analysis UI in the SOLR Admin: open the analysis view for your index, select the field type (e.g. your sort field), add a representative input value in the left text area and a test value in the right field (in case of sorting, this right side value is not as interesting as the sort field is not used for matching).
The output will show you whether:
accents are removed
elisions are removed
lower casing is applied
etc.
For example, if you see that elisions (l'atelier) are not removed (atelier) but you would like to discard them for sorting, you would have to add the elision filter (see the example for the search field type above).
https://cwiki.apache.org/confluence/display/solr/Language+Analysis
| common-pile/stackexchange_filtered |
SQL filtering by multiple columns with indexes
I'd like to expand a question I posted a while ago:
I'm querying a table for rows in which pairs of columns are in a specific set. For example, consider the following table:
id | f1 | f2
-------------
1 | 'a' | 20
2 | 'b' | 20
3 | 'a' | 30
4 | 'b' | 20
5 | 'c' | 20
And I wish to extract rows in which the pair (f1, f2) are in a specified set of pairs, e.g. (('a',30), ('b', 20),...). In the original question, Mark answered correctly that I can use the following syntax:
SELECT * FROM my_table WHERE (f1,f2) IN (('a',30), ('b',20))
This works fine, but I see some unexpected behavior regarding indexes:
I've defined a multi-column index for f1, f2, named IndexF1F2. Using the EXPLAIN phrase, I see that MySql uses the index for a single comparison, e.g.:
SELECT * FROM my_table WHERE (f1,f2) = ('a',30)
but not when using the 'IN' clause, as in the example above. Giving hints, e.g. USE INDEX(IndexF1F2) or even FORCE INDEX(IndexF1F2), does not seem to make any difference.
Any ideas?
This is a known bug in MySQL.
Use this syntax:
SELECT *
FROM composite
WHERE (f1, f2) = ('a', 30)
OR (f1, f2) = ('b', 20)
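If the row-constructor comparison still skips the index on your MySQL version, a fully scalar rewrite of the same filter (shown here as an equivalent sketch, not taken from the answer) gives the optimizer plain AND/OR predicates it has always been able to use with a multi-column index:

```sql
SELECT *
FROM my_table
WHERE (f1 = 'a' AND f2 = 30)
   OR (f1 = 'b' AND f2 = 20);
```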
How to write an advanced concat grunt script that finds matches in separate folders?
I need to write an advanced concat script using grunt. here is my boilerplate:
___js
|____dist
| |____vents
| | |____commonEvents.js
| | |____compare.js
|____libs
|____src
| |____events
| | |____carousel.common.js
| | |____compare.js
| | |____styles.common.js
| |____handlers
| | |____carousel.common.js
| | |____compare.js
| | |____style.common.js
I want the concat task to look into the "src/events" and "src/handlers" directories, find all the files ending with ".common.js", concat them together and put the result in the "dist/vents" directory ("commonEvents.js"). For the other files, not ending with ".common.js", I want the script to find the matching pair in the other directory, concat the two together and put the result into "dist/vents/filename.js" (example: events/compare.js and handlers/compare.js are a pair not ending with common.js).
Take a look at this thread: http://stackoverflow.com/questions/15199092/configure-grunt-copy-task-to-exclude-files-folders - it looks like exactly what you need.
Thanks dude, but it really has nothing to do with my problem.
I guess that we already know about https://github.com/gruntjs/grunt-contrib-concat module. You just need two different tasks. What about this:
grunt.initConfig({
concat: {
common: {
src: ['src/events/**/*.common.js', 'src/handlers/**/*.common.js'],
dest: 'dist/vents/commonEvents.js'
},
nocommon: {
src: ['src/events/**/*.js', 'src/handlers/**/*.js', '!src/events/**/*.common.js', '!src/handlers/**/*.common.js'],
dest: 'dist/vents/filename.js'
}
}
});
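Static src/dest globs can express the common target, but each non-common pair needs its own concat target. One way to get that — a hypothetical sketch, nothing here is a grunt API except the resulting config shape — is to compute the targets from the file names and feed them to `grunt.initConfig({ concat: targets })`:

```javascript
// Hypothetical helper: given the base names found in src/events and
// src/handlers, build one concat target per file NOT ending in ".common.js".
function pairTargets(eventFiles, handlerFiles) {
  const handlers = new Set(handlerFiles);
  const targets = {};
  for (const file of eventFiles) {
    if (file.endsWith('.common.js')) continue; // handled by the common target
    if (handlers.has(file)) {
      targets[file] = {
        src: ['src/events/' + file, 'src/handlers/' + file],
        dest: 'dist/vents/' + file
      };
    }
  }
  return targets;
}

console.log(pairTargets(['compare.js', 'carousel.common.js'], ['compare.js']));
```

In a real Gruntfile you would gather the file names with `grunt.file.expand` and merge the computed targets with the static `common` target shown above.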
I don't think there is anything similar ready to use.
If you plan to create your own solution, I think this package can be a good starting point:
https://github.com/yeoman/grunt-usemin
It manipulates the config of other plugins as well.
How to serialize data in C to send in a socket
I have to serialize some data and then send it using an AF_UNIX socket.
I have read (here) that I can't create a struct and just send it using a (void *) cast. So, I would like to know the best way to handle this problem.
Thank you!
Check this question of mine: https://stackoverflow.com/questions/30945121/dealing-with-data-serialization-without-violating-the-strict-aliasing-rule
As long as the sending and receiving systems have the same endianness, encoding, and type widths, and your struct does not contain pointers, you can simply send its contents as-is.
@thebusybee And the most important - struct padding
@EugeneSh. Correct. ;-)
You basically have two options:
Roll your own exactly as in the question you referenced.
Use an off-the-shelf serialization/deserialization system. If you go this route, Google's Protocol Buffers are pretty much industry standard. For constrained projects (embedded, etc.) nanopb is a good choice for the implementation.
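For option 1, here is a minimal sketch of rolling your own (the struct and its fields are illustrative, and `htonl`/`htons` assume a POSIX system via `arpa/inet.h`): copying each field into a byte buffer at fixed offsets, in network byte order, sidesteps both struct padding and endianness differences.

```c
#include <stdint.h>
#include <string.h>    /* memcpy, size_t */
#include <arpa/inet.h> /* htonl, htons, ntohl, ntohs (POSIX) */

struct msg {
    uint32_t id;
    uint16_t flags;
};

/* Pack into a caller-provided buffer of at least 6 bytes;
 * returns the number of bytes written. */
size_t msg_pack(const struct msg *m, unsigned char *buf) {
    uint32_t id = htonl(m->id);
    uint16_t flags = htons(m->flags);
    memcpy(buf, &id, sizeof id);
    memcpy(buf + sizeof id, &flags, sizeof flags);
    return sizeof id + sizeof flags;
}

/* Inverse of msg_pack; returns the number of bytes consumed. */
size_t msg_unpack(struct msg *m, const unsigned char *buf) {
    uint32_t id;
    uint16_t flags;
    memcpy(&id, buf, sizeof id);
    memcpy(&flags, buf + sizeof id, sizeof flags);
    m->id = ntohl(id);
    m->flags = ntohs(flags);
    return sizeof id + sizeof flags;
}
```

The packed buffer can then be handed to write()/send() on the AF_UNIX socket, and the receiver calls msg_unpack on the bytes it reads.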
Computer shuts down many times between each Windows session
Upgraded from Windows 8.1 to Windows 10 with a clean install, and the following behavior is now happening every time I turn it on :
computer is shut down in Windows
next time it is turned on, it won't show the BIOS screen or light the Num Lock LED, just a non-blinking cursor
computer turns off, does that 2 to 3 times in a row
finally the computer starts properly and loads Windows
Note that right after the fresh installation it did not happen; it started once I configured it (drivers, software)
Looking at the event viewer I can see this ATIeRecord: ATI EEU PnP start/stop failed, however I couldn't find much info about it on the web.
Any ideas ?
What information there is suggests that it is an issue with the ATI GPU drivers. See if the manufacturer of your PC has updated drivers on their website.
Well, already did that and it is the latest version :(
Did you check the ATI website as well?
Yes of course, by the way I've also checked Windows Update and obviously it tells they are the latest ones ... I'm starting to think that some people will invariably get affected by Win10 bugs since it's a really new product.
Look that way. It seems a lot of people are waiting for updated ATI drivers.
Yes, definitely. There are some people suggesting turning EnableULPS off in registry about this issue but I don't really feel it's a good fix... Anyway, thanks for your help !
Apparently that just turns off the event being logged - it doesn't fix the problem.
The problem has grown since this afternoon and I've learned a bit more about it : motherboard (guess). Basically I am now running with only 1 RAM stick as no matter what I do with multiple sticks it always fails XD. Been updating the BIOS and slowly testing ... crossing my fingers. It is extremely annoying in the sense that I have work to do and cannot afford waiting for RMA or whatever, if I could precisely know what's faulty I'd replace it and RMA the old component (if it's still under warranty).
I'm having this same problem since Windows 8.1. I don't know if it is my motherboard or my graphics card; both are out of warranty and expensive enough to make me think twice about replacing either component.
Progressed a bit, it seems to be the MB, removed/plugged things 1 by 1 until finding that only 1 RAM stick works. HOWEVER, :D since I left GPU and RAM unplugged for a day and plugged things back in, it started working again, though I've only put in 2 sticks. Rough guess it might be due to some static. Been in touch with Gigabyte and they didn't even ask a question but told me to send it back so I guess they know it's faulty by the symptoms described, so I'll do it soon. BTW this guide has been handy : http://www.tomshardware.co.uk/forum/261145-31-perform-steps-posting-post-boot-video-problems
Using Regex, how do I extract the numbers from this serial code?
I have over 1,000 serial codes I need to enter into a database but they have to be completely numerical for conversion identification purposes. They all look similar to this format but contain different characters/numbers:
d47a3c06-r188-4203-n838-fefd32082fd9
I've been trying to figure out how to use regex to remove all letters and dashes but I'm now at a loss.
I need to know how to turn this:
d47a3c06-a188-4203-b838-fefd32082fc9
Into this:
473061884203838320829
Using regex. Then possibly trim it down to a 5 digit number using the first 5 numbers.
Thank you so much!
http://stackoverflow.com/questions/1533659/how-do-i-remove-the-non-numeric-character-from-a-string-in-java - just \D is enough.
Any feedback? Tried \D? Please post what you tried so far.
Are you aware that that is a UUID, and as such, is in fact a 32-digit hexadecimal number? (I notice where the first one has 'r' and 'n', the other has 'a' and 'b'...)
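For reference, here is the `\D` suggestion from the comment above, exercised in Python on the serial from the question (a small sketch; the question's Drupal/PHP context would use `preg_replace` the same way):

```python
import re

# Minimal sketch of the "\D is enough" suggestion, using the serial
# from the question.
serial = "d47a3c06-r188-4203-n838-fefd32082fd9"
digits = re.sub(r"\D", "", serial)  # strip everything that is not a digit
print(digits)       # 473061884203838320829
print(digits[:5])   # 47306
```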
Since you are using Drupal, if what you need is an answer in PHP, then a PHP translation of the answer made by @jay-jargot is like this:
$input = "d47a3c06-r188-4203-n838-fefd32082fd9";
$str = preg_replace("/[^0-9]/", "", $input);
$str = substr($str, 0, 5);
echo $str, "\n"; ## output: 47306
Depending on your programming language, you can easily filter digits and join them afterwards.
Here's an example in Python with the help of the re module and list comprehensions:
import re
serials = ['d47a3c06-r188-4203-n838-fefd32082fd9', 'e48a3c08-r199-4203-n838-fefd32082fd0']
corrected_serials = []
for serial in serials:
numbers = re.findall(r'\d+', serial)
corrected_serials.append(''.join(numbers))
corrected_abbreviated = [item[0:5] for item in corrected_serials]
print corrected_serials
print corrected_abbreviated
# output
# ['473061884203838320829', '483081994203838320820']
# ['47306', '48308']
See a demo on ideone.com
So far..
Thank you all for your help, I am still struggling with this.
I'm using the Regex tool here: http://regexr.com/v1/
and using the expression :
[a-z/-]
I am so far able to get all the non-numerical characters removed, but I am still stumped on how to trim to 5 characters.
Eventually this regex expression will be going into a Drupal site that uses feeds to import data from a CSV file, inside the CSV file there are the serials which will be parsed using Drupal feeds CSV.
I am using the listed regex tool listed above to make sure I get the right expression.
Using a first regex with the s (search and replace) command, all non-digits can be removed: s/[^0-9]//g
The result is fed to a second s command that keeps only the first five digits: s/^\(.\{5\}\).*$/\1/
Use these with bash shell and the sed command.
If the serial numbers are in serials.txt file:
cat serials.txt
d47a3c06-r188-4203-n838-fefd32082fd9
sed -e "s/[^0-9]//g" -e "s/^\(.\{5\}\).*$/\1/" serials.txt
47306
Using printf:
printf d47a3c06-r188-4203-n838-fefd32082fd9 | sed -e "s/[^0-9]//g" -e "s/^\(.\{5\}\).*$/\1/"
47306
The serials are in a CSV file which is then imported into Drupal using the Feeds module. I've come close to what I need with a bit of your help by using [a-z/-], which removed all letters and dashes, but I'm still clueless on how to trim the results.
What is the proper regular expression to validate a Pakistani mobile number?
I want to validate Pakistani mobile numbers in my PHP registration form. I need a regular expression that validates all the Pakistani numbers (Zong, Ufone, Telenor, Jazz, Warid), e.g<PHONE_NUMBER>6 or +923124432876
^((\+92)|(0092))-{0,1}\d{3}-{0,1}\d{7}$|^\d{11}$|^\d{4}-\d{7}$
I used this code but its not working
if(!preg_match("/^((\+92)|(0092))-{0,1}\d{3}-{0,1}\d{7}$|^\d{11}$|^\d{4}-\d{7}$/", $mobile)){
echo "Mobile number is valid";
}
There is no error, but it has no effect on any number
? serves the same purpose as {0,1} and takes up way less space
you made a mistake, try ^((\+92)|(0092))-{0,1}\d{3}-{0,1}\d{7}$|^\d{11}$|^\d{4}-\d{7}$
The "+" need to be escaped
You can use this format:
^((\+923|923|03)(([0-4]{1}[0-9]{1})|(55)|(64))[0-9]{7})$
If a user puts "0378 XXXXXXX", we know that this one number is not valid in Pakistan.
As a user puts "0300-0309/0310-0319/0320-0329/0330-0339/0340-0349/0355/0364", these are only valid phone numbers in Pakistan.
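For anyone wanting to try that pattern quickly, here is the same expression exercised in Python (the behaviour of PHP's preg_match with the same pattern is analogous):

```python
import re

# The prefix-aware pattern from the answer above, tried out as-is.
PK_MOBILE = re.compile(r"^((\+923|923|03)(([0-4]{1}[0-9]{1})|(55)|(64))[0-9]{7})$")

print(bool(PK_MOBILE.match("+923124432876")))  # True
print(bool(PK_MOBILE.match("03124432876")))    # True
print(bool(PK_MOBILE.match("03781234567")))    # False: 0378 is not a valid prefix
```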
It's as simple as
**/^923\d{9}$|^03\d{9}$/**
This code will validate both types of numbers, with 923 or with 03. If you need to validate +923 as well, then use the following (note that the + must be escaped):
**/^\+923\d{9}$|^923\d{9}$|^03\d{9}$/**
Thanks
Regular expression to validate the Pakistan Mobile number
^((\+92|0092|92|0){1}(3)[0-9]{9})$|^(((\+92|0092|92){1}((\s?)|(-?))(3)|(03))[0-9]{2}((\s?)|(-?))[0-9]{7})$
Example:
03457172848 Valid
0345-7172848 Valid
0345 7172848 Valid
345-7172848 NOT Valid
-345-7172848 NOT Valid
034571-72848 NOT Valid
0-3457172848 NOT Valid
+923457172848 Valid
00923457172848 Valid
923457172848 Valid
920457172848 NOT Valid
92-345-7172848 Valid
92<PHONE_NUMBER> Valid
92-0345-7172848 NOT Valid
(+92)3457172848 NOT Valid
PutAsync 400 Bad Request C# - Google API works
I want to send a base64 PDF via HTTP PUT.
Unfortunately, I get the 400 Bad Request error.
I tried it with the Google REST API and there it worked fine.
string finalURL = upURL + pdf.Id + "/signedpdf";
string json = "{ 'base64Pdf' : '" + pdf.Base64Pdf + "' }";
using (var client = new HttpClient())
{
var response = await client.PutAsync(finalURL, new StringContent(json, Encoding.UTF8, "application/json"));
if (response.IsSuccessStatusCode)
return true;
}
It works when I create the Json by Javascriptserializer.
TransferObject to = new TransferObject(pdf.Base64Pdf);
var json2 = new JavaScriptSerializer().Serialize(to);
You have to read the response content to see what's the reason behind the 400
I only get this much info from the server. At first I thought that because it is a modified PDF it wouldn't work, but the same JSON works with the Google API, so I do not know what causes the problem.
What is the size of your json2? Chances are you may exceed allowed limit for data sent as single request (so, you have to use multi-part approach then, sending the data as sequential chunks).
I believe the JSON standard only supports double quote characters. Your code is using single quote characters in your concatenated JSON. Base64 encoded string will not contain double quote characters so you should be good there. Try this:
string json = "{ \"base64Pdf\" : \"" + pdf.Base64Pdf + "\" }";
When I submit data from JSP to Spring controller, @ModelAttribute is empty
When I submit data from JSP to a Spring controller, @ModelAttribute is empty. Please help me to solve this.
@Controller
@RequestMapping("/mainCategory")
public class MainCategoryController {
@RequestMapping(value = "/test", method = RequestMethod.GET)
public ModelAndView load() {
ModelAndView modelAndView = new ModelAndView("mainCatTile","mainCategory", new MainCategory());
return modelAndView;
}
@ResponseBody
@RequestMapping(value = "/save", method = RequestMethod.POST, headers = "Accept=application/json")
public Object save(@ModelAttribute MainCategory mainCategory, ModelMap modelMap,BindingResult result) {
System.out.println("");
return null;
}
}
Please post your code as text, not as a picture.
https://stackoverflow.com/questions/21824012/spring-modelattribute-vs-requestbody
Let $a\mid bc $ then prove or disprove $a\mid (a,b)c$
Prove or disprove:
Let $a\mid bc$ then $a\mid (a,b)c$
Here is my approach, but I am not sure if I am doing this correctly or efficiently.
Let $a\mid bc$. It follows that either
$a\mid b$ Proof: $b=ar, a\mid bc => (ar)c = a \rightarrow a(rc)=a \rightarrow a|a(rc)$
$a\mid c$.
Since $rc$ is an integer. $a\mid bc$. Similar for $(2). a|c $
Let $a\mid (a,b)c$.
Using: the definition of $\gcd(a,b)=1=ax+by$ if $\exists x,y \in Z$
then we can rewrite it as $a\mid dc$. This is as far as I go. I can't manipulate it so that I show that $a\mid (a,b)c$. Does this mean that I would have to disprove $a\mid bc$ then $a\mid (a,b)c$?
Any help would be appreciated.
Is (a,b) the gcd? If yes, then try out some examples. How can it maybe go wrong?
$a|bc$ does not imply that $a|b$ or $a|c$. For example, $6|2 \cdot 3$, but $6$ divides neither $2$ nor $3$. The implication holds if $a$ is prime though. More generally, $a|bc$ implies $a|c$ if $\textrm{gcd}\left(a,b\right) = 1$.
@Amjad (a,b) is the gcd. I used $gcd(a,b) = gcd(15,20) = 5$. I inserted it into $a|(a,b)c$ and it turned into $5c = 15q$. The only way this works out is if $c = 3q$, but if it doesn't this doesn't work out, I believe.
@SamStreeter My wording was off. I didn't mean to say that a|b or b|c was implied. I just knew that a|b can be a property of $a|bc$. So if $a$ is relatively prime is $a|(a,b)c$ able to be proved by just using $a|bc => a|c$ if $gcd(a,b)=1$ then rewriting as $a|(a,b)c = a|1c. $?
@HawaiianRolls Sure, the result you're after follows from the result I have stated in the special case where $a$ is relatively prime to $b$, but this doesn't help with the general solution. However, the answers below, as you have seen, give a nice and simple proof by Bézout's identity.
I'm really confused about your solution. $a\mid bc$ doesn't imply that either $a\mid b$ or $a\mid c$ unless $a$ is a prime number. Anyway, the statement you try to prove/disprove is true. You can write $(a,b)=ka+lb$ when $k,l\in\mathbb{Z}$. Then $(a,b)c=kac+lbc$. $a$ divides $a$ and so $a\mid kac$. Also $a\mid bc$ which implies $a\mid lbc$. And hence $a$ divides the sum $kac+lbc=(a,b)c$.
I did not know that you can have a divide the sum and the conclude that a|(a,b)c because of it. Thanks
If $a|x$ and $a|y$ then also $a|(x+y)$, that is what I used in the last step. It is very easy to prove.
Yeah just did it now. Thanks for your help Mark.
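For completeness, the lemma used in that last step has a one-line proof: if $a\mid x$ and $a\mid y$, write $x=am$ and $y=an$ for some integers $m,n$; then
$$x+y=a(m+n)\implies a\mid(x+y).$$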
The main problem in your proof is that knowing that $a|bc$ does not imply that $a|b$ or $a|c$.
Hint: Use the fact that the $\gcd(a,b)$ can be written as follows:
$$
\gcd(a,b)=ax+by
$$
for some integers $x$ and $y$. By substituting this into the expression $\gcd(a,b)c$, you get
$$
\gcd(a,b)c=acx+bcy.
$$
Both of the terms $acx$ and $bcy$ are divisible by $a$ (but for different reasons). Can you work from here?
Thank you for your input. I think Mark explained the different reasons. I got to where you were on my notebook, I just couldn't figure out why $acx + bcy$ were divisible by $a$. I did not know you use the fact that $a$ divides the sums as proof that $a|(a,b)c$.
Note that if you haven't proved it yet, you might need to prove the claim that if $a\mid b$ and $a\mid c$, then $a\mid(b+c)$.
Try $\dfrac aA=\dfrac bB=(a,b)\implies(A,B)=1\ \ \ \ (1)$
$a|bc\implies A|Bc\implies A|c$ by $(1)$
Now $(a,b)c=dc$ is divisible by $dA=a$
first of all, you need to pay attention when you said if $a|bc$ then $a|b$ or $a|c$ this is not true unless $a$ is prime. Consider this counter example: $6|2 \times 3$.
For the proposition you claimed, you may need to be familiar with this result:
$a|bc \to \frac{a}{(a,b)}|c$
Using this fact, consider the following:
$\frac{a}{(a,b)}|c \to c = \frac{a}{(a,b)} k \to c(a,b) = ak \to a|(a,b)c$
Now, can you prove this generalized version of your result:
$a|bc \to a|(a,b)(a,c)$
$a\:\!\:\!|\:\!\:\!(a,b)c\color{#c00}{\overset{\rm D}{=}}(ac,bc)\!\!\!\!\color{#0c0}{\overset{\rm U\!\!}{\iff}}\!\! a\:\!|\:\!ac,bc\!\!\iff\!\! a\:\!|\:\!bc\,$ by $\rm \color{#0c0}U\! =\! $ Universal Property,
$\rm\color{#c00}D\! =\! $ Distributive Law.
Remark $ $ Other proofs using Bezout are essentially Bezout-based proofs of the gcd distributive law so they are less general - they don't work in domains like $\:\!\Bbb Z[x]\:\!$ and $\:\!\Bbb Q[y,z]$ where Bezout fails, e.g. $\,(x,2) = 1 = (y,z)\,$ but these gcds cannot be written as linear combinations, else, e.g. $x\!f\!+\!2g\!=\!1\overset{x\ =\ 0}\Longrightarrow 2g(0)=1\Rightarrow 2\mid 1\,$ in $\Bbb Z\Rightarrow\!\Leftarrow\,;\,$ ditto for $(y,z)=1$ by eval at $\,y\!=\!0\!=\!z$.
Understanding Max Pooling
I understand max pooling in CNNs can help decrease the computational load due to downsampling.
Another thing mentioned is that max pooling can help provide a sort of "spatial invariance" for learned features.
So essentially if say the upper left corner of an image has some feature let's say a circle, then max pooling might help in case that same circle is say shifted to the right slightly? Whereas had we not used max pooling this feature might seem like something new to the CNN despite it only being shifted by a small amount.
I have a hard time understanding why. I understand that (assuming no padding is used) the next layer holds a smaller image. In Deep Learning with Python, this passage on why a CNN without max pooling isn't good states:
"It isn’t conducive to learning a spatial hierarchy of features. The 3 × 3 windows in the third layer will only contain information coming from 7 × 7 windows in the initial input. The high-level patterns learned by the convnet will still be very small with regard to the initial input, which may not be enough to learn to classify digits (try recognizing a digit by only looking at it through windows that are 7 × 7 pixels!). We need the features from the last convolution layer to contain information about the totality of the input."
What the model looks like:
Layer (type) Output Shape Param #
================================================================
conv2d_4 (Conv2D) (None, 26, 26, 32) 320
________________________________________________________________
conv2d_5 (Conv2D) (None, 24, 24, 64) 18496
________________________________________________________________
conv2d_6 (Conv2D) (None, 22, 22, 64) 36928
================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0
I get that the third layer contains info about a smaller chunk of the main input image which just happens due to convolution. So can max pooling almost be thought of as normalizing? And this makes sure the "totality" is preserved? If so that makes sense to me, but I just wanted to make sure I understood the concept right. So I suppose it would be wrong to feed a 22*22 output from the third layer considering that the third layer really only knows about 7*7 windows of the input image. Is this what the author is saying here? That seems to make more sense now that I typed it out.
The pooling layers are a very important part of CNN architectures. The main idea is to "accumulate" features from strides or maps generated by convolving a filter over an image.
The purpose is to gradually reduce the spatial size of representations to reduce the number of parameters and computations in the network. They also help in providing a sort of "spatial invariance" for learned features because of feature aggregation.
There are multiple types of pooling layers:
Max-Pool (most common)
Average-Pool
Min-Pool
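To see the shift-tolerance claim concretely, here is a minimal pure-Python sketch (not from the book or the question): a feature and its one-pixel-shifted copy pool to identical outputs, because both peaks fall inside the same 2x2 window.

```python
def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling on a list-of-lists (even H and W)."""
    h, w = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

a = [[0.0] * 4 for _ in range(4)]
a[0][0] = 1.0   # a "feature" in the upper-left corner
b = [[0.0] * 4 for _ in range(4)]
b[0][1] = 1.0   # the same feature shifted one pixel right

print(max_pool_2x2(a) == max_pool_2x2(b))  # True: pooling absorbed the shift
```

A shift larger than the pooling window would still change the output, so the invariance is only local.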
EF Transaction max time out not enough
I am using TransactionScope with Entity Framework. I configured the transaction timeout, but I can't find where the error comes from. When I do a batch of inserts, at loop index 83 I get an "Underlying provider cannot open" error. I think the transaction has timed out.
TransactionOptions transactionOptions = new TransactionOptions();
transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
transactionOptions.Timeout = TimeSpan.MaxValue;
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
app.config
<system.transactions>
<defaultSettings timeout="03:00:50" />
</system.transactions>
If you set the TransactionScope's timeout it won't change the underlying resources' timeouts (SQL).
It seems you're using Entity framework, in that case, if you want to change the EF timeout, do this (EF6):
this.context.Database.CommandTimeout = 180;
This timeout cannot obviously be greater than the one set on TransactionOptions.
Keep in mind that your transactions should take as little time as possible (long-running processes can cause locking on the DB).
How to count each vehicle only once in a traffic vehicle counter?
I have been building a traffic vehicle counter using Python and OpenCV. My current algorithm counts the number of vehicles per frame, which results in the same vehicle being counted more than once. Instead, I want the unique vehicle count in a video: count each car only once. What technique do I have to use to achieve this?
import cv2
print(cv2.__version__)
cascade_src = 'cars.xml'
video_src = 'dataset/video2.avi'
#video_src = 'dataset/video2.avi'
cap = cv2.VideoCapture(video_src)
car_cascade = cv2.CascadeClassifier(cascade_src)
while True:
    ret, img = cap.read()
    if (type(img) == type(None)):
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 1)
    for (x,y,w,h) in cars:
        cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),2)
    cv2.imshow('video', img)
    print "Found "+str(len(cars))+" car(s)"
    b=str(len(cars))
    a= float(b)
    if a>=5:
        print ("more traffic")
    else:
        print ("no traffic")
    if cv2.waitKey(33) == 27:
        break
cv2.destroyAllWindows()
Use background subtraction
See this: https://stackoverflow.com/questions/36254452/counting-cars-opencv-python-issue
1) Apply Background subtraction
2) Apply moments function to each frame to get the centroid of the moving cars
3) Define a region of pixel values(x,y) .When the centroid of moving car crosses this range ,increment the counter by one
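Step 3 can be sketched independently of OpenCV. A hypothetical minimal version (names and numbers are illustrative) keeps each tracked car's previous centroid y-coordinate and increments the counter only on the frame where the centroid crosses the counting line:

```python
# Hypothetical sketch of step 3: a car is counted exactly once, on the
# frame where its centroid crosses the counting line (y = LINE_Y) downward.
LINE_Y = 200

def update_count(prev_y, curr_y, count):
    """Return the new total after one frame for a single tracked car."""
    if prev_y < LINE_Y <= curr_y:  # crossed the line between these frames
        count += 1
    return count

count = 0
track = [150, 180, 199, 205, 240]  # centroid y per frame for one car
for prev_y, curr_y in zip(track, track[1:]):
    count = update_count(prev_y, curr_y, count)
print(count)  # 1 -- counted once even though it appears in five frames
```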
refer to this https://medium.com/machine-learning-world/tutorial-making-road-traffic-counting-app-based-on-computer-vision-and-opencv-166937911660
Using variable-controlled logging level in Python logger
I want to write a function that will get the requested logging level from the user, i.e. something like:
import logging
logger = logging.getLogger()
def func(log_level: <Type?>):
logger.log_level("everything is bad")
Can this be done?
Just use the log() method of a logger, which takes a level as well as a format string and arguments.
import logging
logger = logging.getLogger()
def func(log_level: int, message: str):
logger.log(log_level, message)
logging.basicConfig(level=logging.DEBUG, format='%(levelname)-8s %(message)s')
func(logging.DEBUG, 'message at DEBUG level')
func(logging.INFO, 'message at INFO level')
func(logging.CRITICAL, 'message at CRITICAL level')
Which prints
DEBUG message at DEBUG level
INFO message at INFO level
CRITICAL message at CRITICAL level
when run.
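If the level arrives as a string instead (say, from a CLI flag), it can be mapped to the numeric value first. A small variant sketch (the string-based `func` here is hypothetical, not from the question):

```python
import logging

def func(level_name: str, message: str):
    # logging.getLevelName maps a registered level name to its number,
    # e.g. "DEBUG" -> 10, so the string can drive logger.log directly.
    level = logging.getLevelName(level_name.upper())
    logging.getLogger().log(level, message)

logging.basicConfig(level=logging.DEBUG, format='%(levelname)-8s %(message)s')
func("warning", "message at WARNING level")
```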
You could use this. It first looks a little verbose, but you can easily add addtional handlers (maybe some that log to a file...)
import logging
logger = logging.getLogger()
handler = logging.StreamHandler() #you can also use FileHandler here, but then you need to specify a path
handler.setLevel(logging.NOTSET)
LOG_FORMAT = "%(levelname)s: %(message)s"
formatter = logging.Formatter(LOG_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.log(logging.CRITICAL, "your logging message 1") #==> CRITICAL: your logging message 1
logger.log(logging.WARN, "your logging message 2") #==> WARNING: your logging message 2
Note 1: LOG_FORMAT is defined according to the logging-documentation
Note 2: the logging levels (logging.DEBUG, logging.INFO...) are explained here. You can also use integer values accordingly:
logging.XX    Value
NOTSET            0
DEBUG            10
INFO             20
WARN             30
ERROR            40
CRITICAL         50
You can use a context manager. I quote from the documentation: "There are times when it would be useful to temporarily change the logging configuration and revert it back after doing something. For this, a context manager is the most obvious way of saving and restoring the logging context. Here is a simple example of such a context manager, which allows you to optionally change the logging level and add a logging handler purely in the scope of the context manager:"
import logging
import sys
class LoggingContext:
def __init__(self, logger, level=None, handler=None, close=True):
self.logger = logger
self.level = level
self.handler = handler
self.close = close
def __enter__(self):
if self.level is not None:
self.old_level = self.logger.level
self.logger.setLevel(self.level)
if self.handler:
self.logger.addHandler(self.handler)
def __exit__(self, et, ev, tb):
if self.level is not None:
self.logger.setLevel(self.old_level)
if self.handler:
self.logger.removeHandler(self.handler)
if self.handler and self.close:
self.handler.close()
# implicit return of None => don't swallow exceptions
This example is taken straight from the documentation, here: https://docs.python.org/3/howto/logging-cookbook.html#using-a-context-manager-for-selective-logging .
You can also rewrite any of the sample code snippets from: https://docs.python.org/3/howto/logging-cookbook.html#using-logging-in-multiple-modules .
How to increase fonts in all UI elements in IntelliJ IDEA?
Is there a possibility to increase/decrease font size in all UI elements throughout the IntelliJ IDEA?
It's possible to override font/size for the UI in Settings | Appearance & Behavior | Appearance | Use custom font (Size):
Editor font is configured in Settings | Editor | Font.
I can see the overriden font also in the project view and trees
@mirelon, I think the answer is for an earlier version. The font change does the right thing in 14.
It would be awesome if the answer were updated for the sake of the future readers (it changes the font everywhere, including the project vies/trees).
@CrazyCoder updating fonts works for the entire UI now (2018.1). You should update your answer.
I have recently begun using Intellij Idea.. (version 2018.2) I cannot determine how to get to this screen... any suggestions?
@TheGeeko61 https://www.jetbrains.com/help/idea/accessing-settings.html, https://www.jetbrains.com/help/idea/settings-appearance.html
Oh! The [Settings] from the File menu... Thanks! I'm using Ubuntu and I think that it is undermining the Ctrl-Alt-S keystroke.
Follow these three steps to change all fonts in the IDE (including project tree and console):
1) Perform a search based on fonts in the settings menu (screenshot not reproduced).
2) (screenshot not reproduced)
3) (screenshot not reproduced)
Excellent, this changes the project tree and console as well.
As of version 2020.3 the UI looks like this:
Press Ctrl + Alt + S
Settings:
I'm now using Intellij 2019.12.3, and the answer should be:
Preferences -> Appearance & Behaviors -> Appearance -> Use custom font
This should be top answer. Thank you.
This seems to have an answer to a similar question: Is it possible to change the font size of the project panel?
You could also try changing your system font and see if Intellij picks that up. Might only work for the menus though and if you are using the system theme like GTK+ on Linux.
For IntelliJ 2022.2 it's pretty simple:
For the Editor Area
Settings -> Editor -> General
Under font-size changing with Ctrl+Mouse Wheel you can set "All editors".
For the UI Area
Settings -> Appearance & Behavior -> Appearance
There you can directly change the font and size.
Editor
UI
The simple one is:
Press Ctrl+Shift+A.
In the popup frame, type Increase font size or Decrease font size, and then click Enter.
Doc
This just changes the editor font. The question is about every other UI element in Intellij, since the default font size is absurdly tiny.
This also changes only the editor font size from the current class, not from all opened classes
This does not answer the OP question.
How does a gas emit radiation with temperature when its particles' motion is linear?
Particles in a gas move faster with temperature, in straight-line motion (per the root-mean-square velocity equation), right? That explains the increase in pressure and effusion with temperature.
Solids emit radiation if their temperature is high enough because heat makes their atoms vibrate thus producing electromagnetic waves.
Gas, though, emits radiation due to temperature too.
How is this possible for a gas with its particles moving in a linear motion?
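For reference, the kinetic-theory relation the question alludes to (a standard result, not taken from this thread) is:

```latex
v_{\mathrm{rms}} = \sqrt{\frac{3 k_{B} T}{m}}
```

so higher temperature means faster, but still straight-line, motion between collisions.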
The key to emitting radiation is acceleration. Acceleration can arise due to a change in speed or due to a change in direction.
We could take bremsstrahlung as an example of radiation emitted by a gas of ionised atoms and electrons. The radiation here occurs because the electrons are accelerated by the electric fields of the ions in the gas. The acceleration can take the form of a change of speed or change of direction (usually both).
In an atomic gas the radiation comes from atomic processes. This might involve the excitation of atoms, perhaps by collisions, followed by de-excitation via the emission of radiation. Or one could imagine photo-recombination, where a free electron is "captured" by an ion to form an atom, which obviously involves an acceleration and the emission of light.
It is not obvious that Bremsstrahlung would lead to the Planck distribution...
@Floris that is a completely different question covered many times before. The spectrum of Bremsstrahlung isn't the Planck function because it is optically thin emission from a hot gas. Only optically thick (at all wavelengths) gases can emit the Planck function.
that makes sense - the emissivity dominates the spectral shape in this case.
using htmlentities with superglobal variables
I'm working on PHP with a book now. The book said I should be careful using superglobal variables, so it's better to use htmlentities like this:
$came_from = htmlentities($_SERVER['HTTP_REFERER']);
So, I wrote a code like this;
<?php
$came_from=htmlentities($_SERVER['HTTP_REFERER']);
echo $came_from;
?>
However, the output of the code above was the same without htmlentities(); it didn't change anything at all. I thought it would change \ into something else. Did I use it wrong?
Why would you expect it to encode "\"?
No, you did it correctly, it's just that htmlentities will only escape values if they need to be escaped - it's a security thing. So if there's nothing that needs to be escaped, then it will be the same.
BTW you only need to use htmlentities (or htmlspecialchars) if you plan to output that value at some point as part of your page. If you're just checking it within your PHP code then it doesn't matter. Usually it's more relevant for $_GET and $_POST.
But I do see that you're echoing it here, so if that's not just test code that you plan to remove later, then you should indeed be using htmlentities in this case.
So, by default, htmlentities() encodes characters using ENT_COMPAT (converts double-quotes and leaves single-quotes alone) and ENT_HTML401. Seeing as the backslash isn't part of the HTML 4.01 entity spec (as far as I can see, anyway), it won't be converted.
If you specify the ENT_HTML5 flag, you get a different result
php > echo htmlentities('abc\123');
abc\123
php > echo htmlentities('abc\123', ENT_HTML5);
abc&bsol;123
This is because backslash is part of the HTML5 spec. See http://dev.w3.org/html5/html-author/charref
Sorry. My previous answer was absolutely wrong. I was confused with something else. My apologies. Let me reframe my answer:
htmlentities will convert special characters into their HTML entity. "<", for example, will be converted to "&lt;". Your browser will automatically recognise this HTML entity and render it back as "<", so you won't notice any difference.
The reason for this is to prevent problems when saving your document in something different then UTF-8 encoding. Any characters not encoded might become screwed up for this reason.
This is completely incorrect in regards to what htmlentities() does. htmlentities() / htmlspecialchars() is used to encode HTML entities, usually to sanitize content output in an HTML document. It has absolutely nothing to do with "confusing your PHP server"
I'd say the biggest use of HTML encoding is to avoid XSS vulnerabilities
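To make the XSS point concrete, a small sketch (the input string is made up; htmlspecialchars is used here, though htmlentities would neutralise it too):

```php
<?php
// Hypothetical attacker-controlled value; echoing it raw would inject
// a script tag into the page (XSS).
$userInput = '<script>alert("xss")</script>';

// ENT_QUOTES also converts single quotes; UTF-8 is assumed here.
$safe = htmlspecialchars($userInput, ENT_QUOTES, 'UTF-8');
echo $safe; // the markup is neutralised into entities
```

The browser then displays the tag as literal text instead of executing it.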
connect to specific DB on a specific machine in SSMS
I have DB machine M1 (Dev.) hosting databases DB1, DB2, DB3, and another setup for test, M2, hosting the databases DB1, DB2, DB3.
Now I have connected to these 2 machines in SSMS. I can switch to the DB on the latter machine that was connected using the command "USE <database>".
What is the way to switch to another machine and connect to the relevant DBs in SSMS?
I found a way by going to Object Explorer and selecting the relevant object on a specific DB under a specific environment (right-click on a specific table -> Select Top 1000 Rows). Is there any other approach, like the "use" command, or any more efficient approach?
what are you trying to achieve? do you want to execute same select query on multiple databases on multiple servers?
are you asking how to switch between m1 and m2 similar like use command..you are using for databases?
Your question looks similar to this Post already answered here
Yes, I am looking to switch the connection from M1 to M2 without actually reconnecting. Is there a command similar to USE, or do I have to go with the approach posted in the link?
There are two ways
Using linked servers
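Sketching the first option (the linked server M2 is assumed to exist already, e.g. created with sp_addlinkedserver; the table name is made up):

```sql
-- Four-part naming: [linked server].[database].[schema].[object].
-- Run from a window connected to M1; no reconnection needed.
SELECT TOP (10) *
FROM [M2].[DB1].[dbo].[SomeTable];
```

This keeps the current connection to M1 while reading from M2, at the cost of one-time linked-server setup.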
Right-click in the query window and select Connection > Change Connection:
That will bring up the standard connection dialog from which you can connect to whatever other server / instance you want ...
Combine gtk3 in C with gtkmm in C++
I was looking at converting an existing project slowly from C written in GTK 3 to C++. Originally, I started to build up classes and was using extern "C" to move one function over at a time. However, GTK has its whole GObject system with special things to instantiate and dispose (with tons of magic macros created on the fly).
I started to make a wrapper for Gtk in C++ but it occurs to me that gtkmm already exists for that. Can gtk code be combined with gtkmm? Will signals and slots work across the two, and will it play nicely with the c++ gtkmm objects and c gtk "objects"?
Worst case is I could mirror the objects and cast them back and forth... And handle the creation and deletion in C with gtk so it doesn't break anything, but eventually I want to pull them out of C completely and I think that last bit will be a pain.
All Gtkmm widgets have a method named gobj() to give you the C pointer of your widget so you can call your C functions on it.
https://developer.gnome.org/gtkmm-tutorial/stable/sec-basics-gobj-and-wrap.html.en
https://developer.gnome.org/gtkmm/stable/classGtk_1_1Widget.html
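To make that concrete, a minimal sketch (the helper name is made up; assumes gtkmm 3, where Gtk::Widget::gobj() and the C function gtk_widget_set_tooltip_text() both exist):

```cpp
#include <gtkmm.h>

// Hypothetical helper: call a plain C GTK function on a gtkmm widget
// by unwrapping it with gobj(). GTK_WIDGET casts the GtkButton* up to
// the GtkWidget* the C API expects.
void set_tooltip_via_c_api(Gtk::Button& button) {
    GtkWidget* c_widget = GTK_WIDGET(button.gobj());
    gtk_widget_set_tooltip_text(c_widget, "set through the C API");
}
```

The reverse direction, Glib::wrap(), turns a C pointer back into the C++ wrapper, which is what makes a gradual migration workable.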
I guess gtk C callbacks are compatible with the gtkmm callbacks:
https://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en
Open an Android Studio project in a new desktop
I am using the latest version of Android Studio. I am working on project A but want to open a different project B on a new desktop using Windows + Tab, switching between the desktops with Ctrl + Win + Right Arrow, for quick reuse of code.
Why won't my Android Studio app open on the second desktop? When I click the launcher it only loads but doesn't open. Is there a setting or a correct way to do it?
Navigate to File > Settings > Appearance & Behavior > System Settings, then change the Project Opening setting according to your need.
onWheelCapture vs onWheel
Is there any difference between the two? React offers both options and I don't see a huge difference in the way they are triggered.
Here is some code that I was tinkering with
// Add page transition functions here too
onWheelCapture={e=>{
console.log(e);
}}
onWheel={(e)=>{
console.log(e);
}}
For context, I have been trying to hijack the default scroll behaviour and need to change some state based on the scroll. I just want to know which is better to use and which one is the standard way to go about it.
https://react.dev/learn/responding-to-events#capture-phase-events
The onWheelCapture is a "Capture Phase event handler". As mentioned here:
In rare cases, you might need to catch all events on child elements, even if they stopped propagation. For example, maybe you want to log every click to analytics, regardless of the propagation logic.
In most cases, onWheel should be sufficient.
So the Capture phase events should, in theory, run before the event handler?
Yes, onWheelCapture would be fired first, followed by onWheel
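A framework-free sketch of the two phases (this is not React's implementation, just a simulation of the DOM ordering that onWheelCapture/onWheel map onto):

```javascript
// Minimal simulation of event propagation order: capture handlers run
// outermost -> innermost, then bubble handlers run innermost -> outermost.
function dispatch(path, handlers) {
  const calls = [];
  // capture phase: from the root down to the target
  for (const node of path) {
    if (handlers[node] && handlers[node].capture) calls.push(`${node}:capture`);
  }
  // bubble phase: from the target back up to the root
  for (const node of [...path].reverse()) {
    if (handlers[node] && handlers[node].bubble) calls.push(`${node}:bubble`);
  }
  return calls;
}

const calls = dispatch(['root', 'div', 'button'], {
  root: { capture: true, bubble: true },
  button: { capture: true, bubble: true },
});
console.log(calls);
// → ['root:capture', 'button:capture', 'button:bubble', 'root:bubble']
```

So a parent's onWheelCapture fires before the child's onWheel even if the child calls stopPropagation during the bubble phase, which is the "catch all events" case the docs describe.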
How get android cpu model name?
I want to learn my phone's CPU model name. I tried to use /proc/cpuinfo and a lot of code, but I failed. Can anyone help me?
Do not put irrelevant tags on your question - this has nothing to do with activities or intents. It's unclear why you removed "java" as that presumably is relevant to how you want to parse this. "Linux" could also be relevant as the information originates at that level.
Run
$ adb shell cat /proc/cpuinfo
Here is my code
public static String getCpuName() {
    try {
        FileReader fr = new FileReader("/proc/cpuinfo");
        BufferedReader br = new BufferedReader(fr);
        String text = br.readLine();
        br.close();
        String[] array = text.split(":\\s+", 2);
        if (array.length >= 2) {
            return array[1];
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
What about your code?
There are two screenshots. The first one is my app (using /proc/cpuinfo like your function), the second one is AnTuTu Tester. MyApp Antutu Tester
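A hedged variant that scans every line instead of only the first: the "Hardware" key is an assumption that holds on many ARM kernels, which is likely why the first-line approach differs from what AnTuTu shows.

```java
import java.util.Optional;

public class CpuInfo {
    // Scan the full cpuinfo text: on many ARM devices the SoC name is on a
    // "Hardware" line rather than the first line; "model name" is the key
    // seen on x86 kernels, kept here as a fallback.
    static Optional<String> modelFrom(String cpuinfo) {
        String fallback = null;
        for (String line : cpuinfo.split("\n")) {
            String[] parts = line.split(":", 2);
            if (parts.length < 2) continue;
            String key = parts[0].trim();
            String value = parts[1].trim();
            if (key.equals("Hardware")) return Optional.of(value);
            if (key.equals("model name") && fallback == null) fallback = value;
        }
        return Optional.ofNullable(fallback);
    }

    public static void main(String[] args) {
        // Sample text in /proc/cpuinfo's "key : value" layout (made up).
        String sample = "processor\t: 0\nmodel name\t: ARMv7 Processor rev 4\nHardware\t: Qualcomm MSM8974\n";
        System.out.println(modelFrom(sample).orElse("unknown")); // prints "Qualcomm MSM8974"
    }
}
```

On a device you would feed it the whole file contents (e.g. read with a BufferedReader loop) instead of a single readLine() call.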
Recognize a touch for a UIView that is only created at the moment that touch occurs
I created a "slide view" (a UIView subclass) which animates on screen by dragging it up. The animation and everything else related to the animation works perfectly fine. This question targets only the very first touch on the screen when the slide view itself will be initialized:
The slide view itself uses a UIPanGestureRecognizer to recognize touches. The thing is, my slide view will be initialized only at the time when the user touches down on a UIButton. Parts of the slide view are initially located on that button, so that when the user touches that button, the touch is also located inside the slide view's frame.
I only want to create the view at the time the touch occurs, because the view is pretty heavy. I don't want to waste resources cause often the button is not even used.
How can I make the slide view recognize that first touch that also initializes (and adds it as a subview to super) the slide view itself?
You are saying your slide view has pan gestures; then you also want to detect a single tap on that slide view, right?
@ChinabS. Yeah, that is correct. Is that wrong? The gesture works at least
You can check this out for more details:
Gestures
Well, you can add both gestures, pan as well as tap. It will definitely work, as a tap is not the first action of the pan gesture. So no need to wait for the tap gesture to fail.
In short you can add both gestures and handle them simply.
I am struggling with your answer: are you saying I should use both gestures at the same time, or should I change the gesture to use a tap gesture? Can you give me some more details? Thx.
Absolutely you should use both the gestures on your slideview.
| common-pile/stackexchange_filtered |
TouchBegan/TouchEnded with Array
I have 3 arrays of nodes, each array with 5 nodes. The nodes in this example are squares.
I want to move them using touchesBegan and touchesEnded, saving which array the user touched and then the position of the finger when it is removed from the screen. I already know how to do that with a node.
My problem is that I don't know how to tell my code which array to move. Since I can't use something like array.name to tell the difference, how can I do such a thing?
For example, if I touch my Array1 it'll detect that it's my Array1, and then when I remove my finger it'll do an SKAction to move the nodes inside my Array1.
I tried to use array.description but that didn't work.
Thanks.
Since Sprite Kit provides convenient ways to access sprites in the scene's node tree, there is almost no reason to use arrays to manage your sprite nodes. In this case, you can add your sprites to a set of SKNodes, because you can easily access the "container" that a sprite is in with node = sprite.parent. You can then iterate over the sprites in that container by looping over node.children. Here's an example of how to do that:
var selectedNode:SKSpriteNode?
override func didMoveToView(view: SKView) {
    scaleMode = .ResizeFill
    let width = view.frame.size.width
    let height = view.frame.size.height
    let colors = [SKColor.redColor(), SKColor.greenColor(), SKColor.blueColor()]
    // Create 3 container nodes
    for i in 1...3 {
        let node = SKNode()
        // Create 5 sprites
        for j in 1...5 {
            let sprite = SKSpriteNode(imageNamed:"Spaceship")
            sprite.color = colors[i-1]
            sprite.colorBlendFactor = 0.5
            sprite.xScale = 0.125
            sprite.yScale = 0.125
            // Random location
            let x = CGFloat(arc4random_uniform(UInt32(width)))
            let y = CGFloat(arc4random_uniform(UInt32(height)))
            sprite.position = CGPointMake(x, y)
            // Add the sprite to a container
            node.addChild(sprite)
        }
        // Add the container to the scene
        addChild(node)
    }
}
Select a sprite to move in touchesBegan
override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
    for touch in (touches as! Set<UITouch>) {
        let location = touch.locationInNode(self)
        let node = nodeAtPoint(location)
        selectedNode = node as? SKSpriteNode
    }
}
Move the selected sprite
override func touchesMoved(touches: Set<NSObject>, withEvent event: UIEvent) {
    for touch in (touches as! Set<UITouch>) {
        let location = touch.locationInNode(self)
        selectedNode?.position = location
    }
}
Rotate all children in the node that contains the selected sprite
override func touchesEnded(touches: Set<NSObject>, withEvent event: UIEvent) {
    if let siblings = selectedNode?.parent?.children {
        for child in siblings {
            let action = SKAction.rotateByAngle(CGFloat(M_PI*2.0), duration: 2.0)
            child.runAction(action)
        }
    }
    selectedNode = nil
}
That's nice. =) It's a smart way to get a group of nodes. Thx a lot!
PDO Select Statement with MVC
I'm working on a small MVC website and I've run into a small problem. I want to be able to populate the HTML select dropdown with data pulled from my database. I've put the SELECT statement in my Model class and the standard HTML stuff in my View class, and my controller is currently empty, as I'm a little confused as to what I'm supposed to add there.
Code works fine with no errors. However, my view renders twice.
Here is my code:
Model:
<?php
require_once('../Controller/config.php');

class AppCalc
{
    public $dbconn;

    public function __construct()
    {
        $database = new Database();
        $db = $database->dbConnection();
        $this->dbconn = $db;
    }

    public function fillDropdown(){
        $stmt = $dbconn->prepare("SELECT Appliance FROM appliances");
        $stmt->execute();
        $results = $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}
?>
View:
<div id="appForm">
<form id="applianceCalc">
Typical Appliance:
<select name="appliances">
<?php foreach($appliances as $appliance): ?>
<option value="<?php echo $appliance['Appliance']; ?>">
<?php echo $appliance['Appliance']; ?>
</option>
<?php endforeach; ?>
</select>
<br>
Power Consumption:
<input type="text"></input>
<br>
Hours of use per day:
<input type="text"></input>
<br>
</div>
<input type="submit" name="btn-calcApp" value="Calculate"></input>
<input type="submit" name="btn-calRes" value="Reset"></input>
</form>
</div>
Controller:
<?php
require_once('../Model/applianceModel.php');
require_once('../Tool/DrawTool.php');
$newCalc = new AppCalc();
// instantiate drawing tool
$draw = new DrawTool();
// parse (render) appliance view
$renderedView = $draw->render('../View/applianceCalculator.php', array(
'appliances' => $newCalc->fillDropdown()
));
echo $renderedView;
?>
My config file:
<?php
class Database
{
    private $host = "localhost";
    private $db_name = "energychecker";
    private $username = "root";
    private $password = "";
    public $dbconn;

    public function dbConnection()
    {
        $this->dbconn = null;
        try
        {
            $this->dbconn = new PDO("mysql:host=" . $this->host . ";dbname=" . $this->db_name, $this->username, $this->password);
            $this->dbconn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        catch(PDOException $exception)
        {
            echo "Connection error: " . $exception->getMessage();
        }
        return $this->dbconn;
    }
}
?>
This is the table im using:
CREATE TABLE IF NOT EXISTS `appliances` (
    `AppId` int(11) NOT NULL AUTO_INCREMENT,
    `Appliance` varchar(50) NOT NULL,
    `PowerConsumption` int(8) NOT NULL,
    PRIMARY KEY (`AppId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
This is what i currently see:
http://i66.tinypic.com/30b31mx.jpg
Any help will be appreciated
Thanks!!
Model function fillDropdown should return an array of fetched rows, e.g.:
public function fillDropdown(){
    $stmt = $dbconn->prepare("SELECT Appliance FROM appliances");
    $stmt->execute();
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}
Then, you have to somehow pass data to the View. I recommend to build some class to render views. This can be very simple:
class DrawTool
{
    public function render($viewPath, array $context = array())
    {
        // start output buffering
        ob_start();

        // make variables with $key => value pairs from $context associative array
        // context will be gone after method execution thanks to variable scope
        extract($context);

        // render View
        include $viewPath;

        // return rendered View as string data type
        return ob_get_clean();
    }
}
Now, if You have your Draw class to render Views you can simply draw View with code below, in Controller:
require_once('../Model/applianceModel.php');
require_once('../Tool/DrawTool.php'); // or wherever else you plan to put this class

$newCalc = new AppCalc();

// instantiate drawing tool
$draw = new DrawTool();

// parse (render) appliance view
$renderedView = $draw->render('../View/applianceView.php', array(
    'appliances' => $newCalc->fillDropdown()
));

echo $renderedView;
And, inside View file:
<div id="appForm">
<form id="applianceCalc">
Typical Appliance:
<select name="appliances">
<?php foreach($appliances as $appliance): ?>
<option value="<?php echo $appliance['id']; ?>">
<?php echo $appliance['name']; ?>
</option>
<?php endforeach; ?>
</select>
<br>
Power Consumption:
<input type="text"></input>
<br>
Hours of use per day:
<input type="text"></input>
<br>
</div>
<input type="submit" name="btn-calcApp" value="Calculate"></input>
<input type="submit" name="btn-calRes" value="Reset"></input>
</form>
</div>
In View file you just traverse $appliances array (fetched rows), where single $appliance is one row represented by array with e.g. id and name keys.
And a couple of tips at the end:
do not end a pure PHP file with the ?> tag, because any character after that tag (most commonly a whitespace character) will trigger a "headers already sent" error if you wish to set some headers before outputting content
think about some value check in the fillDropdown method, because $stmt->fetchAll() can also return false on failure
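The second tip could be sketched as a small guard (a hypothetical helper; it just normalises fetchAll()'s possible false into an empty array so the View's foreach stays safe):

```php
<?php
// Hypothetical guard for a fetchAll() result: usually an array of rows,
// but false on failure.
function rowsOrEmpty($result) {
    return ($result === false) ? [] : $result;
}

echo count(rowsOrEmpty([['Appliance' => 'Fridge']])); // → 1
echo "\n";
echo count(rowsOrEmpty(false)); // → 0
```

With that in place, a failed query renders an empty dropdown instead of a foreach warning.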
Thank you so much for the reply! Really appreciate it. However, I'm now getting an error: Notice: Undefined variable: dbconn. This was after calling my controller class in the view. I have now also edited my original post to include the table I'm working with.
Right, instead of $stmt = $dbconn->prepare("SELECT Appliance FROM appliances"); should be $stmt = $this->dbconn->prepare("SELECT Appliance FROM appliances");
Hi, that seems to have worked. However, the HTML form prints twice: once with the populated dropdown and once without. I've edited my original post.
Your controller echoes the whole View file, so you probably echo it again somewhere further, I think. Is your Controller the entry point file (something like index.php)?
In my view file I've required the controller. Should I remove it?
Yes. Your controller renders the View, and all needed data is passed by the Controller into the View.
Thanks! If I wanted to use another SELECT statement on the same page, would I call it in the same controller?
Yes. You should add another variable to the $context array, and use it in View file the same way.
Sorry, I'm a little confused. Would you be able to give a quick example?
PHP include different php pages depending on results
NEW, PLEASE LOOK --- great news: I got it working, but it's only looking at my first if statement. I just need help so that if it's not TX, it looks at the other ELSEIF statements! Please and thanks!
OLD --- I'm passing via an HTML form btn to this code, and I want to grab it, look into my database, and find the state assigned to it via that billing telephone number. So if the state comes back as TX I want it to display east.php; if it comes back as KS I want it displayed in kc.php
and of course if it's CA I want it to display ca.php. I've been racking at it for a while; unless I have them put in what state the customer is from, which I don't want to do because not everyone will know what state the person is from! I've looked at a couple of different things and can't find anything; the closest I've seen is this: using php and mysql to display a specific URL in iFrame V2
include 'db_connect.php';
$mysqli = new mysqli($host, $user, $password, $database_in_use);
$btnfromform = $_GET['btn'];
$sql = "SELECT btn, state FROM legacy_escalation Where btn LIKE '".$btnfromform."'";
$result = $mysqli->query($sql);
?>
<?php
if ($result->num_rows > 0)
while($row=mysqli_fetch_assoc($result))
{
$state=$row['state'];
?>
<?php
if($state='tx')
include 'east.php';
elseif ($state='KS')
include 'kc.php';
elseif ($state='CA')
include 'ca.php';
?>
<?php
}
else {
echo "<span style='color:red; font-size:20px;'>No results found.</span>";
}
?>
Warning: You are wide open to SQL Injections and should use parameterized prepared statements instead of manually building your queries. They are provided by PDO or by MySQLi. Never trust any kind of input! Even when your queries are executed only by trusted users, you are still in risk of corrupting your data. Escaping is not enough!
Yes I am aware thank you! Right now I just want to get it working and it will only be used by me!
What's the actual problem you're experiencing? You haven't actually asked a question.
I'm having the user go to a form ("Search by Billing Telephone Number:") where they put in the btn, and it takes them to search.php, which is the original code I posted. My question is how to display different HTML pages depending on those results: if the btn is <PHONE_NUMBER> and it looks at the database and sees it's TX, then it needs to display east.php; but if it's a different btn and the state in the database is KS, then it should display kc.php.
Right, so... what's not working with the code you have?
Well, because I'm having the user just type in the BTN, I'm having issues with it grabbing the state in the database assigned to that btn, and issues grabbing the different HTML files depending on the state!
Dude. Slow down! $state='tx' is not the same as $state==='tx'
Yes, I messed up the equals. When running $state==='tx' it comes back that my state is undefined now.
Well, great news: after playing with it I got it fully working! I appreciate y'all getting mad at me! Sorry, I know my code and understanding is rough; I think I'm just biting off more than I can chew!
Quick answer, if($state='tx') should likely be if ($state === 'tx')
Longer answer. You're breaking all the rules.
mixing HTML and PHP,
badly named variables,
indentation issues,
SQL injection,
opening and closing PHP tags for no apparent reason,
mixed case values, e.g. tx vs. KS
Start paying attention to the style as well as the logic and you'll catch these issues easily.
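Putting those points together, a sketch of the fix (=== plus case normalisation, since the data mixes 'tx' and 'KS'; the file names are the question's own, and the state literal stands in for $row['state']):

```php
<?php
// Map of normalised state codes to the pages the question names.
$pages = ['TX' => 'east.php', 'KS' => 'kc.php', 'CA' => 'ca.php'];

$state = strtoupper(trim('tx'));   // stand-in for $row['state']
$page  = $pages[$state] ?? null;   // null covers the "no results" case

echo $page; // → east.php
```

In the real script you would then `include $page;` when it is non-null, which also replaces the if/elseif chain with a single lookup.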
| common-pile/stackexchange_filtered |
React 0.14 error: Module build failed: ReferenceError: [BABEL] .../node_modules/eslint-loader/index.js!/.../main.jsx: Unknown option: base.stage
This is my package.json:
"scripts": {
"test": "echo \"Error: no test specified\n\" && exit 1",
"lint": "npm run lint-js && npm run lint-css",
"lint-js": "echo \"\\033[33mLinting JS\\033[0m\n\" && eslint . --ext .js --ext .jsx",
"lint-css": "echo \"\\033[33mLinting CSS\\033[0m\n\" && csslint app/style --quiet",
"start": "echo \"\\033[33mStarting dev server at localhost:8080\\033[0m\n\" && TARGET=dev webpack-dev-server --devtool eval-source --progress --colors --hot --inline --history-api-fallback",
"compile": "echo \"\\033[33mBuilding for production\\033[0m\n\" && TARGET=build webpack -p",
"build": "npm run lint-js && npm run lint-css && npm run compile"
},
"private": true,
"dependencies": {
"alt": "^0.17.8",
"babel-core": "^6.1.2",
"babel-loader": "^6.1.0",
"babelify": "^7.2.0",
"css-loader": "^0.22.0",
"csslint": "^0.10.0",
"csslint-loader": "^0.2.1",
"eslint": "^1.9.0",
"eslint-plugin-react": "^3.8.0",
"file-loader": "^0.8.4",
"flowcheck": "^0.2.7",
"flowcheck-loader": "^1.0.0",
"gsap": "^1.18.0",
"html-webpack-plugin": "^1.6.2",
"jquery-browserify": "^1.8.1",
"node-libs-browser": "^0.5.3",
"radium": "^0.14.3",
"react": "^0.14.2",
"react-bootstrap": "^0.27.3",
"react-bootstrap-modal": "^2.0.0",
"react-dom": "^0.14.2",
"react-hot-loader": "^1.3.0",
"react-odometer": "0.0.1",
"react-slick": "^0.9.1",
"react-swf": "^0.13.0",
"style-loader": "^0.13.0",
"superagent": "^1.4.0",
"url-loader": "^0.5.6",
"video.js": "^5.0.2",
"webpack": "^1.12.3",
"webpack-dev-server": "^1.12.1",
"webpack-merge": "^0.2.0"
}
}
This is the complete error message. I read that this error can be solved using babelify, so I added it, although I don't need it.
ERROR in ./app/main.jsx
Module build failed: ReferenceError: [BABEL] /Dev/Fanatico/node_modules/eslint-loader/index.js!/Dev/Fanatico/app/main.jsx: Unknown option: base.stage
at Logger.error (/Dev/Fanatico/node_modules/babel-core/lib/transformation/file/logger.js:43:11)
at OptionManager.mergeOptions (/Dev/Fanatico/node_modules/babel-core/lib/transformation/file/options/option-manager.js:245:18)
at OptionManager.init (/Dev/Fanatico/node_modules/babel-core/lib/transformation/file/options/option-manager.js:396:10)
at File.initOptions (/Dev/Fanatico/node_modules/babel-core/lib/transformation/file/index.js:191:75)
at new File (/Dev/Fanatico/node_modules/babel-core/lib/transformation/file/index.js:122:22)
at Pipeline.transform (/Dev/Fanatico/node_modules/babel-core/lib/transformation/pipeline.js:42:16)
at transpile (/Dev/Fanatico/node_modules/babel-loader/index.js:14:22)
at Object.module.exports (/Dev/Fanatico/node_modules/babel-loader/index.js:83:14)
@ multi main
It all started when I wanted to upgrade to React 0.14, and ended up installing all the packages one by one.
are you using webpack?
You will need to have installed:
babel-core
babel-loader
babel-preset-es2015
babel-preset-react
babel-preset-stage-0
Your dependencies in your package.json would be:
{
"name": "react-transform-example",
"devDependencies": {
"babel-core": "^6.0.20",
"babel-loader": "^6.0.1",
"babel-preset-es2015": "^6.0.15",
"babel-preset-react": "^6.0.15",
"babel-preset-stage-0": "^6.0.15",
"express": "^4.13.3",
"webpack": "^1.9.6"
},
"dependencies": {
"react": "^0.14.0",
"react-dom": "^0.14.0"
}
}
And your .babelrc file
{
"presets": ["es2015", "stage-0", "react"]
}
More info at setting-up-babel-6
You're using babel 6 which doesn't have a stage option anymore, instead you have to use presets, e.g:
http://babeljs.io/docs/plugins/#presets
http://babeljs.io/docs/plugins/preset-stage-0/
Installation
$ npm install babel-preset-stage-0
Usage
Add the following line to your .babelrc file:
{"presets": ["stage-0"] }
Note you'll also need the es2015 and react preset. Also note that at least some of the hot reloading plugins are not compatible yet.
What is the best use of icons in the header? (icons replacing free text)
What is the best use of icons in the header? In my application I am using a number of icons for simple features such as "find friends", "ask a question", etc. This is mainly because I need to keep the header uncluttered and of minimal width to allow straightforward use of the Bootstrap collapse functionality (I can alter the collapse-point default, but then I run into other problems not explained here). I have added some popovers to them, but is this enough? From a UX perspective one might argue that the user has to hover over them before he sees this. I also have a "tour" button which creates a simple tour of these buttons. I am debating whether to have this or not.
What is your opinion on replacing text with icons?
I've noticed, so far in my years of online browsing, that many websites start with text in the header.
Once the website starts to become popular, the text is replaced with an icon to consume less real estate while still providing enough sense of what the icon stands for.
This makes perfect sense: for a new website, if you have a few actions in the header which are unique to your website, using an icon might confuse the user as to what it does unless it has a tooltip. Even with a tooltip, most users won't find it familiar and might not click on it.
When you use text as an action instead of an icon for the first few months after launch, users become aware of its place. Hence, when you later make a subtle change from the text to an icon which is relatable to it, placed in the same position, they'll easily learn and comprehend what the icon does.
So, depending on the universal acceptability of the icon relative to the action in your header, and users' familiarity with the website's UI, you should choose text or an icon.
I recommend using icons and text together.
Graphical representations are attractive since they are very easy for the human brain to recognize. They do have a disadvantage, since perception is biased by the mental model of the users and their previous experiences, and it is hard to find a generic graphical representation understandable by all users.
This is where the text helps, in case you have a doubt, the label is there to clarify it.
Sql update from select error
Update PSSFAAssets
Set PSSFAAssets.ProjectID = xhh_quickscan.ProjectID,
PSSFAAssets.CpnyAssetNo = xhh_quickscan.Barcode,
PSSFAAssets.CustomDate01 = xhh_quickscan.Date1,
PSSFAAssets.CustomDate02 = xhh_quickscan.Date2,
PSSFAAssets.CustomDate03 = xhh_quickscan.Date3,
PSSFAAssets.CustomDate00 = xhh_quickscan.Date4,
PSSFAAssets.SerialNo = xhh_quickscan.Serial,
PSSFAAssets.Custom10Char02 = xhh_quickscan.AStatus,
PSSFAAssets.CustomDate04 = xhh_quickscan.Scandate
From
(select * from xhh_quickscan
where crtdate > '8/17/2013 7:55 AM'
and crtdate <= '8/18/2013 2:37 PM')
ON PSSFAAssets.AssetID = xhh_quickscan.AssetID
throws error: "Incorrect syntax near the keyword 'ON'."
Tried sub "WHERE" for "ON"; no luck.
Why not even simpler? The subquery seems superfluous to me.
UPDATE p
SET ProjectID = s.ProjectID,
CpnyAssetNo = s.Barcode,
CustomDate01 = s.Date1,
CustomDate02 = s.Date2,
CustomDate03 = s.Date3,
CustomDate00 = s.Date4,
SerialNo = s.Serial,
Custom10Char02 = s.AStatus,
CustomDate04 = s.Scandate
FROM dbo.PSSFAAssets AS p
INNER JOIN dbo.xhh_quickscan AS s
ON p.AssetID = s.AssetID
WHERE s.crtdate > '20130817 07:55'
AND s.crtdate <= '20130818 14:37';
This is the right syntax for an UPDATE with a JOIN on SQL Server:
UPDATE A
SET A.ProjectID = B.ProjectID,
A.CpnyAssetNo = B.Barcode,
A.CustomDate01 = B.Date1,
A.CustomDate02 = B.Date2,
A.CustomDate03 = B.Date3,
A.CustomDate00 = B.Date4,
A.SerialNo = B.Serial,
A.Custom10Char02 = B.AStatus,
A.CustomDate04 = B.Scandate
FROM PSSFAAssets A
INNER JOIN (SELECT *
FROM xhh_quickscan
WHERE crtdate > '8/17/2013 7:55 AM'
AND crtdate <= '8/18/2013 2:37 PM') B
ON A.AssetID = B.AssetID
This will be the best way to do it.
Update PSSFAAssets
Set PSSFAAssets.ProjectID = xhh_quickscan.ProjectID,
PSSFAAssets.CpnyAssetNo = xhh_quickscan.Barcode,
PSSFAAssets.CustomDate01 = xhh_quickscan.Date1,
PSSFAAssets.CustomDate02 = xhh_quickscan.Date2,
PSSFAAssets.CustomDate03 = xhh_quickscan.Date3,
PSSFAAssets.CustomDate00 = xhh_quickscan.Date4,
PSSFAAssets.SerialNo = xhh_quickscan.Serial,
PSSFAAssets.Custom10Char02 = xhh_quickscan.AStatus,
PSSFAAssets.CustomDate04 = xhh_quickscan.Scandate
From xhh_quickscan
where xhh_quickscan.crtdate > '8/17/2013 7:55 AM'
and xhh_quickscan.crtdate <= '8/18/2013 2:37 PM'
and PSSFAAssets.AssetID = xhh_quickscan.AssetID
AngularJS - array in $scope re-defined on each $scope.function call
So for my website I am trying to set up some custom functions with AngularJS.
My output is as follows:
<div data-ng-repeat="filter in menu.filters" data-ng-controller="MenuController">
<label>{{filter.title}}</label>
<input type="checkbox" data-ng-click="setSelectedFilter()" />
</div>
My controller looks like this:
.controller("MenuController", ['$scope','$log', function($scope, $log) {
$scope.selectedFilters = [];
$scope.setSelectedFilter = function () {
var stub = this.filter.stub;
if (_.contains($scope.selectedFilters, stub)) {
$scope.selectedFilters = _.without($scope.selectedFilters, stub);
} else {
$scope.selectedFilters.push(stub);
};
$log.log($scope.selectedFilters);
return false;
};
}])
The problem I am having is that whenever I log $scope.selectedFilters at the beginning of $scope.setSelectedFilter, it's always a blank array. When I log $scope.selectedFilters at the end of my function, it contains the value I pushed there, but it doesn't hold onto it.
If I define $scope.selectedFilters as holding several values, those values show up in place of an empty array, like the array is getting re-built from the original declaration each time my function runs.
How do I get the $scope.selectedFilters array to hold on to the values I push to it from the $scope.setSelectedFilters function?
Here is a fiddle showing the problem: http://jsfiddle.net/HAz3p/
You are initializing a new controller for every ng-repeat div, so the three checkboxes don't share a controller; each has its own separate instance.
here is the fixed code
<div data-ng-controller="MenuController" >
<form>
<div data-ng-repeat="filter in menu.filters" >
<label>{{filter.title}}</label>
<input type="checkbox" data-ng-click="setSelectedFilter()" />
</div>
</form>
</div>
http://jsfiddle.net/HAz3p/1/
There are multiple answers to this small problem of yours, but the best is:
Define a factory and initialize "selectedFilters" in that factory, and it will be initialized only once:
.factory('myfactory', function(){
var selectedFilters = []; // an array, since the controller uses push and _.contains on it
return selectedFilters;
});
.controller("MenuController", ['$scope','$log','myfactory', function($scope, $log, myfactory) {
$scope.selectedFilters = myfactory;
...
...
}]);
As you declare "myfactory" and return your empty array (selectedFilters) from that factory:
$scope.selectedFilters = myfactory; // myfactory holds the array returned from the factory, stored in the controller's scope; it is now initialized only once.
Swift String Interpolation displaying optional?
When i use the following code and have nameTextField be "Jeffrey" (or any other name)
@IBAction func helloWorldAction(nameTextField: UITextField) {
nameLabel.text = "Hello, \(nameTextField.text)"
}
nameLabel displays... Hello, Optional("Jeffrey")
But, when I change the previous code to include a "!" like this:
@IBAction func helloWorldAction(nameTextField: UITextField) {
nameLabel.text = "Hello, \(nameTextField.text!)"
}
The code works as expected and nameLabel displays.... Hello, Jeffrey
Why is the "!" required? In the video tutorial I used to create this simple program, the author did not use the "!" and the program worked as expected.
Optionals must be unwrapped. You must check for a value or force-unwrap as you do. Imagine the optional as a box holding a value: before you can access the value, you need to take it out of the box.
if let name = nameTextField.text {
nameLabel.text = "Hello, \(name)"
}
This seems like a terrible design choice by the creators of Swift for string interpolation. I don't know of any other language that does this. It makes way more sense to print the unwrapped value if it exists, otherwise print blank or nil, or to disallow optionals inside string interpolation altogether. This has caused tons of bugs for us.
Another alternative is to use the null coalescing operator within the interpolated string for prettier text without the need for if let.
nameLabel.text = "Hello, \(nameTextField.text ?? "")"
It's less readable in this case, but if there were a lot of strings it might be preferable.
Whenever I use this it makes my build time quadruple.
Unfortunately, in my case I'm trying to interpolate optional Int values -- can't coalesce with empty string.
@RonLugge No but you can with a zero :-)
@CraigMcMahon and a zero would be erroneous if you're building a path that may (or may not) have an object ID at the end, e.g. http://foo.bar/api/v1/foos/1/bars/(optional)
Here's a handy extension to unwrap Any? to String.
Set a default value for nil values.
extension String {
init(_ any: Any?) {
self = any == nil ? "My Default Value" : "\(any!)"
}
}
// Example
let count: Int? = 3
let index: Int? = nil
String(count)
String(index)
// Output
// 3
// My Default Value
You can also use optional map.
This is where I learned of how to use it.
Basically, map will take an optional and return a value if there's a value and return nil if there's no value.
I think this makes more sense in code, so here's the code I found useful:
func getUserAge() -> Int? {
return 38
}
let age = getUserAge()
let ageString = age.map { "Your age is \($0)" }
print(ageString ?? "We don't know your age.")
I guess this may not be super helpful in the case where you're passing in an optional string, (nil coalescing works just fine in that case), but this is super helpful for when you need to use some value that isn't a string.
It's even more useful when you want to run logic on the given value, since map takes a closure and you can use $0 multiple times.
How do I import a CSV file?
I am new to using Python and pandas, and I am trying to import a CSV or text file into an array with quotes around each issue, like:
sp500 = ['appl', 'ibm', 'csco']
df = pd.read_csv('C:\\data\\stock.txt', index_col=[0])
df
which gets me:
Out[20]:
Empty DataFrame
Columns: []
Index: [AAPL, IBM, CSCO]
Any help would be great.
What have you read/tried so far?
Sorry, I can never get the code formatting to work right. I have only tried to do a basic import and get it into an array, but I can't get the ' ' around each issue.
Please show a few lines of the stock.txt file. What is the problem with what you're getting? Where does sp500 fit into the question/answer?
As for indenting code, write it as you want it to appear, without tabs. Paste it into the edit box, select it, then press the {} button above the edit box.
Below is a sample of the list of about 150 stocks that I want to scan in an index. I am defining the list after I import the classes and methods. I am trying to get the 150 stocks formatted like the sp500 array at the top. I can use CSV, txt or Excel. I would just like to not have to do the whole thing by hand or with vi or something. Since it's for a matplotlib/pandas first project, I thought I would try to do it all in the same environment. ---------------------
AAPL
FB
BAC
TWTR
YHOO
PBR
I cannot see the sample list. What should the result look like? What is sp500 useful for?
I was trying to loop through a list of stocks to compare indicator values. I found a sentdex video that showed what I was trying to do. Thank you everyone for your replies. I'll try to make future posts clearer.
import numpy as np
def main():
    try:
        issue = np.loadtxt('C:\\data\\stock.txt', delimiter=',', unpack=True, dtype='str')
        print(issue)
    except Exception as e:
        print(str(e))
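For comparison, a pandas route for the same job (a sketch: it assumes the file holds one ticker per line with no header, like the sample list above; io.StringIO stands in for the real C:\data\stock.txt):

```python
import io

import pandas as pd

# Stand-in for the one-ticker-per-line file from the question.
data = io.StringIO("AAPL\nFB\nBAC\nTWTR\nYHOO\nPBR\n")

# Read the single column with no header and turn it into a plain Python list,
# which prints with quotes around each issue, like the sp500 example.
sp500 = pd.read_csv(data, header=None)[0].tolist()
print(sp500)  # ['AAPL', 'FB', 'BAC', 'TWTR', 'YHOO', 'PBR']
```

To read from a real file, pass the path ('C:\\data\\stock.txt') instead of the StringIO object.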
Sampling groups in Pandas
Say I want to do a stratified sample from a dataframe in Pandas so that I get 5% of rows for every value of a given column. How can I do that?
For example, in the dataframe below, I would like to sample 5% of the rows associated with each value of the column Z. Is there any way to sample groups from a dataframe loaded in memory?
> df
X Y Z
1 123 a
2 89 b
1 234 a
4 893 a
6 234 b
2 893 b
3 200 c
5 583 c
2 583 c
6 100 c
More generally, what if I have this dataframe on disk in a huge file (e.g. an 8 GB CSV file)? Is there any way to do this sampling without having to load the entire dataframe into memory?
You could count the number of lines in the csv and then read_csv and set nrows to the number of lines/20
Thanks @EdChum but that would not return a stratified sample.
Hmm, you'd probably have to store this in PyTables or HDF5 and then run a query; not sure how else to do this without loading into a dataframe. The other way is to use the chunksize param, which returns a TextFileReader, and query whether each row is of interest and skip it if not, but this could potentially be very slow.
The biggest issue is that there is no way to know that you've hit the 5% mark until you've read the entire file. It would be more feasible to request the "first 10 instances" because we can stop recording a specific Z instance after a while. But a percentage cut-off means we have to count everything.
@chrisaycock I would imagine one could technically "grep" the file to do the count first (i.e. without loading the file in memory) and then sample accordingly?
Would you need the rows to be taken randomly, or are they already sufficiently random? Meaning, you could take the first 5% of a's that occur?
@DataSwede sampling in order would be enough! (I can always shuffle lines prior to the sampling using command line tools)
Using the options skiprows and nrows you can break up the read_csv into chunks you can fit into working memory.
How about loading only the 'Z' column into memory using the 'usecols' option. Say the file is sample.csv. That should use much less memory if you have a bunch of columns. Then assuming that fits into memory, I think this will work for you.
stratfraction = 0.05
#Load only the Z column
df = pd.read_csv('sample.csv', usecols = ['Z'])
#Generate the counts per value of Z
df['Obs'] = 1
gp = df.groupby('Z')
#Get number of samples per group
df2 = np.ceil(gp.count()*stratfraction)
#Generate the indices of the requested sample (first entries per group)
stratsample = []
for key in gp.groups:
    FirstFracEntries = gp.groups[key][0:int(df2['Obs'][key])]
stratsample.extend(FirstFracEntries)
#Generate a list of rows to skip since read_csv doesn't have a rows to keep option
stratsample.sort()
RowsToSkip = [i + 1 for i in set(df.index.values).difference(stratsample)]  # +1 for the header line of the file
#Load only the requested rows (no idea how well this works for a really giant list though)
df3 = pd.read_csv('sample.csv', skiprows = RowsToSkip)
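For the purely in-memory case, recent pandas (1.1 and later) can draw the per-group fraction directly with groupby(...).sample(...). A sketch on a small hypothetical frame (group sizes chosen even so that frac=0.5 gives a deterministic count per group):

```python
import pandas as pd

# Small hypothetical frame shaped like the one in the question: 4 rows per Z.
df = pd.DataFrame({
    "X": [1, 2, 1, 4, 6, 2, 3, 5, 2, 6, 1, 3],
    "Y": [123, 89, 234, 893, 234, 893, 200, 583, 583, 100, 50, 60],
    "Z": ["a", "b", "a", "a", "b", "b", "c", "c", "c", "c", "a", "b"],
})

# Draw the same fraction from every Z group; rows keep their original index.
sample = df.groupby("Z").sample(frac=0.5, random_state=0)
print(sample["Z"].value_counts().to_dict())  # two rows per group: a, b and c
```

This does not help with the out-of-core case, but it replaces the groupby/skiprows bookkeeping above when the frame fits in memory.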
Sidekiq generating errors on startup : Error fetching message: Invalid argument - connect(2)
Hi sidekiq / ruby / redis experts :
I am not sure if this issue has to do with sidekiq, or redis, or ruby or even rails for that matter.
We are trying to start sidekiq on our dev server (effectively one step away from prod, so we are using rails in prod mode), and the sidekiq log continuously produces the error message :
Error fetching message: Invalid argument - connect(2)
The relevant portion of the log is as follows :
# Logfile created on 2014-04-30 15:57:05 -0400 by logger.rb/31641
Running in ruby 1.9.3p484 (2013-11-22) [i386-mingw32]
See LICENSE and the LGPL-3.0 for licensing details.
Starting processing, hit Ctrl-C to stop
{:queues=>["default"], :concurrency=>25, :require=>".", :environment=>"production", :timeout=>8, :error_handlers=>[#<Sidekiq::ExceptionHandler::Logger:0xef8410>], :lifecycle_events=>{:startup=>[], :quiet=>[], :shutdown=>[]}, :strict=>true, :tag=>"price_tracker_rewrite"}
Booting Sidekiq 3.0.0 with redis options {}
Error fetching message: Invalid argument - connect(2)
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:129:in `connect_nonblock'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:129:in `rescue in connect_addrinfo'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:121:in `connect_addrinfo'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:162:in `block in connect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:160:in `each'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:160:in `each_with_index'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:160:in `connect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/connection/ruby.rb:211:in `connect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:285:in `establish_connection'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:79:in `block in connect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:257:in `with_reconnect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:78:in `connect'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:240:in `with_socket_timeout'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis/client.rb:178:in `call_with_timeout'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis.rb:1038:in `block in _bpop'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis.rb:37:in `block in synchronize'
E:/Rewrite/Ruby193/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis.rb:37:in `synchronize'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis.rb:1035:in `_bpop'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/redis-3.0.7/lib/redis.rb:1080:in `brpop'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq/fetch.rb:101:in `block in retrieve_work'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/connection_pool-2.0.0/lib/connection_pool.rb:58:in `with'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq.rb:69:in `redis'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq/fetch.rb:101:in `retrieve_work'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq/fetch.rb:36:in `block in fetch'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq/util.rb:15:in `watchdog'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/sidekiq-3.0.0/lib/sidekiq/fetch.rb:32:in `fetch'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `public_send'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `dispatch'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/calls.rb:122:in `dispatch'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/actor.rb:322:in `block in handle_message'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416:in `block in task'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55:in `block in initialize'
E:/Rewrite/Ruby193/lib/ruby/gems/1.9.1/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13:in `block in create'
My colleagues & I really have nowhere to go with this, and google has not been of any help. Thanks for helping!
UPDATE: I am executing sidekiq with the following line: "sidekiq -e production", since this is essentially a mirror of our prod. I also do not have a sidekiq config file (sidekiq.yml); maybe I need one? Really not sure, but it worked for us without a hitch during testing. We are also executing sidekiq in a Windows environment, in case that matters.
can you provide your sidekiq config, and your launch line (e.g. bundle exec sidekiq -e ...)
Nothing comes to mind with your updated info; however, since this is a redis connection issue, it could stem from your sidekiq.rb and redis.rb initializers. I could provide some suggestions next, but it would be good to see these files first.
@blotto there are no sidekiq.rb or redis.rb custom initializers at all. We just start the redis server we have installed (as a service) and run sidekiq with the command I noted above (out of the box, if you will). Everything worked perfectly fine in our test environment, and now it gives this error on dev/prod.
I recommend trying with some initializers. It won't hurt, and may give you more visible control over your configuration in any env. I'll add some recommendations in an answer to start.
First, set up some initializers; try these out (note the Heroku ENV var can be replaced with something else):
config/redis.rb
module MyApp
class << self
def redis
@redis ||= Redis.new(url: (ENV['REDISTOGO_URL'] || 'redis://<IP_ADDRESS>:6379'))
end
def sidekiq?
begin
redis_info = Sidekiq.redis { |conn| conn.info }
true
rescue
false
end
end
end
end
config/sidekiq.rb
Sidekiq.configure_server do |config|
Rails.logger = Sidekiq::Logging.logger
config.redis = { :url => MyApp.redis[:url], :namespace => 'sidekiq', :size => 5 }
end
Sidekiq.configure_client do |config|
config.redis = { :url => MyApp.redis[:url], :namespace => 'sidekiq', :size => 1 }
end
That looks like a more fundamental Ruby/windows issue. Upgrade your MRI version if possible, or give JRuby a shot.
Divergence of integral
I found the following problem on the web:
Let $f,g$ be continuous non-negative decreasing functions on $\mathbb R_{\ge 0}$ such that $$\int_0^\infty f(x)\,dx\to\infty, \int_0^\infty g(x)\,dx\to\infty$$. Define the function $h$ by $h(x)=\min \{f(x),g(x)\},\forall x\in\mathbb R_{\ge 0}$. Prove or disprove that $$\int_0^\infty h(x)\,dx$$ also diverges.
I bet the statement is true, so I attempt to prove it. Firstly, since $h$ is also continuous, non-negative and decreasing, we know that the statement is equivalent to say
Let $f,g$ be continuous non-negative decreasing functions defined on $\mathbb N$, show that
$$(\sum_{k=0}^\infty f(k)\to\infty\land \sum_{k=0}^\infty g(k)\to\infty )\implies \sum_{k=0}^\infty \min\{f(k),g(k)\}\to\infty $$
Now, if one assumes that the limits
$$\lim_{n\to\infty} f(n)/\min\{f(n),g(n)\}, \lim_{n\to\infty} \min\{f(n),g(n)\}/g(n), \lim_{n\to\infty} f(n)/g(n)$$ all exist, I think I could give a proof.
Let $$\lim_{n\to\infty} f(n)/\min\{f(n),g(n)\}=L_1, \lim_{n\to\infty} \min\{f(n),g(n)\}/g(n)=L_2, \lim_{n\to\infty} f(n)/g(n)=L$$. We know that any of $L_1,L_2, L$ is nonnegative. Since $L_1L_2=L$, if $L>0$, then we are done by the limit comparison test. If $L=0$, then in the long run $\min\{f(x),g(x)\}\sim f(x)$, we are done as well.
However, I could hardly think of an approach if the assumption is not made. Any help would be appreciated. Thanks in advance.
The statement is false. Take $f(x) = \frac{1}{x^2}$ and $$g(x) = \begin{cases}
1 & x \in (0, 1]\\
\frac{1}{x} & x \in [1, \infty)
\end{cases}$$.
My apologies. When I said $\mathbb R_+$ I really meant $\mathbb R_{\ge 0}$.
@Verbe how so? Shifting $\frac{1}{x^2}$ to get rid of the singularity will result in it being integrable on $(0, \infty)$.
Here is my attempt at a proof (it's only a rough idea).
Define
$$M(x)=\max(f,g)$$
and
$$I_1(x)=\frac{M(x)+h(x)}{2}$$
thus by comparison $I_1(x)$ diverges
then define
$$I_k(x)=\frac{I_{k-1}(x)+h(x)}{2}$$
thus by induction $I_k(x)$ diverges
as $$I_k(x) \rightarrow h(x)$$
$h(x)$ also diverges $\square$?
But does the induction carry over to the limit?
@BAI I'm not sure, hope someone else will give some hints on that.
How to solve $x-\sin x = \dfrac {\pi}{10}$?
Solve $x-\sin x = \dfrac {\pi}{10}$.
I got $1.26$ using the Newton-Raphson method, but is there any other alternative method?
Since $|1.26 - \sin(1.26) - \pi/10| \ge .006$, it follows $1.26$ is not a solution of the equation. So your question is unclear.
@LeeMosher, what's unclear in the question? My answer might be mistaken.
It makes the question unclear because the premise is wrong. Is the OP asking how to get lower errors? The answer would be: use the same method and iterate longer. Is the OP asking for different methods of finding approximate solutions? The answer would be yes, there are different methods, but that is a broad question. Is the OP asking how to find the closest answer to the hundredths digit? And so on...
@LeeMosher, I am looking for different method of solution.
Please check https://en.wikipedia.org/wiki/Root-finding_algorithm for alternative methods.
You could try fixed-point iteration, where you have two curves $f(x)=x$ and $g(x)=\sin{x}+\frac{\pi}{10}$ and you compute their intersection starting from a guessed $x$ value, the seed $x^0$.
This scheme is presented as follows:
$$x^{n+1}=g(x^{n})=\sin{x^n}+\frac{\pi}{10} \tag{1}$$
This scheme converges to the root $x^*$ within an interval in which the derivative of $g(x)$ is less than unity, i.e. $x^0\in(-\pi,\pi)$.
This can be proved computing the error:
$$|x^{n+1}-x^*|=|g(x^n)-g(x^*)|\leq L|x^n-x^*|$$
The bounding constant $L$ must be less than unity for $(1)$ to converge. This $L$ coincides with:
$$L=\max{|g'(x)|}$$
Since $g'(x) = \cos{x}$ and its maximum absolute value is 1 at $x=\pm \pi$, equation $(1)$ will always converge to $x^*$ if the seed is chosen within $x^0\in(-\pi,\pi)$
Sorry, didn't intend to copy your answer. But fixed point iteration always pops into my head when I see equations similar to this :).
@MrYouMath don't worry about that. You're right, $x=f(x)$ is almost always suitable for fixed point method except when $L>1$. :)
One other method is to use fixed point iteration. Rewrite: $x = \phi(x)=\sin(x)+\pi/10$. The iteration is
$x_{k+1}=\phi(x_{k})=\sin(x_k)+\pi/10$.
Start with $x = 0$ and iterate 8-9 steps and you will get a pretty good numerical solution.
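That iteration is easy to check numerically; a minimal sketch (standard library only):

```python
import math

# Fixed-point iteration x_{k+1} = sin(x_k) + pi/10, seeded at x = 0.
x = 0.0
for _ in range(50):  # |g'(x*)| = cos(1.2689...) is about 0.3, so 50 steps is plenty
    x = math.sin(x) + math.pi / 10

print(round(x, 8))  # converges to roughly 1.26894786
```

In practice 8-9 iterations already give several correct digits, matching the answer above.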
Defining $$f(x)=x-\sin(x)-\frac{\pi}{10},$$ we have $$f'(x)=1-\cos(x)\geq 0,$$ with $$f(0)<0$$ and $$f(2\pi)>0,$$ therefore there is only one solution: $$x\approx 1.26894786483261674188$$
Get atoms of a Boolean formula in Z3
I was wondering if there is a method to get the atoms of a Boolean formula:
a = Bool('a')
b = Bool('b')
c = Bool('C')
d = Bool('D')
e = Bool('E')
f = Bool('F')
formula = And(Or(a, b), Or(c, d), Or(e, f))
I wonder if something like this exists:
formula.get_atoms() or get_atoms(formula)
to give me following desired output:
{A, B, C, D, E, F}
In pySMT, get_atoms() exists and provides the atoms. However, for some reason, I need to experiment with Z3.
You can traverse through the hierarchy of children() (documentation):
def atoms(expr):
a = set()
if not str(expr) in {'True', 'False'}:
c = expr.children()
if len(c):
for child in c:
a = a.union(atoms(child))
else:
a = {expr}
return a
a = Bool('a')
b = Bool('b')
c = Bool('C')
d = Bool('D')
e = Bool('E')
f = Bool('F')
formula = And(Or(a, b), Or(c, d), Or(e, f))
print(atoms(formula))
Note that the str(expr) check is there to filter out the constants True and False (try it on Or(True, a) for instance); without it they would be returned as well, and the OP probably didn't want that.
Default project in Visual C++ 2008
In Visual C++ 2008 (Professional Edition) it is impossible to create a default project for a .cpp file. Sometimes this is inconvenient. Is there an edition of Visual C++ 2008 that allows it?
Can you clarify what you mean? What is it exactly that you are trying to achieve?
What do you mean by a "default project"?
Probably he meant a project wizard that allows choosing already existing source files instead of generating a default one.
It's available, assuming you've already written the .cpp file. Use File + New + Project From Existing Code. You'll get a point-and-click wizard with a bunch of questions that need to be answered.
I reckon you'll use this a few times, then discover it is just simpler to start a new project from scratch with the Win32 Console Application template. Just add your .cpp to the project's Source Files folder.
In Visual C++ 6.0 I open a .cpp file and click Build without having created a project. Visual C++ shows a message box: "Would you like to create a default project workspace?" Answer Yes and a default project is created.
Yes, VC6 usually has to be pried from a programmer's dead fingers. You are already very late, you'll have to get over it.
Why can't you do this? You can create a normal Win32 C++ project and include this .cpp file in it.
You mean you don't want to create a solution each time? There's no getting around this. It can be useful to create a Sandbox solution and just fill that up with .cpp files to throw your ideas around on.
Alternatively, use a programmer's text editor (such as NP++) for individual source files and only use a complete IDE when you are dealing with full projects.
Unable to export Enterprise Build in Xcode 7.1
I am unable to export an enterprise build in Xcode 7. I have the correct distribution profile, and I created enterprise builds in Xcode 5.1 without trouble, but after upgrading to Xcode 7.1 I can no longer produce a build. Please help me.
This is what Xcode 7.1 shows
"to save for enterprise deployment you need to add an apple id account that is enrolled"
Did you add your enterprise account under Xcode -> Preferences -> Accounts ?
Of course I added it; the same account with which I created the enterprise build in Xcode 5.1.
How do I initialise an 'Abstract Entity' CoreDataClass using SwiftyJSON
I have an Abstract Entity called 'User' in my Core Data model. I would like to initialise an instance of this entity without saving it into the database. Therefore I have set it as Abstract Entity.
I would like this entity class to conform to JSONAble as I am using SwiftyJSON to simplify things.
For it to conform I must implement the init method required convenience init?(dict: [String : Any]).
This is what the method looks like:
required convenience public init?(dict: [String : Any]) {
let decoder = JSONDecoder()
guard let codingUserInfoKeyManagedObjectContext = CodingUserInfoKey.managedObjectContext,
let managedObjectContext = decoder.userInfo[codingUserInfoKeyManagedObjectContext] as? NSManagedObjectContext,
let entity = NSEntityDescription.entity(forEntityName: "User", in: managedObjectContext) else {
fatalError("Failed to decode User")
}
self.init(entity: entity, insertInto: nil)
let json = JSON(dict)
let userDictionary = json["user"]
id = userDictionary["id"].int32Value
phoneCode = userDictionary["countryCode"].stringValue
mobileNumber = userDictionary["mobileNumber"].stringValue
firstName = userDictionary["firstName"].stringValue
lastName = userDictionary["lastName"].stringValue
avatar = userDictionary["avatar"].stringValue
userRole = userDictionary["userRole"].int32Value
let dateString = userDictionary["dateAdded"].stringValue
if let date = dateString.asDate() {
dateCreated = date
}
}
Upon running this, it fails to decode the user and hits the fatalCrash that I have in place.
I am still a bit unsure on how Abstract Entities are meant to be initialised without being stored into the database, maybe if I can figure out the correct way to setup the CoreDataClass for Abstract Entities, I can figure out where I am going wrong.
The whole point about an abstract class is that you can't instantiate it, you can only instantiate its sub-classes. That's what makes it abstract.
I basically want an entity that can be initialised but not stored in the database. Am I right in thinking that a parent entity would suit better than an abstract entity?
fatalError is called because one of the three let statements in your guard do not complete - which one is it? Have you thought about creating a local NSManagedObjectContext, inserting your JSON(dict) into that?
@andrewbuilder it was the let entity statement. I found another way to initialise the entity for this scenario and that seemed to work. Thanks anyway.
Simple question about split sequence
Let $0 \rightarrow A \xrightarrow{\psi} B \xrightarrow{\sigma} C \rightarrow 0$ be a short exact sequence of $R$-modules.
If there is a homomorphism $\rho :C \longrightarrow B$ such that $\sigma \circ \rho$ is the identity on $C$, how do you prove that $B=\psi(A)\oplus \rho(C)$?
It is straightforward to check that $\psi(A)\cap \rho(C)=0$; I don't know how to show that $\psi(A) + \rho(C) = B$.
Going back to the group case, are you familiar with the fact that a short exact sequence with a section $\iff$ semidirect product?
Hint. $b - \rho(\sigma(b))\in\mathrm{Ker}(\sigma) = \mathrm{Im}(\psi)$.
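Spelling the hint out (my own expansion of the comment above): since $\sigma\circ\rho=\mathrm{id}_C$, for any $b\in B$,

```latex
\sigma\bigl(b-\rho(\sigma(b))\bigr)
  = \sigma(b) - (\sigma\circ\rho)(\sigma(b))
  = \sigma(b) - \sigma(b) = 0,
```

so $b-\rho(\sigma(b))\in\mathrm{Ker}(\sigma)=\mathrm{Im}(\psi)=\psi(A)$ by exactness, and therefore $b=\bigl(b-\rho(\sigma(b))\bigr)+\rho(\sigma(b))\in\psi(A)+\rho(C)$, which gives $\psi(A)+\rho(C)=B$.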
Thanks! One more question: if $f\colon \mathbb{Z}/n\mathbb{Z} \longrightarrow \mathbb{Z}$ is a $\mathbb{Z}$-module homomorphism, then why is $f(1) = 0$? Thanks.
If $n\neq 0$, Because $1+n\mathbb{Z}$ is a torsion element of the domain; and torsion elements map to torsion elements. What are the torsion elements of $\mathbb{Z}$? (If $n=0$, then the statement is false).
Surely "torsion elements map to torsion elements" because they have the same order? Or is it for another reason?
No, they don't have to have the same order (think $(\mathbb{Z}/4\mathbb{Z})\to(\mathbb{Z}/2\mathbb{Z})$. But certainly, the image of a torsion element has nontrivial annihilator.
Oh, I see it now. Thanks.
spring integration: sftp inbound-channel-adapter The authenticity of host 'x.x.x.x' can't be established
I tried to download files using the following code:
<int-sftp:inbound-channel-adapter id="sftpInbondAdapter"
auto-startup="true" channel="receiveChannel" session-factory="sftpSessionFactory"
local-directory="file:${directory.files.local}" remote-directory="${directory.files.remote}"
auto-create-local-directory="true" delete-remote-files="true"
filename-pattern="*.txt">
<int:poller fixed-delay="${sftp.interval.request}"
max-messages-per-poll="-1" error-channel="sftp.in.error.channel" />
</int-sftp:inbound-channel-adapter>
<bean id="defaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${sftp.host}" />
<property name="port" value="${sftp.port}" />
<property name="user" value="${user}" />
<property name="password" value="${password}" />
<property name="allowUnknownKeys" value="true" />
</bean>
I'm sure that the user is authorized, because I tried it from the shell: sftp<EMAIL_ADDRESS>, then entered the password, and the download succeeded with "get".
But through the adapter I can't download files; the error is:
DEBUG LOG:
jsch:52 - Authentications that can continue: gssapi-with-mic,publickey,keyboard-interactive,password
2016-02-02 07:54:04 INFO jsch:52 - Next authentication method: gssapi-with-mic
2016-02-02 07:54:04 INFO jsch:52 - Authentications that can continue: publickey,keyboard-interactive,password
2016-02-02 07:54:04 INFO jsch:52 - Next authentication method: publickey
2016-02-02 07:54:04 INFO jsch:52 - Authentications that can continue: password
2016-02-02 07:54:04 INFO jsch:52 - Next authentication method: password
2016-02-02 07:54:04 INFO jsch:52 - Authentication succeeded (password).
2016-02-02 07:54:05 DEBUG SimplePool:190 - Obtained new org.springframework.integration.sftp.session.SftpSession@39c9c99a.
2016-02-02 07:54:05 DEBUG CachingSessionFactory:187 - Releasing Session org.springframework.integration.sftp.session.SftpSession@39c9c99a back to the pool.
2016-02-02 07:54:05 INFO jsch:52 - Disconnecting from x.x.x.x port 22
I would enable DEBUG logging for jsch as well as org.springframework.integration.
That last message is coming from this code...
@Override
public boolean promptYesNo(String message) {
logger.info(message); // <<<<<<<<< INFO message in your log line 538
if (hasDelegate()) {
return getDelegate().promptYesNo(message);
}
else {
if (logger.isDebugEnabled()) {
logger.debug("No UserInfo provided - " + message + ", returning:"
+ DefaultSftpSessionFactory.this.allowUnknownKeys);
}
return DefaultSftpSessionFactory.this.allowUnknownKeys;
}
}
Since you have not provided a delegate UserInfo (according to your configuration in the question), it should return true (because you have allowUnknownKeys set to true).
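To see how that return value plays out, here is a plain-Java sketch (hypothetical and simplified from the quoted promptYesNo method — no JSch or Spring Integration classes involved; PromptYesNoDemo and its static field are stand-ins for the session factory's state) of how the allowUnknownKeys flag answers the host-key question when no delegate UserInfo is configured:

```java
public class PromptYesNoDemo {
    // stand-in for DefaultSftpSessionFactory's allowUnknownKeys property
    static boolean allowUnknownKeys = true;

    // mirrors the quoted behavior: with no delegate UserInfo, the answer to
    // "The authenticity of host ... can't be established. Continue?" is
    // simply whatever allowUnknownKeys was set to
    static boolean promptYesNo(String message) {
        System.out.println(message + ", returning:" + allowUnknownKeys);
        return allowUnknownKeys;
    }

    public static void main(String[] args) {
        boolean accepted = promptYesNo(
                "The authenticity of host 'x.x.x.x' can't be established");
        System.out.println(accepted ? "connection proceeds" : "connection aborted");
    }
}
```

With the flag false, the prompt is answered "no" and the connection is aborted — which is exactly the "returning:false" symptom in the log below.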
If you can't figure it out; edit your question with the appropriate part of the log.
EDIT
You removed the most useful part of the log you posted on your first edit:
2016-02-01 18:28:27 DEBUG DefaultSftpSessionFactory:544 - No UserInfo provided - The authenticity of host '<IP_ADDRESS>' can't be established.
RSA key fingerprint is 98:1d:7e:73:77:97:f6:af:f9:2a:fc:2b:21:8e:8e:bf.
Are you sure you want to continue connecting?, returning:false
"Returning:false" means that the allowUnknownKeys property is false, not true as you show in your configuration. Perhaps you have another session factory bean that's overriding this one?
See my edit; somehow the session factory you are using doesn't have allowUnknownKeys set to true.
UIActivityViewController bug in iOS 8
Basically I was forced to check whether the device is running iOS 8.0 and whether it is an iPad before running the following code:
ActivityView.popoverPresentationController.sourceView = view;
ActivityView.popoverPresentationController.sourceRect = CGRectMake(0, 50, 1, 1);
Otherwise the application crashes.
The problem now is that it crashes on the iPad Mini.
Simulator or actual device?
Actual. It crashes on the iPad Mini.
This topic contains all the answers you will need:
http://stackoverflow.com/questions/25644054/uiactivityviewcontroller-crashing-on-ios8-ipads
Basically, instead of checking for iOS 8 and iPad, I should check whether the activity view responds to the selector popoverPresentationController:
if ([ActivityView respondsToSelector:@selector(popoverPresentationController)]) {
ActivityView.popoverPresentationController.sourceView = view;
ActivityView.popoverPresentationController.sourceRect = CGRectMake(0, 50, 1, 1);
}
Cronjobs Reporting Negative Execution Times?
I'm getting a fairly weird result trying to check the status of my Magento store's cron jobs.
I tried installing both AOE Scheduler and Noovias Cronjob Manager. Both of them give me something like this:
"Scheduler is working. (Last execution: -239 minute(s) ago)"
My only guess is that it actually last ran 1 minute ago: I am in Eastern time, which due to Daylight Saving Time is UTC-4, and 240 minutes is 4 hours. However, both my PHP configuration and MySQL database specify "America/New_York" as the timezone. Why are the times showing as negative?
And if I am correct in thinking it's just a time offset, is this fixable?
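If the guess is right, the arithmetic is reproducible. Below is a hypothetical sketch (plain Java, not Magento code; where exactly the offset creeps in is an assumption) showing how re-labeling a UTC timestamp as Eastern wall-clock time turns "1 minute ago" into "-239 minutes ago":

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class CronOffsetDemo {
    // Hypothetical reproduction of the symptom: the scheduler stores the
    // execution timestamp in UTC, but the dashboard re-labels that wall-clock
    // value as America/New_York time before subtracting it from "now".
    // During DST (UTC-4) that shifts the timestamp 4 hours into the future,
    // so a job that really ran 1 minute ago reports an age of -239 minutes.
    static long apparentAgeMinutes(Instant executedUtc, Instant now) {
        LocalDateTime wallClock = LocalDateTime.ofInstant(executedUtc, ZoneOffset.UTC);
        Instant misread = wallClock.atZone(ZoneId.of("America/New_York")).toInstant();
        return ChronoUnit.MINUTES.between(misread, now);
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2016-07-01T16:00:00Z"); // a DST date: NY = UTC-4
        Instant executed = now.minus(1, ChronoUnit.MINUTES); // really ran 1 minute ago
        System.out.println("Last execution: "
                + apparentAgeMinutes(executed, now) + " minute(s) ago");
        // prints: Last execution: -239 minute(s) ago
    }
}
```

If this is the cause, the fix would be to make the timezone used when writing the cron timestamps match the one used when displaying their age, rather than shifting the displayed value by hand.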
What is the timezone configured in Magento? Does that also match the America/New York timezone?
Yes: in the admin panel it shows Eastern (America/New York) under System -> Configuration -> General -> Locale Options -> Timezone.