49,700,028
Since there is no way to 'load' an Excel template into the client's workbook using the JavaScript API for Office, is it possible to download the file through the browser from the add-in, so the user can open it manually?
2018/04/06
[ "https://Stackoverflow.com/questions/49700028", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7298188/" ]
As a solution you can open a new window and pass your file URL to the backend:

```
window.open("https://yourbackend.com?fileurl=" + encode(url))
```

The backend should then act as a proxy and return a redirect response. This way the Outlook desktop client will open a new tab in a browser and show/download your file there.
There is a document.getFileAsync() API which allows you to get all the slices of the file. For details, see here <https://dev.office.com/reference/add-ins/shared/document.getfileasync>
8,615
I am doing some computations in MATLAB and I need to send those values to an Arduino Leonardo through a USB serial connection. I need to send 2 variables which can vary from -400 to +400. I mention their range because I was able to do this with small positive values (unsigned byte), but not with larger or negative numbers. Please help! Thank you
2015/02/13
[ "https://arduino.stackexchange.com/questions/8615", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/7303/" ]
You cannot send larger values because a byte only covers the range 0-255. To send larger values, you can break your int variable into two byte variables. Here's an example:

```
// On Arduino
int myVar = -123;
byte myVar_HighByte = myVar >> 8; // get the high byte
byte myVar_LowByte = myVar;       // get the low byte
// x86-compatible machines are little-endian, so we send the low byte first
Serial.write(myVar_LowByte);
Serial.write(myVar_HighByte);
```

```
% On MATLAB
s = serial('COM10')
fopen(s)
myVar = fread(s,1,'int16')
```

Disclaimer: the syntax might not be entirely correct, since I had no MATLAB or Arduino near me when I typed this. But you should get the idea. ;-)

Edit: On second thought, it might be easier to use a pointer.

```
// On Arduino
float myFloat = 3.14159265359;
byte* ptr = (byte*) (&myFloat);
Serial.write(*ptr++);
Serial.write(*ptr++);
Serial.write(*ptr++);
Serial.write(*ptr);
```

```
% On MATLAB
s = serial('COM10')
fopen(s)
myVar = fread(s,1,'float')
```

Having said that, you will still have to take care of endianness if you use a different microcontroller.
If you want to send your data directly from MATLAB to the Arduino, you can try the "MATLAB Arduino Support Package". It lets you read/write pins on the Arduino directly from the MATLAB command line or a script, just like you would with a VERY LOW END data acquisition card. This has the advantage of removing the burden of managing the serial communication. I don't know how it handles large numbers, though. <http://www.mathworks.com/hardware-support/arduino-matlab.html?refresh=true>
579,446
I am giving a series of talks to a .NET (C#) development team on the Ruby language and environment. I am approaching it as an opportunity to highlight the benefits of Ruby over C#. At first, I want to concentrate on the language itself before moving into the environment (RoR vs ASP MVC, etc). What features of the Ruby language would you cover?
2009/02/23
[ "https://Stackoverflow.com/questions/579446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/881/" ]
> > I am approaching it as an opportunity to highlight the benefits of Ruby over C#. > > > I'm not sure this is the right thing to do. If the tone of your talks is, "Ruby is cool because you can do *x* in it!" you'll lose your C# audience very quickly. They'll answer, "We can simulate *x* in C# if we want to, but we don't have much use for *x* in our designs." or perhaps, "If you think you need to do *x* then you're doing it wrong!" They won't understand how Ruby can help them until they understand Ruby. Why not take them through some toy problems and show them how a Ruby programmer would solve them? Teach them the Ruby way. A week later, when they're looking at a problem they're having, one of them will say, "Well gee, I know how to solve this, but if I was using Ruby it would be a whole lot easier...."
Duck Typing! This will be less of an issue in C# 4.0, but there have been times when I had to duplicate whole blocks of code because two related classes with (for my purposes) identical APIs did not share a base class. Also, blocks. C# has lambdas, but Ruby's syntax is prettier and they are used pervasively throughout the standard libraries. They are much more a part of idiomatic Ruby than idiomatic C#, and that counts for something. **Edit** Hash literals deserve a mention, too. In general I'd emphasize how concise you can be in Ruby, and how this allows you to express intent better and spend less time trying to make the compiler happy.
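For the talk itself, a tiny runnable contrast may help make both points at once; the classes below are invented for the example:

```ruby
# Duck typing: these two classes share no base class, yet any code
# that calls #area works with both -- no common interface required.
class Square
  def initialize(side)
    @side = side
  end

  def area
    @side * @side
  end
end

class Circle
  def initialize(radius)
    @radius = radius
  end

  def area
    (3.14159 * @radius * @radius).round(2)
  end
end

shapes = [Square.new(3), Circle.new(1)]
# A block, the construct used pervasively throughout core Ruby.
areas = shapes.map { |shape| shape.area }
puts areas.inspect # => [9, 3.14]
```

The equivalent C# (pre-4.0) would need a shared interface or reflection to treat `Square` and `Circle` uniformly.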
46,032
So I have a main character who will die at the end of the first book. In the next book he will be alive, but nobody knows it. Everyone thinks he is still dead. How can I do this without using cliches and boring the reader?
2019/06/18
[ "https://writers.stackexchange.com/questions/46032", "https://writers.stackexchange.com", "https://writers.stackexchange.com/users/39785/" ]
The first rule of cheating Death in Fiction is "No Body, No Crime". You have to fool the audience into thinking the character is dead by showing them entering the death trap... but not showing the body after the trap is sprung. This can be tricky, and you'll need to work out how to put your character in a spot that they can survive while appearing not to.

In real life, there are incredible acts of survival that shouldn't be possible. I've seen stories of skydivers whose chutes failed to deploy, who hit the ground at terminal velocity and lived. (They are not fighting ready, to be sure, but they do not die.) But in fiction you want to avoid this... it may be real in reality, but in fiction it's unrealistic. And in some genres, your readers will not buy it for a second.

Superhero fiction has been so plagued by dead characters being resurrected that the default reaction to any character's death is "He'll be back". Doubly so if they're a member of the X-Men. In fact, among comic fans, there used to be a saying that only three deaths are permanent in comics: Uncle Ben (Spider-Man), Bucky Barnes (Captain America), and Jason Todd (the second Robin). And two of those have since been resurrected! Every trick in the book to resurrect characters has been used here; so much so that, when writing the comic series 52 (which published one issue every week), one character was scripted to fake his death, and the writers had several close calls because they couldn't convincingly "kill" the guy without the fans thinking it over and concluding he wasn't really dead.

The best advice is to kill the guy, mourn him, miss him, and have him return at the time when he is most needed, and not a moment sooner. Either that or have him pull a Tom Sawyer and crash his own funeral... that's always fun.
That depends what sort of story this is. And how you killed the character off. If the story is set in a fantasy universe, you could have the character brought back with a magic spell. If it's a science fiction story, you might be able to posit some technology that brings him back. Failing that, I presume you would have to say that he was not really dead, that the idea that he was dead was a mistake. Whether you can make that believable depends on how you killed him. If he was gunned down in front of 20 eye witnesses, and several people who knew him identified the body, and he was then cremated, it could be pretty tough to explain how this was all a mistake and he's really still alive. On the other hand, if he was a soldier in a combat zone and didn't return from a mission and was declared "presumed dead", saying he wasn't really dead after all isn't too implausible. In general, I think bringing a character back from the dead is tricky. Usually if I see such a thing in a story, I say, "Oh brother, the author changed his mind about killing off this character and now he's trying to bring him back." Like many things in writing, it depends if you do it well or poorly. Two factors in doing it well: 1. Plausibility. The more you have to explain away the worse it is. 2. Foreshadowing. If before the character is killed, you drop some hints that he's planning to fake his own death or whatever the circumstances, then when you do it later the reader doesn't feel as cheated. Then it doesn't look like you just changed your mind.
560,575
I have a complex JSON object which is sent to the View without any issues (as shown below), but I cannot work out how to serialize this data back to a .NET object when it is passed back to the controller through an AJAX call. Details of the various parts are below.

```
var ObjectA = {
    "Name": 1,
    "Starting": new Date(1221644506800),
    "Timeline": [
        { "StartTime": new Date(1221644506800), "GoesFor": 200 },
        { "StartTime": new Date(1221644506800), "GoesFor": 100 }
    ]
};
```

I am not sure how this object can be passed to a controller method. I have this method below, where the Timelines object mirrors the above JS object using properties.

```
public JsonResult Save(Timelines person)
```

The jQuery I am using is:

```
var encoded = $.toJSON(SessionSchedule);
$.ajax({
    url: "/Timeline/Save",
    type: "POST",
    dataType: 'json',
    data: encoded,
    contentType: "application/json; charset=utf-8",
    beforeSend: function() { $("#saveStatus").html("Saving").show(); },
    success: function(result) {
        alert(result.Result);
        $("#saveStatus").html(result.Result).show();
    }
});
```

I have seen this question, which is similar but not quite the same, as I am not using a form to manipulate the data: [How to pass complex type using json to ASP.NET MVC controller](https://stackoverflow.com/questions/267707/how-to-pass-complex-type-using-json-to-asp-net-mvc-controller) I have also seen references to using a 'JsonFilter' to manually deserialize the JSON, but was wondering if there is a way to do it natively through ASP.NET MVC? Or what are the best practices for passing data in this way?
2009/02/18
[ "https://Stackoverflow.com/questions/560575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/56940/" ]
You say "I am not using a forms to manipulate the data." But you are doing a POST. Therefore, you are, in fact, using a form, even if it's empty. $.ajax's [dataType](http://docs.jquery.com/Ajax/jQuery.ajax#options) tells jQuery what type the server will *return*, not what you are passing. POST can only pass a form. jQuery [will convert data to key/value pairs](http://docs.jquery.com/Ajax/jQuery.ajax#options) and pass it as a query string. From the docs: > > Data to be sent to the server. It is > converted to a query string, if not > already a string. It's appended to the > url for GET-requests. See processData > option to prevent this automatic > processing. Object must be Key/Value > pairs. If value is an Array, jQuery > serializes multiple values with same > key i.e. {foo:["bar1", "bar2"]} > becomes '&foo=bar1&foo=bar2'. > > > Therefore: 1. You aren't passing JSON to the server. You're passing JSON to jQuery. 2. Model binding happens in the same way it happens in any other case.
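The two encodings contrasted here can be seen side by side without a server; a sketch (the serializer below mimics jQuery's key/value query-string behavior for illustration only, it is not jQuery's actual code):

```javascript
// Simplified stand-in for jQuery's default serialization of a plain
// object: key/value pairs become a query string, and an array value
// repeats its key (as the quoted docs describe).
function toQueryString(obj) {
  const parts = [];
  for (const [key, value] of Object.entries(obj)) {
    const values = Array.isArray(value) ? value : [value];
    for (const v of values) {
      parts.push(`${encodeURIComponent(key)}=${encodeURIComponent(v)}`);
    }
  }
  return parts.join("&");
}

const data = { foo: ["bar1", "bar2"] };
console.log(toQueryString(data));  // foo=bar1&foo=bar2
console.log(JSON.stringify(data)); // {"foo":["bar1","bar2"]} -- a raw JSON body instead
```

Passing a pre-stringified JSON body (as the question does) skips the key/value conversion entirely, which is why the two approaches bind so differently on the server.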
In response to Dan's comment above:

> I am using this method to implement the same thing, but for some reason I am getting an exception on the ReadObject method: "Expecting element 'root' from namespace ''.. Encountered 'None' with name '', namespace ''." Any ideas why? – Dan Appleyard Apr 6 '10 at 17:57

I had the same problem (MVC 3 build 3.0.11209.0), and the post below solved it for me. Basically the JSON serializer is trying to read a stream which is not at the beginning, so repositioning the stream to 0 'fixed' it: <http://nali.org/asp-net-mvc-expecting-element-root-from-namespace-encountered-none-with-name-namespace/>
33,947,823
Is semantic segmentation just a pleonasm, or is there a difference between "semantic segmentation" and "segmentation"? Is there a difference to "scene labeling" or "scene parsing"? What is the difference between pixel-level and pixelwise segmentation? (Side question: when you have this kind of pixel-wise annotation, do you get object detection for free, or is there still something to do?) Please give a source for your definitions.

Sources which use "semantic segmentation"
-----------------------------------------

* Jonathan Long, Evan Shelhamer, Trevor Darrell: [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/abs/1605.06211). CVPR, 2015 and PAMI, 2016
* Hong, Seunghoon, Hyeonwoo Noh, and Bohyung Han: "Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation." [arXiv preprint arXiv:1506.04924](http://arxiv.org/abs/1506.04924), 2015.
* V. Lempitsky, A. Vedaldi, and A. Zisserman: A pylon model for semantic segmentation. In Advances in Neural Information Processing Systems, 2011.

Sources which use "scene labeling"
----------------------------------

* Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun: [Learning Hierarchical Features for Scene Labeling](http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf). In Pattern Analysis and Machine Intelligence, 2013.

Sources which use "pixel-level"
-------------------------------

* Pinheiro, Pedro O., and Ronan Collobert: "From Image-level to Pixel-level Labeling with Convolutional Networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. (see <http://arxiv.org/abs/1411.6228>)

Sources which use "pixelwise"
-----------------------------

* Li, Hongsheng, Rui Zhao, and Xiaogang Wang: "Highly efficient forward and backward propagation of convolutional neural networks for pixelwise classification." [arXiv preprint arXiv:1412.4526](http://arxiv.org/abs/1412.4526), 2014.
Google Ngrams
-------------

"Semantic segmentation" seems to have become more common than "scene labeling" recently: [![enter image description here](https://i.stack.imgur.com/OI5p1.png)](https://i.stack.imgur.com/OI5p1.png)
2015/11/26
[ "https://Stackoverflow.com/questions/33947823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/562769/" ]
I have read a lot of papers about object detection, object recognition, object segmentation, image segmentation, and semantic image segmentation, and here are my conclusions, which may not be entirely accurate:

**Object recognition:** in a given image you have to detect all objects (a restricted class of objects that depends on your dataset), localize them with bounding boxes, and label each bounding box. In the image below you can see sample output of a state-of-the-art object recognition system. ![object recognition](https://i.imgur.com/9Y14Jo1.jpg?1)

**Object detection:** like object recognition, but in this task you have only two classes: object bounding boxes and non-object bounding boxes. For example, car detection: you have to detect all cars in a given image with their bounding boxes. ![Object Detection](https://i.imgur.com/fRuinD0.png?1)

**Object segmentation:** like object recognition, you will recognize all objects in an image, but your output should classify the pixels of the image that belong to each object. ![object segmentation](https://i.imgur.com/jPTpkRo.png?1)

**Image segmentation:** in image segmentation you segment regions of the image. Your output does not label the segments, and regions of the image that are consistent with each other should end up in the same segment. Extracting superpixels from an image, or foreground-background segmentation, are examples of this task. ![image segmentation](https://i.imgur.com/BthG0K9.png?1)

**Semantic segmentation:** in semantic segmentation you have to label each pixel with a class of objects (car, person, dog, ...) and non-objects (water, sky, road, ...). In other words, in semantic segmentation you label each region of the image. ![semantic segmentation](https://i.imgur.com/69SQFsT.png?1)

I think pixel-level and pixelwise labeling are basically the same thing, and could refer to either image segmentation or semantic segmentation.
I've also posted the same answer to your question at [this link](https://cs.stackexchange.com/questions/51387/what-is-the-difference-between-object-detection-semantic-segmentation-and-local/51654#51654).
The previous answers are really great; I would like to point out a few more additions:

**Object Segmentation** One of the reasons this term has fallen out of favor in the research community is that it is problematically vague. Object segmentation used to simply mean finding a single object (or a small number of objects) in an image and drawing a boundary around it, and for most purposes you can still assume it means this. However, it also began to be used to mean segmentation of blobs that *might* be objects, segmentation of objects *from the background* (more commonly now called background subtraction, background segmentation, or foreground detection), and in some cases was even used interchangeably with object recognition using bounding boxes (this quickly stopped with the advent of deep neural network approaches to object recognition, but beforehand object recognition could also mean simply labeling an entire image with the object in it).

**What makes "segmentation" "semantic"?** Simply, each segment, or in the case of deep methods each pixel, is given a class label based on a category. Segmentation in general is just the division of the image by some rule. [Meanshift](https://en.wikipedia.org/wiki/Mean_shift) segmentation, for example, at a very high level divides the data according to changes in the energy of the image. [Graph cut](https://en.wikipedia.org/wiki/Image_segmentation#Graph_partitioning_methods) based segmentation is similarly not learned, but derived directly from the properties of each image, separate from the rest. More recent (neural network based) methods use labeled pixels to learn to identify the local features associated with specific classes, and then classify each pixel based on which class has the highest confidence for that pixel. In this way, "pixel labeling" is actually a more honest name for the task, and the "segmentation" component is emergent.
**Instance Segmentation** Arguably the most difficult, relevant, and original meaning of object segmentation, "instance segmentation" means the segmentation of the individual objects within a scene, regardless of whether they are the same type. However, one of the reasons this is so difficult is that, from a vision perspective (and in some ways a philosophical one), what makes an "object" instance is not entirely clear. Are body parts objects? Should such "part-objects" be segmented at all by an instance segmentation algorithm? Should they be segmented only when they are seen separate from the whole? What about compound objects: should two things clearly adjoined but separable be one object or two (is a rock glued to the top of a stick an ax, a hammer, or just a stick and a rock unless properly made)? Also, it isn't clear how to distinguish instances. Is a wall a separate instance from the other walls it is attached to? In what order should instances be counted? As they appear? By proximity to the viewpoint? In spite of these difficulties, segmentation of objects is still a big deal, because as humans we interact with objects all the time regardless of their "class label" (using random objects around us as paperweights, sitting on things that are not chairs), and so some datasets do attempt to get at this problem; but the main reason there isn't much attention given to the problem yet is that it isn't well enough defined. [![enter image description here](https://i.stack.imgur.com/mPFUo.jpg)](https://i.stack.imgur.com/mPFUo.jpg)

**Scene Parsing/Scene Labeling** Scene parsing is the strictly segmentation approach to scene labeling, which also has some vagueness problems of its own. Historically, scene labeling meant dividing the entire "scene" (image) up into segments and giving them all a class label. However, it was also used to mean giving class labels to areas of the image without explicitly segmenting them.
With respect to segmentation, "semantic segmentation" *does not* imply dividing the entire scene. For semantic segmentation, the algorithm is intended to segment only the objects it knows, and will be penalized by its loss function for labeling pixels that don't have any label. For example the MS-COCO dataset is a dataset for semantic segmentation where only some objects are segmented. [![MS-COCO sample images](https://i.stack.imgur.com/vGmzy.jpg)](https://i.stack.imgur.com/vGmzy.jpg)
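The label-map distinction the answers describe can be made concrete with a toy example (the class and instance ids below are arbitrary):

```python
import numpy as np

# Semantic segmentation: every pixel gets a class id (0 = unlabeled
# background). Two cars get the SAME id, since only the category matters.
CAR = 1
semantic = np.zeros((4, 4), dtype=int)
semantic[0:2, 0:2] = CAR   # first car
semantic[2:4, 2:4] = CAR   # second car

# Instance segmentation additionally tells the two cars apart:
# each object gets its own id, even within one class.
instances = np.zeros((4, 4), dtype=int)
instances[0:2, 0:2] = 1    # car instance #1
instances[2:4, 2:4] = 2    # car instance #2

print(np.unique(semantic))   # [0 1]   -> one class besides background
print(np.unique(instances))  # [0 1 2] -> two distinct object instances
```

The same pixels are "car" in both maps; only the instance map preserves the information that there are two of them.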
7,951,844
I have this image, with this CSS:

```
.containerImg {
    height: 420px;
    margin-top: 0;
    position: relative;
    width: 100%;
}
.img {
    margin: 0 auto;
    position: absolute; /* unfortunately I can't remove the position: absolute */
}
```

And the markup:

```
<div class="containerImg">
    <img class="img" alt="" src="img//section_es_2442.jpg">
    <div class="info"> … </div>
    <div class="clear"></div>
</div>
```

(I post the markup so you can see that the img is not the only thing in the container.) The desired behavior is that the image should fill the .containerImg dimensions and be cropped while keeping its original ratio; the images are 1600x500 (3.2:1). So, how can I achieve this?
2011/10/31
[ "https://Stackoverflow.com/questions/7951844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/533941/" ]
If I were you, I would use a background-image instead of an img, using this code:

```
.containerImg {
    height: 420px;
    margin-top: 0;
    position: relative;
    width: 100%;
}
.img {
    margin: 0; /* position: absolute; I'll remove this */
    height: 420px;
    width: 100%; /* or use pixels */
    background: transparent url('') no-repeat center top;
    overflow: hidden; /* not sure you really need this */
}
```

with:

```
<div class="containerImg">
    <div class="img" style="background-image: url('img/section_es_2442.jpg')"></div>
    <div class="info"> … </div>
    <div class="clear"></div>
</div>
```

The idea is that you have a div acting as a viewport, with the image as its background at the same ratio and size; if the image is bigger, the extra is effectively cropped. :)
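Worth noting for later readers: modern CSS (well after this 2011 question) can do the crop-while-keeping-ratio part declaratively with `background-size: cover` or `object-fit: cover`. A sketch reusing the question's class names (the fixed height is an assumption from the question):

```css
/* Background variant: scale the image to fill the box, cropping overflow */
.img {
  height: 420px;
  width: 100%;
  background: url('img/section_es_2442.jpg') no-repeat center / cover;
}

/* Or keep the <img> element and let it crop itself */
.containerImg > img.img {
  height: 420px;
  width: 100%;
  object-fit: cover;
}
```

Both properties preserve the image's intrinsic ratio and crop whatever overflows the box, which is exactly the behavior asked for.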
Not sure if I understand what you mean, but can't you set the image as a background?

```
.containerImg {
    height: 420px;
    margin-top: 0;
    position: relative;
    width: 100%;
    background-image: url("img/section_es_2442.jpg");
    background-position: center;
}
```

Or, if it is dynamically built, in the `style` attribute of `<div class="containerImg">`.
1,700,347
What is $${\lim\_{x \to 1}} \frac{1-x^2}{\sin(\pi x)} \text{ ?} $$ I got it as $0$ but answer in the book as $2/ \pi$. Can you guys tell me what's wrong?
2016/03/16
[ "https://math.stackexchange.com/questions/1700347", "https://math.stackexchange.com", "https://math.stackexchange.com/users/275884/" ]
Let $x-1=t$ and note that $\sin(\pi(t+1)) = -\sin(\pi t)$; then \begin{align} \lim\_{x\to 1}\frac{1-x^2}{\sin \pi x} &= \lim\_{t\to 0}\frac{-t(t+2)}{\sin (\pi(t+1))}\\ &=\lim\_{t\to 0}\frac{t(t+2)}{\sin \pi t}\\ &=\lim\_{t\to 0} \left(\frac{t}{\sin \pi t}\cdot (t+2)\right)\\ &=\frac{2}{\pi} \end{align}
You have $${1-x^2 \over \sin(\pi x)}= {(1-x)(1+x)\over \sin(\pi x)}\sim {2(1-x)\over \sin(\pi x)}. $$ Now look at the definition of the derivative for $\sin$ at $\pi$.
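To spell out the hinted step: since $\sin(\pi \cdot 1) = 0$, the ratio is a difference quotient in disguise,

$$\lim\_{x \to 1} \frac{\sin(\pi x)}{x-1} = \left.\frac{d}{dx}\sin(\pi x)\right|\_{x=1} = \pi\cos(\pi) = -\pi,$$

so

$$\lim\_{x \to 1} \frac{2(1-x)}{\sin(\pi x)} = \lim\_{x \to 1} \frac{-2(x-1)}{\sin(\pi x)} = \frac{-2}{-\pi} = \frac{2}{\pi}.$$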
36,347,807
I have the following code, which should monitor network changes using an RTNETLINK socket. However, when I set a new IP address on an interface, neither "New Addr" nor "Del Addr" is printed. What could be the problem?

```
package main

import (
    "fmt"
    "syscall"
)

func main() {
    l, _ := ListenNetlink()

    for {
        msgs, err := l.ReadMsgs()
        if err != nil {
            fmt.Println("Could not read netlink: %s", err)
        }

        for _, m := range msgs {
            if IsNewAddr(&m) {
                fmt.Println("New Addr")
            }

            if IsDelAddr(&m) {
                fmt.Println("Del Addr")
            }
        }
    }
}

type NetlinkListener struct {
    fd int
    sa *syscall.SockaddrNetlink
}

func ListenNetlink() (*NetlinkListener, error) {
    groups := syscall.RTNLGRP_LINK | syscall.RTNLGRP_IPV4_IFADDR | syscall.RTNLGRP_IPV6_IFADDR

    s, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_DGRAM, syscall.NETLINK_ROUTE)
    if err != nil {
        return nil, fmt.Errorf("socket: %s", err)
    }

    saddr := &syscall.SockaddrNetlink{
        Family: syscall.AF_NETLINK,
        Pid:    uint32(0),
        Groups: uint32(groups),
    }

    err = syscall.Bind(s, saddr)
    if err != nil {
        return nil, fmt.Errorf("bind: %s", err)
    }

    return &NetlinkListener{fd: s, sa: saddr}, nil
}

func (l *NetlinkListener) ReadMsgs() ([]syscall.NetlinkMessage, error) {
    defer func() {
        recover()
    }()

    pkt := make([]byte, 2048)

    n, err := syscall.Read(l.fd, pkt)
    if err != nil {
        return nil, fmt.Errorf("read: %s", err)
    }

    msgs, err := syscall.ParseNetlinkMessage(pkt[:n])
    if err != nil {
        return nil, fmt.Errorf("parse: %s", err)
    }

    return msgs, nil
}

func IsNewAddr(msg *syscall.NetlinkMessage) bool {
    if msg.Header.Type == syscall.RTM_NEWADDR {
        return true
    }
    return false
}

func IsDelAddr(msg *syscall.NetlinkMessage) bool {
    if msg.Header.Type == syscall.RTM_DELADDR {
        return true
    }
    return false
}

func IsRelevant(msg *syscall.IfAddrmsg) bool {
    if msg.Scope == syscall.RT_SCOPE_UNIVERSE || msg.Scope == syscall.RT_SCOPE_SITE {
        return true
    }
    return false
}
```
2016/04/01
[ "https://Stackoverflow.com/questions/36347807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2500806/" ]
Here's the equivalent code for \*BSD:

```
package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    netlink, err := ListenNetlink()
    if err != nil {
        log.Printf("[ERR] Could not create netlink listener: %v", err)
        return
    }

    for {
        msgs, err := netlink.ReadMsgs()
        if err != nil {
            log.Printf("[ERR] Could not read netlink: %v", err)
        }

        for _, msg := range msgs {
            if _, ok := msg.(*syscall.InterfaceAddrMessage); ok {
                log.Printf("address change!")
            }
        }
    }
}

type NetlinkListener struct {
    fd int
}

func ListenNetlink() (*NetlinkListener, error) {
    s, err := syscall.Socket(syscall.AF_ROUTE, syscall.SOCK_RAW, syscall.AF_UNSPEC)
    if err != nil {
        return nil, fmt.Errorf("socket: %s", err)
    }

    return &NetlinkListener{fd: s}, nil
}

func (l *NetlinkListener) ReadMsgs() ([]syscall.RoutingMessage, error) {
    defer func() {
        recover()
    }()

    pkt := make([]byte, 2048)

    n, err := syscall.Read(l.fd, pkt)
    if err != nil {
        return nil, fmt.Errorf("read: %s", err)
    }

    msgs, err := syscall.ParseRoutingMessage(pkt[:n])
    if err != nil {
        return nil, fmt.Errorf("parse: %s", err)
    }

    return msgs, nil
}
```
The updated example should be:

```
package main

import (
    "fmt"
    "syscall"
)

func main() {
    l, _ := ListenNetlink()

    for {
        msgs, err := l.ReadMsgs()
        if err != nil {
            fmt.Printf("Could not read netlink: %s\n", err)
        }

        for _, m := range msgs {
            if IsNewAddr(&m) {
                fmt.Println("New Addr")
            }

            if IsDelAddr(&m) {
                fmt.Println("Del Addr")
            }
        }
    }
}

type NetlinkListener struct {
    fd int
    sa *syscall.SockaddrNetlink
}

func ListenNetlink() (*NetlinkListener, error) {
    groups := (1 << (syscall.RTNLGRP_LINK - 1)) |
        (1 << (syscall.RTNLGRP_IPV4_IFADDR - 1)) |
        (1 << (syscall.RTNLGRP_IPV6_IFADDR - 1))

    s, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_DGRAM, syscall.NETLINK_ROUTE)
    if err != nil {
        return nil, fmt.Errorf("socket: %s", err)
    }

    saddr := &syscall.SockaddrNetlink{
        Family: syscall.AF_NETLINK,
        Pid:    uint32(0),
        Groups: uint32(groups),
    }

    err = syscall.Bind(s, saddr)
    if err != nil {
        return nil, fmt.Errorf("bind: %s", err)
    }

    return &NetlinkListener{fd: s, sa: saddr}, nil
}

func (l *NetlinkListener) ReadMsgs() ([]syscall.NetlinkMessage, error) {
    defer func() {
        recover()
    }()

    pkt := make([]byte, 2048)

    n, err := syscall.Read(l.fd, pkt)
    if err != nil {
        return nil, fmt.Errorf("read: %s", err)
    }

    msgs, err := syscall.ParseNetlinkMessage(pkt[:n])
    if err != nil {
        return nil, fmt.Errorf("parse: %s", err)
    }

    return msgs, nil
}

func IsNewAddr(msg *syscall.NetlinkMessage) bool {
    if msg.Header.Type == syscall.RTM_NEWADDR {
        return true
    }
    return false
}

func IsDelAddr(msg *syscall.NetlinkMessage) bool {
    if msg.Header.Type == syscall.RTM_DELADDR {
        return true
    }
    return false
}

// rtm_scope is the distance to the destination:
//
//   RT_SCOPE_UNIVERSE  global route
//   RT_SCOPE_SITE      interior route in the local autonomous system
//   RT_SCOPE_LINK      route on this link
//   RT_SCOPE_HOST      route on the local host
//   RT_SCOPE_NOWHERE   destination doesn't exist
//
// The values between RT_SCOPE_UNIVERSE and RT_SCOPE_SITE are
// available to the user.
func IsRelevant(msg *syscall.IfAddrmsg) bool {
    if msg.Scope == syscall.RT_SCOPE_UNIVERSE || msg.Scope == syscall.RT_SCOPE_SITE {
        return true
    }
    return false
}
```
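The key difference from the question's version is the `groups` value: the `RTNLGRP_*` constants are multicast group *indices*, while `SockaddrNetlink.Groups` expects a *bitmask*, so each index must be shifted into place. A standalone sketch of just that conversion (the group indices are hardcoded to their Linux values so it runs without the Linux-only `syscall` constants):

```go
package main

import "fmt"

// Linux netlink multicast group indices (RTNLGRP_LINK = 1,
// RTNLGRP_IPV4_IFADDR = 5, RTNLGRP_IPV6_IFADDR = 9), hardcoded
// here so the sketch is portable.
const (
	rtnlgrpLink       = 1
	rtnlgrpIPv4IfAddr = 5
	rtnlgrpIPv6IfAddr = 9
)

// groupMask converts a group index into the bitmask bit that
// Bind() actually expects in SockaddrNetlink.Groups.
func groupMask(group uint) uint32 {
	return 1 << (group - 1)
}

func main() {
	mask := groupMask(rtnlgrpLink) | groupMask(rtnlgrpIPv4IfAddr) | groupMask(rtnlgrpIPv6IfAddr)
	fmt.Printf("Groups = %#x\n", mask) // Groups = 0x111
}
```

The resulting bits match the legacy `RTMGRP_LINK` (0x1), `RTMGRP_IPV4_IFADDR` (0x10), and `RTMGRP_IPV6_IFADDR` (0x100) masks; OR-ing the raw indices instead, as in the question, subscribes to the wrong groups, which is why no address messages arrived.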
49,320
Salvete! I have an InfoPath-based content-type. I also created a forms library in several subsites that each use this content-type. Now, I need to fix my InfoPath content-type form so that it gets submitted to its own form library. Right now, when it gets submitted, all the submitted forms go to one particular library that I made for testing - which is neat, but I need it to remain in the subsite. So, my question is, how do I make the form get submitted to its own library? Since each site has a form library, I want the submitted forms to get saved there. I don't see any way to set the url for the submitted form to anything but a static full url; but maybe I am missing something... [update] I have discovered, too, that if you "save" a form instead of "submit" the form, it does, indeed, go into the SAME library, instead of getting submitted to the library specified in the data connection. Maybe there is a workaround to prevent the "submit" action, and allow the user to only "save" the form...
2012/10/18
[ "https://sharepoint.stackexchange.com/questions/49320", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/7452/" ]
This can, indeed, be achieved via code-behind. SharePoint only allows you to submit to a static URL. However, you *can* submit to a dynamic URL (of which the current library is one example) using code-behind. Look [here](http://blogs.msdn.com/b/infopath/archive/2006/11/08/submitting-to-this-document-library.aspx) and [here](http://ddkonline.blogspot.com/2010/01/dynamically-modifying-submit-location.html) for code to achieve this. If you want to use code-behind in your form, there are a few things you need to do first. Make sure that:

* Sandboxing is enabled in SCA (it prevents malicious code from affecting your server)
* You have signed your form in InfoPath (this requires your certificate to be designated for "code-signing" rather than "server authentication"; you can create a self-signed certificate for dev purposes, but it is only good for a month and is not automatically accepted by the user's browser)
* You [publish as an administrator-approved template](https://ashrafhossain.wordpress.com/2010/08/30/publishing-administrator-approved-infopath-form-template-with-custom-code/). This will prompt you to save the form as a file on your local drive.
* After you save it, note the file location, go to SCA under "General Application Settings", and find "Manage Form Templates" in the InfoPath area. Upload the template you just saved. Then, in SCA, in the dropdown menu of your uploaded form, select "Activate to a Site Collection" and send it to your site.
* You might have to add this content type as a new type in your forms libraries. If you previously tried to publish the form directly from InfoPath, SCA may tack a "1" onto the file name and treat it as a new content type.
Revised answer: the scalable and codeless solution is to use [a template part (.xtp) file](https://web.archive.org/web/20140209114122/http://office.microsoft.com:80/en-us/infopath-help/design-a-template-part-to-reuse-in-multiple-form-templates-HA010150746.aspx) to create multiple templates, each with its own submit data connection to its own library, instead of one (complete) content type template for multiple libraries. Alternatively, avoid submit data connections altogether and use a content type template for saving InfoPath XML data forms based on it.

**Update:** One can disable Submit in Form Options in InfoPath Designer 2010, though by default it is not even enabled, and I do not understand why one would enable it when using a content type. ![Form Options - Submit is not configured...](https://i.stack.imgur.com/NsRtd.jpg) Fig. 1. Form Options - Submit is not configured ![Submit Options - Allow Users to submit this form](https://i.stack.imgur.com/5wx5G.jpg) Fig. 2. Submit Options - Allow Users to submit this form

I proposed above the codeless way to submit a form to its own SharePoint library; it is imperative to avoid coding, which invites a maintenance nightmare. Concerning [the article on changing the Submit data connection URL](https://web.archive.org/web/20151101085609/http://blogs.msdn.com/b/infopath/archive/2006/11/08/submitting-to-this-document-library.aspx?PageIndex=5#comments), which is possible only through code, it states:

* "Set the Security Level of the form to Full Trust and sign the form template with a digital certificate"

This requires administrator approval of forms, which is quite inconvenient and outright impossible in SharePoint Online (Office 365). Meanwhile, Full Trust and code signing are not necessary if you convert the "Main submit" data connection to a UDCX data connection file stored in a data connection library of the same SharePoint site collection.
48,675,614
I have an array of objects with the rule `{width:10%; float:left; display:block; height:20px}`. If I have 10 objects, they will be displayed in one line, one by one, so that's fine. My problem is: if I print fewer than 4 objects, I want to display them in the center of the page, without floating left. In another case, if I have e.g. 33 objects, I want to display 3 full rows of left-floating objects, 10 per row, and the last row with 3 objects must be centered on the page. Any suggestion how to solve this with CSS? Thanks
2018/02/08
[ "https://Stackoverflow.com/questions/48675614", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9330423/" ]
You can set `PUBLIC_URL` in `.env.local` on your local machine. Be careful not to have this file/variable set when building for release. You can read more about `.env` files in the Create React App documentation.
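For example, `.env.local` would contain just the one variable (the `/my-path` value here is only an illustration; use whatever sub-path your app is served from):

```
PUBLIC_URL=/my-path
```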
In your `package.json` add a `homepage` key. This will force all the files to be built with the prefix defined there.

```
// package.json
{
  "name": "my-project",
  "homepage": "/my-path",
  // ....
}
```

output:

```
/my-path/assets/my-asset.png
```
8,246,564
I have an existing program where a message (for example, an email, or some other kind of message) will be coming into a program on stdin. I know stdin is a FILE\* but I'm somewhat confused as to what other special characteristics it has. I'm currently trying to add a check to the program, and handle the message differently if it contains a particular line (say, the word "hello"). The problem is, I need to search through the file for that word, but I still need stdin to point to its original location later in the program. An outline of the structure is below. Currently:

```
//actual message body is coming in on stdin
read_message(char type)
{
    //checks and setup

    if(type == 'm')
    {
        //when it reaches this point, nothing has touched stdin
        open_and_read(); //it will read from stdin
    }
    //else, never open the message
}
```

I want to add another check, but where I have to search the message body. Like so:

```
//actual message body is coming in on stdin
read_message(char type)
{
    //checks and setup

    //new check
    if(message_contains_hello()) //some function that reads through the message looking for the word hello
    {
        other_functionality();
    }

    if(type == 'm')
    {
        //when it reaches this point, my new check may have modified stdin
        open_and_read(); //it will read from stdin
    }
    //else, never open the message
}
```

The problem with this is that to search the message body, I have to touch the file pointer stdin. But, if I still need to open and read the message in the second if statement (if type = 'm'), stdin needs to point to the same place it was pointing at the start of the program. I tried creating a copy of the pointer but was only successful in creating a copy that would also modify stdin if modified itself. I don't have a choice about how to pass the message - it has to stay on stdin. How can I access the actual body of a message coming in on stdin without modifying stdin itself? Basically, how can I read from it, and then have another function be able to read from the beginning of the message as well?
2011/11/23
[ "https://Stackoverflow.com/questions/8246564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/818086/" ]
The short answer is that you can't. Once you read data from standard input, it's gone. As such, your only real choice is to save what you read, and do the later processing on that rather than reading directly from standard input. If your later processing demands reading from a file, one possibility would be to structure this as two separate programs, with one acting as a filter for the other.
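A sketch of that save-first approach, shown in Python for brevity (a C version would grow a `char` buffer with `realloc` and hand the buffer to both checks):

```python
import io

def slurp(stream):
    """Drain a stream once; afterwards every consumer works on the saved copy."""
    return stream.read()

# In the real program this would be sys.stdin; a StringIO stands in for it here.
fake_stdin = io.StringIO("Subject: test\nhello world\n")
body = slurp(fake_stdin)

# Both "reads" now operate on the saved copy, not the exhausted stream.
contains_hello = "hello" in body                      # the message_contains_hello() check
full_message_available = body.startswith("Subject:")  # open_and_read() still sees the top
print(contains_hello, full_message_available)  # → True True
```

The key point is that the stream is consumed exactly once; every later "read" is really a read of the saved buffer.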
You should use and take advantage of buffering (`<stdio.h>` provides **buffered I/O**, but see [setbuf](http://linux.die.net/man/3/setbuf)). My suggestion is to read your *stdin* line by line, e.g. using [getline](http://linux.die.net/man/3/getline). Once you've read an entire line, you can do some minimal look-ahead inside. Perhaps you might read more about [parsing](http://en.wikipedia.org/wiki/Parsing) techniques.
115,971
I'm getting an error message when I try to build my project in Eclipse: `The type weblogic.utils.expressions.ExpressionMap cannot be resolved. It is indirectly referenced from required .class files` I've looked online for a solution and cannot find one (except for those sites that make you pay for help). Does anyone have any idea how to go about solving this problem? Any help is appreciated, thanks!
2008/09/22
[ "https://Stackoverflow.com/questions/115971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1459442/" ]
How are you adding your WebLogic classes to the classpath in Eclipse? Are you using WTP and a server runtime? If so, is your server runtime associated with your project? If you right-click on your project and choose `Build Path -> Configure Build Path` and then choose the Libraries tab, you should see the WebLogic libraries associated there. If you do not, you can click `Add Library -> Server Runtime`. If the library is not there, then you first need to configure it: `Window -> Preferences -> Server -> Installed Runtimes`
Add spring-tx jar file and it should settle it.
46,184
[Previously, on terminal use...](https://puzzling.stackexchange.com/q/45716/2071)

A panel rises into the wall, revealing a hallway and the outline of a sword. "Must have been taken already", you figure. You walk down the hallway and enter some kind of control room. There are 3 large lights illuminating the room. From left to right, they are colored blue, red, and green. There is a circular table in the center of the room that has a full world map designed on it. Scattered around the table are various maps of different countries on the planet. On the far wall is a large terminal, spanning the length of the entire room and stretching towards the ceiling. On the screen you see the following:

```
> SNIYN
Access granted. Door opened.
> OSCJC
Access granted. Door opened.
> HUGUNGJ
Access granted. Door opened.
> RMLLKQK
Access granted. Door opened.
> MOWYNRHLFTY
Access granted. Door opened.
> PDRGS
Access granted. Door opened.
>
```

A keyboard slides out as you approach...

**Hint**

> After looking around the room a while, you notice there are some faint designs in the lights. The design is a circle with a much smaller circle perfectly in the center and a line crossing through the middle of both.
>
> Inside the smaller circle in each light is a number. The blue light has a `7`, the red light has a `4`, and the green light has a `1`
>
> Each map also has a `90™` in the top left corner.

**Hint 2**

> Upon closer inspection of the maps, you notice that the colors used for the various markings on each are different. Among other options, one map has blue and red, another uses black and white, and another seems to have a silver and gold system.

**Hint 3**

> Feeling there must be something you've overlooked, you check every inch of the room for any additional clues. Behind the terminal, you notice a sliver of paper. It reads, "I am the first one to come through here. In the event that you get stuck, I hope my [card](https://i.stack.imgur.com/u9mcu.png) will assist you. It is kind of old and scratched up, but if you've made it this far I'm positive you can understand it."
2016/11/28
[ "https://puzzling.stackexchange.com/questions/46184", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/2071/" ]
> Ok, all entries are substitution ciphers for the real-life equivalents of the capital cities of Pokemon regions, with the first mythical Pokemon of each region as the key:
>
> SNIYN = Tokyo (Region Kanto = Kanto, key Mew)
>
> OSCJC = Osaka (Region Johto = Kansai, key Celebi)
>
> HUGUNGJ = Fukuoka (Region Hoenn = Kyushu, key Jirachi)
>
> RMLLKQK = Sapporo (Region Sinnoh = Hokkaido, key Manaphy)
>
> MOWYNRHLFTY = NewYorkCity (Region Unova = New York, key Keldeo)
>
> PDRGS = Paris (Region Kalos = France, key Diancie)
>
> The correct answer is the capital of Gen 7's region (Alola) in real life (Hawaii), which is Honolulu. Using the key Magearna (the first mythical Pokemon of that region) we arrive at
>
> CLKLIUIU
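The keyed-substitution scheme described above can be checked mechanically. Here is a small Python sketch (my reconstruction of the cipher as described in the answer, not code from the puzzle itself):

```python
import string

def keyed_alphabet(key):
    """Key letters first (duplicates dropped), then the remaining letters A-Z."""
    seen = []
    for ch in key.upper() + string.ascii_uppercase:
        if ch not in seen:
            seen.append(ch)
    return "".join(seen)

def encrypt(plaintext, key):
    # Map the plain alphabet onto the keyed alphabet letter-for-letter.
    table = str.maketrans(string.ascii_uppercase, keyed_alphabet(key))
    return plaintext.upper().translate(table)

print(encrypt("TOKYO", "MEW"))          # SNIYN
print(encrypt("PARIS", "DIANCIE"))      # PDRGS
print(encrypt("HONOLULU", "MAGEARNA"))  # CLKLIUIU
```

Running it reproduces every ciphertext in the terminal log, including the proposed answer.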
Partial answer

> All of this seems to be about Pokemon.
>
> - The small circle inside a big circle with a line in the middle is a Pokeball.
> - The hints also refer to the multiple generations of Pokemon games.
> - Red and blue = generation 1, silver and gold is generation 2, black and white is generation 5.
> - "TM"s are skills you teach to your Pokemon, and they all come with a number depending on the skill.
> - The 3 lights could possibly refer to the 3 usual starter Pokemon (fire, water, grass).
> - The various maps could refer to the maps in the game, like Johto and Kanto.
> - The numbers 1, 4, 7 probably refer to Pokemon indexes in the Pokedex.
>   - 1 is probably Bulbasaur (green starter Pokemon, indexes 2 and 3 being its evolutions)
>   - 4 is Charmander (red starter Pokemon, indexes 5 and 6 being its evolutions)
>   - 7 is Squirtle (blue starter Pokemon, indexes 8 and 9 being its evolutions)
>   - As for the TM 90 skill, the number and skill associated with it change in each generation, but in one of them it refers to the skill Substitute, which is probably a hint that we must substitute letters to find the cipher.
>
> UPDATE
>
> There are currently 7 generations of Pokemon and 6 answers already given, so quite likely 1 for each generation. Each generation has a main map.
>
> 1 = Kanto
> 2 = Johto
> 3 = Hoenn
> 4 = Sinnoh
> 5 = Unova
> 6 = Kalos
> 7 = Alola
>
> Unfortunately, the number and pattern of letters cannot be matched to the current cipher with just a normal substitution algorithm.
>
> The system in Pokemon is not countries but regions. The question says "maps of different countries on the planet". Combined with the fact that there is a geography tag, it really feels like we need to make a connection with real-world countries somewhere, but I still cannot find the connection.
>
> NOTE
>
> I haven't played those games in over 15 years and only played the first generation, so please forgive my possibly lacking/erroneous knowledge ;P
>
> UPDATE 2
>
> It appears to be a list of encrypted capitals of various countries.
>
> The goal is probably to decrypt all of them, find the relation between them to guess the next one, and then we must also find how the next encryption key is decided, to encrypt the answer before entering it.
>
> Here is what I could find until now using a simple substitution algorithm with a key.
>
> ---
>
> SNIYN -> Tokyo
> Keyword: mew
> ABCDEFGHIJKLMNOPQRSTUVWXYZ
> **MEW**ABCDFGHIJKLNOPQRSTUVXYZ
>
> ---
>
> PDRGS -> Paris
> Keyword: dhaka (capital of Bangladesh)
> ABCDEFGHIJKLMNOPQRSTUVWXYZ
> **DHAK**BCEFGIJLMNOPQRSTUVWXYZ
> (We don't write the last A because duplicate letters must be ignored.)
> This one was probably a very very very lucky guess and might not be the correct key.
>
> ---
>
> OSCJC ->
> Judging by the word length and the repetition of the **C**, I suspect this to be Dhaka or maybe Kyoto (not a capital anymore)
40,386,128
I'm probably missing something very obvious and would like to clear myself. Here's my understanding. In a naive react component, we have `states` & `props`. Updating `state` with `setState` re-renders the entire component. `props` are mostly read only and updating them doesn't make sense. In a react component that subscribes to a redux store, via something like `store.subscribe(render)`, it obviously re-renders every time the store is updated. [react-redux](http://google.com) has a helper `connect()` that injects part of the state tree (that is of interest to the component) and actionCreators as `props` to the component, usually via something like

```js
const TodoListComponent = connect(
  mapStateToProps,
  mapDispatchToProps
)(TodoList)
```

But with the understanding that a `setState` is essential for the `TodoListComponent` to react to redux state tree change (re-render), I can't find any `state` or `setState` related code in the `TodoList` component file. It reads something like this:

```js
const TodoList = ({ todos, onTodoClick }) => (
  <ul>
    {todos.map(todo =>
      <Todo key={todo.id} {...todo} onClick={() => onTodoClick(todo.id)} />
    )}
  </ul>
)
```

Can someone point me in the right direction as to what I am missing? P.S. I'm following the todo list example bundled with the [redux package](https://github.com/reactjs/redux/tree/master/examples/todos/src/containers).
2016/11/02
[ "https://Stackoverflow.com/questions/40386128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1100692/" ]
The `connect` function generates a wrapper component that subscribes to the store. When an action is dispatched, the wrapper component's callback is notified. It then runs your `mapState` function, and **shallow-compares** the result object from this time vs the result object from last time (so if you were to *rewrite* a redux store field with its same value, it would not trigger a re-render). If the results are different, then it passes the results to your "real" component as props. Dan Abramov wrote a great simplified version of `connect` at [connect.js](https://gist.github.com/gaearon/1d19088790e70ac32ea636c025ba424e) that illustrates the basic idea, although it doesn't show any of the optimization work. I also have links to a number of articles on [Redux performance](https://github.com/markerikson/react-redux-links/blob/master/react-performance.md#redux-performance) that discuss some related ideas. **update** [React-Redux v6.0.0](https://github.com/reduxjs/react-redux/releases/tag/v6.0.0) made some major internal changes to how connected components receive their data from the store. As part of that, I wrote a post that explains how the `connect` API and its internals work, and how they've changed over time: **[Idiomatic Redux: The History and Implementation of React-Redux](https://blog.isquaredsoftware.com/2018/11/react-redux-history-implementation/)**
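The shallow comparison can be sketched like this (a simplified illustration of the idea, not the actual react-redux source):

```javascript
// Shallow equality: same keys, and each value identical by reference (===).
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(
    key => Object.prototype.hasOwnProperty.call(b, key) && a[key] === b[key]
  );
}

const todos = [{ id: 1, text: "buy milk" }];
// Re-running mapState after an unrelated action yields the same references...
console.log(shallowEqual({ todos }, { todos }));             // true  -> skip re-render
// ...while a reducer that returns a new array produces a different reference.
console.log(shallowEqual({ todos }, { todos: [...todos] })); // false -> re-render
```

This is why returning new object/array references from reducers matters: reference identity is exactly what the comparison looks at.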
As far as I know, the only thing Redux does on a change of the store's state is call `componentWillReceiveProps` (if your component depends on the mutated state), and then you should force your component to update. It works like this:

1. store state changes
2. `componentWillReceiveProps` is called
3. component state changes
17,951,362
I wish to minimize (using fmincon or similar) the following function:

```
function Difference= myfun3(wk,omega,lambda,Passetcovar,tau,PMat,i,Pi,Q)
    wcalc=inv(lambda* Passetcovar)*inv(inv(tau * Passetcovar)+ PMat(i,:)'*inv(omega)*PMat(i,:))*(inv(tau * Passetcovar)*Pi+ PMat(i,:)'*inv(omega)*Q(i,:));
    Difference=sum((wk-wcalc).^2);
end
```

`wk` and `wcalc` are <8 x 1 double> column vectors where wk is known and wcalc is given by the above equation. How do I minimize `Difference` by varying `Omega` for `Omega > 0`, with

* lambda - Scalar
* Passetcovar - <8 x 8 double>
* tau - Scalar
* PMat - <3 x 8 double>
* Omega - Scalar
* Q - <3 x 1 double>
* Pi - <8 x 1 double>
2013/07/30
[ "https://Stackoverflow.com/questions/17951362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/678878/" ]
First, is `sigma` a row vector? If not, then `f` is a vector too. Are you trying multi-objective optimization? Then `fminsearch` will not help. Second, read the documentation of [`fminsearch`](http://www.mathworks.com/help/matlab/ref/fminsearch.html) before using it. `f` is supposed to be a function which maps your input vector to a scalar. Also, it needs a start point `x0`. Therefore, write a function `f` which takes in `omega` and returns a scalar objective function value. Also, figure out a feasible `x0` (i.e. a starting value for `omega`). Third, `fminsearch` does not allow constraints. You could hack it by making `f` return `Inf` or something when `omega <= 0`. I would recommend [`fmincon`](http://www.mathworks.com/help/optim/ug/fmincon.html). Your function should look like this. Make sure all the other variables like `PMap`, `tau`, etc. are globally accessible. Otherwise, you'll need an [anonymous function](http://www.mathworks.com/help/matlab/matlab_prog/anonymous-functions.html) to pass to `fminsearch`.

```
obj = f(omega)
    wcalc=inv(lambda* sigma)*inv(inv(tau * sigma)+ PMap(i,:)'*inv(Omega)*PMap(i,:))*(inv(tau * sigma)*pi+ PMap(i,:)'*inv(Omega)*Q(i,:));
    obj = sigma*(wk-wcalc).^2;
```

Then use `fmincon`. Assume you have a starting value for `omega`.

```
fmincon(f,omega,[],[],[],[],0,Inf);
```

The `[]` are added since we only want to bound your solutions from below using this form.

```
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
```

Does your `f` look like this?

```
obj = f(omega,PMap,sigma,.....)
```

Where the `......` represents all the other variables. Then you can use [anonymous functions](http://www.mathworks.com/help/matlab/matlab_prog/anonymous-functions.html) in the following manner.

```
g = @(omega)f(omega,PMap,sigma,.....);
```

Now you can use `g` in `fmincon` or `fminsearch`.
This objective is NOT that terribly complicated. But you have written it to look complicated. First of all, omega is a scalar! Why bother to write inv(omega)? Dividing by omega is a better idea, as it will not involve overhead for the inverse function. Next, tau is a known constant scalar, as is Passetcovar. Why compute the inverse of the matrix (tau\*Passetcovar) every time the function is called? Not only that, but you compute that same inverse matrix THREE times in one line. Learn to precompute these things. It will save you much time and many headaches. Anyway, you have an obsession with inv. It is called 6 times in one single line, and most of those calls are superfluous. Let's re-write that single line of yours. First of all, precompute inv(Passetcovar), passing the whole thing into your objective, so you need to do it ONLY one time. Note the basic identity:

```
inv(k*A) = inv(A)/k
```

which is valid for any non-zero scalar k.

```
IP = inv(Passetcovar);
```

Again, do not repeat the computation of inv(Passetcovar) every time the objective function is called. Instead, compute it ONCE, before you start the optimization. So the computation becomes a bit simpler to read:

```
wcalc = IP./lambda*inv(IP./tau + PMat(i,:)'*PMat(i,:)/omega)*(IP*Pi./tau + PMat(i,:)'*Q(i,:)/omega);
```

Edit: Finally we learn that Pi is a vector. I suppose it must be an 8x1 vector, in order for the array multiplication to conform. We can save a few more divides and multiplies by factoring out some constants, and inserting parens in a convenient place. Note that by computing the matrix\*vector multiply FIRST, and then multiplying by IP, we convert an 8x8 \* 8x8 multiply into an 8x8 \* 8x1 multiply. For small arrays like this, the difference is not huge, but the idea is worth remembering.

```
wcalc = IP*(inv(IP + tau*PMat(i,:)'*PMat(i,:)/omega)*(IP*Pi + PMat(i,:)'*Q(i,:)/omega*tau))/(lambda*tau^2);
```

The goal is to minimize a sum of squares between wcalc and wk, subject to positive omega. This is now a scalar. I would suggest plotting the function first, just to learn something about the shape of it, and to see what might be a good starting value for omega. Thus, by wrapping a function handle around myfun, ezplot will do the plot nicely, here for a range of [0,100] for omega. Pick your own upper bound for omega if that is unreasonable.

```
ezplot(@(omega) myfun3(wk,omega,lambda,Passetcovar,tau,PMat,i,Pi,Q),[0,100])
```

So the trivial solution is to use fminbnd, providing some reasonable but sufficiently large value for an upper bound. A nice thing about fminbnd is it does not need starting values. You will need to choose a reasonable upper bound for omega. The point is, use a tool designed to minimize a scalar function. General optimizers like fmincon are not needed, and require a starting value.

```
finalomega = fminbnd(@(omega) myfun3(wk,omega,lambda,Passetcovar,tau,PMat,i,Pi,Q),0,100)
```

You can also employ [fminsearchbnd](http://www.mathworks.com/matlabcentral/fileexchange/8277-fminsearchbnd-fminsearchcon), found on the file exchange. It can minimize a function subject to only lower bound constraints, but fminsearchbnd will need a starting value for omega.
82,611
There is an apple on a desk. Your friend, who can't see it, asks, "Are there any apples on the desk?" What is the short answer to this question: "Yes, there is" or "Yes, there are"? Or vice versa: there are some apples on the desk and your friend asks, "Is there an apple on the desk?" Should we answer in short form based on the structure of the question and then explain it in the long answer? Or can we simply answer based on what we know?
2016/02/25
[ "https://ell.stackexchange.com/questions/82611", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/30106/" ]
The difference is subtle. Studying is engaging in a mental process for the purpose of the acquisition of knowledge (such as reading a textbook). Learning is the actual acquisition of knowledge through some process such as studying, instruction, or experience. So, the desired outcome of studying something is learning (about) that thing.
> What is the difference between "studying" and "learning"?

When you say you are "learning" something, that suggests it's more an ongoing process...it doesn't happen in an instant. When you say *"I am studying English"* you might be talking about something you are doing right now. You could just have a book open and be in the act of "studying" it. But you could also mean it is an ongoing process...since you haven't really said anything about when your studying will end. So in the case of these two statements, the meaning *might* be the same, depending on context:

* I am studying English.
* I am learning English.

However, with these two:

* I am studying English tonight.
* I am learning English tonight.

Really only the first would make sense *(unless you are living in [The Matrix](https://www.youtube.com/watch?v=V8ZdGmgj0PQ) and can instantly download English to your brain.)* Studying can start and stop. But hopefully, no one ever promises to stop learning things *ever* again...
73,903
I'm researching whether it is feasible to back up an ESXi host by running rdiff-backup inside a guest operating system on that host. Is there a way for a guest virtual machine to gain access to the host OS file system? If there is more than one way, which one would yield the best throughput? EDIT: I would imagine the ideal would be to use the same interfaces the console VM is using, if those are available to a guest OS.
2009/10/13
[ "https://serverfault.com/questions/73903", "https://serverfault.com", "https://serverfault.com/users/18819/" ]
Depending on what you mean by accessing the Host OS filesystem, there are some mechanisms available, but they all require you to use either VMware's own management tools (either the VI Client or the Perl\Powershell Remote CLI's) or third-party tools that make use of the same remote management APIs. In all cases with ESXi, AFAIK, you are going to be connecting via some securely authenticated network protocol (e.g. WS-MAN, SCP) and you will need to authenticate with the appropriate credentials for root access to the Host in order to connect. You can, for example, use the standard VI client to connect from a VM back to the ESXi Host that it is running on. Once connected you can open the Datastore Browser and explore the VMFS Datastores, copy\cut and paste files in and out if you like. [Veeam FastSCP](http://www.veeam.com/vmware-esxi-fastscp.html) provides SCP connectivity from Windows clients to ESXi (or even directly between ESXi Hosts) and it will happily run in a Guest VM while connecting to the Host it's running on. Directly drilling up through the Hypervisor isn't possible to the best of my knowledge, although in theory the VMware Tools drivers could provide extensive interaction between the Hypervisor and the Guests if VMware wanted them to. That would be a major security problem though, so I can't see that being very likely. Apart from the ESXi Datastores there's not much else involved as far as the ESXi filesystem is concerned - it's designed as an embedded hypervisor, so its footprint is pretty small and VMware do not support anyone messing about inside it (even though it is possible to get there).
You could enable SSH on the ESXi host and use an SSH client from the guest. Alternatively, you could install proftpd on the ESXi host and use an FTP client from the guest.
94,302
I know how to use scp or wget to download a file on a remote server to my local machine. However, if I'm already logged into a server with ssh, is there a command that lets me download a file in the pwd on the server onto my local machine? I suppose I can use scp, but my local machine is usually behind a router. Would I have to open a port in the router?
2009/12/14
[ "https://serverfault.com/questions/94302", "https://serverfault.com", "https://serverfault.com/users/8827/" ]
It's a little archaic, but you may be able to use something like kermit to use a modem-era protocol (zmodem, etc.). Looks like there's a [program](http://zssh.sourceforge.net/) meant just for that purpose, too. I once needed to download a small-ish file from a remote unix server without any supporting tools, so I uuencoded the file, dumped it with cat to the terminal, and then captured the resulting text with my local terminal program, where I uudecoded it. Sick, eh? :)
I came up with a way to do this with a standard ssh client. It's a script that duplicates the current ssh connection, finds your working directory on the remote machine and copies back the file you specify to the local machine. It needs 2 very small scripts (1 remote, 1 local) and 2 lines in your ssh config. The steps are as follows:

1) Add these 2 lines to your ~/.ssh/config

```
ControlMaster auto
ControlPath ~/.ssh/socket-%r@%h:%p
```

Now if you have an ssh connection to machineX open, you won't need passwords to open another one.

2) Make a 1-line script on the remote machine called ~/.grabCat.sh

```
#!/bin/bash
cat "$(pwdx $(pgrep -u $(whoami) bash) | grep -o '/.*' | tail -n 1)"/$1
```

3) Make a script on the local machine called ~/.grab.sh

```
#!/bin/bash
[ -n "$3" ] && dir="$3" || dir="."
ssh "$1" ".grabCat.sh $2" > "$dir/$2"
```

4) and make an alias for grab.sh in (~/.bashrc or wherever)

```
alias grab=~/.grab.sh
```

That's it, all done. From now on if you're logged in to "machineX:/some/directory", just fire up a new terminal and type

```
grab machineX filename
```

That puts the file in your current working directory on the local machine. You can specify a different location as a third argument to "grab". Note: Obviously both scripts must be "executable", i.e.

```
chmod u+x filename
```
38,239,337
I've been trying to come up with a couple of SQL statements to get this right, but no luck. I've tried `between` and `>= and =<`. Basically, the SQL statement I've used works, but only to an extent. My code works like this: the user chooses a date range (from date and to date) and the program retrieves and shows the data it has within that range. Like I said, it works, but it also shows the days from other months, when what I want to show is just those particular days the user picked, e.g. from July 1, 2016 to July 5, 2016. What's happening is that any month of the year that has those dates will show as well, which makes that particular method a bit useless. Any help or any explanation of why this is so would be appreciated. Below is my code:

```
stringFromDate = sdf.format(fromDate.getDate());
stringToDate = sdf.format(toDate.getDate());

String query = "Select * from tblSavings where date between '" + stringFromDate + "' and '" + stringToDate + "'";

try{
    pstmt = conn.prepareStatement(query);
    rs = pstmt.executeQuery();
    tblList.setModel(DbUtils.resultSetToTableModel(rs));
```
2016/07/07
[ "https://Stackoverflow.com/questions/38239337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6166201/" ]
It looks like ed25519 keys are supported by Net SSH 4.0.0alpha1-4. <https://github.com/net-ssh/net-ssh/issues/214> You probably can use RSA keys, upgrade to net-ssh 4.0.0alpha4, or possibly use SSH Agent to get around this.
Isn't this the problem here?

```
E, [2016-07-08T10:52:56.795652 #56100] ERROR -- net.ssh.authentication.key_manager[3fe8c68cbad8]: could not load public key file `/Users/tester/.ssh/id_ed25519.pub': Net::SSH::Exception (public key at /Users/tester/.ssh/id_ed25519.pub is not valid)
```

It seems that net-ssh doesn't like the format of your `id_ed25519.pub` public key.
29,818,411
> ActionView::Template::Error (incompatible character encodings: UTF-8 and ASCII-8BIT): app/controllers/posts\_controller.rb:27:in `new'

```
# GET /posts/new
def new
  if params[:post]
    @post = Post.new(post_params).dup
    if @post.valid?
      render :action => "confirm"
    else
      format.html { render action: 'new' }
      format.json { render json: @post.errors, status: :unprocessable_entity }
    end
  else
    @post = Post.new
    @document = Document.new
    @documents = @post.documents.all
    @document = @post.documents.build
  end
```

I don't know why it is happening.
2015/04/23
[ "https://Stackoverflow.com/questions/29818411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3972931/" ]
I'm afraid that there is no satisfactory answer to your question. On <http://marketplace.eclipse.org/content/nodeclipse-coffeescript-viewer-editor-eclipse-431> it says: > > There is problem since Eclipse 4.3.1 release <https://github.com/Nodeclipse/coffeescript-eclipse/issues/19> > Get 4.3.0, e.g. as Enide Studio 0.5.x <http://marketplace.eclipse.org/content/enide-studio> > Help us if you know Eclipse XTEXT. > > > and > > We were looking for new owner familiar with XText technology. > > > To me it seems, that there are profound problems with this plugin. I also had the problems with a missing entry in the list of internal editors after installing nodeclipse. I simply removed the plugin and reinstalled it. But than I ran into those XText-problems ![enter image description here](https://i.stack.imgur.com/2eCf4.png) and finally gave up,...
This plugin for Coffeescript in Eclipse is a little buggy but maybe you could try it - <https://github.com/adamschmideg/coffeescript-eclipse/> Installation steps are given in the README.
47,481,022
I've created a cluster on Google Kubernetes Engine (previously Google Container Engine) and installed the Google Cloud SDK and the Kubernetes tools with it on my Windows machine. It worked well for some time, and, out of nowhere, it stopped working. Every command I'm issuing with `kubectl` provokes the following:

```
Unable to connect to the server: net/http: TLS handshake timeout
```

I've searched Google, the Kubernetes Github Issues, Stack Overflow, Server Fault ... without success. I've tried the following:

* Restart my computer
* Change wifi connection
* Check that I'm not somehow using a proxy
* Delete and re-create my cluster
* Uninstall the Google Cloud SDK (and kubectl) from my machine and re-install them
* Delete my `.kube` folder (config and cache)
* Check my `.kube/config`
* Change my cluster's version (tried 1.8.3-gke.0 and 1.7.8-gke.0)
* Retry several hours later
* Tried both on PowerShell and cmd.exe

Note that the cluster seems to work perfectly, since I have my application running on it and can interact with it normally through the Google Cloud Shell. Running:

```
gcloud container clusters get-credentials cluster-2 --zone europe-west1-b --project ___
kubectl get pods
```

works on Google Cloud Shell and provokes the `TLS handshake timeout` on my machine.
2017/11/24
[ "https://Stackoverflow.com/questions/47481022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7046455/" ]
This is what I did to solve the above problem. I simply ran the following commands:

```
> gcloud container clusters get-credentials {cluster_name} --zone {zone_name} --project {project_name}
> gcloud auth application-default login
```

Replace the placeholders appropriately.
So this MAY NOT work for you on GKE, but Azure AKS (managed Kubernetes) has a similar problem with the same error message so who knows — this might be helpful to someone. The solution to this for me was to scale the nodes in my Cluster from the Azure Kubernetes service blade web console. Workaround / Solution ===================== 1. Log into the Azure (or GKE) Console — Kubernetes Service UI. 2. Scale your cluster up by 1 node. 3. Wait for scale to complete and attempt to connect (you should be able to). 4. Scale your cluster back down to the normal size to avoid cost increases. Total time it took me ~2 mins. More Background Info on the Issue --------------------------------- Added this to the full ticket description write up that I posted over here (if you want more info have a read): ['Unable to connect Net/http: TLS handshake timeout' — Why can't Kubectl connect to Azure AKS server?](https://stackoverflow.com/questions/50726534/unable-to-connect-net-http-tls-handshake-timeout-why-cant-kubectl-connect)
7,513,922
I just updated my web server from Jetty 6.x to Jetty 8.0.1, and for some reason, when I do the exact same request, sometimes the response has been gzipped and sometimes not. Here is what the request and response look like at the beginning of the service() method of the servlet:

```
Request: [GET /test/hello_world?param=test]@11538114 org.eclipse.jetty.server.Request@b00ec2
Response: org.eclipse.jetty.servlets.GzipFilter$2@1220fd1
WORKED!

Request: [GET /test/hello_world?param=test]@19386718 org.eclipse.jetty.server.Request@127d15e
Response: HTTP/1.1 200
Connection: close

FAILED!
```

Here is my GzipFilter declaration:

```
EnumSet<DispatcherType> all = EnumSet.of(DispatcherType.ASYNC, DispatcherType.ERROR,
        DispatcherType.FORWARD, DispatcherType.INCLUDE, DispatcherType.REQUEST);
FilterHolder gzipFilter = new FilterHolder(new GzipFilter());
gzipFilter.setInitParameter("mimeTypes", "text/javascript");
gzipFilter.setInitParameter("minGzipSize", "0");
context.addFilter(gzipFilter, "/test/*", all);
```

The Javadoc says that:

```
GZIP Filter

This filter will gzip the content of a response if:

==> The filter is mapped to a matching path
    The response status code is >=200 and <300
    The content length is unknown or more than the minGzipSize initParameter or the minGzipSize is 0 (default)
    The content-type is in the comma separated list of mimeTypes set in the mimeTypes initParameter or
    if no mimeTypes are defined the content-type is not "application/gzip"
    No content-encoding is specified by the resource
```

It looks to me that all those conditions are met in my case, except maybe the last one, "No content-encoding is specified by the resource". How can I verify that? Plus, for a reason I also don't know, when the response is not filtered with GzipFilter, response.getWriter() throws an IOException. Why is that?
2011/09/22
[ "https://Stackoverflow.com/questions/7513922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/488054/" ]
I had the same problem, and found 2 causes: 1. If any of your servlets prematurely calls response.getWriter().flush(), the GZipFilter won't work. In my case both Freemarker in Spring MVC and Sitemesh were doing that, so I had to set the Freemarker setting "auto\_flush" to "false" for both the Freemarker servlet and the Spring MVC Freemarker config object. 2. If you use the ordinary GzipFilter, for some reason it doesn't let you set any headers in the "including" stage on Jetty. So I had to use [IncludableGzipFilter](http://download.eclipse.org/jetty/9.2.0.v20140526/apidocs/org/eclipse/jetty/servlets/IncludableGzipFilter.html) instead. After I made these changes, it works for me. Hope that helps you as well.
May I suggest you use [Fiddler](http://www.fiddler2.com/fiddler2/) to track your HTTP requests? Maybe a header like `Last-Modified` indicates to your browser that the content of your request `/test/hello_world?param=test` is still the same... So your browser reuses what is in its cache and simply closes the request, without reading its content... What happens with two different requests?

* `/test/hello_world?param=test`
* `/test/hello_world?param=test&foo=1`
11,511,809
I am learning Python using Zed Shaw's "Learn Python the Hard Way" on Windows using PowerShell. I am in [Exercise 46](http://learnpythonthehardway.org/book/ex46.html) where you set up a skeleton project. I downloaded [pip](http://pypi.python.org/pypi/pip), [distribute](http://pypi.python.org/pypi/distribute), [nose](http://pypi.python.org/pypi/nose/), and [virtualenv](http://pypi.python.org/pypi/virtualenv) and I installed them by typing:

> `python <filename>.py install`

However, probably because they were not installed where they were supposed to be, when I try

> `nosetests`

I get errors saying "The term 'nosetests' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.... CommandNotFoundException". I have been going through the exercises fine, so I thought that the path was correct, but do you have to change it now? Right now, I have the packages under the directory where I have my skeleton (..project/skeleton). I am sorry for a real beginner question, but if anybody could help me with this, I highly appreciate it!!
2012/07/16
[ "https://Stackoverflow.com/questions/11511809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524501/" ]
I had the same error, but the answer was in the book. Type this into PowerShell; hope it works for you too.

```
[Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
```
Try this:

```
# make sure you have pip and virtualenv installed
cd project
# create a virtual environment
virtualenv venv --distribute
# activate the virtual environment
# (activate.bat is for cmd.exe; in PowerShell use venv\Scripts\Activate.ps1)
venv\Scripts\activate.bat
# install nose
pip install nose
```

You should now be able to run nosetests as long as your virtualenv is activated.
18,921,174
The GUI of my system works well at 1366 x 768. When it is displayed in a different resolution, I need to scroll sideways, which should not happen. Also, the div and section elements become distorted when I press `Ctrl+-` in Chrome.

```
<header> 
 <h1><font color = "#666" face="Arial Black" size="5%">&nbsp;&nbsp;&nbsp;Isplika:</font> A Web-based IDE for C++ Source Content for Programming Beginners 
 <span id = "strength" > Our Stength, Our God Ü </span> 
 </h1> 
</header> 
<div id = "wrapper"> 
 <section id = "board"> 
 <section id = "board_c"> 
 <div id = "board_ln"> </div> 
 <div id = "board_code_w"> 
 <div id = "tags_c" class = "tags">C++ Code</div> 
 <div id = "board_code" contenteditable = "true" ></div> 
 <div id = "board_code_dup" contenteditable = "false"></div> 
 </div> 
 </section> 
 <section id = "board_mb"> </section> 
 <section id = "board_code_info"> 
 Row: <section id = "row" class="tab_space_right"> 1 </section> 
 Col: <section id = "col" class="tab_space_right"> 2 </section> 
 Number of Lines: <section id = "numLines" class="tab_space_right"> 3 </section> 
 </section> 
 </section> 
 <section id = "interpreter"> 
 <div id = "tags_int" class = "tags">Result</div> 
 <section id = "interpreter_c"> 
 </section> 
 <section id = "interpreter_input"> 
 <input id = "inputF" type = "text" /> 
 </section> 
 <div id = "inputB" class="buttons"> INPUT </div> 
 </section> 
 <section id = "identifier"> 
 <div id = "tags_iden" class = "tags">Variables</div> 
 <section id = "identifier_type"> </section> 
 <section id = "identifier_name" > </section> 
 <section id = "identifier_value"> </section> 
 </section> 
 <section id = "controls"> 
 <div id = "run" class = "buttons" >RUN</div> 
 <div id = "stop" class = "buttons" >STOP</div> 
 <div id = "next" class = "buttons" >NEXT</div> 
 <div id = "support" class = "buttons" >SUPPORT</div> 
 </section> 
</div> 
</body> 
</html> 
```

![enter image description here](https://i.stack.imgur.com/8Ooaq.png)

`#board` should be resizable but `#interpreter` and `#identifier` should be static
2013/09/20
[ "https://Stackoverflow.com/questions/18921174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2454644/" ]
The code below is a template. However you might want to update the default (working) directory to the location of the file.

```
Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" _
    (ByVal hwnd As Long, ByVal lpszOp As String, _
    ByVal lpszFile As String, ByVal lpszParams As String, _
    ByVal LpszDir As String, ByVal FsShowCmd As Long) As Long

' Needed for the GetDesktopWindow() call below
Declare Function GetDesktopWindow Lib "user32" () As Long

Const SW_SHOWNORMAL = 1

Function StartDoc(DocName As String) As Long
    Dim Scr_hDC As Long
    Scr_hDC = GetDesktopWindow()
    StartDoc = ShellExecute(Scr_hDC, "Open", DocName, _
        "", "C:\", SW_SHOWNORMAL)
End Function
```
[`Shell32.Shell` COM object](https://msdn.microsoft.com/en-us/library/windows/desktop/bb774094(v=vs.85).aspx) aka `Shell.Application` can be used that wraps the `ShellExecute` Win32 API function: * Add a reference to `Microsoft Shell Controls And Automation` type library to VBA project via `Tools->References...`, then ``` Dim a As New Shell32.Shell Call a.ShellExecute("desktop.ini") ``` * Alternatively, without any references: ``` Call CreateObject("Shell.Application").ShellExecute("desktop.ini") ``` Interestingly, here (WinXP), when using a typed variable (that provides autocomplete), `ShellExecute` is missing from the members list (but works nonetheless).
27,734
I'm utilising some of NeHe's spring code, and after getting some pretty weird results I eventually realised the source of my error - my "dt" value in my Update function; the value that everything is multiplied against to speed up/slow down the calculations, hopefully based on frame rate. For example:

```
public void Update(GameTime gt)
{
    float dt = gt.ElapsedGameTime.Milliseconds / 160.0f;

    velocity += (force / mass) * dt;
    position += velocity * dt;
}
```

160.0f seems to work pretty well for the player's update function, but for my spring simulation I need a value of about 3000, or I end up with my springs located at (NaN,NaN) pretty much instantly. Why do bad values for this cause everything to go so crazy? I thought it would just slow down or speed up my simulation but it seems to cause some weird cascading failure.

Edit: Sorry, forgot to link to NeHe's post on this: <http://nehe.gamedev.net/tutorial/introduction_to_physical_simulations/18005/>
2012/04/20
[ "https://gamedev.stackexchange.com/questions/27734", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/8293/" ]
If you are using `gt.ElapsedGameTime.Milliseconds`, then you should be using `gt.ElapsedGameTime.TotalMilliseconds` instead. Not sure if that has some influence here, but it's a potential bug. To increase precision (if that is the problem) you could switch to [ElapsedGameTime.Ticks](http://msdn.microsoft.com/en-us/library/system.timespan.ticks.aspx), which counts 100-ns intervals, use double for intermediate floating-point calculations, and see if it helps.
I believe that your timestep should be calculated as:

```
float dt = gt.ElapsedGameTime.Milliseconds / 1000.0f;
```

since you want dt to be the delta time in seconds (so a dt of 1.0 would mean that 1 second passed). If that doesn't work, then it's likely that the timer you are using to calculate ElapsedGameTime isn't accurate enough (a common problem with millisecond timers on Windows, which has a resolution of 16ms).
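To see why an oversized dt does more than just speed the simulation up, here is a toy sketch (my own, sketched in Python for brevity; not NeHe's actual spring code) of a single spring stepped with the same kind of explicit update as the question's `Update()`. Past a stability threshold that depends on k/m, every step amplifies the state instead of tracking the oscillation, so the values explode toward huge numbers and eventually inf/NaN within a handful of frames, which is exactly the "cascading failure" described above:

```python
# Toy spring x'' = -(k/m) x, stepped like the Update() in the question:
# velocity += accel * dt; position += velocity * dt.
def simulate(dt, steps, k=100.0, m=1.0):
    x, v = 1.0, 0.0  # start stretched by 1 unit, at rest
    for _ in range(steps):
        v += (-k / m) * x * dt   # (force / mass) * dt
        x += v * dt
    return x

# Stable region: the oscillation stays near its initial amplitude.
small = simulate(dt=0.001, steps=10_000)

# Unstable region: each step multiplies the state by a factor with
# magnitude well above 1, so the position explodes instead of oscillating.
large = simulate(dt=0.5, steps=200)
```

With dt near the stability limit (roughly 2/sqrt(k/m), i.e. about 0.2 for the values above) the spring neither settles nor blows up cleanly, which is why tuning a magic divisor like 160 or 3000 only hides the underlying fixed-timestep problem.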
1,735,133
I have an ASP page which will create a record set from an SQL query and create an Excel page from it by setting the `Response.ContentType = "application/vnd.ms-excel"` property. When executing the file, it will show a save dialog for an Excel file (whatever filename I have mentioned in Response.AddHeader) as

```
Response.AddHeader "Content-Disposition", "attachment; filename=" & strFileName
```

This works well locally. But when running from the server (I checked only in IE), it shows me a dialog box to "Open" or "Save" the ASP file itself. If I choose Save, it shows an error message like "Could not download ...". Any idea how to solve this?
2009/11/14
[ "https://Stackoverflow.com/questions/1735133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40521/" ]
Make sure that the Active Server Pages web service extension is marked as Allow in the IIS settings on the server. (These are the terms from Windows 2003, in 2008 they might call it something else...)
Try adding a MIME type on your IIS server for your file type.
63,177,779
I have a model of type `DbQuery` in my context for executing a stored procedure in it.

```
public class DynamicClass
{
    public int ItemId { get; set; }

    [NotMapped]
    public string Title { get; set; }

    [NotMapped]
    public string TitleArabic { get; set; }

    public int? SeasonNumber { get; set; }
}
```

Query:

```
SELECT ItemId, Title, TitleArabic, SeasonNumber FROM dynamicclass
```

```
List<DynamicClass> Result = context.DynamicClass.FromSql("SELECT ItemId,Title,TitleArabic,SeasonNumber From dynamicclass").ToList();
```

The query is working fine but the `NotMapped` fields are not getting values while executing the query. My query may or may not contain `Title` and `TitleArabic`, which is why I have applied the `NotMapped` annotation to these fields. If I remove all the `NotMapped` annotations and the query results don't contain all the columns I have specified in the model, then I get an error

> The required column 'Title' was not present in the results of a 'FromSql' operation

How can I solve this, or is there any better idea?
2020/07/30
[ "https://Stackoverflow.com/questions/63177779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10047959/" ]
Instead of using `.collect`, write `.forEach(listA::add)`. (Technically, you should also replace `Collectors.toList()` in the first example with `Collectors.toCollection(ArrayList::new)`, because `toList()` does not guarantee that it returns a mutable list.)
You can do something like this -

```
List<A> listA = xList.stream().map().collect(Collectors.toList());
yList.stream().map().collect(Collectors.toCollection(() -> listA));
```

EDIT: the `collect` method takes a `Collector`, and internally `Collectors.toCollection` adds the yList elements to the original list as -

```
Collector<T, ?, C> toCollection(Supplier<C> collectionFactory) {
    return new CollectorImpl<>(collectionFactory, Collection<T>::add,
                               (r1, r2) -> { r1.addAll(r2); return r1; },
                               CH_ID);
}
```
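As a self-contained, compilable sketch of the append-into-one-list idea (class and element names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MergeDemo {
    public static void main(String[] args) {
        // Collect the first stream into a list we know is mutable.
        List<String> listA = Stream.of("x1", "x2")
                .map(String::toUpperCase)
                .collect(Collectors.toCollection(ArrayList::new));
        // Append the second stream's mapped elements straight into listA.
        Stream.of("y1", "y2")
                .map(String::toUpperCase)
                .forEach(listA::add);
        System.out.println(listA); // prints [X1, X2, Y1, Y2]
    }
}
```

`Collectors.toCollection(ArrayList::new)` is used for the first collect because `Collectors.toList()` makes no guarantee about the mutability of the list it returns.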
22,703,978
Is there any problem creating a CSS class like this: `[test] { font: 13px; }` and use it in an HTML attribute as this: `<div test></div>` Would the performance in some browsers be affected by using this method?, I've tested it with Mozilla Firefox and Google Chrome and they seem to work with no problems.
2014/03/28
[ "https://Stackoverflow.com/questions/22703978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3376486/" ]
While there is no problem in applying styles this way, and sure it does work in the browsers, you have to understand that this is not a standard way of applying styles. Since you have also asked from a 'practice' perspective, then, yes, this surely is not the right practice. The idea is: HTML is used to define the elements to be shown within the browser window, CSS is used to apply any styling that needs to be applied on these elements and JavaScript is used to perform any 'action' on it. So, from a practice perspective, this surely is bad practice! On another note, why the reluctance to create a class and apply it on the div? After all, this class can be reused as and when required. If you need it only once, then why not create an id selector? HTML: ``` <div class="rightApproach">The right way of applying styles</div> ``` CSS: `.rightApproach { color:Red; }` See this fiddle where you can see your approach as well as the correct way of applying styles, be it class selector or id selector. <http://jsfiddle.net/JkRPt/>
HTML:

```
<div class="test">
```

CSS:

```
.test {
    font: 13px;
}
```

It's good to use classes. Example:

```
<div class="module accordion expand"></div>
/* All these match that */
.module { }
.accordion { }
.expand { }
```
386,890
I have a folder which is within the www folder (`/wamp/root/www`). I need to prevent this folder from being cut, copied, or opened by others. Is there any way to do this?
2012/02/07
[ "https://superuser.com/questions/386890", "https://superuser.com", "https://superuser.com/users/-1/" ]
Yes, you can set the permissions to allow read, write, and execute permissions by specific user or group. The permissions are accessible through Folder Properties > Security. See [this link](http://www.techrepublic.com/article/how-do-i-secure-windows-xp-ntfs-files-and-shares/6152061) for details on the specific procedure. You will want to give yourself and the web server process "Full Control" permissions, and give everyone else no permissions. Note that the web server (XAMPP) most likely runs as a *different user and group* to you. Giving yourself read/write permissions does not automatically let XAMPP read and write the folder. This can be the source of all kinds of entertaining permissions errors.
There is a program named [Prevent](http://www.softpedia.com/progDownload/Prevent-Download-139078.html) that does this. Download it and try it.
8,236,892
I've written some code that contains a main and a number of subclasses that inherit variables from a superclass. E.g.

```
class superclass(object):
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2

class subclass1(superclass):
    def method1(self):
        pass

class subclass2(superclass):
    def method1(self):
        pass
```

The main isn't shown, nor an option factory which is used to choose the subclass to call, but hopefully the info given will be sufficient. I wish to convert those classes to standalone modules that can be imported. It is expected that additional subclasses will be written in the future, so I was wondering if it is possible to save each subclass and the superclass as separate modules, such that the subclasses will still be able to use/have access to the superclass variables and definitions. The reasoning behind this is to simplify the writing of any future subclasses: they can be written as stand-alone modules, and the previous subclasses and the superclass don't have to be touched as part of the development, but they will still be able to use the superclass's variables and definitions. I'm not sure how it would work or if it can be done that way. Would I just save all the classes as superclass.py, subclass1.py and subclass2.py and import them all? Hope that made sense. Thanks.
2011/11/23
[ "https://Stackoverflow.com/questions/8236892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/788462/" ]
Yes, obviously that is possible - that is the beauty of Python!

**Module 1**

```
class base:
    def p(self):
        print "Hello Base"
```

**Module 2**

```
from module1 import base

class subclass(base):
    def pp(self):
        print "Hello child"
```

**Python Shell**

```
from module2 import subclass
ob = subclass()
ob.p()
"Hello Base"
ob.pp()
"Hello child"
```

:)
In superclass.py:

```
class superclass(object):
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2
```

Then in subclass1.py:

```
from superclass import superclass

class subclass1(superclass):
    def method1(self):
        pass
```
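To make the "option factory" mentioned in the question pick up future subclasses without being edited, one option (my own sketch, shown in a single file for brevity; with one class per module it works the same once those modules have been imported) is to ask the superclass for its subclasses at runtime:

```python
class Superclass:
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2

class Subclass1(Superclass):
    option = "one"
    def method1(self):
        return "subclass1 behaviour"

class Subclass2(Superclass):
    option = "two"
    def method1(self):
        return "subclass2 behaviour"

def factory(option, var1, var2):
    """Find the subclass whose `option` tag matches, however many modules
    define subclasses, as long as they have been imported somewhere."""
    for cls in Superclass.__subclasses__():
        if cls.option == option:
            return cls(var1, var2)
    raise ValueError(f"no subclass registered for {option!r}")

obj = factory("two", 1, 2)
print(type(obj).__name__, obj.var1, obj.method1())  # Subclass2 1 subclass2 behaviour
```

A new subclass module then only has to subclass `Superclass` and set an `option` tag; neither the factory nor the existing subclasses need to change.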
25,858,354
I want to source a gist into my bash shell, how do I do this in one line? In other words, I do not want to create an intermediate file. I tried this, but it fails to source the remote file: ``` source <(curl -s -L https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash) ``` Running on Mac OSX 10.9. Thanks in advance.
2014/09/15
[ "https://Stackoverflow.com/questions/25858354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1574942/" ]
Apple ships an ancient version of Bash, Bash 3.2; Bash 4 was released 5 years ago. Here are a few possible ways to work around this:

1. Install [MacPorts](https://www.macports.org/), [Homebrew](http://brew.sh/), or [pkgsrc](https://www.pkgsrc.org/) and install Bash via one of them; or just [build and install it yourself from source](http://www.gnu.org/software/bash/). Remember to add your newly installed bash to `/etc/shells` so you can set it as your shell, then go to "System Preferences > Users and Groups", click the lock to supply your password so you can make changes, then right click (two-finger click/control click) on your user to choose "Advanced Options..." and change your shell there.

2. If you must be compatible with the 7 year old Bash shipped on OS X, you could just save the file and source it from there. Here's an example Bash function to make that easier:

   ```
   function curlsource() {
       f=$(mktemp -t curlsource)
       curl -o "$f" -s -L "$1"
       source "$f"
       rm -f "$f"
   }

   curlsource https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash
   ```

3. If you absolutely must avoid even creating a temporary file, and must run on ancient versions of Bash, the best you can do is read into a string and eval the result. I tried to emulate the effect of `source <(cmd)` by creating a FIFO (named pipe), piping the output of `cmd` into it, and reading it with `source`, but got nothing. It turns out, taking a look at the source for Bash 3.2, `source` simply reads the whole file into a string, and it checks the file size before doing so. A FIFO returns a size of 0 when you `stat` it, so `source` happily allocates a string of length 1 (for the trailing null), reads 0 bytes into it, and returns success. So, since `source` is just reading the whole file into a string and then evaluating that, you can just do the same:

   ```
   eval "$(curl -s -L https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash)"
   ```
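If you just want to convince yourself that the eval fallback behaves like sourcing, you can test the pattern offline by substituting a local command for curl (`fake_remote` below is an invented stand-in, not part of any real tool):

```shell
# Stand-in for `curl -s -L <url>`: just emits some shell code.
fake_remote() {
    echo 'greet() { echo "hello from remotely sourced code"; }'
}

# Same shape as: eval "$(curl -s -L <url>)"
eval "$(fake_remote)"

# The function defined by the "remote" code is now available.
greet
```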
OS X still ships `bash` 3.2 by default; in that version, the `source` command does not seem to work properly with process substitutions, as can be demonstrated with a simple test:

```
$ source <(echo FOO=5)
$ echo $FOO

$
```

The same `source` command does, however, work in `bash` 4.1 or later (I don't have a 4.0 installation to test, and the release notes seem to be silent on the matter.)
23,697
We use both tomato ketchup and curry ketchup as condiments in Belgium. On the curry ketchup label, amongst other ingredients is "curry (1%)". So I tried adding curry powder to regular ketchup to see whether I could end up with curry ketchup, but I think the taste was off. The colour was close though. I know "curry powder" is a spice mix that can differ, but is curry ketchup really just ketchup with curry powder added? Or do they mean a bit of a real curry (the dish)? Or are there other differences? Is it possible to make curry ketchup with regular ketchup?
2012/05/10
[ "https://cooking.stackexchange.com/questions/23697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/4580/" ]
Curry ketchup is made with a ketchup base, but then adds curry, vinegar, a small amount of spices like paprika, and two little-known ingredients: apples and soy sauce. If you make it according to this recipe then you can get close. See the following ingredient list from Heinz:

Water, sugar, tomato paste (17%), vinegar, apples, modified starch, curry (2.2%) (contains mustard and celery), salt, soy sauce (water, soy beans, wheat, salt), spices, thickener (guar kernel flour, xanthan), herb extracts.

P.S. Make sure that the apple juice/sauce and soy sauce are minimal (as they are near the end of the ingredient list, they are the least used), but the apple does make the difference in the sauce, and the small amount of soy brings it back from tasting too much like candy :)
This article seems to suggest that this is the case. <http://www.thekitchn.com/ketchup-with-a-kick-add-curry-87686>
70,627,117
I've got a simple wav header reader I found online a long time ago; I've gotten back round to using it, but it seems to replace around 1200 samples towards the end of the data chunk with a single random repeated number, e.g. -126800. At the end of the sample is expected silence, so the number should be zero. Here is the simple program:

```
void main()
{
    WAV_HEADER* wav = loadWav(".\\audio\\test.wav");

    double sample_count = wav->SubChunk2Size * 8 / wav->BitsPerSample;
    printf("Sample count: %i\n", (int)sample_count);

    vector<int16_t> samples = vector<int16_t>();
    for (int i = 0; i < wav->SubChunk2Size; i++)
    {
        int val = ((wav->data[i] & 0xff) << 8) | (wav->data[i + 1] & 0xff);
        samples.push_back(val);
    }
    printf("done\n");
}
```

And here is the Wav reader:

```
typedef struct
{
    //riff
    uint32_t Chunk_ID;
    uint32_t ChunkSize;
    uint32_t Format;
    //fmt
    uint32_t SubChunk1ID;
    uint32_t SubChunk1Size;
    uint16_t AudioFormat;
    uint16_t NumberOfChanels;
    uint32_t SampleRate;
    uint32_t ByteRate;
    uint16_t BlockAlignment;
    uint16_t BitsPerSample;
    //data
    uint32_t SubChunk2ID;
    uint32_t SubChunk2Size;
    //Everything else is data. We note its offset
    char data[];
} WAV_HEADER;
#pragma pack()

inline WAV_HEADER* loadWav(const char* filePath)
{
    long size;
    WAV_HEADER* header;
    void* buffer;
    FILE* file;
    fopen_s(&file, filePath, "r");
    assert(file);
    fseek(file, 0, SEEK_END);
    size = ftell(file);
    rewind(file);
    std::cout << "Size of file: " << size << std::endl;
    buffer = malloc(sizeof(char) * size);
    fread(buffer, 1, size, file);
    header = (WAV_HEADER*)buffer;
    //Assert that data is in correct memory location
    assert((header->data - (char*)header) == sizeof(WAV_HEADER));
    //Extra assert to make sure that the size of our header is actually 44 bytes
    assert((header->data - (char*)header) == 44);
    fclose(file);
    return header;
}
```

I'm not sure what the problem is; I've confirmed that there is no metadata, nor is there a mismatch between the numbers read from the header of the file and the actual file.
I'm assuming it's a size/offset misalignment on my side, but I cannot see it. Any help welcomed.

Sulkyoptimism
2022/01/07
[ "https://Stackoverflow.com/questions/70627117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13965231/" ]
WAV is just a container for different audio sample formats. You're making assumptions on a wav file that would have been OK on Windows 3.11 :) These don't hold in 2021. Instead of rolling your own Wav file reader, simply use one of the available libraries. I personally have good experiences using [`libsndfile`](http://www.mega-nerd.com/libsndfile/), which has been around roughly forever, is very slim, can deal with all prevalent WAV file formats, and with a lot of other file formats as well, unless you disable that. This looks like a windows program (one notices by the fact you're using very WIN32API style capital struct names – that's a bit oldschool); so, you can download libsndfile's installer from the [github releases](https://github.com/libsndfile/libsndfile/releases) and directly use it in your visual studio (another blind guess).
Apple (macOS and iOS) software often does not create WAVE/RIFF files with just a canonical Microsoft 44-byte header at the beginning. Those WAVE files can instead use a longer header followed by a padding block. So you need to use the full WAVE RIFF format parsing specification instead of just reading from a fixed-size 44-byte struct.
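The chunk walking that both answers describe is language-agnostic; here is a hedged sketch in Python (stdlib `struct` only, with my own helper names) that scans RIFF chunks in order until it finds `data`, instead of assuming a 44-byte header, skipping anything else in between (LIST/INFO metadata, padding blocks, and so on):

```python
import io
import struct

def find_data_chunk(f):
    """Return (fmt_fields, raw_data) by walking RIFF chunks in order,
    rather than assuming the canonical 44-byte header layout."""
    riff, _size, wave = struct.unpack('<4sI4s', f.read(12))
    if riff != b'RIFF' or wave != b'WAVE':
        raise ValueError('not a RIFF/WAVE file')
    fmt = None
    while True:
        header = f.read(8)
        if len(header) < 8:
            raise ValueError('no data chunk found')
        cid, csize = struct.unpack('<4sI', header)
        body = f.read(csize + (csize & 1))  # chunk bodies are 2-byte aligned
        if cid == b'fmt ':
            # audio_format, channels, sample_rate, byte_rate, block_align, bits
            fmt = struct.unpack('<HHIIHH', body[:16])
        elif cid == b'data':
            return fmt, body[:csize]
        # any other chunk id (LIST, fact, ...) is simply skipped

# Build a tiny in-memory WAV with an extra LIST chunk before the data --
# the kind of layout a fixed 44-byte struct would misread.
fmt_body = struct.pack('<HHIIHH', 1, 1, 8000, 16000, 2, 16)
payload = (b'WAVE'
           + b'fmt ' + struct.pack('<I', len(fmt_body)) + fmt_body
           + b'LIST' + struct.pack('<I', 4) + b'INFO'
           + b'data' + struct.pack('<I', 4) + b'\x01\x00\x02\x00')
wav = b'RIFF' + struct.pack('<I', len(payload)) + payload

fmt, data = find_data_chunk(io.BytesIO(wav))
print(fmt, data)  # (1, 1, 8000, 16000, 2, 16) b'\x01\x00\x02\x00'
```

The same loop translates directly to C++: read an 8-byte chunk header, check the 4-byte id, and either parse or `fseek` past the (word-aligned) chunk body.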
3,714,551
I'm wondering about the effectiveness, cost, or resources used when calling a public static const from a class. Let's, hypothetically, say I have a class that has quite a few resources, and calling its constructor uses about 40 KB of memory. Is there any difference between adding static constants to that same class as opposed to creating a small class with just the constants? In the case of an event dispatcher, the class listening to the event would have something like this

`addEventListener(myClass.Holla, onHolla);`

or

`addEventListener(myClassEventNames.Holla, onHolla);`

Is there a difference (significant enough) to warrant using an extra class to store the event names?
2010/09/15
[ "https://Stackoverflow.com/questions/3714551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197546/" ]
Accessing a static member has nothing to do with the constructor, so you won't have any performance issues.

---

If the event in question is a custom event, the convention is to declare the string constants for all events of a particular class of event (subclass of `flash.events.Event`) in that event subclass itself. For example, all mouse event constants are declared in `MouseEvent`, all menu related events are defined in `MenuEvent`, etc. This convention will help you with code completion if you're using the Flex mxmlc compiler. Let's say you have added the following metadata tag on top of a class definition (MyClass):

```
[Event(name="randomEvent", type="com.domain.events.CustomEvent")]
public class MyClass extends EventDispatcher
{
}
```

Now you declare an instance of this class and type in addEventListener:

```
var myClass:MyClass = new MyClass();
myClass.addEventListener(
```

You'll get `CustomEvent.RANDOM_EVENT` in the autocomplete dropdown list. Without the metadata tag, it will just give you the two default items (activate and deactivate). The metadata tag tells the compiler that this class dispatches an event of class `CustomEvent` and type `randomEvent` - and the compiler assumes that the string constant is defined as per convention and gives you `CustomEvent.RANDOM_EVENT` as the option. Auto-completion might still work if you declare the string constant in `SomeOtherClass` and provide the name of that class in the metadata tag - but that would be misleading, as the class in question does not dispatch any event of `SomeOtherClass`.
You should profile this. If you set -compiler.debug=false -compiler.optimize=true and use [SWFInvestigator](http://labs.adobe.com/technologies/swfinvestigator/), you can see the opcodes necessary to read something. This code:

```
var str:String;
str = ThisClass.TEST;
str = TEST;
str = SomeOtherClass.TEST;
str = CONFIGCONSTANT::TEST; // fastest

// str = ThisClass.TEST;
8  getproperty private::TEST //nameIndex = 30

// str = TEST;
12 getlex private::TEST //nameIndex = 30

// str = SomeOtherClass.TEST;
16 getlex pkg::SomeOtherClass //nameIndex = 29
18 getproperty TEST //nameIndex = 36

// str = CONFIGCONSTANT::TEST;
22 pushstring "test"
```

Either way, the speed decrease is minimal. You would have to do tens of millions of these operations to see any noticeable decrease. However, you will save some processing, so you're going green! :)
50,534,961
In my database, I have a list of bands along with a popularity column, which is incremented or decremented when a user, on a webpage, presses a like or dislike button respectively. I want to select bands based on this popularity column. The probability that a band is selected depends on this popularity column, which is an integer value, and not a decimal value like 0.3 or 0.1, which would make sense if one were working with probability, but in my case I don't think that's possible. Example of my table:

```
Bands          probability
Led Zeppelin    79
Megadeth         4
Queen           37
Aerosmith       20
Guns N Roses   103
```

Based on this, Guns N' Roses should have the highest chance of being selected, while Megadeth has the lowest chance, and the other bands each have their own chances of being selected. I'll be selecting 10 bands from a list of 2000.
2018/05/25
[ "https://Stackoverflow.com/questions/50534961", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9415320/" ]
First, compute the [cumulative probability](https://en.wikipedia.org/wiki/Cumulative_distribution_function) for each band (the sort order is arbitrary; you could just as well use some ID): ```sql SELECT Band, CAST((SELECT sum(probability) FROM Bands AS b2 WHERE b2.Band <= Bands.Band ) AS FLOAT) / (SELECT sum(probability) FROM Bands) AS CumProb FROM Bands ORDER BY Band; ``` ```sql Band CumProb --------------- --------------- Aerosmith 0.0823045267489 Guns N Roses 0.5061728395061 Led Zeppelin 0.8312757201646 Megadeth 0.8477366255144 Queen 1.0 ``` (As long as SQLite has not yet window functions, doing the summing in Python would be more efficient. But for 2000 rows, this does not really matter.) Then use a random number between 0 and 1 to look up one of the rows (the first one that is equal or larger): ```sql WITH CPBands(Band, CumProb) AS ( SELECT Band, CAST((SELECT sum(probability) FROM Bands AS b2 WHERE b2.Band <= Bands.Band ) AS FLOAT) / (SELECT sum(probability) FROM Bands) FROM Bands ) SELECT Band FROM CPBands WHERE CumProb >= ? ORDER BY CumProb ASC LIMIT 1; ``` Repeat as often as needed, ignoring duplicates.
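For completeness, the "do the summing in Python" route that the answer mentions can look like this (a sketch against an in-memory SQLite copy of the example table; the table and column names are taken from the question, the helper name is my own):

```python
import random
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Bands (Band TEXT, probability INTEGER)')
conn.executemany('INSERT INTO Bands VALUES (?, ?)', [
    ('Led Zeppelin', 79), ('Megadeth', 4), ('Queen', 37),
    ('Aerosmith', 20), ('Guns N Roses', 103),
])

def pick_band(conn, rng=random):
    """One weighted draw: walk the rows, accumulating probability mass."""
    rows = conn.execute('SELECT Band, probability FROM Bands').fetchall()
    target = rng.random() * sum(p for _, p in rows)
    acc = 0
    for band, p in rows:
        acc += p
        if target < acc:
            return band
    return rows[-1][0]  # guard against floating-point edge cases

rng = random.Random(42)  # seeded so the sketch is reproducible
picks = [pick_band(conn, rng) for _ in range(1000)]
```

Repeating the draw 10 times while skipping (or deleting) already-picked bands gives the 10-distinct-bands selection the question asks for.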
If I understand your question and problem correctly, you want to select the 10 bands that have the highest popularity/probability values, right? In SQL you may be able to do:

```sql
SELECT * FROM table_name ORDER BY popularity DESC LIMIT 10
```

This selects all columns in your table, sorts the rows by popularity in descending order (largest to smallest), and stops after the first 10 entries.
35,395
Keturah was Abraham's wife? > > Genesis 25:1-2 Abraham had taken another wife, whose name was Keturah. She bore him Zimran, Jokshan, Medan, Midian, Ishbak and Shuah. > > > Keturah was Abraham's concubine? > > 1 Chronicles 1:32 The sons born to Keturah, Abraham’s concubine: Zimran, Jokshan, Medan, Midian, Ishbak and Shuah. > > >
2014/12/14
[ "https://christianity.stackexchange.com/questions/35395", "https://christianity.stackexchange.com", "https://christianity.stackexchange.com/users/17592/" ]
The difference might come down to the purpose of each book. Genesis is a literal account, and since Abraham was without a wife when he bound Keturah to himself, she became his wife, performing the duties of a wife. Chronicles, on the other hand, is a historical record and perhaps a legal document for things such as inheritance through genealogies and also the royal lineage. So while this is a technical difference between words that can mean essentially the same thing, when it comes to inheritance the difference becomes quite relevant. This assertion is my own, but as a source backing up what I say about Chronicles containing genealogies, the [gotQuestions summary of Chronicles](https://gotquestions.org/Book-of-1-Chronicles.html) is suitable. Another site that gives perspective is [Bible.org](https://bible.org/seriespage/4-historical-books):

> When producing the Septuagint, the translators divided Chronicles into two sections. At that time it was given the title, "Of Things Omitted,"

Also,

> The books of Chronicles seem like a repeat of Samuel and Kings, but they were written for the returned exiles to remind them that they came from the royal line of David and that they were God's chosen people. The genealogies point out that the Davidic promises had their source in those pledged to Abraham that He would make him the father of a great nation, one through which He would bless the nations.

As well as,

> This book also taught that the past was pregnant with lessons for their present. Apostasy, idolatry, intermarriage with Gentiles, and lack of unity were the reasons for their recent ruin.

> OUTLINE:
>
> First Chronicles naturally divides into four sections: (1) The Genealogies or the Royal Line of David (1:1-9:44); (2) the Rise of David or His Anointing (10:1-12:40), (3) The Reign of David (13:1-29:21), and (4) The Assession of Solomon and the Death of David (29:22-30).
The word for wife in Genesis 25 could be translated as woman, according to Strong's Lexicon: <http://www.blueletterbible.org/lang/lexicon/lexicon.cfm?Strongs=H802&t=KJV>
40,237,154
The following function creates a button, and when I press the button, `nextButtonPressed` is called, but I keep getting an error:

> unrecognized selector sent to instance.

```
func createButton () {
    button.setTitle("Next", for: .normal)
    button.addTarget(self, action:Selector(("nextButtonPressed:")), for: UIControlEvents.touchUpInside)
    button.isHidden = true
}
```

This is the `nextButtonPressed` which is being called:

```
func nextButtonPressed(sender:UIButton!) {
    print("next button was pressed")
}
```
2016/10/25
[ "https://Stackoverflow.com/questions/40237154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7068934/" ]
Try this ``` Console.WriteLine( String.Format("{0:0.0}", d + d2)); ```
Use this:

```
Console.WriteLine("{0:F}+{1:F}={2:F}", d, d1, d + d1);
```

"F" is the fixed-point format specifier. You can also specify the desired number of decimal places by changing the "F" to "F2", "F3", and so on as per your requirement, or leave it as "F" to use the default number of decimal places.
13,613,659
I need to create anonymous inner types that are expensive to build and need to access a final variable. The problem is that I need to create many of them, with the only difference being the final variable that is used (both the object itself and its type). Is it possible to do this in a more reusable manner? **A simple example:**

```
final Object aNeededParameter = "test";
Object expensiveToBuild = new ExpensiveInnerType() {
    public void doSomething() {
        aNeededParameter.toString();
    }
};
```

I need instances of expensiveToBuild for different objects at the same time, for example 1L, new Date(), "another similar usecase". The basic idea is to create proxies for different serialized instances of concrete classes at runtime that deserialize these objects on first usage. I made a working example (link at the bottom), but the proxy creation is very expensive. There is a ProxyFactory that needs a MethodHandler to create a Proxy object via bytecode enhancement. The MethodHandler defines a method

```
invoke(Object self, Method realMethod, Method proxyMethod, Object[] args) throws Throwable
```

In my case this method needs access to a byte[] containing the serialized object the proxy is built for. So I have to create a new MethodHandler and build/compile a new Proxy object for each object I want a proxy for. The invoke method is called automatically before every method call on the original object and simply checks whether that object is initialized, deserializing it if not. After that it invokes the called method on the original object. **If you want to see the concrete usecase look here:** [Lazy deserialization proxy for Java](https://gist.github.com/4515273865853b46fac1 "Lazy deserialization proxy for Java")
2012/11/28
[ "https://Stackoverflow.com/questions/13613659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Just don't make it anonymous. The point of anonymous classes is for when you don't expect to reuse them. Pass the necessary final variable to the class via a constructor. Another option is to allow `doSomething` to take the parameter instead of the constructor, if you want the anonymous class to be instantiated once. You will still need to make it non-anonymous, and it will need to be owned by the parent class, but this allows you to only use one object. Whether this is good design depends on the specifics.
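To make the constructor approach concrete, here is a hypothetical sketch in Python (the class and method names are invented for illustration; the pattern carries over directly to a named Java class):

```python
class ExpensiveType:
    # One named, reusable class: the formerly-final variable
    # becomes a constructor argument.
    def __init__(self, needed_parameter):
        self.needed_parameter = needed_parameter

    def do_something(self):
        return str(self.needed_parameter)

# One class definition now serves every value -- no per-value anonymous subclass.
instances = [ExpensiveType("test"), ExpensiveType(1), ExpensiveType(2.5)]
print([obj.do_something() for obj in instances])
```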
Make the object that the anonymous inner class references an instance of some wrapper class wrapping the object that changes (e.g. one of the instance variables of the wrapper class is of type Object, and you just change that variable as needed).
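A minimal sketch of this wrapper idea, written in Python for brevity (the names are hypothetical; in Java the holder would be a small class with a single mutable `Object` field):

```python
class Holder:
    # Mutable wrapper: only this value is swapped between uses.
    def __init__(self, value):
        self.value = value

class Worker:
    # Built once; reads whatever the holder currently contains.
    def __init__(self, holder):
        self.holder = holder

    def do_something(self):
        return str(self.holder.value)

holder = Holder("test")
worker = Worker(holder)      # expensive construction happens once
print(worker.do_something())
holder.value = 42            # reuse the same worker for a different object
print(worker.do_something())
```

Note that sharing one mutable holder is only safe if the instances are not used concurrently.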
60,389,716
I am using 'react-highcharts' for rendering charts in my project. Simple charts such as bar chart ,pie chart are working fine .But when I am trying to render bubble charts, I am getting typeerror. Here is the implementation: ``` import React from 'react'; import ReactHighcharts from 'react-highcharts'; import Highcharts from 'highcharts/highcharts'; import HC_MORE from 'highcharts/highcharts-more'; const BubbleChart = (props) => { if (typeof Highcharts === 'object') { HC_MORE(Highcharts); } const config = { chart: { type: 'packedbubble', height: '100%', }, title: { text: 'Carbon emissions around the world (2014)', }, tooltip: { useHTML: true, pointFormat: '<b>{point.name}:</b> {point.value}m CO<sub>2</sub>', }, plotOptions: { packedbubble: { minSize: '30%', maxSize: '120%', zMin: 0, zMax: 1000, layoutAlgorithm: { splitSeries: false, gravitationalConstant: 0.02, }, dataLabels: { enabled: true, format: '{point.name}', filter: { property: 'y', operator: '>', value: 250, }, style: { color: 'black', textOutline: 'none', fontWeight: 'normal', }, }, }, }, series: [{ name: 'Europe', data: [{ name: 'Germany', value: 767.1, }, { name: 'Croatia', value: 20.7, }, { name: 'Belgium', value: 97.2, }, { name: 'Czech Republic', value: 111.7, }, { name: 'Netherlands', value: 158.1, }, { name: 'Spain', value: 241.6, }, { name: 'Ukraine', value: 249.1, }, { name: 'Poland', value: 298.1, }, { name: 'France', value: 323.7, }, { name: 'Romania', value: 78.3, }, { name: 'United Kingdom', value: 415.4, }, { name: 'Turkey', value: 353.2, }, { name: 'Italy', value: 337.6, }, { name: 'Greece', value: 71.1, }, { name: 'Austria', value: 69.8, }, { name: 'Belarus', value: 67.7, }, { name: 'Serbia', value: 59.3, }, { name: 'Finland', value: 54.8, }, { name: 'Bulgaria', value: 51.2, }, { name: 'Portugal', value: 48.3, }, { name: 'Norway', value: 44.4, }, { name: 'Sweden', value: 44.3, }, { name: 'Hungary', value: 43.7, }, { name: 'Switzerland', value: 40.2, }, { name: 'Denmark', value: 40, }, { 
name: 'Slovakia', value: 34.7, }, { name: 'Ireland', value: 34.6, }, { name: 'Croatia', value: 20.7, }, { name: 'Estonia', value: 19.4, }, { name: 'Slovenia', value: 16.7, }, { name: 'Lithuania', value: 12.3, }, { name: 'Luxembourg', value: 10.4, }, { name: 'Macedonia', value: 9.5, }, { name: 'Moldova', value: 7.8, }, { name: 'Latvia', value: 7.5, }, { name: 'Cyprus', value: 7.2, }], }, { name: 'Africa', data: [{ name: 'Senegal', value: 8.2, }, { name: 'Cameroon', value: 9.2, }, { name: 'Zimbabwe', value: 13.1, }, { name: 'Ghana', value: 14.1, }, { name: 'Kenya', value: 14.1, }, { name: 'Sudan', value: 17.3, }, { name: 'Tunisia', value: 24.3, }, { name: 'Angola', value: 25, }, { name: 'Libya', value: 50.6, }, { name: 'Ivory Coast', value: 7.3, }, { name: 'Morocco', value: 60.7, }, { name: 'Ethiopia', value: 8.9, }, { name: 'United Republic of Tanzania', value: 9.1, }, { name: 'Nigeria', value: 93.9, }, { name: 'South Africa', value: 392.7, }, { name: 'Egypt', value: 225.1, }, { name: 'Algeria', value: 141.5, }], }, { name: 'Oceania', data: [{ name: 'Australia', value: 409.4, }, { name: 'New Zealand', value: 34.1, }, { name: 'Papua New Guinea', value: 7.1, }], }, { name: 'North America', data: [{ name: 'Costa Rica', value: 7.6, }, { name: 'Honduras', value: 8.4, }, { name: 'Jamaica', value: 8.3, }, { name: 'Panama', value: 10.2, }, { name: 'Guatemala', value: 12, }, { name: 'Dominican Republic', value: 23.4, }, { name: 'Cuba', value: 30.2, }, { name: 'USA', value: 5334.5, }, { name: 'Canada', value: 566, }, { name: 'Mexico', value: 456.3, }], }, { name: 'South America', data: [{ name: 'El Salvador', value: 7.2, }, { name: 'Uruguay', value: 8.1, }, { name: 'Bolivia', value: 17.8, }, { name: 'Trinidad and Tobago', value: 34, }, { name: 'Ecuador', value: 43, }, { name: 'Chile', value: 78.6, }, { name: 'Peru', value: 52, }, { name: 'Colombia', value: 74.1, }, { name: 'Brazil', value: 501.1, }, { name: 'Argentina', value: 199, }, { name: 'Venezuela', value: 195.2, }], }, 
{ name: 'Asia', data: [{ name: 'Nepal', value: 6.5, }, { name: 'Georgia', value: 6.5, }, { name: 'Brunei Darussalam', value: 7.4, }, { name: 'Kyrgyzstan', value: 7.4, }, { name: 'Afghanistan', value: 7.9, }, { name: 'Myanmar', value: 9.1, }, { name: 'Mongolia', value: 14.7, }, { name: 'Sri Lanka', value: 16.6, }, { name: 'Bahrain', value: 20.5, }, { name: 'Yemen', value: 22.6, }, { name: 'Jordan', value: 22.3, }, { name: 'Lebanon', value: 21.1, }, { name: 'Azerbaijan', value: 31.7, }, { name: 'Singapore', value: 47.8, }, { name: 'Hong Kong', value: 49.9, }, { name: 'Syria', value: 52.7, }, { name: 'DPR Korea', value: 59.9, }, { name: 'Israel', value: 64.8, }, { name: 'Turkmenistan', value: 70.6, }, { name: 'Oman', value: 74.3, }, { name: 'Qatar', value: 88.8, }, { name: 'Philippines', value: 96.9, }, { name: 'Kuwait', value: 98.6, }, { name: 'Uzbekistan', value: 122.6, }, { name: 'Iraq', value: 139.9, }, { name: 'Pakistan', value: 158.1, }, { name: 'Vietnam', value: 190.2, }, { name: 'United Arab Emirates', value: 201.1, }, { name: 'Malaysia', value: 227.5, }, { name: 'Kazakhstan', value: 236.2, }, { name: 'Thailand', value: 272, }, { name: 'Taiwan', value: 276.7, }, { name: 'Indonesia', value: 453, }, { name: 'Saudi Arabia', value: 494.8, }, { name: 'Japan', value: 1278.9, }, { name: 'China', value: 10540.8, }, { name: 'India', value: 2341.9, }, { name: 'Russia', value: 1766.4, }, { name: 'Iran', value: 618.2, }, { name: 'Korea', value: 610.1, }], }], }; return ( <ReactHighcharts config={config} /> ); }; BubbleChart.propTypes = { }; export default BubbleChart; ``` When rendered, the above code gives error as ``` Cannot read property 'parts/Globals.js' of undefined ``` Any suggestion on how to debug this issue ?
2020/02/25
[ "https://Stackoverflow.com/questions/60389716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11011219/" ]
This feels more like a `CSS` question: I would style the output with a flex container to house all `property-details` items, and a flex container on each `property-details` as well. The outer container creates a stacked, vertical alignment using `flex-direction: column` and the inner flex items keep both `div`s on the same line using flex's default row direction.

[![enter image description here](https://i.stack.imgur.com/2FdEZ.png)](https://i.stack.imgur.com/2FdEZ.png)

```css
.property-details-container {
  display: inline-flex;
  flex-direction: column;
  align-items: flex-start;
}

.property-details {
  display: inline-flex;
  align-items: center;
  border-bottom: 1px solid;
  margin-bottom: 0.5rem;
}

.property-details .value {
  font-weight: bold;
  padding-left: .5rem;
}
```

```html
<div class="property-details-container">
  <div class="property-details">
    <div class="label">Price</div>
    <div class="value">10,000</div>
  </div>
  <div class="property-details">
    <div class="label">Year Built</div>
    <div class="value">1982</div>
  </div>
</div>
```

[jsFiddle](https://jsfiddle.net/e5q4c30j/3/)
Try this (note that the loop variable should be `$value`, and the extra closing parenthesis is removed):

```
$output .= '<div class="property-details">';
foreach ( (array) $this->property_details as $label => $value ) {
    $output .= sprintf( '<div class="label" style="display: inline-block;">%s</div><div class="value" style="display: inline-block;">%s</div>', $label, $value );
}
$output .= '</div>';
return $output;
```

Though unless you're using some library like Bootstrap, I encourage you to use `span` instead:

```
$output .= '<div class="property-details">';
foreach ( (array) $this->property_details as $label => $value ) {
    $output .= sprintf( '<span class="label">%s</span><span class="value">%s</span>', $label, $value );
}
$output .= '</div>';
return $output;
```

Use `br` tags for multiple lines:

```
$output .= '<div class="property-details">';
foreach ( (array) $this->property_details as $label => $value ) {
    $output .= sprintf( '<span class="label">%s</span><span class="value">%s</span><br>', $label, $value );
}
$output = substr_replace($output, '', strrpos($output, '<br>'), 4);
$output .= '</div>';
return $output;
```
88,511
**Context:** A company BE-Best has its headquarters in the US, and legal entities in EU countries A, B and C. An employee living in country A got a job at BE-Best with a contract in country A. After some time the employee moved to country B for private reasons only.

**Question 1:** Does the employee have to be transferred to the legal entity of his employer in country B?

**Question 2:** Is there any law that forces the relocation of the employee from one legal entity to the other because of his place of living (a permanent establishment)?

**Question 3:** The employer has legal entities in EU countries A, B and C and an open position in country B. The employee lives in country A. Does this fact determine that the employment has to be done in country B (where the open position is) or in country A (because of the employee's place of living), or is the employee in this case free to choose in which country to sign the contract?
2023/01/24
[ "https://law.stackexchange.com/questions/88511", "https://law.stackexchange.com", "https://law.stackexchange.com/users/48722/" ]
Prosecutions for falsely reporting rape are [at least as common as perjury convictions](https://law.stackexchange.com/questions/88440/how-common-actually-are-perjury-proceedings/88471#88471) (and usually don't count as perjury since the initial report is rarely made under oath), and even when charges are brought by prosecutors involving false statements made under oath, prosecutors tend to favor lesser misdemeanor false reporting charges over perjury charges. *See, e.g.*, cases with news reports of such cases in Colorado and Wyoming on [May 24, 2021](https://www.justice.gov/usao-wy/pr/lander-woman-sentenced-making-false-accusations-sexual-assault), [September 11, 2015](https://www.denverpost.com/2015/09/11/denver-woman-arrested-for-making-false-rape-accusation/), [August 29, 2014](https://www.9news.com/article/news/crime/woman-convicted-for-false-rape-report/73-249847150), and [March 18, 2008](https://www.denverpost.com/2008/03/18/woman-accused-of-false-rape-report/). This isn't to say that these cases are terribly common although they tend to generate headlines when they are brought. The State of Colorado commenced [1,801 felony sex offense cases in the 2021 fiscal year](https://www.courts.state.co.us/userfiles/file/Administration/Planning_and_Analysis/Annual_Statistical_Reports/2021/FY2021%20Annual%20Statistical%20Report.pdf), for example, which was not atypical, and false reporting of sex offense cases are brought in Colorado maybe once every year or two. The conviction rate in sex offense cases that are prosecuted isn't 100%, but something on the order of 90%-95% of sex offense cases result in a guilty plea, and well over half of the remaining cases result in convictions at trial. As an order of magnitude estimate, perhaps one in fifty to one in two hundred cases where sex offense charges are pursued, but there is not a conviction, gives rise to false reporting charges against the alleged victim. 
Many acquittals and dismissals of charges that do occur are best characterized as cases where there is a reasonable doubt because jurors believe that it is reasonably possible that there may have been a good faith witness misidentification, or because charges were dismissed because a confession or evidence obtained in a search was unlawfully obtained and had to be suppressed. It would be very rare for a defendant to be acquitted (after a judge in a preliminary hearing found that probable cause was present) because the jury believed that the testimony of a victim was believed to be intentionally false, and there is no way to tell from the verdict itself that the jury reached this conclusion. The problematic aspect of charges of false reporting of sex offenses is the there have been [famous instances of women being convicted for falsely reporting rape](https://en.wikipedia.org/wiki/An_Unbelievable_Story_of_Rape) (see also [here](https://isthmus.com/news/news/national-rape-story-has-madison-parallels/) focusing on a different case), only to subsequently have the allegations for which the victims were punished confirmed to be true with DNA and other evidence. The number of cases where true allegations of sex offenses are made but not pursued because law enforcement finds the allegations to not be credible, almost surely greatly exceeds the number of cases where false reports of sex offenses are made to police, although this ratio varies greatly from one police department to another based upon the institutional culture of the police department in question.
> > what happens if the allegations are found to be false? > > > what happens if the allegations are found to be patently fabricated? > > > how often does the complainant ... face consequences for making the false complaint? > > > In the course of the prosecution of the accused — **never**. That is because when someone is accused of a criminal offence, the allegations are *never* found to be false or "patently fabricated". Instead, they are found to be either: * True beyond reasonable doubt ("guilty" verdict); or * *Not* true beyond reasonable doubt ("not guilty" verdict) The latter is not the same as "false" — pretty much as "not guilty" is not the same as "innocent". It just means that there isn't enough certainty in the allegations to convict the defendant. Acquittals may also happen *before* the case progresses to trial — if the charges get dismissed for outright insufficiency of the evidence for a possible conviction. Again, this doesn't mean that the allegations are false — only that they won't *possibly* prove to be true beyond reasonable doubt. The only typical recourse for the acquitted is to initiate a civil proceeding against the complainant. Then they might be able to prove on the balance of probabilities (preponderance of the evidence in the US) that the accusations were false / patently fabricated / concocted for vindictive purposes and fetch some compensation. Another one might be to initiate a private prosecution for perjury (in the UK, Canada, New Zealand). But this time the assertion that the accusations were false (while claimed to be true from the witness stand) needs to be proved beyond reasonable doubt.
34,443,946
How would I count consecutive characters in Python to see the number of times each unique digit repeats before the next unique digit? At first, I thought I could do something like:

```
word = '1000'
counter = 0
print range(len(word))
for i in range(len(word) - 1):
    while word[i] == word[i + 1]:
        counter += 1
        print counter * "0"
    else:
        counter = 1
        print counter * "1"
```

So that in this manner I could see the number of times each unique digit repeats. But this, of course, falls out of range when `i` reaches the last value. In the example above, I would want Python to tell me that 1 repeats 1 time, and that 0 repeats 3 times. The code above fails, however, because of my `while` statement. How could I do this with just built-in functions?
2015/12/23
[ "https://Stackoverflow.com/questions/34443946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4054573/" ]
If we want to count **consecutive** characters **without looping**, we can make use of `pandas`:

```
In [1]: import pandas as pd

In [2]: sample = 'abbcccddddaaaaffaaa'

In [3]: d = pd.Series(list(sample))

In [4]: [(cat[1], grp.shape[0]) for cat, grp in d.groupby([d.ne(d.shift()).cumsum(), d])]
Out[4]: [('a', 1), ('b', 2), ('c', 3), ('d', 4), ('a', 4), ('f', 2), ('a', 3)]
```

The key is to find **the first elements** that are different from their previous values and then make proper groupings in `pandas`:

```
In [5]: sample = 'abba'

In [6]: d = pd.Series(list(sample))

In [7]: d.ne(d.shift())
Out[7]:
0     True
1     True
2    False
3     True
dtype: bool

In [8]: d.ne(d.shift()).cumsum()
Out[8]:
0    1
1    2
2    2
3    3
dtype: int32
```
Here is my simple solution:

```
def count_chars(s):
    size = len(s)
    count = 1
    op = ''
    for i in range(1, size):
        if s[i] == s[i-1]:
            count += 1
        else:
            op += "{}{}".format(count, s[i-1])
            count = 1
    if size:
        op += "{}{}".format(count, s[size-1])
    return op
```
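Since the question asks for built-in functionality, it may also be worth noting that the standard library's `itertools.groupby` does this run-length grouping directly; a minimal sketch:

```python
from itertools import groupby

def run_lengths(s):
    # groupby yields (character, iterator over that run of characters);
    # the length of the materialized run is the repeat count.
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(run_lengths('1000'))  # -> [('1', 1), ('0', 3)]
```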
33,407,462
My database has two columns, with data like below:

```
StartDate   Enddate
2015-10-01  2015-10-30
2015-10-15  2015-11-15
2015-09-15  2015-10-15
```

If I search with StartDate: 2015-10-16 and EndDate: 2015-10-20, then I want all of the above results. Please help me. My query is as below:

```
Select * from campaign as c
LEFT JOIN campaign_team as t ON c.campaign_id=t.campaign_id
where t.user_id ='6'
  and (campaign_sdate BETWEEN '2015-10-16' AND '2015-10-20'
    or campaign_edate BETWEEN '2015-10-16' AND '2015-10-20'
    or (campaign_sdate <= 2015-10-16 and campaign_edate >= 2015-10-20));
```

Thanks in advance
2015/10/29
[ "https://Stackoverflow.com/questions/33407462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1215387/" ]
If you want to set it dynamically you can use:

```
ActionBarDrawerToggle actionBarDrawerToggle = new ActionBarDrawerToggle(this, drawerLayout, toolbar,
        R.string.open_drawer, R.string.close_drawer);
actionBarDrawerToggle.getDrawerArrowDrawable().setColor(getResources().getColor(R.color.colorAccent));
```
Just use this code: ``` navigationView.setItemIconTintList(null); ```
19,827,572
I am currently working on a simple php website The problem is , the images in my whole web site(happens in all php files) randomly corrupt and show the error **Resource interpreted as Image but transferred with MIME type text/html**, however, if I try to refresh the page several times. The image can be loaded again and the error is gone. I have checked all img path and the image is exist. Also, I checked there is no `img src=""` in my file. Is it due to server setting? I check .htaccess file and it is blank. How to fix the problem ? Thanks **Chrome web developer:** ``` Request URL:http://goodbyedear.com.hk/images/index_48.jpg Request Method:GET Status Code:200 OK Request Headersview source Accept:image/webp,*/*;q=0.8 Accept-Encoding:gzip,deflate,sdch Accept-Language:zh-TW,zh;q=0.8,en-US;q=0.6,en;q=0.4 Cache-Control:max-age=0 Connection:keep-alive Cookie:PHPSESSID=ee5297bd4973576b6a318cd9a33c4151; aaaaaaa=96b0422aaaaaaaa_96b0422a Host:goodbyedear.com.hk If-Modified-Since:Mon, 21 Oct 2013 17:59:24 GMT Referer:http://goodbyedear.com.hk/index.php User-Agent:Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36 Response Headersview source Cache-Control:no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Connection:Close Content-Length:144 Expires:Sat, 6 May 1995 12:00:00 GMT P3P:CP=NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM Pragma:no-cache ``` **My code (for reference):** ``` <?php session_start(); require_once('db_connect.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>首頁</title> <link rel="stylesheet" type="text/css" href="css/style.css"> <script type="text/javascript" src="js/jquery.latest.js"></script> </head> <body> <div id="container"> <?php require_once ('header.php'); ?> <div id="center"> <img id="slider" src="images/index_17.jpg" 
/> <div id="subGroup"> <div class="sub"> <img class="subLeft" src="images/index_18.jpg" /> <img class="subTopRight" src="images/index_19.jpg" /> <div class="subBottomRight">作為全港第一間成立的寵物火化殯儀公司,其心路歷程盡在此。<a class="viewAll" href="intro.php">View All</a></div> </div> <div class="sub" > <img class="subLeft" src="images/index_21.jpg" /> <img class="subTopRight" src="images/index_22.jpg" /> <div class="subBottomRight">讓您了解詳細的服務流程, 令您更安心... <a class="viewAll" href="service.php">View All</a></div> </div> <div class="sub"> <img class="subLeft" src="images/index_28.jpg" /> <img class="subTopRight" src="images/index_29.jpg" /> <div class="subBottomRight">我們了解每隻寵物在主人心目中都是獨一無二,所以提供各種不同的紀念品,讓客人選擇製作獨一無二的紀念品,讓您的寵兒以另一型式留在主人身邊。<a class="viewAll" href="souvenir.php">View All</a></div> </div> <div class="sub"> <img class="subLeft" src="images/index_30.jpg" /> <img class="subTopRight" src="images/index_31.jpg" /> <div class="subBottomRight">感謝您對我們的任何意見,歡迎留言給我們!<a class="viewAll" href="board.php">View All</a></div> </div> </div> <div id="latestNews"> <img id="latestTitle" src="images/index_23.jpg" /> <img id="latestLeft" src="images/index_25.gif" /> <div id="latestContent"> <?php $sql = "SELECT * FROM pet_news LIMIT 6"; $result = $dbh->query($sql); if ($result->rowCount() == 0) { echo "<p>沒有最新消息</p>"; } else { foreach ($result->fetchAll() as $key => $row) { ?> <div class="newsItem"> <div class="newsBoxTitle"><?php echo $row["title"];?></div> <div class="newsBoxDate"><?php echo $row["date"];?></div> <div class="newsBoxContent"><?php echo mb_substr($row["content"],0,30,"UTF-8")."......";?></div> <a class="newsLink" href="news.php?page=1#news<?php echo ($key + 1);?>">詳情</a> </div> <?php } } $dbh = null; ?> </div> <img id="latestBottom" src="images/index_36.gif" /> </div> </div> <?php require_once ('footer.php'); ?> </div> </body> </html> ```
2013/11/07
[ "https://Stackoverflow.com/questions/19827572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/782104/" ]
The problem you are facing is caused (most probably) by this line of your httpd.conf:

```
#LoadModule mime_module modules/mod_mime.so
```

You must uncomment it so that it looks like this (restart Apache after that):

```
LoadModule mime_module modules/mod_mime.so
```

This module is responsible for providing MIME types (Content-Type headers) for your HTTP responses. Without it (the Content-Type header), your browser must perform so-called type sniffing, which is roughly described [here](http://msdn.microsoft.com/en-us/library/ms775147%28v=vs.85%29.aspx) for IE (but may look alike for other browsers). Another reason (if you have this module working) is that you lack a **mime.types** file or it does not contain MIME types for the files that you are trying to serve. See below - on the left is how you are serving your images, and on the right is how it should look:

![enter image description here](https://i.stack.imgur.com/uBSnE.png)

Left - mod_mime commented out
Right - mod_mime uncommented
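To check which case you are in, look at whether the response actually carries a Content-Type header at all. A small, hypothetical Python helper that scans a captured header block (the sample below mimics the headers shown in the question, which lack Content-Type):

```python
def has_content_type(raw_headers):
    # Without a Content-Type header, the browser falls back to type sniffing.
    return any(line.lower().startswith("content-type:")
               for line in raw_headers.splitlines())

sample = """Cache-Control:no-store, no-cache, must-revalidate
Connection:Close
Content-Length:144
Pragma:no-cache"""
print(has_content_type(sample))  # -> False, so mod_mime is likely not active
```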
The image is not "corrupt". Rather, you get an HTML file that tries to reload the image itself; and this kind of thing is not supported by all browsers, since a JS redirect is not a standard HTTP redirect. This is the anomalous content:

```
HTTP/1.0 200 OK
Expires: Sat, 6 May 1995 12:00:00 GMT
P3P: CP=NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Length: 144
Connection: Close

<html><body><script>document.cookie='jjjjjjj=8eb87ff1jjjjjjj_8eb87ff1; path=/';window.location.href=window.location.href;</script></body></html>
```

What could be causing this? Well, of course you are **not** getting the image; the request does not "land" on the image, **it is being intercepted** - check the settings for `mod_rewrite`. The system is apparently trying to set a cookie and redirecting, but it is doing so in JavaScript. You ought to do this server-side in PHP and only send Cookie headers and a 302 redirect:

```
<?php
    setcookie('jjjjjjj', '8eb87ff1jjjjjjj_8eb87ff1');
    header("Location: {$_SERVER['REQUEST_URI']}");
    exit();
?>
```

I also got, for the same URL, a message saying that **the image is not there**. Apparently there is more than one server answering requests. When the request hits a "good" server you get the image; otherwise you get an HTML "please retry" page or an outright error.

```
HTTP/1.1 406 Not Acceptable
Date: Sat, 16 Nov 2013 22:46:48 GMT
Server: Apache
Content-Length: 389
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>406 Not Acceptable</title>
</head><body>
<h1>Not Acceptable</h1>
<p>An appropriate representation of the requested resource /images/index_48.jpg could not be found on this server.</p>
<p>Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.</p>
</body></html>
```
53,074,786
Most of us may be aware of normal distribution curves; however, for those who are new to front-loaded and back-loaded normal distributions, I would like to provide the background and then proceed to state my problem.

---

**Front-Loaded Distribution**: As demonstrated below, it has a rapid start. E.g., in a project where more resources are assumed to be consumed early in the project, cost/hours are distributed aggressively at the start of the project.

[![Front-Loaded Distribution / S Curve](https://i.stack.imgur.com/2yAEU.png)](https://i.stack.imgur.com/2yAEU.png)

---

**Back-Loaded Distribution**: Contrary to the front-loaded distribution, it starts out with a lower slope and becomes increasingly steep towards the end of the project. E.g., when most resources are assumed to be consumed late in the project.

[![Rear Load Distribution S Curve](https://i.stack.imgur.com/0PkLr.png)](https://i.stack.imgur.com/0PkLr.png)

In the above charts, the **green line is the S-curve**, which represents the cumulative distribution (utilization of resources over the proposed time), and the blue columns represent the isolated distribution of resources (cost/hours) in each period.

---

For reference, I am providing the bell curve / standard normal distribution (when Mean = Median) chart (below) and the associated formula to begin with.

[![Normal Distribution S Curve](https://i.stack.imgur.com/UaP3J.png)](https://i.stack.imgur.com/UaP3J.png)

---

**Problem Statement**: I was able to generate the normal distribution curve (see below with formulae); however, I am unable to find a solution for the front-loaded or back-loaded curves.
*How do I bring the skewness to the right (front-loaded / positively skewed distribution, which means the mean is greater than the median) and to the left (back-loaded / negatively skewed distribution, which means the mean is less than the median) in a normal distribution?* [![Gaussian Bell Curve with Excel Formula](https://i.stack.imgur.com/fesb9.png)](https://i.stack.imgur.com/fesb9.png) **Formula Explained**: *Cell B8* denotes an arbitrarily chosen standard deviation. It affects the kurtosis of the normal distribution. In the above screenshot, I am choosing the range of the normal distribution to be from -3SD to 3SD. *Cells B9 to B18* denote the even distribution of the Z-score using the formula: ``` =B8-((2*$B$8)/Period) ``` *Cells C9 to C18* denote the normal distribution on the basis of the Z-score and the amount using the formula: ``` =(NORMSDIST(B9)-NORMSDIST(B8))*Amount/(1-2*NORMSDIST($B$8)) ``` --- **Update:** Following one of the links in the comments, the closest I got is the situation below. The issue is highlighted with a yellow pattern: due to the usage of the volatile Rand() function, the charts are not as smooth as they should be. Since my formula above does not create a zigzag pattern, I am sure we can have a skewed normal distribution that is smooth too! [![ZigZag Columns Issue in Normal Distribution](https://i.stack.imgur.com/WV2XM.png)](https://i.stack.imgur.com/WV2XM.png) Note: 1. I am using Excel 2016, so I welcome any newly introduced formula that can solve my problem. Also, I am not hesitant to use UDFs. 2. The numbers of the front-loaded and back-loaded distributions are notional. They could vary. I am only interested in the shape of the resulting chart. Kindly help!
2018/10/31
[ "https://Stackoverflow.com/questions/53074786", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3468139/" ]
If you want to make sure the bins always have a value in them, you can use the following approach, which uses normal distributions and simply changes the mean and the standard deviation to get the curve that you want. Changing the mean moves the peak to the left or right. Changing the standard deviation makes the quantities more uniform or more variable. I've used 0-1000 as my default range in the example below, but it should be easy to modify the formula to use any range you want. NOTE: in order to fulfill your requirement that all bins must be non-zero, you need to manually adjust the numbers until you get a curve that suits. Yellow cells are for data entry, green cells are a count (so if you add bins, they would need to be numbered according to the sequence). [![normal and skewed distribution examples](https://i.stack.imgur.com/mEIlE.png)](https://i.stack.imgur.com/mEIlE.png) Formula in cell B7 (copied down to cell B16): `=NORMDIST($A7*1000/MAX($A$6:$A$17),$B$3,$B$4,TRUE)-NORMDIST($A6*1000/MAX($A$6:$A$17),$B$3,$B$4,TRUE)` Formula in cell C7 (copied down to cell C16): `=IF(A7=MAX($A$6:$A$17),$C$5-SUM(C$6:C6),ROUND(B7/SUM($B$7:$B$17)*$C$5,0))` Adding new bins is simple enough and is still based on a 0-1000 range, so you don't need to change any numbers other than adding rows and copying down the formulae: [![skewed distribution with more bins](https://i.stack.imgur.com/JFNNb.png)](https://i.stack.imgur.com/JFNNb.png) The above example also shows how a narrow standard deviation and a high mean combine to make the starting bins have very little quantity. But there is still a value (as long as the count is big enough). You may want to pre-define the different skewness selections if this is going to be used by other people (make column B dependent on a lookup, for example), but hopefully this is extensible enough for your needs.
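The CDF-difference trick in this answer can also be sketched outside Excel. Below is a minimal Python version using only the standard library; function and parameter names are my own, the 0-1000 range mirrors the answer's setup, and the mean/sd values are illustrative:

```python
import math

def norm_cdf(x, mean, sd):
    # Normal CDF via the error function, equivalent to Excel's NORMDIST(..., TRUE).
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def loaded_bins(n_bins, total, mean, sd, lo=0.0, hi=1000.0):
    """Split `total` units over n_bins using CDF differences, mirroring the
    NORMDIST(...) - NORMDIST(...) column in the answer. A mean below the
    midpoint of [lo, hi] gives a front-loaded curve, above it a back-loaded
    one; a larger sd flattens the curve."""
    edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
    weights = [norm_cdf(edges[i + 1], mean, sd) - norm_cdf(edges[i], mean, sd)
               for i in range(n_bins)]
    scale = total / sum(weights)       # rescale so the bins sum to `total`
    return [w * scale for w in weights]

front = loaded_bins(10, 100.0, mean=300.0, sd=250.0)  # front-loaded
back = loaded_bins(10, 100.0, mean=700.0, sd=250.0)   # back-loaded
```

Because every bin is a difference of a strictly increasing CDF, every bin is strictly positive, which satisfies the "no empty bins" requirement without any random sampling (and hence without the zigzag pattern the question mentions).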
If you are open to a Python answer then I can give you the code to get the Python pandas library to generate the random observations from a skewed normal distribution and then bin (bucket) them for you. The following is a Python script which captures the use case; it is also registered as a COM server and so is creatable from VBA. ``` import numpy as np import pandas as pd from scipy.stats import skewnorm class PythonSkewedNormal(object): _reg_clsid_ = "{1583241D-27EA-4A01-ACFB-4905810F6B98}" _reg_progid_= 'SciPyInVBA.PythonSkewedNormal' _public_methods_ = ['GeneratePopulation','BinnedSkewedNormal'] def GeneratePopulation(self,a, sz): # https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html np.random.seed(10); #https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.stats.skewnorm.html return skewnorm.rvs(a, size=sz).tolist(); def BinnedSkewedNormal(self,a, sz, bins): # https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html np.random.seed(10); #https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.stats.skewnorm.html pop = skewnorm.rvs(a, size=sz); #.tolist(); bins2 = np.array(bins) bins3 = pd.cut(pop,bins2) table = pd.value_counts(bins3, sort=False) table.index = table.index.astype(str) return table.reset_index().values.tolist(); if __name__=='__main__': print ("Registering COM server...") import win32com.server.register win32com.server.register.UseCommandLine(PythonSkewedNormal) ``` And the VBA client code ``` Option Explicit Sub TestPythonSkewedNormal() Dim skewedNormal As Object Set skewedNormal = CreateObject("SciPyInVBA.PythonSkewedNormal") Dim lSize As Long lSize = 100 Dim shtData As Excel.Worksheet Set shtData = ThisWorkbook.Worksheets.Item("Sheet3") '<--- change sheet to your circumstances shtData.Cells.Clear Dim vBins vBins = Array(-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5) 'Stop Dim vBinnedData vBinnedData = skewedNormal.BinnedSkewedNormal(-5, lSize, vBins) Dim rngData As Excel.Range Set rngData = shtData.Cells(2, 1).Resize(UBound(vBins) - LBound(vBins), 2) rngData.Value2 = vBinnedData 'Stop End Sub ``` Sample output ``` (-5, -4] 0 (-4, -3] 0 (-3, -2] 4 (-2, -1] 32 (-1, 0] 57 (0, 1] 7 (1, 2] 0 (2, 3] 0 (3, 4] 0 (4, 5] 0 ``` Original code deposited on [my blog](https://exceldevelopmentplatform.blogspot.com/2018/11/python-revisited-grouping-data-from.html)
9,030,304
I have an application with several activities. A couple of them are just simple menus: just a linear layout and a couple of buttons. I've never seen the errors below during development and debugging on my mobile, but crash reports and complaints from users show that sometimes my app just force closes and irritates people. Examples of the errors are below. Any ideas where to dig? Sorry, I cannot show the sources right now; I will add them in several hours. The layouts were created in the Eclipse Android plugin. No extra stuff. Number 1 ======== ``` java.lang.RuntimeException: Unable to start activity ComponentInfo{com.reality.weapons.ak47/com.reality.weapons.ak47.MultiMenu}: android.view.InflateException: Binary XML file line #2: Error inflating class java.lang.reflect.Constructor at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2297) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2313) at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3307) at android.app.ActivityThread.access$2100(ActivityThread.java:115) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1725) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:123) at android.app.ActivityThread.main(ActivityThread.java:3977) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:521) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540) at dalvik.system.NativeStart.main(Native Method) Caused by: android.view.InflateException: Binary XML file line #2: Error inflating class java.lang.reflect.Constructor at android.view.LayoutInflater.createView(LayoutInflater.java:512) at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:562) at 
android.view.LayoutInflater.inflate(LayoutInflater.java:385) at android.view.LayoutInflater.inflate(LayoutInflater.java:320) at android.view.LayoutInflater.inflate(LayoutInflater.java:276) at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:313) at android.app.Activity.setContentView(Activity.java:1683) at com.reality.weapons.ak47.MultiMenu.onCreate(MultiMenu.java:75) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2260) ``` ============================ number 2 ======== ``` java.lang.RuntimeException: Unable to start activity ComponentInfo{com.reality.weapons.ak47/com.reality.weapons.ak47.MultiMenu}: android.content.res.Resources$NotFoundException: Resource ID #0x7f030005 at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2753) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2769) at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3905) at android.app.ActivityThread.access$2600(ActivityThread.java:129) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2121) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:143) at android.app.ActivityThread.main(ActivityThread.java:4717) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:521) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) at dalvik.system.NativeStart.main(Native Method) Caused by: android.content.res.Resources$NotFoundException: Resource ID #0x7f030005 at android.content.res.Resources.getValue(Resources.java:901) at android.content.res.Resources.loadXmlResourceParser(Resources.java:1897) at android.content.res.Resources.getLayout(Resources.java:740) at android.view.LayoutInflater.inflate(LayoutInflater.java:318) 
at android.view.LayoutInflater.inflate(LayoutInflater.java:276) at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:226) at android.app.Activity.setContentView(Activity.java:1677) at com.reality.weapons.ak47.MultiMenu.onCreate(MultiMenu.java:75) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2717) ``` ============================
2012/01/27
[ "https://Stackoverflow.com/questions/9030304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/929298/" ]
First of all I want to repeat that I don't recommend combining local sorting with server-side paging. I find that the user can misinterpret the result of such sorting. Nevertheless, if your customer agrees with the restrictions that the combination of local sorting and server-side paging has, and if you *really* need to implement it, I can suggest the following solution: ```js onPaging: function() { $(this).setGridParam({datatype: 'json'}).triggerHandler("reloadGrid"); }, loadComplete: function (data) { var $this = $(this); if ($this.jqGrid('getGridParam', 'datatype') === 'json') { // because we use the repeatitems: false option and no // jsonmap in the colModel, setting the data parameter // is very easy. We can set the data parameter to data.rows: $this.jqGrid('setGridParam', { datatype: 'local', data: data.rows, pageServer: data.page, recordsServer: data.records, lastpageServer: data.total }); // because we changed the value of the data parameter // we need to update the internal _index parameter: this.refreshIndex(); if ($this.jqGrid('getGridParam', 'sortname') !== '') { // we need to reload the grid only if we use the sortname parameter, // but the server returned unsorted data $this.triggerHandler('reloadGrid'); } } else { $this.jqGrid('setGridParam', { page: $this.jqGrid('getGridParam', 'pageServer'), records: $this.jqGrid('getGridParam', 'recordsServer'), lastpage: $this.jqGrid('getGridParam', 'lastpageServer') }); this.updatepager(false, true); } } ``` If you don't use `repeatitems: false`, the code which fills the `data` parameter of jqGrid will be a little longer, but it will work.
The above solution works fine except in the case where we are at the last page of the grid. Say I have 3 rows displayed on the last page although the page can accommodate 5 rows. Now if I try to do a client-side sort, the last page will be filled with 2 additional rows and all 5 rows will be sorted. I would say the last fetched records are probably still stored in a buffer, so this occurs. As a fix for this, on pagination, clear the grid before making the grid datatype "json" again, like `clickOnPagination = function() { $(this).jqGrid("clearGridData"); $(this).setGridParam({datatype: 'json'}).triggerHandler("reloadGrid"); }`, and in the source code comment out the lines `$t.p.records = 0;$t.p.page=1;$t.p.lastpage=0;` in the `clearGridData` function so that the next pagination will work properly.
2,540,689
I have a branch in a badly structured svn repo that needs to be stripped out and moved to another svn repository. (I'm trying to clean it up some). If I do an `svn log` and not *stop on copy/rename* I can see all 3427 commits that I care about. Is there some way to dump the revisions out, short of writing some major scripts? I would follow the advice in [this question](https://stackoverflow.com/questions/417726/how-to-move-a-single-folder-from-one-subversion-repository-to-another-repository) but this branch has been moved all over the place and I would like to preserve the moves as well.
2010/03/29
[ "https://Stackoverflow.com/questions/2540689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/160208/" ]
I guess this might be similar to what @ZacThompson (and @Pekka) mean: I think `svndumpfilter` is your friend. From your question I think you have the idea what it is meant to do but struggle with the copying/moving of the branch all over the place? An answer to that can be found in the before mentioned [SVN Documentation](http://svnbook.red-bean.com/nightly/en/svn.reposadmin.maint.html#svn.reposadmin.maint.migrate), I believe: > > Also, copied paths can give you some > trouble. Subversion supports copy > operations in the repository, where a > new path is created by copying some > already existing path. It is possible > that at some point in the lifetime of > your repository, you might have copied > a file or directory from some location > that svndumpfilter is excluding, to a > location that it is including. To make > the dump data self-sufficient, > svndumpfilter needs to still show the > addition of the new path—including the > contents of any files created by the > copy—and not represent that addition > as a copy from a source that won't > exist in your filtered dump data > stream. But because the Subversion > repository dump format shows only what > was changed in each revision, the > contents of the copy source might not > be readily available. If you suspect > that you have any copies of this sort > in your repository, you might want to > rethink your set of included/excluded > paths, perhaps including the paths > that served as sources of your > troublesome copy operations, too. > > > Meaning: make `svndumpfilter` include **all** paths the branch ever lived at. Or am I missing something? Another possibility might be the `svndumpfilter2` mentioned by @compie in the thread you linked although I believe it is not even necessary (and I don't know either of @compie or `svndumpfilter2`).
You will want to use some combination of: 1. svnadmin dump 2. svndumpfilter 3. svnadmin load If you want to do the whole branch, you may not even need svndumpfilter. But if you do: <http://svnbook.red-bean.com/nightly/en/svn.reposadmin.maint.html#svn.reposadmin.maint.filtering>
71,601,494
How do I convert this? ``` Sort sort = new Sort(Sort.Direction.DESC, MTS_DATE_CREATED_STRING); private Sort(Sort.Direction direction, List<String> properties) { } ```
2022/03/24
[ "https://Stackoverflow.com/questions/71601494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18567766/" ]
XQuery 1.0 defined an option called "static type checking": if this option is in force, you need to write queries in such a way that the compiler can tell in advance ("statically") that the result will not be a type error. The operand of "order by" needs to be a singleton, and with static type checking in force, the compiler needs to be able to verify that it will be a singleton, which is why the `[1]` has been added. It would have been better to write `exactly-one($x/year)` because this doesn't only keep the compiler happy, it also demands a run-time check that `$x/year` is actually a singleton. Very few XQuery vendors chose to implement static type checking, for very good reasons in my view: it makes queries harder to write, and it actually encourages you to write things like this example that do LESS checking than a system without this "feature". In fact, as far as I know the only mainstream (non-academic) implementation that does static type checking is Microsoft's SQL Server. Static type checking should not be confused with optimistic type checking where the compiler tells you about things that are bound to fail, but defers checking until run-time for things that might or might not be correct. Actually the above is a bit of a guess. It's also possible that some `<plane>` elements have more than one child called `<year>`, and that you want to sort on the first of these. That would justify the `[1]` even on products that don't do static type checking.
The `[1]` is a predicate that is selecting the first item in the sequence. It is equivalent to the expression `[position() = 1]`. A predicate in XPath acts kind of like a `WHERE` clause in SQL. The filter is applied, and anything that returns `true` is selected, the things that return `false` are not. When you don't apply the predicate, you get the error. That error is saying that the `order by` expects a single item, or nothing (an empty sequence). At least one of the `plane` has multiple `year`, so the predicate ensures that only the first one is used for the `order by` expression.
224,673
I was reading [this *The New York Times* (NYT) article](https://www.nytimes.com/2020/01/22/technology/jeff-bezos-hack-iphone.html?action=click&module=Top%20Stories&pgtype=Homepage) about the hack of Jeff Bezos's phone. The article states: > > The May 2018 message that contained the innocuous-seeming video file, with a tiny 14-byte chunk of malicious code, came out of the blue > > > What malicious code could possibly be contained in 14 bytes? That doesn't seem nearly enough space to contain the logic outlined by the NYT article. The article states that shortly after the message was received, the phone began sending large amounts of data.
2020/01/23
[ "https://security.stackexchange.com/questions/224673", "https://security.stackexchange.com", "https://security.stackexchange.com/users/225577/" ]
It can absolutely fit. For example, this [CTF challenge solution](https://github.com/yuawn/CTF/tree/master/2017/HITCON_2017_quals/Re_Easy_to_say) attacks a binary that executes ~12 bytes. The payload sent is: ``` 0: 54 push rsp 1: 5e pop rsi 0000000000000002 <y>: 2: 31 e2 xor edx,esp 4: 0f 05 syscall 6: eb fa jmp 2 <y> ``` *(assumes all registers are zeroed out)* This is only 8 bytes for a complete pwn which gives you code execution, which then leads to a remote shell. Of course, this is highly targeted, but it serves as an example.
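As a quick sanity check in Python, the payload in the disassembly above really is only 8 bytes; the byte values below are copied straight from the listing:

```python
# Machine-code bytes from the disassembly in the answer:
payload = bytes([
    0x54,        # push rsp
    0x5e,        # pop  rsi
    0x31, 0xe2,  # xor  edx, esp
    0x0f, 0x05,  # syscall
    0xeb, 0xfa,  # jmp  back to the xor (label y)
])
```

Eight bytes of machine code, well under the 14-byte budget mentioned in the question.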
Assuming, as Peter Cordes said, that the 14 bytes within the video file trigger some memory vulnerability, those 14 bytes are machine code! That is a very important fact, as many people answering here are thinking about source code, characters and so on. All of that takes ~8 bits / 1 byte per character, so with 14 characters one cannot do very much. But those 14 bytes are for sure binary! So, consider an ARM CPU, where one instruction is 32 bits wide, including its arguments, and an IP address is 32 bits. There's plenty of space to put that IP address into memory and perform a syscall.
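The answer's arithmetic can be sketched in Python. The 4-byte instruction width matches a classic fixed-width 32-bit ISA such as ARM, and the IP address below is a placeholder from the documentation range (203.0.113.0/24), not anything from the actual attack:

```python
import struct

INSTR_WIDTH = 4    # bytes per instruction on a fixed-width 32-bit ISA (e.g. classic ARM)
PAYLOAD_SIZE = 14  # the size reported for the malicious chunk

# How many whole fixed-width instructions fit in the payload:
full_instructions = PAYLOAD_SIZE // INSTR_WIDTH

# An IPv4 address is exactly 4 of those 14 bytes:
ip_bytes = struct.pack("!BBBB", 203, 0, 113, 7)
```

Three whole instructions plus a 4-byte address fit with bytes to spare, which is the point being made: 14 bytes of machine code is not as small as 14 characters of source code.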
28,833,003
I am new to OpenCart, and I am confused about the terminology of extension versus module in OpenCart. Could anyone explain it to me?
2015/03/03
[ "https://Stackoverflow.com/questions/28833003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1241464/" ]
Extensions are add-on programs that provide extra functionality to your website. Modules are boxes of information on your site that help the customer make their purchase. These do not provide extra functionality to your website but are intended to display information. Some modules are included by default, such as Account, Affiliate, Bestsellers, Featured, Specials, etc. I hope this helps, look at this link for some more [extensions](http://www.magikcommerce.com/opencart/extensions/).
The difference between modules and extensions in OpenCart is as follows. Modules: a module is a more lightweight and flexible extension used for page rendering. Modules are used for small pieces of the page that are generally less complex and can be shown across different sections; sometimes a module is attached to a particular section, for example a latest-news module in the centre column. Extensions: components, languages, modules, plugins and templates are all collectively known as extensions.
23,343,197
[My rsync script for creating daily incremental backups](https://stackoverflow.com/questions/23129932/bash-multiple-if-statements-to-run-different-variants-of-script-on-different-daysync) is working pretty well now. But I have noticed after a week or so that I am left with hundreds of sleeping rsync processes. Does this have to do with my script? Is there a command I can add to the script to stop this?
2014/04/28
[ "https://Stackoverflow.com/questions/23343197", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3544713/" ]
1. Right click → copy → copy element [![enter image description here](https://i.stack.imgur.com/czwOu.gif)](https://i.stack.imgur.com/czwOu.gif)
1. Select the `<html>` tag in the Elements panel. 2. Press CTRL-C. 3. Note that the only thing left out of the copy is the `<!DOCTYPE html>` before the `<html>`.
20,335,669
How do I change the background-color of a td with jQuery? I need to change the background color of column one in rows two, three and four of my table: ``` <table class="myTable"> <thead> <tr> <th>Col 1</th> <th>Col 2</th> <th>Col 3</th> <th>Col 4</th> </tr> </thead> <tbody> <tr> <td>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> <tr> <td>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> <tr> <td>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> <tr> <td>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> <tr> <td>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> </tbody> </table> ``` demo : [jsfiddle](http://jsfiddle.net/Ug34K/) How to do it with jQuery?
2013/12/02
[ "https://Stackoverflow.com/questions/20335669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3002842/" ]
I found: `$('table.myTable tr:gt(1):lt(3)').find('td:first').css('background-color', 'red');` demo : [jsfiddle](http://jsfiddle.net/33eWz/)
I would add a class to the tds you are concerned with. A good name for the class would be something that describes why you are highlighting. Then you could just add a statement in css that will handle that. Or if you want to change the color via jquery you could do the following. ``` //Assuming highlighter is your added class name and red is the color you want to change to. $('td.highlighter').css('background-color', 'red'); ``` Your html for tr would look like this ``` <tr> <td class='highlighter'>td1</td> <td>td2</td> <td>td3</td> <td>td4</td> </tr> ```
11,981,936
> > **Possible Duplicate:** > > [Is there a LINQ way to go from a list of key/value pairs to a dictionary?](https://stackoverflow.com/questions/9203928/is-there-a-linq-way-to-go-from-a-list-of-key-value-pairs-to-a-dictionary) > > > Assume that I have a `List<string>` as below: ``` var input = new List<string>() { "key1", "value1", "key2", "value2", "key3", "value3", "key4", "value4" }; ``` Based on this list, I would like to convert it to `List<KeyValuePair<string, string>>`; the reason is to allow duplicate keys, which is why I don't use a Dictionary. ``` var output = new List<KeyValuePair<string, string>>() { new KeyValuePair<string, string>("key1", "value1"), new KeyValuePair<string, string>("key2", "value2"), new KeyValuePair<string, string>("key3", "value3"), new KeyValuePair<string, string>("key4", "value4"), }; ``` I can achieve this by using the code below: ``` var keys = new List<string>(); var values = new List<string>(); for (int index = 0; index < input.Count; index++) { if (index % 2 == 0) keys.Add(input[index]); else values.Add(input[index]); } var result = keys.Zip(values, (key, value) => new KeyValuePair<string, string>(key, value)); ``` But I feel that this is not the best way, using a `for` loop; is there another way, using built-in LINQ, to achieve it?
2012/08/16
[ "https://Stackoverflow.com/questions/11981936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783681/" ]
I wouldn't suggest using LINQ here, as there is really no reason to and you don't gain anything by it; instead, simply use a normal `for` loop, increasing your counting variable by two in each iteration: ``` var result = new List<KeyValuePair<string, string>>(); for (int index = 1; index < input.Count; index += 2) { result.Add(new KeyValuePair<string, string>(input[index - 1], input[index])); } ``` Note that I'm starting my index with `1` so I don't run into an exception for accessing an invalid index in case the number of items in `input` is odd, i.e. if `input` ends with a "half pair" of values.
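The same pairing idea can be sketched in Python for comparison; the slice-and-zip below does what the C# index loop does, including silently dropping a trailing half pair (the function name is my own):

```python
def to_pairs(items):
    """Pair up consecutive elements: [k1, v1, k2, v2, ...] -> [(k1, v1), ...].
    zip() stops at the shorter slice, so a trailing unpaired element is
    dropped, matching the start-at-index-1 loop in the C# answer."""
    return list(zip(items[0::2], items[1::2]))

pairs = to_pairs(["key1", "value1", "key2", "value2", "key3", "value3"])
```

The even-index slice supplies the keys and the odd-index slice the values, so no explicit counter arithmetic is needed.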
You can use LINQ Aggregate() function (the code is longer than a simple loop): ``` var result = input.Aggregate(new List<List<string>>(), (acc, s) => { if (acc.Count == 0 || acc[acc.Count - 1].Count == 2) acc.Add(new List<string>(2) { s }); else acc[acc.Count - 1].Add(s); return acc; }) .Select(x => new KeyValuePair<string, string>(x[0], x[1])) .ToList(); ``` N.B. this works even if your initial input becomes a generic `IEnumerable<string>` and not specifically a `List<string>`
48,184
It seems strange to me that chemical reactions should be exothermic, meaning the molecules move faster after the reaction. Normally, in physics, when two moving objects collide and stick together, the resultant velocity of the combined object is *less* than that of the constituent objects, due to conservation of momentum. Yet when some molecules combine they move faster. An extreme example of this is fast reactions like explosives, in which the chemicals combine and move off at high speeds, sometimes greater than the speed of sound (a "detonation"). Why would this be? Is there a theory that can positively predict this behavior? In other words, if I were to supply you the names of two chemical species of which you had no prior knowledge, could you predict ahead of time whether their reaction would be exothermic or not, and by how much? Or can this only be determined experimentally?
2016/03/19
[ "https://chemistry.stackexchange.com/questions/48184", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/9663/" ]
You are thinking of it in a pseudo-classical way. In those terms, imagine bonds as some kind of springs with some potential energy. During the reaction there can be rupture or formation of springs, changing the potential energy. As a consequence, due to energy conservation, the kinetic energy must change. A pictorial argument that may help compare your viewpoint about collisions (which is not always true) with the chemical reaction case goes like this: If we think of two balls bonded by an ordinary spring, in the "relaxed" position (which occurs somewhere between the maximum and minimum elongation) we have the minimum potential energy. This is also true for a diatomic molecule (look at some graph of the potential energy curve for a diatomic molecule). Imagine that two of these spring-bonded pairs collide in the region of the (ideal, massless, unperturbed by the collision, etc.) spring, and then cut the springs out: [![enter image description here](https://i.stack.imgur.com/G74Ke.png)](https://i.stack.imgur.com/G74Ke.png) If you make this collision, the speed of each ball will remain the same because the spring is massless. If molecules were like these balls, no change in temperature would occur, because the kinetic energy does not change. The key point is that molecules do not behave like that: the spring is never cut, so potential energy is always present. So if they were molecules, in the products the separation between the balls is larger and so the potential energy is greater and the kinetic energy turns out to be smaller. Now think of the backward reaction: that reaction would be an exothermic one. I am not sure if that is the kind of answer you need. If not, or if something is unclear, leave a comment. **Edit** I forgot to answer those last questions. Yes, the temperature change can be calculated. Normally what is calculated is the enthalpy change, but they can be related through the heat capacity. 
Of course there are limitations in precision and in the size of the molecule, but in essence the answer is *yes*: it can be calculated following quantum-mechanical principles.
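The question's collision intuition and this answer's energy bookkeeping can be checked numerically. Below is a small Python sketch of a perfectly inelastic collision (the masses and speeds are arbitrary illustrative values): momentum is conserved while kinetic energy drops, and the "missing" kinetic energy goes into internal (potential/thermal) energy; in an exothermic reaction the ledger runs the other way, with bond potential energy released as kinetic energy.

```python
def inelastic_ke(m1, v1, m2, v2):
    """Kinetic energy before and after a perfectly inelastic collision.
    Momentum is conserved; kinetic energy is not."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)      # conservation of momentum
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * (m1 + m2) * v_final**2
    return ke_before, ke_after

# Equal masses, one moving at 10 units, one at rest:
before, after = inelastic_ke(1.0, 10.0, 1.0, 0.0)
```

Here half of the kinetic energy disappears into internal energy, which is exactly the "springs store potential energy" point the answer makes: chemistry adds an internal-energy term that pure rigid-ball collision reasoning leaves out.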
Rather than try to answer your points directly, I will try to explain why reactions can be exothermic or endothermic. Chemical reactions depend on the different amounts of chemical potential (energy) molecules have bound up in their chemical bonds. Suppose, as a thought experiment, that we could take the atoms that will form a molecule and separate them by some huge distance, and we will call the energy they have zero (we can assume that they are stationary). Next we bring them together to form the molecule; in doing so the electrons in the atoms interact and release energy as chemical bonds are formed. Finally the molecule will have some total negative energy $\Delta H\_1$; this is called the heat of formation, and the subscript indicates it is our first molecule. (In chemistry, negative energy denotes stability.) We can repeat this process with all sorts of atoms to form any types of molecules we want; each time we obtain a heat of formation, $\Delta H\_2$, $\Delta H\_3$, $\Delta H\_4$, etc., but these heats are not all the same: some are large, some small. The reason is that not all types of chemical bonds have the same energy (C-O is different from C-C, etc.) and not all types of molecules have the same number of bonds; thus not all types of molecules have the same heat of formation. The important point is that different types of molecules have different amounts of internal energy bound up in their bonds. This is released when a reaction occurs and redistributed in making the chemical bonds in the product molecules. Sometimes more energy is available from the reactants than is needed to form the products, and so the excess is released as heat into the solution or gas, depending on how the reaction is carried out. Sometimes it's the other way round. Finally, the rates of reactions (how quickly they react) depend on the size of the potential energy barrier that exists between the starting (reactant) and ending (product) molecules. The barrier is called the activation energy. 
(If there were no barrier, all molecules would react immediately on contact, and we, or any other living thing, would not exist.)
58,302,531
I'm wondering how to use an f-string while also using the `r` prefix to get a raw string literal. I currently have it as below, but I would like the option of allowing any name to replace `Alex`. I was thinking of adding an f-string and then replacing `Alex` with curly braces with `username` inside, but I couldn't get this to work together with the `r`. ``` username = input('Enter name') download_folder = r'C:\Users\Alex\Downloads' ```
2019/10/09
[ "https://Stackoverflow.com/questions/58302531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10242990/" ]
You can combine the `f` for an f-string with the `r` for a raw string: ``` user = 'Alex' dirToSee = fr'C:\Users\{user}\Downloads' print (dirToSee) # prints C:\Users\Alex\Downloads ``` The `r` only disables backslash escape sequence processing, not f-string processing. Quoting the [docs](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals): > > The 'f' may be combined with 'r', but not with 'b' or 'u', therefore raw formatted strings are possible, but formatted bytes literals are not. > > > ... > > > Unless an 'r' or 'R' prefix is present, escape sequences in string and bytes literals are interpreted... > > >
Alternatively, you could use the `str.format()` method. ``` name = input("What is your name? ") print(r"C:\Users\{name}\Downloads".format(name=name)) ``` This will format the raw string by inserting the `name` value.
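Both approaches can be sanity-checked with a quick snippet (the user name here is hypothetical, standing in for the `input()` call):

```python
user = "Alex"  # hypothetical value standing in for input('Enter name')

# f + r combined: r disables backslash escapes, f still does {}-substitution
path_fr = fr'C:\Users\{user}\Downloads'

# plain raw string plus str.format(): the literal {name} is filled in afterwards
path_fmt = r'C:\Users\{name}\Downloads'.format(name=user)

print(path_fr)   # C:\Users\Alex\Downloads
print(path_fmt)  # C:\Users\Alex\Downloads
```

Both give the same result; the `fr` prefix is just the more direct spelling.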
506,096
This is what I've come up with as a method on a class inherited by many of my other classes. The idea is that it allows the simple comparison between properties of Objects of the same Type. Now, this does work - but in the interest of improving the quality of my code I thought I'd throw it out for scrutiny. How can it be better/more efficient/etc.? ``` /// <summary> /// Compare property values (as strings) /// </summary> /// <param name="obj"></param> /// <returns></returns> public bool PropertiesEqual(object comparisonObject) { Type sourceType = this.GetType(); Type destinationType = comparisonObject.GetType(); if (sourceType == destinationType) { PropertyInfo[] sourceProperties = sourceType.GetProperties(); foreach (PropertyInfo pi in sourceProperties) { if ((sourceType.GetProperty(pi.Name).GetValue(this, null) == null && destinationType.GetProperty(pi.Name).GetValue(comparisonObject, null) == null)) { // if both are null, don't try to compare (throws exception) } else if (!(sourceType.GetProperty(pi.Name).GetValue(this, null).ToString() == destinationType.GetProperty(pi.Name).GetValue(comparisonObject, null).ToString())) { // only need one property to be different to fail Equals. return false; } } } else { throw new ArgumentException("Comparison object must be of the same type.","comparisonObject"); } return true; } ```
2009/02/03
[ "https://Stackoverflow.com/questions/506096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40934/" ]
The first thing I would suggest would be to split up the actual comparison so that it's a bit more readable (I've also taken out the ToString() - is that needed?):

```
else
{
    object originalProperty = sourceType.GetProperty(pi.Name).GetValue(this, null);
    object comparisonProperty = destinationType.GetProperty(pi.Name).GetValue(comparisonObject, null);

    // use Equals, not ==: the boxed values would always compare unequal by reference
    if (!Equals(originalProperty, comparisonProperty))
        return false;
```

The next suggestion would be to minimise the use of reflection as much as possible - it's really slow. I mean, *really* slow. If you are going to do this, I would suggest caching the property references:

```
// elsewhere
Dictionary<Type, PropertyInfo[]> lookupDictionary = new Dictionary<Type, PropertyInfo[]>();

PropertyInfo[] objectProperties = null;
if (lookupDictionary.ContainsKey(sourceType))
{
    objectProperties = lookupDictionary[sourceType];
}
else
{
    // build the array of PropertyInfo references and add it to the cache
    objectProperties = sourceType.GetProperties();
    lookupDictionary[sourceType] = objectProperties;
}

// loop through and compare against the instances
```

However, I have to say that I agree with the other posters. This smells lazy and inefficient. You should be implementing IComparable instead :-).
Update on Liviu's answer above - CompareObjects.DifferencesString has been deprecated. This works well in a unit test: ``` CompareLogic compareLogic = new CompareLogic(); ComparisonResult result = compareLogic.Compare(object1, object2); Assert.IsTrue(result.AreEqual); ```
62,640
I'm starting to explore ArcGIS Online as my first introduction to web mapping (perhaps not the best choice of cloud services, but I thought I'd give it a shot). I was intrigued by their discussion of "story maps" and wanted to create one that overlaid raster imagery that I had on my system. However, I can't figure out how to add raster data. It seems to only accept .shp, .csv, etc. I can add rasters if they're converted to layers, but can't seem to actually overlay them on the map. In looking through the help, it seems the option is to use a map service; the more I dig into this, the more complicated it seems. Is it not possible to simply overlay raster data as one would do in ArcMap Desktop?
2013/06/04
[ "https://gis.stackexchange.com/questions/62640", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/16842/" ]
If you do not have access to a server or do not have the skills to figure that out, you can also convert to KML, upload to a site like Dropbox and using the "Add layer from web" tool, copy in the direct link. This is not as efficient as publishing a service (which I recommend) but definitely a way around if you do not have online credits, etc.
The most practical (i.e. quick) solution I found was serving it out of `MapBox`. `MapBox` offers free web mapping services, including serving rasters: it converts rasters to tile layers that can be consumed by ArcGIS Online. There are a few kinks that take some grinding out at times, though.

If you go to MapBox.com and open an account, you can upload your rasters and build a web map. Once you publish it, there is a button to "share & use". Find the ArcGIS Online export link, then go to your

```
arconline map -> add layer from web -> As Tile Layer -> paste link.
```

The limitation is that since `MapBox` serves out a pixel layer, you have to symbolize and do your styling work there. You will have to make a "style" for each raster layer you need as well. I also had some trouble serving my raster without a `MapBox` basemap coming along with it. You'll have to play with it, but it works.
33,667,310
I have a `DropDownList` containing a range of ten years (from 2010 to 2020) created like such: ``` var YearList = new List<int>(Enumerable.Range(DateTime.Now.Year - 5, ((DateTime.Now.Year + 3) - 2008) + 1)); ViewBag.YearList = YearList; ``` But here's my problem: I wish to **have a default value selected and keep this value when I submit** my information, and I wish to use the type `List<SelectListItem>` for it, since it's more practical. Once in this type, I will simply do as such to keep a selected value: ``` foreach (SelectListItem item in list) if (item.Value == str) item.Selected = true; ``` How may I convert my `List<int>` into a `List<SelectListItem>`?
2015/11/12
[ "https://Stackoverflow.com/questions/33667310", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try converting it using Linq: ``` List<SelectListItem> item = YearList.ConvertAll(a => { return new SelectListItem() { Text = a.ToString(), Value = a.ToString(), Selected = false }; }); ``` Then `item` will be the list of `SelectListItem` objects.
You can use LINQ to convert the items from a `List<int>` into a `List<SelectListItem>`, like so: ``` var items = list.Select(year => new SelectListItem { Text = year.ToString(), Value = year.ToString() }).ToList(); ``` Note the trailing `ToList()`: `Select` on its own returns a lazy `IEnumerable<SelectListItem>`, not a `List<SelectListItem>`.
33,784,225
I'm putting together a quick CodeMirror input for JSON and to make sure the user doesn't mess things up, I'm using the json-lint addon. My issue is that immediately on render the empty CodeMirror input is displaying a lint error. I understand that an empty input doesn't constitute valid JSON, but I'd rather it only runs when input has been made. I'm using the `addon/lint/json-lint.js` add-on, which in turn is using the [`jsonlint`](https://github.com/zaach/jsonlint) package. **JS** ``` var jsonConfig = { mode: 'application/json', lineNumbers: true, lint: true, autoCloseBrackets: true, gutters: ['CodeMirror-lint-markers'] }; $('.json textarea').each(function (pos, el) { CodeMirror.fromTextArea(el, jsonConfig); }); ``` **Example empty input results:** [![enter image description here](https://i.stack.imgur.com/5aVWH.png)](https://i.stack.imgur.com/5aVWH.png) **Lint message:** [![enter image description here](https://i.stack.imgur.com/yNXFV.png)](https://i.stack.imgur.com/yNXFV.png) I can't see anything in the docs to disable linting for empty inputs. Am I missing something simple?
2015/11/18
[ "https://Stackoverflow.com/questions/33784225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4640496/" ]
Hope this helps: ``` var editor_json = CodeMirror.fromTextArea(document.getElementById("textAreaId"), { lineNumbers: true, mode: "application/json", gutters: ["CodeMirror-lint-markers"], lint: true, theme: '<yourThemeName>' }); //on load, check if the textarea is empty and disable validation editor_json.setOption("lint", editor_json.getValue().trim() ); //once changes happen, if the textarea gets filled up again, re-enable the validation editor_json.on("change", function(cm, change) { editor_json.setOption("lint", editor_json.getValue().trim() ); }); //sometimes you need to refresh the CodeMirror instance to fix alignment problems or some other glitch setTimeout(function(){ editor_json.refresh(); },0); ``` So in short: ``` editor_json.setOption("lint", editor_json.getValue().trim() ); ``` which translates to `<yourCodeMirrorInstance>.setOption('<yourOption>', <true/false or any required value>)`. Because `''.trim()` is falsy, linting is disabled while the editor is empty and re-enabled as soon as there is content. Hope this helps someone.
You could set a default value for the textarea, like this: ``` {\n\t\n} ``` so that the editor starts out with something like this: ``` { } ```
34,972,760
How can I test an angular promise with ajax in it? The code is to call a parent via ajax and then call the rest of its children via ajax too. Code, ``` app.controller('MyController', ['$scope', '$http', '$timeout', '$q', function($scope, $http, $timeout, $q) { $scope.myParen = function(url) { var deferred = $q.defer(); setTimeout(function() { $http({ method: 'GET', url: url }) .success(function(data, status, headers, config) { deferred.resolve([data]); }) .error(function(data, status, headers, config) { deferred.reject(data); }); }, 1000); return deferred.promise; } $scope.submit = function() { $scope.commentCollection = ''; var promise = $scope.myParen('https://example.com/parents/1'); promise.then(function(success) { var list = success; $http({ method: 'GET', url: 'https://example.com/parents/1/children' }) .success(function(data, status, headers, config) { $scope.commentCollection = list.concat(data); }) .error(function(data, status, headers, config) { $scope.error = data; }); }, function(error) { $scope.error = error; }); }; }]); ``` Test, ``` describe('MyController Test', function() { beforeEach(module('RepoApp')); var controller, $scope, $http, $httpBackend, $q; var deferred; beforeEach(inject(function ($rootScope, $controller, $http, $httpBackend, $q) { $scope = $rootScope.$new(); deferred = $q.defer(); // Create the controller. controller = $controller; controller("MyController", {$scope, $http, $httpBackend, $q}); })); it('should demonstrate using when (200 status)', inject(function($rootScope, $http, $httpBackend, $q) { var $scope = {}; /* Code Under Test */ $scope.myParen = function(url) { ... } $scope.submit = function() { ... 
}; /* End */ $scope.submit(); deferred.promise.then(function (value) { $httpBackend.whenGET('https://example.com/parents/1/children', undefined, {}) .respond(function(){ return [200,{foo: 'bar'}]}); expect(value).toBe(4); }); deferred.resolve(4); $rootScope.$apply(); expect($scope.commentCollection).toEqual({foo: 'bar'}); })); }); ``` Failed result, ``` Expected '' to equal { foo: 'bar' }. ``` Any ideas? **Edit:** ``` .... deferred.resolve(4); $rootScope.$apply(); $timeout.flush(); expect($scope.commentCollection).toEqual({foo: 'bar'}); ```
2016/01/24
[ "https://Stackoverflow.com/questions/34972760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413225/" ]
1) switch `setTimeout` to `$timeout` in the controller 2) move all the `$httpBackend` stubbing into the `beforeEach` function 3) use the `.flush()` functions ``` describe('MyController Test', function() { beforeEach(module('app')); var controller, $scope, $http, httpBackend, $q; var deferred; beforeEach(inject(function ($rootScope, $controller, $http, $httpBackend, $q) { $httpBackend .whenGET('https://example.com/parents/1', undefined, {}) .respond(function(){ return [200, {parents: []}]}); $httpBackend .whenGET('https://example.com/parents/1/children', undefined, {}) .respond(function(){ return [200, {foo: 'bar'}]}); $scope = $rootScope.$new(); deferred = $q.defer(); // Create the controller. controller = $controller; controller("MyController", {$scope: $scope, $http: $http, $httpBackend: $httpBackend, $q: $q}); })); it('should demonstrate using when (200 status)', inject(function($httpBackend, $timeout) { // var $scope = {}; // don't write this, because it would overwrite the scope defined in the beforeEach function $scope.submit(); $timeout.flush(); // wait for the timeout in $scope.myParen $httpBackend.flush(); // wait until the backend returns the parents and children expect($scope.commentCollection[0]).toEqual({parents: []}); expect($scope.commentCollection[1]).toEqual({foo: 'bar'}); })); }); ```
Re-write your `$scope.myParen` function to use `$timeout` instead of `setTimeout`. ``` $scope.myParen = function(url) { var promise = $http({ method: 'GET', url: url }) .then (function(response) { var data = response.data; return $timeout(function(){return data;}, 1000); }) .catch(function(response) { var data = response.data; return ($timeout(angular.noop, 1000) ).then (function () { throw data; }); }); return promise; } ``` Then in tests you can use `$timeout.flush()` to synchronously flush the queue of deferred functions. > > Deprecation Notice > ------------------ > > > The $http legacy promise methods `success` and `error` have been deprecated. Use the standard `then` method instead. > > > -- [AngularJS $http Service API Reference -- deprecation notice](https://docs.angularjs.org/api/ng/service/$http#deprecation-notice) > > >
183,986
I recall hearing that the way Microsoft had to implement the JSON serialization for their AJAX framework was different than most other libraries out there. Is this true? And, if so, how is it different?
2008/10/08
[ "https://Stackoverflow.com/questions/183986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5619/" ]
There are a couple of differences, both of which are related to security. The first is that their webservices, by default, will only accept HTTP POSTs. This is done to prevent JSON hijacking. You can disable this, and read more about it [here](http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx). The second difference pertains to the returned data. If you create your webservice in code-behind by decorating a static method with a [WebMethod] attribute, the returned JSON is wrapped in an object named 'd'. This is to prevent [JSON array constructor attacks](http://www.google.com/search?q=JSON+array+object+constructor+attack). And yes, while these represent the Right Thing To Do (tm), they can make it difficult to interact with third party libraries.
As @Chris said, there isn't anything special other than how Dates are handled. The JSON specification does not have a native way in which dates are to be serialised. If you do not have any dates being returned in your JSON string, you can use whatever *deserializer* you wish. The MS AJAX one is nice as it does have a way to verify that the JSON string is valid first.
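To illustrate the `'d'` wrapper mentioned above, here is a hedged sketch (the payload itself is made up) of unwrapping the object an ASP.NET AJAX web service returns:

```python
import json

# ASP.NET AJAX wraps the real result in an object named "d"
# to defeat JSON array constructor attacks, so clients must unwrap it.
raw = '{"d": ["first", "second"]}'  # hypothetical service response

payload = json.loads(raw)["d"]  # unwrap before using the data
print(payload)  # ['first', 'second']
```

Most third-party client libraries will not do this unwrapping for you, which is the interoperability friction the answer refers to.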
41,305,862
I have an external css file, linked into the html. If I apply a style to, for example, a div class it works, but when I apply a style to html, body, etc. it doesn't work. Why is that? This is the CSS code: ``` html { background: url(img/bc.jpg) no-repeat center center fixed; background-size: cover; -webkit-background-size: cover; -moz-background-size: cover; -o-background-size: cover; } ``` This code only works when I put a style tag into the html file. And this is the HTML: ``` <!doctype html> <html> <head> <link rel="stylesheet" type="text/css" href="css/style.css"> </head> <body> </body> </html> ```
2016/12/23
[ "https://Stackoverflow.com/questions/41305862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7049656/" ]
[![enter image description here](https://i.stack.imgur.com/ejqiF.png)](https://i.stack.imgur.com/ejqiF.png) Check the image: if you have an html file and a `css` folder which holds the css file, as in the image, then you can add the css to the html with a link tag like this: `<link rel="stylesheet" href="css/base.css">`. Now you can write whatever css you want in the `css/base.css` file. For images you have to use the url property with a relative path, like `url("../image/image.png")`, which says: go one level up, find the `image` directory, and use `image.png` from there.
You should fix the image url, for example consider the following directories: * /img/bc.jpg * /css/style.css * index.html , the `style.css` code is ``` html { background: url(../img/bc.jpg) no-repeat center center fixed; background-size: cover; -webkit-background-size: cover; -moz-background-size: cover; -o-background-size: cover; } ``` add the following code to the header of the page e.g. `index.html` ``` <link rel="stylesheet" href="css/style.css"> ```
64,592,115
I'm coding an arithmetic game where the user is asked a series of addition questions. I want to however randomly assign an operator for each question so that the question could be either: ``` Question 1: num1 + num2 = ``` or ``` Question 2: num1 - num2 = ``` I have been using the Math.random() method to randomise num1 and num2 the last thing I am struggling on is randomly generating '+' and '-'. Is it something to do with the ASCII values of these two characters and then I can randomly pick between them? Thanks for the help! As a side note, I want to ask the user to 'press enter' to start the game, but i'm not sure how to do it. Currently I've got the user to enter 'y' to start. Any ideas? Thanks so much. ``` //prompt user to start the game System.out.println(); System.out.print("Press y to Start the Game: "); String start_program = keyboard.next(); if (start_program.equals("y")) { ``` heres my code so far: ``` public static void main(String[] args) { //mental arithmetic game System.out.println("You will be presented with 8 addition questions."); System.out.println("After the first question, your answer to the previous question will be used\nas the first number in the next addition question."); //set up input scanner Scanner keyboard = new Scanner(System.in); //declare constant variables final int min_range = 1, max_range = 10, Max_Number_of_Questions = 8; long start_time, end_time; //generate 2 random numbers int random_number1 = (int) ((Math.random() * max_range) + min_range); int random_number2 = (int) ((Math.random() * max_range) + min_range); //declare variables int question_number = 1; int guess; //prompt user to start the game System.out.println(); System.out.print("Press y to Start the Game: "); String start_program = keyboard.next(); if (start_program.equals("y")) { //start timer start_time = System.currentTimeMillis(); //ask the question System.out.print("Question " + question_number + ": What is " + random_number1 + " + " + random_number2 + "? 
"); //take in user input guess = keyboard.nextInt(); while (guess == (random_number1 + random_number2) && question_number < Max_Number_of_Questions) { System.out.println("Correct"); ++question_number; //generate a new question //generate 2 random numbers random_number1 = guess; random_number2 = (int) ((Math.random() * max_range) + min_range); //ask the question again System.out.print("Question " + question_number + ": What is " + random_number1 + " + " + random_number2 + "? "); //take in user input guess = keyboard.nextInt(); } end_time = System.currentTimeMillis(); int time_taken = (int) (end_time - start_time); if (guess != (random_number1 + random_number2)) System.out.println("Wrong"); else { System.out.println(); System.out.println("Well Done! You answered all questions successfully in " + (time_taken / 1000) + " seconds."); } } } ```
2020/10/29
[ "https://Stackoverflow.com/questions/64592115", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13744772/" ]
I think for the random `-` and `+` characters you could use a boolean, like so: ``` Random rd = new Random(); // creating Random object if(rd.nextBoolean()) { //Do something } else { //Do Something else } ``` For the enter key: I think this is a game that is played in the console of the IDE? In that case you can use a Scanner to track when enter is being pressed. This will help you, I think: [Java using scanner enter key pressed](https://stackoverflow.com/questions/18281543/java-using-scanner-enter-key-pressed)
You could try something like this. ``` Random r = new Random(); int[] signs = { 1, -1 }; char[] charSigns = { '+', '-' }; int a = r.nextInt(20); int b = r.nextInt(20); int sign = r.nextInt(2); System.out.printf("%s %s %s = ?%n", a, charSigns[sign], b); // then later. System.out.println("The answer is " + (a + signs[sign] * b)); ```
72,258,859
Solution: --------- The issue was that our request went to Http... instead of http**s**. This means that the Go library removes the auth header as it treats it as a redirect. There is an Update to this question at the bottom ------------------------------------------------- So I am kinda new to Go and am currently trying to fix a bug in a small service of ours. Its basic function is to take in a request that needs (MSAL) authorization, get that authorization, and then forward the request to the specific endpoint. It does it like this: ``` http.HandleFunc("/", ProxyRequest) func ProxyRequest(wr http.ResponseWriter, req *http.Request) { auth, err := tokenHelper.GetAuthorization() client := &http.Client{} req.URL = newUrl req.Host = newUrl.Host req.Header.Add("Authorization", *auth) resp, err := client.Do(req) } ``` This works fine, as in, the header if printed looks like this: `Authorization: Bearer eyJ0eXAiOiJ...` and the rest of the headers and body are still present. If I copy the request into Postman everything works fine and the debugging output on my server looks like this: [![Cropped due to privacy but I think you get the point](https://i.stack.imgur.com/PQc3c.png)](https://i.stack.imgur.com/PQc3c.png) but if I send the request using Go the server's debugger output looks like this: [![Once again cropped due to privacy concerns](https://i.stack.imgur.com/bM1c3.png)](https://i.stack.imgur.com/bM1c3.png) The only difference being that with Postman it includes the authorization header while with Go it is not being sent. This of course results in the request being rejected. I have searched the Internet and this site quite a bit but as of yet I can not seem to find the mistake in our code.
Any help would be appreciated Edit: ----- So I tried both things mentioned in the answers, but I am just going to post the complete-new-request one, as that is what @izca recommended: ``` newReq, err := http.NewRequest(req.Method, newUrl.String(), req.Body) newReq.Header.Add("Authorization", *auth) log.Println(newReq.Header) resp, err := client.Do(newReq) ``` This results in exactly the same behaviour: the request goes to the correct endpoint and is completely bare, without the Authorization header. [![enter image description here](https://i.stack.imgur.com/CpjqC.png)](https://i.stack.imgur.com/CpjqC.png) Log Output: ``` map[Authorization:[Bearer eyJ0eXAiOiJKV1QiL....]] ```
2022/05/16
[ "https://Stackoverflow.com/questions/72258859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10811865/" ]
``` df.sort_values(by=['col2', 'col1']) ``` gave the desired result.
If you need to sort each column independently, use [`Series.sort_values`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sort_values.html) in [`DataFrame.apply`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html): ``` c = ['col1','col2'] df[c] = df[c].apply(lambda x: x.sort_values().to_numpy()) #alternative df[c] = df[c].apply(lambda x: x.sort_values().tolist()) print (df) i col1 col2 0 0 00:00:00,1 10 1 1 00:00:01,5 20 2 2 00:00:10,0 30 3 3 00:01:00,1 40 4 5 01:00:00,0 50 ```
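A runnable sketch of the per-column approach, with made-up data (this assumes pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({'col1': ['b', 'c', 'a'], 'col2': [30, 10, 20]})

# Sort each column independently; to_numpy() drops the index so the
# sorted values are assigned back positionally instead of by label.
cols = ['col1', 'col2']
df[cols] = df[cols].apply(lambda s: s.sort_values().to_numpy())

# col1 becomes ['a', 'b', 'c'] and col2 becomes [10, 20, 30],
# so each column is ordered on its own, not row-wise together.
print(df)
```

Without `to_numpy()` (or `tolist()`), pandas would realign each sorted Series by its original index and the assignment would be a no-op.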
42,732,449
I am trying to create a class that reads a file based off different things. But the problem for the moment is the fundamental reading parts. I can read all the lines in a file fine. I can then close the reader and reassign the reader variable with a new BufferedReader (I tested this with reader.ready() not throwing an IOException, as you can see in the code below). However, when I read from the new BufferedReader, it throws a IOException (stream closed): The skeleton code is below: *SeparatorReader.java* ``` package SP.Reading; import java.io.BufferedReader; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.List; public class SeparatorReader{ private InputStream stream; private BufferedReader reader; public SeparatorReader(File f) throws FileNotFoundException{ stream = new FileInputStream(f); reader = new BufferedReader(new InputStreamReader(stream)); } public synchronized String readLine() throws IOException{ return reader.readLine(); } public synchronized List<String> readAllLines() throws IOException{ List<String> ls = new ArrayList<String>(); String line; while((line = reader.readLine())!=null){ ls.add(line); } return ls; } public synchronized void reset() throws IOException { reader.close(); reader = new BufferedReader(new InputStreamReader(stream)); System.out.println(reader.ready()); } } ``` *ReadingTester.java* ``` package SP.Reading; import java.io.BufferedWriter; import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.util.List; public class ReadingTester { private static boolean error = false; private static boolean error811 = false; private static boolean error812 = false; public static void main(String[] args) { File f = buildFile(); try { SeparatorReader fileReader = new SeparatorReader(f); System.out.println("Commencing test 1 of 27: Reading all 
the lines from a file"); List<String> lines = fileReader.readAllLines(); fileReader.reset(); if(lines.get(0).equals("line 1") && lines.get(1).equals("line 2") && lines.get(2).equals("line 3")) System.out.println("Reading all the lines from a file complete!"); else{ error = true; error811 = true; System.out.println("Unable to read all the lines from a file"); } System.out.println("Commencing test 2 of 27: Reading the first line from a file"); String l = fileReader.readLine(); fileReader.reset(); if(l == null) System.out.println("l is null!"); if(l.equals("line 1")) System.out.println("Reading the first line from a file complete!"); else{ error = true; error812 = true; System.out.println("Unable to read the first line from a file"); } }catch(IOException e){ e.printStackTrace(); } if(!error) System.out.println("No errors"); else{ if(error811) System.out.println("811"); if(error812) System.out.println("812"); } } private static File buildFile(){ File f = new File("SeparatorReaderFile.txt"); try { f.createNewFile(); BufferedWriter writer = new BufferedWriter(new FileWriter(f)); writer.write("line 1"); writer.newLine(); writer.write("line 2"); writer.newLine(); writer.write("line 3"); writer.close(); } catch (IOException e) { e.printStackTrace(); } return f; } } ``` The output is also below: ``` Commencing test 1 of 27: Reading all the lines from a file false Reading all the lines from a file complete! 
Commencing test 2 of 27: Reading the first line from a file java.io.IOException: Stream Closed at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(Unknown Source) at sun.nio.cs.StreamDecoder.readBytes(Unknown Source) at sun.nio.cs.StreamDecoder.implRead(Unknown Source) at sun.nio.cs.StreamDecoder.read(Unknown Source) at java.io.InputStreamReader.read(Unknown Source) at java.io.BufferedReader.fill(Unknown Source) at java.io.BufferedReader.readLine(Unknown Source) at java.io.BufferedReader.readLine(Unknown Source) at SP.Reading.SeparatorReader.readLine(SeparatorReader.java:24) at SP.Reading.ReadingTester.main(ReadingTester.java:30) No errors ``` Thanks in advance for all your help!
2017/03/11
[ "https://Stackoverflow.com/questions/42732449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4741655/" ]
This is obviously an AspectJ compiler bug or shortcoming. I have created a [bug ticket](https://bugs.eclipse.org/bugs/show_bug.cgi?id=513528) for it. Here is the (non-Spring) test case I have extracted from your code: ```java package de.scrum_master.app; public class Apple { private String type; private boolean sweet; public Apple(String type, boolean sweet) { this.type = type; this.sweet = sweet; } public String getType() { return type; } public boolean isSweet() { return sweet; } } ``` ```java package de.scrum_master.app; import java.util.Arrays; import java.util.List; public class AppleController { private static final List<Apple> APPLES = Arrays.asList(new Apple("Granny Smith", false), new Apple("Golden Delicious", true)); public static void main(String[] args) { AppleController appleController = new AppleController(); System.out.println("Named: " + appleController.namedApples(APPLES, "Smith")); System.out.println("Sweet: " + appleController.sweetApples(APPLES)); System.out.println("Sour: " + appleController.sourApples(APPLES)); } } ``` ```java package de.scrum_master.aspect; import java.util.List; import java.util.stream.Collectors; import java.util.function.Predicate; import de.scrum_master.app.Apple; import de.scrum_master.app.AppleController; public privileged aspect AppleControllerITDAspect { public List<Apple> AppleController.namedApples(List<Apple> apples, String subString) { // Anonymous subclass works return apples.stream().filter(new Predicate<Apple>() { @Override public boolean test(Apple a) { return a.getType().contains(subString); } }).collect(Collectors.toList()); } public List<Apple> AppleController.sweetApples(List<Apple> apples) { // Method reference works return apples.stream().filter(Apple::isSweet).collect(Collectors.toList()); } public List<Apple> AppleController.sourApples(List<Apple> apples) { // Lambda causes IllegalAccessError return apples.stream().filter(a -> !a.isSweet()).collect(Collectors.toList()); } } ``` The console log looks 
like this: ```none Named: [de.scrum_master.app.Apple@6f496d9f] Sweet: [de.scrum_master.app.Apple@4e50df2e] Exception in thread "main" java.lang.BootstrapMethodError: java.lang.IllegalAccessError: tried to access method de.scrum_master.app.AppleController.lambda$0(Lde/scrum_master/app/Apple;)Z from class de.scrum_master.aspect.AppleControllerITDAspect at de.scrum_master.aspect.AppleControllerITDAspect.ajc$interMethod$de_scrum_master_aspect_AppleControllerITDAspect$de_scrum_master_app_AppleController$sourApples(AppleControllerITDAspect.aj:28) at de.scrum_master.app.AppleController.sourApples(AppleController.java:1) at de.scrum_master.aspect.AppleControllerITDAspect.ajc$interMethodDispatch1$de_scrum_master_aspect_AppleControllerITDAspect$de_scrum_master_app_AppleController$sourApples(AppleControllerITDAspect.aj) at de.scrum_master.app.AppleController.main(AppleController.java:14) Caused by: java.lang.IllegalAccessError: tried to access method de.scrum_master.app.AppleController.lambda$0(Lde/scrum_master/app/Apple;)Z from class de.scrum_master.aspect.AppleControllerITDAspect at java.lang.invoke.MethodHandleNatives.resolve(Native Method) at java.lang.invoke.MemberName$Factory.resolve(Unknown Source) at java.lang.invoke.MemberName$Factory.resolveOrFail(Unknown Source) at java.lang.invoke.MethodHandles$Lookup.resolveOrFail(Unknown Source) at java.lang.invoke.MethodHandles$Lookup.linkMethodHandleConstant(Unknown Source) at java.lang.invoke.MethodHandleNatives.linkMethodHandleConstant(Unknown Source) ... 4 more ``` In the aspect above you can also see a temporary workaround: use a method reference or a classical anonymous subclass instead of a lambda. Background info: The AspectJ compiler AJC is a regularly updated fork of the Eclipse Java compiler ECJ (AspectJ is also an official Eclipse project, BTW). So maybe the bug is in ECJ, but probably rather in AJC.
Now that method: ``` lambda$0(Lcom/apple/model/Apple;)Z ``` is actually the de-sugared form of your lambda `a -> a.isSweet()`, which will look like this: ``` private static boolean lambda$0(Apple s){ return s.isSweet(); } ``` This method is **generated by the compiler**. Unless you are using some weird compiler, this would have to be a bug in aspectj. You can check that the method is there in `AppleController` by invoking the command to decompile your .class file: ``` javap -c -p AppleController.class ``` where the output should be something like this: ``` private static boolean lambda$0(com.model.apple.Apple); Code: 0: aload_0 1: invokevirtual #9 // Method isSweet:()Z 4: ireturn ``` If this method is indeed there (javac did its job correctly), you theoretically cannot get a `java.lang.NoSuchMethodError`, which means that **aspectj** is doing something very funny in the version that you are using. I highly doubt this last paragraph, but just in case... On the other hand, if you de-compile (javap command) and you do not see the `lambda$0` method, but one called `lambda$main$0` for example, it means you are compiling with jdk-9 or some non-obvious Eclipse compiler.
16,307,637
This is my first attempt with `NSNotification`, tried several tutorials but somehow it's not working. Basically I am sending a dictionary to class B which is popup subview (`UIViewController`) and testing to whether is has been received. Could anyone please tell me what am I doing wrong? Class A ``` - (IBAction)selectRoutine:(id)sender { UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Storyboard" bundle:nil]; NSDictionary *dictionary = [NSDictionary dictionaryWithObject:@"Right" forKey:@"Orientation"]; [[NSNotificationCenter defaultCenter] postNotificationName:@"PassData" object:nil userInfo:dictionary]; createExercisePopupViewController* popupController = [storyboard instantiateViewControllerWithIdentifier:@"createExercisePopupView"]; //Tell the operating system the CreateRoutine view controller //is becoming a child: [self addChildViewController:popupController]; //add the target frame to self's view: [self.view addSubview:popupController.view]; //Tell the operating system the view controller has moved: [popupController didMoveToParentViewController:self]; } ``` Class B ``` - (void)viewDidLoad { [super viewDidLoad]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(receiveData:) name:@"PassData" object:nil]; } - (void)receiveData:(NSNotification *)notification { NSLog(@"Data received: %@", [[notification userInfo] valueForKey:@"Orientation"]); } ```
2013/04/30
[ "https://Stackoverflow.com/questions/16307637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1535747/" ]
If it hasn't registered to receive that notification yet - it will never receive it. Notifications don't persist. If there isn't a registered listener, the posted notification will be lost.
Specific to your problem, the receiver hasn't started observing before the notification is sent so the notification just gets lost. More generally: What you're doing wrong is using notifications for this use case. It's fine if you're just playing around and experimenting but the kind of relationship you're modelling here is best actioned by retaining a reference to the view and calling methods on it directly. It's usually best if experimentation is realistic of the situation in which it would actually be used. You should be aware of 3 basic communication mechanisms and when to use them: **Notifications** Use them to notify other unknown objects that something has happened. Use them when you don't know who wants to respond to the event. Use them when multiple different objects want to respond to the event. Usually the observer is registered for most of their lifetime. It's important to ensure the observer removes itself from `NSNotificationCenter` before it is destroyed. **Delegation** Use delegation when one object wants to get data from an unknown source or pass responsibility for some decision to an unknown 'advisor'. **Methods** Use direct calls when you know who the destination object is, what they need and when they need it.
79,650
**Update: We are having this need now also for Windows 11** You know this concept from your car driving lessons: You have your steering wheel and your pedals and mirrors and the instructor has got his own set (in most countries minus the steering wheel). In our small-group training sessions, our students take turns and apply what they have learnt in front of the class: one set of screens to work, three more screens to follow along. We need large mouse pointers with different colours for training: One for the trainee that is active, and a pointing-only mouse-pointer for the trainer. (Optionally the trainer could actively use his/her mouse and also click stuff, but the pointer should in this case change colour or give a visual feedback.) So far we are already using two mice, and Windows can handle that just fine. But it is often confusing to the class, to follow who is doing what. For those who love context: This is for complex and full screens, like when we train for desktop publishing with not-trivial documents. It takes a lot of time, for the teacher to orally explain what tool to grab and where to find certain hidden options. So pointing is more efficient than telling. We do not limit our search to free or open source tools, we could spend some money for such a tool, because we only need very few licences. If there were a similar tool for Linux, especially OpenSuse, we would also love to hear about it, but that is optional too.
2021/06/18
[ "https://softwarerecs.stackexchange.com/questions/79650", "https://softwarerecs.stackexchange.com", "https://softwarerecs.stackexchange.com/users/38867/" ]
***Warning**: read the full text before downloading.* IMHO, [Eithermouse](https://www.eithermouse.com/) is what you need. * It shows multiple cursors (one per mouse) * It is free (donation ware) * It's open source (source code provided, but no FOSS license) * It works on Windows * You can mirror the mouse cursor to better distinguish the teacher cursor from the student cursor Disadvantages: * When both participants move the mouse simultaneously, I noticed some flickering. * The mouse that has a mirrored cursor may become invisible when being moved over a text field. * The application may be detected as [a trojan/virus](https://www.virustotal.com/gui/file/588d26136557cd724b6a18c918758f4a4a22b77373d5e69b25274f7b4be63c7a/detection) by quite a few virus scanners. I can't fully judge about this, especially not when it comes to the binary file that is provided for download. However, the author also provides the source code (download the ZIP file). I have not read the complete 2500 lines of code, but it's quite likely that the identification as a [trojan](https://en.wikipedia.org/wiki/Trojan_horse_(computing)) or dropper comes from this part of the code: [![Download](https://i.stack.imgur.com/gAPen.png)](https://i.stack.imgur.com/gAPen.png) It is there for downloading updates of the software. If you use this in a company, talk to your IT department before downloading and using.
Setting up a new computer (again, for teamwork and training) I came back here. Today I discovered this website with a good overview: [7 Free Tools to Control More Than One Mouse on One Computer](https://www.raymond.cc/blog/install-multiple-mouse-and-keyboard-on-one-computer/) And I ended up installing - and liking - MouseMux from here [MouseMux website](https://mousemux.com/) I am not claiming yet that it is the best solution to our needs, but it is working and I got no security issues as far as I can tell. And nothing has flickered nor crashed during my first tests today. Machine is now Windows 11 Pro.
47,322,884
Working on a VoIP call app. The issue: start the native music player and play a song, then open my app and make a call; after the call ends, the music player doesn't resume playing because it doesn't regain audio focus. What could be the issue? Thanks in advance. **Note: the native call app handles this case well** **AudioFocusChangeListener** : ``` AudioManager.OnAudioFocusChangeListener mListener = new AudioManager.OnAudioFocusChangeListener() { @Override public void onAudioFocusChange(int i) { MLog.e(TAG, "onAudioFocusChange(" + i + ")"); } }; am.requestAudioFocus(mListener, AudioManager.STREAM_VOICE_CALL, AudioManager.AUDIOFOCUS_GAIN); ``` **After use, trying to abandon focus so other apps can resume their work:** ``` AudioManager am = getAudioManager(); am.abandonAudioFocus(mListener); ```
2017/11/16
[ "https://Stackoverflow.com/questions/47322884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2736808/" ]
I faced the same problem. I had been trying to use a background color value from a database. I found a good solution: add the background color as an inline CSS value that I set from the database. ``` <img src="/Imagesource.jpg" alt="" :style="{'background-color': Your_Variable_Name}"> ```
You can use the component tag offered by Vue.js. ```js <template> <component :is="`style`"> .cg {color: {{color}};} </component> <p class="cg">I am green</p> <br/> <button @click="change">change</button> </template> <script> export default { data(){ return { color: 'green' } }, methods: { change() {this.color = 'red';} } } </script> ```
1,110
The consensus seems to be that [if people find errors in papers, they should contact the author directly, not post a question here](https://cstheory.meta.stackexchange.com/questions/214/should-there-be-a-protocol-to-notify-authors-if-we-find-an-error-in-a-paper). Unfortunately, it seems that we haven't enforced this policy [in this thread](https://cstheory.stackexchange.com/questions/1064/polynomial-time-algorithm-for-graph-isomorphism-testing). There is an [answer](https://cstheory.stackexchange.com/questions/1064/polynomial-time-algorithm-for-graph-isomorphism-testing/5874#5874) whose sole purpose seems to be exactly this – to point out a possible mistake in a specific paper. This has already led to [follow-up discussion](https://cstheory.stackexchange.com/questions/1064/polynomial-time-algorithm-for-graph-isomorphism-testing/6135#6135) that isn't really appropriate on this site. I would suggest that the moderators delete both of these answers, and encourage the participants to discuss the possible errors offline.
2011/04/17
[ "https://cstheory.meta.stackexchange.com/questions/1110", "https://cstheory.meta.stackexchange.com", "https://cstheory.meta.stackexchange.com/users/74/" ]
To expand Jukka's suggestion (i.e., deleting the troublesome answers) into a separate answer: 1. Close the question: it is out of scope, even if it wasn't when it was posted. There is no benefit to the site to the kind of discussion that further answers would constitute; and 2. Delete the answers: it injures the site to flout the site's policy on what constitutes acceptable discourse in this way. There is not a real censorship issue, since the content of at least one of the two answers is hosted elsewhere.
I closed the questions, unlocked it, and removed some comments. I suggest not removing the answers (unless they continue to cause off-topic discussion), and add a note to the questions explaining the situation ("The paper is neither accepted nor refuted. It is off-topic to discuss general correctness of such papers on cstheory.") linking to this meta post.
1,646,939
I have a laptop from office to work at home. In my home LAN I have many devices which shares files and I don't want that the office laptop can access it. Router B is in bridge modus, but I can change if needed and connect input to Wan instead of LAN. ![LAN connection diagram](https://i.stack.imgur.com/UJle7.jpg) Preferred if possible: OFFICE should only have internet access! PC should have access to LAN devices OR at least 1 device before Router B should have access to a shared folder on PC. If not possible: Block OFFICE and PC from LAN, but give them internet.
2021/05/05
[ "https://superuser.com/questions/1646939", "https://superuser.com", "https://superuser.com/users/1331146/" ]
Two things are required: * You will need to enable VLANs on the switch in order to isolate the PC and the office device. If this is not possible you will need to connect Router B directly to Router A. * Router A must be capable of isolating the connection for Router B (e.g. via a guest network setting). If this is not possible you could set up a new subnet on your Router B using NAT. It's not perfect isolation (it depends on the specific devices) but it should do the trick for most scenarios.
I have a Private Internet Access VPN subscription and I'm pretty sure there's a checkbox in the program settings to "allow LAN traffic". May be worth a look for an easy solution.
43,353,194
I'm trying to download a file with size greater than 50mb in an asp.net page. But it fails on our production server. It works on development and QA servers. I'm using the following code. ``` Response.Clear() oBinaryReader = New System.IO.BinaryReader(System.IO.File.OpenRead(sDocPath)) lFileSize = Microsoft.VisualBasic.FileLen(sDocPath) Response.AddHeader("Content-Disposition", "attachment;filename=" & sDownloadFileName) Response.ContentType = "application/unknown" Response.BinaryWrite(oBinaryReader.ReadBytes(lFileSize)) Response.Flush() HttpContext.Current.ApplicationInstance.CompleteRequest() Response.End() ``` The error I'm getting from the server is as below. Page\_Load **System.OutOfMemoryException**: Exception of type '`System.OutOfMemoryException`' was thrown. at System.IO.BinaryReader.ReadBytes(Int32 count) at ExportDoc.Page\_Load(Object sender, EventArgs e) in c:\sitename\ExportDoc.aspx.vb:line 87 Server Name What is wrong with the code?
2017/04/11
[ "https://Stackoverflow.com/questions/43353194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/329192/" ]
`OutOfMemoryException` is commonly thrown when there is no memory available to perform an operation while handling managed/unmanaged resources. Therefore, you need to wrap a `Using...End Using` block around the `BinaryReader` to guarantee immediate disposal of unmanaged resources after usage via the `IDisposable` interface: ``` Response.Clear() Using oBinaryReader As BinaryReader = New BinaryReader(File.OpenRead(sDocPath)) lFileSize = FileLen(sDocPath) Response.AddHeader("Content-Disposition", "attachment;filename=" & sDownloadFileName) Response.ContentType = "application/unknown" Response.BinaryWrite(oBinaryReader.ReadBytes(lFileSize)) Response.Flush() HttpContext.Current.ApplicationInstance.CompleteRequest() Response.End() End Using ``` Another common usage of `BinaryReader` is with a `FileStream` and a byte buffer to control the file-reading mechanism (note that `File.OpenRead` already returns a `FileStream`): ``` Using FStream As FileStream = File.OpenRead(sDocPath) lFileSize = CType(FStream.Length, Integer) Dim Buffer() As Byte Using oBinaryReader As BinaryReader = New BinaryReader(FStream) Buffer = oBinaryReader.ReadBytes(lFileSize) End Using Response.Clear() Response.AddHeader("Content-Disposition", "attachment;filename=" & sDownloadFileName) Response.ContentType = "application/unknown" Response.BinaryWrite(Buffer) Response.Flush() HttpContext.Current.ApplicationInstance.CompleteRequest() Response.End() End Using ``` References: [VB.NET Using Statement (MSDN)](https://msdn.microsoft.com/en-us/library/htd05whh.aspx) [BinaryReader Class (MSDN)](https://msdn.microsoft.com/en-us/library/system.io.binaryreader(v=vs.110).aspx)
I tried the below code and it resolved my issue; I found the code idea on the MSDN website (declarations for the buffer variables added here for completeness): ``` Dim bufferSize As Integer = 10000 Dim buffer(bufferSize) As Byte Dim length As Integer Dim dataToRead As Long Using iStream As System.IO.Stream = New System.IO.FileStream(sDocPath, System.IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read) dataToRead = iStream.Length Response.ContentType = "application/octet-stream" Response.AddHeader("Content-Disposition", "attachment; filename=" & filename) While dataToRead > 0 If Response.IsClientConnected Then length = iStream.Read(buffer, 0, bufferSize) Response.OutputStream.Write(buffer, 0, length) Response.Flush() ReDim buffer(bufferSize) dataToRead = dataToRead - length Else dataToRead = -1 End If End While HttpContext.Current.ApplicationInstance.CompleteRequest() End Using ```
64,501,572
I have two DTedit tables which are functionally related I do not want users to get the Insert/New button in DT#2 when no row is selected in DT#1 I have `Table1_Results$rows_selected` to test if selection exists (length>0) I also identified the id of the 'New button' in DT#2 as being `Table2_add` But do not succeed to make the length of `Table1_Results$rows_selected` trigger the shinyjs show() or hide() action for DT#2 Could anyone please share some reactivity command to do this! the following code is not working but illustrates my aim ``` observe(Table1_Results$rows_selected,{ if (length(Table1_Results$rows_selected)) { shinyjs::show('Table2_add') } else { shinyjs::hide('Table2_add') } }) ``` > > Error in .getReactiveEnvironment()$currentContext() : Operation not > allowed without an active reactive context. (You tried to do something > that can only be done from inside a reactive expression or observer.) > > > This manual test using a button works ``` observeEvent(input$showhide, { toggle('Table2_add') }) ``` So it is really the reactive testing of the `Table1_Results$rows_selected` which is lacking Thanks in advance --- In the code below: * I cannot clear the selected row in the observed textoutput * I do not succeed to hide the New button ``` Note: I use DTedit because it allows other features not shown here AIMs: 1) when no drink is selected, hide the New button for containers 2) manage <table>$rows_selected so that it reflects the current status ``` ``` library("shiny") library("shinyjs") library("DT") library("DTedit") server <- function(input, output) { Drink_Results <- dtedit( input, output, name = 'Drink', thedata = data.frame( ID = c(1:3), drink = c('Tea', 'Coffea', 'Water'), stringsAsFactors = FALSE ) ) # create proxy to clear row selection (found 'Drinkdt' by looking in the source) Drink_proxy <- DT::dataTableProxy('Drinkdt') Container_Results <- dtedit( input, output, name = 'Container', thedata = data.frame( ID = c(1:3), Container = c('Cup', 'Glass', 
'Pint'), stringsAsFactors = FALSE ) ) # create proxy to clear row selection Container_proxy <- DT::dataTableProxy('Container') # manually toggle visibility for New button observeEvent(input$showhide, { shinyjs::toggle('Container_add') }) # clear Drink row selection observeEvent(input$clearrows, { Drink_proxy %>% selectRows(NULL) }) # when no drink is selected, hide the New button for containers observeEvent(Drink_Results$rows_selected, { if ( length(Drink_Results$rows_selected) ) { shinyjs::show('Container_add') } else { shinyjs::hide('Container_add') } }) # attempt to react on clearing the row-selection choice <- reactive({ paste0(Drink_Results$rows_selected, " - ", Container_Results$rows_selected) }) # output current combination output$choice <- renderText({ as.character(choice()) }) } ui <- tagList(useShinyjs(), fluidPage( shinyFeedback::useShinyFeedback(), h3('What will you drink?'), uiOutput('Drink'), # manually clear row selections actionButton(inputId="clearrows", label="clear selected drink", icon=icon('trash')), hr(), h3("What container do you prefer?"), uiOutput('Container'), hr(), # manually hide the New button actionButton(inputId="showhide", label="toggle New buttons", icon=icon('refresh')), hr(), # show current user choices textOutput('choice'), ) ) shinyApp(ui = ui, server = server) ```
2020/10/23
[ "https://Stackoverflow.com/questions/64501572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1386500/" ]
The reactive for selected row is `input$Drinkdt_rows_selected` in your case, based on the source code. If you use that, your code works fine. Try this ``` server <- function(input, output) { ## could not install DTedit. So, made a copy of the function source("C:\\RStuff\\GWS\\dtedit.R", local=TRUE) Drink_Results <- dtedit( input, output, name = 'Drink', thedata = data.frame( ID = c(1:3), drink = c('Tea', 'Coffea', 'Water'), stringsAsFactors = FALSE ) ) name <- "Drink" # create proxy to clear row selection (found Drinkdt by looking in the source) Drink_proxy <- DT::dataTableProxy('Drinkdt') Container_Results <- dtedit( input, output, name = 'Container', thedata = data.frame( ID = c(1:3), Container = c('Cup', 'Glass', 'Pint'), stringsAsFactors = FALSE ) ) # create proxy to clear row selection Container_proxy <- DT::dataTableProxy('Container') # clear Drink row selection observeEvent(input$clearrows, { Drink_proxy %>% selectRows(NULL) shinyjs::hide('Container_add') }) sel <- reactive({!is.null(input[[paste0(name, 'dt_rows_selected')]])}) observe({ print(sel()) print(input$Drinkdt_rows_selected) }) # when no drink is selected, hide the New button for containers observe({ #observeEvent(input[[paste0(name, 'dt_rows_selected')]], { if ( length(input[[paste0(name, 'dt_rows_selected')]])>0 ) { shinyjs::show('Container_add') }else { shinyjs::hide('Container_add') } }) observeEvent(Drink_Results$thedata, { message(Drink_Results$thedata) }) observeEvent(input[[paste0(name, 'dt_rows_selected')]], ignoreNULL = FALSE, { # 'no' (NULL) row will be 'selected' after each edit of the data message(paste("Selected row:", input[[paste0(name, 'dt_rows_selected')]])) }) # attempt to react on clearing the row-selection choice <- reactive({ if (is.null(input[[paste0(name, 'dt_rows_selected')]])) { paste0("Drink not selected") }else { paste0(input[[paste0(name, 'dt_rows_selected')]], " - ", input$Containerdt_rows_selected) } }) observeEvent(input$showhide, { toggle('Container_add') }) # 
output current combination output$choice <- renderText({ choice() }) } ui <- fluidPage( shinyFeedback::useShinyFeedback(), useShinyjs(), h3('What will you drink?'), uiOutput('Drink'), # manually clear row selections actionButton(inputId="clearrows", label="clear selected drink", icon=icon('trash')), hr(), h3("What container do you prefer?"), uiOutput('Container'), hr(), # manually hide the New button actionButton(inputId="showhide", label="toggle New buttons", icon=icon('refresh')), hr(), # show current user choices textOutput('choice'), ) shinyApp(ui = ui, server = server) ```
btw - it should be mentioned that the original code was not using the `jbryer` version of `DTedit` (v1.0.0) , which does not return `$rows_selected`. The modified `DavidPatShuiFong` version of `DTedit` (v 2.2.3+) does return `$rows_selected`. The original code presented above used an `observeEvent` which, by default, has `ignoreNULL = TRUE`. That doesn't work, because if no row is selected, then `$rows_selected` will return `NULL`. One option is to set `ignoreNULL = FALSE`. Unfortunately, this still leaves the problem that `shinyjs::hide` does not work on first execute, perhaps because 'Container\_add' does not yet exist on first pass. Adding an `invalidateLater` which only executes a few times fixes that problem. ```r library("shiny") library("shinyjs") library("DT") library("DTedit") server <- function(input, output, session) { Drink_Results <- dtedit( input, output, name = 'Drink', thedata = data.frame( ID = c(1:3), drink = c('Tea', 'Coffea', 'Water'), stringsAsFactors = FALSE ) ) # create proxy to clear row selection (found 'Drinkdt' by looking in the source) Drink_proxy <- DT::dataTableProxy('Drinkdt') Container_Results <- dtedit( input, output, name = 'Container', thedata = data.frame( ID = c(1:3), Container = c('Cup', 'Glass', 'Pint'), stringsAsFactors = FALSE ) ) # create proxy to clear row selection Container_proxy <- DT::dataTableProxy('Container') # manually toggle visibility for New button observeEvent(input$showhide, { shinyjs::toggle('Container_add') }) # clear Drink row selection observeEvent(input$clearrows, ignoreNULL = FALSE, { Drink_proxy %>% selectRows(NULL) shinyjs::hide('Container_add') }) # when no drink is selected, hide the New button for containers invalidateCount <- reactiveVal(0) observe({ # need to execute this observe more than once # (?because 'Container_add' does not actually exist first time?) 
if (isolate(invalidateCount()) < 1) { shiny::invalidateLater(200, session) # 200ms delay } isolate(invalidateCount(invalidateCount() + 1)) print(paste0("row selected:", Drink_Results$rows_selected)) if (!is.null(Drink_Results$rows_selected)) { shinyjs::show('Container_add') } else { shinyjs::hide('Container_add') } }) } ui <- tagList(useShinyjs(), fluidPage( h3('What will you drink?'), uiOutput('Drink'), # manually clear row selections actionButton(inputId="clearrows", label="clear selected drink", icon=icon('trash')), hr(), h3("What container do you prefer?"), uiOutput('Container'), hr(), # manually hide the New button actionButton(inputId="showhide", label="toggle New buttons", icon=icon('refresh')), hr(), # show current user choices textOutput('choice'), ) ) shinyApp(ui = ui, server = server) ```
51,971,096
I found the thread located here: [Appending row to pandas df adds 0 column](https://stackoverflow.com/questions/22917108/appending-row-to-pandas-dataframe-adds-0-column) but I still don't understand what I am doing wrong. ``` df4 = pd.DataFrame({'Q':['chair', 'desk', 'monitor', 'chair'], 'R':['red', 'blue', 'yellow', 'purple'], 'S': ['english', 'german', 'spanish', 'english']}) df4 Q R S 0 chair red english 1 desk blue german 2 monitor yellow spanish 3 chair purple english >> df5 = df4 >>> df5 = df5.append(['Q'] * 2, ignore_index=True) >>> df5 Q R S 0 0 chair red english NaN 1 desk blue german NaN 2 monitor yellow spanish NaN 3 chair purple english NaN 4 NaN NaN NaN Q 5 NaN NaN NaN Q >>> ``` In my particular case, why did it add the 0 column? My initial DF is not empty.
2018/08/22
[ "https://Stackoverflow.com/questions/51971096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5560898/" ]
The help page for pandas append states: "Append rows of other to the end of this frame, returning a new object. Columns not in this frame are added as new columns." <https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html> In your case, you do not supply any column names, so new ones are created. There are many ways to append a new row. One way: ``` df5 = df5.append({'Q':'Q', 'R':'Q', 'S':'Q'}, ignore_index=True) ```
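To see where the extra `0` column comes from, here is a small runnable illustration (not part of the original answer; it uses `pd.concat` because newer pandas versions removed `DataFrame.append`, but the column logic is the same):

```python
import pandas as pd

df = pd.DataFrame({'Q': ['chair'], 'R': ['red'], 'S': ['english']})

# A bare list is first converted to a DataFrame with a default
# integer column label 0 ...
extra = pd.DataFrame(['Q'] * 2)
print(extra.columns.tolist())             # [0]

# ... which df does not have, so concatenating adds it as a new column
combined = pd.concat([df, extra], ignore_index=True)
print(sorted(combined.columns, key=str))  # [0, 'Q', 'R', 'S'] -- the phantom 0 column

# Supplying values per existing column avoids the phantom column
fixed = pd.concat(
    [df, pd.DataFrame({'Q': ['desk'], 'R': ['blue'], 'S': ['german']})],
    ignore_index=True)
print(fixed.columns.tolist())             # ['Q', 'R', 'S']
```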
You are trying to append the list `['Q', 'Q']` to a dataframe with 3 columns. This is ambiguous. Since it's not at all clear, Pandas takes the decision to pass `['Q', 'Q']` to the `pd.DataFrame` constructor before appending: ``` out1 = df5.append(pd.DataFrame(['Q'] * 2), ignore_index=True) out2 = df5.append(['Q'] * 2, ignore_index=True) assert out1.equals(out2) # no error, i.e. these are equal ``` If you are still confused, trying printing the dataframe constructed from a single list: ``` print(pd.DataFrame(['Q'] * 2)) 0 0 Q 1 Q ``` Since no column names are specified, you have a column labeled `0`. When appending to a dataframe with different columns, you will necessarily see an additional column in the result.
22,727,107
*Without* using `sed` or `awk`, *only* `cut`, how do I get the last field when the number of fields are unknown or change with every line?
2014/03/29
[ "https://Stackoverflow.com/questions/22727107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3262003/" ]
This is the only solution possible using nothing but cut: > > echo "s.t.r.i.n.g." | cut -d'.' -f2- > **[repeat\_following\_part\_forever\_or\_until\_out\_of\_memory:]** | cut -d'.' -f2- > > > Using this solution, the number of fields can indeed be unknown and vary from time to time. However, as line length must not exceed LINE\_MAX characters or fields, including the new-line character, an arbitrary number of fields can never truly be a condition of this solution. Yes, a very silly solution, but the only one that meets the criteria, I think.
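To make the repeated-`cut` chain concrete, here is a small loop (a sketch, not part of the original answer; `grep` is used only to test whether the delimiter is still present) that keeps stripping the first field until none remain:

```shell
# Repeatedly drop the first field with cut; what survives is the last field.
line='s.t.r.i.n.g'
while printf '%s' "$line" | grep -q '\.'; do
    line=$(printf '%s' "$line" | cut -d'.' -f2-)
done
printf '%s\n' "$line"    # prints: g
```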
Adding an approach to this old question just for the fun of it: ``` $ cat input.file # file containing input that needs to be processed a;b;c;d;e 1;2;3;4;5 no delimiter here 124;adsf;15454 foo;bar;is;null;info $ cat tmp.sh # showing off the script to do the job #!/bin/bash delim=';' while read -r line; do while [[ "$line" =~ "$delim" ]]; do line=$(cut -d"$delim" -f 2- <<<"$line") done echo "$line" done < input.file $ ./tmp.sh # output of above script/processed input file e 5 no delimiter here 15454 info ``` Besides bash, only cut is used. Well, and echo, I guess.
17,589,566
Sorry, my English is not good. I have a problem: I want to compare two different strings using regular expressions. The first string has a structure like **a-b-1**, e.g. mobile-phone-1. The second string has a structure like **a-b-1/d-e-2**, e.g. mobile-phone-1/nokia-asha-23. How can I do it? You can use the preg_match() method or some other method... One pattern for each of the two different strings. Thanks so much! Code demo: ``` if (preg_match("reg exp 1", string1)) { // do something } if (preg_match("reg exp 2", string2)) { // do something } ``` P.S.: don't worry too much about the code demo
2013/07/11
[ "https://Stackoverflow.com/questions/17589566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1797747/" ]
Maybe because you're not doing the redirect itself? ``` success: function (data) { //console.log('data:'+data); if (data.user) { $('#lblUsername').text(data.user.username); window.location = '/home.php'; // redirect to the homepage } else { $('#button_sign_in').shake(4,6,700,'#CC2222'); $('#username').focus(); } } ``` PHP redirect using header won't work if you send it to the client via AJAX. You have to redirect it on the client-side using JS in this case.
When you do an AJAX call to a PHP script, calling `header()` will not load a new page in the browser. You can respond to the AJAX call with `echo 'success/fail'` and redirect to the desired page yourself by checking the response in the `success:` part of the AJAX call. And don't forget to set the PHP session, and to check whether the user login was successful in the page you want to display after login. So your code in PHP will be, ``` if ($found > 0) { //set session here echo 'success'; } else echo 'fail'; ``` and in Ajax, ``` success: function (data) { if (data == 'success') { window.location = "home_page.php"; } else { $('#button_sign_in').shake(4,6,700,'#CC2222'); $('#username').focus(); } } ``` In your homepage PHP, check for the session.
349,888
If you place a cell with negligible internal resistance and an EMF of 5V in parallel with 2 resistors, as shown below, each resistor will have a potential difference of 5V across it. [![enter image description here](https://i.stack.imgur.com/XIYD7.png)](https://i.stack.imgur.com/XIYD7.png) However, if you were to replace the rightmost resistor with another cell, this time with an EMF of 6V and a negligible internal resistance, what will be the potential difference across the resistor remaining in the middle? How would the potential difference be the same across each branch in this case? Would it even be the same, and if not then how does this fit with Kirchhoff's second law?
2017/08/02
[ "https://physics.stackexchange.com/questions/349888", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/144045/" ]
Under *all* circumstances? No. If you immerse the circuit in a region with a changing magnetic field going through the circuit's loop, then Faraday's law tells you that the electric field circulation over the loop is proportional to the change in magnetic flux through the loop $$ \oint\_\mathcal C \mathbf E\cdot\mathrm d\mathbf l = -\frac{\mathrm d}{\mathrm dt}\iint\_\mathcal S \mathbf B\cdot \mathrm d\mathbf S, $$ where if your circuit consists only of resistive elements and cells then the integral on the left is the sum of the Ohm's-law voltages across the resistors and the cells' marked driving voltages. --- In the case you posit, on the other hand, the situation is simpler in some ways. Here the Kirchhoff voltage law still holds, but what breaks is your assumptions, which for this situation are inconsistent. In particular, you can no longer say > > negligible internal resistance > > > for either of the two cells, and you probably can't think of those resistances as linear circuit elements, either. Instead, you need to include the cells' internal resistance (however small) into the configuration, do the full Kirchhoff analysis, and *then* decide whether your cells are in their linear regime and whether the internal resistances are so small that removing them would not appreciably change the conclusions. What you will find is that they're not removable, and you will likely be pushing charge through one of the batteries in reverse. Here the usual circuit abstractions break down: some voltage sources will accept this and keep their stride, but others can have nonlinear current-voltage characteristics, and many can sustain damage, from mild all the way up to catastrophic.
> > However, if you were to replace the rightmost resistor with another > cell, this time with an EMF of 6V and a negligible internal > resistance > > > In these kind of problems, it's best to explicitly include the internal resistance in the calculation and then see if, in fact, one can neglect the internal resistance of both voltage sources. (However, such a circuit is no long a parallel circuit *unless* one does a source transformation of each voltage source with series internal resistance to a current source with parallel internal resistance.) In that case, the voltage across the center resistor is easily found by superposition: $$V\_R = \frac{R||r\_2}{r\_1 + R||r\_2}5\, \mathrm{V} + \frac{R||r\_1}{r\_2 + R||r\_1}6\,\mathrm{V}$$ where $r\_1$ is the internal resistance of the $5\,\mathrm{V}$ source and $r\_2$ is the internal resistance of the $6\,\mathrm{V}$ source. Now note that setting either $r\_1 = 0$ *or* $r\_2 = 0$ is OK for the voltage calculation. For example, setting $r\_1 = 0$ yields $$V\_R = 5\,\mathrm{V}$$ The current *out of* the $6\,\mathrm{V}$ source is then $$I\_2 = \frac{6 - 5}{r\_2}\, \mathrm{A}$$ and the current *out of* the $5\,\mathrm{V}$ source is thus $$I\_1 = \frac{5}{R} - I\_2\, \mathrm{A}$$ We see that, for $r\_2 \le \frac{R}{5}$, the current $I\_1$ is negative, i.e., the $6\, \mathrm{V}$ source supplies power to the $5\, \mathrm{V}$ source. But note that we *cannot* now set $r\_2 = 0$ since, as $r\_2 \rightarrow 0$, the current $I\_2 \rightarrow \infty$. *So, in fact, you can't meaningfully stipulate that both voltage sources have negligible internal resistance.* --- It's interesting to also consider the case that $r\_2 = k\cdot r\_1$ and then let $r\_1 \rightarrow 0$. You then find that $$V\_R \rightarrow \frac{k}{1 + k}5\, \mathrm{V} + \frac{1}{1+k}6\, \mathrm{V}$$ and $$I\_1 \rightarrow - I\_2 \rightarrow \infty $$
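The limiting behaviour described in this answer is easy to check numerically. A minimal sketch (the value of $R$ is invented for illustration; the EMFs are the 5 V and 6 V from the question):

```python
# Numerical check of the superposition formula above.
def parallel(a, b):
    return a * b / (a + b)

def v_r(r1, r2, R=10.0, e1=5.0, e2=6.0):
    # V_R = R||r2/(r1 + R||r2) * e1  +  R||r1/(r2 + R||r1) * e2
    return (parallel(R, r2) / (r1 + parallel(R, r2))) * e1 \
        + (parallel(R, r1) / (r2 + parallel(R, r1))) * e2

# letting only r1 -> 0 pins the resistor voltage at 5 V
print(v_r(r1=1e-9, r2=0.5))

# r2 = k*r1 with both shrinking gives the weighted average (k*5 + 6)/(1 + k)
print(v_r(r1=1e-9, r2=2e-9))  # tends to (2*5 + 6)/3
```

The second print illustrates why "negligible internal resistance" for *both* sources is ill-posed: the answer depends on the ratio of the two resistances even as both go to zero.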
20,170,251
I need to run my Python program forever in an infinite loop.. Currently I am running it like this - ``` #!/usr/bin/python import time # some python code that I want # to keep on running # Is this the right way to run the python program forever? # And do I even need this time.sleep call? while True: time.sleep(5) ``` Is there any better way of doing it? Or do I even need `time.sleep` call? Any thoughts?
2013/11/24
[ "https://Stackoverflow.com/questions/20170251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
How about this one? ```py import signal signal.pause() ``` This will let your program sleep until it receives a signal from some other process (or itself, in another thread), letting it know it is time to do something.
If you mean run as a service, then you can use any web framework, e.g. Flask: ``` from flask import Flask class A: @staticmethod def one(port): app = Flask(__name__) app.run(port=port) ``` call it: ``` A.one(port=1001) ``` it will keep listening on 1001 forever: ``` * Running on http://127.0.0.1:1001/ (Press CTRL+C to quit) ```
8,659,647
Here's a situation I run into frequently: I've got Parent and Child objects, and the Child object has a Parent property. I want to run a query that gets the child objects and joins each one to the correct parent object: ``` Dim db = New DataContextEx() ' get the children, along with the corresponding parent Dim Children = From x In db.ChildTable Join y In db.ParentTable On x.ParentId Equals y.Id Execute x.Parent = y <-- pseudocode Select x ``` The pseudocode shows what I want to accomplish: to return the child object x but with the code after the (fake) Execute statement executed. I can think of a lot of ways to accomplish the end goal, but they all have a lot more lines of code and/or creation of temporary objects or functions that I find inelegant. (Note this is VB.NET syntax, but it's not a VB syntax question, since AFAIK C# would have the same problem.) So what would be the cleanest way to do what I'm trying to do? **Edit**: People have asked about what ORM I'm using, but this is really a plain vanilla LINQ question; I'm not trying to convert this into logic to be run on the server, I just want some syntactic sugar to run the code client side after the query has been run on the server.
2011/12/28
[ "https://Stackoverflow.com/questions/8659647", "https://Stackoverflow.com", "https://Stackoverflow.com/users/349974/" ]
You could use an anonymous type when projecting results. C# example: ``` var items = from x In db.ChildTable join y In db.ParentTable on x.ParentId equals y.Id select new { Child =x , Parent=y }; ``` Then assign the parent properties. ``` foreach(var item in items) { item.Child.Parent = item.Parent; } return items.Select(item => item.Child); ``` Also you may want to use some ORM solutions for this instead of rolling your own.
These suggestions, especially [@Paul's](https://stackoverflow.com/a/8660002/349974), were very helpful, but my final version is different enough that I'm going to write it up as my own answer. I implemented the following totally general extension function: ``` <Extension()> _ Public Function Apply(Of T)(ByVal Enumerable As IEnumerable(Of T), ByVal action As Action(Of T)) As IEnumerable(Of T) For Each item In Enumerable action(item) Next Return Enumerable End Function ``` This is the same as ForEach, except that it returns the incoming sequence, and I think the name **Apply** makes it clear that there are side effects. Then I can write my query as: ``` Dim Children = ( From x In db.ChildTable Join y In db.ParentTable On x.ParentId Equals y.Id ). Apply(Sub(item) item.x.Parent = item.y). Select(Function(item) item.x) ``` It would of course be better if there were a way to have custom query operators, so I wouldn't have to use the lambda syntax, but even so this seems very clean and reusable. Thanks again for all the help.
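For readers outside .NET, the `Apply` operator above is easy to mimic. A Python sketch with invented class and variable names:

```python
class Child:
    def __init__(self, name, parent_id):
        self.name = name
        self.parent_id = parent_id
        self.parent = None

def apply_each(items, action):
    # like the VB Apply extension: run a side effect, return the sequence
    items = list(items)
    for item in items:
        action(item)
    return items

parents = {1: "Alice", 2: "Bob"}
children = [Child("x", 1), Child("y", 2)]

wired = apply_each(children,
                   lambda c: setattr(c, "parent", parents[c.parent_id]))
print([(c.name, c.parent) for c in wired])  # [('x', 'Alice'), ('y', 'Bob')]
```

Note the eager `list(...)` call: as with the VB version, the sequence is realised before the side effects run.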
836,242
Stupid question time - how do you update the title attribute of a control? Obviously this does not work: ``` $("#valPageIndex").attr('title') = pageIndex; ```
2009/05/07
[ "https://Stackoverflow.com/questions/836242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/86555/" ]
``` $("#valPageIndex").attr('title', pageIndex); ``` Basically, `$("#valPageIndex").attr('title')` is just for getting its value, while `$("#valPageIndex").attr('title', value)` is for setting it. Here's the official doc: <http://docs.jquery.com/Attributes/attr>.
``` $("#valPageIndex").attr('title', pageIndex); ``` <http://docs.jquery.com/Attributes/attr#keyvalue> A common jQuery convention is to have `.foo()` or `.foo('selector')` act as a getter and `.foo(value)` or `.foo('selector', value)` act as a setter.
14,279,642
I have 3 SQL tables called category, movies, and category\_movies. The category and movies tables have a many-to-many relationship. That's why I use the category\_movies table. This is the tables' structure... > > > ``` > Category : cat_id, cat_name, > movies : mov_id, mov_name, > category_movies : cat_id, mov_id > > ``` > > Now I have got 3 category IDs dynamically and I want to select the movies' names along with the category names which belong to the 3 category IDs I have already got. This is the query that I tried so far... ``` SELECT c.cat_name AS cn, m.mov_name AS mn, m.mov_id FROM category AS c INNER JOIN category_movies AS cm ON cm.cat_id = c.cat_id INNER JOIN movies AS m ON m.mov_id = cs.mov_id WHERE c.cat_id IN (2, 5, 7) GROUP BY c.cat_name, m.mov_name, m.mov_id HAVING COUNT(*) >= 3 ``` but this is not working... can anybody tell me what is wrong with this query?
2013/01/11
[ "https://Stackoverflow.com/questions/14279642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1942155/" ]
use the `IN` clause for this ``` SELECT.. FROM.. WHERE cat_id IN (2, 5, 7) ``` which is the same as ``` SELECT.. FROM.. WHERE cat_id = 2 OR cat_id = 5 OR cat_id = 7 ``` Please also take note that it is `INNER JOIN` not `INNOR JOIN` but I guess you want to perform `RELATIONAL Division` (*you want to search for a movie that has all the categories you want to find*) ``` SELECT c.cat_name, m.mov_name, m.mov_id FROM category AS c INNER JOIN category_movies AS cm ON cm.cat_id = c.cat_id INNER JOIN movies AS m ON m.mov_id = cm.mov_id WHERE c.cat_id IN (2, 5, 7) GROUP BY c.cat_name, m.mov_name, m.mov_id HAVING COUNT(*) >= 3 ``` * [Relational Division](http://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/)
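The `HAVING COUNT(*) >= 3` trick is relational division: keep only the movies that appear with every wanted category. The same filter can be sketched outside SQL; the rows below are invented to mimic `category_movies`:

```python
# relational division in plain Python over (cat_id, mov_id) pairs
category_movies = [
    (2, 100), (5, 100), (7, 100),   # movie 100 has all three categories
    (2, 200), (5, 200),             # movie 200 is missing category 7
    (7, 300),
]
wanted = {2, 5, 7}

def movies_in_all(pairs, wanted_cats):
    cats_by_movie = {}
    for cat_id, mov_id in pairs:
        cats_by_movie.setdefault(mov_id, set()).add(cat_id)
    # a movie qualifies when the wanted set is a subset of its categories
    return sorted(m for m, cats in cats_by_movie.items()
                  if wanted_cats <= cats)

print(movies_in_all(category_movies, wanted))  # [100]
```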
1 code ``` SELECT Id, name, YEAR(BillingYar) AS Year FROM Records WHERE Year ≥ 2010 ``` 2 code ``` SELECT id, name FROM students WHERE grades = (SELECT MAX(grades) FROM students GROUP BY subject_id); ``` In the first code, the only thing I can see wrong is this symbol ( ≥ ); it needs to be >= . Something else?
12,827
I want to change the default username (pi) to something, I tried ``` usermod -l newusername pi ``` but that gives me ``` usermod: user pi is currently used by process 2104 ``` Is there another way to modify the root account or disable this and create a new root account?
2014/01/08
[ "https://raspberrypi.stackexchange.com/questions/12827", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/11899/" ]
**If you're in the console** of the pi there is a way to get around this without having to make another user (or set a pw on root): Assuming nothing else is running with your username other than the shell on the console - no X session, no ssh login, etc: ``` exec sudo -s cd / usermod -l newname -d /home/newname -m oldname ``` **The reason this works:** * `sudo -s` tells `sudo` that instead of just running the command as another user, it should run a new shell as the given user * `exec` tells the shell that instead of spawning off a new process when it runs a command (hence leaving the shell process running as the logged-in user), the shell should *overwrite* itself with the new process. This means that when the `exec` command ends the shell is gone. In the case of a login shell, that equates to disconnecting from the login. * the `cd /` is optional. At a minimum, things get a bit confusing if you move a directory you're in (your login starts out sitting in the user `pi` home directory). Sometimes this will cause a failure, so it's better to be safe than sorry. Therefore with `exec sudo -s` you're overwriting your shell with a new shell that has been created as a different user (the root user in this case). P.S. be sure to give `usermod -d` a **full (hard link) path** or you'll end up moving the account's home to somewhere you don't expect and have a bogus directory entry in `passwd`.
The answers above are correct, I just want to give another option that may suit you better. Assuming: --------- * A brand new raspberry pi * You want to change the default username `pi` to `mypie` * You want to adapt also the main group from `pi` to `mypie` * You want other things to work out like sudo and auto-login Proceed to: ----------- ### Step 1: stop user `pi` from running before the change. * Boot it, go to RPI configurations and + allow SSH, + disallow auto-login + hit ok * Press ALT+F1 to go to the first tty * Escalate to root with `sudo su -` * Edit `$vim /etc/systemd/system/autologin@.service` + Find and *comment* (#) the line - `#ExecStart=-/sbin/agetty --autologin pi --noclear %I $TERM`. You can *uncomment* it later if you want console *autologin*, but then don't forget to change the user `pi` to your new username `mypie` * Create a new root password with `passwd`. (DON'T FORGET IT) * Type `reboot` ### Step 2: make the user change * If you see the graphical login prompt, you are good. Do **not** log in. Instead, press ALT+F1 (\* if you want to do it via ssh, see the appendix) * After ALT+F1, you should see a `login` question (and not an autologin). * Log in as `root` with your root password. Now you are alone in the system, and changes to `pi` will not be met with `usermod: user pi is currently used by process 2104`. Check with `ps -u pi` to see an empty list. * Very carefully, key by key, type `usermod -l mypie pi` . This will change your username in the `/etc/passwd` file, but things are not ready yet. Anyway, check with `tail /etc/passwd` and see the last line `mypie:1000:...` The 1000 is the UID and it is now yours. * Try `su mypie` just to be sure. Do nothing. Just `exit` again to root. It should work. Now you need to adjust the group and the `$HOME` folder. ### Step 3: make the group change * Type, again carefully, `groupmod -n mypie pi` . This will change the `pi` group name.
Check it with `tail /etc/group` and you will see in the last line the new name associated with `GID` 1000. * Just to clarify, type `ls -la /home/pi` and you will see that the `pi` HOME now belongs to you, `mypie`. ### Step 4: let's adopt the new home. * I see in the answers above the creation of a new folder, copying everything. No need. Let's just use the same one. * First move to `cd /home` to make it easier. Type `ls -la` and see `pi`, owner `mypie`, group `mypie` * Type carefully: `mv pi mypie` . You now need to associate this change with your new user. * Type carefully: `usermod -d /home/mypie mypie` . This will change your home directory. Check it with `tail /etc/passwd` and look at the sixth field (separated by `:`). ### Step 5: some adjustments after the fact. * Reboot with `reboot` * Log in as your new user `mypie` in the graphical interface. * Open a terminal. *Change your password* * Type `passwd` to change the password of `mypie` to something other than `raspberry` * Type `sudo su -` and you will be asked for your password. *auto-login again if you will (I don't recommend it, but well)* * If you want to autologin your new account, edit the file: + `$vim /etc/lightdm/lightdm.conf` + find the line with `#autologin-user=`, change it to `autologin-user=mypie` (no *comment* #) * If you want back the ALT+F1 autologin, find and edit the file: + `$vim /etc/systemd/system/autologin@.service`, uncomment and change the line to + `ExecStart=-/sbin/agetty --autologin mypie --noclear %I $TERM` *Make your sudo passwordless again (I don't recommend this either)* * Move yourself (root) to `cd /etc/sudoers.d` * Rename the file `010_pi-nopasswd` to `010_mypie_nopasswd` * Open it with `vim 010_mypie_nopasswd` and change the line `pi ALL=(ALL) NOPASSWD: ALL` to, obviously, `mypie ALL=(ALL) NOPASSWD: ALL`. It is read-only, so save it forcing with `:x!` *While you are at it, change your hostname* * Edit `$vim /etc/hosts` and change `127.0.1.1 raspberry` to something more appropriate like `127.0.1.1 myoven`.
* Edit `$vim /etc/hostname` and leave a single line with `myoven`. *Done* ### Step 6: reboot * Type, carefully, `reboot` --- ### Appendix - ssh * You may want to do this via ssh. For this to work, first you need to allow root login. * Find the file `/etc/ssh/sshd_config` * Comment out the line `#PermitRootLogin without-password` * Add the line `PermitRootLogin yes` * Save, exit, restart ssh with `/etc/init.d/ssh restart` --- * After you have done it, undo these changes, as they are too dangerous to leave that way. * In the same file, delete the `PermitRootLogin yes` line and remove the comment from `PermitRootLogin without-password` --- **Note 1:** This is a guide, and the content deals with very dangerous commands. Back up first, or be aware that maybe you will need to burn your image again. As I am assuming a brand new raspberry pi, there is not much to back up anyway. But if you adapt it to another situation, be advised. **Note 2:** There might be more things to change. As I am new to the Raspberry pi (I got mine 2 days ago), I may find other adjustments I left out and I will edit this answer again. **Note 3:** My first attempt was to move the `pi` user and `pi` group to another `UID` and `GID` (1001) and create a new user for me as `1000`. That didn't quite go as I planned and I needed to burn my SD card again after spending the whole day trying to figure out why the "configure your pi" program would not work anymore. But well, this way here is far easier anyway, so here you go: a new pi with just your username as UID 1000 (and all the good stuff in your home). **Note 4:** Be advised, after doing that, the standard configuration tool stops working. [![Raspberry Pi Configuration Tool](https://i.stack.imgur.com/JeEPd.png)](https://i.stack.imgur.com/JeEPd.png) **footnote:** Thanks to the *stackexchange raspberrypi* community (as I'm new here also).
734,032
Where do you run into a real world situation involving 3 variables and 3 equations? Can someone think of a specific example from business, etc? I recall taking an operations research course that seemed to involve optimization of 3 variables, but do not recall a single example or theme. Any help is appreciated.
2014/03/31
[ "https://math.stackexchange.com/questions/734032", "https://math.stackexchange.com", "https://math.stackexchange.com/users/17139/" ]
In the spirit of Christmas and New Year's resolutions, suppose that we were on a diet and needed to eat precisely $245$ calories, $6$ grams of protein, and $7$ grams of fat for breakfast. Unfortunately, I open my cupboard to see that all I have is three boxes of cereal: Cheerios, Cinnamon Toast Crunch, and Rice Krispies. Their nutritional information per serving is as follows: ``` Cereal Calories Protein Fat Cheerios 120 4 2 Cinnamon Toast Crunch 130 3 5 Rice Krispies 105 1 2 ``` Now, normally, I would dive in and gorge myself on Cinnamon Toast Crunch$^\dagger$ because they're delicious - but, I need to stick to my New Year's resolution. First, I denote $c = $ servings of Cheerios, $t = $ servings of Cinnamon Toast Crunch, and $r = $ servings of Rice Krispies. Then, I form the following system of $3$ equations in $3$ unknowns: $$120c + 130t + 105r = 245 \ \text{calories}$$ $$4c + 3t + r = 6 \ \text{grams of protein} $$ $$2c + 5t + 2r = 7 \ \text{grams of fat} $$ Now, I leave it to you to find out if I am stuck with one bland mix of cereal$^1$, whether I will be able to form many mixtures of cereal$^2$, or if I will be forever cursed with the dreaded stomach-tire$^3$. $^1$The system has a unique solution. $^2$The system has infinitely many solutions. $^3$The system has no solution. $^\dagger$French Toast Crunch is **even better**, and baby [it's back](http://time.com/3623148/french-toast-crunch/). Note: I got the numbers for this (silly) example from [here](https://www.math.hmc.edu/~dk/math40/math40-lect03.pdf), courtesy of Dr. Dagan Karp.
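For anyone who wants to check their answer, the $3\times3$ system can be solved exactly with the standard library. A sketch using Gauss-Jordan elimination over rationals (no pivoting is needed for these particular coefficients):

```python
from fractions import Fraction

def solve3(A, b):
    # Gauss-Jordan elimination with exact arithmetic
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    n = 3
    for i in range(n):
        pivot = M[i][i]
        M[i] = [x / pivot for x in M[i]]
        for j in range(n):
            if j != i:
                factor = M[j][i]
                M[j] = [a - factor * p for a, p in zip(M[j], M[i])]
    return [M[i][n] for i in range(n)]

A = [[120, 130, 105],
     [4, 3, 1],
     [2, 5, 2]]
b = [245, 6, 7]
c, t, r = solve3(A, b)
print(c, t, r)  # 2/3 1 1/3 -- a unique (if bland) breakfast mix
```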
Linear programming. look [HERE](http://www.purplemath.com/modules/linprog4.htm) ... in fact the first example (rabbit food) involves three variables.
312,119
I have a question regarding classification in general. Let $f$ be a classifier, which outputs a set of probabilities given some data D. Normally, one would say: well, if $P(c|D) > 0.5$, we assign class 1, otherwise 0 (let this be a binary classification). My question is: what if I find out that the classifier performs better if I assign class 1 whenever the probability is larger than, for instance, 0.2? Is it legitimate to then use this new threshold when doing classification? I would interpret the need for a lower classification bound as the data emitting a smaller signal, yet one that is still significant for the classification problem. I realize this is one way to do it. However, if reducing the threshold is not the right way to think about it, what would be some data transformations that emphasize individual features in a similar manner, so that the threshold can remain at 0.5?
2017/11/06
[ "https://stats.stackexchange.com/questions/312119", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93932/" ]
Stephan's answer is great. It fundamentally depends on what you want to do with the classifier. Just adding a few examples. A way to find the best threshold is to define an objective function. For binary classification, this can be accuracy or F1-score for example. Depending on which you choose, the best threshold will be different. For F1-score, there is an interesting answer here: [What is F1 Optimal Threshold? How to calculate it?](https://stats.stackexchange.com/questions/182331/what-is-f1-optimal-threshold-how-to-calculate-it) . But saying "I want to use F1-score" is where you actually make the choice. Whether this choice is good or not depends on the final purpose. Another way to see it is facing the trade-off between exploration and exploitation (Stephan's last point): The [multi-armed bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit) is an example of such a problem: you have to deal with two conflicting objectives of acquiring information and choosing the best bandit. One Bayesian strategy is to choose each bandit randomly with the probability it is the best. It's not exactly classification but dealing with output probabilities in a similar way. If the classifier is just one brick in decision making algorithm, then the best threshold will depend on the final purpose of the algorithm. It should be evaluated and tuned in regard to the objective function of the whole process.
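Choosing a threshold against an explicit objective can be sketched directly; F1 is used here, and the scores and labels are invented for illustration:

```python
def f1_at(threshold, scores, labels):
    preds = [s >= threshold for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# invented validation scores and true labels
scores = [0.1, 0.25, 0.3, 0.45, 0.6, 0.8, 0.9]
labels = [0, 1, 0, 1, 1, 1, 1]

best = max((t / 100 for t in range(1, 100)),
           key=lambda t: f1_at(t, scores, labels))
print(best)  # well below 0.5 for this toy data
```

On data like this, the F1-optimal cut sits far below 0.5, which is exactly the situation the question describes: a lower threshold is "legitimate" once it is the value that maximises your stated objective on held-out data.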
There is no wrong threshold. The threshold you choose depends on your objective in your prediction, or rather on what you want to favor, for example precision versus recall (try to graph it and measure its associated AUC to compare different classification models of your choosing). I am giving you this example of precision vs recall because of the problem case I am working on right now: I choose my threshold depending on the minimal precision (or PPV, Positive Predictive Value) I want my model to have when predicting, but I do not care much about negatives. As such, I take the threshold that corresponds to the wanted precision once I have trained my model. Precision is my constraint and recall is the performance of my model when I compare it to other classification models.
64,436,210
``` import pickle class player : def __init__(self, name , level ): self.name = name self.level = level def itiz(self): print("ur name is {} and ur lvl{}".format(self.name,self.level)) p = player("bob",12) with open("Player.txt","w") as fichier : record = pickle.Pickler(fichier) record.dump(p) ``` This is the error I get: `write() argument must be str, not bytes`
2020/10/19
[ "https://Stackoverflow.com/questions/64436210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14334596/" ]
It's common to convert binary to ASCII, and there are several different commonly used protocols to do it. This example does a base64 encoding. Hex encoding is another popular choice. Whoever consumes this file will need to know its encoding. But it also needs to know it's a Python pickle, so not much extra labor there. ``` import pickle import binascii class player : def __init__(self, name , level ): self.name = name self.level = level def itiz(self): print("ur name is {} and ur lvl{}".format(self.name,self.level)) p = player("bob",12) with open("Player.txt","w") as fichierx: fichierx.write(binascii.b2a_base64(pickle.dumps(p)).decode('ascii')) print(open("Player.txt").read()) ```
I've been trying to follow all of the chat in the comments, and none of it makes much sense to me. You don't want to "stock it in binary", but Pickle is a binary format, so by choosing Pickle as your serialization method, that decision has already been made. Your easy answer is therefore just what you say in your subject line...use 'wb' instead of 'w', and move on (remembering to use 'rb' when you read your file back in): ``` p = player("bob",12) with open("Player.pic", "wb") as fichierx: pickle.dump( p, fichierx ) ``` If you really want to use a text-based format...something human readable, that wouldn't be difficult given the data I see in your object. Just store your fields in a dict, add `load` and `store` methods to your object, and use the `json` library to implement those methods by reading and writing your dict from/to disk in JSON format. There is one valid reason I know of to add an "ascification" layer around your pickled data, and that's if you want to be able to copy/paste it around, like you commonly do with SSH keys, certs, etc. As someone else said, this doesn't make your data more human readable...just easier to move around, like in email and such. If this is what you want, although you didn't say so, then all of the stuff above comes back on the table, I do admit. In that case, please take all of my blabbering to mean "tell us what your requirements really are".
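Putting the `wb`/`rb` advice into a complete round trip (file handling via `tempfile` so the sketch cleans up after itself; the class mirrors the one in the question):

```python
import os
import pickle
import tempfile

class Player:
    def __init__(self, name, level):
        self.name = name
        self.level = level

p = Player("bob", 12)

fd, path = tempfile.mkstemp(suffix=".pic")
os.close(fd)
with open(path, "wb") as f:   # binary mode: pickle writes bytes
    pickle.dump(p, f)
with open(path, "rb") as f:   # and must be read back as bytes
    restored = pickle.load(f)
os.remove(path)

print(restored.name, restored.level)  # bob 12
```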
23,189,167
I am running the following stack: * ruby 2.1.1p76 (2014-02-24 revision 45161) [x86\_64-linux] * RubyGems 2.2.2 * Rails 4.1.0 * Bundler version 1.6.2 on ubuntu running apache And I am getting the following error: > > Could not find json-1.8.1 in any of the sources (Bundler::GemNotFound) > > > When I look for json as follows: ``` $ gem list | grep json json (1.8.1) multi_json (1.9.2) ``` It is there but for some reason, the message from Passenger is as follows: > > Ruby (Rack) application could not be started > > > Error message: > Could not find json-1.8.1 in any of the sources (Bundler::GemNotFound) > Exception class: > PhusionPassenger::UnknownError > > >
2014/04/21
[ "https://Stackoverflow.com/questions/23189167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/303347/" ]
For me, this problem was caused by Spring (the Rails quick-loader) not picking up Gem/path changes. I was executing `rails generate rspec:install` and getting a json-1.8.1 not found. I probably executed thirty different commands -- any of which probably had an impact to the final resolution -- but eventually doing a `bin/spring stop` allowed further `rails` commands to work because they restarted the Spring server with an updated Gem-list.
I ran into this exact error while using [rack-pow](/questions/tagged/rack-pow "show questions tagged 'rack-pow'") and [rvm](/questions/tagged/rvm "show questions tagged 'rvm'"). The issue was that pow couldn't find the gems, so using [the solution from rvm](https://rvm.io/integration/pow), I created a `.powenv` in the root of the rails app with this content: ``` # detect `$rvm_path` if [ -z "${rvm_path:-}" ] && [ -x "${HOME:-}/.rvm/bin/rvm" ] then rvm_path="${HOME:-}/.rvm" fi if [ -z "${rvm_path:-}" ] && [ -x "/usr/local/rvm/bin/rvm" ] then rvm_path="/usr/local/rvm" fi # load environment of current project ruby if [ -n "${rvm_path:-}" ] && [ -x "${rvm_path:-}/bin/rvm" ] && rvm_project_environment=`"${rvm_path:-}/bin/rvm" . do rvm env --path 2>/dev/null` && [ -n "${rvm_project_environment:-}" ] && [ -s "${rvm_project_environment:-}" ] then echo "RVM loading: ${rvm_project_environment:-}" \. "${rvm_project_environment:-}" else echo "RVM project not found at: $PWD" fi ``` This solved the problem.
15,328,324
``` protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { System.out.println(request.getParameter("msg").toString()); String data = request.getParameter("msg").toString(); Gson gson = new Gson(); MessageBase msggg = gson.fromJson(data, MessageBase.class); //System.out.println(msggg.Id + msggg.MessageText); } ``` --- ``` public abstract class MessageBase implements Serializable { public int Id; public String MessageText; public Date ReceiveDate; } public class SyncSmsMessage extends MessageBase { public String SenderNum; } ``` The code works until `MessageBase msggg=gson.fromJson(data, MessageBase.class);`. I get this exception: ``` java.lang.RuntimeException: Failed to invoke public com.example.syncapp.MessageBase() with no args at com.google.gson.internal.ConstructorConstructor$2.construct(ConstructorConstructor.java:94) at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:162) at com.google.gson.Gson.fromJson(Gson.java:795) at com.google.gson.Gson.fromJson(Gson.java:761) at com.google.gson.Gson.fromJson(Gson.java:710) at com.google.gson.Gson.fromJson(Gson.java:682) at AndroidServlet.doPost(AndroidServlet.java:75) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) ``` What do I need to do? I put the .jar in the lib folder and I think Tomcat loads the .jar correctly.
2013/03/10
[ "https://Stackoverflow.com/questions/15328324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1612863/" ]
I had the same problem when using Retrofit and figured out this happens when using an **abstract** class, so I created an empty class that **extends** the abstract class (`MessageBase`), like this: ``` public class BaseResponse extends MessageBase { } ``` Now use `BaseResponse`, which has all the fields of `MessageBase`.
### Problem Example: You have an abstract class which is composed of abstract class members, then you extend this main class and its members too. ``` public abstract class Receipt implements Serializable { @Expose protected ReceiptMainDetails mainDetails; @Expose protected ReceiptDetails payerReceiptDetails; @Expose protected ReceiptDetails payeeReceiptDetails; } ``` (The details are all abstract too) ``` public class TransferReceipt extends Receipt { public TransferReceipt() {} public TransferReceipt(TransferDetails body, Payer payer, Payee payee) { super(body, payer, payee); } public static class TransferDetails extends ReceiptMainDetails { public TransferDetails() {} } } ``` (And the other details are extended too) Gson will not be able to transform the JSON into the object. ### Solution The Receipt can be abstract, but its members must be concrete classes, even though you may extend them again. ### Test your solution ``` @Test public void testGson() { String jsonString = "{youChooseWhatIsHere}"; Gson gson = new Gson(); TransferReceipt receipt = gson.fromJson(jsonString, TransferReceipt.class); String actualReceiptJson = gson.toJson(receipt); System.out.println(jsonString); System.out.println(actualReceiptJson); System.out.println(receipt); } ```
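The root cause is language-agnostic: a reflective deserializer must instantiate the target type, and abstract types have no instances. A Python sketch of the same failure and fix, with invented class names and a hand-rolled `from_json` standing in for `gson.fromJson`:

```python
import abc
import json

class MessageBase(abc.ABC):
    @abc.abstractmethod
    def kind(self):
        ...

class SyncSmsMessage(MessageBase):
    def kind(self):
        return "sms"

def from_json(data, cls):
    # mimics gson.fromJson: build an instance, then copy fields onto it
    obj = cls()               # fails if cls is abstract
    obj.__dict__.update(json.loads(data))
    return obj

payload = '{"Id": 1, "MessageText": "hi", "SenderNum": "555"}'

try:
    from_json(payload, MessageBase)
except TypeError:
    print("abstract target rejected")

msg = from_json(payload, SyncSmsMessage)  # concrete subclass works
print(msg.Id, msg.MessageText)  # 1 hi
```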
210,884
In *Avengers: Endgame* the Avengers attack Thanos in his garden on Titan II. Captain Marvel points out that the planet has no defences, no radar, no tracking systems, no armies and seems to be totally unguarded. When they find Thanos they discover that > > he has already destroyed the Infinity Stones > > > some two days previously. Before this happened, was Titan II as vulnerable as it was when the Avengers turned up? The Avengers are only able to track Thanos due to > > the energy surge caused by destroying the Stones. > > > But, theoretically, if they'd found him before this then would the planet have been in the same state? In other words, did Thanos have all his minions and soldiers around him but just sent them away two days before the Avengers arrived? Was there defensive military hardware which was subsequently dismantled? Or was he on his own the whole time?
2019/04/26
[ "https://scifi.stackexchange.com/questions/210884", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/64888/" ]
He was on his own the whole time. --------------------------------- **One** - Occam's Razor suggests that the simplest explanation is the correct one. There were no signs of defenses or defenders, and had they been there in the past they'd have left some signs, so the simplest explanation is that they were never there. **Two** - Thanos did not expect to need them. > > "With all six Stones, I could simply snap my fingers. They would all > cease to exist. I call that... mercy." > > > "And then what?" > > > "I finally rest. And watch the sun rise on a grateful universe.... > > > Thanos does not expect strife. He truly believes that the Universe will be grateful to him and that he can retire to a peaceful life. **Three** - Thanos has hung up his armor. Literally. [![Thanos Armor Scarecrow](https://i.stack.imgur.com/HP8i4.jpg)](https://i.stack.imgur.com/HP8i4.jpg) The fact that Thanos has hung his armor up to scare the crows away reflects how truly he believes he has retired to a peaceful life, and does not need his arms, his armor, or his army.
At the end of Infinity War, we see Thanos teleport onto that planet. He is not with his army. He even takes off his armour. In his mind, he had already decimated the only people who could've had the chance to go up against him. Also, he had the full Infinity Gauntlet. So he didn't have much to be afraid of. So, yes, the planet would've been defenseless nonetheless.
1,807,596
I am using Hibernate as my JPA provider with it connecting to a Progress database. When a NaN value is persisted it is causing lots of problems - it prevents the row from being read in certain circumstances. Is there a way to hook in to the standard double type persistence to convert NaN (and probably + and - infinity) to a different value? It does not matter if the NaN or infinity information is lost, I just want a readable row! I know I could do something like this: ``` @Column(name = "doubleColumn") public double getDoubleColumn() { return PersistedDouble.convert(doubleColumn); } ``` But I am worried about maintenance as that will have to be manually added for any doubles mapped to the database.
2009/11/27
[ "https://Stackoverflow.com/questions/1807596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/95361/" ]
[Following this discussion](https://forums.hibernate.org/viewtopic.php?f=1&t=927883), I have the feeling that Hibernate doesn't offer a way to convert NaN into something else. I think you have to prevent NaN values earlier, even before they're written to the bean member variables (for example by adding guard/conversion code to the setters). **EDIT** I fear the best of the unpleasant solutions is to use guard code and, which is even worse, an additional column on the table to flag whether the value is a number or not, which will certainly complicate the query and insert operations. But you need NaN in the database, and you can't fight the JDBC driver/database into behaving properly (and accepting NaN as valid input for NUMBER fields).
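A minimal sketch of that guard idea (the class name is hypothetical, and it assumes, as the question states, that losing the NaN/infinity information is acceptable):

```java
// Hypothetical guard: collapse NaN and +/- infinity to a plain value
// before it ever reaches the persisted bean member variable.
public class DoubleGuard {

    // Sanitize a double in the setter, so Hibernate only ever sees
    // ordinary finite values.
    public static double sanitize(double value) {
        if (Double.isNaN(value) || Double.isInfinite(value)) {
            return 0.0; // sentinel value; the exact choice is up to you
        }
        return value;
    }
}
```

A setter would then do `this.doubleColumn = DoubleGuard.sanitize(value);` instead of assigning the raw value.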
I had exactly the same problem, and with the guidance of these solutions I also prepared a custom type class extending DoubleType. Inside that class I converted NaN values to null in the set function, and vice versa for the get function, since null is OK for my database columns. I also changed the mapping for the NaN-prone columns to the custom type class. That solution worked perfectly for Hibernate 3.3.2. Unfortunately, after upgrading Hibernate to 3.6.10, it stopped working. In order to make it work again, I changed the custom type from extending DoubleType to implementing UserType. The important data type function implementations should be as follows: ``` private int[] types = { Types.DOUBLE }; public int[] sqlTypes() { return types; } @SuppressWarnings("rawtypes") public Class returnedClass() { return Double.class; } ``` And here are the get and set functions: ``` public Object nullSafeGet(ResultSet rs, String[] names, Object owner) throws HibernateException, SQLException { Double value = rs.getDouble(names[0]); if (rs.wasNull()) return Double.NaN; else return value; } public void nullSafeSet(PreparedStatement ps, Object value, int index) throws HibernateException, SQLException { Double dbl = (Double) value; if ((dbl == null) || (Double.isNaN(dbl))) ps.setNull(index, Types.DOUBLE); else ps.setDouble(index, dbl); } ```
24,530,433
I'm trying to compile for Android on Windows, I've executed `publish.sh` and `gameDevGuide.sh` successfully. my `Android.mk` has been modified by `gameDevGuide.sh` When I run `build_native.py` I get the following error: > > D:\cocos-projects\game\proj.android>build\_native.py The Selected NDK > toolchain version was 4.8 ! Android NDK: > > > ERROR:D:\cocos-projects\game\proj.android../cocos2d/plugin/publish/protocols/android/Android.mk:PluginProtocolStatic: > LOCAL\_SRC\_FILES points to a missing file Android NDK: Check that > > > D:\cocos-projects\game\proj.android../cocos2d/plugin/publish/protocols/android/./lib/armeabi/libPluginProtocolStatic.a > exists > > > D:\cocos-projects\game\proj.android../cocos2d/plugin/publish/protocols/android/Android.mk contains: > > LOCAL\_PATH := $(call my-dir) > > > include $(CLEAR\_VARS) LOCAL\_MODULE := PluginProtocolStatic > LOCAL\_MODULE\_FILENAME := libPluginProtocolStatic > > > LOCAL\_SRC\_FILES := ./lib/$(TARGET\_ARCH\_ABI)/libPluginProtocolStatic.a > LOCAL\_EXPORT\_C\_INCLUDES := $(LOCAL\_PATH)/../include $(LOCAL\_PATH) > LOCAL\_EXPORT\_LDLIBS := -llog > > > include $(PREBUILT\_STATIC\_LIBRARY) > > > The path `D:\cocos-projects\game\proj.android../cocos2d/plugin/publish/protocols/android/./lib/armeabi/libPluginProtocolStatic.a` seems wrong (notice the dot). `libPluginProtocolStatic.a` does not exist Any idea how do I fix that? (Cocos2d-x 3.2alpha)
2014/07/02
[ "https://Stackoverflow.com/questions/24530433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068397/" ]
That is somewhat the wrong approach. I would suggest something like this: ``` int arguments(){ // assuming obj takes an int as its 3rd constructor argument - could be any type // do the computation you wanted to do in A::A and return the result } class A{ public: A() : obj(non, Default, arguments()){} B obj; }; ``` Now the initialization is done before `obj` is created. You may also make `arguments` return a `B` and rely on the move constructor of `B`. Edit: The answer does not change much with the edited question. Now you have two objects, but the same logic applies: ``` class A{ public: A() : obj(doInitCalculations()), obj2(something){} B obj; C obj2; }; ``` `obj` must be initialized before `obj2` (even if you write `A() : obj2(), obj{}`) because members are constructed in the order they are declared in the class. So `doInitCalculations` is still called before either object is constructed.
You'll need a level of indirection, something like: ``` class A { unique_ptr<B> pObj; public: A { // do your pre-B stuff pObj = std::unique_ptr<B>(new B(/* args */)); }; // other stuff } ```
5,641,579
In another thread on XNA, [Callum Rogers wrote some code](https://stackoverflow.com/a/2984527/) which creates a texture with the outline of a circle, but I'm trying to create a circle filled with a color. What do I have to modify in this code to fill the circle with color? ``` public Texture2D CreateCircle(int radius) { int outerRadius = radius*2 + 2; // So circle doesn't go out of bounds Texture2D texture = new Texture2D(GraphicsDevice, outerRadius, outerRadius); Color[] data = new Color[outerRadius * outerRadius]; // Colour the entire texture transparent first. for (int i = 0; i < data.Length; i++) data[i] = Color.Transparent; // Work out the minimum step necessary using trigonometry + sine approximation. double angleStep = 1f/radius; for (double angle = 0; angle < Math.PI*2; angle += angleStep) { // Use the parametric definition of a circle: http://en.wikipedia.org/wiki/Circle#Cartesian_coordinates int x = (int)Math.Round(radius + radius * Math.Cos(angle)); int y = (int)Math.Round(radius + radius * Math.Sin(angle)); data[y * outerRadius + x + 1] = Color.White; } texture.SetData(data); return texture; } ```
2011/04/12
[ "https://Stackoverflow.com/questions/5641579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/704734/" ]
Don't use a texture for stuff like this (especially for things being in one single color!) - also don't try to do it pixel by pixel. You've got 3D acceleration for a reason. Just draw the circle similar to a pie using a triangle fan. You'll need the following vertices. * Center of the circle * x points on the circle's border. The first two points will define a line between the center of the circle and its border. The third vertex will define the first polygon. Vertices 1, 3 and 4 will then define the second polygon, etc. To get the points on the circle's border use the formulas from your example. The first angle will be 0°, the following ones multiples of (360° / *points on circle*). To get a full circle you'll need one additional point that matches the second point (the first point on the border). Depending on the number of vertices on the circle you'll get different n-gons. The more vertices you use the rounder the shape will look (at some performance cost): * (Less than 2 vertices aren't possible as a polygon requires at least 3 vertices to be drawn.) * Total of 4 points (3 points on circle) will result in a triangle. * Total of 5 points (4 point on circle) will result in a square. * Total of 6 points (5 points on circle) will result in a pentagon * ... Actually the [XNA example for drawing primites](http://msdn.microsoft.com/en-us/library/bb196414%28v=xnagamestudio.10%29.aspx) show how to draw a circle (or n-gon) using a triangle fan.
If you need to do it from scratch (though I'm guessing there are easier ways), change the way you perform the rendering. Instead of iterating through angles and plotting pixels, iterate through pixels and determine where they are relative to the circle. If they are `<R`, draw as fill color. If they are `~= R`, draw as border color.
24,377,078
I am actually trying to work out how to get these formulas right in C. These are a few lines of my code. My program is supposed to get an input from the user, which is a number of days, and give its equivalent in years, months, weeks, and days. So for example I have 730 days: converting it into years with months, weeks... should give THERE ARE 2 YEARS, 0 MONTHS, 0 WEEKS, AND 0 DAYS (not sure if this is right). Another example: I have 402 days, then it will be THERE IS 1 YEAR, 1 MONTH, 1 WEEK, 0 DAYS left (am I right?). The code that I had for "month" before was `month = (days / 365) / 30;` and I changed it to `month = (days % 365) / 30;`. Last time, when I entered a number, the answers were right; then when I entered different numbers, they turned out wrong. ``` #include<stdio.h> #include<conio.h> void main( ) { int days ,yr,mn,wk,d; printf("Enter the no of days"); scanf("%d",&days); yr = days /365; mn =(days /365)/30; printf("Years= %d \t Months= %d \t Weeks =%d \t days = %d",yr,mn,wk,d); getch(); } ``` So I am really having a hard time finding the right formula. I hope I can get a little help on what is wrong with the formula and why I'm not getting the right outputs. I must use mod for this.
2014/06/24
[ "https://Stackoverflow.com/questions/24377078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3767918/" ]
The key to this is: for each successively smaller unit of time, you need to be working with the number of days *left over* from those you have already accounted for. In the example of 50 days, there are no years, and 1 month; once you've accounted for the month, there are still 50-30=20 days left to account for. So the algorithm would be: for each successively smaller unit of time, compute how many of that unit there are, and subtract out the number of days covered by that (leaving the days left to account for).
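That subtract-as-you-go idea can be sketched as follows (shown in Java here purely for illustration; the class name is made up, and `%` computes the leftover directly, which is the "mod" the asker wants to use):

```java
// Peel off the largest units first; after each step, keep only the
// days left over, then divide what remains by the next smaller unit.
public class DayBreakdown {
    public static int[] breakDown(int days) {
        int yr = days / 365;
        days %= 365;      // days not covered by whole years
        int mn = days / 30;
        days %= 30;       // days not covered by whole months
        int wk = days / 7;
        days %= 7;        // days not covered by whole weeks
        return new int[] { yr, mn, wk, days };
    }
}
```

For the question's examples this yields 730 -> 2 years, 0 months, 0 weeks, 0 days, and 402 -> 1 year, 1 month, 1 week, 0 days.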
Okay, so I have a solution: ``` long yr = days/365; days -= yr * 365; long mn = days/30; days -= mn * 30; long wk = days/7; days -= wk * 7; // days now has remainder of days ```
3,926,393
Suppose $A$ is a $n \times n$ matrix i.e. $A \in \mathbb{C}^{n \times n}$, prove that rank($A^{n+1}$) = rank($A^n$). In other words, I need to prove that their range spaces or null spaces are equal. If it helps, $A$ is a singular matrix. Note that, I don't want to use Jordan blocks to prove this. Is it possible to prove this without using Jordan form? I can use Schur's triangularization theorem. Also, it's not known if A is diagonalizable.
2020/11/28
[ "https://math.stackexchange.com/questions/3926393", "https://math.stackexchange.com", "https://math.stackexchange.com/users/516816/" ]
**Hint** You can prove (for example via the Frobenius rank inequality) that for $k \ge 0$ $$\mathrm{rank}(A^{k+1}) - \mathrm{rank}(A^{k+2}) \le \mathrm{rank}(A^{k}) - \mathrm{rank}(A^{k+1}),$$ i.e. the amount by which the rank drops at each step is non-increasing. Therefore, $$\mathrm{rank}(A^{n+1}) < \mathrm{rank}(A^{n})$$ would imply the contradiction $\mathrm{rank}(A) \gt n$.
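A sketch of how that hint finishes, with the drops written out explicitly (this is my filling-in of the argument, reading the hint as saying that the per-step rank drop never increases, and using $\operatorname{rank}(A^0) = \operatorname{rank}(I) = n$):

```latex
Write $r_k = \operatorname{rank}(A^k)$ and $d_k = r_k - r_{k+1} \ge 0$,
so $r_0 = \operatorname{rank}(I) = n$ and the drops are non-increasing:
$d_{k+1} \le d_k$ for all $k \ge 0$.

Suppose, for contradiction, that $r_{n+1} < r_n$, i.e.\ $d_n \ge 1$.
Then $d_k \ge d_n \ge 1$ for every $k \le n$, hence
\[
  n = r_0 = r_{n+1} + \sum_{k=0}^{n} d_k \;\ge\; 0 + (n+1),
\]
which is impossible. Therefore $\operatorname{rank}(A^{n+1}) = \operatorname{rank}(A^n)$.
```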
Every matrix corresponds to a linear map. Suppose that $A=\mathcal{M}(T)$, where $T\in \mathcal{L}(V)$ and $\dim V=n$. Using Theorem 8.4 in Axler, $$\operatorname{Null}(T^n)=\operatorname{Null}(T^{n+1}).$$ According to the Fundamental Theorem of Linear Maps, $$n=\dim \operatorname{Null}(T^n)+ \dim \operatorname{Range}(T^n)$$ $$n=\dim \operatorname{Null}(T^{n+1})+\dim \operatorname{Range}(T^{n+1})$$ Hence $\dim \operatorname{Range}(T^n)=\dim \operatorname{Range}(T^{n+1})$, which implies that $\operatorname{rank}(A^n)=\operatorname{rank}(A^{n+1})$.
16,731,252
I have some hidden divs, and when a button is clicked I want to show a div. I've seen `slideDown`, but that's not exactly what I want. I want the hidden div to grow from nothing to its original size, starting from the top right corner of the (hidden) div.
2013/05/24
[ "https://Stackoverflow.com/questions/16731252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40676/" ]
``` $("#box").show('size', { origin: ["top", "right"] }, 2000); ``` Use `.toggle()` with the same parameters instead if you want to be able to hide it with the same event. First parameter is the effect we're using, `size`. Second parameter is an object of options specific to this effect, of which we only use `origin` to specify where it should resize from. Third parameter is the duration in milliseconds, which you can change at your leisure. Live example: <http://jsbin.com/uwonun/1>
This isn't entirely what you want, but it could maybe be modified: you could use the jQuery UI `.effect` method on your div with the `"size"` parameter. This is customizable in the options, and you should be able to work out how here: > > <http://jqueryui.com/effect/> > > > Good luck, and I hope this helps!
5,186,237
I'm trying to use `session[:user.return_to]` but with no success... My code: ``` def after_sign_in_path_for(resource) (session[:"user.return_to"].nil?) ? "/" : session[:"user.return_to"].to_s end ``` So, the problem is: When I check my session variables, I don't have any with those names. I would like to get to the page I was working on, right before being redirected to the login page.
2011/03/03
[ "https://Stackoverflow.com/questions/5186237", "https://Stackoverflow.com", "https://Stackoverflow.com/users/627094/" ]
In your ApplicationController.rb: ``` def after_sign_in_path_for(resource) sign_in_url = url_for(:action => 'new', :controller => 'sessions', :only_path => false, :protocol => 'http') if request.referer == sign_in_url super else stored_location_for(resource) || request.referer || root_path end end ``` `stored_location_for(resource)` is a built-in Devise method that returns the stored location the user was coming from.
You need to define the session variable before you redirect someone. ``` session[:url_back] = "/*** ```
836,877
Newer Delphi versions (including Delphi 2007, which I'm using) have a build in file history feature, which lets you revert to old files from within the IDE. That's nice, but I already have source control. Is it possible to disable this feature? I ask because the IDE auto-creates a \history folder with old versions of all your files, and that annoys me.
2009/05/07
[ "https://Stackoverflow.com/questions/836877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/672/" ]
Tools \ Options \ Editor Options \ Uncheck "Create Backup Files" ;)
[DDevExtension](http://andy.jgknet.de/blog/?page_id=10) by Andreas Hausladen has a feature to remove \_\_history directories.
706,030
Consider a C struct: ``` struct T { int x; int y; }; ``` When this is partially initialized as in ``` struct T t = {42}; ``` is *t.y* guaranteed to be 0 or is this an implementation decision of the compiler?
2009/04/01
[ "https://Stackoverflow.com/questions/706030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23424/" ]
It's guaranteed to be 0 if it's partially initialized, just like with array initializers. If it's uninitialized, its value will be unknown. ``` struct T t; // t.x, t.y will NOT be initialized to 0 (not guaranteed to be) struct T t = {42}; // t.y will be initialized to 0. ``` Similarly: ``` int x[10]; // Won't be initialized. int x[10] = {1}; // initialized to {1,0,0,...} ``` Sample: ``` // a.c struct T { int x, y; }; extern void f(void*); void partialInitialization() { struct T t = {42}; f(&t); } void noInitialization() { struct T t; f(&t); } // Compile with: gcc -O2 -S a.c // a.s: ; ... partialInitialization: ; ... ; movl $0, -4(%ebp) ;;;; initializes t.y to 0. ; movl $42, -8(%ebp) ; ... noInitialization: ; ... ; Nothing related to initialization. It just allocates memory on stack. ```
It is not an implementation decision: `t.y` is guaranteed to be 0.
21,739,527
I can't seem to click the "Editing" "Designing" "Controls" "Navigation bar" checkboxes in the View ribbon on the Page Editor. Is this a permission thing? Haven't found a way to enable it (I have it enabled on a mirror/development version so I know it works) Thanks
2014/02/12
[ "https://Stackoverflow.com/questions/21739527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1970902/" ]
``` items <- strsplit(df$items, " ") data.frame(user = rep(df$users, sapply(items, length)), item = unlist(items)) ## user item ## 1 1 23 ## 2 1 77 ## 3 1 49 ## 4 2 10 ## 5 2 18 ## 6 2 28 ## 7 3 20 ## 8 3 31 ## 9 3 84 ``` or ``` library(data.table) DT <- data.table(df) DT[, list(item = unlist(strsplit(items, " "))), by = users] ## users item ## 1: 1 23 ## 2: 1 77 ## 3: 1 49 ## 4: 2 10 ## 5: 2 18 ## 6: 2 28 ## 7: 3 20 ## 8: 3 31 ## 9: 3 84 ```
If you're willing to install my "SOfun" package or load my [`concat.split.DT` function](https://gist.github.com/mrdwab/6873058), AND if there are the same number of items in each "item" string (in your example, there are 3), then the following might be an option: ``` library(reshape2) library(data.table) melt(concat.split.DT(indf, "items", " "), id.vars="users") ``` Here's an example. ### Sample data: 3 rows, 3000 rows, and 3,000,000 rows I've added an "id" column so you can compare the output across the two options. ``` ## your sample data.frame df <- data.frame(users=c(1,2,3), items=c("23 77 49", "10 18 28", "20 31 84")) ## extended to 3000 rows df1k <- df[rep(rownames(df), 1000), ] df1k$id <- sequence(nrow(df1k)) ## extended to 3 million rows df1m <- df1M <- df[rep(rownames(df), 1000000), ] df1m$id <- sequence(nrow(df1m)) ``` ### Load the required packages * "SOfun" (only on GitHub) for `concat.split.DT` which makes use of `fread` from "data.table" to split concatenated values. * "reshape2" for `melt` * "data.table" for its awesomeness, at least version 1.8.11 --- ``` # library(devtools) # install_github("SOfun", "mrdwab") library(SOfun) library(data.table) library(reshape2) packageVersion("data.table") # [1] ‘1.8.11’ ``` Here are some functions to test the speed of Jake's answer and this one. Later I'll try to update with "dplyr" too. ``` fun1 <- function(indf) { DT <- melt(concat.split.DT(indf, "items", " "), id.vars=c("id", "users")) setkeyv(DT, c("id", "users")) DT } fun2 <- function(indf) { DT <- data.table(indf) DT[, list(item = unlist(strsplit(as.character(items), " "))), by = list(id, users)] } ``` ***Testing on 3,000 rows*** ``` microbenchmark(fun1(df1k), fun2(df1k)) # Unit: milliseconds # expr min lq median uq max neval # fun1(df1k) 17.64675 18.21658 18.79859 21.21943 71.7737 100 # fun2(df1k) 152.97974 158.44148 163.12707 199.77297 345.7508 100 ``` ***Testing (just once) on 3,000,000 rows*** Time would be in seconds here.... 
``` system.time(fun1(df1m)) # user system elapsed # 7.71 0.94 8.69 system.time(fun2(df1m)) # user system elapsed # 177.80 0.50 178.97 ``` --- ### Update @Jake makes a good point in the comments that adding an "id" made a very big difference in timings. I added it just so that the output of the two `data.table` approaches could be easily compared to see that the results were the same. Removing the "id" column and removing reference to "id" in `fun1` and `fun2` gives us the following: ``` microbenchmark(fun1a(df1M), fun2a(df1M), fun3(df1M), times = 5) # Unit: seconds # expr min lq median uq max neval # fun1a(df1M) 2.307313 2.420845 2.630284 2.822011 3.074464 5 # fun2a(df1M) 12.480502 12.491783 12.761392 13.069169 13.733686 5 # fun3(df1M) 13.976329 14.281856 14.471252 15.041450 15.089593 5 ``` Also benchmarked above is `fun3`, which is @mnel's "dplyr" approach. ``` fun3 <- function(indf) { rbind_all(do(indf %.% group_by(users), .f = function(d) data.frame( d[,1,drop=FALSE], items = unlist(strsplit(as.character(d[['items']]),' ')), stringsAsFactors=FALSE))) } ``` Pretty nice performance all answers!
26,927,538
I am trying to show `test.png` in the background, but it doesn't show up. Below is what I tried: HTML: ``` <html> <head> <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <div class="menubalkje"></div> <div class="menu"></div> <div class="content"> test </div> <img class="bgafb" src="images/test.png"> </body> </html> ``` CSS: ``` body { padding:0px; margin: 0 auto; } .menubalkje{ background-color:#b32b00; margin-left:auto; margin-right:auto; width:1200px; height:25px; } .menu{ background-color:E53700; margin-left:auto; margin-right:auto; width:1200px; height:75px; } .content{ background-color:#ff3e01; width:1200px; height:100%; margin-left:auto; margin-right:auto; } .bgafb{ position:fixed; left:-270px; -moz-transform: scaleX(-1); -o-transform: scaleX(-1); -webkit-transform: scaleX(-1); transform: scaleX(-1); filter: FlipH; -ms-filter: "FlipH"; } ``` The `img` element with id `bgafb` needs to show the background image, but doesn't.
2014/11/14
[ "https://Stackoverflow.com/questions/26927538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4197600/" ]
Add the `background` property with comma-separated values in your CSS, like: ``` #divId { background: url(images/image1.png) repeat-x, url(images/image2.png) repeat; } ``` Hope this helps. I got help from **[HERE](http://blog.thelibzter.com/css-tricks-use-two-background-images-for-one-div/)**.
If you want to use something as a background image, put it in the body of your CSS. ``` body { background-image: url('path_to_image'); } ```
27,647,407
> > Autoboxing is the automatic conversion that the Java compiler makes > between the primitive types and their corresponding object wrapper > classes. For example, converting an int to an Integer, a double to a > Double, and so on. If the conversion goes the other way, this is > called unboxing. > > > So why do we need it and why do we use autoboxing and unboxing in Java?
2014/12/25
[ "https://Stackoverflow.com/questions/27647407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Some context is required to fully understand the main reason behind this. Primitives versus classes ------------------------- **Primitive variables in Java contain values** (an integer, a double-precision floating point binary number, etc). Because [these values may have different lengths](http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html), the variables containing them may also have different lengths (consider `float` versus `double`). On the other hand, **class variables contain references** to instances. References are typically implemented as pointers (or something very similar to pointers) in many languages. These things typically have the same size, regardless of the sizes of the instances they refer to (`Object`, `String`, `Integer`, etc). This property of class variables **makes the references they contain interchangeable** (to an extent). This allows us to do what we call *substitution*: broadly speaking, **[to use an instance of a particular type as an instance of another, related type](http://en.wikipedia.org/wiki/Liskov_substitution_principle)** (use a `String` as an `Object`, for example). **Primitive variables aren't interchangeable** in the same way, neither with each other, nor with `Object`. The most obvious reason for this (but not the only reason) is their size difference. This makes primitive types inconvenient in this respect, but we still need them in the language (for reasons that mainly boil down to performance). Generics and type erasure ------------------------- Generic types are types with one or more *type parameters* (the exact number is called *generic arity*). For example, the *generic type definition* `List<T>` has a type parameter `T`, which can be `Object` (producing a *concrete type* `List<Object>`), `String` (`List<String>`), `Integer` (`List<Integer>`) and so on. Generic types are a lot more complicated than non-generic ones. 
When they were introduced to Java (after its initial release), in order to avoid making radical changes to the JVM and possibly breaking compatibility with older binaries, **the creators of Java decided to implement generic types in the least invasive way:** all concrete types of `List<T>` are, in fact, compiled to (the binary equivalent of) `List<Object>` (for other types, the bound may be something other than `Object`, but you get the point). **Generic arity and type parameter information are lost in this process**, which is why we call it [*type erasure*](http://docs.oracle.com/javase/tutorial/java/generics/erasure.html). Putting the two together ------------------------ Now the problem is the combination of the above realities: if `List<T>` becomes `List<Object>` in all cases, then **`T` must always be a type that can be directly assigned to `Object`**. Anything else can't be allowed. Since, as we said before, `int`, `float` and `double` aren't interchangeable with `Object`, there can't be a `List<int>`, `List<float>` or `List<double>` (unless a significantly more complicated implementation of generics existed in the JVM). But Java offers types like `Integer`, `Float` and `Double` which wrap these primitives in class instances, making them effectively substitutable as `Object`, thus **allowing generic types to indirectly work with the primitives** as well (because you *can* have `List<Integer>`, `List<Float>`, `List<Double>` and so on). The process of creating an `Integer` from an `int`, a `Float` from a `float` and so on, is called *[boxing](http://coderevisited.com/boxing-and-unboxing-in-java/)*. The reverse is called *unboxing*. Because having to box primitives every time you want to use them as `Object` is inconvenient, **there are cases where the language does this automatically - [that's called *autoboxing*](http://docs.oracle.com/javase/7/docs/technotes/guides/language/autoboxing.html).**
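A small illustration of that indirect route (the class name is mine; the comments mark where the compiler inserts the conversions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// List<int> is impossible, but List<Integer> works, and autoboxing
// lets us write code as if the list held plain ints.
public class BoxingDemo {

    public static int sum(List<Integer> values) {
        int total = 0;
        for (int v : values) { // auto-unboxing: Integer -> int
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        // autoboxing: each int literal becomes an Integer
        List<Integer> values = new ArrayList<>(Arrays.asList(1, 2, 3));
        System.out.println(sum(values)); // prints 6
    }
}
```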
Starting with JDK 5, Java added two important features: autoboxing and auto-unboxing. **Autoboxing** is the process by which a primitive type is automatically encapsulated in the equivalent wrapper whenever an object of that type is needed. You do not have to explicitly construct an object. **Auto-unboxing** is the process whereby the value of an encapsulated object is automatically extracted from a type wrapper when its value is required. You do not need to call a method such as *intValue()* or *doubleValue()*. The addition of autoboxing and auto-unboxing greatly **simplifies writing algorithms**, eliminating the need to manually box and unbox values. It also helps **avoid mistakes**. It is also very important **for generics**, which operate only on objects. Lastly, autoboxing facilitates working with the **Collections Framework**.
74,305,213
I am using PostgreSQL. Let's suppose I have this table named `my_table`: ``` id | idcm | stores | du | au | dtc | ---------------------------------------------------------------------------------- 1 | 20447 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-11-03 11:12:19.213799+01 | 2 | 20456 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-11-03 11:12:19.213799+01 | 3 | 20478 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-11-03 11:12:19.213799+01 | 4 | 20482 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-11-03 11:12:19.213799+01 | 5 | 20485 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | 6 | 20497 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | 7 | 20499 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | ``` I want to select only the rows where the value of `id` equals one of the elements of the array in `stores` (of that row). However, the type of `stores` is not an array; it is jsonb. So I want to get something like this: ``` id | idcm | stores | du | au | dtc | ---------------------------------------------------------------------------------- 2 | 20456 | [2, 5] | 2022-11-02 | 2022-11-15 | 2022-11-03 11:12:19.213799+01 | 5 | 20485 | [7, 5] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | 6 | 20497 | [2, 6] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | 7 | 20499 | [5, 7] | 2022-11-02 | 2022-11-15 | 2022-10-25 20:25:08.949996+02 | ``` I have tried with ``` select * from my_table where stores::text ilike id::text; ``` but it returns zero rows because I would need to put the wildcard character `%` before and after `id`, so I have tried with ``` select * from my_table where stores::text ilike %id%::text; ``` but I get a syntax error.
2022/11/03
[ "https://Stackoverflow.com/questions/74305213", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7658051/" ]
You can use the [contains operator](https://www.postgresql.org/docs/current/static/functions-json.html) after converting the ID to a single JSON value: ``` select * from the_table where stores @> to_jsonb(id) ```
Try this, first casting the `jsonb` array to a native integer array (since `any` works on arrays, not on `jsonb`): ```sql select * from my_table where id = any(translate(stores::text, '[]', '{}')::int[]); ```