Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string)
23,763,695
I have a Fortran formatted text file (here are the first 3 rows):

```
00033+3251 A B C? 6.96 5.480" 358 9.12 F0V 0.00 2.28s 1.00: 2MASS, dJ=1.3
00033+3251 Aa Ab Aab S1,E 0.62 0.273m 0 9.28 F0V 11.28 K2 1.68* 0.32* SB 1469
00033+3251 Aab Ac A E* 4.26 0.076" 0 9.12 F0V 0.00 2.00s 0.28* 2008MNRAS.383.1506
```

and the file format description:

```
--------------------------------------------------------------------------------
Bytes Format Units Label Explanations
--------------------------------------------------------------------------------
1- 10 A10 --- WDS WDS(J2000)
12- 14 A3 --- Primary Designation of the primary
16- 18 A3 --- Secondary Designation of the secondary component
20- 22 A3 --- Parent Designation of the parent (1)
24- 29 A6 --- Type Observing technique/status (2)
31- 35 F5.2 d logP ? Logarithm (10) of period in days
37- 44 F8.3 --- Sep Separation or axis
45 A1 --- x_Sep ['"m] Units of sep. (',",m)
47- 49 I3 deg PA Position angle
51- 55 F5.2 mag Vmag1 V-magnitude of the primary
57- 61 A5 --- SP1 Spectral type of the primary
63- 67 F5.2 mag Vmag2 V-magnitude of the secondary
69- 73 A5 --- SP2 Spectral type of the secondary
75- 79 F5.2 solMass Mass1 Mass of the primary
80 A1 --- MCode1 Mass estimation code for primary (3)
82- 86 F5.2 solMass Mass2 Mass of the secondary
87 A1 --- MCode2 Mass estimation code for secondary (3)
89-108 A20 --- Rem Remark
```

How can I read this file in Python? So far I have found only the [read\_fwf](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_fwf.html) function from the [pandas](http://pandas.pydata.org/) library:

```
import pandas as pd

filename = 'systems'
columns = ((0,10),(11,14),(15,18),(19,22),(23,29),(30,35),(36,44),(45,45),(46,49),(50,55),(56,61),(62,67),(68,73),(74,79),(80,80),(81,86),(87,87),(88,108))
data = pd.read_fwf(filename, colspecs = columns, header=None)
```

Is this the only possible and effective way? I hope I can do this without [pandas](http://pandas.pydata.org/). Do you have any suggestions?
2014/05/20
[ "https://Stackoverflow.com/questions/23763695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2579566/" ]
```
columns = ((0,10),(11,14),(15,18),(19,22),(23,29),(30,35),
           (36,44),(44,45),(46,49),(50,55),(56,61),(62,67),
           (68,73),(74,79),(79,80),(81,86),(86,87),(88,108))

with open('systems') as f:
    for line in f:
        dataline = [line[c[0]:c[1]] for c in columns]
```

Note that the column indices are (startbyte-1, endbyte), so a single-character field is e.g. (44,45). This leaves you with a list of strings; you probably want to convert them to floats, integers, etc. There are a number of questions here on that topic.
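To sketch that conversion step without pandas: the per-column converter choices below follow the F/I/A format codes in the question's byte table, so treat them as illustrative rather than definitive.

```python
# Column slices are (startbyte - 1, endbyte) from the format table.
COLSPECS = [(0, 10), (11, 14), (15, 18), (19, 22), (23, 29), (30, 35),
            (36, 44), (44, 45), (46, 49), (50, 55), (56, 61), (62, 67),
            (68, 73), (74, 79), (79, 80), (81, 86), (86, 87), (88, 108)]

def to_float(field):
    """Convert an F-format field; blank (all-space) fields become None."""
    field = field.strip()
    return float(field) if field else None

def to_int(field):
    """Convert an I-format field; blank fields become None."""
    field = field.strip()
    return int(field) if field else None

# One converter per column, following the A/F/I codes in the description:
# 5 strings (WDS..Type), logP, Sep, x_Sep, PA, Vmag1, SP1, Vmag2, SP2,
# Mass1, MCode1, Mass2, MCode2, Rem.
CONVERTERS = ([str.strip] * 5 +
              [to_float, to_float, str.strip, to_int, to_float, str.strip,
               to_float, str.strip, to_float, str.strip, to_float,
               str.strip, str.strip])

def parse_line(line):
    """Slice one fixed-width record and convert each field."""
    return [conv(line[a:b]) for (a, b), conv in zip(COLSPECS, CONVERTERS)]
```

Reading the whole file is then just `rows = [parse_line(l) for l in open(filename)]`.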
This type of file can be read with astropy tables. The header you show looks a lot like a CDS-formatted ascii table, which has a specific reader implemented for it: <http://astropy.readthedocs.org/en/latest/api/astropy.io.ascii.Cds.html#astropy.io.ascii.Cds>
48,300,750
**Please don't mark this question as a duplicate without understanding it.** **I have gone through all the answers that are available, but none resolves my issue.** **Scenario:** I have to add sections with expand/collapse on click; these sections are dynamic as per the API response. So I have created one XML file with the tree structure below. **main.xml** **LinearLayout->ScrollView->LinearLayout** Now I am adding a custom XML design file into this LinearLayout as per the response; the response also contains the number of questions in each section. To manage the questions, I have used a ListView in the custom XML file. Now I have to show each ListView at full height, so only the top-level scroll should work and there should be no scrolling inside a section. I have checked some answers with the common method named **setListViewHeightBasedOnChildren**, but it is not working since the ListView is added dynamically at run time. So please help me with this.
2018/01/17
[ "https://Stackoverflow.com/questions/48300750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2741598/" ]
I *think* this is what is going on. I wish the AWS docs would explicitly state it. Bookmarks alone would only let you pick up at the next piece of data (e.g. next file in S3). But for a complex job with Dynamic Frames, the job itself is stateful. To resume processing, you need to not only pick up with the next piece of input, but also restore the state you had built up within your Dynamic Frames during the last run. The `transformation_ctx` is like a filename for saving the Dynamic Frame state. You have to name it, because AWS Glue isn't going to analyze your script to figure out which dynamic frame invocation is which. Inferred primarily from [Tracking Processed Data Using Job Bookmarks](https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html), which is the same page that other answers linked, but which has somewhat clarified text since they quoted it: > > Many of the AWS Glue PySpark dynamic frame methods include an optional parameter named transformation\_ctx, which is a unique identifier for the ETL operator instance. The transformation\_ctx parameter is used to identify state information within a job bookmark for the given operator. Specifically, AWS Glue uses transformation\_ctx to index the key to the bookmark state. > > >
As mentioned in this [link](https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html), the `transformation_ctx` parameter is used for job bookmarks. If you don't want to enable job bookmarks, don't pass the parameter. Conversely, if you want to use job bookmarks, enable the job bookmark setting and pass a value via the `transformation_ctx` parameter.
41,873,321
I'm using TinyMCE WYSIWYG editor in my html form. I want to change icons (16x16 images) of existing toolbar buttons, i.e. bold button, italic button, etc. The idea is to put my own drawings instead of basic **B**, ***I***, etc icons. I found how to add a brand new button with my custom icon to toolbar via `tinymce.Editor.addButton('myButton', {... 'image': '/path/to/my/drawing'})` method, but I still have no idea how to replace icons for already existing buttons.
2017/01/26
[ "https://Stackoverflow.com/questions/41873321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6775670/" ]
I found a solution, but you may know easier ways, so feel free to answer. It can be done by creating your own skin for TinyMCE. But skins don't look easy to create, and [TinyMCESkinCreator](http://skin.tinymce.com/) doesn't seem to help with deep customization. So I found a [cool third-party skin](http://tinymceplugins.com/plugins/lightgray-no-fonts-skin) which has an evident place for storing these icons - `img/sprites16px.png`. So I was able to redraw any of them in a graphic editor. I successfully replaced the **B** (bold) icon with the Russian letter **Ж** by editing the spritesheet: [![my result](https://i.stack.imgur.com/uePyx.png)](https://i.stack.imgur.com/uePyx.png)
Try these resources: <https://www.tinymce.com/docs/configure/content-appearance/> <http://www.wpbeginner.com/wp-tutorials/how-to-add-custom-styles-to-wordpress-visual-editor/> [How to apply CSS on custom buttons in tinyMCE active editor popup](https://stackoverflow.com/questions/37185129/how-to-apply-css-on-custom-buttons-in-tinymce-active-editor-popup)
8,192,569
I have a shell script which conditionally calls a function. For example:

```
if [ "$choice" = "true" ]
then
    process_install
elif [ "$choice" = "false" ]
then
    process_exit
fi

process_install() {
    commands...
    commands...
}

process_exit() {
    commands...
    commands...
}
```

Please let me know how to accomplish this.
2011/11/19
[ "https://Stackoverflow.com/questions/8192569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626206/" ]
```
#!/bin/bash

process_install() {
    commands...
    commands...
}

process_exit() {
    commands...
    commands...
}

if [ "$choice" = "true" ]
then
    process_install
else
    process_exit
fi
```
Example of using a function in Bash:

```
#!/bin/bash
# file.sh: a sample shell script to demonstrate the concept of Bash shell functions

# define usage function
usage(){
    echo "Usage: $0 filename"
    exit 1
}

# define is_file_exists function
# $f -> store argument passed to the script
is_file_exists(){
    local f="$1"
    [[ -f "$f" ]] && return 0 || return 1
}

# invoke usage
# call usage() function if filename not supplied
[[ $# -eq 0 ]] && usage

# invoke is_file_exists
if ( is_file_exists "$1" )
then
    echo "File found: $1"
else
    echo "File not found: $1"
fi
```
255,926
This is all being done in an ASP.NET web forms application: I need a modal popup that will show the user a preview of images that match a search criteria (e.g. they search for "dog" and the popup shows them all dog-related pictures). The search results shouldn't be too long, but the popup should support the ability to scroll if necessary. I also need to capture which image they selected and send that info back to the calling page. I've taken a look at some of the options out there and am having a hard time deciding on which one to use. Any suggestions?
2008/11/01
[ "https://Stackoverflow.com/questions/255926", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1574/" ]
I prefer jQuery because it doesn't require any server-side computation. Also, if the app is ever ported to a technology other than ASP.NET, it will most likely still work.
I'd say go with ours; [Ajax Calendar Starter-Kit](http://ra-ajax.org/samples/Viewport-Calendar-Starter-Kit.aspx) (click the button to the lower left corner) but then again I am definitely biased...
69,615,183
I will try to be brief! For whatever reason, I couldn't make the entirety of my program in Python, so I had to outsource one specific task to PHP (a language I do not know very well). As the Python program runs, it is supposed to trigger the PHP program to run, and then do a few things afterwards, which is not a problem. It seems to me that, to be able to run something through Python, you need it to be able to run through cmd first, and then you can make Python use cmd to run the program. I had a few issues there, because the programs are on different drives, and the PHP program references other files and locations in the same directory and in sub-directories to where it is. This means I couldn't execute it in one line of cmd, but first had to change directory, and then execute the PHP program from the folder it's in. Because my command wasn't just one line, I made a batch file containing all the steps. My current working method is to open up cmd, change directory in cmd to where the PHP file is, and then run the PHP file. I had to add php to the "Environment Variable Path" to be able to do this. Here is the batch file that currently works when run by me:

```
cd /d C:
cd C:\Users\UserMain\Desktop\php\colorextract
php (2).php
```

When I double-click this bat file from my E drive, it successfully executes the PHP program. But when I tell Python to execute the batch file, that is where things go wrong. Here is my Python code, apologies for the name of the bat file:

```
import os
os.system('cmd /k "bitch.bat"')
```

The resultant cmd window then goes through the steps of the batch file: 1) it changes to the right directory, 2) it is unable to execute the php file because: **'php' is not recognised as an internal or external command, operable program or batch file.** Now, this is the standard error you get if you try to run a PHP program without having added php to the "Environment Variable Path". I know this because I went through that same thing.
But if I manually open a cmd window, not administrative or anything, I can 1) successfully perform the steps outlined in the batch file, and the program runs, and 2) I can even run the bat file, and that also runs the program. The cmd window opened by Python does not seem to be able to reference the "Environment Variable Path", or it is somehow handicapped for another reason against doing all the things that a normal cmd window can. How can this be fixed? Thanks in advance to anyone who reads this! **Edit: I found that Python had not detected the changes I made to the environment variables the day before, hence why Python's cmd was giving the exact error that not having php in the environment variables gives. After I restarted my computer, my code worked. Thank you to @Gerhard and @Aaron Junker for making me think much harder about this issue.**
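For reference, the batch-file indirection described above can also be avoided entirely by setting the working directory and environment on the child process from Python. The helper below is a generic sketch; the PHP paths shown in the commented usage are the ones from this question and are assumptions about your machine.

```python
import os
import subprocess

def run_in_dir(cmd, workdir, extra_path=None):
    """Run `cmd` with `workdir` as its working directory.

    If `extra_path` is given, it is appended to PATH for the child
    process only, so the child can find executables (e.g. php) even
    if the parent's environment predates a PATH change.
    """
    env = os.environ.copy()
    if extra_path:
        env["PATH"] = env.get("PATH", "") + os.pathsep + extra_path
    return subprocess.run(cmd, cwd=workdir, env=env,
                          capture_output=True, text=True)

# Hypothetical usage matching the paths above (Windows; the PHP install
# directory is an assumption -- substitute your own):
# run_in_dir(["php", "(2).php"],
#            r"C:\Users\UserMain\Desktop\php\colorextract",
#            extra_path=r"C:\php")
```

Because the environment is built fresh from `os.environ` at call time and passed explicitly, this sidesteps the stale-PATH problem without `cmd /k` or a batch file.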
2021/10/18
[ "https://Stackoverflow.com/questions/69615183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9120674/" ]
Whenever I saw this problem, I restarted the computer, and it worked immediately. I recommend trying this solution.
It happened to me because of the node.js version and an npm update. 1. Uninstall nodejs 2. Delete folder C:\Users\Administrator\AppData\Roaming\npm 3. Install nodejs (v17.x) 4. In your project folder: ``` npm install -g npm@8.3.2 npm i --package-lock-only npm audit fix --force ``` It worked for me. I hope it helps you :)
34,771,409
I have used [NSSM](https://nssm.cc/) to create a Windows Service for my **Selenium Server** instance (**v 2.48.2 of the Selenium Server Standalone JAR**). I have also set the service to log on as a Local System Account, and have allowed the service to interact with the desktop. When I have used a particular account for the service, instead of the local system account, Internet Explorer would not launch. I noticed that after a few days, the hard disk would start filling up with temporary internet files at the following location: ``` C:\Windows\SysWOW64\config\systemprofile\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5 ``` After a few days, I saw that the size of this folder was ~30 GB. I have had to manually clear out this folder. I've used the following command to create the service: ``` nssm install seleniumhub java -jar C:\selenium-server\selenium-server-standalone-2.48.2.jar -Dwebdriver.chrome.driver=C:\selenium-server\chromedriver.exe -Dwebdriver.ie.driver=C:\selenium-server\IEDriverServer.exe ``` Has anyone else run into this issue?
2016/01/13
[ "https://Stackoverflow.com/questions/34771409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2414019/" ]
You can take advantage of the built-in Sqoop metastore. You can create a simple incremental import job with the following command:

```
sqoop job \
  --create <<Job Name>> \
  -- \
  import \
  --connect <<JDBC URL>> \
  --username <<Username>> \
  --password <<Password>> \
  --table <<Table Name>> \
  --incremental append \
  --check-column <<Column Name>> \
  --last-value 0
```

and start it with the `--exec` parameter:

```
sqoop job --exec <<Job Name>>
```

Sqoop will automatically serialize the last imported value back into the metastore after each successful incremental job.
One way to get it: create a log table in the database and develop the incremental import as follows:

1. Query the log table using the `sqoop eval` command to get the last value from the last run
2. Run the `sqoop import`
3. Update the log table with the latest value using the `sqoop eval` command

You need to automate this process of `sqoop eval`, `sqoop import` and `sqoop eval`. You can submit any valid query to any database you have connectivity to using `sqoop eval`. Hence you can run a select query before the import to get the last value from the last run, and run an update query to update the log table with the last value from the current run.
6,531,459
While displaying an address from the database, I add a `,` at the end of each value. When the last value of the address is empty, I still get a `,` after the preceding value, but I don't want that. Can anyone help?

```
if ($Fetch['addr']!=''){
    echo $Fetch['addr'].',';
```

It displays

```
address,city,postalcode
```

and if I remove postalcode it displays

```
address,city,
```

but I don't need the `,` at the end when any of the values is not provided.
2011/06/30
[ "https://Stackoverflow.com/questions/6531459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/810107/" ]
Insert your values into an array, and at the end use [`join`](http://php.net/manual/en/function.join.php) to convert the array to a string:

```
$array[] = [Your Value Here];
$string = join(',', $array);
```
Use `substr()`, it will really help!

```
$str = "abcde,";
$str = substr($str, 0, -1);
```

This will remove the trailing `,` from the string.
226,307
How can I install my Brother HL2270dw laser printer? There are no drivers available in the Software Center.
2012/12/06
[ "https://askubuntu.com/questions/226307", "https://askubuntu.com", "https://askubuntu.com/users/113151/" ]
I have a blog post on how to install the printer for wireless printing in Ubuntu 12.04 64 bit. Note my version is specific to, and only tested for, the 64-bit version of Ubuntu. Brother does not include 64-bit drivers for the HL-2270DW, but I have patched the drivers to make them work with the 64-bit version. The patched drivers give access to ALL the features of the printer. The link is <http://chadchenault.blogspot.com/2012/05/brother-hl-2270dw-printer-driver.html> Steps: 1. Download the patched drivers for 64-bit Ubuntu from the link above. You can also patch them yourself using the command line and instructions from Brother's website. 2. Open a terminal window with "ctrl + alt + t". 3. Change into the directory where the files were downloaded. 4. Enter the following two commands in the terminal window:

```
sudo dpkg -i --force-all hl2270dwlpr-2.1.0-1a.i386.deb
sudo dpkg -i --force-all cupswrapperHL2270DW-2.0.4-2a.i386.deb
```

USB printing should now work. If you want to configure wifi or ethernet printing, you will need to configure CUPS. I have instructions with pictures at my blog.
I tried the instructions at the beginning of this post, but I got stuck at the end, i.e. when asked "Will you specify the DeviceURI?" - choose "No" for a USB connection or "Yes" for a network connection. I was given a list of URIs to choose from, none of which made any sense to me as a neophyte Ubuntu user. See below:

```
Will you specify the Device URI? [Y/n] ->Y
0: socket
1: http
2: ipp
3: https
4: ipps
5: lpd
6: ipp14
7: hp
8: hpfax
9: smb
10 (I): Specify IP address.
11 (A): Auto. (usb://dev/usblp0)
select the number of destination Device URI.
->11
```

It seems these instructions ended in mid-stream and thus I'm not able to proceed. Bob Thomson, Ottawa, Canada
29,075,767
Look at this example:

```
2.1.3 :001 > Stat.create!
   (0.1ms)  BEGIN
  SQL (0.3ms)  INSERT INTO `stats` (`created_at`, `updated_at`) VALUES ('2015-03-16 11:20:08', '2015-03-16 11:20:08')
   (0.4ms)  COMMIT
 => #<Stat id: 1, uid: nil, country: nil, city: nil, created_at: "2015-03-16 11:20:08", updated_at: "2015-03-16 11:20:08">
```

As you can see, the `create!` method executes the insert statement inside a useless transaction. How can I disable the transaction in this case only (without disabling transactions in the whole application)?
2015/03/16
[ "https://Stackoverflow.com/questions/29075767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2398521/" ]
The problem here is that you want to modify behavior for a class-level method. This is inherently not thread-safe, at the very least for concurrent transactions for other Stat objects. A simple workaround would be to flag the instance as not requiring a transaction:

```
class Stat < ActiveRecord::Base
  attr_accessor :skip_transaction

  def with_transaction_returning_status
    if skip_transaction
      yield
    else
      super
    end
  end
end

Stat.create! skip_transaction: true
```

If you are running on a single-threaded framework, and therefore aren't concerned with suspending transactions for Stat objects during this time, you can use class-level methods and wrap the call like so:

```
class Stat < ActiveRecord::Base
  def self.transaction(*args)
    if @skip_transaction
      yield
    else
      super
    end
  end

  def self.skip_transaction
    begin
      @skip_transaction = true
      yield
    ensure
      @skip_transaction = nil
    end
  end
end

Stat.skip_transaction { Stat.create! }
```
I don't know of any nice way of doing this. On Ruby 2.2 you can do

```
stat = Stat.new
stat.method(:save).super_method.call
```

This won't work pre Ruby 2.2 (that's when `super_method` was added), and it only works because, in the list of ancestors, transactions is the first (or last, depending on which way you order) to override save. If it wasn't, then this code would skip over the 'wrong' save method. As such, I can hardly recommend this. You could do something like

```
stat = Stat.new
m = stat.method(:save)
until m.owner == ActiveRecord::Transactions
  m = m.super_method
end
m = m.super_method
```

to automatically walk up the chain until you have found the transactions bit, but there's no telling what code you might have skipped over.
58,658,474
I have an A3-sized print document that contains images customers provide, and it fills up the document through calculation (there can sometimes be more than 300 images at once being rendered into the PrintDocument). The problem I am facing now is that when it is sent to the printer, the document is too big for the printer memory to handle. Is there a way to let the printer print a page as soon as it is sent, rather than the whole document? My colleague suggested breaking those pages into different documents. Is that possible? I have scoured the documentation and there seems to be no way for the PrintDocument or print controller to talk with the printer to start printing a page as soon as it is received. On my test run I had a job of 360 images stuffed into 28 pages, and the document spool data went up to 2.71 GB. [Screenshot of the print queue](https://i.stack.imgur.com/rTwof.png)

```
private void PrintPageEventHandler(object sender, PrintPageEventArgs e)
{
    //set some settings

    //loop until the page has been filled up by images
    while(counter < maxImageAllowedPerPage)
    {
        e.Graphics.DrawImage(image, currentPoint.X + posX, currentPoint.Y + posY, newWidth, newHeight);
    }

    e.Graphics.Dispose();
    e.HasMorePages = (PrintedImageCount != TotalImageCount);
}
```
2019/11/01
[ "https://Stackoverflow.com/questions/58658474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6293799/" ]
You can use the external library [Gson](https://www.baeldung.com/gson-string-to-jsonobject):

```
Gson gson = new Gson();
/* You can convert to your DTO as well */
Map<Object, Object> map = gson.fromJson(myJsonObject, Map.class);
```

Another way is using fasterxml's `ObjectMapper` ([example](https://www.baeldung.com/jackson-object-mapper-tutorial)):

```
ObjectMapper objectMapper = new ObjectMapper();
/* You can convert to your DTO as well */
objectMapper.readValue(data, Map.class);
```
I have created a small class for this purpose, which can basically get a value from JSON using a path; it uses only google.gson: <https://github.com/izeryab/JsonParser> Here is how to use it to get a nested value from JSON:

```
public class Main {
    public static void main(String[] args) {
        String json = "{\"data\":[{\"stuff\":[\n" +
                "    { \"onetype\":[\n" +
                "        {\"id\":1,\"name\":\"John Doe\"},\n" +
                "        {\"id\":2,\"name\":\"Don Joeh\"}\n" +
                "    ]},\n" +
                "    {\"othertype\":[\n" +
                "        {\"id\":2,\"company\":\"ACME\"}\n" +
                "    ]}]\n" +
                "},{\"otherstuff\":[\n" +
                "    {\"thing\":\n" +
                "        [[1,42],[2,2]]\n" +
                "    }]\n" +
                "}]}";

        String name = JsonUtil.getJsonElementFromJsonStringUsingPath("data>0>stuff>0>onetype>0>name", json, ">").getAsString();
        int id = JsonUtil.getJsonElementFromJsonStringUsingPath("data>0>stuff>0>onetype>0>id", json, ">").getAsInt();

        System.out.println("id : " + id);
        System.out.println("name : " + name);
    }
}
```

Corresponding Javadocs: <https://izeryab.github.io/JsonParser/JsonUtil.html>
4,066,763
I am looking for some help regarding the pointer dereference operator **`->`**. Let me describe what I am trying to do. I am implementing a unidirectional iterator for a special container. The container is special in the sense that it does not physically allocate any space for the contained values but generates them at run time on demand. For example, consider that the container is "**M consecutive integral multiples of N**". Since I do not want to store the value directly in my iterator, I create a value on the heap on demand. When I need a pointer to the value I delete the old one if it's out-of-date and create a new one. This means that invocation of **`operator *()`** or **`operator ->()`** may `delete` an old value and `new` a new value, if the iterator has been advanced with `operator ++()` after they were last invoked. Now I would like to use a **`smart_ptr`** to point at my value rather than keep a native pointer around. In order to do so I realize I need to understand the semantics of the **`->`** operator better.

* First of all, is **`->`** a **unary** operator?
* If that be so, how does **`i->member`** work? This would translate to `(pointer returned)member`, which is not a syntactically valid form.
* **"member"** could be a data member or a member function.
* **`->()`** smells more like a binary operator that executes `(*pointer returned).member`. Since "member" is not a value, such a semantics is not equivalent to a binary operator either.
* What happens to the pointer returned by `->()`? Who is supposed to own it?
* How can I use RAII in this framework? Are reference-counted pointers the only option?
* There is no `--` operator for this iterator, so I don't need to keep the previous values around.

Thanks for your responses. Ending with a meta question: should this be a wiki?
2010/11/01
[ "https://Stackoverflow.com/questions/4066763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485493/" ]
Here are the answers to your questions, though I do not think they are as useful as you might hope:

* Yes, it is a unary operator.
* No, it does not care what `member` is. If `member` isn't a field/member of the class/struct pointed to by the return of `operator ->` then the compiler will complain.
* `member` could indeed be either a data member or a member function.
* No, it's a unary operator since it does nothing whatsoever with `member`. It's there just to implement smart pointers. It is perfectly valid to say `return_type *ptr = smartptr.operator ->()`.
* It's owned by whatever returns it. But this is purely by convention. You can decide whatever you want. But if you decide anything other than that, it will surprise a whole ton of people to the point they will think they have a bug in their program, and will never think to wonder if your smart pointer implementation has some bizarre semantics nobody else uses.
* If you use `operator ->` for anything at all other than implementing a smart pointer, I have no intention of helping you write code that programmers after you will revile and make fun of.
* This is a statement, not a question, so I have no answer for it. :-)

I question your desire to point at your value. It seems to me like you could hold it by value as a member value of your iterator, and I will give an example of how this could work below. But if you're set on using a pointer to your value, and want to use a smart pointer, just return the result of `smartptr.operator ->()` for your own `operator ->()`. You might also use the member function of your smart pointer (often `get`) that returns a 'bare' pointer, and that would likely confuse people a little less.
Here is a sample of how your example container should work:

```
class multiples {
 public:
   multiples(int n, int starting_multiplier, int ending_multiplier)
        : n_(n), starting_(starting_multiplier), ending_(ending_multiplier)
   {
   }

   class const_iterator {
      friend class multiples;
    public:
      const int &operator *() const { return curval_; }
      const int *operator ->() const { return &curval_; }

      const const_iterator &operator ++() { curval_ += n_; return *this; }
      const const_iterator operator ++(int) {
         const_iterator tmp(*this);
         curval_ += n_;
         return tmp;
      }

      bool operator ==(const const_iterator &b) const { return curval_ == b.curval_; }
      bool operator !=(const const_iterator &b) const { return curval_ != b.curval_; }

    protected:
      explicit const_iterator(int n, int starting) : n_(n), curval_(starting * n) {}

    private:
      const int n_;
      int curval_;
   };

   const_iterator begin() const { return const_iterator(n_, starting_); }
   const_iterator end() const { return const_iterator(n_, ending_); }

 private:
   const int n_, starting_, ending_;
};
```
Much of it is answered by others. I'd still like to share how `->` operator overloading works. When we do `pA->Function()`, something like this happens behind the scenes: `(pA.operator->())->Function()`. In an RAII framework, the pointer class needs to behave as if it were an actual pointer, so one needs to overload the arrow (`->`) operator. An example of an overloaded `->` operator in RAII: `T* operator->() { return m_ptr; }`
57,024,549
I am learning how to use Hyperledger fabric and Chaincode. Basically, I followed this [link](https://medium.com/coinmonks/hyperledger-fabric-account-based-wallet-java-chaincode-8cbf80a6fb82) to create a Java chain code in Hyperledger fabric, but I can't invoke my chain code in the cli/channel. I used `BYFN.sh` from Hyperledger fabric official guide to building up the fabric network and followed the blog to make Gradle & Java Chaincode. After I tried to invoke it with ```sh peer chaincode invoke -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["createWallet","tom","100"]}' ``` I got this error: > > Error: error getting endorser client for invoke: endorser client failed to connect to peer0.org2.example.com:7051: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.21.0.4:7051: connect: connection refused" > > > It would be very helpful if someone can point a direction and tell me what might possibly be wrong. I tried to google it but I could not find any useful information. --- update: Turns out it is a port number issue. I went into one of the peer's container and turns out that the port number is 9051 instead of 7051. So I change the port number in the chaincode invoke command and everything works fine.
2019/07/14
[ "https://Stackoverflow.com/questions/57024549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11781507/" ]
You are specifying the same port number for peer0.org1.example.com and peer0.org2.example.com in your command. If you have created your network using the BYFN topology, peer0.org2 listens on port 9051. Also, when asking this type of question, it's always better to provide your configuration, i.e. the docker-compose.yaml files; that is very useful for understanding the question. It may be that in your compose .yaml file you have not mapped the port number of Org2 to a different port number. You can try any of the following:

1. Look at the port number defined in the docker yaml file and use that in the command for invoking chaincode. It should be different for every peer.
2. Modify the yaml file so that 7051 redirects to a new port number, say 9051, like below:

```
environment:
  - CORE_PEER_ID=peer0.org2.example.com
  - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
  - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
  - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.example.com:7051
  - CORE_PEER_LOCALMSPID=Org2MSP
volumes:
  - /var/run/:/host/var/run/
  - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
  - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
  - peer0.org2.example.com:/var/hyperledger/production
ports:
  - 9051:7051
  - 9053:7053
```

Look at the end, where port number 7051 is mapped to 9051.
You could check out the free tool Chaincoder on <https://www.chaincoder.org>, which is a great start for beginners. It uses the standard CLI for the Fabric network, chaincode, channels... within an IDE.
209,269
I am currently twirling around in circles trying to find this word. I've heard it used before, and I want to use it in something I am writing, but I can't remember it. On the tip of my brain, but not falling off. Anyway, the word means something like to state an obvious truth, one so obvious that there is little point in stating it. If I were to use it in a sentence, it would be found in the blank: "To state it is to utter a \_\_\_\_\_\_\_\_." Well, that might be a poor sample sentence. Edited to add: Okay, those are all good thesaurus words, but not the one I am looking for. A further clue is that the word is somewhat pejorative, or perhaps somewhat negative. And it is definitely a noun. I will recognize it when it is posted, I promise.
2014/11/20
[ "https://english.stackexchange.com/questions/209269", "https://english.stackexchange.com", "https://english.stackexchange.com/users/20156/" ]
A term that comes to mind from logic is [*vacuous truth*](http://en.wikipedia.org/wiki/Vacuous_truth) (statement of what cannot be untrue), like WP's example: > > [The] statement 'all cell phones in the room are turned off' may be true simply because there are no cell phones in the room." > > > Another from logic is [*tautology*](http://en.wikipedia.org/wiki/Tautology_%28logic%29) (statement of what must be true). I've heard this colloquially applied to phrases like *First things first*, *Enough is enough*, or *Tomorrow is another day*. I've sometimes found use in *shibboleth* for the cases in between, as [MW](http://www.merriam-webster.com/dictionary/shibboleth) says: > > an old idea, opinion, or saying that is commonly believed and repeated but that may be seen as old-fashioned or untrue > > >
Are you thinking of an [*axiom*](http://www.thefreedictionary.com/Axiom)? > > A self-evident or universally recognized truth > > > There's also an [*aphorism*](http://www.thefreedictionary.com/aphorism): > > a terse saying embodying a general truth or astute observation. > > > Along the lines of axiom is *maxim*, and along the lines of an aphorism is an *apophthegm*. There's also [*truism*](http://www.thefreedictionary.com/truism): > > a common statement that is obviously true; a self-evident, obvious truth, esp. a *cliché*. > > >
47,996,071
I have a TAB file something like: ``` V I 280 6 - VRSSAI N V 2739 7 - SAVNATA A R 203 5 - AEERR Q A 2517 7 - AQSTPSP S S 1012 5 - GGGSS L A 281 11 - AAEPALSAGSL ``` and I would like to check the last column against the order of the letters in the 1st and 2nd columns. If the first and last letters of the last column match the letters in the 1st and 2nd columns respectively, the line stays identical. Otherwise, I would like to locate the reverse-order pattern ($2 followed by $1) in the last column, print the string from the $1 letter to the end, and then append the beginning of the string up to the $2 letter. The desired output would be: ``` V I 280 6 - VRSSAI N V 2739 7 - NATASAV A R 203 5 - AEERR Q A 2517 7 - QSTPSPA S S 1012 5 - SGGGS L A 281 11 - LSAGSLAAEPA ``` I have tried different scripts, but they do not work correctly and I don't know exactly why. ``` awk 'BEGIN {FS=OFS="\t"}{gsub(/$2$1/,"\t",$6); print $1$7$6$2}' "input" > "output"; ``` Another attempt is: ``` awk 'BEGIN {FS=OFS="\t"} {len=split($11,arrseq,"$7$6"); for(i=0;i<len;i++){printf "%s ",arrseq[i],arrseq[i+1]}' "input" > "output"; ``` I also tried using the substr function, but nothing works correctly. Is it possible to do this in bash? Thanks in advance. To understand the question better, here is an example: ``` $1 $2 $6 L A AAEPALSAGSL (reverse pattern 'AL' $2$1) ``` The desired output in $6 goes from the corresponding $2 letter within the reverse pattern to the end, followed by the first letter up to the corresponding $1 letter within the reverse pattern: ``` $1 $2 $6 L A LSAGSLAAEPA ```
2017/12/27
[ "https://Stackoverflow.com/questions/47996071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8405531/" ]
You can try this awk; it's not perfect, but it gives you a starting point. ``` awk '{i=(match($6,$1));if(i==1)print;else{a=$6;b=substr(a,i);c=substr(a,1,(i-1));$6=b c;print}}' OFS='\t' infile ```
``` gawk ' BEGIN{ OFS="\t" } $6 !~ "^"$1".*"$2"$" { $6 = gensub("(.*"$2")("$1".*)", "\\2\\1", 1, $6) } {print} ' input.txt ``` **Output** ``` V I 280 6 - VRSSAI N V 2739 7 - NATASAV A R 203 5 - AEERR Q A 2517 7 - QSTPSPA S S 1012 5 - SGGGS L A 281 11 - LSAGSLAAEPA ```
5,704
I'm considering getting a Kindle, but am still doubting as to whether the 3G model is worth the extra expense. As I'm also considering getting an iPhone later: on the off chance that I'd ever want to buy a book without a WiFi hotspot nearby, wouldn't it be possible to share an iPhone's 3G connection with the Kindle through WiFi and hence buy a book and read it on the WiFi-only Kindle?
2010/12/30
[ "https://apple.stackexchange.com/questions/5704", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/2184/" ]
For more information, see the full article “iOS 4.3 lets iPhone 4 host wifi over 3G”: <http://devicegadget.com/apple/ios-4-3-lets-iphone-4-host-wifi-3g/1320/> Thx.. ;)
1. You can create a mobile hotspot with an iPhone (if you pay your carrier for the privilege and your carrier supports it) 2. A Kindle Fire (which is WiFi only) can connect to a mobile hotspot created by an iPhone to connect to the Internet. Given 1 and 2, I assume a "regular" Kindle can do the same. ps - The mobile hotspot prices are insane, and both AT&T and Verizon have done nothing but raise prices as time goes on. But it's possible to do, and if you only need it for a short while you can cancel it when you are done. I would *not* spend the money on a 3G Kindle unless you wanted to get a Kindle 3 (aka "Kindle Keyboard") where the 3G can be used for more than just downloading books. The new Kindle 4s which have 3G do not let you web browse using 3G, only WiFi. (And if you have an iPhone, there's no way you would *ever* want to use the horrible Kindle browser anyway.)
12,002,987
I want to edit my .htaccess to prevent the current folder plus all subfolders to cache content. The site contains html, images, swf, xml files. I tried this, but the server did not like that. ``` <FilesMatch ".(js|css|pdf|swf)$"> Header set Cache-Control no-cache </FilesMatch> ``` > > Internal Server Error > > > The server encountered an internal error or misconfiguration and was > unable to complete your request. > > > Please contact the server administrator, xxx@xxx.com and inform them > of the time the error occurred, and anything you might have done that > may have caused the error. > > > More information about this error may be available in the server error > log. > > > Additionally, a 500 Internal Server Error error was encountered while > trying to use an ErrorDocument to handle the request. > > > Apache/2.2.14 (Ubuntu) Server at www.xxx.com Port 80 > > > Any tips?
2012/08/17
[ "https://Stackoverflow.com/questions/12002987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/473841/" ]
Here are a few tips that can lead you to a solution: 1- [kernel infos](http://linux.die.net/man/2/uname) 2- You can get that information in `/proc/cpuinfo`. Google for 'Proc Filesystems'. 3- [extract system infos](http://www.daniweb.com/software-development/c/threads/33551/c-program-to-extract-system-info) 4- Another StackOverflow question that might help is [here](https://stackoverflow.com/questions/448811/calculate-total-cpu-usage) From there, I think you can work your way to the solution.
You'll need to follow three steps: 1) Use system() or popen() to invoke a shell command or utility from your program (refer: [this](http://www.cs.uleth.ca/~holzmann/C/system/shell_commands.html)). 2) Decide on a utility from [this list](http://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html) to generate utilization data. 3) Capture this data in your program and parse it to gather the relevant info.
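The capture-and-parse pipeline in step 3 can be sketched as follows. This is my own illustration (in Python rather than C, for brevity; the C version would use `popen()` the same way), and it uses a stand-in `echo` command so it runs anywhere. On a real Linux box you would replace it with something like `head -n1 /proc/stat`:

```python
import subprocess

def parse_cpu_line(line):
    """Parse a /proc/stat-style 'cpu' line into named jiffy counters."""
    parts = line.split()
    names = ["user", "nice", "system", "idle"]
    return dict(zip(names, (int(p) for p in parts[1:5])))

def run_and_parse(cmd):
    """Invoke a shell utility and parse the first line of its output."""
    out = subprocess.check_output(cmd, shell=True, text=True)
    return parse_cpu_line(out.splitlines()[0])

if __name__ == "__main__":
    # Stand-in command; a real tool would run e.g. "head -n1 /proc/stat".
    print(run_and_parse("echo 'cpu 10 20 30 40'"))
    # → {'user': 10, 'nice': 20, 'system': 30, 'idle': 40}
```

Utilization over an interval would then be computed from the difference between two such samples.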
1,019,907
I have various time-series I'd like to correlate and present as either a csv-file or in-memory datatable (.NET). These time-series are arrays of time-value-pairs (actually these are objects containing more than just time and value). The time-series may span across different overlapping periods, and some might even have holes (missing values for given timestamps). For those interested, I'm using the OPC HDA .NET library to extract historic time-series from an OPC HDA server. The resulting datatable should have one column for each time-series all in chronological order based on a timestamp column. See example below: ``` |-------|-------|-------|-------|-------| TIME TS1 TS2 TS3 TS4 |-------|-------|-------|-------|-------| 1 X X X |-------|-------|-------|-------|-------| 2 X X X X |-------|-------|-------|-------|-------| 3 X X X |-------|-------|-------|-------|-------| 4 X X X |-------|-------|-------|-------|-------| 5 X X X |-------|-------|-------|-------|-------| ``` What would be the most effective way of achieving this? With "effective" I mean with the least amount of code. But considering that the timeseries could become quite large, memory usage might also be an issue.
2009/06/19
[ "https://Stackoverflow.com/questions/1019907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32671/" ]
You might go with a data structure like a nested dictionary and iterate over the contents: ``` Dictionary <TimeSeries, Dictionary<DateTime, Value>> dict = new Dictionary<TimeSeries, Dictionary<DateTime, Value>>(); foreach (TimeSeries series in dict.Keys) { //table row output code goes here Dictionary<DateTime, Value> innerDict = dict[series]; foreach (DateTime date in innerDict.Keys) { Value seriesValueAtTimeT = innerDict[date]; //table column output code goes here } } ``` Where your output code is writing out to something else, depending on your needs, and you replace the datatypes TimeSeries, Value, etc., with your actual data types.
Hey Chris. I realize you've already accepted an answer, but thought I'd post a solution I use. Hopefully it's of some use to someone. If not at least it provides a place for me to find it sometime in the future. :-) This is VBA code fired off directly from an Excel 2007 code module. It could be easily converted to .Net. The key for the data manipulation is the pivot table object. I've found it to be massively efficient at getting the data into the layout you've specified. ``` Sub GetIndexData () Dim cn as ADODB.Connection, cmd As ADODB.Command, rs As ADODB.Recordset Dim rPivotTopLeft As Range, rPivotBottomRight As Range Application.ScreenUpdating = False Application.Calculation = xlCalculationManual 'Get the data.' Set cn = New ADODB.Connection With cn .Provider = "SQLOLEDB" .ConnectionString = "Database=" & mDBName & ";" & _ "Server=" & mDBServerName & ";" & _ "UID=" & mDBUserID & ";" & _ "Password=" & mDBPassword & ";" & _ "Persist Security Info=True;" .CursorLocation = adUseClient .Open End With Set cmd = New ADODB.Command Set rs = New ADODB.Recordset With cmd .ActiveConnection = adoTools.DBConnection .CommandType = adCmdText .CommandText = "SELECT YourData From YourSource WHERE YourCritera" Set rs = .Execute End With If Not (rs.BOF And rs.EOF) Then 'Check that we have some data.' 'Put the data into a worksheet.' With wsRawData .Cells.CurrentRegion.Clear Set rPivotTopLeft = .Range("A1") With ThisWorkbook.PivotCaches.Add(SourceType:=xlExternal) Set .Recordset = rs .CreatePivotTable _ TableDestination:=rPivotTopLeft, _ TableName:="MyPivotTable" End With 'Massage the data into the desired layout.' 
With .PivotTables("MyPivotTable") .ManualUpdate = True .PivotFields("Date").Orientation = xlRowField .PivotFields("Index").Orientation = xlColumnField .AddDataField .PivotFields("Return"), "Returns", xlSum .DisplayFieldCaptions = False .ColumnGrand = False .RowGrand = False .ManualUpdate = False End With mMonthCount = Range(.Range("A3"), .Cells(Rows.Count, "A").End(xlUp)).Count mIndexCount = Range(.Range("B2"), .Cells(2, Columns.Count).End(xlToLeft)).Count 'Convert pivot table to values.' Set rPivotBottomRight = .Cells(mMonthCount + 2, mIndexCount + 1) With .Range(rPivotTopLeft, rPivotBottomRight) .Copy .PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks:=False, Transpose:=False End With 'Format the worksheet.' .Range("A3").Resize(mMonthCount, 1).NumberFormat = "mmm-yy" .Range("B3").Resize(mMonthCount, mIndexCount).NumberFormat = "0.00%" Union(.Rows(2), .Columns(1)).Font.Bold = True .Cells.ColumnWidth = 7.14 .Rows(1).Delete End With rs.close Set rs = Nothing cmd.ActiveConnection = Nothing Set cmd = Nothing cn.close Set cn = Nothing End Sub ``` From there its relatively easy to leverage the built in excel regression statistics to output a correlation matrix. With this technique I'm producing a worksheet with a 600x600 matrix of correlations in about 45 seconds. Note that the .PivotFields parameters should be changed to fit the column names of your data from your data source.
18,968,020
I need to compare the first element of two different files *after* a certain phrase. So far I have this: ``` import re data1 = "" data2 = "" first = re.match(r".*Ignore until after this:(?P<data1>.*)", firstlist[0]) second = re.match(r".*Ignore until after this:(?P<data2>.*)", secondarray[0]) data1 = first.group('data1') data2 = second.group('data2') if data1 == data2: #rest of the code... ``` I want to ignore everything up to a certain point, and then save the rest into the variable. I do something almost identical to this earlier in the script and it works. However, when I run this, I get this error: ``` File "myfile.py", line [whatever line it is], in <module> data1 = first.group('data1') AttributeError: 'NoneType' object has no attribute 'group' ``` Why isn't `re.match` working properly with first and second? ### EDIT As per suggestion, I've changed `[\s\S]*` to `.*`. EDIT 2: This is what the input looks like (NOT like in the comment below): ``` Random text More random text Even more random text Ignore until after this: Meaningful text, keep this ...and everything else... ...until the end of the file here ``` That's really basically all it is: a string of text that needs to be saved from after a certain point.
2013/09/23
[ "https://Stackoverflow.com/questions/18968020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2661085/" ]
You're probably just having issues because of the newlines in your file. As Martijn Pieters pointed out in the comments to your questions, you can use the flag re.DOTALL to capture everything. So with a file like so, (named `tmp` in this example) ``` Random text More random text Even more random text Ignore until after this: Meaningful text, keep this ...and everything else... ...until the end of the file here ``` You could do something like this ``` with open('tmp') as f: first = re.match(r'.*Ignore until after this:(?P<data1>.*)', f.read(), re.DOTALL) print(first.group('data1')) ``` which gives ``` Meaningful text, keep this ...and everything else... ...until the end of the file here ```
The dot `.` character in regular expressions matches any character except a newline. So if you have your entire file as a single string, the leading `.*` only consumes up to the first newline, the pattern can never reach your phrase on a later line, and when the match fails `re.match` returns `None`. See [this](http://docs.python.org/2/library/re.html) and [this](https://stackoverflow.com/questions/15953406/python-regular-expression-matching-multiple-lines-re-dotall).
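To make the difference concrete, here is a small self-contained demonstration (the strings are made up for illustration):

```python
import re

text = "Random text\nIgnore until after this: Meaningful text"

# Without re.DOTALL, '.' cannot cross the newline, so the match fails.
print(re.match(r".*Ignore until after this:(?P<data>.*)", text))
# → None

# With re.DOTALL, '.' also matches '\n', so the pattern spans both lines.
m = re.match(r".*Ignore until after this:(?P<data>.*)", text, re.DOTALL)
print(repr(m.group("data")))
# → ' Meaningful text'
```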
6,101,473
Help. For the past two days I've tried to get Solr to index the exampledocs. I've learned a lot, but am completely stuck. I've tried it on two different Windows 7 PCs. I'm following the basic tutorial at <http://lucene.apache.org/solr/tutorial.html>, I tried with Solr 1.4 and 3.1. I can get to the Solr admin page just fine and everything seems to be working at <http://localhost:8983/solr/admin/>. Following the next step in tutorial to create sample index with java -jar post.jar \*.xml inside Windows CMD screen. The process starts to read the first file (HD.xml) and then throws me the **#400\_Missing\_Solr\_Core\_Name\_in\_path** error. I get same error when I try with **-Durl=http://localhost:8938/solr/update** command. When I run start.jar, I can see in the resulting msgs: *Warning [] Solr index directory 'solr.\data\index doesn't exist. Creating new index.* I can then see the new data\index subdirectory, but why isn't solr recognizing it when using the post.jar? I have not changed anything in the solrconfig.xml, which shows: ${solr.data.dir:} My Solr admin shows this: cwd=C:\Program Files\Solr3\example SolrHome=solr.\ My directory structure is simply: ``` Solr3\client solr3\contrib solr3\dist solr3\docs solr3\example solr3\example\etc solr3\example\example-DIH solr3\example\exampledocs solr3\example\lib solr3\example\logs solr3\example\multicore solr3\example\solr\bin solr3\example\solr\conf solr3\example\solr\data solr3\example\solr\data\index solr3\example\webapps solr3\example\work solr3\example\start.jar ``` I'm pulling my hair out just following the basic tutorial. Granted I'm using Windows OS, but what am I doing wrong? Any help would be greatly appreciated. UPDATE: FINALLY got it to work on a 3rd Win 7 machine. Have no idea what went wrong on the first two... didn't think that the file structure of Solr would be so delicate. 
I had installed Lucid on the first PC (successfully) to learn and then tried using Solr under Tomcat, (defaults to :8080 instead of standard :8983), which probably buggered things up. Was surprised that I couldn't get basic Solr on Jetty to work on a 2nd clean Win7 PC.
2011/05/23
[ "https://Stackoverflow.com/questions/6101473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/766447/" ]
Either ``` PlusMinusDiv mathBase = new PlusMinusDiv(); mathBase.Div(10,20); ``` or ``` IMathBase mathBase = new PlusMinusDiv(); ((PlusMinusDiv)mathBase).Div(10,20); ```
There are a couple of ways you could handle this, you could: 1. Simply add the new method to the interface definition, and update any existing implementations. 2. As above, but use an abstract base class which can provide a default implementation. That way any existing implementations that inherit from your abstract base class won't break, and can optionally override the implementation provided by the base class. 3. Use an extension method, e.g.: ``` public static int Div(this IMathBase mathBase, int a, int b) { return (a / b); } int result = mathBase.Div(20, 10); ```
21,479
Does the main character in a short story need a name? My story gives lots of other details about the character.
2016/03/25
[ "https://writers.stackexchange.com/questions/21479", "https://writers.stackexchange.com", "https://writers.stackexchange.com/users/18369/" ]
Presently, the most talked-about television show is the Star Wars series "The Mandalorian", which is about a member of the Mandalorian culture and his adventures with a young member of the species to which Jedi Master Yoda belongs. At the time of writing, four episodes have been released, and neither character has been named on screen, nor has there been any allusion to a real name existing for future revelation. Fans have taken to calling the titular character "Mando". Even more bizarre: because George Lucas has refused to give many details on Yoda's species (still unnamed), and because the young member of his race represents the first major development of that race in 20 years (the last being the revelation of a female member, Yaddle, in 1999's The Phantom Menace), fans refer to the child partner of Mando as "Baby Yoda", as shorthand for "unnamed baby of the unnamed race of people Yoda comes from". Of course, "The Mandalorian" was sold as a Space Western (the producers actually call it a "Space Samurai" film; more on this later), with the Dollars Trilogy as a main source of inspiration. The trilogy is a series of three movies portraying Clint Eastwood's character, known to film fans as "The Man with No Name" because he was never given a real name in the three films. The first entry of the Dollars Trilogy ("A Fistful of Dollars") was an adaptation of the Kurosawa Japanese period piece (jidaigeki) "Yojimbo", which featured a similar plot, only with samurai instead of cowboys. The character that inspired "The Man with No Name" was Toshiro Mifune's "The Ronin with No Name". This isn't uncommon: many tropes and archetype characters in samurai films have a similar Western counterpart, and many Hollywood fans of Kurosawa felt this was the best way to share these stories with Americans. The Magnificent Seven was similarly adapted from the Seven Samurai.
That said, one Hollywood director bucked this trend and decided to adapt a Kurosawa film, "The Hidden Fortress" (about a princess and her general leading an epic fight against imperial forces... from the point of view of the comic relief), into a little sci-fi film called "Star Wars" (in case you were wondering why R2-D2 and C-3PO were so prominent in the opening of "A New Hope", or where in the world Lucas got the word "Jedi" from). These aren't the only works with similar names. The X-Files' persistent foes, the group known as The Syndicate, never had their real names revealed, save for the leader. The members were only known by descriptive names such as "The Well-Manicured Man", "The First Elder", "The Second Elder", and the personal foe of Mulder, "The Cigarette Smoking Man" (aka Cancer Man, aka Morley Man, after "Morley", the fictional brand of his trademark addiction; the name comes from a prop shop that specializes in generic packaging that looks like real products). Additionally, there were two informants to Mulder in the series from within the Syndicate, known only by false names they gave to Mulder: Deep Throat and Mr. X (the former was derived from the real-life informant in the Watergate scandal, and the latter was named for the X shape of duct tape Mulder would tape to his window when he needed to contact Mr. X). There is some argument about main-character status, but one episode titled "Musings of a Cigarette Smoking Man" purports to be the (mostly true, but we don't know where the fictional parts occur) back story of the titular character... and never once uses his real name in the course of the story (we do learn he wrote a novel about his life, with his fictional self named Jack Collette and his pen name as Roman A. Clef. The former is definitely not his name, as the latter is a pun on "roman à clef", a fictional character who represents a real person. The CSM uses the roman à clef Roman A. Clef to write about Jack Collette, who is a roman à clef of Roman A. Clef... which is a roman à clef for whoever the CSM really is.)
It's a lot easier to think of the character if s/he has a name, but it isn't strictly required. In Daphne du Maurier's *Rebecca*, the first-person narrator is never named, and is only known as "the second Mrs. de Winter."
73,856,942
I have an issue with my number-to-string implementation. For some reason I keep getting 000 on my terminal. I couldn't find a solution; what is the potential issue here? I'm sure my code is broken somewhere, but I don't really see the problem. ``` #include <stdio.h> #include <stdlib.h> #include <math.h> /* public domain code feel free to use as what you wish, no warranties given*/ char finddigits(unsigned n) { char base = 6; unsigned tester = 100000; while(base % tester != 0) { base--; /* inefficient but works for now */ switch(tester) { case 100000: tester = 10000; break; case 10000: tester = 1000; break; case 1000: tester = 100; break; case 100: tester = 10; break; case 10: tester = 1; break; } } return base; } char* num2str(unsigned n) { char size = finddigits(n); char* tempbuf = malloc(size); *tempbuf = 48 + (n / pow(10, size)); for(unsigned int i = 1; i < size; i++) *(tempbuf + i) = 48 + (n % (10 * i)); return tempbuf; } int main(int argc, char* argv[]) { int numbr = 210; printf("%s \n", num2str(numbr)); /* expected 210 on string got 000 */ return 0; } ```
2022/09/26
[ "https://Stackoverflow.com/questions/73856942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19508355/" ]
You just want `num2str` to return the digit string for `n`. A few issues: 1. `finddigits` is supposed to calculate the number of digits in `n`. But, [if it works at all], it uses an algorithm I've never seen. 2. `finddigits` isn't needed in `num2str` as `num2str` can be [much] simpler by filling the buffer in the reverse direction. 3. `num2str` is too complicated. 4. Calling `num2str` from `printf` leaks memory from the `num2str` call to `malloc` --- Here's a refactored version: ``` #include <stdio.h> #include <stdlib.h> #include <math.h> char finddigits(unsigned n) { int count = 0; if (n == 0) count = 1; for (; n != 0; n /= 10, ++count); return count; } char * num2str(unsigned n) { static char buf[100]; char *dst = &buf[sizeof(buf) - 1]; // add string terminator *dst-- = 0; // we must always output a 0 if (n == 0) *dst-- = '0'; // work backwards in the array for (; n != 0; n /= 10, --dst) *dst = (n % 10) + '0'; // point to first digit in string dst += 1; return dst; } void dotest(unsigned n) { printf("n=%u '%s'\n",n,num2str(n)); } int main(int argc, char *argv[]) { dotest(210); dotest(0); dotest(12345); return 0; } ``` --- Here's the program output: ``` n=210 '210' n=0 '0' n=12345 '12345' ```
Mathematics is also "public domain". Here are two versions of one of your functions, shown with a `main()` that tests both versions with several values. Don't convert a signed integer value to unsigned for no particular reason. If you need/want the *absolute* value of a (possibly) negative number, C provides a function to achieve that. ``` #include <stdio.h> #include <stdlib.h> #include <string.h> #include <math.h> int finddigits1( int n ) { int count = n == 0; // ensure 0 is counted as 1 digit while( n ) count++, n /= 10; return count; } int finddigits2( int n ) { // log of zero is undefined return ( n == 0 ) ? 1 : (int)log10(abs(n))+1; } char *num2str( int n, char buf[] ) { printf( "finddigits1( %d ) counted %d\n", n, finddigits1( n ) ); printf( "finddigits2( %d ) returns %d\n", n, finddigits2( n ) ); strcpy( buf, "Hello world!" ); // Left as an exercise to fix your own code return buf; // returning allows direct usage by caller } int main() { int tests[] = { 123, -123, 0, 100, 54321 }; char buf[ 30 ]; // better to pass large buffer to function than use malloc() for( int i = 0; i < sizeof tests/sizeof tests[0]; i++ ) { int n = tests[i]; printf( "n = %d '%s'\n", n, num2str( n, buf ) ); } return 0; } ``` ```none finddigits1( 123 ) counted 3 finddigits2( 123 ) returns 3 n = 123 'Hello world!' finddigits1( -123 ) counted 3 // NB: Account for negative sign! finddigits2( -123 ) returns 3 n = -123 'Hello world!' finddigits1( 0 ) counted 1 finddigits2( 0 ) returns 1 n = 0 'Hello world!' finddigits1( 100 ) counted 3 finddigits2( 100 ) returns 3 n = 100 'Hello world!' finddigits1( 54321 ) counted 5 finddigits2( 54321 ) returns 5 n = 54321 'Hello world!' ```
25,136,610
I have the structure like the following in my Xcode project. ![enter image description here](https://i.stack.imgur.com/8SePH.png) I need to read all the images from the project and store them in an array. I have used the following code from other StackOverflow posts, but it's not working. ``` NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Images"ofType:@""]; NSDirectoryEnumerator *direnum; direnum = [fileManager enumeratorAtPath: filePath]; imageFolder = [NSMutableArray new]; for(NSString *filename in direnum){ if([[filename pathExtension] isEqualToString:@"png"]){ [imageFolder addObject:filename]; } } NSLog(@"Files in the folder %@",imageFolder); ``` I would appreciate it if anyone could help me complete this.
2014/08/05
[ "https://Stackoverflow.com/questions/25136610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2241037/" ]
`- (NSArray *)pathsForResourcesOfType:(NSString *)extension inDirectory:(NSString *)subpath` will get you the array you want. ``` NSArray *imagesArray = [[NSBundle mainBundle] pathsForResourcesOfType:@"png" inDirectory:@"Images"]; ```
Please use this code, hope it will work. ``` NSString *dirPath = [[[NSBundle mainBundle] bundlePath] stringByAppendingPathComponent:@"images"]; NSError * error; NSArray * images = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:dirPath error:&error]; ```
108,244
Let's assume a JPG file (or any other image/video file format): I duplicate the file, and I modify the tags on the copy. Is there a way to verify that the image inside both JPG files is actually the same? I mean, if I do a digest of both files, I will obviously get a different hash. But is there a way to extract from the JPG only the bytes representing the image, so that I can do a digest of those bytes and see that both files actually contain one and the same image? The above is in the context of trying to automate how I manage my photos, i.e. extracting photos from my phone to my pc, detecting duplicates before I back things up, etc. In case it would influence the answers, my home-made tools are written in Java, sometimes using external tools like exiftool. Addendum ======== There are two other threads ([How can I identify duplicate image files?](https://photo.stackexchange.com/questions/6326/how-can-i-identify-duplicate-image-files) and [Consolidate photos and eliminate duplicates? [duplicate]](https://photo.stackexchange.com/questions/40819/consolidate-photos-and-eliminate-duplicates)) that *appear* to be duplicates, but in my opinion, they're not. Firstly, the second thread is all about Mac and about file-naming conventions based on EXIF data. That is most certainly not what my question is about. As for the first thread, it asks for tools that can find duplicates. While a plethora of answers were provided, none of the tools did exactly what I wanted (and most were not free), except for the very last answer, "Dedup-image" (which incidentally had the lowest score in that thread...). When I dug into the script, I found exactly what I wanted, and this happened to be exactly the answer provided by @Kahovius.

In short, the answer from @Kahovius:

* is free
* is able to recognize that two files having different hashes (because they have different EXIF metadata) actually contain the *exact* same image (i.e. the exact same bytes)
* can be automated (in BASH scripts for instance)
* (as a bonus) can even detect images that are duplicates when the only transformation was a lossless rotation (the actual bytes of the image are the same, as the rotation is just a tag that was added to the file).

All other tools mentioned in the first thread were either not free, or would compare files as blobs (i.e. two files with the same image bytes but different EXIF metadata would be seen as different), or would use heuristics when comparing images (e.g. an image which has had red-eye removed could be considered a duplicate of the original image). None of this is what I wanted.
2019/05/13
[ "https://photo.stackexchange.com/questions/108244", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/84758/" ]
Here's a quick and dirty solution with `exiftool` and `cmp`: 1) Make copies of the two image files: ``` cp img1.jpg tmp1.jpg cp img2.jpg tmp2.jpg ``` 2) Use `exiftool` to remove metadata: ``` exiftool -all= tmp1.jpg exiftool -all= tmp2.jpg ``` 3) Use `cmp` to compare the temporary files: ``` cmp tmp1.jpg tmp2.jpg ``` `cmp` exits with a message if the two files are *different*; if they're the same, there's no output.
If you think the only difference is metadata (which is enough to break comparison of finger prints), you can extract the [actual data](https://en.wikipedia.org/wiki/JPEG#Syntax_and_structure) without decoding it. Be careful that JPEG is really two formats: the old [JFIF](https://en.wikipedia.org/wiki/JPEG_File_Interchange_Format) and the more recent [EXIF](https://en.wikipedia.org/wiki/Exif) (which isn't only about metadata), but I doubt that applications will convert from one to another without re-encoding (and thus breaking a comparison).
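As a sketch of that extract-without-decoding idea (my own illustration, not part of either answer): a JPEG file is a sequence of marker segments, so one can hash only the non-metadata bytes by skipping the APP0–APP15 and COM segments. This is deliberately simplified; it assumes a well-formed baseline file and ignores corner cases such as APPn segments appearing after the Start Of Scan marker:

```python
import hashlib

def image_digest(jpeg_bytes):
    """SHA-256 of a JPEG's non-metadata bytes.

    Skips APPn (0xFFE0-0xFFEF) and COM (0xFFFE) marker segments and
    hashes everything else, including the entropy-coded scan data.
    Simplified sketch: assumes a well-formed baseline file.
    """
    h = hashlib.sha256()
    data = jpeg_bytes
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    h.update(data[:2])
    i = 2
    while i < len(data):
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: scan data follows until EOI, hash it all
            h.update(data[i:])
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # counts its own 2 bytes
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):  # keep non-APPn/COM
            h.update(data[i:i + 2 + length])
        i += 2 + length
    return h.hexdigest()
```

With two synthetic byte strings that differ only in a fake APP1 (Exif) segment, the digests come out equal, while changing the scan bytes changes the digest.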
15,659,471
CODE:

```
List<Task> tasks = new List<Task>();
foreach (var item in arr) // arr contains 1000+ mail IDs
{
    tasks.Add(Task.Factory.StartNew(() =>
    {
        using (MailMessage msg = new MailMessage())
        {
            msg = getmsg(); // pseudo-code, no error here
            SmtpClient smtp = new SmtpClient();
            smtp.Host = smtpServer;
            smtp.Port = smtpPort;
            smtp.EnableSsl = isSmtpSsl != 0 ? true : false;
            smtp.Credentials = new NetworkCredential(smtpUName, smtpPass);
            smtp.Timeout = int.MaxValue;
            smtp.Send(msg); // ---throws error....
            // SQL code to insert into DB that mail was sent
        }
    }, TaskCreationOptions.LongRunning));
}
Task.WaitAll(tasks.ToArray<Task>());
```

ERROR: "Failure sending mail." INTERNAL ERROR: "Unable to read data from the transport connection: net\_io\_connectionclosed."
2013/03/27
[ "https://Stackoverflow.com/questions/15659471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1079808/" ]
Regarding the exception 'net\_io\_connectionclosed', please look at this link as it might point you in the right direction about why the connection is being closed. It could be authentication problems, firewall etc. [Enabling Tracing for System.Net](http://blogs.msdn.com/b/feroze_daud/archive/2005/05/12/416922.aspx) Please find below the code that I would use in your situation instead of spawning 1000 tasks which is not very efficient. [Sending bulk emails with Tasks Parallel.ForEach](http://talkdotnet.wordpress.com/2014/03/20/sending-bulk-emails-with-tasks-parallel-foreach/)
As Jim mentions, you're likely just hitting a throttling/rejection situation. IMHO, you're likely to be more successful by 1) using a single SmtpClient rather than N of them (so the single client could internally serialize as/if necessary) and 2) using the built-in async call ([SendMailAsync](http://msdn.microsoft.com/en-us/library/hh193996.aspx)) instead of StartNew and calling the sync version. Something like: ``` var messages = ... // get collection of MailMessage instances var smtpClient = new SmtpClient { ... }; // construct a single SmtpClient var tasks = messages.Select(smtpClient.SendMailAsync).ToArray(); Task.WaitAll(tasks); ``` I'm not sure if that's really going to help much, though - if it still fails, you might just need to make a single long-running Task that sends the emails serially.
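The throttling advice above is not C#-specific. As an illustration only (Python with a no-op standing in for the real SMTP call; none of these names are the System.Net.Mail API), bounding the number of in-flight sends with a semaphore instead of spawning 1000 unthrottled tasks looks like this:

```python
import asyncio

async def send_all(messages, limit=10):
    """Send messages concurrently, but never more than `limit` at a time."""
    sem = asyncio.Semaphore(limit)
    sent = []

    async def send_one(msg):
        async with sem:             # at most `limit` concurrent sends
            await asyncio.sleep(0)  # stand-in for the real SMTP send
            sent.append(msg)

    await asyncio.gather(*(send_one(m) for m in messages))
    return sent

mails = [f"user{i}@example.com" for i in range(1000)]
done = asyncio.run(send_all(mails, limit=10))
print(len(done))  # 1000
```

The same shape (one shared client, a concurrency cap, await-all at the end) is what the `SendMailAsync` suggestion amounts to.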
9,589,907
I'm trying to run the following to import a large volume of sales data in a text file into a database. When I run it I get the error: "Invalid object name 'PetDatabase.Sales'"

```
BULK INSERT PetDatabase.Sales
   FROM 'C:\Temp\P1.txt'
   WITH (
      FORMATFILE = 'C:\Temp\PetSales.Fmt'
   );
```

Can anyone see what's causing my problem? I do have the tables within a folder; however, when I tried PetsDatabase.Tables.Sales it made no difference.
2012/03/06
[ "https://Stackoverflow.com/questions/9589907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1091114/" ]
Ignore this answer. It was written when the question was tagged with `mysql`. Leaving the answer here to keep the comments. -- Try using `LOAD DATA INFILE` instead. <http://dev.mysql.com/doc/refman/5.1/en/load-data.html>
1. Make sure the table `PetDatabase.Sales` actually exists in your database. Note that SQL Server parses `PetDatabase.Sales` as *schema*.*table*; to reference a table in another database you need a three-part name such as `PetDatabase.dbo.Sales`.
2. Swap in whichever row and field terminator delimiters you're using. Here I'm using delimiters for a comma-separated file:

```
BULK INSERT PetDatabase.dbo.Sales
FROM 'c:\temp\p1.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO

--Check the content of the table.
SELECT * FROM PetDatabase.dbo.Sales
GO

--Drop the table to clean up the database.
DROP TABLE PetDatabase.dbo.Sales
GO
```

Also, make sure the following doesn't apply to you: If a SQL Server user is logged in using Windows Authentication, the user can read only the files accessible to the user account, independent of the security profile of the SQL Server process. When executing the BULK INSERT statement by using sqlcmd or osql, from one computer, inserting data into SQL Server on a second computer, and specifying a data\_file on a third computer by using a UNC path, you may receive a 4861 error. To resolve this error, use SQL Server Authentication and specify a SQL Server login that uses the security profile of the SQL Server process account, or configure Windows to enable security account delegation.
159,455
Given a matrix, e.g. ``` matrix = Table[a[i, j], {i, 1, 3}, {j, 1, 3}]; ``` I would like to have a function that takes `matrix` as input and returns a list of matrices representing all possible dihedral transformations of `matrix` (all possible simultaneous reflections and rotations along all columns and/or all rows). Of course I could write a monstrosity of `Do` routines, but I am wondering if there is a neat and quick way to do that in Mathematica? Thanks for any suggestion! EDIT: Forgot to mention: There should also be reflections (transposition) along the diagonal and anti-diagonal in the action of the group!
2017/11/08
[ "https://mathematica.stackexchange.com/questions/159455", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5517/" ]
Let's denote a matrix by a simple notation: ``` M[a, b, c, d, e, f, g, h, q] ``` such that the matrix form is recovered by the substitution: ``` subM = {M[x__] :> MatrixForm[Partition[{x}, Sqrt[Length[{x}]]]]}; M[a, b, c, d, e, f, g, h, q] /. subM ``` > > [![enter image description here](https://i.stack.imgur.com/CYIcy.png)](https://i.stack.imgur.com/CYIcy.png) > > > Rotations and reflections of rows is one dihedral group $D\_{2n}$ with $2n$ elements. Rotations and reflections of columns is another copy of dihedral group $D\_{2n}$. Finally, transposition along the diagonals (or, when combined with a row or column reflection, equivalently simply rotations by 90 degrees) are a superimposed $S\_2$ group with 2 elements. This means we have a semi-direct product $S\_2\rtimes D\_{2n}$ with $2(2n)^2$ elements in total. In our example $n=3$, so that we get $72$ elements. One can generate an invariant object from an `M` as follows: ``` invar[m_] := Block[{tmp, res, n, pref}, pref = m /. M[xx__] -> 1; n = Sqrt[Length[m /. M -> List]]; tmp = Partition[m/pref /. M -> List, n]; res = Table[M[RotateRight[tmp, i]], {i, 0, n - 1}] //DeleteDuplicates; res = ((Table[ M[Transpose[RotateRight[Transpose[(#[[1]])], i]]], {i, 0, n - 1}] &) /@ res) // Flatten // DeleteDuplicates; res = {res, (Reverse@# & /@ # & /@ res)} // Flatten // DeleteDuplicates; res = {res, (Transpose@Reverse@Transpose@# & /@ # & /@ res)} // Flatten // DeleteDuplicates; res = {res, (Transpose@# & /@ # & /@ res)} // Flatten // DeleteDuplicates; pref ( res /. M[x__] :> (M[x] /. List -> Sequence) /. List -> Plus) // Expand ] ``` So that we properly get 72 distinct objects in the most general case, collectively furnishing an invariant set: ``` invar[M[a, b, c, d, e, f, g, h, q]] /. subM ``` > > [![enter image description here](https://i.stack.imgur.com/yaLbF.png)](https://i.stack.imgur.com/yaLbF.png) > > > However, the function `invar[m_]` is too hacky for my taste. 
Especially when `n` grows bigger, I suspect it will become very slow. That is why I'd prefer to have a more elegant solution.
The action on the four corners of the matrix defines the group, so we are talking about DihedralGroup[4], which has 8 elements. We can construct those 8 matrices combining Transpose and Reverse: ``` dih4[mat_] := With[{list = NestList[Transpose[Reverse[#]] &, mat, 3]}, Join[list, Transpose /@ list]] ``` Now try with your matrix: ``` MatrixForm /@ dih4[matrix] ```
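For comparison, the same Reverse/Transpose construction ports directly to other languages. Here is my sketch of it in Python (plain nested lists, no libraries; `dih4` is a port of the Mathematica function above, not an existing API):

```python
def dih4(mat):
    """All 8 images of a square matrix under the dihedral group of the square."""
    def rot90(m):                     # reverse the rows, then transpose = 90° turn
        return [list(r) for r in zip(*m[::-1])]
    def transpose(m):
        return [list(r) for r in zip(*m)]
    rots = [mat]
    for _ in range(3):
        rots.append(rot90(rots[-1]))  # the four rotations
    return rots + [transpose(m) for m in rots]  # plus the four reflections

images = dih4([[1, 2], [3, 4]])
print(len(images))  # 8
```

For a generic matrix all 8 images are distinct; for symmetric matrices some coincide, just as with the Mathematica version.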
13,546,738
My computer runs at 3.6 GHz with 12 GB of RAM, and uses Windows 7 and RailsInstaller for developing Rails apps. Back when I used pure CSS, there was no issue, but since I started using Sass and Compass, I have to wait 10-15 seconds after a refresh to see new changes (and that only happens when I edit a .scss file). Does anyone know what causes this on a fairly fast machine?
2012/11/24
[ "https://Stackoverflow.com/questions/13546738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241654/" ]
The sass compile times became significantly longer with the update from sass 3.1.x to 3.2.x. There seems to be a specific concern as per Roman's response above, it's caused if you use `@extend` in deeply nested selectors since '@extend' does a lot of processing work. [Here is the link to a currently open issue on the sass project on github for this exact problem](https://github.com/nex3/sass/issues/495). That being said, the reports on that issue is more like 90-100 sec compile times (104sec for my own personal project that uses `@extend` alot to pull in foundation/bootstrap classes). So, if you're experiencing 9-10sec lags, yours might just be a general issue with the newest update being generally slower in compile times.
In my case it was because of too frequent use of the @extend directive.
531,329
When we write the Lagrangian $\mathcal{L}=\frac{1}{2}m\dot{x}^2-U(x)$, where $U$ is the potential energy, we are assuming that the mass $m$ is constant, the only variables being the velocity $\dot{x}$ and position $x$. What can be done to determine the equation of motion of the particle in case the mass is changing? I know that we cannot simply use the formula $$\dot{p}=m\ddot{x}+\dot{m}\dot{x},$$ with $p=\frac{\partial \mathcal{L}}{\partial \dot{x}}$, because it isn't Galilean invariant and the system is not closed, so some other procedure must be used. Perhaps the method of Lagrange multipliers may be used? Or via a non-standard Lagrangian that somehow reproduces the equation of motion given [here](https://en.wikipedia.org/wiki/Variable-mass_system)?
2020/02/15
[ "https://physics.stackexchange.com/questions/531329", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/237982/" ]
If you just give the mass an explicit time-dependence, $$L = \frac12 m(t) \dot{x}^2 - U(x)$$ then the Euler-Lagrange equation is $$\frac{d}{dt} (m \dot{x}) = \dot{m} \dot{x} + m \ddot{x} = - \frac{dU}{dx}.$$ It's unclear to me why you think "we simply cannot use" this result. It isn't Galilean invariant, but once you let $m(t)$ have arbitrary time-dependence, the action isn't Galilean invariant either.
In my previous answer, I included terms of order $dt$ in the Lagrangian, which makes the variation somewhat awkward. After some thought, I find it rather natural to employ the dissipation function and incorporate it as a generalized force ([Rayleigh dissipation](https://en.wikipedia.org/wiki/Rayleigh_dissipation_function)). The dissipation function is basically the power dissipated by the friction force. In this case, the ejected mass takes away part of the kinetic energy of the system, in a way similar to a dissipative force: $$ G(v) = \frac{1}{2} \frac{dm}{dt} (\dot{x}-u)^2 $$ The mass-change rate $\dot{m} =\frac{dm}{dt}$ is negative for rocket motion. The generalized force is $$ Q = \frac{\partial G}{\partial \dot{x}} = \frac{dm}{dt} (\dot{x}-u) $$ The Euler-Lagrange equation with a generalized force: $$ \frac{d}{dt} \left\{ \frac{\partial L}{\partial \dot{x}}\right\} -\frac{\partial L}{\partial x} = Q $$ Apply this to the rocket problem with constant gravity: $$ L = \frac{1}{2} m \dot{x}^2 - mgx $$ The equation of motion: $$ \frac{d}{dt} \left\{ \frac{\partial L}{\partial \dot{x}}\right\} -\frac{\partial L}{\partial x} = \frac{d (m\dot{x})}{dt} + mg = Q $$ $$ m\ddot{x} + \dot{m}\dot{x} + mg = \dot{m} (\dot{x} - u) $$ This renders the correct rocket equation of motion: $$ m \dot{v} = -\dot{m} u - m g = |\dot{m}| u - m g. $$ Another example: throwing mass overboard vertically from a boat at rate $dm/dt$, so that $u=0$: $$ m \dot{v} = 0 \to v(t) = \text{constant} $$ The mass cast off moves with the same $v$, and therefore doesn't change the velocity of the remaining boat (neglecting the vertical motion within our 1-d treatment).
223,062
Using [answer about microphone](https://electronics.stackexchange.com/a/36809/18643) I have built a simple amplifier. ![schematic](https://i.stack.imgur.com/zw9Yz.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fzw9Yz.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) I changed some of the parts due to my needs and resources. * **R1**: is now potentiometer and allows me to set the "0" of the sound to 2.5V * **R3**: should control amplification (but it doesn't seem to), so again a potentiometer * **C1**: unless the original answer meant electrolytic capacitor, there's no way to get 22uF capacitor. So I used just two 22nF caps. * **C2**: I have 10uF capacitor (salvaged from old sound card ironically), so I used that So I have set up R1 to get 2.5 volts on silence. The image displays 80Hz sine wave rather than silence, [I generated it here (works on Android)](http://onlinetonegenerator.com/). ![image description](https://i.stack.imgur.com/idIfJ.png) Now if I move the **R3.2**, instead of change to gain, I just get voltage offset, and an insignificant one. Picture describes full range (min - max - min) of that particular potentiometer: ![image description](https://i.stack.imgur.com/LmvSv.png) Note also that **increasing resistance increases voltage on output**. I could use my 50k potentiometer instead but that would give me no precision. So what is wrong with my circuit? How to control the amplification? **Note:** I displayed the analog input using [this processing script](http://pastebin.com/ENvYtsHe) and [this arduino program](http://pastebin.com/s3f4G00t), both mine but I thought I might share.
2016/03/16
[ "https://electronics.stackexchange.com/questions/223062", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/18643/" ]
Do you own a car? It's a very easy experiment, just turn everything electric on and you can "feel" the engine working harder. The reason for this is the magnetic field every current creates. Under no load you have the spinning magnetic field from the rotor, which creates a voltage in the stator. If there is a load a current starts to flow which creates a magnetic field in the reverse direction. This hinders the rotors movement, so you have to put in more mechanical power to keep the rpm constant.
The answer is Yes, Yes, Yes. (I'm a rusty electrical engineer.) Anything else would be a violation of the 2nd law of thermodynamics. Let's start with a simple magnet moving across a coil. If you then put a load on the coil, such as a resistance, swiping the magnet will attempt to induce a current, which will be limited by the resistance, and as it does so, a voltage will rise across the coil. (It takes time for charges to pass through a resistance, and a voltage is created as the charges accumulate.) This is an inductive spike. If you swipe a magnet so that an emf is generated in the coil and you have the ends of the coil unattached (open), then you will attempt to induce a current, but there will be no path for the current to flow, and a large separation of charge creates a much higher voltage spike from your coil. If you swipe a magnet so that an emf is generated in the coil and you have the ends of the coil shorted together (!), then as much current as can be induced by the changing magnetic field is allowed to flow. Note that voltage will be LOW, current will be HIGH. However, the high current induced in the coil ALSO produces a magnetic field, in the opposite direction to the magnetic field change due to the swipe. In other words, if a magnet is swiped, whatever current is allowed to flow in the coil will resist the swipe. This is how regenerative braking works. (Think stepper motors, or fancy electric cars.) In a car, this system works as a whole. The generator attempts to induce a current, but the current is resisted by the internal resistance of the battery, etc. In effect, an electrical system will only draw as much current as it needs (you can think of the current draw as being determined by the load). Your car is "governed", meaning that it will use less gas or more gas in order to maintain its idle setting.
What that means is: if the load on the generator is greater, it will be able to induce a greater current to meet that need, at the expense of an opposing field from that current, meaning the engine will be loaded down and the gas usage will rise. Now, you mentioned alternators; these are a slightly different beast in that they don't have permanent magnets. Instead, the regulators in them shunt current through their field coils to create a magnetic field. In essence, though, the same result occurs. The only effect that the regulator has (for both generators and alternators) is that there will be an upper limit on the amount of load that can be "seen" by the generator/alternator, in order to protect the windings from melting, and to maintain a more consistent system voltage. An alternator which has no load on it will free spin, as the regulator shunts no current through the field windings. A generator which has no load will --free spin-- [Edit: It won't free spin, because the permanent magnet can still induce eddy currents in the core of the armature which resist the motion, etc.], but will generate high open-circuit voltages. That gets handled with bleeder circuitry in the regulator.
30,941,365
I tried to play with database creation and queries. In order to do that I started a C# Forms application, added a Database Service, then added a table with some values, then I wanted to use some code to retrieve those values. Here's the code:

```
string conn = "data source = ./SQLEXPRESS; AttachDbFilename=C:\\Users\\Asus\\Desktop\\RobaMia\\SQLSERVER\\WindowsFormsApplication3\\WindowsFormsApplication3\\Database1.mdf; Integrated Security=True;Connect Timeout=30;User Instance=True";
SqlConnection sql = new SqlConnection(conn);
sql.Open();
MessageBox.Show("Connection Opened");
sql.Close();
```

Sadly the program throws an exception when it comes to `Open`, because it seems it cannot find the database... "Server not found or not accessible". I don't know what the problem is, what would you suggest?

---

Ok, it seems to work now but I get an incorrect syntax error for my query:

```
string conn = "Server=(LocalDB)\v11.0; AttachDbFilename=C:\Users\Asus\Desktop\RobaMia\SQLSERVER\WindowsFormsApplication3\WindowsFormsApplication3\Database1.mdf; Integrated Security=True;Connect Timeout=30;User Instance=False";
string queryString = "SELECT * FROM Table";
SqlConnection sql = new SqlConnection(conn);
sql.Open();
SqlDataAdapter adapter = new SqlDataAdapter();
SqlCommand command = new SqlCommand(queryString, sql); /* --->here I get the error*/
command.ExecuteNonQuery();
DataSet data = new DataSet();
adapter.Fill(data);
MessageBox.Show(data.ToString());
sql.Close();
```
2015/06/19
[ "https://Stackoverflow.com/questions/30941365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5028734/" ]
It looks like the data source part of your connection string is wrong. It should be:

```
"Data source=.\\SQLExpress"
```

Complete:

```
string conn = "Data source=.\\SQLExpress; AttachDbFilename=C:\\Users\\Asus\\Desktop\\RobaMia\\SQLSERVER\\WindowsFormsApplication3\\WindowsFormsApplication3\\Database1.mdf; Integrated Security=True;Connect Timeout=30;User Instance=True";
```

<https://www.connectionstrings.com/sql-server/>

As an additional note, you may be best off placing this in an `app.config` or `web.config` file, just in case you reference the connection string multiple times and later decide to change its value.
Have you tried `(LocalDB)` instead of SQLExpress?

```
"Server=(localdb)\\Test;Integrated Security=true;AttachDbFileName=myDbFile;"
```

<http://www.asp.net/mvc/overview/getting-started/introduction/creating-a-connection-string>
9,224,855
> > **Possible Duplicate:** > > [Can you write object oriented code in C?](https://stackoverflow.com/questions/351733/can-you-write-object-oriented-code-in-c) > > > Can I use C(not C++!!!) for Object Oriented Programming?
2012/02/10
[ "https://Stackoverflow.com/questions/9224855", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1125374/" ]
There is a book on exactly this: "Object-Oriented Programming with ANSI-C" by Axel-Tobias Schreiner.
I was always under the impression that you couldn't. This is why: C++ was originally called "C with Classes." There might be a way to, in effect, fake OOP in C, but I don't think that it's strictly 100% OOP. Greater minds will be able to clarify this, though.
67,761,348
I want to use the Hilt in my project for getting the helper class object as the common for my activity. I have tried the below lines of code for it **Manifest file** entry ``` <application android:name=".application.AppController" android:allowBackup="false" ``` My **AppController** class ``` @HiltAndroidApp public class AppController extends Application implements Configuration.Provider { } ``` Please check my **gradles** file for it ``` implementation "androidx.activity:activity-ktx:1.2.3" implementation 'androidx.fragment:fragment-ktx:1.3.4' implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:2.3.1" implementation "androidx.lifecycle:lifecycle-extensions:2.2.0" implementation "androidx.hilt:hilt-lifecycle-viewmodel:1.0.0-alpha03" implementation 'com.google.dagger:hilt-android:2.36' kapt 'androidx.hilt:hilt-compiler:1.0.0' kapt 'com.google.dagger:hilt-android-compiler:2.31.2-alpha' ``` Please check my **classPath** entry below ``` classpath "com.google.dagger:hilt-android-gradle-plugin:2.36" ``` Now please check my **NVModule Module** class in which i have used the Hilt ``` import dagger.Module import dagger.Provides import dagger.hilt.InstallIn import dagger.hilt.components.SingletonComponent import javax.inject.Singleton @Module @InstallIn(SingletonComponent::class) class NVModule { @Provides fun provideDeviceDiagnosticHelper(): DeviceDiagnosticHelper{ return DeviceDiagnosticHelper() } } ``` Now please check the **Activity** class in which i have injected the Hilt object ``` @AndroidEntryPoint public class AutomaticTestActivity extends AppCompatActivity { private AutomaticTestPresenter automaticTestPresenter; private AutomaticTestView automaticTestView; ActivityDeviceDiagnosticBinding mBinding; @Inject DeviceDiagnosticHelper deviceDiagnosticHelper; } ``` But I am getting the following exception while Running the app please check it once **Exception**:- > > /home/..../com/my/app/activity/Hilt\_AutomaticTestActivity.java:70: error: method 
getActivityFactory in class DefaultViewModelFactories cannot be applied to given types; > return DefaultViewModelFactories.getActivityFactory(this); > ^ > required: ComponentActivity,Factory > found: Hilt\_AutomaticTestActivity > reason: actual and formal argument lists differ in length > > > Please help me sort out this problem.
2021/05/30
[ "https://Stackoverflow.com/questions/67761348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3946958/" ]
The versions of the Hilt dependencies at the project level and the app level must be the same, e.g.:

```
classpath "com.google.dagger:hilt-android-gradle-plugin:2.38.1"
```

and

```
implementation "com.google.dagger:hilt-android:2.38.1"
kapt "com.google.dagger:hilt-android-compiler:2.38.1"
```
``` implementation "com.google.dagger:hilt-android:2.38.1" kapt "com.google.dagger:hilt-android-compiler:2.38.1" implementation 'androidx.hilt:hilt-lifecycle-viewmodel:1.0.0-alpha03' kapt 'androidx.hilt:hilt-compiler:1.0.0' ``` worked for me
42,376,575
I am learning C and I'm writing a simple program where you enter a number of days, and it says how many hours are in that many days. For example, you enter 2 days, and it says there are 48 hours. There is no error, but the result is wrong. Here is my code and my result. Code:

```
int day, hour;
hour = 24 * day;

printf("Find out how many hours are in days.\n");
printf("Enter number of days. Days = ");
scanf("%d", &day);
printf("%d days has %d hours.\n", day, hour);

return (EXIT_SUCCESS);
```

Here is the result in the console:

```
Find out how many hours are in days.
Enter number of days. Days = 1
1 days has 677268864 hours.
```
2017/02/21
[ "https://Stackoverflow.com/questions/42376575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7541617/" ]
You need to evaluate `hour` once `day` is known. You can't schedule an evaluation like that in C, if you get my meaning. (You can, interestingly, in C++, using actors.) Currently you are reading an uninitialised variable; the behaviour on doing that is undefined.
The variable `day` has not been filled yet; it likely contains junk data. To fix this, move the `hour = 24 * day;` assignment after the `scanf("%d", &day);` line; this way the variable `day` will actually have the value scanned in when the multiplication happens.
4,525,121
In a Project Euler problem I need to deal with numbers that can have hundreds of digits. And I need to perform some calculation on the first 9 digits. My question is: what is the fastest possible way to determine the first N digits of a 100-digit integer? Last N digits are easy with modulo/remainder. For the first digits I can apply modulo 100 times to get digit by digit, or I can convert the number to String and truncate, but they all are linear time. Is there a better way?
2010/12/24
[ "https://Stackoverflow.com/questions/4525121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277683/" ]
You can count the number of digits with this function:

```
(defn dec-digit-count [n]
  (inc (if (zero? n)
         0
         (long (Math/floor (Math/log10 n))))))
```

Now we know how many digits there are, and we want to keep only the first 9. All we have to do is divide the number by 10^(digits-9), or in Clojure:

```
(defn first-digits [number keep]
  (unchecked-divide number (long (Math/pow 10 (- (dec-digit-count number) keep)))))
```

And call it like: `(first-digits your-number 9)`, and I think it's in constant time. I'm only not sure about the `log10` implementation. But it's surely a lot faster than a modulo/loop solution. Also, there's an even easier solution: you can simply copy&paste the first 9 digits from the number.
In Java: ``` public class Main { public static void main(String[] args) throws IOException { long N = 7812938291232L; System.out.println(N / (int) (Math.pow(10, Math.floor(Math.log10(N)) - 8))); N = 1234567890; System.out.println(N / (int) (Math.pow(10, Math.floor(Math.log10(N)) - 8))); N = 1000000000; System.out.println(N / (int) (Math.pow(10, Math.floor(Math.log10(N)) - 8))); } } ``` yields ``` 781293829 123456789 100000000 ```
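Both answers lean on floating-point `log10`, which is fine for machine-sized longs but gets dicey for true 100-digit integers (a `double` cannot even represent them exactly). A hedged pure-integer version of the same divide-away-the-tail idea, in Python; note that counting the digits is itself linear in the number of digits, which is hard to avoid when the number is stored in binary:

```python
def first_digits(n: int, k: int = 9) -> int:
    """Return the first k decimal digits of a positive integer n."""
    d = len(str(n))                   # exact digit count, no float rounding
    return n // 10 ** (d - k) if d > k else n

print(first_digits(7812938291232))    # 781293829
print(first_digits(123456789))        # 123456789
print(first_digits(10**99 + 12345))   # 100000000
```

The last call shows it behaving correctly on a 100-digit number, where `Math.log10` on a converted `double` could go wrong.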
265,765
I would like to implement a payment solution with my Rails site using Paypal Website Payments Standard. (See <https://www.paypal.com/IntegrationCenter/ic_standard_home.html>). I don't want fancy gateways, and I don't want to go through the hassles related to storing credit card information. I also want to use Paypal IPN for notifications. I have checked out various sites such as <http://www.fortytwo.gr/blog/14/Using-Paypal-with-Rails>, and I'm too cheap to try Rails Kits at <http://railskits.com/saas/>. Does anybody know of a working open-source Rails application that makes use of Paypal Website Payments Standard for billing, and can you point me to that code? Also, my business is in Canada, but the servers are in the USA. Do I have to sign up for Paypal.com or paypal.ca?
2008/11/05
[ "https://Stackoverflow.com/questions/265765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/33890/" ]
Both [Spree](http://spreehq.org/) and [Substruct](http://code.google.com/p/substruct/) are e-commerce platforms written in Rails that support Paypal IPN. Both are open source, so it might be helpful to start by looking at how they handle Paypal Integration.
[Spree](http://spreehq.org) contains an [extension](http://github.com/Gregg/spree-pp-website-standard/tree/master) for Payapl Website Payments Standard. In fact, the popular [envycasts](http://envycasts.com) site is a great example of this extension in use. If you want Rails and you want the Paypal Standard stuff, this is your solution.
58,254,254
I was writing some code in Python when I suddenly became curious regarding blank line conventions for import statements. I'm aware of the [typical import statement conventions specified in the PEP 8 style guide](https://www.python.org/dev/peps/pep-0008/#imports) and for [blank lines](https://www.python.org/dev/peps/pep-0008/#blank-lines) as well. However, I became curious if there is a convention or unwritten rule for blank lines among import statements. For example, I usually like to put a blank line in between the three categories that are specified in PEP 8 (i.e. standard library imports, related third party imports, local application/library specific imports) but I've also noticed that many people tend not to do so. My PyLint application even throws a warning whenever I put a blank line. I personally felt that this added a bit of clarity as to what "category" each imported library falls into. Is there a sort of convention that I should be following? Thanks in advance.
2019/10/06
[ "https://Stackoverflow.com/questions/58254254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7766024/" ]
Can you please try adding `.any()` or `.all()` at the end of your `np.select()` statement? (Note the second condition also needs parentheses around each comparison, since `&` binds tighter than `>`.)

```
df["AndHeathSolRadFact"] = np.select(
    [
        (df['Month'].between(8, 12)),
        (df['Month'].between(1, 2)) & (df['CloudCover'] > 30)
    ],  # list of conditions
    [1, 1],  # list of results
    default=0).all()  # default if no match
```
EDITED: I really have an elegant solution for your problem:

```
from dask.array import from_array as fa

df.compute()['Name of your column'] = fa(the_list_you_want_to_assign_as_column)
```
19,372,667
Since Clojure is designed to run in a Java virtual machine (JVM), I don't understand [this statement](https://nightweb.net/blog/clojure-on-android.html): > > While Clojure started its life mainly as a server-side language, the advent of ClojureScript demonstrates that the core developers don't see that as its only purpose. > > > I am not real familiar with Java though I am interested in Lisp languages and hence Clojure, so this makes me wonder. Most web servers I've worked on are traditional Apache variants with standard server-side languages like Ruby, PHP, Perl, but I've never seen Java as a default installed server language in my hosting environments, so what is the meaning of this statement? Second, JVMs are typically run on client operating systems like Mac or Windows, are they not? Sun says there are many billions of JVMs in the world, obviously this is not referring to servers.
2013/10/15
[ "https://Stackoverflow.com/questions/19372667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3758484/" ]
Java is exactly as much of a server-side language as Ruby or Perl (though not really PHP): It's a general-purpose language that is frequently used to write server applications, including Web applications and SOA services. Whether Java is "installed as a default", it's typically trivial to install on the Unix machines that are the usual hosts for Java servers. A JVM can theoretically run on any platform; there are JVMs that run on bare x86 hardware, and Blu-Ray players have embedded JVMs. Sun originally thought that Java was the future for rich-client applications, but instead it's found a much wider use in powering Web sites and other services that clients access through various APIs.
I don't know much about Clojure's history, but it seems clear that it has been intended as a general-purpose language for some time--whatever initially pushed Hickey et al. to want to develop it. Because Clojure supports easy access to existing Java libraries and is able to create standard Java-style jar files--both crucial benefits on a server as well as elsewhere--it would have been obvious early on that Clojure could be useful outside of servers. So my answer to why "the advent of ClojureScript demonstrates that the core developers don't see [server side applications] as its only purpose" is that no such demonstration was needed.
39,071,202
I have a function named `Keys()` to get all the keys of a map, here is the code: ```golang func main() { m2 := map[int]interface{}{ 2:"string", 3:"int", } fmt.Println(Keys(m2)) } func Keys(m map[interface{}]interface{}) (keys []interface{}) { for k := range m { keys = append(keys, k) } return keys } ``` But I got ``` cannot use m2 (type map[int]interface {}) as type map[interface {}]interface {} in argument to Keys ``` Does Go support generics and how should I fix my code?
2016/08/22
[ "https://Stackoverflow.com/questions/39071202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5046896/" ]
1- Golang is a strongly typed language, so the `map[int]interface{}` is not compatible with `map[interface{}]interface{}`. `int` is a different type than `interface{}`; see: [Go: What's the meaning of interface{}?](https://stackoverflow.com/questions/23148812/go-whats-the-meaning-of-interface)

2- No, Golang doesn't support generics, and this is very good, because it keeps the language simple and fast.

---

You have some options:

If you don't want to change the type of the map used:

1- You may edit the function to `func Keys(m map[int]interface{}) []int`, like this working sample code:

```golang
package main

import "fmt"

func main() {
    m2 := map[int]interface{}{
        2: "string",
        3: "int",
    }
    fmt.Println(Keys(m2))
}

func Keys(m map[int]interface{}) []int {
    keys := make([]int, len(m))
    i := 0
    for k := range m {
        keys[i] = k
        i++
    }
    return keys
}
```

output (may not be in order):

```
[2 3]
```

---

2- Or you may edit the function to `func Keys(m map[int]interface{}) []interface{}`, like this working sample code:

```golang
package main

import "fmt"

func main() {
    m2 := map[int]interface{}{
        2: "string",
        3: "int",
    }
    fmt.Println(Keys(m2))
}

func Keys(m map[int]interface{}) []interface{} {
    keys := make([]interface{}, len(m))
    i := 0
    for k := range m {
        keys[i] = k
        i++
    }
    return keys
}
```

output (may not be in order):

```
[2 3]
```

---

If you don't want to change the `Keys` function used:

3- You may edit the map to `map[interface{}]interface{}`, like this working sample code:

```golang
package main

import "fmt"

func main() {
    m2 := map[interface{}]interface{}{
        2: "string",
        3: "int",
    }
    fmt.Println(Keys(m2))
}

func Keys(m map[interface{}]interface{}) []interface{} {
    keys := make([]interface{}, len(m))
    i := 0
    for k := range m {
        keys[i] = k
        i++
    }
    return keys
}
```

---

4- Also, you may use the `reflect` package for some use cases, but with a performance (speed) penalty. See: [The Laws of Reflection](https://blog.golang.org/laws-of-reflection)
Starting from Go 1.18 (released in beta), the language adds type parameters and you are able to easily write a function like this:

```
func Keys[K comparable, V any](m map[K]V) []K {
    keys := make([]K, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    return keys
}
```

Example usage:

```
func main() {
    m := map[int]string{2: "string", 3: "int"}

    keys := Keys(m)
    fmt.Println(keys)                 // [2 3]
    fmt.Println(reflect.TypeOf(keys)) // []int

    m2 := map[string]int{"a": 1, "b": 2}

    keys2 := Keys(m2)
    fmt.Println(keys2)                 // [a b]
    fmt.Println(reflect.TypeOf(keys2)) // []string
}
```

Playground: <https://gotipplay.golang.org/p/pdsI2H7w-N4>

Note that, based on the [current proposal](https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md#map-keys), the type constraint on the type parameter `K` is the predeclared identifier `comparable`, instead of `any`. This is because map keys must support [comparison operators](https://golang.org/ref/spec#Map_types). Therefore you must restrict `K` to comparable types only.

---

Alternatively, based on [this accepted proposal](https://github.com/golang/go/issues/47649), the new package `maps` is also available to accomplish the same thing. However, this is not yet in the standard library. Instead it was included in [`golang.org/x/exp`](https://cs.opensource.google/go/x/exp/+/master:maps/maps.go), therefore it is **not covered by the Go 1 compatibility promise**. The difference between `maps.Keys` and the function above is that `maps.Keys` is parametrized on `M` (in addition to `K` and `V`), with the approximate constraint `~map[K]V`. This allows all defined types with an underlying `map`:

```
type MyMap map[string]int
```

The usage is basically the same:

```
package main

import (
    "fmt"
    "reflect"

    "golang.org/x/exp/maps"
)

func main() {
    m := map[int]string{2: "string", 3: "int"}

    keys := maps.Keys(m)
    fmt.Println(keys)                 // [2 3]
    fmt.Println(reflect.TypeOf(keys)) // []int
}
```

Playground: <https://gotipplay.golang.org/p/Bx11jmyifAg>
181,149
I want to create a collection (List or Set) which behaves like the Trigger.New context. Meaning, if we use the collection outside of its class, we should not be able to add any element to it; however, we can change the contents of its existing elements. For example, in the Trigger.New collection of an Account trigger, I can change the name of a new Account as below, but if I try to add any element to the Trigger.New collection, it will give a compile-time error.

```
trigger accountTest on Account (before insert) {
    for(Account currAccount : Trigger.New) {
        currAccount.Name = 'Name changed'; // here I can change the name of the new account.
    }

    Account tempAccount = new Account(Name='New Account to Add');
    // But if I try to add tempAccount to the Trigger.New collection, it will give a compile-time error, as below
    Trigger.New.add(tempAccount); //Collection is read-only
}
```

Is this requirement possible? If so, how can I achieve it?
2017/06/29
[ "https://salesforce.stackexchange.com/questions/181149", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/46652/" ]
There are several "magic" data types that we can't create in Apex Code. For example, we can't create read-only SObject records (as Trigger.new in an after trigger), lists that can't be modified (as in Trigger.new), and so on. You could write your own class to emulate this, but it wouldn't be a perfect replica, since we can't create custom array classes like we can in Java.
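For comparison, the Java behavior this alludes to can be sketched with `Collections.unmodifiableList`: structural changes are rejected (at run time in Java, rather than at compile time as in Apex), while the elements themselves stay mutable, much like `Trigger.new` in a before trigger. The `Account` class here is a stand-in for illustration, not a real SObject:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Account {
    String name;
    Account(String name) { this.name = name; }
}

public class ReadOnlyDemo {
    public static void main(String[] args) {
        List<Account> accounts = new ArrayList<>();
        accounts.add(new Account("Acme"));

        // Structural modification is blocked, but element state is not.
        List<Account> view = Collections.unmodifiableList(accounts);
        view.get(0).name = "Name changed";   // allowed, like Trigger.new
        try {
            view.add(new Account("New"));    // throws at run time
        } catch (UnsupportedOperationException e) {
            System.out.println("Collection is read-only");
        }
        System.out.println(view.get(0).name); // prints "Name changed"
    }
}
```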
Just assign Trigger.new to an Account list and manipulate that one instead. ``` List<Account> accounts = new List<Account>(); accounts.addAll(Trigger.new); Account tempAccount = new Account(Name='New Account to Add'); accounts.add(tempAccount); ```
11,158,439
Given the following code: ``` string source = "Some Unicode String"; foreach( char value in source ) { int y = (int)value; } ``` Is it possible that the cast from char to int could fail (and under what circumstances)?
2012/06/22
[ "https://Stackoverflow.com/questions/11158439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143919/" ]
Use [`any?`](http://ruby-doc.org/core-1.9.3/Enumerable.html#method-i-any-3F): ``` @array.any? { |h| h[:status] == "B" } ```
Just to add to what steenslag said: `detect` ***doesn't always return nil.*** You can pass in a lambda to execute (call) if detect does not 'detect' (find) an item. In other words, you can tell `detect` what to do if it can't detect (find) something. To add to your example: ``` not_found = lambda { "uh oh. couldn't detect anything!"} # try to find something that isn't in the Enumerable object: @array.detect(not_found) {|h| h[:status] == 'X'} ``` will return `"uh oh. couldn't detect anything!"` This means that you don't have to write this kind of code: ``` if (result = @array.detect {|h| h[:status] == 'X'}).nil? # show some error, do something here to handle it # (this would be the behavior you'd put into your lambda) else # deal nicely with the result end ``` That's one major difference between `any?` and `detect` -- you can't tell `any?` what to do if it doesn't find any items. This is in the Enumerable class. ref: <http://ruby-doc.org/core/classes/Enumerable.html#M003123>
47,236,528
I am trying to automate this website using VBA excel. I am stuck at one point where I need to select value from the drop-down box. I am very much new to this as this is my first such project. This is what I have coded to select the value: ``` Set objSelect = objIE.document.getElementById("personTitle") For Each opt In objSelect.Options If opt.Value = "Miss" Then 'Debug.Print "found!" opt.Selected = True 'opt.Selected = "selected" Else 'Debug.Print "not found!" opt.Selected = False End If Next ``` I have also tried using the debug.print to check if the value that I am trying to find is actually getting matched or not- and it turns out that it matches. The only problem I am facing is that the value is not getting set. Can any of the gurus here please help? Here is the HTML of that section: ```html <div class="input-wrap input-wrap__inline"> <div tabindex="-1" class="select is-placeholder"><div class="select_display">Title</div><div class="select_arrow glyphicon glyphicon-chevron-down"></div><dl class="select_list"><dt class="pretend-dd is-hover" data-index="1" data-val="Mr">Mr</dt><dt class="pretend-dd" data-index="2" data-val="Mrs">Mrs</dt><dt class="pretend-dd" data-index="3" data-val="Miss">Miss</dt><dt class="pretend-dd" data-index="4" data-val="Ms">Ms</dt><dt class="pretend-dd" data-index="5" data-val="Dr">Dr</dt></dl></div><select name="personTitle" class="parsley-validated hasCustomSelect .no-change, .bv-dropdown-select is-invisible" id="personTitle" required="" data-required-message="Please select a title"> <option selected="selected" value="">Title</option> <option value="Mr">Mr</option> <option value="Mrs">Mrs</option> <option value="Miss">Miss</option> <option value="Ms">Ms</option> <option value="Dr">Dr</option> </select> </div> ```
2017/11/11
[ "https://Stackoverflow.com/questions/47236528", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8920890/" ]
I think you want a different class. The class in that HTML snippet is `select_list`. Then the subsequent `dt` tags. If you observe the following CSS selector, where ["."](https://www.w3schools.com/cssref/sel_class.asp) means class and [" dt"](https://www.w3schools.com/cssref/sel_element_element.asp) means select all `dt` tags inside elements of that class, you will see it makes the correct selections: [![CSS selector](https://i.stack.imgur.com/HIqEb.png)](https://i.stack.imgur.com/HIqEb.png) In the code below, I translate this selector into: ``` ieDoc.getElementsByClassName("select_list")(0).getElementsByTagName("dt") ``` This assumes that index 0 is the correct one to use for elements of the class `"select_list"`. You can easily inspect the collection to find the right index if you set it to a variable e.g. ``` Dim x As Object Set x = ieDoc.getElementsByClassName("select_list")(0).getElementsByTagName("dt") ``` --- **Code:** ``` Dim currentOption As Object For Each currentOption In ieDoc.getElementsByClassName("select_list")(0).getElementsByTagName("dt") If InStr(currentOption.innerText, "Miss") > 0 Then currentOption.Selected = True End If Next currentOption ```
Here are a couple options to try if you haven't already: ``` If opt.Value = "Miss" Then 'Debug.Print "found!" opt.Click ``` OR ``` If opt.Value = "Miss" Then 'Debug.Print "found!" opt.Focus opt.FireEvent ("onchange") ``` If this turns out to be something done in kendoGrid or kendoDropDownList, I might be able to help with that also.
35,539,924
I am running Linux (Debian) and I want to include the header `mpi.h` in my project. The `mpi.h` is located under `/usr/include/mpi`, and running `mpiCC` and `mpirun` shows that MPI is installed. I couldn't find a clear tutorial.
2016/02/21
[ "https://Stackoverflow.com/questions/35539924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5940262/" ]
You should add `include_directories(/usr/include/mpi/)` to your CMakeLists.txt and just include the header: ``` #include <mpi.h> ```
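For a slightly more portable setup, a `CMakeLists.txt` along these lines should also work. This is a sketch with placeholder project and file names; `find_package(MPI)` and the imported `MPI::MPI_CXX` target (available since CMake 3.9) locate the headers and libraries for whatever MPI distribution is installed, so you don't have to hard-code `/usr/include/mpi/`:

```cmake
cmake_minimum_required(VERSION 3.10)
project(mpi_demo CXX)

# Option 1: the hard-coded include path from the answer above.
# include_directories(/usr/include/mpi/)

# Option 2: let CMake locate the installed MPI itself.
find_package(MPI REQUIRED)

add_executable(mpi_demo main.cpp)   # main.cpp does: #include <mpi.h>
target_link_libraries(mpi_demo PRIVATE MPI::MPI_CXX)
```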
You should include the header file `mpi/mpi.h`, like this:

```
#include <mpi/mpi.h>
```
25,285,792
When we sort a list, like ``` a = [1,2,3,3,2,2,1] sorted(a) => [1, 1, 2, 2, 2, 3, 3] ``` equal elements are always adjacent in the resulting list. How can I achieve the opposite task - shuffle the list so that equal elements are never (or as seldom as possible) adjacent? For example, for the above list one of the possible solutions is ``` p = [1,3,2,3,2,1,2] ``` More formally, given a list `a`, generate a permutation `p` of it that minimizes the number of pairs `p[i]==p[i+1]`. Since the lists are large, generating and filtering all permutations is not an option. Bonus question: how to generate all such permutations efficiently? This is the code I'm using to test the solutions: <https://gist.github.com/gebrkn/9f550094b3d24a35aebd> UPD: Choosing a winner here was a tough choice, because many people posted excellent answers. [@VincentvanderWeele](https://stackoverflow.com/a/25285879/989121), [@David Eisenstat](https://stackoverflow.com/a/25290780/989121), [@Coady](https://stackoverflow.com/a/25291640/989121), [@enrico.bacis](https://stackoverflow.com/a/25286137/989121) and [@srgerg](https://stackoverflow.com/a/25401193/989121) provided functions that generate the best possible permutation flawlessly. [@tobias\_k](https://stackoverflow.com/a/25286251/989121) and David also answered the bonus question (generate all permutations). Additional points to David for the correctness proof. The code from @VincentvanderWeele appears to be the fastest.
2014/08/13
[ "https://Stackoverflow.com/questions/25285792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/989121/" ]
This is along the lines of Thijser's currently incomplete pseudocode. The idea is to take the most frequent of the remaining item types unless it was just taken. (See also [Coady's implementation](https://stackoverflow.com/a/25291640/2144669) of this algorithm.) ``` import collections import heapq class Sentinel: pass def david_eisenstat(lst): counts = collections.Counter(lst) heap = [(-count, key) for key, count in counts.items()] heapq.heapify(heap) output = [] last = Sentinel() while heap: minuscount1, key1 = heapq.heappop(heap) if key1 != last or not heap: last = key1 minuscount1 += 1 else: minuscount2, key2 = heapq.heappop(heap) last = key2 minuscount2 += 1 if minuscount2 != 0: heapq.heappush(heap, (minuscount2, key2)) output.append(last) if minuscount1 != 0: heapq.heappush(heap, (minuscount1, key1)) return output ``` Proof of correctness -------------------- For two item types, with counts k1 and k2, the optimal solution has k2 - k1 - 1 defects if k1 < k2, 0 defects if k1 = k2, and k1 - k2 - 1 defects if k1 > k2. The = case is obvious. The others are symmetric; each instance of the minority element prevents at most two defects out of a total of k1 + k2 - 1 possible. This greedy algorithm returns optimal solutions, by the following logic. We call a prefix (partial solution) *safe* if it extends to an optimal solution. Clearly the empty prefix is safe, and if a safe prefix is a whole solution then that solution is optimal. It suffices to show inductively that each greedy step maintains safety. The only way that a greedy step introduces a defect is if only one item type remains, in which case there is only one way to continue, and that way is safe. Otherwise, let P be the (safe) prefix just before the step under consideration, let P' be the prefix just after, and let S be an optimal solution extending P. If S extends P' also, then we're done. Otherwise, let P' = Px and S = PQ and Q = yQ', where x and y are items and Q and Q' are sequences. 
Suppose first that P does not end with y. By the algorithm's choice, x is at least as frequent in Q as y. Consider the maximal substrings of Q containing only x and y. If the first substring has at least as many x's as y's, then it can be rewritten without introducing additional defects to begin with x. If the first substring has more y's than x's, then some other substring has more x's than y's, and we can rewrite these substrings without additional defects so that x goes first. In both cases, we find an optimal solution T that extends P', as needed. Suppose now that P does end with y. Modify Q by moving the first occurrence of x to the front. In doing so, we introduce at most one defect (where x used to be) and eliminate one defect (the yy). Generating all solutions ------------------------ This is [tobias\_k's answer](https://stackoverflow.com/a/25286251/2144669) plus efficient tests to detect when the choice currently under consideration is globally constrained in some way. The asymptotic running time is optimal, since the overhead of generation is on the order of the length of the output. The worst-case delay unfortunately is quadratic; it could be reduced to linear (optimal) with better data structures. 
``` from collections import Counter from itertools import permutations from operator import itemgetter from random import randrange def get_mode(count): return max(count.items(), key=itemgetter(1))[0] def enum2(prefix, x, count, total, mode): prefix.append(x) count_x = count[x] if count_x == 1: del count[x] else: count[x] = count_x - 1 yield from enum1(prefix, count, total - 1, mode) count[x] = count_x del prefix[-1] def enum1(prefix, count, total, mode): if total == 0: yield tuple(prefix) return if count[mode] * 2 - 1 >= total and [mode] != prefix[-1:]: yield from enum2(prefix, mode, count, total, mode) else: defect_okay = not prefix or count[prefix[-1]] * 2 > total mode = get_mode(count) for x in list(count.keys()): if defect_okay or [x] != prefix[-1:]: yield from enum2(prefix, x, count, total, mode) def enum(seq): count = Counter(seq) if count: yield from enum1([], count, sum(count.values()), get_mode(count)) else: yield () def defects(lst): return sum(lst[i - 1] == lst[i] for i in range(1, len(lst))) def test(lst): perms = set(permutations(lst)) opt = min(map(defects, perms)) slow = {perm for perm in perms if defects(perm) == opt} fast = set(enum(lst)) print(lst, fast, slow) assert slow == fast for r in range(10000): test([randrange(3) for i in range(randrange(6))]) ```
The algorithm already given, of taking the most common item left that isn't the previous item, is correct. Here's a simple implementation, which optimally uses a heap to track the most common.

```
import collections, heapq

def nonadjacent(keys):
    heap = [(-count, key) for key, count in collections.Counter(keys).items()]
    heapq.heapify(heap)
    count, key = 0, None
    while heap:
        count, key = heapq.heapreplace(heap, (count, key)) if count else heapq.heappop(heap)
        yield key
        count += 1
    for index in range(-count):
        yield key

>>> a = [1,2,3,3,2,2,1]
>>> list(nonadjacent(a))
[2, 1, 2, 3, 1, 2, 3]
```
13,622
In LaTeX, we can say something like: ``` "See equation \ref{eq} on page \pageref{eq}..." ``` Is there any way to do: ``` "See equation \ref{eq} in chapter \chapterref{eq}..." ```
2011/03/16
[ "https://tex.stackexchange.com/questions/13622", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/4270/" ]
[`hyperref`](http://ctan.org/pkg/hyperref) offers `\autoref` which will turn `\autoref{chap:foo}` into "chapter *X*" ```latex \documentclass{book} \usepackage{hyperref} \begin{document} \chapter{Foo} \label{chap:foo} This is the beginning of \autoref{chap:foo} \end{document} ``` Produces: ![chapter foo](https://i.stack.imgur.com/RqoUK.png) To edit how hyperref references appear, see answers to [this question](https://tex.stackexchange.com/q/823/215) [`cleveref`](http://ctan.org/pkg/cleveref) is another package that offers this sort of functionality. [The LaTeX wikibook](http://en.wikibooks.org/wiki/LaTeX/Labels_and_Cross-referencing) has details on various cross-referencing options.
The `refstyle` package has this functionality. It will correctly reference an Appendix also if your label is such. You can also link it to `varioref`. Please read the documentaion. ``` \documentclass{book} \usepackage[nokeyprefix]{refstyle} \begin{document} \mainmatter \chapter{First chapter}\label{chp:Chp1} \Eqref{eq:Einst} in \chapref{chp:Chp1} and \eqref{eq:Feyn} in \chapref{chp:App1} it is ... \begin{equation} E=mc^2 \label{eq:Einst} \end{equation} \appendix \chapter{First Appendix}\label{chp:App1} \begin{equation} e^{i\pi}-1 = 0 \label{eq:Feyn} \end{equation} \end{document} ``` This gives: ![enter image description here](https://i.stack.imgur.com/d3RqO.png)
34,282,341
I'm struggling with the alignment in the following `navbar` that uses Bootstrap 3.

**Problem:** The `Login` and `Signup` elements are on different rows. What's the correct way to adjust them so that they are both on the same line? `display: inline-block;` and `white-space: nowrap;` do not appear to work on the `li` elements.

Note that the `input` element should fill up the space between the logo and the Login/Signup elements.

**Bootply:** <http://www.bootply.com/Bh5TBV9vS0>

[![enter image description here](https://i.stack.imgur.com/tHOUa.png)](https://i.stack.imgur.com/tHOUa.png)

```
<nav class="navbar navbar-default navbar-fixed-top">
```

**HTML**

```
<a class="brand" href="#">
  <img src="https://upload.wikimedia.org/wikipedia/commons/4/4a/Logo_2013_Google.png" width="100">
</a>
<form role="form" class="form-horizontal search-form">
  <!-- <i class="fa fa-search"></i> -->
  <input type="text" class="form-control" placeholder="Search">
</form>
<ul class="nav navbar-nav">
  <li><a href="/users/sign_in">Login</a></li>
  <li><a href="/users/sign_up">Sign up</a></li>
</ul>
</nav>
```

**CSS:**

```
.brand {
  width: 170px;
  height: 80px;
  background-color: #ccc;
  display: inline-block
}

.navbar.transparent.navbar-inverse .navbar-inner {
  background: rgba(0,0,0,0.4);
}

.search-form {
  width: 100%;
  height: 30px;
  display: inline-block;
}

.search-form {
  margin-top:10px;
  padding: 0 10px;
}

nav {
  display: flex;
}
```
2015/12/15
[ "https://Stackoverflow.com/questions/34282341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/741099/" ]
Remove `float:left` from your list items and apply `display:inline-block` to them, and apply the style `white-space:nowrap` to your `ul` element.

```
ul.navbar-nav{ white-space:nowrap;}
.navbar-nav li{display:inline-block; float:none;}
```
I have updated the width of `.search-form` from `100%` to `76%` and now it works fine. Check [here](http://www.bootply.com/ZWzOf77CID)
8,380
The area of the earth covered by desert is more than 20%. (Sources range from 20-33%; I don't know why the range is so wide.) With technology close to providing zero-cost solar (not free, but declining each year by a substantial fraction), the ability to desalinate water and pump it over desert land will be viable at some point in time. What would be the resulting downside to doing this? The new land would be a source of food to reduce starvation, and plants to help clean the air and provide oxygen. Would there be any negative impact on the earth's ecosystem?
2015/01/07
[ "https://worldbuilding.stackexchange.com/questions/8380", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/-1/" ]
A lot of negative posts, so here's a more positive view: While terraforming ALL of the Earth's deserts away might be a pipe dream at the moment, there were serious engineering proposals in the 20th Century that might have altered the climate of the Sahara considerably, by flooding areas that were below sea level: ![enter image description here](https://i.stack.imgur.com/qRdOl.gif) The most interesting of these projects was the Qattara Sea proposal in Western Egypt, which would have also generated electric power in the process. While the lake itself would likely be too salty to support life, the shores would thrive. Of course, the electricity potential of such a project would be dwarfed by the energy generated from [littering the desert with solar panels](http://www.desertec.org/). As countries around the Sahara get better governance and property-rights protections, and become richer, they will probably follow the path of other better managed countries in making the desert bloom, with or without solar power. ![http://i57.tinypic.com/33aatd5.jpg](https://i.stack.imgur.com/VLCp5.jpg) Careful ecological management could greatly shrink the Sahara even without massive irrigation work. With fusion or near-zero cost solar, it's quite likely that more and more of the wastelands of Earth will be reclaimed. It would likely not be massive and wasteful open lakes, but subterranean irrigation micro-arrays going straight to the roots of cultivated plants (better known as [Drip Irrigation](http://en.wikipedia.org/wiki/Drip_irrigation)). There will be those who will moan and decry the loss of habitat for some desert spider or spiny shrub, but most likely the Chadians and Nigeriens will be too busy growing out their gardens to care.
The safest answer is that we don't know. Life on Earth is often the result of an equilibrium, and pumping too much water into the land will cause some problems. As others mentioned, we need deserts, as they play a role in our biosphere. We don't know for sure what will happen if we turn millions of square kilometres into marshes by pumping water into them, but we could still try to regain the lands that were taken by the desert during the last decades and centuries. Reclaiming the land cannot be done simply by adding more water; that is not always the solution. In northern China and in the Sahel, the main source of desertification is the erosion of the ground, caused (in China) by overusing the land. A combination of deforestation and overusing the soil with poor agricultural practices has led to the desertification of what was once the birthplace of the Chinese civilization. This problem is also present in the Amazon: the deforestation has not led to fertile lands but to lands affected by erosion on a large scale. The ground is simply collapsing, as there is a lot of water but nothing to retain the soil. If it can be done, it is surely a slow process: add water to fill rivers and wait for the plants to develop before adding more, increasing the water supply as the growing plant cover improves water retention in the area.
37,543,057
For my android app I use the parceler library and the lombok library. These are the annotations of my class: ``` @Table @ToString @Getter @NoArgsConstructor @Parcel public class MyClass { ``` However, during gradle build, Parceler complains that there is no default empty constructor. So does this mean it doesn't recognize the `@NoArgsConstructor` annotation and these two simply won't work together? Because e.g. SugarORM has no probs with it. Or am I just missing something?
2016/05/31
[ "https://Stackoverflow.com/questions/37543057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4924595/" ]
This gets into how Lombok adds code to your class. Lombok uses a [known trick](http://notatube.blogspot.com/2010/11/project-lombok-trick-explained.html) in the Java annotation processor to add code to your class. This added code is not visible to Parceler during the annotation processor round and makes the added no-args constructor unknown to Parceler. I'd recommend adding the no-args constructor manually, annotating the existing constructor with `@ParcelConstructor` (I assume you have one) or consider using the Lombok fork [Hrisey](https://github.com/mg6maciej/hrisey) that has an `@Parcelable` annotation. We had an issue/question about this recently: <https://github.com/johncarl81/parceler/issues/177>
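A minimal sketch of the first suggestion (a hand-written no-args constructor instead of `@NoArgsConstructor`) might look like the following; the `name` field is invented for illustration, and the class still relies on the other Lombok annotations from the question:

```java
@Table
@ToString
@Getter
@Parcel
public class MyClass {

    private String name;  // hypothetical field, for illustration only

    // Written out by hand rather than generated by Lombok's
    // @NoArgsConstructor, so Parceler can see it during its own
    // annotation-processing round.
    public MyClass() {
    }
}
```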
Use `@ParcelFactory` on a static method building an empty object:

```
@Builder
public class MyClass {

    @ParcelFactory
    static MyClass fromParcel() {
        return builder().build();
    }
}
```

This trick can be used with Jackson and other libraries that allow you to annotate static factory methods in addition to constructors.
1,913,929
How good would it be to have 64-bit MySQL on 64-bit Linux, of course? Presently I have a 32-bit MySQL and OS but 64-bit hardware. Should I consider upgrading? What advantages would I get?
2009/12/16
[ "https://Stackoverflow.com/questions/1913929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/232879/" ]
Your memcached client will do all of the sharding for you. The harder part is knowing what you should cache so it will actually be beneficial to your application. In general, you *don't* want to cache things like database queries. You *do* want to cache things that you build from database queries that represent higher-level objects in your application. Having a good API for the storage and retrieval of your application's objects is a good start. You might find the [story of memcached](http://code.google.com/p/memcached/wiki/TutorialCachingStory) a good introduction.
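As a rough sketch of that advice, the snippet below caches the assembled profile object rather than the raw query rows it was built from. A plain dict stands in for the memcached client, and the function and key names are invented for the example; a real client would expose `get`/`set` calls with expirations instead:

```python
# A plain dict stands in for a memcached client in this sketch.
cache = {}

def fetch_user_rows(user_id):
    # Pretend database query: this is the thing NOT worth caching directly.
    return [("id", user_id), ("name", "alice")]

def get_user_profile(user_id):
    """Cache the higher-level profile object, not the raw rows."""
    key = "profile:%d" % user_id
    if key in cache:
        return cache[key]
    profile = dict(fetch_user_rows(user_id))  # build the application object
    cache[key] = profile
    return profile

first = get_user_profile(42)
second = get_user_profile(42)
print(first is second)  # True: the second call is served from the cache
```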
You can check the links below; I hope they help you get started:

<http://sacharya.com/using-memcached-with-java/>

<http://www.javascool.com/2010/08/31/using-memcached-with-java-applications/>
63,791,297
Let's say I have the following input model: ``` public class InputModel { [Required] public string? Name { get; set; } [Required] public DateTime? Birthday { get; set; } } ``` When the birthday field is not provided then I get the following appropriate response: ``` { "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-818caf3d757ae345a735fd0f4a523ecb-e9f90641c111814c-00", "errors": { "Birthday": [ "The Birthday field is required." ] } } ``` But if I provide an invalid date string, then the following is returned: ``` { "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-715471c843155940a6f0cae580cd1b69-247e6dbfe3442446-00", "errors": { "model": [ "The model field is required." ], "$.birthday": [ "The JSON value could not be converted to System.Nullable`1[System.DateTime]. Path: $.birthday | LineNumber: 13 | BytePositionInLine: 37." ] } } ``` These two response models are not consistent which makes it difficult for the client to reason about the validation errors. How can I validate the DateTime string before it gets handled by the converter so that I can return a response model similar to the first? Something like this: ``` { "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-818caf3d757ae345a735fd0f4a523ecb-e9f90641c111814c-00", "errors": { "Birthday": [ "The Birthday field is badly formed." ] } } ```
2020/09/08
[ "https://Stackoverflow.com/questions/63791297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/589558/" ]
> > How can I validate the DateTime string before it gets handled by the > converter so that I can return a response model similar to the first? > > > You can create a custom BadRequest method that inherits **`ValidationProblemDetails`** to return the error message you want. First, add the following code in your `startup.cs ConfigureServices` method: ``` services.AddMvc() .ConfigureApiBehaviorOptions(options => { options.InvalidModelStateResponseFactory = context => { var problems = new CustomBadRequest(context); return new BadRequestObjectResult(problems); }; }); ``` Here is the custom Bad Request method: ``` public class CustomBadRequest : ValidationProblemDetails { public CustomBadRequest(ActionContext context) { Title = "Invalid arguments to the API"; Detail = "The inputs supplied to the API are invalid"; Status = 400; ConstructErrorMessages(context); Type = context.HttpContext.TraceIdentifier; } private void ConstructErrorMessages(ActionContext context) { foreach (var keyModelStatePair in context.ModelState) { var key = keyModelStatePair.Key.Replace("$.", ""); var errors = keyModelStatePair.Value.Errors; if (errors != null && errors.Count > 0) { if (errors.Count == 1) { var errorMessage = GetErrorMessage(key, errors[0]); Errors.Add(key, new[] { errorMessage }); } else { var errorMessages = new string[errors.Count]; for (var i = 0; i < errors.Count; i++) { errorMessages[i] = GetErrorMessage(key,errors[i]); } Errors.Add(key, errorMessages); } } } } string GetErrorMessage(string key, ModelError error) { if (error.ErrorMessage != $"The {key} field is required.") { return $"The {key} field is badly formed."; } return error.ErrorMessage; } } ``` After the above settings, when your InputModel is illegal, it will automatically enter the `CustomBadRequest` method and return the corresponding error message through judgment. Here is the test result through postman: [![enter image description here](https://i.stack.imgur.com/nriXd.gif)](https://i.stack.imgur.com/nriXd.gif)
Try defining the datatype of the datetime field as follows:

```
[Required]
[DataType(DataType.DateTime)]
public DateTime? Birthday { get; set; }
```
49,744,802
Sorry for my English. I'm learning PHP and I am trying to create a small member area; I think it's a good way to learn. In my member area, some members are "verified" and some are not. How can I display different pictures, based on data stored in MySQL, using PHP? What I want is to display "Picture1" if "1" is the value stored in a MySQL column, "Picture2" if "2" is the value, etc... You are an "unverified member", so you see picture 1; verified members see picture 2... I know how to SELECT data using MySQLi, but I can't figure out what I have to do next. Thank you.
2018/04/10
[ "https://Stackoverflow.com/questions/49744802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6012193/" ]
Let's assume that you have a database named `db` and a table named `tb` with the columns `name` and `verify`. `name` is a string and `verify` is a boolean: `verify` stores 1 if the user is verified and 0 if they aren't. You can then pick the picture with a conditional statement (either `if`/`else` or `switch`):

```
<?php
$servername = "localhost";
$username = "root";
$password = "";
$dbname = "db";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$query = "SELECT * FROM `tb`";
$sql = $conn->query($query);
if ($sql->num_rows > 0) {
    while ($row = $sql->fetch_assoc()) {
        // display name
        echo $row['name'];

        // check if the user is verified or not,
        // using an if/else statement
        if ($row['verify'] == "0") {
            echo '<img src="./picture1.jpg">';
        } else {
            echo '<img src="./picture2.jpg">';
        }

        // the same check using a switch statement
        switch ($row['verify']) {
            case 0:
                echo '<img src="./picture1.jpg">';
                break;
            case 1:
                echo '<img src="./picture2.jpg">';
                break;
        }

        // Note: both methods are shown here, so this loop prints the
        // image twice. Pick one of them (if/else or switch).
    }
}
$conn->close();
?>
```
Try this: ``` <img src="<?php echo "/pictures/Picture{$number}.jpg"; ?>" /> ```
31,964,635
Use case: I am trying to read a PDF from a URL and then send it via FTP. I have these functions:

```
def send_via_ftp(self, url, filename, ftp_site, username, password, directory):
    import urllib2
    try:
        data = urllib2.urlopen(url)
    except urllib2.URLError, e:
        print "Failed to fetch content: %s" % e
        return False
    except urllib2.HTTPError, e:
        print "HTTP ERROR: %s" % e
        return False
    return self.send_file_by_ftp(data, ftp_site, username, password, directory, filename)
```

and:

```
def send_file_by_ftp(self, data, ftp_site, username, password, directory, filename):
    import ftplib
    try:
        remote_ftp_connection = ftplib.FTP(ftp_site)
    except ftplib.all_errors as e:
        print str(e)
        return False
    else:
        remote_ftp_connection.login(username, password)
        try:
            if len(directory):
                remote_ftp_connection.cwd(directory)
            remote_ftp_connection.storbinary("STOR %s" % filename, data)
        except ftplib.error_perm,e:
            print str(e)
            return False
        else:
            remote_ftp_connection.quit()
            return True
```

My call looks like:

```
send_via_ftp("http://url/***.pdf", "XYZ.pdf", "ftp url 192.168.0.101", "XXXX", "YYYYY", "")
```

The file is successfully created in the FTP folder, but the content is not written. When I open it, it says "Format Error: Not a pdf or corrupted". What could be the problem? Many thanks for any help.
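One way to narrow down where the corruption happens (a debugging sketch, not a confirmed fix: the helper name and the stand-in reader below are my own) is to buffer the whole download into memory first, check its size and magic bytes, and only then hand a rewound file object to `storbinary`. In Python 3 syntax:

```python
import io

def fetch_to_buffer(reader):
    """Read a file-like object fully into a seekable in-memory buffer.

    `reader` is anything with a .read() method (e.g. the response from
    urllib.request.urlopen). Returns a BytesIO rewound to the start, so
    ftplib's storbinary() can stream it from the beginning, plus the
    byte count and a quick PDF sanity check.
    """
    buf = io.BytesIO(reader.read())
    size = buf.getbuffer().nbytes
    is_pdf = buf.getvalue()[:5] == b"%PDF-"  # every PDF starts with this magic
    buf.seek(0)
    return buf, size, is_pdf

# Usage with a stand-in reader (no network needed):
fake_response = io.BytesIO(b"%PDF-1.4 fake body")
buf, size, is_pdf = fetch_to_buffer(fake_response)
print(size, is_pdf)  # 18 True
```

If the buffered size is zero or the magic check fails, the download step is at fault; otherwise the FTP transfer (e.g. a missing binary mode or a half-read response) is the suspect.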
2015/08/12
[ "https://Stackoverflow.com/questions/31964635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2503890/" ]
Windows Explorer allots 15 custom overlay icon slots (Windows reserves 4, so effectively only **[11 overlay icons](https://support.microsoft.com/en-us/help/3106961/registered-icon-overlay-handlers-aren-t-used-by-windows-shell)** are available) - they are shared between multiple applications (Google Drive, OneDrive, TortoiseSVN). If you have several such applications installed, the first ones in the list will display their icons and the rest won't. The problem is described in more depth at: <https://tortoisesvn.net/faq.html#ovlnotall>. Open the registry editor at: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers` Rename the entries that are not important to you to start with a `z_` prefix (they will then sort last in the list and will not be used). [![regedit snapshoot](https://i.stack.imgur.com/uABFd.png)](https://i.stack.imgur.com/uABFd.png) A Windows restart might be needed, as just restarting Explorer does not work. In my case, though, the icons became correct after some time (10-20 minutes?).
I did all of the above and nothing worked. The overlay icons were appearing in Explorer but not in Total Commander. Eventually what solved the issue for me was that I discovered a command in Total Commander that refreshed the Overlay Icons. To access it, right click on the toolbar and click on "Change...", and add "cm\_SwitchOverlayIcons" as shown in the image below. After adding the command icon to the toolbar, I click it once, and the overlay icon of TortoiseSVN appeared! [![enter image description here](https://i.stack.imgur.com/W1ozq.png)](https://i.stack.imgur.com/W1ozq.png)
11,640,703
``` Student Subj Period Score ------------------------- A Math 100 50 A Hist 100 100 A Sci 200 70 B Math 100 50 B Hist 100 50 ``` I'm trying to query for the highest score for an each student. If scores are all same (such as 'B' then pick one on the bottom. If that's not possible, **just pick anything**. I'm having a hard time using max() and min() and group by to get a correct result. Result should look like this ``` Student Subj Period Score ------------------------- A Hist 100 100 B Hist 100 50 ```
2012/07/24
[ "https://Stackoverflow.com/questions/11640703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1224698/" ]
You want to use analytic functions (I assume this is in Oracle from the mention of pl/sql): ``` select Student, Subj, Period, Score from (select t.*, row_number() over (partition by student order by score desc) as seqnum from t ) t where seqnum = 1 ```
Gordon's answer certainly works very well, and it uses ANSI syntax that is supported by a number of database engines. Oracle has an additional syntax that I don't think is ANSI, but it eliminates the need for any subquery. It performs at least as well as Gordon's answer (it may be slightly faster, but I'm not sure). It uses a KEEP LAST extension to aggregate functions. ``` select student, max(subj) keep( dense_rank last order by score, period ) as subj, max(period) keep( dense_rank last order by score, period ) as period, max(score) as score from grades group by student ; ``` Note - Unless you explicitly want to provide random results, you should always structure your query to give determinate answers, meaning that a given set of data will always give the same answer, regardless how the data was loaded. I structured the above query to order by the highest score, followed by the highest period. If there is still a tie then the max(subj) is returned.
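Both approaches are easy to sanity-check outside Oracle. Here is a small sketch using Python's built-in sqlite3 (window functions require SQLite 3.25+; the table name `scores` is my own, since the question doesn't name it), reproducing the `row_number()` approach on the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores (Student TEXT, Subj TEXT, Period INT, Score INT);
    INSERT INTO scores VALUES
        ('A', 'Math', 100, 50),
        ('A', 'Hist', 100, 100),
        ('A', 'Sci',  200, 70),
        ('B', 'Math', 100, 50),
        ('B', 'Hist', 100, 50);
""")

rows = conn.execute("""
    SELECT Student, Subj, Period, Score
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY Student
                                    ORDER BY Score DESC) AS seqnum
          FROM scores t)
    WHERE seqnum = 1
    ORDER BY Student
""").fetchall()

for row in rows:
    print(row)
# ('A', 'Hist', 100, 100) is guaranteed for A; for B, either of the
# tied rows may be returned, since the ORDER BY leaves ties unresolved.
```

Adding a tie-breaker to the `ORDER BY` (as the KEEP LAST answer does with `score, period`) is what makes the result deterministic.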
42,237,899
This is what the `AsyncMethods` class looks like: ``` public class AsyncMethods { public static async Task<double> GetdoubleAsync() { Console.WriteLine("Thread.CurrentThread.ManagedThreadId: " + Thread.CurrentThread.ManagedThreadId); await Task.Delay(1000); return 80d; } public static async Task<string> GetStringAsync() { Console.WriteLine("Thread.CurrentThread.ManagedThreadId: " + Thread.CurrentThread.ManagedThreadId); await Task.Delay(1000); return "async"; } public static async Task<DateTime> GetDateTimeAsync() { Console.WriteLine("Thread.CurrentThread.ManagedThreadId: " + Thread.CurrentThread.ManagedThreadId); await Task.Delay(1000); return DateTime.Now; } } ``` This is what my main method looks like: ``` static void Main(string[] args) { while (Console.ReadLine() != "exit") { Console.WriteLine("Thread.CurrentThread.ManagedThreadId: " + Thread.CurrentThread.ManagedThreadId); DateTime dt = DateTime.Now; var res = GetStuffAsync().Result; var ts = DateTime.Now - dt; Console.WriteLine(res); Console.WriteLine("Seconds taken: " + ts.Seconds + " milliseconds taken: " + ts.Milliseconds); } Console.ReadLine(); return; } static async Task<object> GetStuffAsync() { var doubleTask = AsyncMethods.GetdoubleAsync(); var StringTask = AsyncMethods.GetStringAsync(); var DateTimeTask = AsyncMethods.GetDateTimeAsync(); return new { _double = await doubleTask, _String = await StringTask, _DateTime = await DateTimeTask, }; } ``` As can be seen, in each method I added a delay of 1 second.
Here is the output: ``` Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 { _double = 80, _String = async, _DateTime = 2/15/2017 4:32:00 AM } Seconds taken: 1 milliseconds taken: 40 Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 Thread.CurrentThread.ManagedThreadId: 10 { _double = 80, _String = async, _DateTime = 2/15/2017 4:32:03 AM } Seconds taken: 1 milliseconds taken: 16 ``` Now I have 2 questions: 1. How come everything happened on a single thread? 2. Why was the delay only 1 second when I waited 3 seconds?
2017/02/14
[ "https://Stackoverflow.com/questions/42237899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/683104/" ]
First off: if you have two questions **please ask two questions**. Don't put two questions in one question. > > How come everything happened on a single thread? > > > That's the wrong question to ask. The correct question is: why do you think anything should happen on a second thread? Here, I'll give you a task: wait five minutes, and then check your email. While you're waiting, make a sandwich. **Did you have to hire someone to either do the waiting or make the sandwich**? Obviously not. Threads are workers. There's no need to hire a worker if the job can be done by one worker. The whole point of `await` is to *avoid* going to extra threads if you don't need to. In this case you don't need to. > > Why was the Delay only 1 second when i waited 3 seconds? > > > Compare these two workflows. * Wait five minutes; while you're waiting, make a sandwich * then check your email * then wait five minutes; while you're waiting, make a sandwich * then check your email * then wait five minutes; while you're waiting, make a sandwich * then check your email If you execute that workflow, you'll wait a total of fifteen minutes. The workflow you wrote was: * Wait five minutes * simultaneously, wait five minutes * simultaneously, wait five minutes * while you're waiting, make a sandwich * then check your email You only wait five minutes with that workflow; all the delays happen at the same time. Do you see how you wrote your program incorrectly now? The key insight to understand here is that **an await is a point in a program where the continuation of the await is delayed until after the awaited task completes**. If you don't put in an await, the program continues by itself without waiting. That's the meaning of `await`.
They all *start* on the same thread. When you call your three Async methods in sequence, they all execute synchronously up until the first `await` call. (After the `await`, they become state machines that pick up where they left off whenever they get scheduled. If you checked the thread ID *after* the `await Task.Delay` call, you would probably find that the continuations ran on different threads -- at least here in a console app.) As for why it's only delaying 1 second... that's what you're telling it to do. You've got three async tasks, all running simultaneously, each delaying for one second. You're not saying "[a]wait until the first task is done before starting the second" -- in fact you're carefully doing the opposite, starting all three and then awaiting all three -- so they run in parallel.
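The same timing behaviour can be demonstrated outside C#. As a sketch of the underlying idea (this uses Python's asyncio rather than the .NET runtime, so it is only an analogy), three one-second delays that are *started* first and *awaited* afterwards overlap, so the total wait is roughly the longest delay, not the sum:

```python
import asyncio
import time

async def get_value(delay, value):
    # Mirrors GetdoubleAsync/GetStringAsync: wait, then return a result.
    await asyncio.sleep(delay)
    return value

async def get_stuff():
    # Start all three tasks first, then await them -- like the C# code,
    # which calls the three methods before awaiting any of them.
    t1 = asyncio.create_task(get_value(1, 80.0))
    t2 = asyncio.create_task(get_value(1, "async"))
    t3 = asyncio.create_task(get_value(1, "now"))
    return (await t1, await t2, await t3)

start = time.monotonic()
result = asyncio.run(get_stuff())
elapsed = time.monotonic() - start

print(result)  # (80.0, 'async', 'now')
print(elapsed)  # roughly 1, not 3: the delays overlap
```

Moving each `await` next to its call (`await get_value(1, ...)` three times in sequence) turns the overlapping workflow into the sequential one and the elapsed time becomes about 3 seconds.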
5,663,787
I have to automatically upload folders to an FTP server using a Python script. I am able to upload a single file, but not folders with subfolders and files in them. I did a lot of searching, but failed. Could someone help me out here? Thanks in advance. ``` #! /usr/bin/python import ftplib s = ftplib.FTP('serverip','usrname','password') file = '/home/rock/test.txt' ftppath = '/IT' filename = "rak" s.cwd(ftppath) f = open(file,'rb') s.storbinary('STOR ' + filename, f) f.close() s.quit() ```
2011/04/14
[ "https://Stackoverflow.com/questions/5663787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/532312/" ]
I recently came into this problem and figured out a recursive function to solve it. ``` import ftplib import os server = 'localhost' username = 'generic_user' password = 'password' myFTP = ftplib.FTP(server, username, password) myPath = r'c:\temp' def uploadThis(path): files = os.listdir(path) os.chdir(path) for f in files: if os.path.isfile(path + r'\{}'.format(f)): fh = open(f, 'rb') myFTP.storbinary('STOR %s' % f, fh) fh.close() elif os.path.isdir(path + r'\{}'.format(f)): myFTP.mkd(f) myFTP.cwd(f) uploadThis(path + r'\{}'.format(f)) myFTP.cwd('..') os.chdir('..') uploadThis(myPath) # now call the recursive function ```
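A variant of the same recursive idea that avoids the Windows-specific path concatenation above: separate the directory traversal from the transfer, so the walk can be tested without an FTP server. The function names are my own, and the upload loop is a sketch that assumes an already-connected `ftplib.FTP` instance:

```python
import os
import posixpath

def walk_for_upload(local_root, remote_root):
    """Yield (local_path, remote_dir, filename) for every file under local_root.

    remote_dir uses forward slashes, since FTP servers expect POSIX paths.
    """
    local_root = os.path.abspath(local_root)
    for current_dir, _dirs, files in os.walk(local_root):
        rel = os.path.relpath(current_dir, local_root)
        remote_dir = remote_root if rel == "." else posixpath.join(
            remote_root, *rel.split(os.sep))
        for name in files:
            yield os.path.join(current_dir, name), remote_dir, name

def upload_tree(ftp, local_root, remote_root):
    """Sketch: create remote dirs as needed and STOR each file."""
    seen = set()
    for local_path, remote_dir, name in walk_for_upload(local_root, remote_root):
        if remote_dir not in seen:
            try:
                ftp.mkd(remote_dir)  # may already exist; ignore failures
            except Exception:
                pass
            seen.add(remote_dir)
        with open(local_path, "rb") as fh:
            ftp.storbinary("STOR %s" % posixpath.join(remote_dir, name), fh)
```

`walk_for_upload` can be exercised on its own against a temporary directory tree before pointing `upload_tree` at a real server; `os.walk` is top-down, so parent directories are always created before their children.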
You basically need to use os.walk() to grab those files and transfer them. Here's a script I wrote for myself to do much of what your asking. I wrote it a long time ago, so I'd probably do it differently if I wrote it again, but I get a lot of use out of it. It imports psftplib, which is a wrapper I wrote for the putty sftp. Feel free to remove these references, or grab the lib at: <http://code.google.com/p/psftplib/source/browse/trunk/psftplib.py> ``` # -*- coding: utf8 -*- '''This tool will ftp all the files in a given directory to a given location if the file ftpallcfg.py exists in the directory it will be loaded and the values within it used, with the current directory used as the source directory. ftpallcfg.py file contains the following variables. =========================== server = <server to ftp to> username = <Username for access to given server> remote_dir = <remote server directory> encrypt= True/False monitor = True/False walk = True/False =========================== ''' import ftplib import os import getpass import sys import time import socket import psftplib __revision__ = 1.11 SLEEP_SECONDS = 1 class FtpAddOns(): PATH_CACHE = [] def __init__(self, ftp_h): self.ftp_h = ftp_h def ftp_exists(self, path): '''path exists check function for ftp handler''' exists = None if path not in self.PATH_CACHE: try: self.ftp_h.cwd(path) exists = True self.PATH_CACHE.append(path) except ftplib.error_perm, e: if str(e.args).count('550'): exists = False else: exists = True return exists def ftp_mkdirs(self, path, sep='/'): '''mkdirs function for ftp handler''' split_path = path.split(sep) new_dir = '' for server_dir in split_path: if server_dir: new_dir += sep + server_dir if not self.ftp_exists(new_dir): try: print 'Attempting to create directory (%s) ...' % (new_dir), self.ftp_h.mkd(new_dir) print 'Done!' 
except Exception, e: print 'ERROR -- %s' % (str(e.args)) def _get_local_files(local_dir, walk=False): '''Retrieve local files list result_list == a list of dictionaries with path and mtime keys. ex: {'path':<filepath>,'mtime':<file last modified time>} ignore_dirs == a list of directories to ignore, should not include the base_dir. ignore_files == a list of files to ignore. ignore_file_ext == a list of extentions to ignore. ''' result_list = [] ignore_dirs = ['CVS', '.svn'] ignore_files = ['.project', '.pydevproject'] ignore_file_ext = ['.pyc'] base_dir = os.path.abspath(local_dir) for current_dir, dirs, files in os.walk(base_dir): for this_dir in ignore_dirs: if this_dir in dirs: dirs.remove(this_dir) sub_dir = current_dir.replace(base_dir, '') if not walk and sub_dir: break for this_file in files: if this_file not in ignore_files and os.path.splitext(this_file)[-1].lower() not in ignore_file_ext: filepath = os.path.join(current_dir, this_file) file_monitor_dict = { 'path': filepath, 'mtime': os.path.getmtime(filepath) } result_list.append(file_monitor_dict) return result_list def monitor_and_ftp(server, username, password, local_dir, remote_dir, encrypt=False, walk=False): '''Monitor local files and when an update is found connect and upload''' print 'Monitoring changes in (%s).' % (os.path.abspath(local_dir)) print '(Use ctrl-c to exit)' last_files_list = _get_local_files(local_dir) while True: try: time.sleep(SLEEP_SECONDS) latest_files_list = _get_local_files(local_dir) files_to_update = [] for idx in xrange(len(latest_files_list)): if idx < len(last_files_list): # compare last modified times if latest_files_list[idx]['mtime'] > last_files_list[idx]['mtime']: files_to_update.append(latest_files_list[idx]) else: # add the file to the list (new file) files_to_update.append(latest_files_list[idx]) if files_to_update: print print 'Detected NEW or CHANGED file(s), attempting to send ...' 
print is_success = upload_all(server, username, password, local_dir, remote_dir, files_to_update, encrypt, walk) if not is_success: break else: print '.', last_files_list = latest_files_list[:] # copy the list to hold except KeyboardInterrupt: print print 'Exiting.' break def upload_all(server, username, password, base_local_dir, base_remote_dir, files_to_update=None, encrypt=False, walk=False): '''Upload all files in a given directory to the given remote directory''' continue_on = False login_ok = False server_connect_ok = False base_local_dir = os.path.abspath(base_local_dir) base_remote_dir = os.path.normpath(base_remote_dir) if files_to_update: local_files = files_to_update else: local_files = _get_local_files(base_local_dir, walk) if local_files: if not encrypt: # Use standard FTP ftp_h = ftplib.FTP() else: # Use sftp ftp_h = psftplib.SFTP() try: ftp_h.connect(server) server_connect_ok = True except socket.gaierror, e: print 'ERROR -- Could not connect to (%s): %s' % (server, str(e.args)) except IOError, e: print 'ERROR -- File not found: %s' % (str(e.args)) except socket.error, e: print 'ERROR -- Could not connect to (%s): %s' % (server, str(e.args)) ftp_path_tools = FtpAddOns(ftp_h) if server_connect_ok: try: ftp_h.login(username,password) print 'Logged into (%s) as (%s)' % (server, username) login_ok = True except ftplib.error_perm, e: print 'ERROR -- Check Username/Password: %s' % (str(e.args)) except psftplib.ProcessTimeout, e: print 'ERROR -- Check Username/Password (timeout): %s' % (str(e.args)) if login_ok: for file_info in local_files: filepath = file_info['path'] path, filename = os.path.split(filepath) remote_sub_path = path.replace(base_local_dir, '') remote_path = path.replace(base_local_dir, base_remote_dir) remote_path = remote_path.replace('\\', '/') # Convert to unix style if not ftp_path_tools.ftp_exists(remote_path): ftp_path_tools.ftp_mkdirs(remote_path) # Change to directory try: ftp_h.cwd(remote_path) continue_on = True except 
ftplib.error_perm, e: print 'ERROR -- %s' % (str(e.args)) except psftplib.PsFtpInvalidCommand, e: print 'ERROR -- %s' % (str(e.args)) if continue_on: if os.path.exists(filepath): f_h = open(filepath,'rb') filename = os.path.split(f_h.name)[-1] display_filename = os.path.join(remote_sub_path, filename) display_filename = display_filename.replace('\\', '/') print 'Sending (%s) ...' % (display_filename), send_cmd = 'STOR %s' % (filename) try: ftp_h.storbinary(send_cmd, f_h) f_h.close() print 'Done!' except Exception, e: print 'ERROR!' print str(e.args) print else: print "WARNING -- File no longer exists, (%s)!" % (filepath) ftp_h.quit() print 'Closing Connection' else: print 'ERROR -- No files found in (%s)' % (base_local_dir) return continue_on if __name__ == '__main__': import optparse default_config_file = u'ftpallcfg.py' # Create parser, and configure command line options to parse parser = optparse.OptionParser() parser.add_option("-l", "--local_dir", dest="local_dir", help="Local Directory (Defaults to CWD)", default='.') parser.add_option("-r", "--remote_dir", dest="remote_dir", help="[REQUIRED] Target Remote directory", default=None) parser.add_option("-u", "--username", dest="username", help="[REQUIRED] username", default=None) parser.add_option("-s","--server", dest="server", help="[REQUIRED] Server Address", default=None) parser.add_option("-e", "--encrypt", action="store_true", dest="encrypt", help="Use sftp", default=False) parser.add_option("-m", action="store_true", dest="monitor", help="Keep process open and monitor changes", default=False) parser.add_option("-w", action="store_true", dest="walkdir", help="Walk sub directories of the given directory to find files to send.", default=False) (options,args) = parser.parse_args() if (options.username and options.server and options.remote_dir) or \ os.path.exists(default_config_file): local_dir = options.local_dir if os.path.exists(default_config_file): sys.path.append('.') import ftpallcfg try: server = 
ftpallcfg.server username = ftpallcfg.username remote_dir = ftpallcfg.remote_dir encrypt = ftpallcfg.encrypt monitor = ftpallcfg.monitor walk = ftpallcfg.walk except AttributeError, e: print "ERROR --", str(e.args) print print 'Value(s) missing in %s file! The following values MUST be included:' % (default_config_file) print '================================' print 'server = <server to ftp to>' print 'username = <Username for access to given server>' print 'remote_dir = <remote server directory>' print 'encrypt= True/False' print 'monitor = True/False' print 'walk == True/False' print '================================' sys.exit() else: server = options.server username = options.username remote_dir = options.remote_dir encrypt = options.encrypt monitor = options.monitor walk = options.walkdir # get the user password prompt = 'Password (%s@%s): ' % (username, server) if os.isatty(sys.stdin.fileno()): p = getpass.getpass(prompt) else: #p = sys.stdin.readline().rstrip() p = raw_input(prompt).rstrip() if options.encrypt: print '>> Using sftp for secure transfers <<' print if monitor: try: monitor_and_ftp(server,username,p,local_dir, remote_dir, encrypt, walk) except KeyboardInterrupt: print 'Exiting...' else: try: upload_all(server, username, p, local_dir, remote_dir, [], encrypt, walk) except KeyboardInterrupt: print 'Exiting...' else: print 'ERROR -- Required option not given!' print __revision__ print __doc__ print parser.print_help() ```
1,448
[CbpA](http://www.uniprot.org/uniprot/P36659) is a DNA-binding protein found in E. coli that binds non-specifically to curved DNA ([Cosgriff et al., 2010](http://dx.doi.org/10.1111/j.1365-2958.2010.07292.x)) when the bacterium is in the stationary phase of growth. The use of "curved DNA" confuses me. Is the term "curved DNA" essentially the same as "circular DNA"? --- [Cosgriff, S. et al. Dimerization and DNA-dependent aggregation of the Escherichia coli nucleoid protein and chaperone CbpA. Mol. Microbiol. 77, 1289–1300 (2010).](http://dx.doi.org/10.1111/j.1365-2958.2010.07292.x)
2012/03/16
[ "https://biology.stackexchange.com/questions/1448", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/158/" ]
Cone cells are each connected to their own neurone. This allows them a great deal of resolution as the brain can interpret the exact position of the cone cell that was stimulated by a light photon. However in order to improve low light vision, multiple rod cells are connected to a single neurone - this is called summation. Whilst it does allow for an action potential to be generated in low light conditions, it greatly reduces resolution as the brain can not know precisely which rod cell was stimulated: ![Anatomy of the human retina](https://i.stack.imgur.com/15SFw.gif) Rods can not detect colour as they only come in one variety - cone cells (in humans) come in a red, green and blue specific form to allow for the perception of colour by the brain due to the relative strength of these signals.
All of the above answers are great, and very informative. But they are also *technically* wrong, in certain conditions. Once you understand them, you'll be able to understand this explanation of why. The canonical answer is that cones are used for color perception in bright light and rods are used in low light. But rods have a peak color sensitivity that is very distinct from the cones (see the chart posted above). And more importantly, *there are light levels at which both rods and cones are equally functional for color perception*. This is known as the "Purkinje effect" or "Purkinje shift". Basically, when light levels dim, your red color perception diminishes first, but your blue color perception is enhanced (or at least doesn't diminish nearly as fast). The specific effect is that red objects get darker much faster than blue ones. But the brain isn't yet just perceiving the blue objects as a brighter gray, so it seems there is some color perception built into the brain based on the rods. <http://en.wikipedia.org/wiki/Purkinje_effect>
48,611
Probably a weird question. I'm using the custom [MetaBox and CustomFields Class](https://github.com/jaredatch/Custom-Metaboxes-and-Fields-for-WordPress) by jaredatch on GitHub. I have this "event-date" metabox: ``` $meta_boxes[] = array( 'id' => 'event_date', 'title' => 'Event Date', 'pages' => array( 'wr_event', ), 'context' => 'normal', 'priority' => 'high', 'fields' => array( array( 'name' => 'Test Date Picker (UNIX timestamp)', 'desc' => 'field description (optional)', 'id' => $prefix . 'event_date', 'type' => 'text_date_timestamp', ) ), ); ``` I have a second metabox called "event-review": ``` $meta_boxes[] = array( 'id' => 'wr_event_review', 'title' => 'Event Review', 'pages' => array( 'wr_event', ), // Post type 'context' => 'normal', 'priority' => 'high', 'show_names' => true, // Show field names on the left 'fields' => array( array( 'name' => 'Event Review', 'id' => $prefix . 'event_wysiwyg', 'type' => 'wysiwyg', 'options' => array( 'textarea_rows' => 5, ), ) ), ); ``` I wonder if it's possible to show the event-review metabox only after the event date has passed. Something like… ``` if ( date('U') > date('U', $_POST["_wr_event_date"] ) ) { $meta_boxes[] = array( 'id' => 'wr_event_review', 'title' => 'Event Review', ``` However, I have no idea whether this is even possible, or how I can get the current `event_date` that is in the input. Any thoughts on this?
2012/04/11
[ "https://wordpress.stackexchange.com/questions/48611", "https://wordpress.stackexchange.com", "https://wordpress.stackexchange.com/users/3529/" ]
Use `add_editor_style` e.g.: functions.php ``` add_editor_style('custom-editor-style.css'); ``` <http://codex.wordpress.org/Function_Reference/add_editor_style>
`add_editor_style` is recommended for themes. In a plugin you can use the `mce_css` filter. The following sample code is from [here](http://codex.wordpress.org/Plugin_API/Filter_Reference/mce_css): ``` function plugin_mce_css( $mce_css ) { if ( !empty( $mce_css ) ) $mce_css .= ','; $mce_css .= plugins_url( 'editor.css', __FILE__ ); return $mce_css; } add_filter( 'mce_css', 'plugin_mce_css' ); ```
29,431,726
I'm developing an Android app where I would like to fetch some data (mostly text) from the internet, but not necessarily from a website! I would like to have a server that allows clients to fetch some text data. What kind of server fits my goals best? HTTP, or maybe plain TCP? I don't know much about HTTP, so I don't know whether it matches my goals and/or handles a kind of text "database" well. Edit: A use case could be: people write comments and send them to the server. Then clients refresh their app by fetching new comments from the server. Therefore I'm asking what kind of server could best handle this, and what kind of database, if one is needed.
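To make the comparison concrete: HTTP already gives you request/response semantics, content types, and wide client support (Android ships HTTP clients out of the box), which is why it usually beats a hand-rolled TCP protocol for this use case. A minimal sketch of the comment-refresh flow (the URL scheme and polling parameter are my own invention, and a real app would persist comments in a database rather than a list):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Toy in-memory "database" of comments.
COMMENTS = [{"id": 1, "text": "first!"}, {"id": 2, "text": "nice app"}]

class CommentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients poll e.g. /comments?since=1 to fetch only newer comments.
        since = 0
        if "since=" in self.path:
            since = int(self.path.split("since=")[1])
        body = json.dumps([c for c in COMMENTS if c["id"] > since]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), CommentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/comments?since=1" % server.server_port
new_comments = json.loads(urllib.request.urlopen(url).read())
print(new_comments)  # [{'id': 2, 'text': 'nice app'}]
server.shutdown()
```

The Android side would be the mirror image: an HTTP GET with the last seen comment id, parsing the JSON response.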
2015/04/03
[ "https://Stackoverflow.com/questions/29431726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3787649/" ]
AWK may be used for this purpose. See <https://www.gnu.org/software/gawk/manual/html_node/Redirection.html>, which contains the following conceptual example:

```
$ awk '{ print $2 > "phone-list"
>        print $1 > "name-list" }' mail-list
$ cat phone-list
-| 555-5553
-| 555-3412
…
$ cat name-list
-| Amelia
-| Anthony
…
```

where mail-list is filled with two columns of information: the first column containing the names, and the second containing the phone numbers.

See the `match(string,regex)` function (<http://www.grymoire.com/Unix/Awk.html#uh-47>) for capturing regular expressions, keeping in mind that $0 designates the entire line read in. This function sets the variables RSTART and RLENGTH, which can be used with the `substr(string,position,length)` function (<http://www.grymoire.com/Unix/Awk.html#uh-43>) to extract the matched pattern (where string=$0 if you're searching by line).

A great introduction to AWK is here: <http://www.grymoire.com/Unix/Awk.html> ...it may look long, but it is worth the investment.

**Update**

If you're actually dealing with multiple lines containing fields of information, and you don't particularly care whether the items found are printed in the same columnar form, then the following would work:

```
echo -e " apple pears banana \n kiwi ananas cocoa\n pork" | awk '{
    #printf "\n"
    for(j=1;j<=NF;j++){
        i=match($j,/[ab][a-z]+/)
        if(i>0){
            print $j > "removed.txt"
        }else{
            printf $j " "
        }
    }
}'
```

If you do care about retaining the columnar form, then you could use the `printf` function as commented out above, with a little massaging to get it just right (and replace the second `print` with `printf $j " "`). Since AWK deals in fields, though, the method presented above would cause problems if you had multiple instances of the pattern in a single field (i.e. without a delimiter between them) that you wanted to capture.

**Update 2**

Here's a better solution that will ensure all matches are found, irrespective of fields:

```
echo -e " apple pears banana \n kiwi ananas cocoa" | awk '
BEGIN { regex="a.{2,3}"; }
{
    ibeg=1;
    imat=match(substr($0,ibeg),regex);
    after=$0;
    while (imat) {
        before = substr($0,ibeg,RSTART-1);
        pattern = substr($0,ibeg+RSTART-1,RLENGTH);
        after = substr($0,ibeg+RSTART+RLENGTH-1);
        printf before;
        print pattern >"removed.txt";
        ibeg=ibeg+RSTART+RLENGTH-1;
        imat=match(substr($0,ibeg),regex);
    }
    print after;
}
'
```

Output:

```
 e peba
 kiwi ocoa
```

Removed:

```
$ cat removed.txt
appl
ars
anan
anan
as c
```
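The same remove-and-divert idea can be sketched outside AWK. Here is a Python analogue (my own construction, not from the linked manual) of the Update 2 script: a `re.sub` callback deletes every match from the output while collecting the removed pieces separately. One minor difference to note: Python's `.` does not match newlines by default, which here happens to coincide with awk's per-line processing.

```python
import re

def remove_matches(text, pattern):
    """Return (text_without_matches, list_of_removed_matches)."""
    removed = []

    def grab(match):
        removed.append(match.group(0))
        return ""  # delete the match from the output text

    kept = re.sub(pattern, grab, text)
    return kept, removed

kept, removed = remove_matches(" apple pears banana \n kiwi ananas cocoa",
                               r"a.{2,3}")
print(repr(kept))   # ' e peba \n kiwi ocoa'
print(removed)      # ['appl', 'ars ', 'anan', 'anan', 'as c']
```

The removed list matches the awk `removed.txt` contents (the second entry carries a trailing space, since `.` matches spaces in both engines).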
**Update 2** Here's a better solution that will ensure all matches are found and is irrespective of fields: ``` echo -e " apple pears banana \n kiwi ananas cocoa" | awk ' BEGIN { regex="a.{2,3}"; } { ibeg=1; imat=match(substr($0,ibeg),regex); after=$0; while (imat) { before = substr($0,ibeg,RSTART-1); pattern = substr($0,ibeg+RSTART-1,RLENGTH); after = substr($0,ibeg+RSTART+RLENGTH-1); printf before; print pattern >"removed.txt"; ibeg=ibeg+RSTART+RLENGTH-1; imat=match(substr($0,ibeg),regex); } print after; } ' ``` Output: ``` e peba kiwi ocoa ``` Removed: ``` $ cat removed.txt appl ars anan anan as c ```
Using GNU awk for the 4th arg to split(): ``` $ cat tst.awk { split($0,flds,re,seps) for (i=1;i in flds;i++) { printf "%s", flds[i] if (i in seps) print seps[i] > "removed.txt" } print "" } $ echo -e " apple pears banana \n kiwi ananas cocoa" | awk -v re=' [ab][a-z]+' -f tst.awk pears kiwi cocoa $ cat removed.txt apple banana ananas ```
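Python's `re.split` offers a close analogue of gawk's 4-argument `split`: wrapping the separator pattern in a capturing group makes the separators themselves appear in the result list, interleaved with the kept fields (this parallel is my own, not part of either manual):

```python
import re

text = " apple pears banana \n kiwi ananas cocoa"
# A capturing group in the pattern => the separators are kept in the
# output list, alternating field, sep, field, sep, ...
parts = re.split(r"( [ab][a-z]+)", text)

kept = [p for i, p in enumerate(parts) if i % 2 == 0]
seps = [p for i, p in enumerate(parts) if i % 2 == 1]
print(kept)  # ['', ' pears', ' \n kiwi', ' cocoa']
print(seps)  # [' apple', ' banana', ' ananas']
```

As in the gawk version, the fields (`pears`, `kiwi`, `cocoa`) survive while the matches (`apple`, `banana`, `ananas`) can be written elsewhere.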
1,328,568
I am trying to write a simple audit class that takes input via operator << and writes the audit after receiving a custom manipulator, like this: ``` class CAudit { public: //needs to be templated CAudit& operator << ( LPCSTR data ) { audittext << data; return *this; } //attempted manipulator static CAudit& write(CAudit& audit) { //write contents of audittext to audit and clear it return audit; } private: std::stringstream audittext; }; //to be used like CAudit audit; audit << "Data " << data << " received at " << time << CAudit::write; ``` I recognise that the overloaded operator in my code does not return a stream object, but I was wondering if it is still possible to use manipulator-like syntax. Currently the compiler is treating the '<<' as the binary right-shift operator. Thanks for any input, Patrick
2009/08/25
[ "https://Stackoverflow.com/questions/1328568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38892/" ]
To make it work you have to add an overload of `operator<<` that takes a function pointer, then call the function from it: ``` class CAudit { //...other details here as in original question CAudit& operator << (CAudit& (*func)(CAudit &)) { return func(*this); } }; CAudit audit; audit << "some text" << CAudit::write; ```
The binary shift operator and the stream operator are the same operator. It is completely legal to overload `operator+` for your class to write "Hello world" on `std::cout` (although it would be a very bad idea). In the same way, the C++ standard authors decided to overload `operator<<` for streams to mean writing to the stream. You didn't write clearly what your problem is. My guess is a compilation error. The best thing in that case is to quote the error message. If I am right, the problem is that you only defined `operator<<` for `LPCSTR`, and then you want it to work with a function on the right side. You use the word "manipulator", but you misunderstand something. A manipulator for a stream (a stream from the STL) is a function that performs some actions on the stream it is written to. And it works only because of this overload: ``` ostream& operator<< (ostream& ( *pf )(ostream&)); ``` which takes a function and applies it to a stream. Similarly you need: ``` CAudit& operator<< (CAudit& ( *pf )(CAudit&)) { return (*pf)(*this); } ```
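The manipulator trick is not specific to C++. As a cross-language illustration of the same pattern (entirely my own sketch, not from the question's code), a Python class can overload its shift operator and treat callables as manipulators:

```python
import io

class Audit:
    """Collects text via << and flushes when it receives a callable."""

    def __init__(self):
        self._buf = io.StringIO()
        self.written = []  # stands in for the real audit sink

    def __lshift__(self, item):
        if callable(item):           # a manipulator, like CAudit::write
            return item(self)
        self._buf.write(str(item))   # ordinary data, like operator<<(LPCSTR)
        return self                  # return self so chaining works

def write(audit):
    """Manipulator: flush the buffered text and clear the buffer."""
    audit.written.append(audit._buf.getvalue())
    audit._buf = io.StringIO()
    return audit

audit = Audit()
audit << "Data " << 42 << " received at " << "12:00" << write
print(audit.written)  # ['Data 42 received at 12:00']
```

Returning the object itself from the data overload and dispatching on "is the right-hand side a function?" is exactly what the C++ answers above do with the function-pointer overload.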
14,488,946
I need to connect to the data in a remote MySQL table, iterate through it, and use ActiveRecord "create" methods so that all validations and callbacks are performed. I have found documentation for having multiple database config settings, and assigning them individually per model, but that doesn't fit my needs because I don't want to create models or migrations for a one time import. So I found this other method of querying a database outside of AR: ``` base = ActiveRecord::Base.establish_connection( :adapter => "mysql2", :host => "XXX", :username => "YYY", :password => "ZZZ", :database => "AAA" ) results = base.connection.execute("SELECT * FROM programs WHERE 1") ``` But it returns an object that looks fairly inscrutable: ``` #<Mysql2::Result:0x007fdf6e8bb9d8> ``` Any ideas?
2013/01/23
[ "https://Stackoverflow.com/questions/14488946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636892/" ]
1) Yes, this is the best way for the aggregate to maintain its integrity. Some say that this can result in aggregates with a very large number of methods; however, in that case there may be multiple aggregates at play. 2) Ideally, the aggregate would perform the required operation without passing references. There may be a case where passing a reference makes sense, but this should be implemented with care, as it makes reasoning about integrity more difficult.
> > I assume in most cases if an external object needs to perform some > operation on an internal entity, it should call method(s) defined on the > Aggregate root > > > Just to add a slightly different take on this, the reverse approach might also be used. Adding methods to the Aggregate Root **in most cases** forces you to divide your domain into very small Aggregates lest the roots become bloated, violating SRP. This slicing might come at the cost of sacrificing the natural business cohesion of your Aggregates. Instead, you could decide that **in most cases** you will let external objects get transient references to internal entities and manipulate them as they wish. **In rarer cases**, especially ones that imply enforcing invariants that span across multiple entities, it would be a better idea to implement these operations directly on the Root. That approach is discussed here: <https://groups.google.com/forum/#!topic/dddcqrs/mtGanS39XYo> > > the way I see it is although an aggregate root is responsible for the > life cycle of entities within, that doesn't mean that it should be the > exclusive interface ( other than returning a specific entity) to all > methods called on any item within the aggregate. > > > Overall, the final decision will depend on whether you want to design your aggregates primarily with domain/functional cohesiveness in mind, or you first want to think of them as transactional safeguards.
206,283
As expected, the Nvidia drivers have broken my setup. I can't log in to a Unity 3D session. I can't find anything to help me uninstall the driver, and I don't know what to do, as I have never uninstalled drivers before.
2012/10/25
[ "https://askubuntu.com/questions/206283", "https://askubuntu.com", "https://askubuntu.com/users/99229/" ]
If you only have access to GRUB: 1. Restart your computer 2. Select `Advanced options for Ubuntu` at the GRUB boot menu 3. Select `Ubuntu ..... (recovery mode)` 4. Select `dpkg` (Repair broken packages) at the Recovery Menu (this will stop the Nvidia processes so we can uninstall). Choose Yes. When finished, press ENTER (even if it found problems) 5. Select `root` (Drop to root shell prompt) at the Recovery Menu 6. Type your root password 7. Type: `apt-get remove --purge nvidia-*` 8. When finished, type: `reboot` 9. Boot as usual; it should go to your Ubuntu login screen now
I didn't want to modify `/etc/modules`, just revert to an install without the Nvidia drivers. So I mostly did as the accepted answer suggested, but instead of the last step, I did this: ``` sudo rm /etc/modprobe.d/blacklist-nouveau.conf sudo update-initramfs -k all -u ``` So my variant summary: ``` sudo apt-get remove --purge nvidia-* sudo apt-get install ubuntu-desktop sudo rm /etc/X11/xorg.conf sudo rm /etc/modprobe.d/blacklist-nouveau.conf sudo update-initramfs -k all -u ```
2,434,595
I'm looking for some stand alone library to access SMB/CIFS shares. I am not looking for mounting the shares, just browsing and accessing the files for reading. Preferable something with a simple simple API similar to regular POSIX operations of `opendir`, `scandir`, `read` and etc. Thanks in advance!
2010/03/12
[ "https://Stackoverflow.com/questions/2434595", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2454753/" ]
As you say nothing about the preferred language, I would recommend looking at <http://jcifs.samba.org>, especially <http://jcifs.samba.org/src/docs/api/jcifs/smb/SmbFile.html>. It has what you ask for: reading files, enumerating directories. The API is fairly simple.
[Pysmb](http://miketeo.net/wp/index.php/projects/pysmb) is a good choice if you want to access samba shares from python.
181,782
I am trying to build a transformer to convert 12V AC power to 5V AC. Here is what I have right now: [![Transformer](https://i.stack.imgur.com/yzflK.jpg)](https://i.stack.imgur.com/yzflK.jpg) I haven't adjusted the coil ratio yet but I tried it just to see if there would be any output and as a matter of fact there is none. I tested the core and it's ferromagnetic, so my guesses are either it can't work because the core's center is empty (it's a pipe), or because the solenoids can't be aligned and the core has to be circular. I would greatly appreciate if someone could explain to me what's wrong with that design and why it doesn't work (or why it should).
2015/07/25
[ "https://electronics.stackexchange.com/questions/181782", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/27630/" ]
The core doesn't need to be circular, but it must be closed, otherwise the linked flux will be very low. Moreover, the fact that the pipe is empty doesn't improve the situation, since the flux is concentrated where there is higher permeability, i.e. in the core, but the net section of the core in your case is small. In fact most of the section of the coil is filled with air, which has poor permeability. You cannot close the core with a simple piece of iron wire. It won't be effective, since the flux will be constrained in the smaller section of the wire. Keep in mind that flux obeys a sort of "Ohm's law for magnetic circuits", called [Hopkinson's law](https://en.wikipedia.org/wiki/Magnetic_circuit#Hopkinson.27s_law:_the_magnetic_analogy_to_Ohm.27s_law). The role of resistance is taken by a quantity known as *reluctance*, which is inversely proportional to the net section of the core where the flux flows. The flux is analogous to current. Therefore a tiny section will limit the flux greatly. Since the role of the voltage is taken by the *magnetomotive force* (MMF) which depends on the current in the coil, you can understand that with the same current in the primary and a high reluctance due to a flux constrained in a little section of wire, the flux will be small, and hence the induced current in the secondary will be small. If you try to pump more current in the primary, the result will be that the core will saturate (a strongly non-linear effect), with the consequence that its permeability will drop drastically, voiding your attempt. To have enough coupling between the two coils you need a closed magnetic circuit with substantially low reluctance. Therefore you need a closed path made of ferromagnetic material with a more or less constant section, since any narrowing in the section will increase the reluctance. 
**EDIT** (prompted by a useful comment by @Asmyldof) Although I explained above why your setup is not efficient for a power [transformer](https://en.wikipedia.org/wiki/Transformer), and the explanation still stands, there are a couple of issues to be aware of when dealing with transformer operation. [This interesting article on transformers](http://www.allaboutcircuits.com/textbook/alternating-current/chpt-9/practical-considerations-transformers/) has nice pictures and delves into the subject in more detail. I'll briefly point out two key aspects below. As I said, to be able to have high coupling between primary and secondary winding you need low reluctance and a closed core. This calls for a solid core with a closed magnetic path. Relative to your setup, this will improve the situation, but be aware that using a ferromagnetic core which is also electrically conducting, as iron is, has its drawbacks. First (and really important for a power transformer) there are core power losses. If the core is made of a good conducting material, eddy currents will be induced in its cross section and this will cause power loss by [Joule heating](https://en.wikipedia.org/wiki/Joule_heating) (as in a resistor). This is not the only source of core losses, but for conductive cores it's usually the most relevant. Therefore, using a solid iron bar as a transformer core, you risk losing a lot of power heating the core itself (that's why cores made of iron are not solid; they are still "filled", but laminated, i.e. made of many layers electrically insulated from each other). The second key aspect is [saturation](https://en.wikipedia.org/wiki/Saturation_%28magnetic%29). If you increase the primary current over a certain limit the core will saturate and the permeability will drop, hence the reluctance will rise. Having a not-completely-closed-loop core is, in this case, beneficial. In fact sometimes cores are built with a small air-gap, i.e. the core forms an almost-closed loop, but not quite. 
The small air gap has much higher reluctance than the rest of the core, hence it increases the overall reluctance of core+gap, which seems bad, but the advantage is that the gap helps linearize the core, i.e. limits the effect of saturation. Moreover, the gap is very small (say about the thickness of a sheet of paper) and this prevents the flux from dispersing in the space around the core, hence it doesn't worsen the overall coupling too much. Other interesting links about transformers: * [Transformers Basics](http://www.electronics-tutorials.ws/transformer/transformer-basics.html) * [Transformers Construction](http://www.electronics-tutorials.ws/transformer/transformer-construction.html)
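As a rough numerical illustration of Hopkinson's law described above (flux = MMF / reluctance, with the reluctance of a path segment R = l / (μ0 μr A)), the sketch below uses invented example dimensions, turns, and permeability, not measurements of the asker's setup:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def reluctance(length_m, area_m2, mu_r):
    """Reluctance of a magnetic path segment: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

def flux(turns, current_a, total_reluctance):
    """Hopkinson's law: flux = MMF / R, with MMF = N * I."""
    return turns * current_a / total_reluctance

# Hypothetical closed core: 0.2 m mean path, 1 cm^2 cross-section, mu_r = 1000
r_core = reluctance(0.2, 1e-4, 1000)
# A thin 0.1 mm air gap (mu_r = 1) in the same cross-section
r_gap = reluctance(1e-4, 1e-4, 1)

phi_closed = flux(200, 0.5, r_core)
phi_gapped = flux(200, 0.5, r_core + r_gap)
# Even this tiny gap contributes a reluctance comparable to the whole core,
# which is why an open or thin-wire-closed path links so little flux.
```

The same arithmetic explains the answer's point about closing the loop with a thin wire: the small cross-section term A in the denominator makes that segment's reluctance dominate.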
From the picture, it appears that you placed the coils "next" to each other. This configuration gives you the **least** amount of flux cutting across the secondary winding. To improve the coupling, you need to wind the secondary **on top** of the primary. The "efficiency" of the coupling will depend on what you use as a core (air, hollow pipe, solid pipe, etc.), but the transformer action has to work! If you use 200 turns on the primary and 100 turns on the secondary, the output should be 1/2 of the input voltage. The size of the wires will determine the current capabilities of the windings, but not the voltage.
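The turns-ratio rule in the last paragraph (the output of an ideal transformer scales as Ns/Np) is easy to check with a one-line calculation; note that for the asker's 12 V to 5 V target the ratio would need to be 5:12 rather than 1:2 (the turn counts below are illustrative, not a winding recommendation):

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal-transformer relation: Vs = Vp * Ns / Np."""
    return v_primary * n_secondary / n_primary

half = secondary_voltage(12.0, 200, 100)    # 200:100 turns -> 6 V
target = secondary_voltage(12.0, 240, 100)  # 240:100 turns -> 5 V
```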
9,151,611
There is a basic client-server application. The client uses simple remoting to communicate with the server side. The server side could be powered by WebORB, BlazeDS or any other product. The client side is using the Flex framework. That's it for the technology stack. Now, let's forget about the server side and just have a look at the following client-side code ``` package com.blog.ri { import mx.collections.ArrayCollection; public class MyCollection extends ArrayCollection { public function MyCollection(source:Array=null) { super(source); } } } ``` Additionally, let's assume we have the following class and that it is mapped to the server-side class: ``` package com.blog.ri { [Bindable] [RemoteClass(alias="com.blog.ri.MyEntity")] public dynamic class MyEntity { private var _myCollection:MyCollection; public function get myCollection():MyCollection { if(_myCollection == null) _myCollection = new MyCollection(); return _myCollection; } public function set myCollection(value:MyCollection):void { _myCollection = value; } } } ``` Also, the server-side service exposes the void save(MyEntity candidate) method to clients, and I implemented it on the client side as shown below: ``` package com.blog.ri { public class MyService { private var _remoteObject:RemoteObject; public function MyService() { var channelSet:ChannelSet = new ChannelSet(); var amfChannel:AMFChannel = new AMFChannel("my-amf", "http://localhost/weborb.aspx"); channelSet.addChannel(amfChannel); _remoteObject = new RemoteObject("GenericDestination"); _remoteObject.channelSet = channelSet; _remoteObject.source = "com.blog.ri.MyService"; _remoteObject.getDetailedStatistic.addEventListener("result",onItemSaved); _remoteObject.addEventListener("fault", onFault); } public function save(candidate:MyEntity, responder:IResponder = null ):void { var asyncToken:AsyncToken = _remoteObject.save(candidate); if( responder != null ) asyncToken.addResponder( responder ); } } } ``` Finally, I tried to save a new instance of the
MyEntity class in our main mxml file as shown below: ``` protected function creationCompleteHandler():void { var myService:MyService = new MyService(); var candidate:MyEntity = new MyEntity(); candidate.myCollection = new MyCollection(); myService.save(candidate); } ``` That is it. When I ran the code, I received the following exception: > > ArgumentError: Error #2004: One of the parameters is invalid. at > flash.net::NetConnection/invokeWithArgsArray() at > flash.net::NetConnection/call() at > mx.messaging.channels::NetConnectionChannel/internalSend()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:281] > at > mx.messaging.channels::AMFChannel/internalSend()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\channels\AMFChannel.as:364] > at > mx.messaging::Channel/send()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\Channel.as:1002] > at > mx.messaging.channels::PollingChannel/send()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\channels\PollingChannel.as:394] > at > mx.messaging::ChannelSet/send()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\ChannelSet.as:1429] > at > mx.messaging::ChannelSet/channelConnectHandler()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\ChannelSet.as:1084] > at flash.events::EventDispatcher/dispatchEventFunction() at > flash.events::EventDispatcher/dispatchEvent() at > mx.messaging::Channel/connectSuccess()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\Channel.as:1148] > at > mx.messaging.channels::AMFChannel/resultHandler()[E:\dev\hero\_private\frameworks\projects\rpc\src\mx\messaging\channels\AMFChannel.as:576] > > > As you can see, I extended the ArrayCollection class and according to the Adobe documentation, the ArrayCollection implements the [IExternalizable interface](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/IExternalizable.html). 
I decided to localize the problem and created a simple class that implements IExternalizable. Then, I extended this class in some other class, MyChild, and defined a MyChild property in the MyEntity class. In this case, I received the exception above as well. Is there a problem with how I wrote the code, or is it a bug within Flex? Thanks for any help. The question is duplicated on my blog.
2012/02/05
[ "https://Stackoverflow.com/questions/9151611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/672186/" ]
Try adding an alias for MyCollection. I got the unhelpful Error #2004 until all of my classes implementing IExternalizable had aliases. In my case it was for persistence (ViewNavigatorApplicationBase.persistNavigatorState = true) in which case I had to make sure I called registerClassAlias early enough in the startup to precede View.deserializeData(). The preinitialize event on the app works ok.
Some of the classes being (de)serialized are probably not linked into the Flex project. Try adding the following to your main Application/Module file: ``` private var forceReference:Array = [YourClass1, YourClass2]; ```
736,432
How can I get the location of a program in Ubuntu? For example, I have `Oracle`; how can I get the root folder (location) of `Oracle`?
2016/02/18
[ "https://askubuntu.com/questions/736432", "https://askubuntu.com", "https://askubuntu.com/users/496526/" ]
Bash and Dash have the `command` built-in command that can show you the location of a command with the `-v` switch, if the command refers to an executable. For built-in commands and aliases the result is different. Examples: ```none $ command -v java /usr/bin/java $ echo $? 0 $ command -v echo echo $ command -v ls alias ls='ls -h --color=auto' $ command -v non-existing_command; echo $? 1 ``` Additionally, all shells derived from Sh, know the `type` command that tells you about the nature of any command. ```none $ type java java is /usr/bin/java $ type ls ls is aliased to `ls -h --color=auto' $ type echo echo is a shell builtin $ type non-existing_command bash: type: non-existing_command: not found ``` If your shell (e. g. Bash) supports it, `type -a` lists everything that a command may refer to: ```none $ type -a ls ls is aliased to `ls -h --color=auto' ls is /bin/ls $ type -a echo echo is a shell builtin echo is /bin/echo $ type -a touch touch is /usr/bin/touch touch is /bin/touch ```
Shellscript `what-about` ======================== I have a bash shellscript, that tries to * find where a program is located, * find the corresponding package and * tell what kind of program it is Maybe use the name `what-about`, make it executable and put it in PATH, ``` #!/bin/bash LANG=C inversvid="\0033[7m" resetvid="\0033[0m" if [ $# -ne 1 ] then echo "Usage: ${0##*/} <program-name>" echo "Will try to find corresponding package" echo "and tell what kind of program it is" exit 1 fi command="$1" str=;for ((i=1;i<=$(tput cols);i++)) do str="-$str";done tmp="$command" first=true curdir="$(pwd)" tmq=$(which "$command") tdr="${tmq%/*}" tex="${tmq##*/}" if test -d "$tdr"; then cd "$tdr"; fi #echo "cwd='$(pwd)' ################# d" while $first || [ "${tmp:0:1}" == "l" ] do first=false tmp=${tmp##*\ } tmq="$tmp" tmp=$(ls -l "$(which "$tmp")" 2>/dev/null) tdr="${tmq%/*}" tex="${tmq##*/}" if test -d "$tdr"; then cd "$tdr"; fi # echo "cwd='$(pwd)' ################# d" if [ "$tmp" == "" ] then tmp=$(ls -l "$tex" 2>/dev/null) tmp=${tmp##*\ } if [ "$tmp" == "" ] then echo "$command is not in PATH" # package=$(bash -ic "$command -v 2>&1") # echo "package=$package XXXXX 0" bash -ic "alias '$command' > /dev/null 2>&1" > /dev/null 2>&1 if [ $? -ne 0 ] then echo 'looking for package ...' 
package=$(bash -ic "$command -v 2>&1"| sed -e '0,/with:/d'| grep -v '^$') else echo 'alias, hence not looking for package' fi # echo "package=$package XXXXX 1" if [ "$package" != "" ] then echo "$str" echo "package: [to get command '$1']" echo -e "${inversvid}${package}${resetvid}" fi else echo "$tmp" fi else echo "$tmp" fi done tmp=${tmp##*\ } if [ "$tmp" != "" ] then echo "$str" program="$tex" program="$(pwd)/$tex" file "$program" if [ "$program" == "/usr/bin/snap" ] then echo "$str" echo "/usr/bin/snap run $command # run $command " sprog=$(find /snap/"$command" -type f -iname "$command" \ -exec file {} \; 2>/dev/null | sort | tail -n1) echo -e "${inversvid}file: $sprog$resetvid" echo "/usr/bin/snap list $command # list $command" slist="$(/usr/bin/snap list "$command")" echo -e "${inversvid}$slist$resetvid" else package=$(dpkg -S "$program") if [ "$package" == "" ] then package=$(dpkg -S "$tex" | grep -e " /bin/$tex$" -e " /sbin/$tex$") if [ "$package" != "" ] then ls -l /bin /sbin fi fi if [ "$package" != "" ] then echo "$str" echo " package: /path/program [for command '$1']" echo -e "${inversvid} $package ${resetvid}" fi fi fi echo "$str" #alias=$(grep "alias $command=" "$HOME/.bashrc") alias=$(bash -ic "alias '$command' 2>/dev/null"| grep "$command") if [ "$alias" != "" ] then echo "$alias" fi type=$(type "$command" 2>/dev/null) if [ "$type" != "" ] then echo "type: $type" elif [ "$alias" == "" ] then echo "type: $command: not found" fi cd "$curdir" ``` Demo examples ============= Sometimes there are two alternatives, e.g. for `echo`, both a separate compiled program and shell built-in command. 
The shell built-in will get priority and be used unless you use the full path of the separate program, ### `echo` in 18.04.6 LTS ``` $ what-about echo -rwxr-xr-x 1 root root 35000 jan 18 2018 /bin/echo ---------------------------------------------------------------------------------- /bin/echo: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=057373f1356c861e0ec5b52c72804c86c6842cd5, stripped ---------------------------------------------------------------------------------- package: /path/program [for command 'echo'] coreutils: /bin/echo ---------------------------------------------------------------------------------- type: echo is a shell builtin ``` ### `echo` in 22.04 LTS ``` $ what-about echo -rwxr-xr-x 1 root root 35120 Feb 7 17:03 /usr/bin/echo ---------------------------------------------------------------------------------- /usr/bin/echo: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=b56baf290970e98b8056b1a8114a4404b8186689, for GNU/Linux 3.2.0, stripped dpkg-query: no path found matching pattern /usr/bin/echo lrwxrwxrwx 1 root root 7 Apr 27 10:02 /bin -> usr/bin lrwxrwxrwx 1 root root 8 Apr 27 10:02 /sbin -> usr/sbin ---------------------------------------------------------------------------------- package: /path/program [for command 'echo'] coreutils: /bin/echo ---------------------------------------------------------------------------------- type: echo is a shell builtin ``` ### `rename` in 18.04.6 LTS Sometimes a command is linked to program, that might be hidden, e.g. 
the version of `rename` that I use, ``` $ what-about rename lrwxrwxrwx 1 root root 24 maj 12 2018 /usr/bin/rename -> /etc/alternatives/rename lrwxrwxrwx 1 root root 20 maj 12 2018 /etc/alternatives/rename -> /usr/bin/file-rename -rwxr-xr-x 1 root root 3085 feb 20 2018 /usr/bin/file-rename ---------------------------------------------------------------------------------- /usr/bin/file-rename: Perl script text executable ---------------------------------------------------------------------------------- package: /path/program [for command 'rename'] rename: /usr/bin/file-rename ---------------------------------------------------------------------------------- type: rename is /usr/bin/rename ``` ### `rm` in 18.04.6 LTS I have an alias for `rm` in order to avoid mistakes, and the alias has priority over the program in `PATH`. You can prefix with backslash, `\rm` to skip the alias and run the program directly. (Please remember that the alias applies only for the specific user, and not for `sudo` and other users, unless they have defined a similar alias.) 
``` $ what-about rm -rwxr-xr-x 1 root root 63704 jan 18 2018 /bin/rm --------------------------------------------------------------------------- /bin/rm: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, uildID[sha1]=864c9bbef111ce358b3452cf7ea457d292ba93f0, stripped --------------------------------------------------------------------------- package: /path/program [for command 'rm'] coreutils: /bin/rm --------------------------------------------------------------------------- alias rm='rm -i' type: rm is /bin/rm ``` ### `firefox` in 18.04.6 LTS ``` $ what-about firefox lrwxrwxrwx 1 root root 25 jun 8 23:23 /usr/bin/firefox -> ../lib/firefox/firefox.sh -rwxr-xr-x 1 root root 2667 jun 8 23:23 ../lib/firefox/firefox.sh ---------------------------------------------------------------------------------- /usr/lib/firefox/firefox.sh: POSIX shell script, ASCII text executable ---------------------------------------------------------------------------------- package: /path/program [for command 'firefox'] firefox: /usr/lib/firefox/firefox.sh ---------------------------------------------------------------------------------- type: firefox is /usr/bin/firefox ``` ### `firefox` in 22.04 LTS ``` $ what-about firefox lrwxrwxrwx 1 root root 13 Jun 14 10:17 /snap/bin/firefox -> /usr/bin/snap -rwxr-xr-x 1 root root 15815640 Apr 21 10:50 /usr/bin/snap ---------------------------------------------------------------------------------- /usr/bin/snap: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=_CKxPPsUzOWnuqfKlk4n/eiKeSmbgVb16CaXm3O1l/-ofceI8uzhaCgwaTtgJt/YygjnGD2ASByLbaEMRR4, stripped ---------------------------------------------------------------------------------- /usr/bin/snap run firefox # run firefox /usr/bin/snap list firefox # list firefox Name Version Rev Tracking Publisher Notes firefox 101.0.1-1 1443 
latest/stable/… mozilla** - ---------------------------------------------------------------------------------- type: firefox is /snap/bin/firefox ```
78,475
I'm trying to connect the phone lines in my new home. I have 2 lines given by my phone provider arriving downstairs in the apartment: [![enter image description here](https://i.stack.imgur.com/Wmgz0.jpg)](https://i.stack.imgur.com/Wmgz0.jpg) Then I have 2 `RJ11` cables plugged into to the connections shown above which send the lines upstairs. However upstairs no connection has been made, I just have the 2 line cables. This is the connection I bought: [![enter image description here](https://i.stack.imgur.com/8B277.jpg)](https://i.stack.imgur.com/8B277.jpg) Now here are my questions: **Q1: Do you confirm that on the above pictures, holes 1-8 at the top are for the left-hand connection, and holes 1-8 at the bottom are for the right-hand connection?** **Q2: Do I need to connect my white/blue wires back into the same holes, or do I need to change their position somehow?** FYI If I connect my modem directly to either of the connections shown in 1st picture, my internet works, so I know those lines are good :)
2015/11/22
[ "https://diy.stackexchange.com/questions/78475", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/45848/" ]
Those are RJ45s, not RJ11s; RJ11s only have 4 pins. An RJ11 fits into the center 4 pins of this jack, though. Q1: Whether one is left or right I can't tell, but the top is definitely for one jack and the bottom is for the other jack. Q2: Leave your white on 5 and blue on 4, like all the diagrams below. You can use 568A and the orange pair for the second line, connected to 3 and 6; that would be the traditional second-line color. The phone doesn't care which colors you use as long as they go to the other end. Hopefully it is obvious that you will not be able to use these jacks for data, even though they will accept a data plug. <https://en.m.wikipedia.org/wiki/T568A/B#Wiring>
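For reference, the pin assignment this answer describes (blue pair on pins 4/5 for line 1, orange pair on pins 3/6 for line 2) can be written out as a small lookup table. The pin-to-color mapping below is my reading of the T568A standard linked above, so treat it as an assumption to verify against the chart:

```python
# T568A pin-to-conductor colors (assumed from the linked wiring standard)
T568A = {
    1: "white/green",  2: "green",
    3: "white/orange", 4: "blue",
    5: "white/blue",   6: "orange",
    7: "white/brown",  8: "brown",
}

# Phone line 1 uses the center pair; line 2 uses the next pair out.
line1_pins = (4, 5)
line2_pins = (3, 6)
line1_colors = [T568A[p] for p in line1_pins]
line2_colors = [T568A[p] for p in line2_pins]
```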
This connector looks like a parallel connection; it is designed for 2 telephones. Usually the DSL service provider provides an ADSL splitter; use the splitter and pull out connections for your phone and your modem. > > Make sure there is no parallel connection before the ADSL splitter, as you will get a poor phone signal as well as poor DSL signal. > > >
98,969
I am looking for expertise that helps to improve speed of the code below. First, a little bit of background: There is some system of differential equations $\dot{\vec{m}}(t)=S(t)\vec{m}(t)$. In order to solve this without using `NDSolve/NDSolveValue` for this one can just proceed as $$\vec{m}(T)=\left(\prod\limits\_{j=N}^1 e^{S(j\,dt)}\right) \cdot\vec{m}(0)$$ where $dt$ is length of one timestep and $N=T/dt$ the number thereof. Obviously, this method involves the matrix exponential of $S$ which can be time consuming, especially since $N$ should be on the order of $10^3$ for my purposes to achieve reasonable accuracy. Let me define the system in Mathematica code (the important variable is `mat` which corresponds to $S(t)$ - the tiny rest is needed to construct a sample matrix that is almost of same dimension and "sparsity" as the ones I am actually dealing with) ``` ClearAll[init, listDiag, listOffDiag, valOffDiag, tmp, mat, col, squ]; col[mat_?MatrixQ] := Flatten[Transpose[mat]]; (* stack columns of a matrix *) squ[list_?VectorQ] := Transpose[ArrayReshape[list, {Sqrt[Length@list], Sqrt[Length@list]}]]; (* transform stacked column form into square matrix again *) (* preliminary definitions of initial condition and `tmp` that is needed to construct `mat` *) init = ConstantArray[0, {36, 36}]; init[[8, 8]] = 1; listOffDiag = {{13, 1}, {13, 7}, {14, 2}, {14, 8}, {15, 3}, {15, 9}, {16, 4}, {16, 10}, {17, 5}, {17, 11}, {18, 6}, {18, 12}, {19, 1}, {19, 7}, {20, 2}, {20, 8}, {21, 3}, {21, 9}, {22, 4}, {22,10}, {23, 5}, {23, 11}, {24, 6}, {24, 12}, {25, 1}, {25, 7}, {26,2}, {26, 8}, {27, 3}, {27, 9}, {28, 4}, {28, 10}, {29, 5}, {29, 11}, {30, 6}, {30, 12}}; listDiag = {{1, 1}, {2, 2}, {3, 3}, {4, 4}, {5, 5}, {6, 6}, {7, 7}, {9, 9}, {10, 10}, {11, 11}, {13, 13}, {14, 14}, {15, 15}, {16,16}, {17, 17}, {18, 18}, {19, 19}, {20, 20}, {21, 21}, {22, 22}, {23, 23}, {24, 24}, {25, 25}, {26, 26}, {27, 27}, {28, 28}, {29, 29}, {30, 30}, {31, 31}, {33, 33}, {34, 34}, {35, 35}}; 
valOffDiag = t*RandomReal[{-5, 5}, Length@listOffDiag]; tmp = SparseArray[Join[Thread[Rule[listDiag, RandomReal[{-100, 100},Length@listDiag]]],Thread[Rule[Table[{30 + i, 30 + i}, {i, 1, 6}],ConstantArray[0, 6]]]]] + SparseArray[Join[Thread[Rule[listOffDiag,valOffDiag]], {{36, 36} -> 0}]] + Transpose@SparseArray[Join[Thread[Rule[listOffDiag, valOffDiag]],{{36, 36}->0}]]; mat = KroneckerProduct[tmp, IdentityMatrix[36]] + KroneckerProduct[IdentityMatrix[36], Transpose@tmp]; ``` Now here are my two approaches. Observing `AbsoluteTiming` of ``` MatrixExp[-I*SparseArray[ArrayRules[mat] /. t -> 3,Dimensions[mat]]].col[init]; // AbsoluteTiming MatrixExp[-I*SparseArray[ArrayRules[mat] /. t -> 3, Dimensions[mat]],col[init]]; // AbsoluteTiming ``` yields an order of magnitude improvement of the latter over the former (0.254 vs 0.027) on my machine. So instead of first computing the matrix product surrounded by the parentheses in the system of ODEs above, it should be faster to use `MatrixExp[matrix,vector]` sequentially. ``` evol1[mat_, initial_, ti_, tf_] := Module[ {dt = (tf - ti)/10, res, d = Dimensions[mat][[1]]}, res = Prepend[Table[MatrixExp[-I*SparseArray[ArrayRules[mat]/.t->i, Dimensions[mat]]], {i,ti, tf, dt}], col[initial]]; Return[squ[Apply[Dot, Reverse[res]]]]]; evol2[mat_, initial_, ti_, tf_] := Module[ {dt = (tf - ti)/10, res = col[initial]}, Do[res = MatrixExp[-I*SparseArray[ArrayRules[mat]/.t->i,Dimensions[mat]],res], {i, ti, tf, dt}]; Return[squ[res]]]; ``` `evol1` is the straightforward method that computes the matrix product of all matrix exponentials and then applies it to the initial vector. `evol2` makes use of `MatrixExp[matrix,vector]`. 
Comparing speed and results: ``` res1 = evol1[mat, init, 0.01, 10]; // AbsoluteTiming (* {2.634993, Null} *) res2 = evol2[mat, init, 0.01, 10]; // AbsoluteTiming (* {0.302688, Null} *) Chop[res1 - res2] == ConstantArray[0, {36, 36}] (* True *) ``` I feel like the `Do` in `evol2` is anything but efficient, but I have no idea how to replace it. Is there any possibility to increase speed, not caring about memory usage? **Edit** I am sorry for the initial confusion arising from a copy-paste error that affected the results.
2015/11/09
[ "https://mathematica.stackexchange.com/questions/98969", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/21606/" ]
Your approach is certainly faster than using [`NDSolve`](http://reference.wolfram.com/language/ref/NDSolve), but I think there is a mistake in the implementation, and the accuracy leaves much to be desired. Your ODE is: $$m'(t) = -i S(t) . m(t)$$ This ODE can be discretized as follows: $$m'(t) \Delta t \approx m(t + \Delta t) - m(t) \ \approx - i S(t) . m(t) \Delta t$$ which can be rewritten as: $$m(t + \Delta t)\approx m(t)-i S(t) \Delta t\, .m(t)$$ or $$m(t+\Delta t)\approx (I-i S(t) \Delta t) \, .m(t)$$ Here $I$ is the identity matrix. Using the exponential to approximate the RHS, we obtain: $$m(t+\Delta t)\approx e^{-i S(t) \Delta t}.m(t)$$ which differs from yours with the presence of $\Delta t$ in the exponent. I will rewrite your `evol2` function to include this $\Delta t$ term, and to make it a function argument (instead of always using $(t\_f-t\_i)/10$). Also, I won't bother reshaping the vector into a matrix: ``` evol[mat_, initial_, {t_, ti_, tf_, dt_}] := Module[{res=col[initial]}, Do[res = MatrixExp[-I mat dt, res], {t, ti, tf, dt}]; res ] ``` Now, let's solve your ODE using [`NDSolveValue`](http://reference.wolfram.com/language/ref/NDSolveValue): ``` res = NDSolveValue[{m'[t] == -I mat . m[t], m[0] == col[init]}, m, {t, 0, 10}]; //AbsoluteTiming ``` > > {10.126, Null} > > > Let's compare the two approaches at time $t=1$: ``` discrete1 = evol[mat, init, {t, 0, 1, .1}]; //AbsoluteTiming discrete2 = evol[mat, init, {t, 0, 1, .01}]; //AbsoluteTiming discrete3 = evol[mat, init, {t, 0, 1, .001}]; //AbsoluteTiming MinMax @ ReIm[res[1] - discrete1] MinMax @ ReIm[res[1] - discrete2] MinMax @ ReIm[res[1] - discrete3] ``` > > {0.042896, Null} > > > {0.336994, Null} > > > {3.2513, Null} > > > {-0.0549661, 0.11708} > > > {-0.00586709, 0.0119637} > > > {-0.000589242, 0.00119601} > > > As expected, the discrete approach timing increases linearly with the number of steps, and the accuracy also increases linearly. 
However, even with a $\Delta t$ of .001 the error is rather large. The situation is much worse at time $t=10$: ``` discrete1 = evol[mat, init, {t, 0, 10, .1}]; //AbsoluteTiming discrete2 = evol[mat, init, {t, 0, 10, .01}]; //AbsoluteTiming discrete3 = evol[mat, init, {t, 0, 10, .001}]; //AbsoluteTiming MinMax @ ReIm[res[10] - discrete1] MinMax @ ReIm[res[10] - discrete2] MinMax @ ReIm[res[10] - discrete3] ``` > > {0.347939, Null} > > > {3.38722, Null} > > > {34.2144, Null} > > > {-0.273781, 0.294955} > > > {-0.0395226, 0.0314886} > > > {-0.00394035, 0.0032858} > > > I think it's much better to just use [`NDSolve`](http://reference.wolfram.com/language/ref/NDSolve) and rely on the error control that is built-in to the function. *Timing update* I forgot to mention that my version of `evol` is essentially the same as @sebhofer's *(my previous edit erroneously asserted that `evol` was faster)*: ``` r1 = squ @ evol[mat, init, {t, 0, 10, 1}]; //RepeatedTiming r2 = evol2[mat,init, 0, 10]; //RepeatedTiming r3 = evol3[mat, init, 0, 10]; //RepeatedTiming r1 === r2 === r3 ``` > > {0.037, Null} > > > {0.16, Null} > > > {0.038, Null} > > > True > > >
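The first-order behaviour described above (error shrinking roughly linearly with $\Delta t$) is easy to check with a tiny script outside Mathematica. This is only a sketch: it uses a *scalar* stand-in problem $m'(t) = -\,i\,t\,m(t)$ (an assumption chosen because its exact solution $e^{-it^2/2}$ is known in closed form), stepped with the same left-endpoint exponential rule $m(t+\Delta t)\approx e^{-iS(t)\Delta t}m(t)$:

```python
import cmath

def exact(t):
    # Exact solution of m'(t) = -i*t*m(t) with m(0) = 1.
    return cmath.exp(-1j * t * t / 2)

def exp_step(t_final, dt):
    # Left-endpoint exponential stepping: m <- exp(-i*s(t)*dt) * m, with s(t) = t.
    m, t = 1.0 + 0j, 0.0
    while t < t_final - 1e-12:
        m *= cmath.exp(-1j * t * dt)
        t += dt
    return m

err_coarse = abs(exp_step(1.0, 0.1) - exact(1.0))
err_fine = abs(exp_step(1.0, 0.01) - exact(1.0))
# First-order scheme: shrinking dt tenfold shrinks the error roughly tenfold.
print(err_coarse, err_fine)
```

Running this shows the error at $t=1$ dropping by about a factor of ten when $\Delta t$ goes from $0.1$ to $0.01$, matching the linear accuracy observed in the timings above.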
Let me at least show you how to avoid the `Do` loop while I think about other possible improvements. The idiomatic way is to use `Fold`. Here is my version: ``` evol3[mat_, initial_, ti_, tf_] := Module[{dt = (tf - ti)/10, res = col[initial], ar = ArrayRules[mat]}, squ[Fold[ MatrixExp[-I*SparseArray[ar /. t -> #2, Dimensions[mat]], #1] &, res, Range[ti, tf, dt]]] ]; ``` Notice that I don't compute the `ArrayRules` every time, since it can be done once. But this still only gives a minor improvement over your `evol2`, from 0.23 to 0.21 on my machine.
69,347,386
Suppose I have this series of numbers in a vector: ``` vec <- c(1,2,3,4,5) # just an example, numbers could be far higher ``` How can I programmatically divide these numbers into equally-spaced intervals ranging between `0-1`, such that I get: for * 1: `0` * 2: `0, 1` * 3: `0, 0.5, 1` * 4: `0, 0.33, 0.66, 1` * 5: `0, 0.25, 0.50, 0.75, 1` * and so on. Any idea?
2021/09/27
[ "https://Stackoverflow.com/questions/69347386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8039978/" ]
We can use *seq* with *length.out* argument: ``` lapply(1:5, function(i) seq(0, 1, length.out = i)) # [[1]] # [1] 0 # # [[2]] # [1] 0 1 # # [[3]] # [1] 0.0 0.5 1.0 # # [[4]] # [1] 0.0000000 0.3333333 0.6666667 1.0000000 # # [[5]] # [1] 0.00 0.25 0.50 0.75 1.00 ``` or *mapply*: ``` mapply(seq, from = 0, to = 1, length.out = 1:5) ```
Using `map` ``` library(purrr) map(1:5, ~ seq(0, 1, length.out = .x)) ``` -output ``` [[1]] [1] 0 [[2]] [1] 0 1 [[3]] [1] 0.0 0.5 1.0 [[4]] [1] 0.0000000 0.3333333 0.6666667 1.0000000 [[5]] [1] 0.00 0.25 0.50 0.75 1.00 ```
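The same equally-spaced construction translates directly to other languages; here is a minimal Python sketch (the helper name `intervals` is made up) that mirrors what `seq(0, 1, length.out = n)` returns, including the single-point edge case:

```python
def intervals(n):
    # n equally spaced points from 0 to 1 inclusive; a single point is just 0.
    if n == 1:
        return [0.0]
    return [i / (n - 1) for i in range(n)]

for n in range(1, 6):
    print(n, intervals(n))
```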
5,646,954
I have a `winform` called ***Form1*** and a `textbox` called ***textBox1***. In ***Form1*** I can set the text by typing: ``` textBox1.Text = "change text"; ``` Now I have created another class. How do I reach ***textBox1*** from this class? I want to change the text of ***textBox1*** from that class. How can I access ***Form1*** from this new class?
2011/04/13
[ "https://Stackoverflow.com/questions/5646954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692957/" ]
You can change the access modifier for the generated field in `Form1.Designer.cs` from `private` to `public`. Change this ``` private System.Windows.Forms.TextBox textBox1; ``` to this ``` public System.Windows.Forms.TextBox textBox1; ``` You can now handle it using a reference of the form `Form1.textBox1`. Visual Studio will not overwrite this if you make any changes to the control properties, unless you delete the control and recreate it. You can also change it from the UI if you are not comfortable with editing code directly. Look for the Modifiers property: [![Modifiers](https://i.stack.imgur.com/LVhn1.png)](https://i.stack.imgur.com/LVhn1.png)
I found an easy way to do this. I've tested it and it works properly. First I created a Windows project; on the form I inserted a TextBox named textBox1, then a button named button1, and then added a class named class1. In class1 I created a TextBox: ``` class class1 { public static TextBox txt1 = new TextBox(); // a global textbox to interface with form1 public static void Hello() { txt1.Text = "Hello"; } } ``` Now in your form do this: ``` public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { class1.txt1 = textBox1; class1.Hello(); } } ``` In button1\_Click I copied the object textBox1 into txt1, so now txt1 refers to the same control as textBox1 and you can change textBox1's text from another form or class.
9,985,971
We are developing an application with the following characteristics: * standalone Spring 3.1.0.RELEASE application * JPA with Hibernate 3.6.7.Final * AMQP (RabbitMQ as server, Spring AMQP as client) * SQL Server 2008 with jTDS driver We need to synchronize transactions between RabbitMQ and SQL Server, so we are trying to set up XA with Atomikos. The problem is that we can't find a single Spring sample configuration file for that situation that actually works. We tried many combinations using samples from the Spring documentation, Google, forums, etc. Something is always missing. Could someone please provide such a file so we can use it as a starting template for our configuration? P.S. Do we really need XA? Thanks.
2012/04/03
[ "https://Stackoverflow.com/questions/9985971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3221/" ]
After digging through the various candidate solutions (use of [Change Data Capture](http://msdn.microsoft.com/en-us/library/bb522489.aspx), adapters for [SQL Server Service Broker](http://msdn.microsoft.com/en-us/library/ms345108%28v=sql.90%29.aspx), and possible conversions from [MSMQ](http://en.wikipedia.org/wiki/Microsoft_Message_Queuing) to RabbitMQ), I came to a solution that should work for you. It turns out you can bind [WCF](http://msdn.microsoft.com/en-us/netframework/aa663324) clients and service contracts to SQL Server. From here, you can use [RabbitMQ's AMQP binding for WCF](http://www.rabbitmq.com/releases/rabbitmq-dotnet-client/v1.5.1/rabbitmq-dotnet-client-1.5.1-wcf-service-model.pdf) to translate messages back and forth, as the aforelinked PDF documents in great detail. I'm unclear on whether an additional transaction manager like XA will be required here, but if you suffer problems with transaction duplication, loops, and mismessaging, it's worth investigating as an intermediary for the connection. If you go that route, you'd need to weld that into your gateway in the same way you've defined in the problem statement. If this needs clarification, I'd be happy to expand on this here. Best of luck with your application. It sounds like an integration hydra from the outside, but once you get everything talking together, it should work nicely.
As far as I know, RabbitMQ *does not* support XA-style transactions. And yes, you can do it in Java: **TestConfiguration.java** ``` import org.springframework.amqp.core.Queue; import org.springframework.amqp.rabbit.config.AbstractRabbitConfiguration; import org.springframework.amqp.rabbit.connection.ConnectionFactory; import org.springframework.amqp.rabbit.connection.SingleConnectionFactory; import org.springframework.amqp.rabbit.core.RabbitTemplate; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class TestConfiguration extends AbstractRabbitConfiguration { private String routingKey = "test.queue"; private String testQueueName = "test.queue"; public ConnectionFactory getConnectionFactory() { SingleConnectionFactory connectionFactory = new SingleConnectionFactory("localhost"); connectionFactory.setUsername("guest"); connectionFactory.setPassword("guest"); return connectionFactory; } @Override public RabbitTemplate rabbitTemplate() { RabbitTemplate rabbitTemplate = new RabbitTemplate(getConnectionFactory()); rabbitTemplate.setRoutingKey(routingKey); rabbitTemplate.setQueue(testQueueName); return rabbitTemplate; } } ``` **Simple example of send:** ``` import org.springframework.amqp.core.AmqpTemplate; import org.springframework.beans.factory.annotation.Autowired; public class MessageSender { @Autowired private AmqpTemplate template; public void send(String text) { template.convertAndSend(text); } } ``` **..and receive:** ``` import org.springframework.amqp.core.Message; import org.springframework.amqp.core.MessageListener; public class MessageHandler implements MessageListener { @Override public void onMessage(Message message) { System.out.println("Received message: " + message); } } ``` * Here are some references + [Spring AMQP - Reference Documentation](http://static.springsource.org/spring-amqp/docs/1.0.x/reference/html/#d0e692) + [Using RabbitMQ, Spring AMQP and Spring Integration](http://www.digitalsanctum.com/2010/08/31/using-rabbitmq,-spring-amqp-and-spring-integration/) + [Spring Framework AMQP RabbitMQ Connection](http://static.springsource.org/spring-amqp/docs/1.0.x/apidocs/org/springframework/amqp/rabbit/connection/ConnectionFactoryUtils.ResourceFactory.html) + [Blog](http://krams915.blogspot.com.es/search/label/RabbitMQ) (detailed descriptions of the configuration needed in Java, in this case for application event management)
634,610
How can I download or turn on my *stereo mix* feature in Windows 7? In music chat rooms I can't play music. When I had Windows XP, I had the *stereo mix* option in Audio settings. I can't seem to find it here.
2013/08/21
[ "https://superuser.com/questions/634610", "https://superuser.com", "https://superuser.com/users/247227/" ]
As of April 2017 and the new Skype for Windows 10 (Skype UWP - Universal Windows Platform), the `main.db` file has moved to ``` %localappdata%\Packages\Microsoft.SkypeApp_kzf8qxf38zg5c\LocalState\<SkypeUsername>\skype.db ``` with `%localappdata%` being `C:\Users\<username>\AppData\Local` Source: <https://answers.microsoft.com/en-us/skype/forum/skype_win10-skype_startms/skype-app-data-folder-missing-directs-me-to/e4ecb6ca-f64d-4983-b768-727237638fb2>
Normally it is stored in a `main.db` file. On Windows, the default path for Windows XP is: ``` C:\Documents and Settings\<windows user>\Application Data\Skype\<WindowsUsername> ``` and the default path for Windows 7+ is: ``` C:\Users\<username>\AppData\Roaming\Skype\live#<SkypeLiveUsername> ``` 1. Close Skype 2. Navigate to the Run command 3. Type `%appdata%\skype` 4. Navigate to your user name (Skype username) 5. You will find all the info in the `main.db` file Hope it helps
66,757
I have a fairly simple python program that takes in some csv style text files containing genomic data (these files are fairly lightweight: <10 megabytes) processes / compares them and then produces some output text files. Currently this program has a command line interface suitable for my own use. However, I want this app to become usable by wider community of biologists who may not be comfortable with command line. Moreover, the current CLI is cumbersome due to the large number of parameters required to perform the task. What I want to do is develop a functional GUI. However, there are some major caveats. 1) I want to develop web/browser based UI. Why? i) Overwhelming majority of bioinformatics tools are web based. Biologists are comfortable with such interfaces. ii) Many (Python) GUI frameworks create somewhat inelegant interfaces (ex: Tkinter), while others have restrictive licenses (PyQt). iii) In the future when we are ready to address the security concerns we will want to host our application on a server. Security is important because the app will often deal with HIPAA protected data. 2) I would prefer a python based framework because that is the language that I most comfortable with and I need to develop this interface within 3 weeks. So far the only Python based solution I have found is Flask + Pywebview (confirm?). I am also aware of Electron and NW.js. However, since I am not an experienced web/javascript developer I would prefer to stick with Python. Are there any frameworks suitable for this problem that I have not mentioned above? Is Flask + Pywebview a viable framework for developing a desktop application GUI using web based technologies? Thank you!
2019/08/22
[ "https://softwarerecs.stackexchange.com/questions/66757", "https://softwarerecs.stackexchange.com", "https://softwarerecs.stackexchange.com/users/58506/" ]
The project [argparseweb](https://github.com/nirizr/argparseweb) might be a good starting point. It aims to automatically convert argparse-like command line interfaces to simple web interfaces. This package was made for getting your personal command line scripts to the next stage - internal shared utilities. *I haven't personally tried this out!*
I would use Django for a simple web GUI which serves at <http://localhost:8000>. Maybe you don't need all the features that Django offers (ORM, database migrations, ...), but Django is a solid framework, which will help you if new features are needed in the future.
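Whichever framework you choose, the core idea — a local web form in front of your existing analysis function — is small. Here is a minimal sketch using only Python's standard-library WSGI tools; the form fields (`csv_path`, `threshold`) are invented placeholders for the program's real parameters, and a real app would also parse the POST body and invoke the existing analysis code:

```python
from wsgiref.simple_server import make_server

# A bare-bones parameter form; field names are illustrative only.
FORM = b"""<html><body>
<form method="post" action="/run">
  Input file: <input name="csv_path"><br>
  Threshold: <input name="threshold" value="0.05"><br>
  <input type="submit" value="Run analysis">
</form></body></html>"""

def app(environ, start_response):
    # Serve the parameter form; a real tool would dispatch on
    # environ["PATH_INFO"] and call the analysis function for /run.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [FORM]

def serve(port=8000):
    # Call serve() and browse to http://localhost:8000 to use the tool.
    make_server("localhost", port, app).serve_forever()
```

This is not a recommendation over Flask or Django — just a demonstration that a local browser UI needs no third-party dependencies at all.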
145,351
I have DKIM set up for one domain and it works properly. I am trying, though, to set it up for another domain name on the same machine. In short, I am sending registration and password recovery emails from one domain and everything else from the other domain. Both domains map to the same host. Setting up domainkeys in this manner was no problem - adding another "DAEMON\_OPTS=" line in /etc/default/dk-filter was the solution. This is not the case for DKIM, though, since it stores this information in a configuration file (/etc/dkim-filter.conf) that is formatted in a completely different manner. Any help would be most appreciated. Thanks.
2010/05/26
[ "https://serverfault.com/questions/145351", "https://serverfault.com", "https://serverfault.com/users/15759/" ]
Take a look at this person's question and answer: [DKIM sign outgoing mail from any domain (with Postfix and Ubuntu)](https://serverfault.com/questions/52830/dkim-sign-outgoing-mail-from-any-domain-with-postfix-and-ubuntu) You may also need to read this (especially if you're on Ubuntu): <https://help.ubuntu.com/community/Postfix/DKIM> You'll still need to poke around a bit to see what configurations are relevant to you, but hopefully this will put you on the right track.
For Centos/RHEL take a look at this <http://www.topdog.za.net/postfix_dkim_milter>
48,163,779
I would like to insert a new data array into the UITableView without using the reloadData() method. I created simple code to do that, referring to [this discussion](https://stackoverflow.com/questions/31870206/how-to-insert-new-cell-into-uitableview-in-swift). However, an error occurs. I investigated why, but I could not find the cause. My source code is here. ``` class ViewController: UIViewController, UITableViewDelegate, UITableViewDataSource { @IBOutlet weak var tableView: UITableView! var objects = [ "Apple", "Orange" ] let insertObjects = [ "Banana", "Grape" ] override func viewDidLoad() { super.viewDidLoad() self.tableView.delegate = self self.tableView.dataSource = self } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() } @IBAction func insert(_ sender: Any) { self.objects = self.insertObjects + self.objects tableView.beginUpdates() let indexPath = IndexPath(row: 0, section: 0) tableView.insertRows(at: [indexPath], with: .automatic) tableView.endUpdates() } func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return self.objects.count } func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = self.tableView.dequeueReusableCell(withIdentifier: "TableViewCell", for: indexPath) as! TableViewCell cell.label.text = self.objects[indexPath.row] return cell } } ``` And the error message is here. ``` *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (4) must be equal to the number of rows contained in that section before the update (2), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted) and plus or minus the number of rows moved into or out of that section (0 moved in, 0 moved out).' *** First throw call stack: ``` Could you give me your advice?
2018/01/09
[ "https://Stackoverflow.com/questions/48163779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6458354/" ]
Doing *most* (and the hardest part) of what you want can easily be done in MongoDB. The final step when returning "basic", "premium" or "standard" most likely can also be done, but I think it's not worth the hassle as that is trivial in Go. In MongoDB use the [Aggregation framework](https://docs.mongodb.com/manual/aggregation/) for this. This is available in the `mgo` package via the [`Collection.Pipe()`](https://godoc.org/gopkg.in/mgo.v2#Collection.Pipe) method. You have to pass a slice to it, each element corresponds to an aggregation stage. Read this answer for more details: [How to Get an Aggregate from a MongoDB Collection](https://stackoverflow.com/questions/47379455/how-to-get-an-aggregate-from-a-mongodb-collection/47388623#47388623) Back to your example. Your `GetEventLevel()` method could be implemented like this: ``` func (dao *campaignDAO) GetEventLevel(eventID string) (string, error) { c := sess.DB("").C("eventboosts") // sess represents a MongoDB Session now := time.Now() pipe := c.Pipe([]bson.M{ { "$match": bson.M{ "_event_id": eventID, // Boost for the specific event "is_published": true, // Boost is active "start_date": bson.M{"$lt": now}, // now is between start and end "end_date": bson.M{"$gt": now}, // now is between start and end }, }, { "$lookup": bson.M{ "from": "campaigns", "localField": "_campaign_id", "foreignField": "_id", "as": "campaign", }, }, {"$unwind": "$campaign"}, { "$match": bson.M{ "campaign.is_published": true, // Attached campaign is active }, }, }) var result []*EventBoost if err := pipe.All(&result); err != nil { return "", err } if len(result) == 0 { return "standard", nil } return result[0].Level, nil } ``` If you only need at most one `EventBoost` (or there may not be more at the same time), use `$limit` stage to limit results to a single one, and use `$project` to only fetch the `level` field and nothing more. 
Use this pipeline for the above mentioned simplification / optimization: ``` pipe := c.Pipe([]bson.M{ { "$match": bson.M{ "_event_id": eventID, // Boost for the specific event "is_published": true, // Boost is active "start_date": bson.M{"$lt": now}, // now is between start and end "end_date": bson.M{"$gt": now}, // now is between start and end }, }, { "$lookup": bson.M{ "from": "campaigns", "localField": "_campaign_id", "foreignField": "_id", "as": "campaign", }, }, {"$unwind": "$campaign"}, { "$match": bson.M{ "campaign.is_published": true, // Attached campaign is active }, }, {"$limit": 1}, // Fetch at most 1 result { "$project": bson.M{ "_id": 0, // We don't even need the EventBoost's ID "level": "$level", // We do need the level and nothing more }, }, }) ```
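For readers less familiar with aggregation pipelines, the same match → lookup → match → limit flow can be mimicked over plain in-memory dicts. This Python sketch borrows the field names from the answer but uses entirely invented sample data; it only illustrates the pipeline's semantics, not mgo itself:

```python
from datetime import datetime

# Invented sample data mirroring the "campaigns" and "eventboosts" collections.
campaigns = {
    "c1": {"is_published": True},
    "c2": {"is_published": False},
}
event_boosts = [
    {"_event_id": "e1", "_campaign_id": "c1", "is_published": True,
     "start_date": datetime(2018, 1, 1), "end_date": datetime(2018, 12, 31),
     "level": "premium"},
]

def get_event_level(event_id, now):
    for boost in event_boosts:                              # first $match stage
        if (boost["_event_id"] == event_id
                and boost["is_published"]
                and boost["start_date"] < now < boost["end_date"]):
            campaign = campaigns.get(boost["_campaign_id"])  # $lookup + $unwind
            if campaign and campaign["is_published"]:        # second $match
                return boost["level"]                        # $limit 1: first hit wins
    return "standard"

print(get_event_level("e1", datetime(2018, 6, 1)))
print(get_event_level("e2", datetime(2018, 6, 1)))
```

The difference, of course, is that the real pipeline runs server-side in MongoDB instead of fetching every document into the application.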
Since you're only storing IDs to reference documents in other collections, rather than de-normalizing the data entirely, no, you cannot do this purely in MongoDB. MongoDB is not a relational database. What you're describing is exactly what MongoDB was designed *not* to do. You'll need to perform the logic in Go; whether you do so in the DAO is up to you, but personally, I would lean toward a simple method on the type that does the logic on the fly based on the field values, e.g. `Campaign.GetEventLevel` or something along those lines. Having a method that fine-grained in your DAO implies to me some unusual design decisions that don't suit the MongoDB model very well. For the most part, with MongoDB, you want to retrieve your documents (objects) and work with them in your app. Trying to perform query logic in MongoDB as you would with a typical RDBMS is going to lead to frustration and poor performance.
38,139,579
Having created a new ASP.Net Core Web Application (.Net Core) project in Visual Studio 2015 Update 3 and checked this in to a Visual Studio Team Services git repository. I am using the release version of .Net Core and ASP.Net Core. I am now trying to build this using the VSTS Build system using a build definition based on this guide: <https://www.visualstudio.com/en-us/docs/build/apps/aspnet/aspnetcore-to-azure> However, the build fails on the `dotnet restore` step with the log shown below (first part not included, this is from where it starts to fail). What am I doing wrong? Do I need to prep the current Hosted Agents in any way to get this working with the v1.0.0 release of ASP.Net Core? ``` 2016-07-01T06:58:23.7437947Z log : Restoring packages for tool 'BundlerMinifier.Core' in C:\a\1\s\src\FjordTours.BasicApp\project.json... 2016-07-01T06:58:23.7457953Z info : GET https://api.nuget.org/v3-flatcontainer/bundlerminifier.core/index.json 2016-07-01T06:58:24.2167463Z info : OK https://api.nuget.org/v3-flatcontainer/bundlerminifier.core/index.json 469ms 2016-07-01T06:58:24.2188952Z info : GET https://api.nuget.org/v3-flatcontainer/bundlerminifier.core/2.0.238/bundlerminifier.core.2.0.238.nupkg 2016-07-01T06:58:24.4460947Z info : OK https://api.nuget.org/v3-flatcontainer/bundlerminifier.core/2.0.238/bundlerminifier.core.2.0.238.nupkg 226ms 2016-07-01T06:58:24.4500936Z info : GET https://api.nuget.org/v3-flatcontainer/nuglify/index.json 2016-07-01T06:58:24.7169172Z info : OK https://api.nuget.org/v3-flatcontainer/nuglify/index.json 266ms 2016-07-01T06:58:24.7184197Z info : GET https://api.nuget.org/v3-flatcontainer/nuglify/1.5.0/nuglify.1.5.0.nupkg 2016-07-01T06:58:24.9458237Z info : OK https://api.nuget.org/v3-flatcontainer/nuglify/1.5.0/nuglify.1.5.0.nupkg 226ms 2016-07-01T06:58:25.0165432Z log : Installing NUglify 1.5.0. 2016-07-01T06:58:25.0175433Z log : Installing BundlerMinifier.Core 2.0.238. 
2016-07-01T06:58:26.3365438Z log : Restoring packages for tool 'Microsoft.AspNetCore.Razor.Tools' in C:\a\1\s\src\FjordTours.BasicApp\project.json... 2016-07-01T06:58:27.5785417Z error: Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 supports: netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:27.5815442Z error: Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 supports: 2016-07-01T06:58:27.5815442Z error: - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:27.5828217Z error: - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:27.5835753Z error: One or more packages are incompatible with .NETCoreApp,Version=v1.0. 2016-07-01T06:58:27.5845450Z log : Restoring packages for tool 'Microsoft.AspNetCore.Server.IISIntegration.Tools' in C:\a\1\s\src\FjordTours.BasicApp\project.json... 
2016-07-01T06:58:27.5855447Z info : GET https://api.nuget.org/v3-flatcontainer/microsoft.aspnetcore.server.iisintegration.tools/index.json 2016-07-01T06:58:27.8429648Z info : OK https://api.nuget.org/v3-flatcontainer/microsoft.aspnetcore.server.iisintegration.tools/index.json 257ms 2016-07-01T06:58:27.8452904Z info : GET https://api.nuget.org/v3-flatcontainer/microsoft.aspnetcore.server.iisintegration.tools/1.0.0-preview2-final/microsoft.aspnetcore.server.iisintegration.tools.1.0.0-preview2-final.nupkg 2016-07-01T06:58:28.0722938Z info : OK https://api.nuget.org/v3-flatcontainer/microsoft.aspnetcore.server.iisintegration.tools/1.0.0-preview2-final/microsoft.aspnetcore.server.iisintegration.tools.1.0.0-preview2-final.nupkg 226ms 2016-07-01T06:58:28.1392401Z info : GET https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.projectmodel/index.json 2016-07-01T06:58:28.4376727Z info : OK https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.projectmodel/index.json 298ms 2016-07-01T06:58:28.4406725Z info : GET https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.projectmodel/1.0.0-rc3-003121/microsoft.dotnet.projectmodel.1.0.0-rc3-003121.nupkg 2016-07-01T06:58:28.6676393Z info : OK https://api.nuget.org/v3-flatcontainer/microsoft.dotnet.projectmodel/1.0.0-rc3-003121/microsoft.dotnet.projectmodel.1.0.0-rc3-003121.nupkg 227ms 2016-07-01T06:58:28.6946386Z log : Installing Microsoft.DotNet.ProjectModel 1.0.0-rc3-003121. 2016-07-01T06:58:28.6956390Z log : Installing Microsoft.AspNetCore.Server.IISIntegration.Tools 1.0.0-preview2-final. 2016-07-01T06:58:32.3480344Z error: Package Microsoft.DotNet.ProjectModel 1.0.0-rc3-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). 
Package Microsoft.DotNet.ProjectModel 1.0.0-rc3-003121 supports: 2016-07-01T06:58:32.3500341Z error: - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.3500341Z error: - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.3510431Z error: One or more packages are incompatible with .NETCoreApp,Version=v1.0. 2016-07-01T06:58:32.7120347Z error: Package Microsoft.AspNetCore.Mvc 1.0.0 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.AspNetCore.Mvc 1.0.0 supports: 2016-07-01T06:58:32.7130338Z error: - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.7140347Z error: - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.7140347Z error: Package Microsoft.ApplicationInsights.AspNetCore 1.0.0 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.ApplicationInsights.AspNetCore 1.0.0 supports: 2016-07-01T06:58:32.7140347Z error: - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.7150344Z error: - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.7150344Z error: Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 supports: netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.7160344Z error: Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 supports: 2016-07-01T06:58:32.7160344Z error: - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.7160344Z error: - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.7174190Z error: One or more packages are incompatible with .NETCoreApp,Version=v1.0. 2016-07-01T06:58:32.7174190Z info : Committing restore... 2016-07-01T06:58:32.7210345Z log : Writing lock file to disk. 
Path: C:\a\1\s\src\FjordTours.BasicApp\project.lock.json 2016-07-01T06:58:32.9154998Z log : C:\a\1\s\src\FjordTours.BasicApp\project.json 2016-07-01T06:58:32.9174999Z log : Restore failed in 44621ms. 2016-07-01T06:58:32.9274991Z NuGet Config files used: 2016-07-01T06:58:32.9294993Z Errors in C:\a\1\s\src\FjordTours.BasicApp\project.json 2016-07-01T06:58:32.9294993Z Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 supports: netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9305008Z Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 supports: 2016-07-01T06:58:32.9305008Z - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.9315007Z - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9315007Z One or more packages are incompatible with .NETCoreApp,Version=v1.0. 2016-07-01T06:58:32.9325003Z Package Microsoft.DotNet.ProjectModel 1.0.0-rc3-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.ProjectModel 1.0.0-rc3-003121 supports: 2016-07-01T06:58:32.9325003Z - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.9325003Z - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9335002Z One or more packages are incompatible with .NETCoreApp,Version=v1.0. 2016-07-01T06:58:32.9335002Z Package Microsoft.AspNetCore.Mvc 1.0.0 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). 
Package Microsoft.AspNetCore.Mvc 1.0.0 supports: 2016-07-01T06:58:32.9345003Z - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.9345003Z - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9355002Z Package Microsoft.ApplicationInsights.AspNetCore 1.0.0 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.ApplicationInsights.AspNetCore 1.0.0 supports: 2016-07-01T06:58:32.9355002Z - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.9355002Z - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9365007Z Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.ProjectModel.Loader 1.0.0-preview2-003121 supports: netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9365007Z Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.DotNet.Cli.Utils 1.0.0-preview2-003121 supports: 2016-07-01T06:58:32.9374999Z - net451 (.NETFramework,Version=v4.5.1) 2016-07-01T06:58:32.9374999Z - netstandard1.6 (.NETStandard,Version=v1.6) 2016-07-01T06:58:32.9385006Z One or more packages are incompatible with .NETCoreApp,Version=v1.0. 
2016-07-01T06:58:32.9385006Z C:\Users\buildguest\AppData\Roaming\NuGet\NuGet.Config 2016-07-01T06:58:32.9385006Z C:\ProgramData\nuget\Config\Microsoft.VisualStudio.Offline.config 2016-07-01T06:58:32.9395005Z Feeds used: 2016-07-01T06:58:32.9395005Z https://api.nuget.org/v3/index.json 2016-07-01T06:58:32.9405006Z C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\ 2016-07-01T06:58:32.9514998Z Installed: 2016-07-01T06:58:32.9525024Z 159 package(s) to C:\a\1\s\src\FjordTours.BasicApp\project.json ``` The global.json (as created by the ASP.Net Core Web Application (.Net Core) template): ``` { "projects": [ "src", "test" ], "sdk": { "version": "1.0.0-preview2-003121" } } ``` The project.json (as created by the ASP.Net Core Web Application (.Net Core) template): ``` { "dependencies": { "Microsoft.NETCore.App": { "version": "1.0.0", "type": "platform" }, "Microsoft.AspNetCore.Diagnostics": "1.0.0", "Microsoft.AspNetCore.Mvc": "1.0.0", "Microsoft.AspNetCore.Razor.Tools": { "version": "1.0.0-preview2-final", "type": "build" }, "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0", "Microsoft.AspNetCore.Server.Kestrel": "1.0.0", "Microsoft.AspNetCore.StaticFiles": "1.0.0", "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0", "Microsoft.Extensions.Configuration.Json": "1.0.0", "Microsoft.Extensions.Logging": "1.0.0", "Microsoft.Extensions.Logging.Console": "1.0.0", "Microsoft.Extensions.Logging.Debug": "1.0.0", "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0", "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0" }, "tools": { "BundlerMinifier.Core": "2.0.238", "Microsoft.AspNetCore.Razor.Tools": "1.0.0-preview2-final", "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final" }, "frameworks": { "netcoreapp1.0": { "imports": [ "dotnet5.6", "portable-net45+win8" ] } }, "buildOptions": { "emitEntryPoint": true, "preserveCompilationContext": true }, "runtimeOptions": { "configProperties": { "System.GC.Server": true } }, 
"publishOptions": { "include": [ "wwwroot", "Views", "Areas/**/Views", "appsettings.json", "web.config" ] }, "scripts": { "prepublish": [ "bower install", "dotnet bundle" ], "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ] } } ```
2016/07/01
[ "https://Stackoverflow.com/questions/38139579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783768/" ]
You should replace the "tools" section in your project.json like this: ``` "tools": { "BundlerMinifier.Core": "2.1.258", "Microsoft.AspNetCore.Razor.Tools": { "version": "1.0.0-preview2-final", "imports": [ "netstandard1.6" ] }, "Microsoft.AspNetCore.Server.IISIntegration.Tools": { "version": "1.0.0-preview2-final", "imports": [ "netstandard1.6" ] }, "Microsoft.EntityFrameworkCore.Tools": { "version": "1.0.0-preview2-final", "imports": [ "netstandard1.6" ] }, "Microsoft.Extensions.SecretManager.Tools": "1.0.0-preview2-final", "Microsoft.VisualStudio.Web.CodeGeneration.Tools": { "version": "1.0.0-preview2-final", "imports": [ "portable-net45+win8+wp8+wpa81", "netstandard1.6" ] } }, ``` Some packages need to import `netstandard1.6`, but `Microsoft.VisualStudio.Web.CodeGeneration.Tools` needs both `netstandard1.6` and `portable-net45+win8+wp8+wpa81`.
I encountered the same issue with `dotnet restore` using the hosted build agent in VSTS. Based on the comment from @svick , I ran `dotnet --version` to check what is installed on the hosted agent. As of today, it is still `1.0.0-preview1-002702` and not the referenced version of `1.0.0-preview2-003121`. ``` 2016-07-04T06:17:47.5047693Z ##[warning]File name doesn't indicate a full path to a executable file. 2016-07-04T06:17:47.5057787Z Executing the following command-line. (workingFolder = C:\a\1\s) 2016-07-04T06:17:47.5057787Z dotnet --version 2016-07-04T06:17:47.5067748Z Error message highlight pattern: 2016-07-04T06:17:47.5067748Z Warning message highlight pattern: 2016-07-04T06:17:48.9679166Z 1.0.0-preview1-002702 ``` The official software for the hosted build agents can be found at <https://www.visualstudio.com/en-us/docs/build/agents/hosted-pool> . In particular, notice it still shows `Visual Studio 2015 Update 2`. Hopefully, this gets updated soon.
60,099,641
I am working with two tables: **tblOrders** ``` | Part | Order | |:----:|:-----:| | AXXA | 10122 | | AXXA | 10125 | | AXXB | 10244 | | AXXB | 10254 | | AXXB | 10259 | ``` and **tblPOs** ``` | rOrder | PONum | PODate | |:------:|:-------:|:----------:| | 10122 | PO99283 | 2/6/2020 | | 10125 | PO99283 | 2/6/2020 | | 10244 | PO99455 | 3/22/2020 | | 10244 | PO99456 | 3/11/2020 | | 10254 | PO99019 | 1/22/2020 | | 10259 | PO99122 | 12/22/2020 | | 10259 | PO99122 | 1/2/2021 | ``` I am trying to join these two tables, and for each order, select the PO with the earliest arrival date (in the event there is more than one). So for Order Number `10244`, it has two POs, `PO99455` and `PO99456`. I would want `PO99456` to be associated, since its date of `03/11/2020` is sooner than `PO99455`'s date of `03/22/2020`. An ideal resulting dataset would look like: ``` | PartNum | OrderNum | PONum | PODate | |:-------:|:--------:|:-------:|:----------:| | AXXA | 10122 | PO99283 | 2/6/2020 | | AXXA | 10125 | PO99283 | 2/6/2020 | | AXXB | 10244 | PO99456 | 3/11/2020 | | AXXB | 10254 | PO99019 | 1/22/2020 | | AXXB | 10259 | PO99122 | 12/22/2020 | ``` Currently, I have written: ``` SELECT a.PartNum, a.OrderNum, b.PONum, MAX(b.PODate) AS PODate FROM tblOrders a LEFT JOIN tblPOs b ON a.OrderNum = b.rOrderNum GROUP BY a.PartNum, a.OrderNum, b.PONum ``` Which yields: ``` PartNum OrderNum PONum PODate AXXA 10122 PO99283 2020-02-05 AXXA 10125 PO99283 2020-02-05 AXXB 10244 PO99455 2020-03-21 AXXB 10244 PO99456 2020-03-10 AXXB 10254 PO99019 2020-01-21 AXXB 10259 PO99122 2021-01-01 ``` I've set up a [SQL FIDDLE](http://sqlfiddle.com/#!18/1fbfc/2) with this demo. How can I change my query to achieve the desired results?
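Not from the original thread, but the "earliest row per group" requirement described above can be sketched with a correlated subquery inside the join condition. The snippet below rebuilds the two tables in an in-memory SQLite database purely for illustration (the question targets SQL Server, where the same subquery, or `ROW_NUMBER()`, works as well); the column names follow the query in the question rather than the display tables.

```python
import sqlite3

# Hypothetical in-memory reconstruction of the question's tables, used only
# to demonstrate the correlated-subquery approach to "earliest PO per order".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblOrders (PartNum TEXT, OrderNum INTEGER);
CREATE TABLE tblPOs (rOrderNum INTEGER, PONum TEXT, PODate TEXT);
INSERT INTO tblOrders VALUES
    ('AXXA',10122),('AXXA',10125),('AXXB',10244),('AXXB',10254),('AXXB',10259);
INSERT INTO tblPOs VALUES
    (10122,'PO99283','2020-02-06'),(10125,'PO99283','2020-02-06'),
    (10244,'PO99455','2020-03-22'),(10244,'PO99456','2020-03-11'),
    (10254,'PO99019','2020-01-22'),(10259,'PO99122','2020-12-22'),
    (10259,'PO99122','2021-01-02');
""")

# Join each order only to the PO row carrying that order's minimum date.
rows = conn.execute("""
    SELECT a.PartNum, a.OrderNum, b.PONum, b.PODate
    FROM tblOrders a
    LEFT JOIN tblPOs b
      ON a.OrderNum = b.rOrderNum
     AND b.PODate = (SELECT MIN(b2.PODate)
                     FROM tblPOs b2
                     WHERE b2.rOrderNum = a.OrderNum)
    ORDER BY a.OrderNum
""").fetchall()
for r in rows:
    print(r)
```

Moving the date filter into the `ON` clause (rather than `WHERE`) preserves the outer join for orders that have no PO at all.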
2020/02/06
[ "https://Stackoverflow.com/questions/60099641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4876561/" ]
If your input is > > F03, F04, F05 > > > with whitespace between the commas and the next value, the statement gives: > > select .... where product\_code in ('F03', ' F04', ' F05'); > > > Note the whitespace. The values 'F04' and 'F05' are then not found. What if the input is `'F03,F04,F05'` (without spaces)?
Here is what I suspect you want: ``` text_input = "A,B,C" in_clause <- paste0("'", unlist(strsplit(text_input, ",")), "'", collapse=",") sql <- paste0("WHERE product_code IN (", in_clause, ")") sql [1] "WHERE product_code IN ('A','B','C')" ``` Here I am still using your combination of `unlist` and `strsplit` to generate a string vector of terms for the `IN` clause. But then I use `paste0` with `collapse` to get the output you want.
4,935,719
I am trying to solve a problem that involves basically implementing a logical AND between the input parameters. The complexity of the problem involves the size of the input parameters. To give a high level overview, I am trying to implement logic similar to ``` 100 & 100 == 100 001 & 010 == 0 001 & 100 == 0 ..... ``` The complexity is that some of the input parameters can be 400 bits long. It's not a true binary number representation; it's more of a positional representation. The same input can be represented as ``` 100 = x1; (or) x100 011 = x2,3; (or) x011 001.......11 = x3,......450,451; ``` So basically "x" is just a prefix with the value for it. This is an ACL system designed a long time ago, and I am trying to implement a Java version of it. I couldn't find a data type in Java that could represent a binary value as large as 400 bits. I can also use the decimal representation [i.e., x2,3] and solve it that way, but I couldn't think of a way other than looping through the entire number range and comparing it with the other input parameter. Both input parameters could be normalized to the same representation format [i.e., binary or decimal]. Any suggestions (or) help on how I can solve this problem?
2011/02/08
[ "https://Stackoverflow.com/questions/4935719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/608436/" ]
[java.util.BitSet](http://download.oracle.com/javase/6/docs/api/java/util/BitSet.html).
a BigInteger should also work: * new BigInteger(binaryString, 2) should parse the input correctly * BigInteger implements "and" * use testBit(n) to access the bits
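To make the two answers above concrete, here is a small self-contained sketch (class and variable names are mine, not from the thread) showing both `BigInteger` with radix-2 parsing and `BitSet` for the positional "x2,3" style of input:

```java
import java.math.BigInteger;
import java.util.BitSet;

public class AclAndDemo {
    public static void main(String[] args) {
        // "100" & "110" == "100"; BigInteger handles arbitrary widths (400+ bits).
        BigInteger a = new BigInteger("100", 2);
        BigInteger b = new BigInteger("110", 2);
        BigInteger and = a.and(b);
        if (!and.toString(2).equals("100")) throw new AssertionError();

        // testBit(n) reads bit n, counted from the least significant end.
        if (!and.testBit(2) || and.testBit(0)) throw new AssertionError();

        // The decimal positional form ("x2,3" = positions 2 and 3 set)
        // maps directly onto BitSet indices.
        BitSet s1 = new BitSet();
        s1.set(2); s1.set(3);
        BitSet s2 = new BitSet();
        s2.set(3);
        s1.and(s2); // in-place logical AND
        if (!s1.get(3) || s1.get(2)) throw new AssertionError();

        System.out.println("ok");
    }
}
```

Note that `BitSet.and` mutates the receiver, while `BigInteger.and` returns a new value, so pick whichever fits how the ACL masks are reused.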
9,866,071
I want to simulate a keypress in Java. Others have tried this using Robot. This only allows an atomic keypress, where I want to simulate a key hold (for a second, say) and release. So, I need to use JNA or JNI. I investigated JNative, but this seems to be for consuming key events at an OS level, not generating them at an OS level. How can I generate such events from Java? Thanks
2012/03/26
[ "https://Stackoverflow.com/questions/9866071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1292066/" ]
You can repeat it by fetching the data again ``` while($row = mysql_fetch_assoc($result)){ //another html element <div>$row['name']</div> <div>$row['title']</div> //and so on } ``` or you can store it in variables and display them again in another HTML element ``` $name = $row['name']; $title = $row['title']; //and so on ``` then put them in the other element, but if you want to show all the data for each id, you need to use the first approach
I use this or similar code to inject PHP messages into a fixed DIV positioned in front of other elements (z-index: 9999), just for convenience at the development stage. Each PHP message passes into my 'custom\_message()' function and is then conveyed into the innards of a preformatted DIV created by echoed JS. There can be as many messages as needed, all placed inside that fixed DIV, one under the other. ``` <style> #php_messages { position: fixed; left: 0; top: 0; z-index: 9999; } .php_message { background-color: #333; border: red solid 1px; color: white; font-family: "Courier New", Courier, monospace; margin: 1em; padding: 1em; } </style> <div id="php_messages"></div> <?php function custom_message($output) { echo ' <script> var el = document.createElement("DIV"); el.classList.add("php_message"); el.innerHTML = \''.$output.'\'; document.getElementById("php_messages").appendChild(el); </script> '; } ?> ```
12,953,345
I have this class of my own: ``` public class PeriodContainerPanel:StackPanel { public PeriodContainerPanel() : base() { addCollectionsToStackPanel(); } private void addCollectionsToStackPanel() { this.Children.Clear(); if (PeriodsList!=null) { double minutes = PeriodsList.Count * (Properties.Settings.Default.EndTimeSpan - Properties.Settings.Default.StartTimeSpan).TotalMinutes; foreach (ObservableCollection<PeriodBase> lst in PeriodsList) { this.Children.Add(new ChartUserControl(lst) { Minutes = minutes }); } } } public List<ObservableCollection<PeriodBase>> PeriodsList { get { return (List<ObservableCollection<PeriodBase>>)GetValue(PeriodsListProperty); } //do NOT modify anything in here set { SetValue(PeriodsListProperty, value); addCollectionsToStackPanel(); } //...or here } public static readonly DependencyProperty PeriodsListProperty = DependencyProperty.Register( "PeriodsList", //Must be the same name as the property created above typeof(List<ObservableCollection<PeriodBase>>), //Must be the same type as the property created above typeof(PeriodContainerPanel), //Must be the same as the owner class new UIPropertyMetadata( null //default value, must be of the same type as the property )); } ``` And I use this `DependencyProperty` `PeriodsList` in a `UserControl` like this: ``` <GridViewColumn> <GridViewColumn.CellTemplate> <DataTemplate> <UI:PeriodContainerPanel PeriodsList="{Binding RelativeSource={RelativeSource Mode=TemplatedParent}, Path=DataContext}" /> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> ``` I checked with a `Converter` whether the value is retrieved (yes, there is a value and it is correct), but it is not set to the `PeriodsList` property. What is the problem? P.S. If there are any questions about the code, please ask; I can add more.
2012/10/18
[ "https://Stackoverflow.com/questions/12953345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/885200/" ]
Just use `std::copy` 5 times. ``` int* ptrArray = myArray; for (i =0;i < 5; i++) { std::copy(iVector[i].begin(), iVector[i].end(), ptrArray); ptrArray += iVector[i].size(); } ```
Or you can use plain old C. You first initialize the array size to be the number of rows \* the number of columns your vector of vectors has. Then you use memcpy to copy each vector to the array. ``` vector<vector<int> > v = { {1,2},{3,4},{5,6} }; //v is 3 by 2 matrix int *arr = (int*)malloc( (3*2) * sizeof(int)); // arr has size 3*2 = 6 for (int i = 0; i < 3; i++) memcpy(arr + v[i].size() * i, &(v[i][0]), v[i].size() * sizeof(int)); ``` Here's a function that I wrote that does this for you: ``` template<typename T> T *vectorToArray(vector<vector<T> > const &v) { T *rv = (T*)malloc((v.size()*v[0].size()) * sizeof(T)); //Assuming all rows have the same size for (unsigned i = 0; i < v.size(); i++) memcpy(rv + v[i].size() * i, &(v[i][0]), v[i].size() * sizeof(T)); return rv; } ``` So now you can do something like this: ``` vector<vector<int> > v = { {1,2},{3,4},{5,6} }; //v is 3 by 2 matrix int *arr = vectorToArray(v); ``` I hope this helps
7,391,929
I have a script which renames files taken from a CSV, but it throws an error when it tries to re-rename files after it has already carried out the procedure successfully. The CSV file is like: ``` old new AC100 DC100 AC101 DC102 ``` Code tried: ``` $sourceDir = read-host "Please enter source Dir:" $csvL = $sourceDir + "\files.csv" $csv = import-csv $csvL $files = get-childitem $sourceDir $csv | % { ForEach( $file in $files){ if($file = $_.old){ $old = $sourceDir + "\" + $_.old Rename-Item $old $_.new } } } ``` I believe it is something to do with looping and the CSV, but I'm not sure where I'm going wrong; I have had a similar issue before. Here is a sample of the error. ```none + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new Rename-Item : Cannot rename because item at 'C:\scripts\2039X.67438.TXT' does not exist. At C:\scripts\renamerTim.ps1:18 char:15 + Rename-Item <<<< $old $_.new ``` thanks in advance, Craig
2011/09/12
[ "https://Stackoverflow.com/questions/7391929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1395967/" ]
The answer is in that question: ["pure virtual method called" when implementing a boost::thread wrapper interface](https://stackoverflow.com/questions/3160403/pure-virtual-method-called-when-implementing-a-boostthread-wrapper-interface) Basically, by the time the boost::thread object begins running, the object it was run against may already have been deleted. You have to implement a `join` method that you call manually before destroying the object.
From reading your updates, you're calling delete in the main thread, while the thread is starting in the other. Depending on the race between the destructor and the invocation of `run`, it will either: 1. Crash before it starts, because the vtable is completely destroyed 2. Call the Thread::run (which is pure virtual, and crashes with a pure virtual thunk) 3. Call the correct function, which is the derived class run() If you add a call to `sleep(1)` after you call start, but before you call `delete`, you'll find that it works as you expect.
1,385,710
I am considering using DTOs instead of passing around my domain objects. I have read several posts here as well as elsewhere, and I understand there are several approaches to getting this done. If I only have about 10 domain classes in all, and considering that I want to use DTOs rather than domain objects for consumption in my Views (WPF front ends), what is the recommended approach? I think using tools like AutoMapper etc. may be overkill for my situation, so I am thinking of writing a custom mapper class that will have methods for converting a domain type to a DTO type. What is the best way to do this? Are there any samples to get me started? Second question: when writing the methods that will create DTOs, how do I deal with setting up all the data, especially when the domain type has references to other domain objects? Do I write equivalent properties in the DTO for mapping to those reference types in the domain class? Please ask if I have not put my second question in the proper words, but I think you understand what I am trying to ask. Third question: when writing DTOs, should I write multiple DTOs, each containing partial data for a given domain model, so that each can cater to a specific View's requirement, or should the DTO have all the data that is in the corresponding model class?
2009/09/06
[ "https://Stackoverflow.com/questions/1385710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I'm kind of using DTOs in a project. I tend to make the DTOs only show the data I need for a specific view. I fetch all the data shown in the view in my data access class. For example, I may have an Order object which references a Client object: ``` public class Client{ public int Id{get;set;} public string Name{get;set;} } public class Order{ public int OrderID{get;set;} public Client client{get;set;} public double Total{get;set;} public IEnumerable<OrderLine> lines {get;set;} } ``` Then in my OrderListDTO I may have something like: ``` public class OrderListDTO{ public int OrderId{get;set;} public string ClientName{get;set;} ... } ``` Which are the fields I want to show in my view. I fetch all these fields in my database access code so I don't have to bother with entity associations in my view or controller code.
We all know what `Dtos` are (probably). **But the important thing is whether or not to overuse DTOs.** Transferring data between "local" services using `Dtos` is a good practice, but it puts a large overhead on your developer team. **Some facts:** 1. Clients should not see or interact with Entities (`Daos`). So you always need Dtos for transferring data to/from remote clients (out of process). 2. Using `Dtos` to pass data between services is optional. If you don't plan to split your project up into microservices, there is no need to do that. It would just be overhead for you. > > **And this is my comment:** If you plan to split your project into > microservices in the **distant future**, or **don't plan** to do that at all, then > **DON'T OVERUSE DTOs** > > > You need to read this article <https://martinfowler.com/bliki/LocalDTO.html>
57,789,008
What is the advantage of using a Firebird read-only transaction? I have been working with Firebird databases for more than seven years, and I have never used this option. I am just curious what a typical scenario for this feature is. I am working with Firebird 2.5.x, but if you have useful insight related to Firebird 3.x (or newer versions), your comments are welcome.
2019/09/04
[ "https://Stackoverflow.com/questions/57789008", "https://Stackoverflow.com", "https://Stackoverflow.com/users/538022/" ]
While the transaction is in effect you see a consistent view of the database - data modifications (deletes/updates/inserts) from other connections or sessions won't be visible. Common to use for combined reports where you run multiple queries against the same data to show it in different ways. If each query ran independently the reports might not agree with each other as latter queries could see different data. Not specific to Firebird - *read only transactions* is a common database phrase you can search on for more information.
Read-only transactions in Firebird use a pre-committed transaction (although it will still need to be ended with a `COMMIT`). It cannot alter the database (except global temporary tables that are `on commit delete rows`), and it cannot lock rows or tables. It does provide the visibility guarantees of the transaction configuration. The benefit of using read-only transactions is that it guarantees that the transaction will not block other transactions. However, because most read operations in Firebird do not block other transactions, that benefit is not very big. In addition, you can use read-only transactions to verify that a transaction does not modify the database (directly or indirectly).
8,028,295
I am currently running the latest version of Code::Blocks on Ubuntu 11.04. I have the GTK+ 2 and 3 developer libraries fully installed (and working), and presumably have X11 installed; the header files are there. However, a simple program using X11 calls will not compile. ``` #include "X11/Xlib.h" int main() { Display *display = XOpenDisplay(0); Window root = DefaultRootWindow(display); XWarpPointer(display, None, root, 0, 0, 0, 0, 100, 100); XCloseDisplay(display); return 0; } ``` This gives me the readout: > > obj/Release/main.o||In function `main':| > > > undefined reference to `XOpenDisplay' > > > undefined reference to `XWarpPointer' > > > undefined reference to `XCloseDisplay' > > > |=== Build finished: 3 errors, 0 warnings ===| > > > I've tried reading multiple webpages about 'linking' X11, but I only find header files, not the file type asked for by the linker within the compiler (that's the wrong term for it.. it's not a compiler.. it's something else.. I know. Apologies)
2011/11/06
[ "https://Stackoverflow.com/questions/8028295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/741470/" ]
I don't know what Code Blocks is, but for a normal compile/link process, you need to specify `-lX11` to link with libX11.so for the Xlib functions.
Thanks! I just had the same problem. To spell it out: Settings > Compiler and Debugger > Linker settings > Other linker options > "-lX11" For the record 'codelite' got it right all by itself.
49,826,455
I want to place different `textarea`s on different pages. The *CSS* for the `textarea` seems to be overriding the *rows* and *cols* I try to set for the second `textarea`. I've tried "*textarea*" and "*textarea1*", but that obviously didn't work. ```css textarea { width: 40%; height: 75px; padding: 12px 20px ; box-sizing: border-box; border: 5px solid #D8FF01; border-radius: 4px; background-color: #000; color: white; font-weight: bolder; resize: both; } ``` ```html <form> <textarea name="comment" placeholder="Enter text here"></textarea> </form> <form> <textarea name="reason" placeholder="Enter text here: (500 characters maximum)" maxlength="500" rows="10" cols="50"></textarea> </form> ```
2018/04/13
[ "https://Stackoverflow.com/questions/49826455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9311132/" ]
You can use it like this ```css textarea:not(.reason) { width: 40%; height: 75px; padding: 12px 20px ; box-sizing: border-box; border: 5px solid #D8FF01; border-radius: 4px; background-color: #000; color: white; font-weight:bolder; resize:both; } ``` ```html <form> <textarea name="comment" placeholder="Enter text here"></textarea> </form> <form> <textarea name="reason" class="reason" placeholder="Enter text here: (500 characters maximum)" maxlength="500" rows="10" cols="50"></textarea> </form> ``` This way, the CSS selector matches on the element's class. Or -- ```css textarea.myTextBox { width: 40%; height: 75px; padding: 12px 20px; box-sizing: border-box; border: 5px solid #D8FF01; border-radius: 4px; background-color: #000; color: white; font-weight: bolder; resize: both; } ``` ```html <form> <textarea name="comment" class="myTextBox" placeholder="Enter text here"></textarea> </form> <form> <textarea name="reason" placeholder="Enter text here: (500 characters maximum)" maxlength="500" rows="10" cols="50"></textarea> </form> ```
Try giving each one a separate CSS class: ``` <form> <textarea class="xxx" name="comment" placeholder="Enter text here"> </textarea> </form> <form> <textarea class="yyy" name="reason" placeholder="Enter text here: (500 characters maximum)" maxlength="500" rows="10" cols="50"> </textarea> </form> ``` CSS: ``` .xxx { enter styling code here } .yyy { enter styling code here } ```
14,948
At a Catholic Mass (Roman rite) people sometimes say (either in vernacular or Latin): "Confiteor Deo omnipotenti, et vobis fratres, quia peccavi nimis cogitatione, verbo, opere et omissione: mea culpa, mea culpa, mea maxima culpa. Ideo precor beatam Mariam semper Virginem, omnes Angelos et Sanctos, et vos, fratres, orare pro me ad Dominum Deum nostrum." Would the meaning change a bit if I changed "mea culpa" to "culpa mea" even if Latin doesn't care about word order?
2020/11/19
[ "https://latin.stackexchange.com/questions/14948", "https://latin.stackexchange.com", "https://latin.stackexchange.com/users/8610/" ]
No, the meaning would not change. *Mea culpa* and *culpa mea* both mean “my fault.” [There is a tendency](https://www.jstor.org/stable/3287962?seq=1#metadata_info_tab_contents) that when the possessive comes first, it is emphasized (*my* fault, not yours), and when it comes second, the noun is emphasized. But the meaning itself is unaffected.
It's not quite clear what you mean by "would the meaning change a bit". The denotative meaning (what the words explicitly mean as such) wouldn't change at all, but the connotative meaning (what the words imply in their wider context) would shift slightly, since putting the possessive first emphasizes it. So *culpa mea* still means "my fault", but it lacks the emphasis of *mea culpa*, which could also be translated as "my own fault" to convey the emphasis.
19,238,215
I'm having trouble with one of my C++ homework assignments. We're using a class to print out a list of phone numbers, but I'm having trouble figuring out how to get the number variable to print the defaults we're supposed to use with our constructors. If any of you could point me in the right direction, with an example or the like, I would greatly appreciate it! ``` #include <iostream> #include <string> class PhoneNumber { //This class will help the user make a list of phone numbers) private: int countryCode; //Stores the Country Code e.g. +1 int areaCode; //Stores Area Code e.g. (925) int number; //Stores Number e.g. 754-6978 char type; //Stores the "type" of number. e.g. 'H' for home, 'B' for business, 'C' for cell. int year; //Stores the year the number was added, for some odd reason. public: PhoneNumber(); //Empty constructor PhoneNumber(int ccode, int acode, int num, char line, int year); PhoneNumber(int num, char line = 'B'); PhoneNumber(int acode, int num, char line = 'C'); void setCountry(int ccode); void setArea(int acode); void setNumber(int num); void setType(char line); void setYear(int yr); int getCountry() const; int getArea() const; int getNumber() const; char getType() const; int getYear() const; bool doubleDigits() const; void printNumber() const; void printPhoneNumberStats() const; }; //The Default Constructor PhoneNumber::PhoneNumber() { void setCountry(int ccode = 43); void setArea(int acode = 800); void setNumber(int num = 8675309); void setType(char line = 'H'); void setYear(int year = 1981); } //Fill out the Phone Number class PhoneNumber::PhoneNumber(int ccode, int acode, int num, char line, int year) { } PhoneNumber::PhoneNumber(int num, char line) { } PhoneNumber::PhoneNumber (int acode, int num, char line) { } int PhoneNumber::getCountry() const { return countryCode; } int PhoneNumber::getArea() const { return areaCode; } int PhoneNumber::getNumber() const { return number; } char PhoneNumber::getType() const { return type; } int 
PhoneNumber::getYear() const { return year; } bool PhoneNumber::doubleDigits() const { } void PhoneNumber::printNumber() const { std::cout << number << std::endl; } void PhoneNumber::printPhoneNumberStats() const { } ``` Those are all my functions. Then there is the int main() that we are required to use. ``` int main() { PhoneNumber firstNum; PhoneNumber secondNum(39, 415, 867666, 'B', 2012); PhoneNumber thirdNum(1212121); PhoneNumber fourthNum(1234566, 'C'); PhoneNumber fifthNum(925, 4392181); PhoneNumber sixthNum(925, 5512346, 'H'); firstNum.printNumber(); /* firstNum.printPhoneNumberStats(); secondNum.printNumber(); secondNum.printPhoneNumberStats(); thirdNum.printNumber(); thirdNum.printPhoneNumberStats(); fourthNum.printNumber(); fourthNum.printPhoneNumberStats(); fifthNum.printNumber(); fifthNum.printPhoneNumberStats(); sixthNum.printNumber(); sixthNum.printPhoneNumberStats(); */ return 0; } ``` If you can explain *why* I'm getting garbage when I attempt to print the **number** variable, and how to fix it, I would be grateful. Of course, any other advice would also be appreciated. Thank you! Caitlin
2013/10/08
[ "https://Stackoverflow.com/questions/19238215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2856836/" ]
Your `PhoneNumber` default constructor (`PhoneNumber::PhoneNumber()`) doesn't actually initialize any of the member variables; ``` PhoneNumber::PhoneNumber() { void setCountry(int ccode = 43); void setArea(int acode = 800); void setNumber(int num = 8675309); void setType(char line = 'H'); void setYear(int year = 1981); } ``` These lines do not initialize anything - they are declaring functions, they're not calling the existing functions in the object. To have a default constructor that initializes the member values you can do this: ``` PhoneNumber::PhoneNumber() : countryCode(43), areaCode(800), number(8675309), type('H'), year(1981) { } ``` Or if you want to use the "set" functions, call them this way: ``` PhoneNumber::PhoneNumber() { setCountry(43); setArea(800); setNumber(8675309); setType('H'); setYear(1981); } ```
`number` is never initialized. Your `printNumber()` method should build the number (which needs to be a `string` if you want the `-` in it) from the other members of the class, THEN print it. Because the method is `const`, build it in a local variable rather than assigning to the member, so your `printNumber()` method needs to look more like this... ``` void PhoneNumber::printNumber() const { std::string number; //whatever you need to do to build it from the other members std::cout << number << std::endl; } ``` Unless you're going to use your `constructor` methods to initialize `number`.
451,549
I am trying to get a data backup from a broken LG/Google Nexus 5 smartphone. It was stuck in a boot loop, because the on-switch was stuck in the always-on position. I dismantled the mainboard. Here is an overview: [![PCB overview](https://i.stack.imgur.com/XrdMI.jpg)](https://i.stack.imgur.com/XrdMI.jpg) After desoldering the little push button (from the spot marked with the arrow) I soldered two wires to the contacts the switch was shorting, to make a makeshift switch. Then I reassembled the phone partially, to get it started. While doing so I unfortunately pulled too much on one wire and ripped it off the PCB together with the solder pad. Annoyingly it was not one of the four pads which are connected to the ground plane, but the "signal contact", which probably only has a narrow connection path to some controller pin. As I can see on these microscope images, there is no conductive track leaving my broken signal pad on the top layer of the PCB. It is isolated all around from the ground plane. So it must be connected by a "via in pad". Is that correct? I already tried to cover the broken pad with tin by making a solder bridge between the neighboring ground pads, without success. But the heating gave me a better view of what lies under the broken pad. [![zoom on mounting position of switch](https://i.stack.imgur.com/t9cWo.jpg)](https://i.stack.imgur.com/t9cWo.jpg) There seem to be an outer and an inner conductive plane, the inner one with a little golden point in it. Is that the via-in-pad? [![zoom on potential via in pad](https://i.stack.imgur.com/wdpC4.jpg)](https://i.stack.imgur.com/wdpC4.jpg) I also tried to short the connection with a wire, with an alligator clamp on ground on one side and a multimeter probe on the other side. With the probe I tried to touch the via. I had no success in switching on the phone with this method. Is the inner circular area with the golden point my target potential which I have to pull to ground? 1.
With what technique can I expose that area? Scratching? 2. Are there some other spots on the PCB where I could search for my target potential? 3. Is there another way to get a data backup of the flash memory? Thank you in advance! Edit: You can find some close-up photographs of the switch here: <http://runawaybrainz.blogspot.com/2015/05/google-nexus-5-power-button-woes.html>
2019/08/04
[ "https://electronics.stackexchange.com/questions/451549", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/102633/" ]
7 milliwatts spread over 60 centimeters will not be detected. The heat will be dumped into the chest and cooled by the blood. Sunlight is 1,000 watts per square meter, or 1,000 watts per 10,000 square cm. We easily sense sunlight, which is 0.1 watts per square cm. Your heat density is 0.1 milliwatts per cm of length, or about 0.1 mW per 1 cm \* 0.2 cm (assuming the insulation spreads out the heat), i.e. about 0.5 mW per square cm. Thus your heat flux into the skin, through the wire's insulation, is 200X smaller than the sun's flux.
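The arithmetic above can be sanity-checked in a few lines (numbers taken from the answer; the 0.2 cm insulation width is its stated assumption):

```python
# Order-of-magnitude check of the heat-flux estimate above.
wire_power_mw = 7.0            # total dissipation in milliwatts
wire_length_cm = 60.0
insulation_width_cm = 0.2      # assumed lateral spread of heat through insulation

linear_mw_per_cm = wire_power_mw / wire_length_cm            # ~0.12 mW/cm
flux_mw_per_cm2 = linear_mw_per_cm / insulation_width_cm     # ~0.58 mW/cm^2

sun_flux_mw_per_cm2 = 100.0    # 1000 W/m^2 == 0.1 W/cm^2
ratio = sun_flux_mw_per_cm2 / flux_mw_per_cm2
print(round(ratio))            # on the order of the answer's "200X smaller"
```

Without rounding the intermediate values, the ratio comes out near 170x, consistent with the answer's rounded "200X" figure.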
What do you mean by a wire? Is it an electrical wire, with an insulation? As you have already said, the temperature will depend on the environment (including air flow), surface area of wire and insulation. Additionally, it will also depend on biological factors like blood flow, electrolyte status, perspiration etc. For biological experiments, keep in mind that human pain perception for aluminum conductors is just above 42C at the chest (most sensitive) for healthy person. Pain perception is measured at epidermis/dermis interface. Make sure your subject is not hyper-sensitive to pain, e.g. due to impaired hormone status, or you might end up with an emergency situation. As you can see, there is no simple answer, especially at the extreme limits of accuracy where you are calculating. (Additionally, your reference is not room temperature at 305K, it will be skin temperature prior to measurement).
45,378,033
I have an application 1 (C#) that is hosted on port 80 and an application 2 (Node.js) that is hosted on port 3030. Both are on localhost.

**The request workflow is as follows:**

* the browser sends a request to application 1
* application 1 sends back a number of cookies
* later on the browser sends a request to application 2

The problem is in the last step: the cookies don't get included in the request.

**Things I have tried/understood:**

* I understand that this is a same-origin policy restriction and that, because of the different port numbers, the browser treats them as different domains.
* In application 1 (which is using System.Web.HttpCookie) I have tried to set the domain to be port specific ("127.0.0.1:3030"), but the browser seems to not accept it or to ignore it.

```
//c# code
var testCookie1 = new HttpCookie("Test", "testValue");
testCookie1.Domain = "127.0.0.1:3030";
testCookie1.Path = "/";
testCookie1.Expires = DateTime.Now.AddDays(1);
Response.SetCookie(testCookie1);

var testCookie2 = new HttpCookie("Test2", "testValue2");
testCookie2.Domain = "127.0.0.1";
testCookie2.Path = "/";
testCookie2.Expires = DateTime.Now.AddDays(1);
Response.SetCookie(testCookie2);
```

[![Cookies that come back from server](https://i.stack.imgur.com/PsxQN.png)](https://i.stack.imgur.com/PsxQN.png) [![Cookies that get stored in browser](https://i.stack.imgur.com/ZeY1d.png)](https://i.stack.imgur.com/ZeY1d.png)

The server sends back a cookie with the port number attached to it, but the browser seems to ignore it. And here is my ajax call:

```
var request = $.ajax({
    url: 'http://127.0.0.1:3030/SomeTask',
    type: 'POST',
    crossDomain: true,
});
```
2017/07/28
[ "https://Stackoverflow.com/questions/45378033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7025662/" ]
Your domain is the same in this case (localhost), so there shouldn't be any problem. Another thing: the port is part of a URI, not of a domain; the domain is also part of a URI, so you are mixing apples and oranges... Please refer to this [other question on SO](https://stackoverflow.com/questions/1612177/are-http-cookies-port-specific). The RFC clearly states:

> Introduction
>
> For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. **Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.**

I haven't tried it myself, but in my job we have to share cookies across subdomains (not ports) by setting a dot in front of the domain:

```
var testCookie1 = new HttpCookie("Test", "testValue");
testCookie1.Domain = "." + mydomain;
```

This way x.mydomain and y.mydomain will share cookies. So, do not set the port in the cookies, and use the name localhost instead of the resolved IP address. You can simulate a production setting by adding something like this to your hosts file:

```
127.0.0.1 myawesomesubdomain.thisdomainnotexist.com.tr
```

and then set the cookie to that domain without the port.
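As a side illustration of the same point, the cookie `Domain` attribute simply has no place for a port in any stack; for example, Python's standard `http.cookies` module (used here purely as an illustration) serializes a `Set-Cookie` header with a host-only `Domain`:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["Test"] = "testValue"
c["Test"]["domain"] = "127.0.0.1"  # a host only -- there is no host:port form
c["Test"]["path"] = "/"

header = c.output()  # e.g. "Set-Cookie: Test=testValue; Domain=127.0.0.1; Path=/"
print(header)
```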
Here are two different solutions you can try:

1. Run an Apache server and route the requests to either server
2. Disable security (i.e., the same-origin policy) in the browser
1,430,219
I have a Rails app for bands. Bands can import their shows which all occur in different time zones. It seems like a ton of work to store these events in UTC. I would have to figure out the time zone for any show created and then convert back to the show's local time zone when displaying to the user. Is there a simple plugin to get a UTC offset based on geolocation? That would probably help, but does anyone see any major reasons why I should store in UTC here? I understand storing timestamps in UTC is probably a good idea...but band event times?
2009/09/15
[ "https://Stackoverflow.com/questions/1430219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/45849/" ]
I'm working on something similar - a site with a list of events. In my situation, it's important to have the times standardized (in UTC) because we have announce and on-sale times to worry about (i.e., when the events appear on the site and when the on-sale links show up), not just displaying the event start time (which itself doesn't care what time zone it's in). Going from a UTC time in the database to the local time for the given venue (i.e., to display the local time in the event listing) is pretty simple using something along the lines of `e.start.in_time_zone("#{e.venue.time_zone}")`. What I couldn't figure out was getting the local time at the point of data entry recognized as a local time that needs to be converted to UTC, without having to deal with changing Time.zone. I found something that works. Check out this post: <http://steveluscher.com/archives/changing-a-times-zone-in-rails-keeping-the-same-local-representation>. I added a new file (time\_zone\_support.rb) to my config/initializers directory. Here are the contents:

```
module ActiveSupport
  class TimeWithZone
    def zone=(new_zone = ::Time.zone)
      # Reinitialize with the new zone and the local time
      initialize(nil, ::Time.__send__(:get_zone, new_zone), time)
    end
  end
end
```

This allows the following in the console:

```
>> e.starts = Time.zone.parse("2010-09-12 10:00 am")
=> Sun, 12 Sep 2010 10:00:00 UTC +00:00
>> e.starts.zone = e.time_zone
=> "Pacific Time (US & Canada)"
>> e.starts
=> Sun, 12 Sep 2010 10:00:00 PDT -07:00
>> e.save
=> true
```

Hope that helps you too!
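The same store-in-UTC / display-in-venue-zone round trip can be sketched outside Rails; here is a minimal Python illustration (fixed UTC offsets are used to keep it self-contained; a real app would use named zones to handle DST):

```python
from datetime import datetime, timezone, timedelta

PDT = timezone(timedelta(hours=-7), "PDT")        # venue's zone, fixed offset for the sketch

local = datetime(2010, 9, 12, 10, 0, tzinfo=PDT)  # 10:00 am as entered at the venue
stored = local.astimezone(timezone.utc)           # what goes into the database
display = stored.astimezone(PDT)                  # like e.start.in_time_zone(...)

print(stored.hour, display.hour)                  # 17 10
```

Storing the UTC instant and converting only for display is exactly what keeps the announce/on-sale comparisons unambiguous.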
You don't need to worry about UTC here, since the event is tied to the time and location in which it's occurring. If someone is traveling from Hawaii to San Francisco for an 8pm show, that doesn't mean the show is going to start at 4pm for them. So, you can store plain time and location, don't worry about TZ (since presumably the bands just have a time and location form) and store it like that in the db. As far as Rails is concerned, all events will be occurring in the TZ in which it exists, just with this funny "location" attribute that tells the reader that the event is really 1000mi away. This means it can do it's normal auto-TZ conversions for itself, but the reader won't be able to tell. It will just be converting to time-as-entered by the band in the first place. 8pm+"Los Angeles", 10pm+"Tulsa, OK." The one wrinkle here is whether shows are broadcast over the internet or something like that, where the location is decoupled from the event, in which you would still do the above, but let the viewer/potential-audience-member choose their own TZ (or geolocate them) and have the app do the conversion upon display in the view.
12,266,741
I am using UAModalPanel to create a popover controller effect. I can get the popover box to display, but I am struggling to figure out how to create a view (graphically, in storyboard), instantiate that view in code, and add it to the UAModalPanel.

**What I've tried**

* Created a UIViewController in storyboard, set its class to a custom class, instantiated that class in code, got its view and tried to add it to the current 'scene'.

That's it. Surely there is a way that I can make a view in storyboard, have it be a sub-class of UIView which I can then grab in code where I need to use it, instead of laying it out in code?
2012/09/04
[ "https://Stackoverflow.com/questions/12266741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/477528/" ]
In storyboards you'll want to drag and drop a new `UIViewController`, then give it an identifier here:

![enter image description here](https://i.stack.imgur.com/oifLR.png)

Then in code you can get the view property of this view controller with the following:

```
UIViewController *myController = [self.storyboard instantiateViewControllerWithIdentifier:@"myIDfromTheLastStep"];
```

Now you can get the controller's view property and make adjustments. Here's an example of a frame change:

```
[myController.view setFrame:CGRectMake(0, 0, 320, 320)];
```
Unless you need the segues you may be better off creating a standalone XIB. Lay out the view as you need, set its Class to your own (MyCustomView), then instantiate like this:

```
NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"MyCustomView" owner:self options:nil];
MyCustomView *view = (MyCustomView*)[nib objectAtIndex:0];
```

As long as your view is the first/only object in the XIB this will instantiate the view for you.
1,794,281
I am using the following code:

```
while (invalidInput) {
    // ask the user to specify a number to update the times by
    System.out.print("Specify an integer between 0 and 5: ");
    if (in.hasNextInt()) {
        // get the update value
        updateValue = in.nextInt();
        // check to see if it was within range
        if (updateValue >= 0 && updateValue <= 5) {
            invalidInput = false;
        } else {
            System.out.println("You have not entered a number between 0 and 5. Try again.");
        }
    } else {
        System.out.println("You have entered an invalid input. Try again.");
    }
}
```

However, if I enter a 'w' it will tell me "You have entered invalid input. Try Again." and then it will go into an infinite loop showing the text "Specify an integer between 0 and 5: You have entered an invalid input. Try again." Why is this happening? Isn't the program supposed to wait for the user to input and press enter each time it reaches the statement:

```
if (in.hasNextInt())
```
2009/11/25
[ "https://Stackoverflow.com/questions/1794281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/29326/" ]
The problem was that you did not advance the `Scanner` past the problematic input. From the [`hasNextInt()`](http://java.sun.com/javase/6/docs/api/java/util/Scanner.html#hasNextInt%28%29) documentation:

> Returns `true` if the next token in this scanner's input can be interpreted as an `int` value in the default radix using the `nextInt()` method. **The scanner does not advance past any input.**

This is true of all `hasNextXXX()` methods: they return `true` or `false` without advancing the `Scanner`. Here's a snippet to illustrate the problem:

```
String input = "1 2 3 oops 4 5 6";
Scanner sc = new Scanner(input);

while (sc.hasNext()) {
    if (sc.hasNextInt()) {
        int num = sc.nextInt();
        System.out.println("Got " + num);
    } else {
        System.out.println("int, please!");
        //sc.next(); // uncomment to fix!
    }
}
```

You will find that this program will go into an infinite loop, asking `int, please!` repeatedly. If you uncomment the `sc.next()` statement, then it will make the `Scanner` go past the token that fails `hasNextInt()`. The program would then print:

```
Got 1
Got 2
Got 3
int, please!
Got 4
Got 5
Got 6
```

The fact that a failed `hasNextXXX()` check doesn't skip the input is intentional: it allows you to perform additional checks on that token if necessary. Here's an example to illustrate:

```
String input = " 1 true foo 2 false bar 3 ";
Scanner sc = new Scanner(input);

while (sc.hasNext()) {
    if (sc.hasNextInt()) {
        System.out.println("(int) " + sc.nextInt());
    } else if (sc.hasNextBoolean()) {
        System.out.println("(boolean) " + sc.nextBoolean());
    } else {
        System.out.println(sc.next());
    }
}
```

If you run this program, it will output the following:

```
(int) 1
(boolean) true
foo
(int) 2
(boolean) false
bar
(int) 3
```
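The peek-without-consuming pitfall is not specific to Java's `Scanner`; here is a language-neutral sketch of the same loop in Python (a toy scanner for illustration, not real `Scanner` semantics):

```python
tokens = ["1", "2", "oops", "3"]   # "oops" plays the role of the bad input
pos = 0
results = []

def has_next_int():
    # Peek only: like hasNextInt(), this never advances past the token.
    return pos < len(tokens) and tokens[pos].lstrip("-").isdigit()

while pos < len(tokens):
    if has_next_int():
        results.append(int(tokens[pos]))
        pos += 1   # nextInt() consumes the token
    else:
        pos += 1   # like sc.next(): skip the bad token; without this
                   # line the loop would spin on "oops" forever

print(results)     # [1, 2, 3]
```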
```
import java.util.Scanner;

public class Detail {

    public static void main(String args[]) {
        int a;
        Scanner sc = new Scanner(System.in);
        System.out.print("Specify an integer between 0 and 5: ");
        a = sc.nextInt();
        if (a >= 0 && a <= 5) {
            System.out.println("You entered " + a + ".");
        } else {
            System.out.println("You have not entered a number between 0 and 5. Try again.");
        }
    }
}
```
12,304,785
I have an XML comment like this:

```
/// <summary>
/// Lorem ipsum
/// </summary>
/// <param name="entity"></param>
/// <returns></returns>
```

I'd like to place a piece of (multiline) code inside it. How can I do that?

Edit
----

Here's the info about multiline code: [Adding line breaks to comments for Intellisense](https://stackoverflow.com/questions/467202/adding-line-breaks-to-comments-for-intellisense)
2012/09/06
[ "https://Stackoverflow.com/questions/12304785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/106616/" ]
To add an example, use the

```xml
<example>
<code>
Put your example here.
</code>
</example>
```

tags.
I'm not entirely sure what you're asking, but here's my best shot: I typically place code examples inside an `<example> </example>` tag.

```
/// <summary>
/// Lorem ipsum
/// </summary>
/// <example>
/// Put your example here.
/// </example>
/// <param name="entity"></param>
/// <returns></returns>
```
59,840,967
In WordPress I have a custom post type 'referenzen'. This post type has an ACF custom field group 'Referenzen-buildin-type' with the subfield 'building-type', which is a checkbox. I do not know how to select posts with a specific building type. This is not working for me:

```
$posts = get_posts(array(
    'meta_query' => array(
        array(
            'key' => 'referenzen-building-types_building-type',
            'value' => '"Museen"',
            'compare' => 'LIKE'
        )
    )
));
```

Any idea? Thanks
2020/01/21
[ "https://Stackoverflow.com/questions/59840967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12754483/" ]
`fill-height` and, in general, all flexbox-related properties rely strictly on parent-child relations. So when you insert a `<div>` in between the parent and the child, they stop working. To learn more about how flexbox works, I recommend [A complete guide to flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/). In short, your layout:

```
<v-app>
  <div v-if="connected">
    <Main />
  </div>
  <div v-else>
    <Login />
  </div>
</v-app>
```

breaks the parent-child relation (between `<v-app>` and `<Main>`). You can get rid of the extra `<div>`s in two ways:

1. Simply place the `v-if` on the contents:

```html
<v-app>
  <Main v-if="connected" />
  <Login v-else />
</v-app>
```

2. Or use `<template>` tags. Because they're only logical wrappers, Vue doesn't produce actual DOM elements for them. This is particularly helpful when you have multiple elements as content and you don't want to place the `v-if` on each:

```html
<v-app>
  <template v-if="connected">
    <Main />
    <SomeOtherComponent />
    <YetOtherComponent />
  </template>
  <Login v-else />
</v-app>
```

Obviously, if you have more than one component in the `v-else` case, you can turn that into a `<template>` as well.

---

Note of caution: because `<template>` doesn't actually produce a DOM element, when you use it with `v-for`, you'll have to `:key` its children instead of the `<template>`, but except for this edge case, it's a great way to couple layout components without having them wrapped in a DOM element. It's also useful when you're dealing with strict parent/child HTML relations (i.e., `<table>` + `<tr>` + `<td>`, `<ul>` + `<li>`, `<ol>` + `<li>`, etc...).
Just set `fill-height`, like this:

```
<v-container fill-height>
</v-container>
```

If that does not work, put it inside `v-content`; try this one:

```
<v-content>
  <v-container fill-height>
    <v-layout>
      <v-flex xs12>
        ...
      </v-flex>
    </v-layout>
  </v-container>
</v-content>
```

Or:

```
<v-content>
  <v-container class="fill-height" fluid>
    <v-row justify="center" align="center">
      <v-col class="shrink">
        ...
      </v-col>
    </v-row>
  </v-container>
</v-content>
```
18,253,354
```
select round(avg(et_gsm_sınyal)) as sinyal, mahalle_kodu, ilce_kodu, sebeke
from
  (select et_gsm_sınyal, sozlesme_no, SUBSTR(et_operator,1,5) as sebeke
   from thkol316old
   where tarih >= ADD_MONTHS (TRUNC (SYSDATE, 'MM'), -1)
     AND tarih < TRUNC(SYSDATE, 'MM')) okuma,
  (select sozlesme_no, ilce_kodu, mahalle_kodu
   from commt020) bilgiler
where okuma.sozlesme_no = bilgiler.sozlesme_no
group by mahalle_kodu, ilce_kodu, sebeke;
```

* commt020 -> customer table
* thkol316old -> old bill table

This query works, but it is very slow: about 550 seconds response time. What can I do to make this query faster? Here is the execution plan:

```
SELECT STATEMENT 7547
HASH GROUP BY 7547
FILTER
Filter Predicates ADD_MONTHS(TRUNC(SYSDATE@!,'fmmm'),-1)
NESTED LOOPS
NESTED LOOPS 7546
TABLE ACCESS COMMT020 BY GLOBAL INDEX ROWID 3
ROW LOCATION ROW LOCATION
```
2013/08/15
[ "https://Stackoverflow.com/questions/18253354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615526/" ]
You need to make the `body` and `html` elements have `height: 100%`, and you need to give the outer div `height: 100%`. CSS:

```
body, html { height: 100% }
```

```
<div style="overflow: hidden; clear: both; height: 100%">
  <div style="background-color: blue; height: 70%;">Top</div>
  ...
```
You can add a `position: absolute` to the parent `div` and subsequently stretch it to achieve full width and height. Note that the `width: 100%` declarations are important to enforce a block-level formatting context.

```
<div style="position:absolute; overflow: hidden; top:0; left:0; right: 0; bottom: 0;">
  <div style="background-color: blue; height: 70%; width: 100%;">Top</div>
  <div style="background-color: red; height: 30%; width: 100%;">bottom</div>
</div>
```

Here's the [fiddle](http://jsfiddle.net/vQLyW/3/). Just note that this will remove this div from the 'normal flow', and that sibling elements will be obscured/obscuring. The [CSS 2.1 spec](http://www.w3.org/TR/CSS21/visuren.html#absolute-positioning) provides this advice:

> ...the contents of an absolutely positioned element do not flow around any other boxes. They may obscure the contents of another box (or be obscured themselves), depending on the stack levels of the overlapping boxes.
13,048,824
I am running the following bash code:

```
num=$(ls -1 $ini/*.ini | wc -l)
echo "Running $num simulations..."
for i in {1..$num}; do
    echo "a"
done
```

And I get the following output:

```
Running 24 simulations...
a
```

It should print 24 lines of 'a', but it doesn't. What should I change? Thanks!
2012/10/24
[ "https://Stackoverflow.com/questions/13048824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1596293/" ]
The curly brackets don't expand variables. You could use

```
for i in $(seq $num); do
    echo "a"
done
```

See e.g. `man bash`:

> [...]
>
> A sequence expression takes the form `{x..y[..incr]}`, where `x` and `y` are either integers or single characters, and incr, an optional increment, is an integer. When integers are supplied, the expression expands to each number between `x` and `y`, inclusive.
>
> [...]
>
> Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. It is strictly textual. Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces.
>
> [...]
The brace expansion only works for literals; it does not expand variables. Possible workaround:

```
for i in $(seq 1 $num) ; do
```
55,446,686
In the "C++ Concurrency in Action" book there is an example:

```
class background_task
{
public:
    void operator() () const
    {
        do_something();
        do_something_else();
    }
};

background_task f;
std::thread my_thread(f);
```

Followed by the text: "In this case, the supplied function object is copied into the storage belonging to the newly created thread of execution and invoked from there. It's therefore essential that the copy behave equivalently to the original, or the result may not be what's expected." Could someone please explain to me in more detail what these two sentences mean? What about other callable types that can be supplied to the thread object's constructor, are they not copied? How do I make sure that "the copy behave equivalently to the original", or why would it not behave equivalently? Thank you!
2019/04/01
[ "https://Stackoverflow.com/questions/55446686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The statement is contrasting the given example with the case immediately before, where a normal function `void do_some_work();` is passed as an argument to `std::thread`:

```
std::thread my_thread(do_some_work);
```

The function object `background_task f` should be copyable. This is not possible if the `class background_task` contains something that makes it non-copyable. For example, if `background_task` has a `std::mutex`, it will not be copyable because [`std::mutex`](https://en.cppreference.com/w/cpp/thread/mutex) is neither copyable nor movable. See **[Demo](https://godbolt.org/z/ljH6wP)**. The second thing is that even if `background_task` is copyable, it should be ensured that the copy constructor produces an exact copy of the original. An example of where this does not happen is the well-known case where the class has raw pointers and the copy constructor does not do a deep copy. The callable `void do_some_work();` is also copied into the thread like the function object, but does not face the issues mentioned above.
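The "copy must behave equivalently" requirement can be illustrated in any language; here is a Python analogue (not C++ semantics, just the shared-state pitfall) of the raw-pointer / missing-deep-copy case described above:

```python
import copy

class Task:
    """A callable whose default (shallow) copy shares mutable state --
    an analogue of a class holding a raw pointer without a deep copy."""
    def __init__(self):
        self.data = [1, 2, 3]
    def __call__(self):
        return sum(self.data)

t = Task()
t_copy = copy.copy(t)   # shallow copy: t_copy.data IS t.data
t.data.append(100)      # mutate the "original" after copying

print(t_copy())         # 106 -- the copy did not behave independently
```

If the copy shared state like this in the threaded case, the new thread's callable would observe (and race on) mutations made to the original.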
There's nothing much to read into here. The callable, whatever it is, must be copied into the thread, otherwise invoking it will not be thread-safe. And if the callable is of a type that has shall-we-say "unexpected" copy semantics (think a really weirdly written copy constructor) then that's just bad news all around for anyone trying to use your code. That's it!
36,045,713
There is a complex method function which takes some data from different collections. These collections get defined in some (optional) packages. Now I need to check if the collection is defined, which means that the package has been added to the project. I tried to use an `if`, but this doesn't work. I still get the error `Articles is not defined` and the script aborts.

```
Meteor.methods({
    data: function () {
        if (Articles) {
            Articles.find(
                { parent: null },
                { fields: { title: true } }
            );
        }
    }
});
```
2016/03/16
[ "https://Stackoverflow.com/questions/36045713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3142695/" ]
```
SELECT * from `table`
where day='Thursday'
  AND (`9-10`='DA6220' OR `10-11`='DA6220' OR `11-12`='DA6220' OR `12-1`='DA6220'
       OR `1-2`='DA6220' OR `2-3`='DA6220' OR `3-4`='DA6220')
```

It looks like your AND is evaluated first. Please try the above.
Elaborating on my comment:

```
SELECT *
FROM `table`
WHERE 0 < LOCATE("DA6220", CONCAT(
    `9-10`, " ", `10-11`, " ", `11-12`, " ", `12-1`, " ",
    `1-2`, " ", `2-3`, " ", `3-4`
))
```

The nice thing about this approach is that you don't have to repeat your search string over and over. It's probably *slightly* slower than the `OR` version.
15,062,364
```
var video_id = 'VA770wpLX-Q';
$.getJSON('http://gdata.youtube.com/feeds/api/videos/' + video_id + '?v=2&alt=jsonc', function (data, status, xhr) {
    alert(data.data.title);
});
```

I am having an issue using getJSON() in IE9 and below: the request completes but the callback function is not run. It works in all the other browsers except IE. I have already tested different jQuery versions, still no luck. Help!
2013/02/25
[ "https://Stackoverflow.com/questions/15062364", "https://Stackoverflow.com", "https://Stackoverflow.com/users/976070/" ]
Try this hack, it should make IE work with cross-domain requests in this case:

```
$.support.cors = true;

$.getJSON('http://gdata.youtube.com/feeds/api/videos/' + video_id + '?v=2&alt=jsonc', function (data) {
    console.log(data.data.title);
});
```

Worked for me: <http://jsbin.com/ariyec/1/>
Please use JSONP like this:

```
var video_id = 'VA770wpLX-Q';
$.ajax({
    url: 'http://gdata.youtube.com/feeds/api/videos/' + video_id + '?v=2&alt=jsonc',
    success: function (data, status, xhr) { alert(data.data.title); },
    dataType: 'jsonp'
});
```
53,255
I want to make a matrix which has multiple nodes in a single matrix cell. These nodes should be centered relative to each other. I managed to make this work by using a nested tikz environment like so:

```
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{positioning}

\begin{document}
\begin{tikzpicture}[auto, semithick]
  \tikzstyle{block} = [rectangle, draw, minimum width=5em, text centered,
                       rounded corners, minimum height=4em]
  \matrix[matrix of nodes, row sep = 2em, nodes={anchor=center}] (mx2){
    % First row:
    label1 &
    \node{\tikz{
      \node[block](n1){node1};
      \node[block, right=of n1](n2){node2};
    }}; \\
    % Second row:
    label2 &
    \node{\tikz{
      \node[block] (n3) {node 3};
      \node[block] (n4) [right=of n3] {node 4};
      \node[block] (n5) [right=of n4] {node 5};
    }}; \\
  };
  \draw (n1) -- (n4); % this fails
\end{tikzpicture}
\end{document}
```

![enter image description here](https://i.stack.imgur.com/xfQ1N.png)

Now connecting these nodes is impossible because of the nested tikz environments. I have tried to get a similar image using the `fit` library, but I did not manage to make it work. Can any of you help me out? Thank you in advance.
2012/04/25
[ "https://tex.stackexchange.com/questions/53255", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/13887/" ]
In general one can use the `remember picture` option and apply it to *all* pictures that should be accessible later. Since your pictures are nested and the options are inheritable, it's sufficient to apply `remember picture` only to the upper-level `{tikzpicture}`.

```
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{positioning}

\begin{document}
\begin{tikzpicture}[auto, semithick, remember picture,
    block/.style={rectangle, draw, minimum width=5em, text centered,
                  rounded corners, minimum height=4em, text width=5em}
  ]
  \matrix[matrix of nodes, row sep = 2em, nodes={anchor=center}] (mx2){
    % First row:
    label1 &
    \node{\tikz{
      \node[block] (n1) {node1};
      \node[block, right=of n1] (n2) {node2};
    }}; \\
    % Second row:
    label2 &
    \node{\tikz{
      \node[block] (n3) {node 3};
      \node[block] (n4) [right=of n3] {node 4};
      \node[block] (n5) [right=of n4] {node 5};
    }}; \\
  };
  \draw (n1) -- (n4); % this works
\end{tikzpicture}
\end{document}
```

![result](https://i.stack.imgur.com/ciXO5.png)

Compile twice to get the right result. Note that I replaced `\tikzstyle{block}` by `block/.style`, which is the preferred way. See [Should \tikzset or \tikzstyle be used to define TikZ styles?](https://tex.stackexchange.com/questions/52372/should-tikzset-or-tikzstyle-be-used-to-define-tikz-styles).
You can avoid the nested pictures:

```
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{positioning}

\begin{document}
\begin{tikzpicture}[auto, semithick, remember picture,
    block/.style={rectangle, draw, minimum width=5em, text centered,
                  rounded corners, minimum height=4em, text width=5em}
  ]
  \matrix[matrix of nodes, row sep = 2em, nodes={anchor=center}] (mx2){
    % First row:
    label1 &
    \node[block,right=1em,anchor=west](n1){node1};
    \node[block, right=of n1](n2){node2}; \\
    % Second row:
    label2 &
    \node[block] (n3) {node 3};
    \node[block] (n4) [right=of n3] {node 4};
    \node[block] (n5) [right=of n4] {node 5}; \\
  };
  \draw (n1) -- (n4); % this works now
\end{tikzpicture}
\end{document}
```

![enter image description here](https://i.stack.imgur.com/2LPmW.png)
3,484,546
> Let $x,y,z > 0$ such that $3x(x+y+z)=yz$. Find the minimum value of $$P=\frac{y+z}{x}$$

---

$$3x(x+y+z)=yz\Leftrightarrow 3(x+y)(x+z)=4yz$$ or $$3\left(\frac{x}{y}+1\right)\left(\frac{x}{z}+1\right)=4$$ Let $a=x/y$, $b=x/z$ ($a,b>0$). We have: $$4=3(a+1)(b+1)\ge3 \cdot 2 \sqrt a \cdot 2 \sqrt b=12 \sqrt {ab} \rightarrow \sqrt {ab}\le \frac 1 3$$ and $$P=\frac{y+z}{x}=\frac{1}{a}+\frac{1}{b}\ge \frac{2}{\sqrt {ab}}\ge 6$$ But by WA, $P_{\min}=6+4\sqrt 3$. Is my solution wrong?
2019/12/22
[ "https://math.stackexchange.com/questions/3484546", "https://math.stackexchange.com", "https://math.stackexchange.com/users/682364/" ]
Let $y+z=2u$ and $yz=v^2$, where $v>0$. Thus, since by AM-GM $$\frac{y+z}{2}\geq\sqrt{yz},$$ we obtain $u\geq v$, $$3x^2+6ux-v^2=0$$ or $$x=-u+\sqrt{u^2+\frac{v^2}{3}}$$ and $$\frac{y+z}{x}=\frac{2u\left(\sqrt{u^2+\frac{v^2}{3}}+u\right)}{\frac{v^2}{3}}\geq\frac{2v\left(\sqrt{v^2+\frac{v^2}{3}}+v\right)}{\frac{v^2}{3}}=6+4\sqrt3.$$ The equality occurs for $u=v$, i.e. $y=z$, which shows that this is indeed the minimal value.
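A quick numeric spot-check of the equality case above (a sketch: with $y=z=1$ the constraint $3x(x+y+z)=yz$ becomes $3x^2+6x-1=0$, so $x=-1+\sqrt{4/3}$):

```python
import math

x = -1 + math.sqrt(1 + 1/3)     # positive root of 3x^2 + 6x - 1 = 0
P = 2 / x                       # (y + z)/x with y = z = 1

print(P, 6 + 4 * math.sqrt(3))  # both are ~12.9282
```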
Your solution is wrong because in your solution equality would have to occur when $a=b=1$, which is not possible because $x=y=z$ doesn't satisfy the constraint. Instead, you can use the method of Lagrange multipliers. Let $m = \frac{y}{x}$ and $n = \frac{z}{x}$. Then the Lagrangian is $$L = m+n-\lambda(3(1+m+n)-mn)$$ (the constraint is obtained by dividing the given equation by $x^2$ on both sides). The equations formed after differentiating $L$ with respect to $m$, $n$ and $\lambda$ are $$1 =\lambda(3-n)$$ $$1 = \lambda(3-m)$$ $$3(1+m+n)=mn$$ From the first two equations, $m=n$; using this in the third equation, we get $$m^2-6m-3=0$$ $$\implies m = 3+2\sqrt{3}$$ The required minimum value is $6+4\sqrt{3}$.