137,902
I created a device, based on an Arduino Uno, which runs on 6 rechargeable NiMH batteries. Now I would like to add a check for whether the batteries have enough power left, to warn with a signal when they need recharging. As I understand it, the voltage of the batteries will slowly go down until it drops under the minimum required level. **How is a battery load level check implemented? What kind of elements do I need?**
2014/11/10
[ "https://electronics.stackexchange.com/questions/137902", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/56669/" ]
The most accurate way to know how much energy is left in a battery is to monitor the voltage, current, and temperature over time, then use knowledge of that particular chemistry to estimate the remaining energy. There are ICs which do parts of this, sometimes called battery *fuel gauge* ICs. Of course you can do the same thing with a microcontroller, but it takes constant A/D readings and the algorithm can be complicated, depending on how accurate you want to be. A much simpler but less accurate way is to just monitor voltage. NiMH cells start at about 1.4 to 1.5 V right after being charged, quickly drop to 1.2 V or so, go down only slowly over most of the discharge cycle, then drop quickly at the end of discharge. Usually you stop discharging at 900 mV or so. Letting the voltage of any cell get less than that can risk permanent damage. You could simply pick a voltage around 1.0 to 1.1 V and decide to warn the user when the battery gets that low, then go dead at 900 mV. The best levels depend on your load. Of course you need to consult the datasheet for whatever particular batteries you are using. The manufacturer will give you discharge plots at various currents, tell you how low you can go without damage, etc. As always, **read the datasheet**.
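The simple threshold scheme described above can be sketched as follows. This is only an illustration of the logic, not Arduino code; the function names are invented, and the 1.1 V warn / 0.9 V cutoff levels are the ones suggested in the answer (check your cells' datasheet before relying on them).

```python
# Toy sketch of the per-cell voltage-threshold check described above.
# Threshold values follow the answer's suggestion; tune them per datasheet.

WARN_V = 1.1      # warn the user somewhere around 1.0-1.1 V per cell
CUTOFF_V = 0.9    # below ~900 mV per cell risks permanent damage

def battery_state(cell_voltage):
    """Classify a single NiMH cell voltage into ok / warn / shutdown."""
    if cell_voltage <= CUTOFF_V:
        return "shutdown"
    if cell_voltage <= WARN_V:
        return "warn"
    return "ok"

def pack_state(pack_voltage, cells=6):
    """For a series pack, approximate the per-cell voltage by dividing."""
    return battery_state(pack_voltage / cells)
```

For the 6-cell pack in the question, a measured pack voltage of 7.2 V corresponds to about 1.2 V per cell, which is still in the "ok" band.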
I realize that this is a little late, but I have had this same problem and found a solution that looks like it will work great. Basically, the reference voltage on the Arduino is based off of Vcc (unless you provide an alternative Vref), which makes measuring Vcc very difficult. To fix this, you can base your measurements off of the internal Vref of 1.1 V. Even though the internal Vref is smaller than Vcc, you can still use it to calibrate the Vcc measurement so that it is accurate. This has a lot of advantages for things other than battery measurement as it can keep the voltage readings steady even if you don't have a clean Vcc source. Please note that I did not come up with this solution, I merely found it here: <https://provideyourown.com/2012/secret-arduino-voltmeter-measure-battery-voltage/> Best of luck!
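The arithmetic behind this trick can be shown in a few lines: the ADC measures the fixed ~1.1 V internal band-gap with Vcc as its reference, so the reading can be inverted to recover Vcc. This is a back-of-envelope model only; on a real AVR the reading comes from the ADC registers, not a function, and the band-gap is nominal (calibrate it per chip for best accuracy).

```python
# Model of the "secret voltmeter" math: a 10-bit ADC referenced to Vcc
# reads the fixed internal band-gap, and the reading is solved for Vcc.

BANDGAP_V = 1.1   # nominal internal reference; varies slightly per chip

def adc_reading_of_bandgap(vcc, resolution=1023):
    """What a 10-bit ADC referenced to `vcc` would read for the band-gap."""
    return round(BANDGAP_V / vcc * resolution)

def vcc_from_reading(reading, resolution=1023):
    """Invert the measurement: Vcc = 1.1 V * 1023 / reading."""
    return BANDGAP_V * resolution / reading
```

With a true Vcc of 5.0 V the ADC reads about 225 counts, and inverting that recovers Vcc to within the band-gap's tolerance.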
You should use a voltage divider made of two resistors to bring the voltage down into the range the MCU can measure, and connect the divider's output to an ADC pin of the MCU.
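The divider formula is Vout = Vin × R2 / (R1 + R2). A small sketch of that arithmetic, with illustrative resistor values (in practice, pick large ones, e.g. 100k, so the divider itself doesn't drain the battery):

```python
# Voltage-divider arithmetic for scaling a battery pack voltage
# into an ADC's input range: Vout = Vin * R2 / (R1 + R2).

def divider_out(v_in, r1, r2):
    """Output voltage of a two-resistor divider (r2 to ground)."""
    return v_in * r2 / (r1 + r2)
```

With equal resistors, a freshly charged 6-cell pack at ~9 V is halved to 4.5 V, safely under a 5 V ADC reference.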
35,798,911
I want to pull a set of user stories from a SOURCE TFS instance and put them into a TARGET TFS instance using Excel. I know other people have done this! However, once I download the stories into Excel, I cannot rebind the spreadsheet to the TARGET TFS instance. I keep getting the following: > "The reconnect operation failed because the team project collection you selected does not host the team project the document references." And, I don't see a way to clear the ID for the story or edit the document project/server references. **Q: How do I Migrate User Stories From One TFS Server To Another In Excel?** This should be easy!
2016/03/04
[ "https://Stackoverflow.com/questions/35798911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312317/" ]
You need to create another Excel sheet, with the same columns, that is bound to the new TFS server. Then just copy and paste between them.
I think it would be easier to use the TFS API to read from one instance and copy them to another TFS instance. The following post provides an example: <https://blogs.msdn.microsoft.com/bryang/2011/09/07/copying-tfs-work-items/>
380,951
I'm working on an FPGA MAC module and I'm somewhat confused by the TX\_ER signal. A '1' on TX\_ER means there is some error in the current packet being sent by the MAC. To my understanding, a MAC frame consists of a protocol-specific header and FCS, where I don't expect errors to occur, and a payload from the upper layer which should be transparent to the MAC. Then where does this error come from? And if the MAC is aware of the error, why does it send the frame at all?
2018/06/21
[ "https://electronics.stackexchange.com/questions/380951", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/111834/" ]
Typically, the MAC does not have enough FIFO to hold the entire packet. Therefore, it must start transmitting to the PHY before it has the entire packet. If an error condition, such as FIFO underflow, occurs, it may have already started transmitting. By asserting TX\_ER, it causes the PHY to generate an unambiguously invalid frame, guaranteeing that no receiver will accept it.
If some sort of error occurs while the MAC is sending the frame (say, a transmit buffer underrun) then it has to cut off the frame and indicate explicitly that it is an invalid frame. This is done by bringing tx\_er high while transmitting the frame. This signal is transferred via the PHY layer encoding and results in an assertion of the rx\_er signal at the receiver.
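The mechanism both answers above describe can be shown with a toy model: the MAC streams a frame out as data arrives, and if an underrun occurs mid-frame it asserts TX\_ER so the rest of the frame is unambiguously invalid. Everything here is invented for illustration; real MII/GMII signalling works per nibble or byte with separate TX\_EN/TX\_ER wires, not Python tuples.

```python
# Toy model: a MAC that starts sending before it has the whole frame.
# If the FIFO underruns partway through, TX_ER is asserted for the
# remaining symbols and the frame can no longer be valid.

def transmit(frame_bytes, underrun_at=None):
    """Return (symbols, frame_valid); each symbol is (byte, tx_er)."""
    symbols = []
    for i, b in enumerate(frame_bytes):
        if underrun_at is not None and i >= underrun_at:
            symbols.append((None, True))   # TX_ER asserted: error symbol
        else:
            symbols.append((b, False))     # normal data symbol
    frame_valid = not any(tx_er for _, tx_er in symbols)
    return symbols, frame_valid
```

The point the answers make is visible here: once transmission has started, the only option on error is to poison the remainder of the frame, not to withhold it.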
Two common cases, better known than others, are: * error propagation, and * carrier extension. IEEE Std 802.3, Clause 35, Table 35-1, "Permissible encodings of TXD<7:0>, TX\_EN, and TX\_ER", describes the usage of TX\_ER in full. [![](https://i.stack.imgur.com/71Fki.gif)](https://i.stack.imgur.com/71Fki.gif) (Some years ago the IEEE Get program was more open: anyone could download the current IEEE 802.3 for free, without registration or other barriers. Today registration is mandatory.) You wrote that you're working on a MAC design; what is the (normative) basis of your design, if not the mentioned standard?
24,827,445
Out-of-memory errors occur frequently in Java programs. My question is simple: when the memory limit is exceeded, why does Java kill the program directly rather than swap it out to disk? Memory paging/swapping is frequently used in modern operating systems, and programming languages like C++ definitely support swapping. Thanks.
2014/07/18
[ "https://Stackoverflow.com/questions/24827445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1589404/" ]
@Pimgd is sort of on track, but @Kayaman is right. Java doesn't manage memory beyond requesting it from the system. C++ doesn't support swapping either; it requests memory from the OS, and the OS does the swapping. If you request enough memory for your application with `-Xmx`, the OS might start swapping because it thinks it can.
Because Java is cross-platform. There might not be a disk. Other reasons could be that such a thing would affect performance and the developers didn't want that to happen (because Java already carries a performance overhead?).
A few words about paging. Virtual memory using paging - storing 4K (or similar) chunks of any program that runs on a system - is something an operating system may or may not do. The promise of an address space limited only by the capacity of the machine word used to store an address sounds great, but there's a severe downside, called *thrashing*. This happens when the number of page (re)loads exceeds a certain frequency, which in turn is due to too many processes requesting too much memory, combined with non-locality of memory accesses in those processes. (A process has good locality if it can execute long stretches of code while accessing only a small percentage of its pages.) Paging also requires (fast) secondary storage. The ability to limit your program's memory resources (as in Java) is not only a burden; it can also be a blessing when an overall plan for resource usage needs to be devised for, say, a server system.
177,982
I'm new to Blender and I very much like the aesthetics of the viewport. Is it possible to replicate those settings when rendering? [![enter image description here](https://i.stack.imgur.com/HFKhX.gif)](https://i.stack.imgur.com/HFKhX.gif) I got this look when those settings are applied. [![solid view settings](https://i.stack.imgur.com/lOwEt.png)](https://i.stack.imgur.com/lOwEt.png) Bigger resolution image: [![enter image description here](https://i.stack.imgur.com/XH9Mx.jpg)](https://i.stack.imgur.com/XH9Mx.jpg)
2020/05/11
[ "https://blender.stackexchange.com/questions/177982", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/96216/" ]
Change the render engine from **Cycles** or **Eevee** to **Workbench** in the **Render Properties** tab and play with the configurations: [![enter image description here](https://i.stack.imgur.com/OvThS.jpg)](https://i.stack.imgur.com/OvThS.jpg) Also, this video shows in detail how to use Workbench: [Introduction to Workbench Blender 2.8](https://www.youtube.com/watch?v=tnDythbQCZM)
If you want to render *exactly* what you see in the viewport, you can use : View Menu / Viewport Render Image. Before doing that, it could be nice to deactivate some options in the Overlays and Gizmos settings. You can even uncheck the menu icons to disable them totally. [![Gizmos and Overlays settings](https://i.stack.imgur.com/Ez1pb.png)](https://i.stack.imgur.com/Ez1pb.png)
104,400
I've read that "would rather" has two different constructions; same subject and different subjects. Some of the examples have been listed below: > > 1. I would rather they did something about it. > > > Question 1: Does it mean "I would prefer them to do something about it" at present moment or in the future? > > 2. Rahul joined Engineering but he'd rather has joined medicine. > > > Question 2. Does it mean "He would have preferred to join medicine but he joined Engineering" > > 3. I would rather you stayed at home tonight. > > > Question 3. Can't we just say "you would rather stayed at home tonight." Without changing the meaning of above sentence? Source: <http://dictionary.cambridge.org/grammar/british-grammar/would-rather-would-sooner#would-rather-would-sooner__1> Note: I have read this question ["I would rather did it myself" or "I would rather do it myself"?](https://ell.stackexchange.com/questions/13224/i-would-rather-did-it-myself-or-i-would-rather-do-it-myself) which is a bit similar to my question because both are about "would rather". But all the example sentences and questions that I have asked in my question are different.
2016/09/23
[ "https://ell.stackexchange.com/questions/104400", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/32996/" ]
Your interpretations of all the meanings except 3 are correct: 3 means "he wishes he had joined medicine". Your alternative ways of phrasing it (Q1.2 and Q4.2) are not correct, though. > > **I** would rather... > > > This specifies who wants something to happen. > > ... **they** did something about it... > > > This specifies who should do something. > > 1. **I** would rather **they** did something about it... > > > This specifies that **I** want **them** to do something. > > 1.2 **They** would rather do something about it... > > > This specifies that **they** want **themselves** to do something. > > 4. I would rather you stayed at home tonight > > > This specifies that **I** want **you** to stay at home tonight. > > 4.2 you would rather stay at home tonight > > > This specifies that **you** want **yourself** to stay at home tonight.
1. It means that right now, they are only talking. Instead, you wish that they would do something. 2. You are correct. It means you are not happy that you have been rung at work, with the implication that being rung elsewhere would be okay. 3. Corrected: "Rahul joined Engineering, but he'd rather **have** joined Medicine." So, Rahul wanted to join Medicine, but for some reason was unable to so he joined Engineering instead. 4. In this tense, it means that before "tonight" (e.g. in the afternoon) you are telling someone that you would prefer them to stay at home tonight. The sentence "you'd rather stayed at home tonight" doesn't make grammatical sense, but you could say "you'd rather **stay** at home tonight" which sounds awkward and is telling someone what they are thinking. It wouldn't really be used unless you were trying to imply that someone should stay home for a particular reason. If you added a question mark, it's asking someone if they would prefer to stay home tonight.
281,428
<https://en.wikipedia.org/wiki/Broad_Institute> > The Eli and Edythe L. Broad Institute of MIT and Harvard (), often referred to as the Broad Institute, is a biomedical and genomic research center located in Cambridge, Massachusetts, United States. The institute is independently governed and supported as a 501(c)(3) nonprofit research organization under the name Broad Institute Inc.,[1][2] and **is partners** with Massachusetts Institute of Technology, Harvard University, and the five Harvard teaching hospitals.
2015/10/20
[ "https://english.stackexchange.com/questions/281428", "https://english.stackexchange.com", "https://english.stackexchange.com/users/17129/" ]
The following are equivalent to "We are **in partnership** together.": * I am **partners** with you. * You and I are **partners**. --- * I am your **partner**. * I am a **partner** with you. I have deliberately listed them as pairs. The first two emphasize that I am in a dual relationship (me and you); the second emphasizes my singular role in a mutual relationship. The example in the question is more complicated. How many partnerships are there? Are there three -- one between Broad Institute and each of MIT, Harvard, and the hospitals?\* Or is there only one, the one between Broad Institute and a unified consortium of MIT, Harvard, and the hospitals? Grammar can take you only so far, and the grammar here doesn't tell us for sure, but MIT and Harvard University are separate and independent institutions, so there are likely three partnerships involving the Broad Institute, making the meaning > > Broad Institute Inc. is a partner with each of the Massachusetts > Institute of Technology, Harvard University, and the five Harvard > teaching hospitals > > > and making the plural *partners* appropriate. If the singular had been used, on the other hand: > > Broad Institute Inc. is a **partner** with the Massachusetts > Institute of Technology, Harvard University, and the five Harvard > teaching hospitals > > > you would be justified in concluding that the Broad Institute was one partner and the other was a group of two universities and five hospitals. \*I'm assuming here that the five hospitals act together as one entity. If not, my count is low.
Plural is always correct because there is always more than one entity in a partnership. However, I understand why it doesn't sound right, since the verb is singular. How about changing the tense, as in "and has partnered with..."?
9,310,060
I want to send location information every 15 minutes through a service. The problem I faced is that the service gets killed after a few hours. So what I am thinking is to send the location information, stop the service, and create it once again after 15 minutes. Is that a good idea? How can it be accomplished? I don't know exactly how to stop and re-create a service every 15 minutes. Thanks.
2012/02/16
[ "https://Stackoverflow.com/questions/9310060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1089149/" ]
You can do this with the help of AlarmManager; rescheduling the alarm every time after it fires is the best way for your scenario. The AlarmManager is never killed because it is directly connected to the system RTC. [Here](http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/AlarmService.html) is a sample example.
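The "reschedule after each fire" pattern the answer recommends can be sketched generically. On Android you would re-arm the AlarmManager from the receiver each time it fires; the sketch below shows the same idea with Python's one-shot `threading.Timer`, purely as an illustration of the control flow (the class name and structure are invented).

```python
# Generic sketch of re-arming a one-shot timer on every fire, the same
# pattern as rescheduling an AlarmManager alarm from its receiver.

import threading

class RepeatingTask:
    def __init__(self, interval, fn):
        self.interval, self.fn = interval, fn
        self._timer = None
        self._stopped = False

    def _fire(self):
        if self._stopped:
            return
        self.fn()          # do the periodic work (e.g. send location)
        self.start()       # re-arm: the one-shot timer becomes periodic

    def start(self):
        self._timer = threading.Timer(self.interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        self._stopped = True
        if self._timer:
            self._timer.cancel()
```

Because each interval is armed fresh after the previous run finishes, there is no long-lived periodic machinery for the system to kill between fires, which is the appeal of the AlarmManager approach.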
In Android you can use Timer and TimerTask in a Service. Here are some examples [Android - Controlling a task with Timer and TimerTask?](https://stackoverflow.com/questions/2161750/android-controlling-a-task-with-timer-and-timertask) [Pausing/stopping and starting/resuming Java TimerTask continuously?](https://stackoverflow.com/questions/2098642/pausing-stopping-and-starting-resuming-java-timertask-continuously) [Android Timer within a service](https://stackoverflow.com/questions/3819676/android-timer-within-a-service)
10,303,394
I just started learning Erlang. My task is to write a simple script for load testing web applications. I haven't found a working script on the Internet, and Tsung is too bulky for such a task. Can anyone help me (give a working example of a script, or a link to where I can find one)? It should be possible to specify a URL, the concurrency, and the duration of the test, and get the results. Thanks. These links did not help: * <http://effectiveqa.blogspot.com/2009/12/minimal-erlang-script-for-load-testing.html> (not working, function example/0 undefined) * <http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1> (works for sockets, but I need concurrent testing)
2012/04/24
[ "https://Stackoverflow.com/questions/10303394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/922516/" ]
I use basho [bench](http://wiki.basho.com/Benchmarking.html) for such purposes. It is not so hard to start with it and add your own cases. It also contains a script which draws all the results.
Would you like to build one? I would not recommend going that way (because I have tried, and there are many things to consider when building one, especially spawning many processes and collecting the results back). As you already know, I would recommend tsung; although it is bulky, it is a full load-test application. I gave up on mine and went back to tsung because I could not properly handle opening/closing sockets with too many processes. If you really want a simple one, I would use httperf. AFAIK, it works fine on a single machine with multiple processes. <http://agiletesting.blogspot.ca/2005/04/http-performance-testing-with-httperf.html>
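To make the "URL, concurrency, duration" shape of such a tool concrete, here is a minimal driver sketch - in Python rather than Erlang, since the answers above steer toward existing tools for a real implementation. The request function is injected so the sketch stays self-contained; to hit an actual URL you could pass something like `lambda: urllib.request.urlopen(url).read()`.

```python
# Minimal concurrent load-test driver: run `request_fn` in N threads
# for a fixed duration and count successes and errors.

import threading, time

def load_test(request_fn, concurrency, duration):
    """Returns (total_requests, errors) after `duration` seconds."""
    stop_at = time.monotonic() + duration
    lock = threading.Lock()
    stats = {"requests": 0, "errors": 0}

    def worker():
        while time.monotonic() < stop_at:
            try:
                request_fn()
                failed = 0
            except Exception:
                failed = 1
            with lock:
                stats["requests"] += 1
                stats["errors"] += failed

    threads = [threading.Thread(target=worker) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return stats["requests"], stats["errors"]
```

Even this toy shows why the answers recommend tsung or basho bench: real tools also need timing percentiles, ramp-up, connection reuse, and multi-machine coordination, which is exactly the part that is hard to hand-roll.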
273,787
Is there a word or short phrase for this? I have a cell in the back of my head insisting that it's "[something] pain". The person who feels this way tries to wear their outcast status as a badge of honor, without letting other people realize that they are trying to do so. They are quietly trying to get themselves compared with those great geniuses of history who just wanted to fit in but, because of their genius, could not.
2015/09/13
[ "https://english.stackexchange.com/questions/273787", "https://english.stackexchange.com", "https://english.stackexchange.com/users/127519/" ]
You want to highlight this [poseur](http://www.merriam-webster.com/dictionary/poseur)'s objectionable [affectation](http://www.merriam-webster.com/dictionary/affectation)? We've got a lot of words for inauthenticity. How contemptuous do you want to be? To say he's an *aspiring outcast* suggests, to me, that the harm is relatively minor. To say he's a *professional outcast* underscores the artifice of his behavior with a little more irony.
I think that in the context you are describing [masochist](http://www.oxforddictionaries.com/us/definition/american_english/masochism) , with the following connotation, may fit: > > * (In general use) the enjoyment of what appears to be painful or tiresome: > *isn’t there some masochism involved in taking on this kind of project?* > > > (ODO)
Perhaps martyrish or martyrly? Also masochistic, because of your use of the word "pleasure." If pleasure isn't a necessary component, then perhaps contrarian, egoistic, egotistical, self-conceited, vainglorious, or self-important.
If you're looking for a word to describe someone that is pretentious and countercultural, I think [Bohemian](http://www.oxforddictionaries.com/us/definition/learner/bohemian) would fit well. It tends to be used more for artists, though. [Maverick](http://www.oxforddictionaries.com/us/definition/learner/maverick) is similar and can be used more generally.
Try [pariah](http://dictionary.reference.com/browse/pariah?s=t "pariah"). It actually means "outcast", but when used sarcastically means someone who has affected this status.
100,498
If I wanted to explore a [discrete mathematics](http://en.wikipedia.org/wiki/Discrete_mathematics) approach to [continuum mechanics](http://en.wikipedia.org/wiki/Continuum_mechanics), what textbooks should I look into? I suppose a ready answer to the question might be: "computational continuum mechanics", but textbooks that discuss such a subject are usually focused upon applying numerical analysis to continuous theories (i.e. the base is continuous), whereas I would like to know if there is a treatment of the subject that builds up from a base that is discrete.
2014/02/23
[ "https://physics.stackexchange.com/questions/100498", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ]
At time 0 when you throw the ball, there only exists the horizontal velocity you gave it, as the acceleration due to gravity hasn't created a vertical velocity yet. This horizontal velocity will remain constant until the ball hits the ground and eventually stops. Gravity is adding a perpendicular component that will affect the resultant velocity, but not the horizontal component you gave it. Knowing the time it takes for the ball to hit the ground and the distance travelled it is possible to obtain the initial horizontal velocity, simply by calculating it as the horizontal distance travelled divided by the time it was airborne, as this component of the velocity is the only one driving the ball forward: vh \* 0.5s = 15.5m -> vh = 31m/s The final velocity would be calculated as you propose, vh would be 31m/s and vy can be found knowing the acceleration due to gravity and the time elapsed. How you define your maximum height is a matter of notation. You would need the mass of the object in order to calculate kinetic energy, forces and work.
Assuming no air resistance the horizontal velocity will not change. You can calculate the horizontal velocity from the given time and distance. Vertically is an accelerated motion with constant acceleration. You can calculate the initial height from there. You are right about the final velocity. You just need to calculate $v\_y$. You need the mass for kinetic energy. You can not find the mass from the given information as it applies to all objects when there is no air resistance.
It actually seems confusing, but it all depends on the same factors: the initial height, the velocity, and g. Let the height be h. Then t = √(2h/g), and the distance travelled is V×t, since theta is 0: all the velocity you have given is horizontal, and no vertical component exists initially.
80,718
The phrase "bedroom eyes" came up in another question, and the person who used it remarked that to him/her it meant that, from a physical standpoint "that means they've got dilated pupils". This didn't mesh with what I remembered hearing, which was that the eyes were semi-lidded. Dictionary.com mentions nothing about either. UrbanDictionary, for whatever that's worth, mentions semi-lidded eyes a couple times, but not as the highest rated answers. Is there any specific physical implication to having "bedroom eyes"?
2012/09/05
[ "https://english.stackexchange.com/questions/80718", "https://english.stackexchange.com", "https://english.stackexchange.com/users/4256/" ]
Dilated pupils or heavy lids are not a part of the definition of “bedroom eyes”. More generally, there is no one specific physical aspect that is essential to the meaning of the expression “bedroom eyes”. Any “way of looking at someone that shows you are sexually attracted to them” (*[Macmillan Dictionary](http://www.macmillandictionary.com/dictionary/american/bedroom-eyes)*) can be called “bedroom eyes”. ![Marilyn Monroe’s famous “bedroom eyes”](https://i.stack.imgur.com/6KatU.jpg "Marilyn Monroe’s famous “bedroom eyes”") *Marilyn Monroe’s famous “bedroom eyes”* This is not to say that dilated pupils cannot be “bedroom eyes”. In fact they can, and for good reason: one cause of dilated pupils is sexual arousal. See *Wikipedia*’s article on the subject, “Mydriasis”, under the subsection titled “[Effects](http://en.wikipedia.org/wiki/Mydriasis#Effects)”. In scientific studies, subjects rate images of people with dilated pupils as more attractive. [![dilated pupils](https://i.stack.imgur.com/6zI20.jpg "dilated pupils")](https://i.stack.imgur.com/6zI20.jpg "dilated pupils") (Image from the article “[Eye Signals](http://westsidetoastmasters.com/resources/book_of_body_language/chap8.html)”, published in *Westside Toastmasters, For Public Speaking and Leadership Education*.) A tea made from the poisonous plant *Atropa belladonna* has been used by countless women to dilate their pupils or “darken the eyes”. Hence the name of the plant, *bella donna*, Italian for “beautiful woman”.
I always knew "bedroom eyes" as meaning the look of a woman's eyes when she attempts to open them during climax.
80,718
The phrase "bedroom eyes" came up in another question, and the person who used it remarked that to him/her it meant that, from a physical standpoint "that means they've got dilated pupils". This didn't mesh with what I remembered hearing, which was that the eyes were semi-lidded. Dictionary.com mentions nothing about either. UrbanDictionary, for whatever that's worth, mentions semi-lidded eyes a couple times, but not as the highest rated answers. Is there any specific physical implication to having "bedroom eyes"?
2012/09/05
[ "https://english.stackexchange.com/questions/80718", "https://english.stackexchange.com", "https://english.stackexchange.com/users/4256/" ]
Dilated pupils or heavy lids are not a part of the definition of “bedroom eyes”. More generally, there is no one specific physical aspect that is essential to the meaning of the expression “bedroom eyes”. Any “way of looking at someone that shows you are sexually attracted to them” (*[Macmillan Dictionary](http://www.macmillandictionary.com/dictionary/american/bedroom-eyes)*) can be called “bedroom eyes”. ![Marilyn Monroe’s famous “bedroom eyes”](https://i.stack.imgur.com/6KatU.jpg "Marilyn Monroe’s famous “bedroom eyes”") *Marilyn Monroe’s famous “bedroom eyes”* This is not to say that dilated pupils cannot be “bedroom eyes”. In fact they can, and for good reason: one cause of dilated pupils is sexual arousal. See *Wikipedia*’s article on the subject, “Mydriasis”, under the subsection titled “[Effects](http://en.wikipedia.org/wiki/Mydriasis#Effects)”. In scientific studies, subjects rate images of people with dilated pupils as more attractive. [![dilated pupils](https://i.stack.imgur.com/6zI20.jpg "dilated pupils")](https://i.stack.imgur.com/6zI20.jpg "dilated pupils") (Image from the article “[Eye Signals](http://westsidetoastmasters.com/resources/book_of_body_language/chap8.html)”, published in *Westside Toastmasters, For Public Speaking and Leadership Education*.) A tea made from the poisonous plant *Atropa belladonna* has been used by countless women to dilate their pupils or “darken the eyes”. Hence the name of the plant, *bella donna*, Italian for “beautiful woman”.
I am Italian and I was always told that "bedroom eyes" was a common Italian phrase (in the old days) that was said to mean a seductive beauty in the look of a woman's eyes. An Italian woman with "bedroom eyes" is said to be able to seduce a man with a simple look, due to the enchanting beauty and look of seduction in her eyes.
80,718
The phrase "bedroom eyes" came up in another question, and the person who used it remarked that to him/her it meant that, from a physical standpoint "that means they've got dilated pupils". This didn't mesh with what I remembered hearing, which was that the eyes were semi-lidded. Dictionary.com mentions nothing about either. UrbanDictionary, for whatever that's worth, mentions semi-lidded eyes a couple times, but not as the highest rated answers. Is there any specific physical implication to having "bedroom eyes"?
2012/09/05
[ "https://english.stackexchange.com/questions/80718", "https://english.stackexchange.com", "https://english.stackexchange.com/users/4256/" ]
I am Italian and I was always told that "bedroom eyes" was a common Italian phrase (in the old days) that was said to mean a seductive beauty in the look of a woman's eyes. An Italian woman with "bedroom eyes" is said to be able to seduce a man with a simple look, due to the enchanting beauty and look of seduction in her eyes.
I always knew "bedroom eyes" as meaning the look of a woman's eyes when she attempts to open them during climax.
36,276
I found that there is an article entitled [Understanding the StackOverflow Database Schema](http://sqlserverpedia.com/wiki/Understanding_the_StackOverflow_Database_Schema), and I wonder how and where the author got this information. Is it from public SO blog posts, articles, and data dumps, or does the author have private information that could only come from the horse's mouth?
2010/01/20
[ "https://meta.stackexchange.com/questions/36276", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/3834/" ]
Yes to both. The author (Brent Ozar) did some work for the SO team and does have special knowledge of the real schema. However, this article specifically targets the monthly data dump.
Reading the article, it sounds like it's just talking about the public dumps. Following the 'data mining the so database' link leads to a description of downloading the dump.
11,264,211
I can't create a new Android project; there is no "Android Project" option. There are "Android Activity", "Android Application Project", and so on. How can I create one?
2012/06/29
[ "https://Stackoverflow.com/questions/11264211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1491467/" ]
Do you have ADT plug-in installed? ([here](http://developer.android.com/tools/sdk/eclipse-adt.html))
Similar question [Eclipse Juno won't create Android Activity](https://stackoverflow.com/questions/11260619/eclipse-juno-wont-create-android-activity) see the issue: <http://code.google.com/p/android/issues/detail?id=33859>
7,777
See this video, about three minutes in, for details: I'm having difficulty achieving the 'spiccato' sound at sufficient speed, and am looking for tips on the bowing technique that will help me with this. Part of the problem is that my bow is not of very high quality, and is somewhat less 'springy' or 'jumpy' than most. How should I hold and manipulate my bow to better achieve the fast spiccato seen in the video I linked?
2012/11/12
[ "https://music.stackexchange.com/questions/7777", "https://music.stackexchange.com", "https://music.stackexchange.com/users/3194/" ]
If the time signatures match, you can play any tune to any rhythm. There is no right and wrong, only what *you* feel sounds good. Debussy to a disco beat? Led Zeppelin to a reggae beat? They've been done successfully. I suggest you choose a tempo first, then go through the 180 rhythms one by one to see which works for you. That sounds like a lot of work, but at 120bpm, listening to a bar only takes 2 seconds, so you could test out the whole lot in 10 minutes. I listened to the original version of Nadia, and it didn't have a percussion part. It might work with a basic rock rhythm (snares on the up-beats), or with a Latin rhythm - but that's just my subjective view.
To find the right rhythm, first work out the basics. Is it 4/4, 3/4, or something more complex? You may be able to work this out by listening, or you may want to look at the sheet music. The next problem is that the rhythm may change through the song - if this is the case you may just need to go with a rhythm that fits at a basic level. Working out the tempo should be relatively easy once you have the correct rhythm.
12,804,373
We are a small company that develops applications that have an app as the user interface; the backend is a Java server. We have an Android and an iPhone version of our apps, and we have been struggling a bit with keeping them in sync functionality-wise and keeping a similar look-and-feel without interfering with the standards and best practices on each platform. Most of the app development is done by subcontractors. Now we have opened a dialogue with a company that builds apps using Corona, a framework for building apps in one place and generating iPhone and Android apps from there. They tell us it is so much faster and so easy and everything is great. The Corona Labs site tells me pretty much the same. But I've seen these kinds of products earlier in my career, so I am a bit skeptical. Also, I've seen the gap between what salespeople say and what is the truth. I thought I'd ask the question here and hopefully get some input from those of you who know more about this. Please share what you know and what you think.
2012/10/09
[ "https://Stackoverflow.com/questions/12804373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1624714/" ]
This is a very controversial topic and opinions vary. Disclaimer: this answer applies to all of the generic "code once for all platforms" solutions. I have used Corona in the past for OpenGL-related work and it works well. Assuming you are not making a game... (a game is another story, since the user experience is similar and platform-agnostic). Personally, I'd say stay AWAY from these solutions if you are building anything complex. Yes, you will only have to maintain one code base, but maintaining two or three code bases does NOT necessarily mean more time is required, especially if you will make multiple apps and have common code between them. The top five reasons not to use them that I can think of off the top of my head are: 1. You will often run into problems that you won't know how to solve, and there is a much smaller community around each framework. 2. You won't likely save time, because you will have to code parts natively and you will have to learn the respective platform anyway. 3. The look and feel, as well as navigation, differ between Android and iOS. (Example: just look at the leather header on iOS.) Having coded a few apps for both iOS and Android, I personally feel that it is impossible to have the same user experience on both platforms. Example: Android has a back button. 4. Performance will likely vary a lot. (Especially for those that are HTML5-based; see how Facebook just switched to native? Note that Corona is NOT HTML5-based, though.) 5. You have to pay. In summary, you won't save time AND money in the short term or the long term. :) However, this industry is moving VERY fast right now, so these may become much better solutions in the next few years.
I think it is a truly terrible idea if you want to build a quality app. Not specifically Corona, but any code-once-run-anywhere tool for mobile apps. At least Corona is not based on HTML5; I don't have any bias against web apps, but I simply don't know of any good mobile app based on HTML5. I think it could very easily lead to far more maintainability problems than maintaining two clean code bases would.
131,413
So I've had a talk with my manager and lead, and they determined I wasn't getting a pay raise due to communication. My current job is remote, and so are my manager and lead. During my review, they stated I'm not calling/communicating with them enough. They seem to be very type-A personalities, and I'm not the type of person to just ring up my manager on a Saturday to talk about my personal life (my manager has done so, by the way). I try to keep it professional and separate my work life and personal life. I do, however, call them if something urgent comes up, and I do talk with them during our project meetings, explain issues, and give suggestions. I have multiple projects going and it has been very hectic (60-hour weeks for months). As a result, they've said I don't qualify for a raise. This has never been an issue with my previous companies, and I've been with this company for a little over a year. Is this a sign of things to come?
2019/03/13
[ "https://workplace.stackexchange.com/questions/131413", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/54235/" ]
**Side-answer** "60-hour weeks for months" and you still work there?! Man, that is the biggest signal that you need to update and then use your CV. That is exploitation. And since you do not get a salary raise, I assume you do not get paid for the 50% extra work either. *This is not a sign of things to come, it is proof of the things that already are!!* **Main Answer** There are bosses like the one(s) you described, who value you talking to them more than you doing real work. I know this from experience. I was shocked twice in the past because of this: 1. Why, for many years, did I not get the raises and the recognition that I deserved (actually, more than deserved)? 2. Why did I start to get a lot of recognition after I started reporting to my (relevant) boss(es), even though my amount of work declined significantly? It was senior-level work, true, but still, the levels of stress were through the floor compared to before. **Simple solution: give your bosses *what they expect*, not what you think they need.**
Is not getting a raise a signal of something? *Not by itself.* Is not getting a raise AND working 60-hour weeks for months a signal of something? **Yes it is.** They will squeeze you like a lemon for the most work they can get for the lowest amount of money. Is communication part of your job? Was it in your yearly goals? Did you have a task named "communication"? It shows that they just don't want to pay you and are using a trivial excuse. What is "enough communication"? Monthly written reports? Daily verbal ones? Records of keys pressed during a week and miles travelled by mouse? It cannot be a surprise rule revealed at review. If your boss felt you didn't communicate enough, they should have let you know **when it was happening**, not waited for the review to cite it as a reason for not giving you a raise.
131,413
So I've had a talk with my manager and lead, and they determined I wasn't getting a pay raise due to communication. My current job is remote, and so are my manager and lead. During my review, they stated I'm not calling/communicating with them enough. They seem to be very type-A personalities, and I'm not the type of person to just ring up my manager on a Saturday to talk about my personal life (my manager has done so, by the way). I try to keep it professional and separate my work life and personal life. I do, however, call them if something urgent comes up, and I do talk with them during our project meetings, explain issues, and give suggestions. I have multiple projects going and it has been very hectic (60-hour weeks for months). As a result, they've said I don't qualify for a raise. This has never been an issue with my previous companies, and I've been with this company for a little over a year. Is this a sign of things to come?
2019/03/13
[ "https://workplace.stackexchange.com/questions/131413", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/54235/" ]
**Side-answer** "60-hour weeks for months" and you still work there?! Man, that is the biggest signal that you need to update and then use your CV. That is exploitation. And since you do not get a salary raise, I assume you do not get paid for the 50% extra work either. *This is not a sign of things to come, it is proof of the things that already are!!* **Main Answer** There are bosses like the one(s) you described, who value you talking to them more than you doing real work. I know this from experience. I was shocked twice in the past because of this: 1. Why, for many years, did I not get the raises and the recognition that I deserved (actually, more than deserved)? 2. Why did I start to get a lot of recognition after I started reporting to my (relevant) boss(es), even though my amount of work declined significantly? It was senior-level work, true, but still, the levels of stress were through the floor compared to before. **Simple solution: give your bosses *what they expect*, not what you think they need.**
> > During my review, they've stated I'm not calling/communicating with > them enough. > > > I have multiple projects going and has been very hectic (60hr weeks > for months) As a result, they've said I don't qualify for a raise. > This really has never been an issue with my previous companies and > I've been with this company for a little over a year. Is this a sign > of things to come? > > > Possibly. If your lack of communication is such that they won't give you a raise, then you may not be cut out for this role unless you can change. Often working remotely means it is more difficult to stay in touch with others. Work harder to learn how your lead and manager expect you to communicate and then follow through. Make sure you understand specifically what they mean regarding calling/communication. It may have nothing to do with calling on Saturday and instead may mean that they want you to contact them when you are stuck, for example. If this is your first indication that your communication is insufficient, then you may be able to salvage things. If you were already warned and still this was the reason cited during your salary review for not getting a raise, then you may want to start polishing your resume.
131,413
So I've had a talk with my manager and lead, and they determined I wasn't getting a pay raise due to communication. My current job is remote, and so are my manager and lead. During my review, they stated I'm not calling/communicating with them enough. They seem to be very type-A personalities, and I'm not the type of person to just ring up my manager on a Saturday to talk about my personal life (my manager has done so, by the way). I try to keep it professional and separate my work life and personal life. I do, however, call them if something urgent comes up, and I do talk with them during our project meetings, explain issues, and give suggestions. I have multiple projects going and it has been very hectic (60-hour weeks for months). As a result, they've said I don't qualify for a raise. This has never been an issue with my previous companies, and I've been with this company for a little over a year. Is this a sign of things to come?
2019/03/13
[ "https://workplace.stackexchange.com/questions/131413", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/54235/" ]
Is not getting a raise a signal of something? *Not by itself.* Is not getting a raise AND working 60-hour weeks for months a signal of something? **Yes it is.** They will squeeze you like a lemon for the most work they can get for the lowest amount of money. Is communication part of your job? Was it in your yearly goals? Did you have a task named "communication"? It shows that they just don't want to pay you and are using a trivial excuse. What is "enough communication"? Monthly written reports? Daily verbal ones? Records of keys pressed during a week and miles travelled by mouse? It cannot be a surprise rule revealed at review. If your boss felt you didn't communicate enough, they should have let you know **when it was happening**, not waited for the review to cite it as a reason for not giving you a raise.
> > During my review, they've stated I'm not calling/communicating with > them enough. > > > I have multiple projects going and has been very hectic (60hr weeks > for months) As a result, they've said I don't qualify for a raise. > This really has never been an issue with my previous companies and > I've been with this company for a little over a year. Is this a sign > of things to come? > > > Possibly. If your lack of communication is such that they won't give you a raise, then you may not be cut out for this role unless you can change. Often working remotely means it is more difficult to stay in touch with others. Work harder to learn how your lead and manager expect you to communicate and then follow through. Make sure you understand specifically what they mean regarding calling/communication. It may have nothing to do with calling on Saturday and instead may mean that they want you to contact them when you are stuck, for example. If this is your first indication that your communication is insufficient, then you may be able to salvage things. If you were already warned and still this was the reason cited during your salary review for not getting a raise, then you may want to start polishing your resume.
61,924
There's an application called ShadowKiller that seems popular and supposedly works for Lion, but it just seems to die as soon as I try to start it on Mountain Lion. I'd like to get rid of the shadows surrounding windows.
2012/08/24
[ "https://apple.stackexchange.com/questions/61924", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/12324/" ]
This one works well for me: [toggle-osx-shadows](https://github.com/puffnfresh/toggle-osx-shadows). It is easy to compile and use, and there are only 17 lines of code.
[ShadowKiller](http://unsanity.com/haxies/shadowkiller) still works for me on 10.8, but it's supposed to quit silently after it's opened. You can run it at login by adding it to login items. [Nocturne](http://code.google.com/p/blacktree-nocturne/) also has an option to disable the shadows. Related questions at Super User: * [Disable drop shadows around windows or the menu bar on OS X](https://superuser.com/questions/256707/disable-drop-shadows-around-windows-or-the-menu-bar-on-os-x?lq=1) * [How do I decrease the window shadow in Mac OS X?](https://superuser.com/questions/126374/how-do-i-decrease-the-window-shadow-in-mac-os-x)
61,924
There's an application called ShadowKiller that seems popular and supposedly works for Lion, but it just seems to die as soon as I try to start it on Mountain Lion. I'd like to get rid of the shadows surrounding windows.
2012/08/24
[ "https://apple.stackexchange.com/questions/61924", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/12324/" ]
This one works well for me: [toggle-osx-shadows](https://github.com/puffnfresh/toggle-osx-shadows). It is easy to compile and use, and there are only 17 lines of code.
The program I use to do this on OS X 10.8.4 is ShadowSweeper. <http://download.cnet.com/ShadowSweeper/3000-2072_4-75966596.html> This one looks like it might also work but I haven't tried it myself. <https://github.com/puffnfresh/toggle-osx-shadows>
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
You can give <http://import-ant.sourceforge.net/> a try. It is a set of build file snippets that can be used to create simple custom build files.
One thing to look at -- if you're using Eclipse, check out the ant4eclipse tasks. I use a single build script that asks for the details set up in Eclipse (source dirs, build path including dependency projects, build order, etc.). This allows you to manage dependencies in one place (Eclipse) and still be able to use a command-line build for automation.
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
I had the same problem, so I generalized my templates and grew them into their own project: [Antiplate](http://antiplate.origo.ethz.ch/). Maybe it's also useful for you.
> > I used to do exactly the same thing.... then I switched to maven. > > > Oh, it's Maven 2. I was afraid that someone was still seriously using Maven nowadays. Leaving the jokes aside: if you decide to switch to Maven 2, you have to take care while looking for information, because Maven 2 is a complete reimplementation of Maven, with some fundamental design decisions changed. Unfortunately, they didn't change the name, which has been a great source of confusion in the past (and still sometimes is, given the "memory" nature of the web). Another thing you can do if you want to stay in the Ant spirit, is to use [Ivy](http://ant.apache.org/ivy/) to manage your dependencies.
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
I had the same problem, so I generalized my templates and grew them into their own project: [Antiplate](http://antiplate.origo.ethz.ch/). Maybe it's also useful for you.
One thing to look at -- if you're using Eclipse, check out the ant4eclipse tasks. I use a single build script that asks for the details set up in Eclipse (source dirs, build path including dependency projects, build order, etc.). This allows you to manage dependencies in one place (Eclipse) and still be able to use a command-line build for automation.
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
If you are working on several projects with similar directory structures and want to stick with Ant instead of going to Maven use the [Import task](http://ant.apache.org/manual/Tasks/import.html). It allows you to have the project build files just import the template and define any variables (classpath, dependencies, ...) and have all the *real* build script off in the imported template. It even allows overriding of the tasks in the template which allows you to put in project specific pre or post target hooks.
> > I used to do exactly the same thing.... then I switched to maven. > > > Oh, it's Maven 2. I was afraid that someone was still seriously using Maven nowadays. Leaving the jokes aside: if you decide to switch to Maven 2, you have to take care while looking for information, because Maven 2 is a complete reimplementation of Maven, with some fundamental design decisions changed. Unfortunately, they didn't change the name, which has been a great source of confusion in the past (and still sometimes is, given the "memory" nature of the web). Another thing you can do if you want to stay in the Ant spirit, is to use [Ivy](http://ant.apache.org/ivy/) to manage your dependencies.
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
If you are working on several projects with similar directory structures and want to stick with Ant instead of going to Maven use the [Import task](http://ant.apache.org/manual/Tasks/import.html). It allows you to have the project build files just import the template and define any variables (classpath, dependencies, ...) and have all the *real* build script off in the imported template. It even allows overriding of the tasks in the template which allows you to put in project specific pre or post target hooks.
I used to do exactly the same thing.... then I switched to [maven](http://maven.apache.org/). Maven relies on a simple xml file to configure your build and a simple repository to manage your build's dependencies (rather than checking these dependencies into your source control system with your code). One feature I really like is how easy it is to version your jars - easily keeping previous versions available for legacy users of your library. This also works to your benefit when you want to upgrade a library you use - like junit. These dependencies are stored as separate files (with their version info) in your maven repository so old versions of your code always have their specific dependencies available. It's a better Ant.
28,202
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file started to look really ugly. Do you have an Ant template that can be easily ported to a new project? Any tips/sites for making one? Thank you.
2008/08/26
[ "https://Stackoverflow.com/questions/28202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2644/" ]
You can give <http://import-ant.sourceforge.net/> a try. It is a set of build file snippets that can be used to create simple custom build files.
I used to do exactly the same thing.... then I switched to [maven](http://maven.apache.org/). Maven relies on a simple xml file to configure your build and a simple repository to manage your build's dependencies (rather than checking these dependencies into your source control system with your code). One feature I really like is how easy it is to version your jars - easily keeping previous versions available for legacy users of your library. This also works to your benefit when you want to upgrade a library you use - like junit. These dependencies are stored as separate files (with their version info) in your maven repository so old versions of your code always have their specific dependencies available. It's a better Ant.
418,075
Does the logic inside flash memory devices require a power down after each WRITE operation? I was confused when reading the datasheet of the Micron Serial NOR Flash Memory. There is "To avoid data corruption and inadvertent WRITE operations during power-up, a power-on reset circuit is included...". After WRITE operations (program or erase sector) to the Micron Serial NOR Flash Memory, it does not respond to any instruction except READ STATUS REGISTER. I have reset the circuit, but the device remains in lock mode; I have to power down the chip to get correct values (previously written) back from the EPCQL.
2019/01/21
[ "https://electronics.stackexchange.com/questions/418075", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/210186/" ]
No, the logic in a Flash memory device does not need to be powered down between write cycles.
Flash memory is essentially a switch and a charge-storage element. To write to a cell, you put a 1 or a 0 (high or low voltage) and then turn the transistor on, so the storage node takes on the voltage applied during the write cycle. Unlike DRAM, the stored charge leaks away only extremely slowly - data retention is typically years - so no periodic refresh is needed. If by "power down" you mean turning that access transistor off, then yes, that happens after every write; but the rest of the circuit is always powered up. [![enter image description here](https://i.stack.imgur.com/4sMcX.png)](https://i.stack.imgur.com/4sMcX.png) Source: <http://www.intersil.com/content/dam/Intersil/documents/an15/an1533.pdf>
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
If you have a file that just says "yes, software may be run", you of course cannot stop him from copying that file. What you *can* do is encrypt a file with something that is specific to the customer's system - the customer's name or an IP address, say. Then you can make your software check this IP address or print the customer's name on all reports. You can do it with simple symmetric encryption or using a signature; neither of them prevents him from tampering with the program to find the key. So tell your boss it's an obstacle, but certainly not unbreakable.
Possibly what you want to do is use XOR encryption (XOR each n-byte chunk of the file with the key), and since, as @AndreKR said, what you actually want to do is impossible, you might want to sign the encrypted file with your private key; then you can verify that the encryption was done by you. Of course, if you don't check this every time, and you don't use an opaque file format and compiled/obfuscated code, then it won't really make much difference. It is impossible in the general case to stop digital duplication of data if you are going to display that data to the user - in the worst case they can just take screen shots (or even capture signals sent to the monitor).
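The XOR scheme described in the second answer above can be sketched in a few lines. This is a toy illustration rather than the asker's PHP code - the license contents and key are made-up assumptions, and XOR with a repeating key is trivially breakable, as the answers point out:

```python
import itertools

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key (same call encrypts and decrypts)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

license_text = b"licensed-to: ACME Corp"   # hypothetical license contents
key = b"s3cret"                            # hypothetical shared key

blob = xor_bytes(license_text, key)          # "encrypt"
assert xor_bytes(blob, key) == license_text  # XOR is its own inverse
```

Because XOR is symmetric, anyone who extracts the key from the obfuscated code can both read *and* forge license files - which is exactly why the answer recommends signing the file on top.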
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
Simple RSA encryption will not solve your woes; once the code is in the clear, anyone can get it. A better question is "How much work am I willing to put into making it difficult for my client to get my code?" No matter the language and method, eventually the code gets run, and when it runs it can be read. The only foolproof way is to host it yourself and not allow your client or his servers any access to your code.
Possibly what you want to do is use XOR encryption (XOR each n-byte chunk of the file with the key), and since, as @AndreKR said, what you actually want to do is impossible, you might want to sign the encrypted file with your private key; then you can verify that the encryption was done by you. Of course, if you don't check this every time, and you don't use an opaque file format and compiled/obfuscated code, then it won't really make much difference. It is impossible in the general case to stop digital duplication of data if you are going to display that data to the user - in the worst case they can just take screen shots (or even capture signals sent to the monitor).
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
You can use a license manager like the [FlexNet Publisher License System](http://www.flexerasoftware.com/). There are two sides to the FlexNet license. The first is establishing that a site has a license. This can be done based upon IP, MAC address, or an internal ID of the processor. Once you've licensed the site, licenses at that site can be done on an active-user basis (you can have thousands of users, but only ten users at a time can use the software) or a seat license (you have ten users at the site who can use it, and only those people can use it. If an eleventh person wants it, the site must move the license from one person who is licensed to that new user. Or, buy more licenses). And, you can have a site license with unlimited users. FlexNet licenses can be broken, but they are generally strong and can report violations of the license policy back to you. Of course, you'll have to pay a licensing fee to Flexera Software to use their licensing scheme. And, there may even be some sort of "open source" implementation of the FlexNet licensing scheme, although I don't know of one. I've never used it because I believe fully in the open source software philosophy. That, and the fact that no one would pay a cent for anything I wrote.
Possibly what you want to do is use XOR encryption (XOR each n-byte chunk of the file with the key), and since, as @AndreKR said, what you actually want to do is impossible, you might want to sign the encrypted file with your private key; then you can verify that the encryption was done by you. Of course, if you don't check this every time, and you don't use an opaque file format and compiled/obfuscated code, then it won't really make much difference. It is impossible in the general case to stop digital duplication of data if you are going to display that data to the user - in the worst case they can just take screen shots (or even capture signals sent to the monitor).
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
It sounds like what you are looking for is a digital signature. When you create the license file, you sign it using your *private* key. When the application loads the license file, it verifies the signature using your *public* key, which is hardcoded into your obfuscated license check. Obviously, the user can just patch the license check code itself - either to replace your public key with their own, or just to avoid the license check altogether - but there's really nothing you can do about that.
Possibly what you want to do is use XOR encryption (XOR each n-byte chunk of the file with the key) and since as @AndreKR said what you actually want to do is impossible, you might want to sign the encrypted file with your private key, then you can verify that the encryption was done by you. Of course if you don't check this every time, and you don't use an opaque file-format and compiled/obfsucated code then it won't really make much difference It is impossible in the general case to stop digital duplication of data if you are going to display that data to the user - in the worst case they can just take screen shots (or even capture signals sent to the monitor)
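A minimal sketch of the sign-then-verify flow from the answers above. Python's standard library has no RSA, so this uses an HMAC as a stand-in; note that unlike a real digital signature, the verifier here holds the same secret and could also forge files - the read-but-not-write asymmetry the answer describes requires an actual public/private key pair (e.g. via OpenSSL or a crypto library). All names are made up:

```python
import hashlib
import hmac

SECRET = b"vendor-private-secret"   # hypothetical; stays on the vendor's side

def issue_license(payload: bytes) -> bytes:
    """Vendor side: append an authentication tag to the license payload."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return payload + b"." + tag.hex().encode()

def check_license(blob: bytes) -> bool:
    """Application side: recompute the tag and compare in constant time."""
    payload, _, tag_hex = blob.rpartition(b".")
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(expected, tag_hex)

lic = issue_license(b"licensed-to: ACME Corp")
assert check_license(lic)                                 # genuine file verifies
assert not check_license(lic.replace(b"ACME", b"EVIL"))   # tampering is caught
```

As the answer notes, none of this survives a user patching the check itself out of the obfuscated code.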
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
If you have a file that just says "yes, software may be run", you of course cannot stop him from copying that file. What you *can* do is encrypt a file with something that is specific to the customer's system - the customer's name or an IP address, say. Then you can make your software check this IP address or print the customer's name on all reports. You can do it with simple symmetric encryption or using a signature; neither of them prevents him from tampering with the program to find the key. So tell your boss it's an obstacle, but certainly not unbreakable.
Simple RSA encryption will not solve your woes; once the code is in the clear, anyone can get it. A better question is "How much work am I willing to put into making it difficult for my client to get my code?" No matter the language and method, eventually the code gets run, and when it runs it can be read. The only foolproof way is to host it yourself and not allow your client or his servers any access to your code.
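The "tie the file to something customer-specific" idea from the first answer above might look like this sketch. The customer name, MAC address, and vendor salt are all hypothetical, and - as both answers stress - a determined user can still patch the check out of the code:

```python
import hashlib

def license_token(customer: str, mac: str) -> str:
    """Derive a token from customer-specific data plus a vendor-held secret."""
    vendor_secret = "vendor-only-salt"   # hypothetical, hidden in the obfuscated code
    material = f"{vendor_secret}:{customer}:{mac}".encode()
    return hashlib.sha256(material).hexdigest()

# The vendor issues this value; the software recomputes it on the customer's box
# and refuses to run (or prints the customer's name on reports) if it differs.
issued = license_token("ACME Corp", "00:1a:2b:3c:4d:5e")
assert license_token("ACME Corp", "00:1a:2b:3c:4d:5e") == issued   # same machine: OK
assert license_token("ACME Corp", "ff:ff:ff:ff:ff:ff") != issued   # copied elsewhere: fails
```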
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
If you have a file that just says "yes, software may be run", you of course cannot stop him from copying that file. What you *can* do is encrypt a file with something that is specific to the customer's system - the customer's name or an IP address, say. Then you can make your software check this IP address or print the customer's name on all reports. You can do it with simple symmetric encryption or using a signature; neither of them prevents him from tampering with the program to find the key. So tell your boss it's an obstacle, but certainly not unbreakable.
You can use a license manager like the [FlexNet Publisher License System](http://www.flexerasoftware.com/). There are two sides to the FlexNet license. The first is establishing that a site has a license. This can be done based upon IP, MAC address, or an internal ID of the processor. Once you've licensed the site, licenses at that site can be done on an active-user basis (you can have thousands of users, but only ten users at a time can use the software) or a seat license (you have ten users at the site who can use it, and only those people can use it. If an eleventh person wants it, the site must move the license from one person who is licensed to that new user. Or, buy more licenses). And, you can have a site license with unlimited users. FlexNet licenses can be broken, but they are generally strong and can report violations of the license policy back to you. Of course, you'll have to pay a licensing fee to Flexera Software to use their licensing scheme. And, there may even be some sort of "open source" implementation of the FlexNet licensing scheme, although I don't know of one. I've never used it because I believe fully in the open source software philosophy. That, and the fact that no one would pay a cent for anything I wrote.
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
It sounds like what you are looking for is a digital signature. When you create the license file, you sign it using your *private* key. When the application loads the license file, it verifies the signature using your *public* key, which is hardcoded into your obfuscated license check. Obviously, the user can just patch the license check code itself - either to replace your public key with their own, or just to avoid the license check altogether - but there's really nothing you can do about that.
Simple RSA encryption will not solve your woes; once the code is in the clear, anyone can get it. A better question is "How much work am I willing to put into making it difficult for my client to get my code?" No matter the language and method, eventually the code gets run, and when it runs it can be read. The only foolproof way is to host it yourself and not allow your client or his servers any access to your code.
4,241,831
I want to distribute s/w licenses as encrypted files. I create a new file every time someone buys a licence & email it out, with instructions to put it in a certain directory. The PHP code which the user runs should be able to decrypt the file (and the code is obfuscated to stop him hacking that). Obviously the user should not be able to write a similar file. Let's not discuss whether this is worth it. I have been ordered to implement it, so ... how do I go about it? Can I use public key encryption and give him one key? --- Can't I just give the user one key & keep the other? HE can read & I can write
2010/11/22
[ "https://Stackoverflow.com/questions/4241831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192910/" ]
It sounds like what you are looking for is a digital signature. When you create the license file, you sign it using your *private* key. When the application loads the license file, it verifies the signature using your *public* key, which is hardcoded into your obfuscated license check. Obviously, the user can just patch the license check code itself - either to replace your public key with their own, or just to avoid the license check altogether - but there's really nothing you can do about that.
You can use a license manager like the [FlexNet Publisher License System](http://www.flexerasoftware.com/). There are two sides to the FlexNet license. The first is establishing that a site has a license. This can be done based upon IP, MAC address, or an internal ID of the processor. Once you've licensed the site, licenses at that site can be done on an active-user basis (you can have thousands of users, but only ten users at a time can use the software) or a seat license (you have ten users at the site who can use it, and only those people can use it. If an eleventh person wants it, the site must move the license from one person who is licensed to that new user. Or, buy more licenses). And, you can have a site license with unlimited users. FlexNet licenses can be broken, but they are generally strong and can report violations of the license policy back to you. Of course, you'll have to pay a licensing fee to Flexera Software to use their licensing scheme. And, there may even be some sort of "open source" implementation of the FlexNet licensing scheme, although I don't know of one. I've never used it because I believe fully in the open source software philosophy. That, and the fact that no one would pay a cent for anything I wrote.
530,060
I saw that plasma can be contained in a tunnel by using magnets (like in this [picture](https://www.iter.org/img/resize-900-90/www/content/com/Lists/WebText_2014/Attachments/1/plasma_in_mast.jpg)). Can I use the same concept to keep an object of, let's say, 600 kg in the center of a chamber (it doesn't have to stay still)? How can I calculate the field strength needed for it? And is it safe for humans (because I intend to keep people inside that object)?
2020/02/09
[ "https://physics.stackexchange.com/questions/530060", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/253637/" ]
Note: I'm assuming that you want to levitate the 600 kg object to the center of its container, leading to the following comments. Force from a magnet follows the inverse square law, meaning that if you try to use a constant magnetic field from an adjustable electromagnet, the position of your 600 kg object will be unstable. As you slowly increase the magnetic force, you will see your object continue to sit on the floor of its container until the magnetic force just exceeds the weight of the object. At that point, the object will rise slightly, the magnetic force will increase in a nonlinear fashion, and the object will quickly accelerate in a nonlinear fashion to the electromagnet. When that 600 kg object hits the electromagnet, the momentum and kinetic energy of that object is very likely to break something. Fortunately, this problem has been solved. To levitate the object, you need to include a sensor that can detect the location of the object, and have that sensor send feedback to a controller that adjusts the current to the electromagnet very quickly in order to hold the object at a given position. In its simplest form, the current to the electromagnet would pulse very rapidly, causing the object to alternately rise a very slight amount (e.g., 1 mm) and fall by the same amount. Thus, with a high enough current pulse rate, the object would appear motionless.
A body needs to be charged or magnetic to be affected by a magnetic field. A charged body can be contained in something like a cyclotron, where it can be forced to follow a circular path. If the body is magnetic, it can easily be levitated by surrounding it with other magnets. Humans are not affected by large magnetic fields up to around $20$ T, as evidenced by MRI scanners. Beyond that, the field might interfere with the electrical signals flowing throughout our body and, more importantly, our heart. However, magnetic fields of such strength are extremely difficult to produce.
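The pulsed feedback loop sketched in the first answer above can be illustrated with a toy one-dimensional simulation. The 600 kg mass comes from the question; the 20 % force margin, loop period, and simple bang-bang control law are made-up assumptions, and a real levitator would have to model the coil's nonlinear inverse-square force:

```python
M = 600.0            # mass of the levitated object, kg (from the question)
G = 9.81             # gravitational acceleration, m/s^2
F_ON = 1.2 * M * G   # magnet force when energised: 20 % above the weight (assumed)
DT = 1e-3            # control-loop period, s (assumed)

def simulate(steps=10_000, setpoint=0.0):
    """Bang-bang control: energise the magnet whenever the object sits below target."""
    x, v = setpoint, 0.0          # start at the setpoint, at rest
    max_err = 0.0
    for _ in range(steps):
        force = F_ON if x < setpoint else 0.0   # position sensor + on/off controller
        v += (force / M - G) * DT               # semi-implicit Euler step
        x += v * DT
        max_err = max(max_err, abs(x - setpoint))
    return max_err

print(simulate())   # the object chatters within a fraction of a millimetre of target
```

This is the "high enough pulse rate makes it appear motionless" effect: shrinking `DT` shrinks the chatter amplitude.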
393,960
If you have two positive numbers that can be represented exactly by the finite arithmetic of your computer, that is, two machine numbers, and you perform their sum or subtraction with perfect arithmetic, is the result a machine number as well? I suppose that in the case of multiplication the answer is "not necessarily", but I'm not sure about sums or subtractions.
2019/06/28
[ "https://softwareengineering.stackexchange.com/questions/393960", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/339756/" ]
No, every data type has a maximal value, and adding two arbitrary values can always overflow. No concrete data type can be closed under an operation that could increase the value unless it uses an arbitrary amount of space per variable.
It can never be exact. For every possible number representation, there is a largest number it can store. Take two of those “largest numbers” and add them together. The answer is larger. Can it be represented exactly? **YES** - then the “largest number” wasn’t the largest. Contradiction! **NO** - then you have a result of an addition which cannot be represented exactly - just as you asked.
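The overflow argument above can be checked directly with IEEE-754 doubles, and even short of overflow, the exact sum of two machine numbers need not itself be a machine number:

```python
import math
import sys

big = sys.float_info.max        # the largest finite double - a machine number
assert math.isinf(big + big)    # its exact double is not representable: overflow to inf

# No overflow needed: 2**53 and 1.0 are both exact doubles, but at that magnitude
# the spacing between doubles is 2, so their exact sum 2**53 + 1 falls between
# representable values and rounds back down.
a, b = 2.0 ** 53, 1.0
assert a + b == a               # the true sum is not a machine number
```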
1,029,476
I am aware of the consequences and issues with running a single-node cluster. However, I'm still curious if it's possible. I plan on setting everything up myself. In other words, can I run the control plane and a worker node on the same physical machine?
2020/08/10
[ "https://serverfault.com/questions/1029476", "https://serverfault.com", "https://serverfault.com/users/85619/" ]
Let me elaborate on this topic: > > "In other words, can I run the control plane and a worker node on the same physical machine?" > > > From the k3s docs: > > A server node is defined as a machine (bare-metal or virtual) running the k3s server command. > > > A worker node is defined as a machine running the k3s agent command. > Adding more agents will create more worker nodes to run your application. > > > In this concept, one master node (***running the k3s server command***) and additional agent nodes (***running the k3s agent command***) still create one cluster with a single control plane. However, you can extend this approach by creating a High-Availability K3s setup with multiple server (control plane) and agent nodes. As per the k8s docs: > > [**Control Plane Components**](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components): > > > * kube-apiserver, > * etcd, > * kube-scheduler, > * kube-controller-manager, > * cloud-controller-manager > > > There is also an option to run k3s with multiple agents on a single machine using Docker as the container runtime - [K3d](https://rancher.com/docs/k3s/latest/en/advanced/) (K3s in Docker) and docker-compose. As an alternative, please see: * [Learning environment](https://kubernetes.io/docs/setup/learning-environment/) using Kind and minikube * [Production environment](https://kubernetes.io/docs/setup/production-environment/tools/) using kubeadm, kops, kubespray
Very much so, of course. The consequences are simply that you have no HA, as the respective redundancy is missing. The official documentation even has a section on setting up a [single control-plane cluster with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). (When talking about "the same physical machine" I would *highly* encourage you to set up two VMs on there, one for the control plane node, one for the worker.)
124,936
[![enter image description here](https://i.stack.imgur.com/o4kpE.jpg)](https://i.stack.imgur.com/o4kpE.jpg) I’m fairly new to grasping this concept, so take it easy on me. In the following piece, which I assume is in 4/4, the sum of the notes simply does not add up. Lots of 32nd notes and some quarter notes equalling far more per bar than there should be. What am I missing in this time signature?
2022/09/12
[ "https://music.stackexchange.com/questions/124936", "https://music.stackexchange.com", "https://music.stackexchange.com/users/88519/" ]
The smaller notes are grace notes. They don't count for the time signature. Traditionally there would be only one or a few per "main" note. Here, Paganini is basically saying to play these figures ornamentally and out of time. This is especially underlined by the fact that *there is no time signature* and indeed *there are no bar lines.* There are no measures; there is no meter. Play the notes freely.
Since there is no time signature, it's no good trying to guess one! The one-beat notes (crotchets, 1/4 notes) are the target notes the violinist will aim for, and play for a little longer than the flurry of other notes, which show how much of a virtuoso s/he is. Those notes are called grace notes, or ornaments, which a player like Paganini would play often, showing his prowess as a violinist. The notes themselves have little or no timing value on their own, and are always written with smaller heads and stems to show this. Because there's no time signature, or even bar lines, there's going to be no 'pulse' running through that part of the Caprice.
2,349
Reading [this question](https://opensource.stackexchange.com/questions/2338/can-i-use-gpl-libraries-in-a-closed-source-project-if-only-the-output-is-distrib) caused me to wonder: Is it even possible to keep GPL-licensed software internal to one company? Am I correct in thinking that any employee who has access to it would be free to distribute it, either during his employment or after leaving? And also, that the GPL specifically forbids stopping such distribution by NDAs and similar means? So in the real world, if one wanted to keep such GPL software internal, would access to it have to be restricted to just a few individuals who have high motivation to stay loyal to the company?
2016/01/15
[ "https://opensource.stackexchange.com/questions/2349", "https://opensource.stackexchange.com", "https://opensource.stackexchange.com/users/3919/" ]
The employees are the company, and as such, having access to the GPL code does not constitute distribution. If the software is distributed externally, even to another company of the same group (a different entity even if they share the same parent), the GPL applies.
The GPL parts stay GPL; your modifications stay yours. You certainly can restrict access to said modifications at will, as copyright (used the "standard" way) allows - i.e., in the case mentioned, access to the modifications under NDA, extra restrictions, whatever lengths you'd go to for company strategic secrets if warranted. The GPL kicks in *if/when* the result is distributed; *then* full source has to be provided. As long as it stays in-house, GPL code is yours to do with as you like.
10,265
I recently acquired an HTC Desire Z / G2, and installed Cyanogenmod 7 onto it. Something I've noticed though is that sometimes the battery drains very quickly, and sometimes it does not. Since I have a tendency to charge the battery very often, even when it has not discharged much, I get the feeling that the battery stats are not correct. As I recall, before resetting battery stats, I should make sure the battery is at 100%. My question to that is, **how can I tell if my battery is at 100%?** Is it when the LED power indicator is constantly green? When Android reports as 100%?
2011/06/10
[ "https://android.stackexchange.com/questions/10265", "https://android.stackexchange.com", "https://android.stackexchange.com/users/3410/" ]
"100%" isn't as straightforward a concept as you might think on Android. Have a read of this: <http://phandroid.com/2010/12/25/your-smartphones-battery-gauge-is-lying-to-you-and-its-not-such-a-bad-thing/>
Try this app [Battery Minder](https://market.android.com/details?id=com.rollerbush.batteryminder). It alerts the user if the battery is low or fully charged.
10,265
I recently acquired an HTC Desire Z / G2, and installed Cyanogenmod 7 onto it. Something I've noticed though is that sometimes the battery drains very quickly, and sometimes it does not. Since I have a tendency to charge the battery very often, even when it has not discharged much, I get the feeling that the battery stats are not correct. As I recall, before resetting battery stats, I should make sure the battery is at 100%. My question to that is, **how can I tell if my battery is at 100%?** Is it when the LED power indicator is constantly green? When Android reports as 100%?
2011/06/10
[ "https://android.stackexchange.com/questions/10265", "https://android.stackexchange.com", "https://android.stackexchange.com/users/3410/" ]
"100%" isn't as straightforward a concept as you might think on Android. Have a read of this: <http://phandroid.com/2010/12/25/your-smartphones-battery-gauge-is-lying-to-you-and-its-not-such-a-bad-thing/>
If the phone is on the charger, a lot of the lock screens will show the battery percentage or "Charged" if at 100%.
96,348
I understand that the MTM of a bond refers to its Mark to Market value. I am trying to understand whether such a price also includes accrued interest, if any. I tried searching Google a lot, but couldn't find anything.
2018/06/12
[ "https://money.stackexchange.com/questions/96348", "https://money.stackexchange.com", "https://money.stackexchange.com/users/73179/" ]
Many companies never intend to reduce their debt to zero. Instead, they [roll over their debt](https://www.quora.com/What-is-rolling-over-debt): > > Rather than paying off the principal of a debt when it comes due, you take out > another loan for that amount to pay off the first debt. This will often be the > same lender you owe the money to. The terms may change (due date, interest > rate) and could be better or worse for the borrower depending on the interest > rate environment (and of course his credit rating). > > > So to answer your question directly, these companies do not pay both principal and interest periodically because they aren't trying to reduce their debt. Instead they expect to roll it over. Moreover, recall that the enterprise value of a business is the sum of its market capitalization and its debt. From this perspective, a business is thought of as being owned by both equity-holders and debt-holders. Assuming the business is successful, the debt holders expect periodic payments of interest and get paid first out of profits. The remaining profit is owned by the equity holders. Thus, these two classes -- the equity holders and the debt holders -- together "own" the enterprise. These kinds of businesses generally do not seek to remove the debt holders from the business. The debt-to-equity ratio may bounce around, but generally never falls to zero because the debt is rolled over. As long as the business remains successful, there is generally never a lack of investors willing to buy debt. The main issue is the interest rate, not the availability of credit. Businesses deploy the capital raised by debt issuances to fund projects and build the business. They do so with the belief that for every dollar they deploy they can generate cash flow of more than a dollar. If they were to use some of that cash flow to pay off principal, then they would have fewer dollars to deploy funding projects and building the business.
If they believe their rate of return is greater than their cost of capital, they should prefer to spend their dollars investing in projects rather than paying down debt. --- PS. There are such things as [sinkable bonds](https://www.investopedia.com/terms/s/sinkablebond.asp) which are > > backed by a fund that sets aside money to ensure principal and interest > payments are made by the issuer as promised. > > > Sinkable bonds are used to attract investors who may otherwise find the business's [creditworthiness too risky](http://smallbusiness.chron.com/advantages-corporate-sinking-funds-34244.html).
When I buy a bond I don't want the company to have the option of repaying the principal early. That's because they would do it at a time when it's to their benefit, typically because interest rates have gone down and they can borrow money more cheaply. But that means I lose the benefit of the higher interest rate that my bond carries, and I would have to reinvest the bond principal in a lower-yielding security. That's why I don't buy **callable** bonds, which are bonds that the issuer can pay off early under certain specific conditions.
96,348
I understand that the MTM of a bond refers to its Mark to Market value. I am trying to understand whether such a price also includes accrued interest, if any. I tried searching a lot on Google, but couldn't find anything.
2018/06/12
[ "https://money.stackexchange.com/questions/96348", "https://money.stackexchange.com", "https://money.stackexchange.com/users/73179/" ]
Many companies never intend to reduce their debt to zero. Instead, they [roll over their debt](https://www.quora.com/What-is-rolling-over-debt): > > Rather than paying off the principal of a debt when it comes due, you take out > another loan for that amount to pay off the first debt. This will often be the > same lender you owe the money to. The terms may change (due date, interest > rate) and could be better or worse for the borrower depending on the interest > rate environment (and of course his credit rating). > > > So to answer your question directly, these companies do not pay both principal and interest periodically because they aren't trying to reduce their debt. Instead they expect to roll it over. Moreover, recall that the enterprise value of a business is the sum of its market capitalization and its debt. From this perspective, a business is thought of as being owned by both equity-holders and debt-holders. Assuming the business is successful, the debt holders expect periodic payments of interest and get paid first out of profits. The remaining profit is owned by the equity holders. Thus, these two classes -- the equity holders and the debt holders -- together "own" the enterprise. These kinds of businesses generally do not seek to remove the debt holders from the business. The debt-to-equity ratio may bounce around, but generally never falls to zero because the debt is rolled over. As long as the business remains successful, there is generally never a lack of investors willing to buy debt. The main issue is the interest rate, not the availability of credit. Businesses deploy the capital raised by debt issuances to fund projects and build the business. They do so with the belief that for every dollar they deploy they can generate cash flow of more than a dollar. If they were to use some of that cash flow to pay off principal, then they would have fewer dollars to deploy funding projects and building the business.
If they believe their rate of return is greater than their cost of capital, they should prefer to spend their dollars investing in projects rather than paying down debt. --- PS. There are such things as [sinkable bonds](https://www.investopedia.com/terms/s/sinkablebond.asp) which are > > backed by a fund that sets aside money to ensure principal and interest > payments are made by the issuer as promised. > > > Sinkable bonds are used to attract investors who may otherwise find the business's [creditworthiness too risky](http://smallbusiness.chron.com/advantages-corporate-sinking-funds-34244.html).
> > bonds don't have these options but to repay the principal all at once. > > > Corporate Bonds are products meant to tap individual investors / financial institutions having excess money that they would otherwise park in Bank Deposits [depending on geography, terms used include Fixed Deposits, Certificates of Deposit or Time Deposits]. As these bank products offer low rates, it becomes attractive to invest in Bonds that give a better rate. The investors expect a steady income of interest for their needs. Unlike Bank Deposits, which can be encashed prematurely, Corporate Bonds have a limitation here. For this [and other] reasons there are now mutual funds that invest in Bonds, and individual investors can purchase the Mutual Fund. Thus there may be very little market for Corporate Bonds that pay back the principal periodically. From a Corporate point of view, repaying everything at once adds to cash-flow issues at the time of Bond maturity; however, they get access to cheap funds [compared to loans from Banks] and have time to plan for the redemption. They either have sufficient cash to pay it off, or issue new Bonds, or take a loan from a Bank.
96,348
I understand that the MTM of a bond refers to its Mark to Market value. I am trying to understand whether such a price also includes accrued interest, if any. I tried searching a lot on Google, but couldn't find anything.
2018/06/12
[ "https://money.stackexchange.com/questions/96348", "https://money.stackexchange.com", "https://money.stackexchange.com/users/73179/" ]
Many companies never intend to reduce their debt to zero. Instead, they [roll over their debt](https://www.quora.com/What-is-rolling-over-debt): > > Rather than paying off the principal of a debt when it comes due, you take out > another loan for that amount to pay off the first debt. This will often be the > same lender you owe the money to. The terms may change (due date, interest > rate) and could be better or worse for the borrower depending on the interest > rate environment (and of course his credit rating). > > > So to answer your question directly, these companies do not pay both principal and interest periodically because they aren't trying to reduce their debt. Instead they expect to roll it over. Moreover, recall that the enterprise value of a business is the sum of its market capitalization and its debt. From this perspective, a business is thought of as being owned by both equity-holders and debt-holders. Assuming the business is successful, the debt holders expect periodic payments of interest and get paid first out of profits. The remaining profit is owned by the equity holders. Thus, these two classes -- the equity holders and the debt holders -- together "own" the enterprise. These kinds of businesses generally do not seek to remove the debt holders from the business. The debt-to-equity ratio may bounce around, but generally never falls to zero because the debt is rolled over. As long as the business remains successful, there is generally never a lack of investors willing to buy debt. The main issue is the interest rate, not the availability of credit. Businesses deploy the capital raised by debt issuances to fund projects and build the business. They do so with the belief that for every dollar they deploy they can generate cash flow of more than a dollar. If they were to use some of that cash flow to pay off principal, then they would have fewer dollars to deploy funding projects and building the business.
If they believe their rate of return is greater than their cost of capital, they should prefer to spend their dollars investing in projects rather than paying down debt. --- PS. There are such things as [sinkable bonds](https://www.investopedia.com/terms/s/sinkablebond.asp) which are > > backed by a fund that sets aside money to ensure principal and interest > payments are made by the issuer as promised. > > > Sinkable bonds are used to attract investors who may otherwise find the business's [creditworthiness too risky](http://smallbusiness.chron.com/advantages-corporate-sinking-funds-34244.html).
1) Corporates want to sell bonds. Using non-standard practices such as an amortising repayment schedule is not generally attractive to investors. 2) Corporates have funding profiles: averaged over all of their bonds they have a series of principal outflows which are well dispersed. Consider 1Y, 2Y, 3Y, 4Y, 5Y bonds in 100m each, as opposed to a single 5Y bond in 500m. This effectively synthesises the amortising structure your question describes. 3) Options to repay early create a callable bond. The embedded option is a cost to the corporate and a nuisance to risk-manage, so why bother?
96,348
I understand that the MTM of a bond refers to its Mark to Market value. I am trying to understand whether such a price also includes accrued interest, if any. I tried searching a lot on Google, but couldn't find anything.
2018/06/12
[ "https://money.stackexchange.com/questions/96348", "https://money.stackexchange.com", "https://money.stackexchange.com/users/73179/" ]
When I buy a bond I don't want the company to have the option of repaying the principal early. That's because they would do it at a time when it's to their benefit, typically because interest rates have gone down and they can borrow money more cheaply. But that means I lose the benefit of the higher interest rate that my bond carries, and I would have to reinvest the bond principal in a lower-yielding security. That's why I don't buy **callable** bonds, which are bonds that the issuer can pay off early under certain specific conditions.
> > bonds don't have these options but to repay the principal all at once. > > > Corporate Bonds are products meant to tap individual investors / financial institutions having excess money that they would otherwise park in Bank Deposits [depending on geography, terms used include Fixed Deposits, Certificates of Deposit or Time Deposits]. As these bank products offer low rates, it becomes attractive to invest in Bonds that give a better rate. The investors expect a steady income of interest for their needs. Unlike Bank Deposits, which can be encashed prematurely, Corporate Bonds have a limitation here. For this [and other] reasons there are now mutual funds that invest in Bonds, and individual investors can purchase the Mutual Fund. Thus there may be very little market for Corporate Bonds that pay back the principal periodically. From a Corporate point of view, repaying everything at once adds to cash-flow issues at the time of Bond maturity; however, they get access to cheap funds [compared to loans from Banks] and have time to plan for the redemption. They either have sufficient cash to pay it off, or issue new Bonds, or take a loan from a Bank.
96,348
I understand that the MTM of a bond refers to its Mark to Market value. I am trying to understand whether such a price also includes accrued interest, if any. I tried searching a lot on Google, but couldn't find anything.
2018/06/12
[ "https://money.stackexchange.com/questions/96348", "https://money.stackexchange.com", "https://money.stackexchange.com/users/73179/" ]
1) Corporates want to sell bonds. Using non-standard practices such as an amortising repayment schedule is not generally attractive to investors. 2) Corporates have funding profiles: averaged over all of their bonds they have a series of principal outflows which are well dispersed. Consider 1Y, 2Y, 3Y, 4Y, 5Y bonds in 100m each, as opposed to a single 5Y bond in 500m. This effectively synthesises the amortising structure your question describes. 3) Options to repay early create a callable bond. The embedded option is a cost to the corporate and a nuisance to risk-manage, so why bother?
> > bonds don't have these options but to repay the principal all at once. > > > Corporate Bonds are products meant to tap individual investors / financial institutions having excess money that they would otherwise park in Bank Deposits [depending on geography, terms used include Fixed Deposits, Certificates of Deposit or Time Deposits]. As these bank products offer low rates, it becomes attractive to invest in Bonds that give a better rate. The investors expect a steady income of interest for their needs. Unlike Bank Deposits, which can be encashed prematurely, Corporate Bonds have a limitation here. For this [and other] reasons there are now mutual funds that invest in Bonds, and individual investors can purchase the Mutual Fund. Thus there may be very little market for Corporate Bonds that pay back the principal periodically. From a Corporate point of view, repaying everything at once adds to cash-flow issues at the time of Bond maturity; however, they get access to cheap funds [compared to loans from Banks] and have time to plan for the redemption. They either have sufficient cash to pay it off, or issue new Bonds, or take a loan from a Bank.
349,265
**I'm looking to upgrade to a gaming mouse.** Searching Amazon for a Linux gaming mouse isn't getting very many results. So I'm asking here for recommendations. I'm using Ubuntu 13.04 64bit Gnome Fallback Session for now. My preferences are for: \* Wireless. \* PS2 or USB receiver. \* At least 6 assignable buttons. \* Assignable macros would be a bonus. \* Works out of the box with Linux, or at least with a simple setup. \* Accurate and dependable. \* Comfortable, as in a low profile design. Feel free to share your own experiences or recommendations for gaming mice, joypads, or gamepads that work with Linux.
2013/09/23
[ "https://askubuntu.com/questions/349265", "https://askubuntu.com", "https://askubuntu.com/users/7463/" ]
Edit: I just read about [piper](https://github.com/libratbag/piper) on [Phoronix](http://www.phoronix.com/scan.php?page=news_item&px=GSoC-2017-Projects) and noticed that I missed this [answer](https://askubuntu.com/a/793479/40581). **Almost all mice work with Linux, but not the software they come with.** You can configure all buttons in some configuration files, but for advanced features there probably needs to be written a Linux equivalent of the Windows software. This is where manufacturers fail. I don't know of a manufacturer developing free and open Linux drivers and software for their mice. Of course from their point of view it's so unbelievable that all of them should combine their efforts into writing one good solution that works for Linux with every advanced gaming mouse they produce. Stone-age thinking. We've been there with WiFi, too. **Some articles I found that may help you:** * <https://wiki.archlinux.org/index.php/All_Mouse_Buttons_Working> * <https://wiki.archlinux.org/index.php/Razer> **You should probably just choose the mouse that matches your criteria and look for solutions on how to make special features work with Linux.** Hint: A scroll wheel is no special feature, it's just 2 buttons (or 4 for a tilting scroll wheel). So it doesn't matter how many scroll wheels and buttons the mouse has, but how you assign the buttons.
**Logitech G700s Rechargeable Gaming Mouse** I'm adding my own answer for the gaming mouse I chose. The G700s works out of the box using Easystroke to assign button events, **but**... Easystroke won't assign the button mapping to the mouse's on-board memory, so you're pretty much stuck with one profile even though the mouse has 5 on-board profiles built in. Easystroke also wouldn't recognize one of the games' executables, so it was pretty much a global setting or nothing. I installed [Virtualbox and the VirtualBox Extension Pack](https://www.virtualbox.org/wiki/Downloads), put WinXP on it, downloaded and installed [Logitech's software](http://www.logitech.com/en-us/support/wireless-gaming-mouse-g700?crid=411) for WinXP on it, and then I could assign the different profiles to the mouse's on-board memory, which does indeed stay with the mouse in Linux. You could also just use a Windows PC temporarily to set up the mouse, then put it back on a Linux PC; it will save the profiles in the mouse, not the operating system.
349,265
**I'm looking to upgrade to a gaming mouse.** Searching Amazon for a Linux gaming mouse isn't getting very many results. So I'm asking here for recommendations. I'm using Ubuntu 13.04 64bit Gnome Fallback Session for now. My preferences are for: \* Wireless. \* PS2 or USB receiver. \* At least 6 assignable buttons. \* Assignable macros would be a bonus. \* Works out of the box with Linux, or at least with a simple setup. \* Accurate and dependable. \* Comfortable, as in a low profile design. Feel free to share your own experiences or recommendations for gaming mice, joypads, or gamepads that work with Linux.
2013/09/23
[ "https://askubuntu.com/questions/349265", "https://askubuntu.com", "https://askubuntu.com/users/7463/" ]
Edit: I just read about [piper](https://github.com/libratbag/piper) on [Phoronix](http://www.phoronix.com/scan.php?page=news_item&px=GSoC-2017-Projects) and noticed that I missed this [answer](https://askubuntu.com/a/793479/40581). **Almost all mice work with Linux, but not the software they come with.** You can configure all buttons in some configuration files, but for advanced features there probably needs to be written a Linux equivalent of the Windows software. This is where manufacturers fail. I don't know of a manufacturer developing free and open Linux drivers and software for their mice. Of course from their point of view it's so unbelievable that all of them should combine their efforts into writing one good solution that works for Linux with every advanced gaming mouse they produce. Stone-age thinking. We've been there with WiFi, too. **Some articles I found that may help you:** * <https://wiki.archlinux.org/index.php/All_Mouse_Buttons_Working> * <https://wiki.archlinux.org/index.php/Razer> **You should probably just choose the mouse that matches your criteria and look for solutions on how to make special features work with Linux.** Hint: A scroll wheel is no special feature, it's just 2 buttons (or 4 for a tilting scroll wheel). So it doesn't matter how many scroll wheels and buttons the mouse has, but how you assign the buttons.
**For other Logitech Gaming Peripherals** There's a project called [Gnome15](http://www.russo79.com/gnome15) that supports Logitech Keyboards and Headsets... "Gnome15 is a suite of tools for the Logitech G series keyboards and headsets, including the G15, G19, G13, G930, G35, G510, G11, G110 and the Z-10 speakers aiming to provide the best integration possible with the Linux Desktop."
349,265
**I'm looking to upgrade to a gaming mouse.** Searching Amazon for a Linux gaming mouse isn't getting very many results. So I'm asking here for recommendations. I'm using Ubuntu 13.04 64bit Gnome Fallback Session for now. My preferences are for: \* Wireless. \* PS2 or USB receiver. \* At least 6 assignable buttons. \* Assignable macros would be a bonus. \* Works out of the box with Linux, or at least with a simple setup. \* Accurate and dependable. \* Comfortable, as in a low profile design. Feel free to share your own experiences or recommendations for gaming mice, joypads, or gamepads that work with Linux.
2013/09/23
[ "https://askubuntu.com/questions/349265", "https://askubuntu.com", "https://askubuntu.com/users/7463/" ]
**Logitech G700s Rechargeable Gaming Mouse** I'm adding my own answer for the gaming mouse I chose. The G700s works out of the box using Easystroke to assign button events, **but**... Easystroke won't assign the button mapping to the mouse's on-board memory, so you're pretty much stuck with one profile even though the mouse has 5 on-board profiles built in. Easystroke also wouldn't recognize one of the games' executables, so it was pretty much a global setting or nothing. I installed [Virtualbox and the VirtualBox Extension Pack](https://www.virtualbox.org/wiki/Downloads), put WinXP on it, downloaded and installed [Logitech's software](http://www.logitech.com/en-us/support/wireless-gaming-mouse-g700?crid=411) for WinXP on it, and then I could assign the different profiles to the mouse's on-board memory, which does indeed stay with the mouse in Linux. You could also just use a Windows PC temporarily to set up the mouse, then put it back on a Linux PC; it will save the profiles in the mouse, not the operating system.
**For other Logitech Gaming Peripherals** There's a project called [Gnome15](http://www.russo79.com/gnome15) that supports Logitech Keyboards and Headsets... "Gnome15 is a suite of tools for the Logitech G series keyboards and headsets, including the G15, G19, G13, G930, G35, G510, G11, G110 and the Z-10 speakers aiming to provide the best integration possible with the Linux Desktop."
90,696
My wife and I are traveling to London this year and will be there for the start of Wimbledon 2017. While we'll be in London for 10 days, we'll only be there for the first two days of the tournament. We're staying in London near Trafalgar Square. I'm trying to figure out if going to Wimbledon for a day is worthwhile. It looks easy enough to get to but I'm not sure about getting tickets. We're not huge tennis fans (but we're sports fans in general) so I'm thinking that Grounds Admission should be good for us - seems that will give us a chance to walk around and see things, etc... I realize we may not get to see actual play but that's OK with me. So, my question is... are Grounds Admission tickets to Wimbledon for day one or two of the tournament easier to get, OR can I get them in advance without really overpaying? I'm not sure I want to take the trip there to stand in the queue only to not get tickets due to long lines.
2017/03/28
[ "https://travel.stackexchange.com/questions/90696", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/6860/" ]
The short answer is - you aren't going to easily be able to get tickets. The All England Lawn Tennis Club Championships are a major national (and international) sporting and cultural event and as such demand is very high. There are two main ways to get tickets, and three less frequently used methods: * **The Ballot** - In advance, you enter into a draw to determine who gets tickets. There is both a [UK ballot](http://www.wimbledon.com/en_GB/tickets/ballot_uk.html) (where most tickets are allocated) and [an overseas ballot](http://www.wimbledon.com/en_GB/tickets/ballot_overseas.html). An entrant does not get to decide what type or date of tickets they get (if any), and if successful, the entrant then pays the face value of the ticket. If they can't make it, or don't want the ticket, then they decline and it goes back in the draw. However for your purposes, both ballots for this year's tournament have closed. This is however the main way that tickets are sold. * **[Queueing on site](http://www.wimbledon.com/en_GB/atoz/queueing.html)** - Tickets are in fact sold on the day. However, this is very popular, and queueing begins in the afternoon of the day before. Reportedly for some weekends, queueing has actually been beginning two days before. Getting "show court" tickets definitely involves queueing overnight. Bring camping equipment if you intend to do this. The official AELTC website indicates that if you're satisfied with grounds tickets, it may be possible to simply come very early the morning you wish to attend: "If you would like to queue for Ground Passes, it is advisable to join The Queue a few hours before the Grounds open at 9.30am." - recent contact with friends suggests that arriving around 05:00 on a weekday got them in without much concern of missing out. Tickets are also resold to the queue as people leave the tournament, normally towards the end of the afternoon. I believe they are resold to whoever is at the front of the queue at that point.
If you are dedicated enough to this approach, you *will* get in. The question is how much do you want to go. * **[Debentures](http://www.wimbledon.com/en_GB/atoz/debentures.html)** - Every 5 years, these are sold to fund development of the facilities. Their cost at the last offer was £50,000 for centre court, and £13,700 for court one. Holders of a Debenture receive a seat ticket for every day of the tournament on centre, or the first 10 days on court one. These tickets are permitted to be resold. There is no opportunity for you to buy a debenture before the tournament, but you may be well connected enough to know somebody who holds one and would be willing to give or sell you their ticket for your day(s) of interest. You will know if that applies to you. * **Corporate Packages** - These are your best opportunity I feel, although extremely expensive. Corporate hospitality packages, that combine tickets with tours, accommodation and more. The two official providers for overseas sales are [Pure Wimbledon](https://purewimbledon.com/) and [Wimbledon Experience](https://www.wimbledon-experience.com/). Prices start from £355 for your days of interest. * **Ticketmaster, day before only** - I've never heard of anybody getting a ticket this way, and I suspect if you choose to pursue it, your odds of succeeding are very poor, however apparently "several hundred" tickets are sold *the day before only* on Ticketmaster. If you are interested in this, then you [should register for the email newsletter](http://www.wimbledon.com/en_GB/contact/register.html), which will describe the process. Counting on this approach seems bound to lead to disappointment, however one could well attempt it, and then plan to join the queue if they failed to secure tickets. 
Information in this answer comes from knowing people who have used the first two methods, and the official website, especially the page "[Tickets - all you need to know](http://www.wimbledon.com/en_GB/tickets/tickets_what_you_need_to_know.html)" It's worth noting that if you get grounds tickets, you will absolutely be able to see actual play - you will have the opportunity to walk up to all courts except Centre, No1, No2 and No3. You'll also be able to view the big screens of "Henman Hill". It may get very crowded around courts where major stars are playing however. However, in the last few days of the tournament, all matches take place on "Show" courts, and grounds-only tickets are no longer available.
If you want to go, get tickets NOW, whether only ground tickets or court tickets. The thing is, if you want to go and do not have tickets when you are in London, it will be a bummer. Getting to the "All England Lawn Tennis and Croquet Club" from Trafalgar Square takes around 1 hour (says Google), so it is feasible and could be a fun day.
40,984
I am collecting X and Y values from a web service (Twitter) via a python script. In the long run, this will run over a period of months and I intend on stopping at around the 6 million point mark. The original coords I'm getting are geographic WGS84, but I will need to convert these to projected WGS Web Mercator. I'll later be publishing this table to an ArcGIS Server map service and caching it. This is a personal project to learn python with no deadline, and I was wondering if it would be a good idea to solely make use of the [native spatial types](http://resources.arcgis.com/en/help/main/10.1/index.html#//002q0000006p000000) from SQL Server? My current untested plan: * CREATE a table with SSMS, with a GEOMETRY field setup (and some other attributes) * In my python script, make use of arcpy or [pyproj](https://code.google.com/p/pyproj/) to convert the lat/lons in WGS84 to WGS84 Web Mercator (or can I avoid this somehow and it's all achievable with SQL?) * Make use of [pymssql](http://code.google.com/p/pymssql/wiki/PymssqlExamples) to INSERT records, and insert the points into the GEOMETRY field in the table. My question is: what would be a good, simple and efficient approach to take a pair of lat/lons in WGS84, insert them into a SQL Server table making use of SQL Server spatial types, and have a resulting points layer that is in WGS84 Web Mercator, so that I can render/query them in ArcGIS Desktop 10.1? I do have access to arcpy/ArcSDE 10.1 if need be, but was hoping to use this as an example of not requiring ArcSDE.
2012/11/13
[ "https://gis.stackexchange.com/questions/40984", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/325/" ]
I assume that you have one or several big files filled with x, y and some other data. First, to my knowledge there is no projection support in MS SQL (2008 R2 or later). There are third-party solutions and the proj.net library which you can use to build one. Therefore I see two options when storing the data to a database: if using MS SQL, you need to reproject the data into the wanted projection before inserting it into the database, or you just dump the data into a PostGIS db and do the transform there. PostGIS has a much better in-database toolset than MS SQL.
geoAlchemy is supposed to do the job using GeometryColumns. However, I was not able to make it work on Windows/Python 2.7/sqlalchemy 0.9.6 due to AttributeError: type object 'ColumnProperty' has no attribute 'ColumnComparator'
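The reproject-before-insert option described above can be sketched without any GIS library, since the EPSG:3857 "Web Mercator" forward projection has a simple closed form (spherical Mercator on the WGS84 semi-major axis). This is a minimal illustration, not the asker's actual script; the `tweets` table and `shape` column names are hypothetical, and the generated statement would be executed through pymssql (or any DB-API driver) as usual:

```python
import math

R = 6378137.0  # WGS84 semi-major axis; EPSG:3857 treats the earth as a sphere of this radius


def wgs84_to_web_mercator(lon, lat):
    """Project WGS84 lon/lat in degrees to Web Mercator (EPSG:3857) metres."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y


def point_insert_sql(lon, lat, table="tweets", column="shape"):
    """Build an INSERT for a SQL Server GEOMETRY column, tagging the point with SRID 3857.

    Table/column names are placeholders for illustration only.
    """
    x, y = wgs84_to_web_mercator(lon, lat)
    return (f"INSERT INTO {table} ({column}) VALUES "
            f"(geometry::STGeomFromText('POINT({x:.3f} {y:.3f})', 3857))")


# A point near Trafalgar Square, projected and wrapped in an INSERT statement
print(point_insert_sql(-0.1278, 51.5074))
```

Storing the coordinates already projected, with the SRID recorded in the WKT call, is what lets ArcGIS Desktop read the layer as Web Mercator without any ArcSDE involvement.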
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
"Easy to Google" has never been a reason to disqualify a question for Stack\* sites. StackOverflow and ServerFault have actually risen to the top of Google searches. This is a good thing. We typically bring more to the table than a traditional googled answer does. The ability to see multiple answers, know the answerer's reputation, and have good answers voted up are all things lacking in other sites. If we limited ourselves to that which could not be found on Google, we'd not have much. Please read [Joel Spolsky's StackOverflow launch announcement](http://www.joelonsoftware.com/items/2008/09/15.html). He details specifically why they created SO. Also check [this discussion](https://meta.stackexchange.com/questions/8724/how-to-deal-with-google-questions) on the SO meta.
I can't tell you how many times I have searched for something on Google, and found that the first (and best) result was a StackOverflow.com result. And then, invariably, one of the comments on the question says something like "geez, -1, you could find the answer to this sooo easily with a Google search". When this happens I think as loudly as I can "YES, DAMNIT! HOW DO YOU THINK I GOT HERE? THANK GOD THIS QUESTION WASN'T CLOSED AND I GOT THE ANSWER I WAS LOOKING FOR IN A CONVENIENT FORUM I CAN TRUST. I AM SO GRATEFUL TWITS LIKE YOU ARE RIGHTFULLY IGNORED!".
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
"Easy to Google" has never been a reason to disqualify a question for Stack\* sites. StackOverflow and ServerFault have actually risen to the top of Google searches. This is a good thing. We typically bring more to the table than a traditional googled answer does. The ability to see multiple answers, know the answerer's reputation, and have good answers voted up are all things lacking in other sites. If we limited ourselves to that which could not be found on Google, we'd not have much. Please read [Joel Spolsky's StackOverflow launch announcement](http://www.joelonsoftware.com/items/2008/09/15.html). He details specifically why they created SO. Also check [this discussion](https://meta.stackexchange.com/questions/8724/how-to-deal-with-google-questions) on the SO meta.
Simply going off of the first Google search result doesn't guarantee accuracy. The way Google ranks pages is by the number of other sites linking to that page with keywords related to what you typed in. Sometimes it is correct, sometimes it's a partial answer, and sometimes it's just a myth or stereotype that people still believe.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
"Easy to Google" has never been a reason to disqualify a question for Stack\* sites. StackOverflow and ServerFault have actually risen to the top of google searches. This is a good thing. We typically bring more to the table than a traditional googled answer does. The ability to see multiple answers, know the answerers reputation, and have good answers voted up are all things lacking in other sites. If we limited ourselves to that which could not be found on Google, we'd not have much. Please read [Joel Spolsky's StackOverflow launch announcement](http://www.joelonsoftware.com/items/2008/09/15.html). He details specifically why they created SO. Also check [this discussion](https://meta.stackexchange.com/questions/8724/how-to-deal-with-google-questions) on the SO meta.
If I have a cooking question that needs to be googled to find the answer, and that answer isn't on the stackexchange site, I'll ask it. If the site works out, my question will be #1 in Google results in a few months with a perfect, community-approved answer.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
"Easy to Google" has never been a reason to disqualify a question for Stack\* sites. StackOverflow and ServerFault have actually risen to the top of google searches. This is a good thing. We typically bring more to the table than a traditional googled answer does. The ability to see multiple answers, know the answerers reputation, and have good answers voted up are all things lacking in other sites. If we limited ourselves to that which could not be found on Google, we'd not have much. Please read [Joel Spolsky's StackOverflow launch announcement](http://www.joelonsoftware.com/items/2008/09/15.html). He details specifically why they created SO. Also check [this discussion](https://meta.stackexchange.com/questions/8724/how-to-deal-with-google-questions) on the SO meta.
I agree with what others have said, but want to make one big distinction regarding it. We need to add value to the answer, just a link to another site turns this into a directory, not a community. The link to another site is fine, but summarize, speak to how your experience coincides with what the article says, comment on variations that they didn't explore, etc. For @nohat's example about finding the right answer on SO through Google, that happens to me all the time too, but there is content in the answer. Sure they may have linked to the developer's website referencing their documentation or someone's blog, but they also did a quick test themselves and posted sample code or they gave some additional detail beyond the docs. They didn't just link and say RTFM.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
I can't tell you how many times I have searched for something on Google, and found that the first (and best) result was a StackOverflow.com result. And then, invariably, one of the comments on the question says something like "geez, -1, you could find the answer to this sooo easily with a Google search". When this happens I think as loudly as I can "YES, DAMNIT! HOW DO YOU THINK I GOT HERE? THANK GOD THIS QUESTION WASN'T CLOSED AND I GOT THE ANSWER I WAS LOOKING FOR IN A CONVENIENT FORUM I CAN TRUST. I AM SO GRATEFUL TWITS LIKE YOU ARE RIGHTFULLY IGNORED!".
Simply going off of the first Google search result doesn't guarantee accuracy. The way Google ranks pages is by the number of other sites linking to that page with keywords related to what you typed in. Sometimes it is correct, sometimes it's a partial answer, and sometimes it's just a myth or stereotype that people still believe.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
I can't tell you how many times I have searched for something on Google, and found that the first (and best) result was a StackOverflow.com result. And then, invariably, one of the comments on the question says something like "geez, -1, you could find the answer to this sooo easily with a Google search". When this happens I think as loudly as I can "YES, DAMNIT! HOW DO YOU THINK I GOT HERE? THANK GOD THIS QUESTION WASN'T CLOSED AND I GOT THE ANSWER I WAS LOOKING FOR IN A CONVENIENT FORUM I CAN TRUST. I AM SO GRATEFUL TWITS LIKE YOU ARE RIGHTFULLY IGNORED!".
If I have a cooking question that needs to be googled to find the answer, and that answer isn't on the stackexchange site, I'll ask it. If the site works out, my question will be #1 in Google results in a few months with a perfect, community-approved answer.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
I can't tell you how many times I have searched for something on Google, and found that the first (and best) result was a StackOverflow.com result. And then, invariably, one of the comments on the question says something like "geez, -1, you could find the answer to this sooo easily with a Google search". When this happens I think as loudly as I can "YES, DAMNIT! HOW DO YOU THINK I GOT HERE? THANK GOD THIS QUESTION WASN'T CLOSED AND I GOT THE ANSWER I WAS LOOKING FOR IN A CONVENIENT FORUM I CAN TRUST. I AM SO GRATEFUL TWITS LIKE YOU ARE RIGHTFULLY IGNORED!".
I agree with what others have said, but want to make one big distinction regarding it. We need to add value to the answer, just a link to another site turns this into a directory, not a community. The link to another site is fine, but summarize, speak to how your experience coincides with what the article says, comment on variations that they didn't explore, etc. For @nohat's example about finding the right answer on SO through Google, that happens to me all the time too, but there is content in the answer. Sure they may have linked to the developer's website referencing their documentation or someone's blog, but they also did a quick test themselves and posted sample code or they gave some additional detail beyond the docs. They didn't just link and say RTFM.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
Simply going off of the first Google search result doesn't guarantee accuracy. The way Google ranks pages is by the number of other sites linking to that page with keywords related to what you typed in. Sometimes it is correct, sometimes it's a partial answer, and sometimes it's just a myth or stereotype that people still believe.
If I have a cooking question that needs to be googled to find the answer, and that answer isn't on the stackexchange site, I'll ask it. If the site works out, my question will be #1 in Google results in a few months with a perfect, community-approved answer.
348
I just answered the question "[Buttermilk Substitute?](https://cooking.stackexchange.com/questions/3120/buttermilk-substitute)". Typing the question into Google turned up pages and pages of detailed discussion about what to substitute for buttermilk. Do these "easy" questions belong here? I'm new to stackexchange, but I really want to see this site work out, so I'm asking some meta questions about how this place runs as they come up. Sorry if they're a bit basic. Ironically, I did not use Google to try and find an answer to this question (although I did search the site first).
2010/07/24
[ "https://cooking.meta.stackexchange.com/questions/348", "https://cooking.meta.stackexchange.com", "https://cooking.meta.stackexchange.com/users/1259/" ]
I agree with what others have said, but want to make one big distinction regarding it. We need to add value to the answer, just a link to another site turns this into a directory, not a community. The link to another site is fine, but summarize, speak to how your experience coincides with what the article says, comment on variations that they didn't explore, etc. For @nohat's example about finding the right answer on SO through Google, that happens to me all the time too, but there is content in the answer. Sure they may have linked to the developer's website referencing their documentation or someone's blog, but they also did a quick test themselves and posted sample code or they gave some additional detail beyond the docs. They didn't just link and say RTFM.
If I have a cooking question that needs to be googled to find the answer, and that answer isn't on the stackexchange site, I'll ask it. If the site works out, my question will be #1 in Google results in a few months with a perfect, community-approved answer.
12,688,636
I need to disallow `/variable_dir_name/directory` via robots.txt. I use: `Disallow: */directory` and `Noindex: */directory`. Is that correct?
2012/10/02
[ "https://Stackoverflow.com/questions/12688636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530753/" ]
There was a Lotusphere 2011 session SHOW107 that walked you through the steps. Read about it on [Jim's blog](http://blog.sequill.com/2011/02/xpages-data-source-control-for-relational-databases/), the [BleedYellow community](http://www.bleedyellow.com/blogs/06b53a4f-1d53-4919-842d-51d6c8be697d), an excellent [slide deck](http://www.slideshare.net/pdhannan/ad116-01) from Paul (and related) as well as [SHOW107](http://www.slideshare.net/NotesSensei/show107-the-datasource-session-take-xpages-data-boldly-where-no-xpages-data-has-been-taken-before) itself. One caveat: The data sources don't show their fields in the Domino Designer. If you want that functionality then, instead of looking at *com.ibm.xsp.extlib.model.ObjectDataSource*, you would subclass the DominoDocument and more or less overwrite all functions.
You could look in the source code of the extension library. There you will find the class *com.ibm.xsp.extlib.model.ObjectDataSource* as an easy example.
197,173
I have been spelling the word "**curiosity**" with a *u*, "**curiousity**," my whole life, and only today was Chrome's spellcheck bold enough to highlight my lifelong error. I have two questions: 1. The root word is *curious*. How or why has the quality of being curious come to be spelled without its *u*? Or is it the word *curious* that is unique, and both words were derived from a word with no *u*, like *curio*? 2. Since I have spelled the word this way my whole life and none of my English teachers/professors ever crossed out this "misspelling," is it not technically incorrect, just discouraged? Or perhaps it is archaic, which is why I could only find it [defined in a legal dictionary](http://legal-dictionary.thefreedictionary.com/Curiousity) with a capital "C": *Curiousity*, not *curiousity*.
2014/09/18
[ "https://english.stackexchange.com/questions/197173", "https://english.stackexchange.com", "https://english.stackexchange.com/users/63982/" ]
Interesting question! Here's what the *OED* has to say about *-ious*: > > a compound suffix, consisting of the suffix -ous, added to an i which is part of another suffix, repr. Latin -iōsus, French -ieux, with sense ‘characterized by, full of’. ... by false analogy in cūriōsus curious (from cūra): see -ous suffix. > > > and, re: *-ous*: > > Nouns of quality from adjectives in -ous (however derived), are regularly formed in -ousness , ... a considerable number of those from Latin -ōsus have forms in -osity , as curiosity ... see -osity suffix. > > > and, re: *-osity*: > > The direct reflex of Latin -ōsitāt- in Old French was -ouseté , which is found in Middle English as -ouste , forming nouns from adjectives in -ous suffix... . Loanwords of this period having the latter termination and remaining in use were subsequently re-formed with -osity (e.g. contrariosity n., curiosity n.: compare also religiousty n., voluptuousty n. with religiosity n., voluptuosity n. (all first attested in late Middle English), and hidousty n. with the much later formation hideosity n.). ... > > >
The base (root) is "cure".

cur(e) + i + ous = curious
cur(e) + i + o(u)s + ity = curiosity

EXPLANATION
--The "i" is explained above by szarka.
--The "e" is dropped as usual when adding a suffix that starts with a vowel.
--The "u" is dropped in "curiosity" as part of another suffix spelling pattern (i.e., when adding the suffix "-ity" to a word ending with the suffix "-ous", drop the "u"). Another example of this pattern is "luminous"->"luminosity".

See: curious. (n.d.). Online Etymology Dictionary. Retrieved August 08, 2015, from Dictionary.com website: <http://dictionary.reference.com/browse/curious>
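The drop-the-u rule described in this answer is regular enough to sketch in a few lines of Python (a toy illustration of the spelling pattern only, not a general morphology tool):

```python
def ous_to_osity(adjective: str) -> str:
    """Form the '-osity' noun from an '-ous' adjective using the
    spelling rule above: drop the 'u' of '-ous' when adding '-ity'
    (so '-ous' + '-ity' -> '-osity')."""
    if not adjective.endswith("ous"):
        raise ValueError("expected an adjective ending in '-ous'")
    return adjective[:-len("ous")] + "osity"

print(ous_to_osity("curious"))   # curiosity
print(ous_to_osity("luminous"))  # luminosity
```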
17,265,105
I have a 3D scene (essentially a VRML file with one big IndexedFaceSet). I want to render the scene once into an image file. The image file will serve as a preview to the user, who will then be able to open the scene in a 3D viewer ([`X3DOM`](http://www.x3dom.org/) - it's a great library). I know the camera position, direction and field-of-view angle necessary for the preview, as well as the lighting. The preview image will be prepared offline on the server. Everything else on the server is written in Python, and I'd rather not introduce another language to the mix. I tried Matplotlib, but couldn't figure out how to perform proper shading and lighting there. I don't want to start a browser instance on the server and let X3DOM do the heavy lifting. I guess I can use PyOpenGL to render the scene once and save it to a file, but I'm hoping there's an easier way.
2013/06/23
[ "https://Stackoverflow.com/questions/17265105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/871910/" ]
You could install Blender and import and render through that, but it is probably overkill. OpenGLContext would probably provide all that you need: [pyOpenGL](http://pyopengl.sourceforge.net/context/). It seems to be quite well documented and reasonable to use, and it appears to support importing VRML. If you need a higher resolution then you can make use of [YaFaRay](http://www.yafaray.org/), but I found it a lot harder to tell whether VRML import is supported directly or not.
Have you looked at OpenSceneGraph? It's intended to be used with C++ but there are [3rd party bindings](http://trac.openscenegraph.org/projects/osg//wiki/Community/LanguageWrappers) available for Python I believe, although the maturity of those bindings may vary (that said, it may be good enough to read a VRML file and write an image).
566
With version 8 of the automated testing tool TestComplete, the vendor has introduced a new feature called "Keyword Tests" that provides a visual interface for creating automated UI tests without needing to be knowledgeable about writing code (or at least with only minimal coding knowledge). In my personal experience with this tool prior to this version, I've been able to perform all sorts of neat tricks and magic acts to be able to automate the testing of applications using what some call "hand coded" tests. I'm starting to work with the new Keyword Testing piece and I'm finding that there are certain things that I just cannot do without having to use "hand coded" scripts. Does anyone else have experience with this tool and, if so, do you have any suggestions about what can be done easily with keyword tests and what is best done with "hand coded" scripts?
2011/05/13
[ "https://sqa.stackexchange.com/questions/566", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/453/" ]
In my - admittedly limited so far - experience, I've found that TestComplete's keyword tests are useful for rapid automation of self-contained, small, highly modular test items. I have yet to find anything that keyword tests can do that coded tests can't, and I keep finding new things that coded tests do easily that keyword tests either can't do, or can't do without a lot of extra time and effort on the part of the tester. One thing I find particularly limiting is the inability to cleanly integrate user-created objects (which is one of the more powerful TestComplete features in my view) with keyword test structures. Using code, it's easy. So far the most consistently useful feature of keyword testing that I've found is generating a quick set of handles to components in a previously untested form. The keyword test recorder and parser does seem to run faster than the 'code-generating' recorder.
Without knowing this tool, I think you describe exactly what keyword testing is... a simplified way of building tests, sacrificing the fine details.
532,892
I'm trying to create programmable Christmas lights - a very simple circuit controlled via Arduino or NodeMCU. My problem isn't on the microcontroller side, but the basic electronics. Now... I have 2 sets of 50 LED lights that use 3 V and draw 0.23 W meaning about 77 mA, which is significantly more than Arduino or NodeMCU can handle from a single pin. This doesn't really come as a surprise, so I need some kind of a transistor, relay or optoisolator to drive it. I have a bunch of 2N2222A transistors that I tried to use, but running 3 V from the lights' original battery pack to the collector and connecting the lights to the emitter and running 3 V to the base (or less using a resistor - didn't check for the ohms of the resistor as the results didn't change from the direct 3 V current) I did get lights on, but due to the voltage drop involved in the transistor, they were markedly dimmed. I didn't measure the exact drop at this stage yet. I had a 4-channel optoisolator on hand as well (HW-3999) that I decided to try as well, but it had the same results. This time I measured the voltage drop and it was around 0.6 V. Not surprising really, since the optoisolator (as I understand them) is basically a light-controlled MOSFET. So I would like to know what are my options here? Either use a relay as a switch or should I use 5 V as the base voltage and calculate the voltage drop involved in the optoisolator and/or transistor and then calculate a fitting resistor to use in order to drop the total voltage to around 3 V that the lights can handle, or should I just use a relay? Thanks!
2020/11/17
[ "https://electronics.stackexchange.com/questions/532892", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/268909/" ]
Well, I managed to solve this by using a higher voltage; apparently 3 V wasn't enough to open the 2N2222 all the way, but 6 V seemed to work. Then a 47 ohm resistor to drive down the voltage (at 77 mA current I calculated that the resistance of the LEDs was about 40 ohm) to a level the lights could handle. This seemed to work. Note that I have not hooked up a microcontroller at this point yet and basically this is more or less what I was trying to accomplish... probably I'll use a 9 V or somesuch DC source to drive the Arduino and use the same source to drive the LEDs as well.
You need either to use a 3 V relay, or better, go with a MOSFET. The problem (as you observed) is that it creates a voltage drop. Why? 3 V is not enough to turn it fully on. You have to look for power MOSFETs compatible with 3.3 V systems. Or build a simple charge pump circuit (e.g. on a 555 timer) to create a higher voltage for the MOSFET gate (and use an open collector to drive it). The optoisolator is probably just not intended for such use: too small a drain current? (couldn't find a datasheet). Another option is to just use a higher voltage supply and step it down for everything else. Example of a suitable MOSFET: PSMN2R7-30PL
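The resistor arithmetic in these answers can be double-checked with a few lines of Python (supply and string ratings are taken from the question and the accepted values above; the transistor's saturation drop is ignored here, which makes the computed resistor slightly conservative):

```python
# Back-of-envelope check of the dropper resistor for the light string:
# a 3 V / 0.23 W string draws about 77 mA, and a 6 V supply needs the
# excess 3 V dropped across a series resistor at that current.
V_SUPPLY = 6.0        # V, raised supply voltage
V_LED = 3.0           # V, rated voltage of the string
I_LED = 0.23 / V_LED  # A, rated current (~77 mA from P = V * I)

r_string = V_LED / I_LED             # effective string resistance, ~39 ohm
r_drop = (V_SUPPLY - V_LED) / I_LED  # series resistor needed, ~39 ohm
# The 47 ohm part used in the answer is a common value just above
# this, so the string runs slightly below rated current -- a bit
# dimmer, but safe.
print(round(r_string, 1), round(r_drop, 1))
```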
532,892
I'm trying to create programmable Christmas lights - a very simple circuit controlled via Arduino or NodeMCU. My problem isn't on the microcontroller side, but the basic electronics. Now... I have 2 sets of 50 LED lights that use 3 V and draw 0.23 W meaning about 77 mA, which is significantly more than Arduino or NodeMCU can handle from a single pin. This doesn't really come as a surprise, so I need some kind of a transistor, relay or optoisolator to drive it. I have a bunch of 2N2222A transistors that I tried to use, but running 3 V from the lights' original battery pack to the collector and connecting the lights to the emitter and running 3 V to the base (or less using a resistor - didn't check for the ohms of the resistor as the results didn't change from the direct 3 V current) I did get lights on, but due to the voltage drop involved in the transistor, they were markedly dimmed. I didn't measure the exact drop at this stage yet. I had a 4-channel optoisolator on hand as well (HW-3999) that I decided to try as well, but it had the same results. This time I measured the voltage drop and it was around 0.6 V. Not surprising really, since the optoisolator (as I understand them) is basically a light-controlled MOSFET. So I would like to know what are my options here? Either use a relay as a switch or should I use 5 V as the base voltage and calculate the voltage drop involved in the optoisolator and/or transistor and then calculate a fitting resistor to use in order to drop the total voltage to around 3 V that the lights can handle, or should I just use a relay? Thanks!
2020/11/17
[ "https://electronics.stackexchange.com/questions/532892", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/268909/" ]
Well, I managed to solve this by using a higher voltage; apparently 3 V wasn't enough to open the 2N2222 all the way, but 6 V seemed to work. Then a 47 ohm resistor to drive down the voltage (at 77 mA current I calculated that the resistance of the LEDs was about 40 ohm) to a level the lights could handle. This seemed to work. Note that I have not hooked up a microcontroller at this point yet and basically this is more or less what I was trying to accomplish... probably I'll use a 9 V or somesuch DC source to drive the Arduino and use the same source to drive the LEDs as well.
Any BJT-based device is going to have a collector-to-emitter voltage drop. Specifically for the 2N2222, it is not too bad; looking at the plots it should be less than 100 mV, assuming you have a base current that is reasonable. Try something along the lines of 1 mA (a 1k resistor from your Arduino pin to the base should work). If it is still dim even with the setup above, you may have your circuit mis-wired etc. The best way to minimize power (voltage) loss would be to use a logic-level MOSFET. There is going to be basically no voltage drop assuming you select an appropriate part that is also compatible with a 3.3 V gate drive. But what you have should also give acceptable results. Also - you're not trying to use the 3.3 V from the Arduino, are you? It might not have enough power available to drive the lights depending on the exact model etc. Try with an external power supply.
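A quick numeric sanity check of the suggested 1k base resistor (the pin voltage and Vbe below are assumed typical values, not figures from the answer):

```python
# Base current through the suggested 1 k resistor, assuming a 3.3 V
# GPIO high level and a typical Vbe of ~0.7 V for the 2N2222.
V_PIN = 3.3   # V, microcontroller output high level (assumed)
V_BE = 0.7    # V, base-emitter drop (typical)
R_B = 1000.0  # ohm, suggested base resistor

i_base = (V_PIN - V_BE) / R_B  # ~2.6 mA
# Forced beta needed to pass the 77 mA string current:
forced_beta = 0.077 / i_base   # ~30, well under the 2N2222's typical
                               # hFE of 100+ at these currents, so the
                               # transistor can saturate.
print(round(i_base * 1000, 2), "mA, forced beta ~", round(forced_beta))
```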
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four channel AD converter (combined with a microcontroller or i2c to USB controller) accuracy might be upwards of 1% quite easily according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain or come at forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. Just one *low power* 1 Ohm resistor with 0.1% tolerance costs several times the ADC's price. What is the possible solution to that problem? One idea is buying a cheap power resistor, determining its true resistance and correcting the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Are there any simple hobbyist methods to measure current with better precision than 1% that do not require an electronic lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
> > As you would probably all agree, current is in practice measured by > putting a resistor of small resistance in series and then measure > voltage across that resistor. > > > No, I wouldn't agree. Current can be measured by a Hall-effect sensor and if the current is AC then using a current-transformer is also a big turn-to option. > > Measuring voltage precisely is no problem (using a cheap AD converter > and a microcontroller or i2c to USB controller typically gives > accuracy better than 0.1%). > > > I also disagree with this. You pay for what you get and usually cheap embedded ADCs are flaky on gain error, zero offset error and integral linearity errors and, for a 10 bit ADC, the accuracy might be upwards of 1% quite easily. The resolution might be 0.1% but that is a different story. > > I find it kind of funny that passive element of smaller accuracy costs > more than active element of higher accuracy! > > > This would make sense if you realized that a linear active element has an accuracy that is nearly always dictated by the accuracies of the resistors placed around them. Sure you can get an op-amp with low input offset voltage and bias currents but gain is dictated by the resistors external to the device. > > What is the possible solution to that problem? > > > Dig deep and buy a decent measurement-quality resistor.
Well, if you're doing it at home I guess this current measurement is not used in a thousand different pieces. A resistor with 0.1% tolerance means if you order a 1 ohm resistor you get 1 ohm +/-0.1%, but if you have the possibility to measure your 1 ohm 1% resistor with an accuracy of +/-0.1%, this gives you the same result. So why not order an ordinary 1 ohm resistor and determine its actual value by measuring it? Or is a multimeter already part of an "electronic lab"?
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four channel AD converter (combined with a microcontroller or i2c to USB controller) accuracy might be upwards of 1% quite easily according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain or come at forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. Just one *low power* 1 Ohm resistor with 0.1% tolerance costs several times the ADC's price. What is the possible solution to that problem? One idea is buying a cheap power resistor, determining its true resistance and correcting the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Are there any simple hobbyist methods to measure current with better precision than 1% that do not require an electronic lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
> > As you would probably all agree, current is in practice measured by > putting a resistor of small resistance in series and then measure > voltage across that resistor. > > > No, I wouldn't agree. Current can be measured by a Hall-effect sensor and if the current is AC then using a current-transformer is also a big turn-to option. > > Measuring voltage precisely is no problem (using a cheap AD converter > and a microcontroller or i2c to USB controller typically gives > accuracy better than 0.1%). > > > I also disagree with this. You pay for what you get and usually cheap embedded ADCs are flaky on gain error, zero offset error and integral linearity errors and, for a 10 bit ADC, the accuracy might be upwards of 1% quite easily. The resolution might be 0.1% but that is a different story. > > I find it kind of funny that passive element of smaller accuracy costs > more than active element of higher accuracy! > > > This would make sense if you realized that a linear active element has an accuracy that is nearly always dictated by the accuracies of the resistors placed around them. Sure you can get an op-amp with low input offset voltage and bias currents but gain is dictated by the resistors external to the device. > > What is the possible solution to that problem? > > > Dig deep and buy a decent measurement-quality resistor.
Buy a cheap 5% shunt resistor, apply a known current to the shunt using a precision voltage source (you can buy 0.1% for under a dollar) and a known resistor (0.1%, higher value). Then use your 0.1% accurate ADC to measure the voltage across the shunt. ![schematic](https://i.stack.imgur.com/YAupq.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fYAupq.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) (of course, only for Rshunt/R1 < 0.1%)
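A minimal sketch of the calibration arithmetic behind this circuit. The 2.5 V reference and 1 kOhm series resistor are illustrative assumed values, not values read from the schematic:

```python
# Sketch of the shunt-calibration method described above: a precision
# voltage source and a known 0.1% resistor R1 set a known current, and the
# ADC reading across the cheap shunt then reveals its true resistance.
# V_REF and R1 are assumed example values.

V_REF = 2.500   # precision voltage reference, volts (0.1% part)
R1 = 1000.0     # known series resistor, ohms (0.1% part)

def calibrate_shunt(v_shunt_measured: float) -> float:
    """Return the true shunt resistance from the voltage measured across it.

    The loop current is set almost entirely by R1, since Rshunt/R1 is on
    the order of 0.1%, so I ~= V_REF / R1 to within the stated accuracy.
    """
    i = V_REF / R1                 # loop current, amps (~2.5 mA here)
    return v_shunt_measured / i    # Ohm's law: R = V / I

# Example: the ADC reads 2.43 mV across the nominally 1-ohm, 5% shunt.
r_true = calibrate_shunt(2.43e-3)
print(round(r_true, 4))  # the 5% part's actual value, now known to ~0.1%
```

Once `r_true` is stored, the cheap shunt behaves like a calibrated one in software, which is the whole point of the method.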
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four-channel AD converter (combined with a microcontroller or an I2C-to-USB controller) might easily have an accuracy upwards of 1% according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain, or cost forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. A single *low power* 1 Ohm resistor with 0.1% tolerance costs several times the price of the ADC. What is a possible solution to that problem? One idea is to buy a cheap power resistor, determine its true resistance, and correct the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Is there any simple hobbyist method to measure current with better precision than 1% that does not require an electronics lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
Buy a cheap 5% shunt resistor, apply a known current to the shunt using a precision voltage source (you can buy 0.1% for under a dollar) and a known resistor (0.1%, higher value). Then use your 0.1% accurate ADC to measure the voltage across the shunt. ![schematic](https://i.stack.imgur.com/YAupq.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fYAupq.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) (of course, only for Rshunt/R1 < 0.1%)
Well, if you're doing it at home, I guess this current measurement is not used in a thousand different pieces. A resistor with 0.1% tolerance means that if you order a 1 ohm resistor you get 1 ohm +/-0.1%; but if you have the possibility to measure your 1 ohm 1% resistor with an accuracy of +/-0.1%, this gives you the same result. So why not order an ordinary 1 ohm resistor and determine its actual value by measuring it? Or is a multimeter already part of an "electronic lab"?
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four-channel AD converter (combined with a microcontroller or an I2C-to-USB controller) might easily have an accuracy upwards of 1% according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain, or cost forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. A single *low power* 1 Ohm resistor with 0.1% tolerance costs several times the price of the ADC. What is a possible solution to that problem? One idea is to buy a cheap power resistor, determine its true resistance, and correct the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Is there any simple hobbyist method to measure current with better precision than 1% that does not require an electronics lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
Buy a cheap 5% shunt resistor, apply a known current to the shunt using a precision voltage source (you can buy 0.1% for under a dollar) and a known resistor (0.1%, higher value). Then use your 0.1% accurate ADC to measure the voltage across the shunt. ![schematic](https://i.stack.imgur.com/YAupq.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fYAupq.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) (of course, only for Rshunt/R1 < 0.1%)
Multimeters usually read consistently without much fluctuation; that is to say, a particular multimeter will consistently read, for example, 12.1 V on a precisely 12 V supply. It reads a bit high, but this has only a small effect on calculations. The solution may be to grab a bunch of 10% tolerance resistors (either half or double the value that you are seeking) because they are cheap (or 1%, if your budget allows) and connect two in series across a power supply. You can then measure the supply voltage and the current, measure the voltage drop across each resistor using your multimeter, and calculate the resistance of each resistor comparatively. I presume you will not find many that are very close to the specified resistance, because those would already have been sorted and banded with a closer tolerance, but you should be able to find two resistors that combine to the value you want, either in series or in parallel. 10% tolerance on a resistor does not mean 10% fluctuation during operation; once they are manufactured, they are pretty much fixed. It is usual in instruments to have a shunt and an adjustment, which also removes the need for the tolerance to be so tight.
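The comparative measurement described above boils down to a resistance ratio: two resistors in series carry the same current, so the ratio of their voltage drops equals the ratio of their resistances, and any constant gain error in the meter cancels. A minimal sketch, with made-up example readings:

```python
# Sketch of the series ratio method: with a known reference resistor in
# series with the unknown, the unknown follows from the two voltage drops
# without needing an accurate current reading. Example values are made up.

def resistance_from_ratio(v_unknown: float, v_ref: float, r_ref: float) -> float:
    """R_unknown = R_ref * (V_unknown / V_ref).

    A constant multiplicative error in the voltmeter appears in both
    readings and cancels in the ratio.
    """
    return r_ref * (v_unknown / v_ref)

# Example: a 0.1% 10-ohm reference drops 2.000 V while the unknown
# (nominally 1 ohm) drops 0.202 V in the same series loop.
print(round(resistance_from_ratio(0.202, 2.000, 10.0), 3))  # -> 1.01 ohms
```

This is why the meter only needs to be consistent, not absolutely accurate, for the comparison to work.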
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four-channel AD converter (combined with a microcontroller or an I2C-to-USB controller) might easily have an accuracy upwards of 1% according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain, or cost forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. A single *low power* 1 Ohm resistor with 0.1% tolerance costs several times the price of the ADC. What is a possible solution to that problem? One idea is to buy a cheap power resistor, determine its true resistance, and correct the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Is there any simple hobbyist method to measure current with better precision than 1% that does not require an electronics lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
> > What I find curious is that it is easy to achieve 0.05% accuracy for voltage measurement, and it is damn difficult to get beyond 1% for resistors. Makes no sense. > > > It's quite simple actually ;) Resistors with high enough value can be made with film technology (thin or thick) which is very cheap. This is why your average SMD chip resistor costs next to nothing. Thru-hole parts are a bit more expensive but not much. High accuracy for cheap prices is achieved via **laser trimming**. Quite impressive when you consider how little these things actually cost. For low resistance values it gets more complicated, thicker films are required, trimming is more difficult, and in a high current density scenario, the laser-cut shape concentrates the current into a small part of the film, which decreases pulse power handling. If the resistor is wirewound, then it can't be laser-trimmed. Basically, less cheap/accurate manufacturing options are available for low resistor values. Also, the resistance of whatever sits between the resistive element and the PCB (like endcaps, leads, etc) begins to matter. And these are usually metal, which is inaccurate and has very bad temperature coefficient of resistance. For example if you buy a 0.02 ohms leaded resistor, its value will depend on how long the leads are after it is soldered. So, you say: > > Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. > > > [This one](https://www.digikey.com/product-detail/en/yageo/SQP500JB-1R0/1.0W-5-ND/18631) for example, isn't expensive. Now, obviously, it has a huge +/- 300ppm/°C tempco which means at its rated load of 5W, with a temperature rise of 200°C according to the datasheet, tempco alone will cause a +/- 6% drift, which means precision will be crap. Thus you would select [a 1% resistor](https://www.digikey.com/product-detail/en/riedon/UB5C-1RF1/696-1058-ND/2176624). It does have a much better tempco (50ppm/K). 
It is also expensive, since it is more of a niche product. If you want 0.1% though, you're in trouble, because 0.1% of 1 ohm is 1 mOhm, and this means the endcaps and leads matter. Thus you are stuck with this [luxury product](https://www.digikey.com/product-detail/en/vishay-foil-resistors-division-of-vishay-precision-group/Y09261R00000B0L/804-1042-ND/4225376) which, obviously, has 4 terminals and a TO-220 package so it can be kept cool with a big heat sink. It's basically supply and demand. Current sense resistors are used quite often, but in scenarios that don't require high accuracy, like in power supplies, chargers, etc. So you can get low-value current sense resistors like 10-100 mOhm in SMD format for low prices, but a high-accuracy version will interest few customers. This is the reason why you're having problems getting cheap high-power, high-precision resistors: people choose a power resistor when it will get hot, and if it's hot, you get tempco problems. Therefore, you need to do it like everyone else:

* Rethink the project. If your need for accuracy stems from a need to measure from 0 to 3 A while keeping good precision near zero, you need more ranges, like in a multimeter. Use a higher-value shunt resistor for low currents.
* Use a lower resistance value (less heat), for example a 0R1 resistor. This requires a lower-offset amplifier (or calibration). This is likely your best option.
* Use 4-wire sensing (eliminates inaccuracies due to terminal/wire resistance). This requires SMD resistors or very special thru-hole resistors, but it is mandatory if you want accuracy on a 0R1 resistor. Here is some reading material: [link](https://electronics.stackexchange.com/questions/162185/recommended-layout-for-parallel-current-sensing-resistors) and [link](http://www.analog.com/en/analog-dialogue/articles/optimize-high-current-sensing-accuracy.html) (the second one is quite interesting!).
* Require less accuracy by using calibration (but the resistor can still heat up, so you still need a low tempco).

Also, if you want a resistor that is very accurate, low drift, and capable of high dissipation, get a hundred 1% thin-film SMD resistors and solder them in parallel on a double-sided board that you make for this purpose using one of the cheap $10 Chinese PCB shops. Place the board vertically so it is air-cooled by convection. The large surface area will do wonders for dissipation. A proper layout is a must, though.
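Two of the calculations above, the tempco drift of a hot 300 ppm/°C shunt and the equivalent value and dissipation of the hundred-resistor SMD array, can be sketched as follows. The 0.125 W per-resistor rating is an assumed typical figure for a small SMD package, not from the answer:

```python
# Sketch of two calculations from the answer above: tempco drift of a hot
# shunt, and the value/power of a parallel array of equal SMD resistors.

def tempco_drift_percent(ppm_per_degC: float, delta_T: float) -> float:
    """Resistance drift in percent for a given temperature rise."""
    return ppm_per_degC * delta_T / 1e4  # ppm * degrees -> percent

# A 300 ppm/degC part at its full rated +200 degC temperature rise:
print(tempco_drift_percent(300, 200))  # -> 6.0 (percent, the +/-6% above)

# A hundred 100-ohm 1% thin-film SMD resistors soldered in parallel
# (0.125 W each is an assumed typical 0805-class rating):
n, r_each, p_each = 100, 100.0, 0.125
print(r_each / n)   # -> 1.0 ohm equivalent value
print(n * p_each)   # -> 12.5 W combined dissipation, layout permitting
```

The parallel array also averages the individual 1% tolerances, so the combined value tends to sit closer to nominal than any single part.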
> > As you would probably all agree, current is in practice measured by > putting a resistor of small resistance in series and then measure > voltage across that resistor. > > > No, I wouldn't agree. Current can be measured by a Hall-effect sensor, and if the current is AC then a current transformer is another strong option. > > Measuring voltage precisely is no problem (using a cheap AD converter > and a microcontroller or i2c to USB controller typically gives > accuracy better than 0.1%). > > > I also disagree with this. You pay for what you get, and cheap embedded ADCs are usually flaky on gain error, zero-offset error and integral non-linearity; for a 10-bit ADC, the accuracy might easily be upwards of 1%. The resolution might be 0.1%, but that is a different story. > > I find it kind of funny that passive element of smaller accuracy costs > more than active element of higher accuracy! > > > This would make sense if you realized that a linear active element has an accuracy that is nearly always dictated by the accuracies of the resistors placed around it. Sure, you can get an op-amp with low input offset voltage and bias currents, but gain is dictated by the resistors external to the device. > > What is the possible solution to that problem? > > > Dig deep and buy a decent measurement-quality resistor.
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four-channel AD converter (combined with a microcontroller or an I2C-to-USB controller) might easily have an accuracy upwards of 1% according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain, or cost forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. A single *low power* 1 Ohm resistor with 0.1% tolerance costs several times the price of the ADC. What is a possible solution to that problem? One idea is to buy a cheap power resistor, determine its true resistance, and correct the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Is there any simple hobbyist method to measure current with better precision than 1% that does not require an electronics lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
> > What I find curious is that it is easy to achieve 0.05% accuracy for voltage measurement, and it is damn difficult to get beyond 1% for resistors. Makes no sense. > > > It's quite simple actually ;) Resistors with high enough value can be made with film technology (thin or thick) which is very cheap. This is why your average SMD chip resistor costs next to nothing. Thru-hole parts are a bit more expensive but not much. High accuracy for cheap prices is achieved via **laser trimming**. Quite impressive when you consider how little these things actually cost. For low resistance values it gets more complicated, thicker films are required, trimming is more difficult, and in a high current density scenario, the laser-cut shape concentrates the current into a small part of the film, which decreases pulse power handling. If the resistor is wirewound, then it can't be laser-trimmed. Basically, less cheap/accurate manufacturing options are available for low resistor values. Also, the resistance of whatever sits between the resistive element and the PCB (like endcaps, leads, etc) begins to matter. And these are usually metal, which is inaccurate and has very bad temperature coefficient of resistance. For example if you buy a 0.02 ohms leaded resistor, its value will depend on how long the leads are after it is soldered. So, you say: > > Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. > > > [This one](https://www.digikey.com/product-detail/en/yageo/SQP500JB-1R0/1.0W-5-ND/18631) for example, isn't expensive. Now, obviously, it has a huge +/- 300ppm/°C tempco which means at its rated load of 5W, with a temperature rise of 200°C according to the datasheet, tempco alone will cause a +/- 6% drift, which means precision will be crap. Thus you would select [a 1% resistor](https://www.digikey.com/product-detail/en/riedon/UB5C-1RF1/696-1058-ND/2176624). It does have a much better tempco (50ppm/K). 
It is also expensive, since it is more of a niche product. If you want 0.1% though, you're in trouble, because 0.1% of 1 ohm is 1 mOhm, and this means the endcaps and leads matter. Thus you are stuck with this [luxury product](https://www.digikey.com/product-detail/en/vishay-foil-resistors-division-of-vishay-precision-group/Y09261R00000B0L/804-1042-ND/4225376) which, obviously, has 4 terminals and a TO-220 package so it can be kept cool with a big heat sink. It's basically supply and demand. Current sense resistors are used quite often, but in scenarios that don't require high accuracy, like in power supplies, chargers, etc. So you can get low-value current sense resistors like 10-100 mOhm in SMD format for low prices, but a high-accuracy version will interest few customers. This is the reason why you're having problems getting cheap high-power, high-precision resistors: people choose a power resistor when it will get hot, and if it's hot, you get tempco problems. Therefore, you need to do it like everyone else:

* Rethink the project. If your need for accuracy stems from a need to measure from 0 to 3 A while keeping good precision near zero, you need more ranges, like in a multimeter. Use a higher-value shunt resistor for low currents.
* Use a lower resistance value (less heat), for example a 0R1 resistor. This requires a lower-offset amplifier (or calibration). This is likely your best option.
* Use 4-wire sensing (eliminates inaccuracies due to terminal/wire resistance). This requires SMD resistors or very special thru-hole resistors, but it is mandatory if you want accuracy on a 0R1 resistor. Here is some reading material: [link](https://electronics.stackexchange.com/questions/162185/recommended-layout-for-parallel-current-sensing-resistors) and [link](http://www.analog.com/en/analog-dialogue/articles/optimize-high-current-sensing-accuracy.html) (the second one is quite interesting!).
* Require less accuracy by using calibration (but the resistor can still heat up, so you still need a low tempco).

Also, if you want a resistor that is very accurate, low drift, and capable of high dissipation, get a hundred 1% thin-film SMD resistors and solder them in parallel on a double-sided board that you make for this purpose using one of the cheap $10 Chinese PCB shops. Place the board vertically so it is air-cooled by convection. The large surface area will do wonders for dissipation. A proper layout is a must, though.
Well, if you're doing it at home, I guess this current measurement is not used in a thousand different pieces. A resistor with 0.1% tolerance means that if you order a 1 ohm resistor you get 1 ohm +/-0.1%; but if you have the possibility to measure your 1 ohm 1% resistor with an accuracy of +/-0.1%, this gives you the same result. So why not order an ordinary 1 ohm resistor and determine its actual value by measuring it? Or is a multimeter already part of an "electronic lab"?
356,499
**NOTE**: Question was edited following comments and answers. As you would probably all agree, current is **in practice** measured by putting a resistor of small resistance in series and then measuring the voltage across that resistor. (I know there are other *scientific methods*, but are they used in everyday hobby applications?) Measuring voltage precisely is no problem. A cheap four-channel AD converter (combined with a microcontroller or an I2C-to-USB controller) might easily have an accuracy upwards of 1% according to Andy aka (IMHO upwards of 0.5%). However, what I find problematic is that resistors of small resistance and small tolerance are very difficult to obtain, or cost forbidding prices. Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. A single *low power* 1 Ohm resistor with 0.1% tolerance costs several times the price of the ADC. What is a possible solution to that problem? One idea is to buy a cheap power resistor, determine its true resistance, and correct the result, but how to determine its true resistance? Naturally, measuring small resistances is just as difficult as measuring small currents. Is there any simple hobbyist method to measure current with better precision than 1% that does not require an electronics lab at home or dozens of dollars per resistor?
2018/02/16
[ "https://electronics.stackexchange.com/questions/356499", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/174662/" ]
> > What I find curious is that it is easy to achieve 0.05% accuracy for voltage measurement, and it is damn difficult to get beyond 1% for resistors. Makes no sense. > > > It's quite simple actually ;) Resistors with high enough value can be made with film technology (thin or thick) which is very cheap. This is why your average SMD chip resistor costs next to nothing. Thru-hole parts are a bit more expensive but not much. High accuracy for cheap prices is achieved via **laser trimming**. Quite impressive when you consider how little these things actually cost. For low resistance values it gets more complicated, thicker films are required, trimming is more difficult, and in a high current density scenario, the laser-cut shape concentrates the current into a small part of the film, which decreases pulse power handling. If the resistor is wirewound, then it can't be laser-trimmed. Basically, less cheap/accurate manufacturing options are available for low resistor values. Also, the resistance of whatever sits between the resistive element and the PCB (like endcaps, leads, etc) begins to matter. And these are usually metal, which is inaccurate and has very bad temperature coefficient of resistance. For example if you buy a 0.02 ohms leaded resistor, its value will depend on how long the leads are after it is soldered. So, you say: > > Four power resistors of 1 Ohm and only 5% tolerance are twice as expensive and at least an order of magnitude less accurate. > > > [This one](https://www.digikey.com/product-detail/en/yageo/SQP500JB-1R0/1.0W-5-ND/18631) for example, isn't expensive. Now, obviously, it has a huge +/- 300ppm/°C tempco which means at its rated load of 5W, with a temperature rise of 200°C according to the datasheet, tempco alone will cause a +/- 6% drift, which means precision will be crap. Thus you would select [a 1% resistor](https://www.digikey.com/product-detail/en/riedon/UB5C-1RF1/696-1058-ND/2176624). It does have a much better tempco (50ppm/K). 
It is also expensive, since it is more of a niche product. If you want 0.1% though, you're in trouble, because 0.1% of 1 ohm is 1 mOhm, and this means the endcaps and leads matter. Thus you are stuck with this [luxury product](https://www.digikey.com/product-detail/en/vishay-foil-resistors-division-of-vishay-precision-group/Y09261R00000B0L/804-1042-ND/4225376) which, obviously, has 4 terminals and a TO-220 package so it can be kept cool with a big heat sink. It's basically supply and demand. Current sense resistors are used quite often, but in scenarios that don't require high accuracy, like in power supplies, chargers, etc. So you can get low-value current sense resistors like 10-100 mOhm in SMD format for low prices, but a high-accuracy version will interest few customers. This is the reason why you're having problems getting cheap high-power, high-precision resistors: people choose a power resistor when it will get hot, and if it's hot, you get tempco problems. Therefore, you need to do it like everyone else:

* Rethink the project. If your need for accuracy stems from a need to measure from 0 to 3 A while keeping good precision near zero, you need more ranges, like in a multimeter. Use a higher-value shunt resistor for low currents.
* Use a lower resistance value (less heat), for example a 0R1 resistor. This requires a lower-offset amplifier (or calibration). This is likely your best option.
* Use 4-wire sensing (eliminates inaccuracies due to terminal/wire resistance). This requires SMD resistors or very special thru-hole resistors, but it is mandatory if you want accuracy on a 0R1 resistor. Here is some reading material: [link](https://electronics.stackexchange.com/questions/162185/recommended-layout-for-parallel-current-sensing-resistors) and [link](http://www.analog.com/en/analog-dialogue/articles/optimize-high-current-sensing-accuracy.html) (the second one is quite interesting!).
* Require less accuracy by using calibration (but the resistor can still heat up, so you still need a low tempco).

Also, if you want a resistor that is very accurate, low drift, and capable of high dissipation, get a hundred 1% thin-film SMD resistors and solder them in parallel on a double-sided board that you make for this purpose using one of the cheap $10 Chinese PCB shops. Place the board vertically so it is air-cooled by convection. The large surface area will do wonders for dissipation. A proper layout is a must, though.
Buy a cheap 5% shunt resistor, apply a known current to the shunt using a precision voltage source (you can buy 0.1% for under a dollar) and a known resistor (0.1%, higher value). Then use your 0.1% accurate ADC to measure the voltage across the shunt. ![schematic](https://i.stack.imgur.com/YAupq.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fYAupq.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) (of course, only for Rshunt/R1 < 0.1%)
1,152,212
I recently purchased a new set of speakers only to find they hum when connected to my PC. I have tried these speakers on multiple other devices and there is never a hum. On stripping down my PC to only the motherboard and power supply I have found that as soon as the power supply is plugged in the speakers start to hum. This happens even when the power supply is turned off either at the mains or on the supply. The only way to stop the hum is to unplug the power supply from the mains. I have a few questions: 1) Is the hum more likely to be caused by the power supply or motherboard? (I'm assuming that the power supply is causing some kind of loop, though it seems strange it would do it when switched off) 2) If I don't fix this issue now, any new audio device could suffer the same problem? 3) Is there a way to fix this without buying a new power supply/motherboard/speakers? (I have no problem with upgrading either of the parts, just would like to know my options)
2016/12/02
[ "https://superuser.com/questions/1152212", "https://superuser.com", "https://superuser.com/users/670514/" ]
This is a fairly common phenomenon with laptops (but it happens with phones often enough). If your audio device works digitally, you'll likely never notice an issue, though some USB devices with bad design may generate hum. Any device you plug in could "suffer" the same problem. You don't need a new motherboard; just have the audio processed and output in a different way. If you're looking at analog solutions, find a notch (or high-pass) filter that can cut out the 50-60 Hz range before the signal hits your speakers. If you're the spendy type, grab a DAC and use that as your audio interface instead of a direct connection to your computer; they're often designed specifically to avoid line-in or line-out hum.
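If you go the analog route, a twin-T notch centered on the mains frequency is a common choice. Here is a rough sketch of the center-frequency math, with illustrative component values rather than a vetted filter design:

```python
import math

# Sketch of choosing R and C for a twin-T notch aimed at mains hum.
# The 26.5k / 100 nF pairing below is an illustrative example, not a
# recommended design; a real filter also needs matched components.

def notch_center_hz(r_ohms: float, c_farads: float) -> float:
    """Center frequency of a twin-T notch: f0 = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 26.5 kOhm with 100 nF lands very close to 60 Hz:
print(round(notch_center_hz(26.5e3, 100e-9), 1))  # -> 60.1
```

For 50 Hz mains, scaling R up to about 31.8 kOhm with the same capacitor moves the notch accordingly.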
From Google (Ground Loop): "NORTH AMERICAN an unwanted electric current path in a circuit resulting in stray signals or interference, occurring, e.g., when two earthed points in the same circuit have different potentials." So, plug both devices into the SAME outlet, so their earth points sit at the same potential, and the hum should be solved.
1,152,212
I recently purchased a new set of speakers only to find they hum when connected to my PC. I have tried these speakers on multiple other devices and there is never a hum. On stripping down my PC to only the motherboard and power supply I have found that as soon as the power supply is plugged in the speakers start to hum. This happens even when the power supply is turned off either at the mains or on the supply. The only way to stop the hum is to unplug the power supply from the mains. I have a few questions: 1) Is the hum more likely to be caused by the power supply or motherboard? (I'm assuming that the power supply is causing some kind of loop, though it seems strange it would do it when switched off) 2) If I don't fix this issue now, any new audio device could suffer the same problem? 3) Is there a way to fix this without buying a new power supply/motherboard/speakers? (I have no problem with upgrading either of the parts, just would like to know my options)
2016/12/02
[ "https://superuser.com/questions/1152212", "https://superuser.com", "https://superuser.com/users/670514/" ]
Hum doesn't come only from ground loops. A bad input filter electrolytic capacitor in the power supply, the one that comes after the rectifier, can introduce lots of ripple that affects the internal audio chip, so this hum can be superimposed on the audio signal. Not 100% guaranteed, but replacing the PSU, the capacitor, or the motherboard itself could be the solution.
From Google (Ground Loop): "NORTH AMERICAN an unwanted electric current path in a circuit resulting in stray signals or interference, occurring, e.g., when two earthed points in the same circuit have different potentials." So, plug both devices into the SAME outlet, so their earth points sit at the same potential, and the hum should be solved.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
Well, any evidence might be taken into consideration. To be prior art, the evidence must be public, must have a date, and must disclose subject matter that is claimed in the new invention. But, like other evidence on the internet, a video can be deleted from YouTube, and there will be a problem proving that a year ago YouTube (or any other public web site) contained a specific video. If you want to make a reference in a patent application, it's better to provide a "hard copy" of the video to the USPTO on a CD or DVD.
Yes, you can submit videos (whether from YouTube or other platforms), audio, and even images as prior art against an invention. As [@patentico](https://patents.stackexchange.com/a/212/15787) mentioned, relying on the online copy alone is risky, as you may face difficulties proving the publication date of that video; you may need to collect evidence of its original publication date. Also, a video will be considered prior art even if the video and the patent come from the same source (if the time between the publication date of the video and the filing date of the patent exceeds the grace period provided by local IP laws). As you requested, I have found one infamous Apple patent case in which the patent was invalidated using a video as prior art. The interesting fact about that case is that the video considered as prior art was Apple's own keynote video on the iPhone release, presented by Steve Jobs. The iPhone's [rubber-banding effect patent](http://worldwide.espacenet.com/publicationDetails/originalDocument?FT=D&date=20100929&DB=EPODOC&locale=en_EP&CC=EP&NR=2059868B1&KC=B1&ND=4) was invalidated in Germany after Google submitted Steve Jobs' keynote video as prior art. The court even rejected various amended claims by Apple; only 3 out of 20 claims were upheld. While searching for cases, I found an article with 4 such patent disputes where visual content was considered as prior art (including the case I mentioned above). Not just a YouTube video but a comic, a picture, or even a movie can be used as prior art, as long as you are able to show a strong relation to the invention and the publication date of the material. Here are the other 3 patent cases, including one where a movie from 1968 was considered as prior art for Apple's iPad design: [4 cases where examiner found ridiculously awesome prior art.](http://www.greyb.com/4-cases-examiner-found-ridiculously-awesome-prior-art/)
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
I am not aware of a definitive answer to your question either in the statutes or in case law, but I can set out likely parameters for making such a determination. The following excerpt from 35 USC section 102 most directly addresses the issues relevant to your question: *A person shall be entitled to a patent unless— (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or* ... There are other limiting factors that are pertinent to your question. The key question, already addressed earlier, is whether the YouTube video can be proven to be prior to the date of the claimed invention. I don't know enough about the innards of the YouTube system to know whether definitive proof is available, though I suspect it is - provided that the Google folks will cooperate in demonstrating the actual publication date of the video. There is plenty of room for argument from the other side as to whether posting on YouTube is "publication" for purposes of the statute. Also, given the nature of the medium, it may be difficult to establish that the video is "truthy." Remember the videos showing fantastic basketball shots and people launched through hoops? It is also possible to misinterpret what a video actually represents, for example whether it shows an actual working process or a mockup such as stop motion animation. It comes down to proof - convincing a court as to the factual nature of the video and the date of its publication.
Under the law, prior art must fit within one of the categories defined in 35 U.S.C. 102. The most likely categories for a YouTube video are (1) a "printed publication" or (2) evidence of the invention being "known ... by others in this country." There is at least one case holding that a video is NOT a printed publication: Diomed, Inc. v. AngioDynamics, Inc., 450 F. Supp. 2d 130 (D. Mass. 2006). In that case, the court ruled that "The definition of 'printed' cannot be stretched to include a presentation which does not include a paper component or, at minimum, a substitute for paper such as the static presentation of slides." To be "known by others," you must be able to show that the video was sufficiently available to the public. This could be demonstrated by showing that it is searchable on the key search terms or that it was actually accessed by a number of people.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
I am not aware of a definitive answer to your question either in the statutes or in case law, but I can set out likely parameters for making such a determination. The following excerpt from 35 USC section 102 most directly addresses the issues relevant to your question: *A person shall be entitled to a patent unless— (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or* ... There are other limiting factors that are pertinent to your question. The key question, already addressed earlier, is whether the YouTube video can be proven to be prior to the date of the claimed invention. I don't know enough about the innards of the YouTube system to know whether definitive proof is available, though I suspect it is - provided that the Google folks will cooperate in demonstrating the actual publication date of the video. There is plenty of room for argument from the other side as to whether posting on YouTube is "publication" for purposes of the statute. Also, given the nature of the medium, it may be difficult to establish that the video is "truthy." Remember the videos showing fantastic basketball shots and people launched through hoops? It is also possible to misinterpret what a video actually represents, for example whether it shows an actual working process or a mockup such as stop motion animation. It comes down to proof - convincing a court as to the factual nature of the video and the date of its publication.
The USPTO published a document related to America Invents Act (AIA) implementation in which it stated that a YouTube video can “qualify as a Printed Publication under AIA and pre-AIA laws.” (Page 15 of ‘[First Inventor to File (FITF) Comprehensive Training: Prior Art Under the AIA](http://www.uspto.gov/sites/default/files/aia_implementation/fitf_comprehensive_training_prior_art_under_aia.pdf)‘) The AIA adds a new provision to 35 USC 102(a)(1) in the form of “otherwise available to the public,” which had no counterpart in pre-AIA law. The USPTO explains further that this covers “an oral presentation at a scientific meeting, a demonstration at a trade show, a lecture or speech, a statement made on a radio talk show and a YouTube video, Web site, or other on-line material (this type of disclosure may also qualify as a printed publication under AIA and pre-AIA law).” [Citius Minds](http://www.citiusminds.com) has published an article which explains in what form a YouTube video should be submitted so that it will be considered by the examiner or jury, provide the necessary evidence, and make the most sense: <http://www.citiusminds.com/blog/can-youtube-videos-be-used-as-prior-arts/>
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
I am not aware of a definitive answer to your question either in the statutes or in case law, but I can set out likely parameters for making such a determination. The following excerpt from 35 USC section 102 most directly addresses the issues relevant to your question: *A person shall be entitled to a patent unless— (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or* ... There are other limiting factors that are pertinent to your question. The key question, already addressed earlier, is whether the YouTube video can be proven to be prior to the date of the claimed invention. I don't know enough about the innards of the YouTube system to know whether definitive proof is available, though I suspect it is - provided that the Google folks will cooperate in demonstrating the actual publication date of the video. There is plenty of room for argument from the other side as to whether posting on YouTube is "publication" for purposes of the statute. Also, given the nature of the medium, it may be difficult to establish that the video is "truthy." Remember the videos showing fantastic basketball shots and people launched through hoops? It is also possible to misinterpret what a video actually represents, for example whether it shows an actual working process or a mockup such as stop motion animation. It comes down to proof - convincing a court as to the factual nature of the video and the date of its publication.
I have seen a YouTube video cited as prior art in an Office action, and a claim rejection was based in part on the YouTube video. So yes, YouTube videos can be prior art. In the Office action, the Examiner provided a screenshot of the video as well as its URL.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
Yes, you can submit a YouTube video as prior art as long as the video is publicly available. YouTube videos usually have the publication date under the video, such as "Uploaded by X on Oct 17, 2011". If you provide only a hard copy of the video itself, it may be hard to prove that the video was public, or to prove its publication date, especially if the public version of the video is removed at a later time. I would recommend making a "Print Screen" image of each second of the video that is considered prior art. Make sure that each "Print Screen" image shows the URL of the video as well as its publication date on YouTube. Then convert each "Print Screen" image into a PDF document, combine all the PDF pages into a single document, and submit this to the USPTO. I'm a patent searcher and I have done this before. I hope this helps! :)
Yes, it can. I did a quick search and found over 100 patents with a youtube.com prior art citation. The earliest citations I found are in US 7783710, US 7844507, and US 7934725.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
Under the law, prior art must fit within one of the categories defined in 35 U.S.C. 102. The most likely categories for a YouTube video are (1) a "printed publication" or (2) evidence of the invention being "known ... by others in this country." There is at least one case holding that a video is NOT a printed publication: Diomed, Inc. v. AngioDynamics, Inc., 450 F. Supp. 2d 130 (D. Mass. 2006). In that case, the court ruled that "The definition of 'printed' cannot be stretched to include a presentation which does not include a paper component or, at minimum, a substitute for paper such as the static presentation of slides." To be "known by others," you must be able to show that the video was sufficiently available to the public. This could be demonstrated by showing that it is searchable on the key search terms or that it was actually accessed by a number of people.
The USPTO published a document related to America Invents Act (AIA) implementation in which it stated that a YouTube video can “qualify as a Printed Publication under AIA and pre-AIA laws.” (Page 15 of ‘[First Inventor to File (FITF) Comprehensive Training: Prior Art Under the AIA](http://www.uspto.gov/sites/default/files/aia_implementation/fitf_comprehensive_training_prior_art_under_aia.pdf)‘) The AIA adds a new provision to 35 USC 102(a)(1) in the form of “otherwise available to the public,” which had no counterpart in pre-AIA law. The USPTO explains further that this covers “an oral presentation at a scientific meeting, a demonstration at a trade show, a lecture or speech, a statement made on a radio talk show and a YouTube video, Web site, or other on-line material (this type of disclosure may also qualify as a printed publication under AIA and pre-AIA law).” [Citius Minds](http://www.citiusminds.com) has published an article which explains in what form a YouTube video should be submitted so that it will be considered by the examiner or jury, provide the necessary evidence, and make the most sense: <http://www.citiusminds.com/blog/can-youtube-videos-be-used-as-prior-arts/>
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
Well, all evidence might be taken into consideration. To be prior art, evidence must be public, must have a date, and must disclose one or more elements of the subject matter claimed in the new invention. But, like other evidence on the internet, a video can be deleted from YouTube, and it will then be a problem to prove that a year ago YouTube (or any other public website) contained a specific video. If you want to make a reference in a patent application, it's better to provide a "hard copy" of the video to the USPTO on CD or DVD.
The USPTO published a document related to America Invents Act (AIA) implementation in which it stated that a YouTube video can “qualify as a Printed Publication under AIA and pre-AIA laws.” (Page 15 of ‘[First Inventor to File (FITF) Comprehensive Training: Prior Art Under the AIA](http://www.uspto.gov/sites/default/files/aia_implementation/fitf_comprehensive_training_prior_art_under_aia.pdf)‘) The AIA adds a new provision to 35 USC 102(a)(1) in the form of “otherwise available to the public,” which had no counterpart in pre-AIA law. The USPTO explains further that this covers “an oral presentation at a scientific meeting, a demonstration at a trade show, a lecture or speech, a statement made on a radio talk show and a YouTube video, Web site, or other on-line material (this type of disclosure may also qualify as a printed publication under AIA and pre-AIA law).” [Citius Minds](http://www.citiusminds.com) has published an article which explains in what form a YouTube video should be submitted so that it will be considered by the examiner or jury, provide the necessary evidence, and make the most sense: <http://www.citiusminds.com/blog/can-youtube-videos-be-used-as-prior-arts/>
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
Yes, you can submit a YouTube video as prior art as long as the video is publicly available. YouTube videos usually have the publication date under the video, such as "Uploaded by X on Oct 17, 2011". If you provide only a hard copy of the video itself, it may be hard to prove that the video was public, or to prove its publication date, especially if the public version of the video is removed at a later time. I would recommend making a "Print Screen" image of each second of the video that is considered prior art. Make sure that each "Print Screen" image shows the URL of the video as well as its publication date on YouTube. Then convert each "Print Screen" image into a PDF document, combine all the PDF pages into a single document, and submit this to the USPTO. I'm a patent searcher and I have done this before. I hope this helps! :)
I am not aware of a definitive answer to your question either in the statutes or in case law, but I can set out likely parameters for making such a determination. The following excerpt from 35 USC section 102 most directly addresses the issues relevant to your question: *A person shall be entitled to a patent unless— (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or* ... There are other limiting factors that are pertinent to your question. The key question, already addressed earlier, is whether the YouTube video can be proven to be prior to the date of the claimed invention. I don't know enough about the innards of the YouTube system to know whether definitive proof is available, though I suspect it is - provided that the Google folks will cooperate in demonstrating the actual publication date of the video. There is plenty of room for argument from the other side as to whether posting on YouTube is "publication" for purposes of the statute. Also, given the nature of the medium, it may be difficult to establish that the video is "truthy." Remember the videos showing fantastic basketball shots and people launched through hoops? It is also possible to misinterpret what a video actually represents, for example whether it shows an actual working process or a mockup such as stop motion animation. It comes down to proof - convincing a court as to the factual nature of the video and the date of its publication.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
I am not aware of a definitive answer to your question either in the statutes or in case law, but I can set out likely parameters for making such a determination. The following excerpt from 35 USC section 102 most directly addresses the issues relevant to your question: *A person shall be entitled to a patent unless— (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or* ... There are other limiting factors that are pertinent to your question. The key question, already addressed earlier, is whether the YouTube video can be proven to be prior to the date of the claimed invention. I don't know enough about the innards of the YouTube system to know whether definitive proof is available, though I suspect it is - provided that the Google folks will cooperate in demonstrating the actual publication date of the video. There is plenty of room for argument from the other side as to whether posting on YouTube is "publication" for purposes of the statute. Also, given the nature of the medium, it may be difficult to establish that the video is "truthy." Remember the videos showing fantastic basketball shots and people launched through hoops? It is also possible to misinterpret what a video actually represents, for example whether it shows an actual working process or a mockup such as stop motion animation. It comes down to proof - convincing a court as to the factual nature of the video and the date of its publication.
Yes, it can. I did a quick search and found over 100 patents with a youtube.com prior art citation. The earliest citations I found are in US 7783710, US 7844507, and US 7934725.
16
I'm trying to determine whether there is evidence that **definitively confirms** that a YouTube video can be submitted as prior art. If there is an example of one being used as the grounds for rejecting an application, that would obviously work, as would a statement or copy from the USPTO, but I wasn't able to find anything on their site that would specifically apply.
2012/09/05
[ "https://patents.stackexchange.com/questions/16", "https://patents.stackexchange.com", "https://patents.stackexchange.com/users/7/" ]
I have seen a YouTube video cited as prior art in an Office action, and a claim rejection was based in part on the YouTube video. So yes, YouTube videos can be prior art. In the Office action, the Examiner provided a screenshot of the video as well as its URL.
Yes, you can submit videos (whether from YouTube or other platforms), audio, and even images as prior art against an invention. As [@patentico](https://patents.stackexchange.com/a/212/15787) mentioned, providing only a hard copy of a video is a bad idea, as you may face difficulties proving the publication date of that video; you may need to collect evidence of its original publication date. Also, a video will be considered prior art even if the video and the patent come from the same source (if the time between the publication date of the video and the filing date of the patent exceeds the grace period provided by local IP laws). As per your request, I have found one infamous patent case of Apple's in which their patent was invalidated by a video used as prior art. The interesting fact about that case is that the video considered prior art was Apple's own keynote video for the iPhone release, presented by Steve Jobs. The iPhone's [rubber-banding effect patent](http://worldwide.espacenet.com/publicationDetails/originalDocument?FT=D&date=20100929&DB=EPODOC&locale=en_EP&CC=EP&NR=2059868B1&KC=B1&ND=4) was invalidated in Germany after Google submitted Steve Jobs' keynote video as prior art. The court even rejected various amended claims by Apple; only 3 out of 20 claims were upheld. While searching for cases, I found an article with 4 such patent disputes where visual content was considered prior art (including the case mentioned above). Not just a YouTube video but a comic, a picture, or even a movie can be used as prior art, as long as you are able to show a strong relation to the invention and the publication date of the material. Here are the other 3 patent cases, including one where a movie from 1968 was considered prior art for Apple's iPad design: [4 cases where examiner found ridiculously awesome prior art.](http://www.greyb.com/4-cases-examiner-found-ridiculously-awesome-prior-art/)
3,121
In Italian, when I am talking about an unknown person, I would use the third person singular, masculine. For example, I could say *Chi ha rubato le chiavi alla ragazza è **qualcuno** che ha potuto avvicinarsi alla ragazza* (literally, "he who has stolen the girl's keys is somebody who has been able to approach the girl"); if I used the feminine gender, I would imply I am talking about a woman/girl. In that case, the sentence would not use *qualcuno* but *qualcuna*. Is using *he* as a gender-neutral pronoun acceptable in English? A friend of mine said that she considers using *he* as a neutral pronoun acceptable, but I have noticed that (for example) some error messages given by applications or websites use the singular *they*.
2013/02/25
[ "https://ell.stackexchange.com/questions/3121", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/95/" ]
Would your example in full be something like this in English? > > I don’t know who has stolen the girl's keys. [?] is somebody who has > been able to approach the girl. > > > In that case we can begin the second sentence with *It*. This is unusual, and it doesn’t mean that the thief is inanimate. Rather, *it* refers to the entire thieving episode. StoneyB’s answer gives a good picture of the overall situation, but I would be less cautious about using *they* to refer to a singular antecedent. Such use has a long and respectable history as shown [here](http://www.crossmyt.com/hc/linghebr/austheir.html). Moreover, the Oxford English Dictionary’s definition of *they* is > > Often used in reference to a singular noun made universal by every, > any, no, etc., or applicable to one of either sex (= ‘he or she’). > > > For those who find such use awkward, a workaround is often available by making the antecedent itself plural.
It was acceptable, and indeed the dominant convention, until the 1970s, when what was then called the *Women's Liberation* movement called its propriety into question. Today it's no longer acceptable to most of the institutions, public and private, which determine what is published and what is not. ***Eschew it.*** The language is still trying to sort out what is to replace it. *He or she* (or *s/he*) and its inflections *his or her*, *him or her* (*his/her*, *him/her*) are often used, but are clumsy (and the slashed variants are unspeakable). There's considerable sentiment for singular *they*, which is perfectly acceptable in isolation but creates impossible ambiguities and grammatical cruxes in complex utterances; I cannot believe it will ever be embraced by the academic community. People have suggested many new coinages to supply its place, but none of these has gained wide acceptance. My recommendation is that while you're waiting for the language to shake down you find a way to recast your sentences either with plurals or without gendered pronouns; in this particular case: "Whoever stole the girl's keys has to have been someone who was able to approach her".
203,665
I am looking to install a 100-amp subpanel from my main. The run is approx. 225' from the main panel down into my lower front yard. The panel will be used to power: * a 30-amp service for my RV * my recording studio/office in a 12x16 shed with: + ten 15-amp receptacles + a 120V electric wall heater + a 5k BTU 120V window AC unit (later I will switch to a mini split). I am trying to figure out what size wire to use and what breaker I need in the main panel to supply power to the sub. I have one unused 20-amp breaker/spot open in the panel (an old electric dryer line, no longer in use, is available). I will not be buying a larger RV and there will be no shop tools used in the shed. THANK YOU to everyone who has commented, I really appreciate the input!!!!
2020/09/18
[ "https://diy.stackexchange.com/questions/203665", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/123435/" ]
Yikes. All your load is 120V. ----------------------------- The 30A RV is almost certainly a "TT30" which is 120V/30A. Receps are all 120V obviously. The good news is, receps are 0 amps. The bad news is, things *which plug into receps* are various amps, and since you haven't discussed what those will be, we have to punt over to the usual assumption of 180 VA per recep, or 1800 VA the bunch, or 15A. 120V electric wall heater will be 12 amps typically. However presumably you will not be running it at the same time you are maxing out the RV. You won't be running the A/C at the same time as the heater, and it's less than 12A, so we'll count it as part of the heater's allocation. **There are too few simultaneous loads here to be able to effectively balance them on the two 120V phases included as part of 240V.** Therefore while we can try to balance them on the 240V, we can't count on that balance existing. Therefore we must think about voltage drop for a single load. The worst case (for balancing and voltage drop) is the RV is maxed out @ 30A and nothing else is on. That must be the basis of our voltage drop calcs. (so for folks who do voltage drop calcs... just plugging 240V and breaker trip into your friendly neighborhood voltage-drop calc *won't tell the whole story*. Loads this lopsided require thinking about drop in instances of pure 120V load.) It's a pity; if only we had a way to guarantee balancing of the 120V loads, we could do our calcs based on your 240V draw, and the wires would be a lot thinner. A 10 KVA or even 5 KVA transformer could do that... but realistically, unless you get lucky on Craigslist (and know *exactly* what you are buying), the transformer would be more expensive than fatter wire. Failing to run this 240V would be *nuts* ---------------------------------------- So you might think "Why not run it as 120V-only?" Because your future mini-split won't like that. And you may get a bigger RV someday. 
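The load tally in the answer above (180 VA per receptacle, heater and A/C mutually exclusive) can be sketched as a quick check. All figures are the answer's own planning assumptions, not measured values:

```python
# Rough 120 V load tally using the planning figures from the answer above.
# 180 VA per general-use receptacle is the usual planning assumption.
receptacle_va = 10 * 180                  # ten receptacles -> 1800 VA
receptacle_amps = receptacle_va / 120.0   # 15.0 A
heater_amps = 12                          # typical 1440 W / 120 V wall heater
rv_amps = 30                              # TT-30 RV connection, 120 V only

# The heater and the window A/C never run together, and the A/C draws
# less, so the heater figure covers both.
total_120v_amps = rv_amps + receptacle_amps + heater_amps
print(total_120v_amps)                    # 57.0 A of 120 V load, undiversified
```

This is the undiversified total; as the answer notes, the sizing worst case for voltage drop is the RV alone at 30 A on one leg.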
I think we need to *plan* for **40A @ 240V** of service... but also watch our voltage drop at **30A @ 120V**. (24A [80%] for the RV and 6A for other misc loads.) 3% is a wire salesman's ideal but try earnestly to keep it under 5%. So this looks like #2 aluminum. ------------------------------- #3 aluminum would work, but it's generally a unicorn. And the limiting factor is the ability to support a lopsided 120V load based on the RV being most of it. On the upside, since you're not in Canada, you could breaker the #2 aluminum at 90A. That means it will be totally ready for that mini-split and larger 240V/50A RV should you ever get one -- at 60A actual, voltage drop @ 240V will only be 3.3%. If you wanted to super-chintz this thing, you might be able to swing #4 aluminum, but under certain conditions voltage drop would be noticeable. The cost differential #2 vs #4 will be tiny compared to total project cost. Also, get a BIG panel --------------------- This wire can be breakered at 90A, which is enough to run a pretty big house. Given how ridiculously cheap breaker spaces are, and how expensive and project-blocking *running out of spaces is*, a 30-space panel is not excessive. Disregard number of "circuits" in a panel spec; that number is useless. So a 16 space/32 circuit panel is only a 16.
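The voltage-drop reasoning above can be sketched numerically. The per-1000-ft resistance used here for #2 aluminum (~0.32 Ω/kft) is an assumed wire-table value, so the percentages are estimates, not the answer's exact figures:

```python
# Hedged voltage-drop sketch for a 225 ft one-way run.
# ohms_per_kft for #2 aluminum (~0.32) is an assumed wire-table value.
def voltage_drop_percent(amps, one_way_ft, ohms_per_kft, volts):
    # Current flows out and back, so the circuit length is twice the run.
    round_trip_kft = 2 * one_way_ft / 1000.0
    drop_volts = amps * round_trip_kft * ohms_per_kft
    return 100.0 * drop_volts / volts

# Worst case from the answer: the RV maxed out alone, 30 A at 120 V.
print(round(voltage_drop_percent(30, 225, 0.32, 120), 1))  # 3.6 (%)
# A balanced 60 A load at 240 V lands in the same ballpark.
print(round(voltage_drop_percent(60, 225, 0.32, 240), 1))  # 3.6 (%)
```

Both cases stay under the 5% ceiling the answer recommends; a smaller conductor such as #4 aluminum would roughly double the 120 V figure.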
<https://www.cerrowire.com/products/resources/tables-calculators/ampacity-charts/> Here is a chart of the wire sizes needed for this job. The longer the distance from the electric source, the heavier-duty wire you'll need.
203,665
I am looking to install a 100-amp subpanel from my main. The run is approx. 225' from the main panel down into my lower front yard. The panel will be used to power: * a 30-amp service for my RV * my recording studio/office in a 12x16 shed with: + ten 15-amp receptacles + a 120V electric wall heater + a 5k BTU 120V window AC unit (later I will switch to a mini split). I am trying to figure out what size wire to use and what breaker I need in the main panel to supply power to the sub. I have one unused 20-amp breaker/spot open in the panel (an old electric dryer line, no longer in use, is available). I will not be buying a larger RV and there will be no shop tools used in the shed. THANK YOU to everyone who has commented, I really appreciate the input!!!!
2020/09/18
[ "https://diy.stackexchange.com/questions/203665", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/123435/" ]
Yikes. All your load is 120V.
-----------------------------

The 30A RV is almost certainly a "TT30", which is 120V/30A. Receps are all 120V, obviously.

The good news is, receps are 0 amps. The bad news is, things *which plug into receps* are various amps, and since you haven't discussed what those will be, we have to punt over to the usual assumption of 180 VA per recep, or 1800 VA for the bunch, or 15A.

The 120V electric wall heater will typically be 12 amps. However, presumably you will not be running it at the same time you are maxing out the RV. You won't be running the A/C at the same time as the heater, and it's less than 12A, so we'll count it as part of the heater's allocation.

**There are too few simultaneous loads here to effectively balance them across the two 120V phases that make up 240V.** So while we can try to balance them on the 240V, we can't count on that balance existing. Therefore we must think about voltage drop for a single load.

The worst case (for both balancing and voltage drop) is the RV maxed out @ 30A with nothing else on. That must be the basis of our voltage drop calcs. (So for folks who do voltage drop calcs... just plugging 240V and the breaker trip rating into your friendly neighborhood voltage-drop calculator *won't tell the whole story*. Loads this lopsided require thinking about drop under a pure 120V load.)

It's a pity; if only we had a way to guarantee balancing of the 120V loads, we could do our calcs based on your 240V draw, and the wires would be a lot thinner. A 10 kVA or even 5 kVA transformer could do that... but realistically, unless you get lucky on Craigslist (and know *exactly* what you are buying), the transformer would cost more than fatter wire.

Failing to run this 240V would be *nuts*
----------------------------------------

So you might think "Why not run it as 120V-only?" Because your future mini-split won't like that. And you may get a bigger RV someday.
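The load tally described above can be sketched in a few lines (an informal estimate with the assumptions stated in the text, not a formal NEC Article 220 calculation):

```python
# Rough connected-load tally, using the assumptions above.
VOLTS = 120
receptacles = 10 * 180   # 180 VA per general-use receptacle (the usual assumption)
rv = 30 * VOLTS          # TT30: 120V/30A RV service
heater = 12 * VOLTS      # typical 120V wall heater; the smaller, non-simultaneous
                         # A/C rides inside this same allocation

total_va = receptacles + rv + heater
print(f"receptacle load: {receptacles // VOLTS}A")                 # the 15A figure above
print(f"connected total: {total_va} VA ({total_va // VOLTS}A @ 120V)")
```

These loads rarely run all at once, which is exactly why the worst single-leg case (the RV alone at 30A) drives the voltage-drop math rather than the grand total.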
I think we need to *plan* for **40A @ 240V** of service... but also watch our voltage drop at **30A @ 120V** (24A [80%] for the RV and 6A for other misc loads). 3% is a wire salesman's ideal, but try earnestly to keep it under 5%.

So this looks like #2 aluminum.
-------------------------------

#3 aluminum would work, but it's generally a unicorn. The limiting factor is the ability to support a lopsided 120V load, with the RV being most of it.

On the upside, since you're not in Canada, you could breaker the #2 aluminum at 90A. That means it will be totally ready for that mini-split and a larger 240V/50A RV should you ever get one -- at 60A actual draw, voltage drop @ 240V will only be about 3.3%.

If you wanted to super-chintz this thing, you might be able to swing #4 aluminum, but under certain conditions the voltage drop would be noticeable. The cost differential of #2 vs. #4 will be tiny compared to the total project cost.

Also, get a BIG panel
---------------------

This wire can be breakered at 90A, which is enough to run a pretty big house. Given how ridiculously cheap breaker spaces are, and how expensive and project-blocking *running out of spaces* is, a 30-space panel is not excessive.

Disregard the number of "circuits" in a panel spec; that number is useless. A "16 space/32 circuit" panel is only a 16.
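If you want to check these drop figures yourself, here's a rough sketch. The resistance values are approximate DC ohms per 1000 ft (NEC Chapter 9, Table 8 ballpark); AC effective impedance differs slightly, which is why exact percentages can vary a few tenths depending on which table you use:

```python
# Rough voltage-drop check for the 225 ft feeder run discussed above.
# Approximate DC resistance, ohms per 1000 ft -- estimates only.
R_PER_KFT = {
    "#2 Al": 0.319,
    "#4 Al": 0.508,
}

def vdrop_pct(awg, amps, volts, one_way_ft=225):
    """Percent voltage drop for a 2-wire circuit (round-trip conductor length)."""
    loop_ohms = R_PER_KFT[awg] * (2 * one_way_ft) / 1000.0
    return 100.0 * amps * loop_ohms / volts

# Worst case: RV maxed at 30A on one 120V leg, #2 aluminum
print(f"#2 Al, 30A @ 120V: {vdrop_pct('#2 Al', 30, 120):.1f}%")  # ~3.6%, under the 5% target
# Future: 60A actual @ 240V on the same #2 aluminum
print(f"#2 Al, 60A @ 240V: {vdrop_pct('#2 Al', 60, 240):.1f}%")  # ~3.6% with these DC values
# The chintz option: #4 aluminum at the same worst-case 120V load
print(f"#4 Al, 30A @ 120V: {vdrop_pct('#4 Al', 30, 120):.1f}%")  # ~5.7%, over the target
```

Slightly different table values land the 60A @ 240V case nearer the ~3.3% quoted above; either way, #2 aluminum clears the 5% target comfortably and #4 aluminum does not in the worst single-leg case.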
The problem with trying to calculate voltage drop is that the actual load must be used to determine loss, and you don't really supply enough load to justify a 100A panel, or to produce a voltage loss that would be significant.

With the load you show, you could easily get by with 75°C #4 copper (#2 aluminum) on a 240V 60A feeder loaded at 45A and still only get about 5% voltage loss even in the most imbalanced scenario: the 30A RV and the shop's non-simultaneous heat/cooling on one leg, and everything else connected to the other leg with nothing turned on. Putting your 120V 30A RV on one leg and the shop heat/AC on the other leg, then splitting your receptacles between two 15A or 20A circuits, would provide better balance.

For a 100A feeder: #3 copper or #1 aluminum if using 75°C conductors, or the next size larger if using 60°C-rated wire or source terminations. Those are the minimum sizes allowed by code for a 100A feed, and nothing you present is enough load to justify upsizing.