How do I pick a flux cleaner that's safe for use without industrial ventilation? I was looking at getting this flux cleaner: Fluxclene FLU200D, but then I noticed that the datasheet mentions an inhalation toxicity of 300 ppm. That sounds like it's easily exceeded by just spraying the cleaner onto my PCBs, even if I have reasonable ventilation. Should I avoid this cleaner, or am I misunderstanding this parameter? More generally, how do I pick a safe flux cleaner? I've used isopropyl alcohol and found it entirely unsatisfactory at dissolving flux residue. <Q> Cleaning flux from a PCB is mandatory if it is RA, RMA or water-soluble. <S> If no-clean flux is used, then there is no need to clean unless you intend to apply coatings, which may or may not adhere properly. <S> Kayzen and Zestron make non-solvent-based cleaners. <S> They are saponifiers (soaps) with high alkalinity which remove flux and suspend it. <S> These cleaners require very good rinsing with (deionized) water, since they have low resistance and can contribute to ionic contamination problems. <S> I have used Simple Green cleaner with remarkable results on personal projects. <S> IPA is not a good flux remover, except immediately and as a temporary measure during touch-up, prior to a thorough cleaning with a saponifier. <A> You could try a no-clean flux like this one: <S> http://www.gotopac.com/AIM_AF_NC264_5_1GAL_p/af-nc264-5-1gal.htm With this flux you will not have to use a flux cleaner at all, <S> which may be best if you are in an area where ventilation is an issue. <S> Just be careful: no-clean flux does burn off quicker, increasing the risk of oxidation on your solder tip. <A> Keep in mind that if you're working on a low-power device where battery life is a concern, then no-clean fluxes can lead to parasitic currents, shortening battery life. <A> I use "Flux Wash" from the DeOX brand, and it works very well. <S> Sometimes I'll then wash the PCBA down with distilled water too.
How to connect multiple of the same device to an Arduino using I2C? I want to use two of the same magnetometer (HMC5883L) with my Arduino, but I cannot figure out the code for addressing each of them separately. I have read online that connecting multiple devices is completely doable as long as you call their respective addresses; the issue I'm having is that, because I'm using two of the same device, they each have the same address. So my questions are: can I change the I2C address of one of my magnetometers without breaking it, how do I do this, is it a permanent change, and if so, can I revert the HMC5883L back to default if need be? <Q> I just read the datasheet; without some external hardware (like some kind of multiplexing buffer with channel select and chip enable) you cannot have two of these devices on the same I2C bus. <S> The device you are using has a fixed, factory-set address. <S> There is no way to change its 7-bit I2C bus address by software, or even by external pins. <S> More complex ATMEL 8-bit AVRs like the XMEGAs have multiple I2C interfaces, so with those you could have two devices, one per channel. <S> The same goes for the simple and smaller ARM Cortex M0-M3 parts, for example; they all have multiple bus interfaces that can deal with this issue. <S> Something you can do with a bit of hardware and software is to have an IC which blocks off the I2C serial clock (SCL) to one device or the other, and alternate which one is receiving the clock signal and is therefore able to receive and respond to commands. <S> I guess a simple dual MOSFET with XOR control at the gates could do it, with simple circuitry. <S> Otherwise, some kind of line driver/buffer chip with an enable pin and dual channel/multiplexed output will allow you to switch which output gets the SCL signal. <S> Either way it's not pretty. 
<S> You can always find a second, similar magnetometer IC/module that has a different hard-coded I2C address, or at least the ability to change it (usually via external pin configurations/resistors) to allow multiple devices on the same bus. <S> EDIT: <S> Texas Instruments has an I2C troubleshooting document which, on page 8, shows a simpler way than I described to multiplex the I2C bus into sub-sections, to deal with conflicting slave address issues like the one you have. <S> Good luck! <A> Common ways: use two I2C channels, <S> either hardware or bitbanged/software I2C. <S> It's not a complicated protocol and there are plenty of libraries for this. <S> Use a hardware I2C buffer/bus switch/multiplexer etc. <S> There are many names for the same thing. <S> Some are controlled by an external GPIO; others can be controlled through their own I2C address. <S> As @Jjones mentioned, a bridge would also work. <S> Finally, many manufacturers offer alternative-address versions of the same chips. <S> Funny enough, that sensor has plenty of unused pins that could have been used for setting an address, but they decided not to for some reason. <A> The LTC4316 is your answer. <S> http://cds.linear.com/docs/en/datasheet/4316fa.pdf <S> This amazing little chip is a nearly passive solution. <S> I just used it on a board (and ended up actually setting my shift to zero and using it as a buffer). <S> What I mean by nearly passive is that you can just set your I2C address shift with resistors, and the change is invisible to your code. <S> The shift only applies to the address itself; it passes the data through unchanged. <A> You could use two Arduinos and communicate through SPI, with each one hosting an I2C slave device.
Dedicated or spare microcontrollers could be turned into a protocol bridge, like I2C-SPI, or serial-I2C, or whatever.
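For intuition about how an address translator like the LTC4316 lets two identical HMC5883L parts coexist on one bus, here is a minimal sketch of the arithmetic; the `translate` helper and the 0x40 mask are illustrative assumptions, not taken from the LTC4316 datasheet.

```python
def translate(addr7: int, xor_mask: int) -> int:
    """XOR-translate a 7-bit I2C address, in the style of a
    resistor-programmed address translator such as the LTC4316."""
    return (addr7 ^ xor_mask) & 0x7F

HMC5883L_ADDR = 0x1E  # the HMC5883L's fixed 7-bit address

# Behind a translator programmed with mask 0x40 (illustrative value),
# the same chip appears at a different bus address:
print(hex(translate(HMC5883L_ADDR, 0x40)))  # 0x5e
# A zero mask leaves the address unchanged (translator acts as a buffer):
print(hex(translate(HMC5883L_ADDR, 0x00)))  # 0x1e
```

Because the translation happens in hardware, the sketch above is only a model of the address mapping; your code simply addresses 0x1E and the translated address, with no software changes on the sensor side.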
Is a two-element circuit series or parallel? Is the voltage source in series with the resistor, or is the voltage source in parallel with the resistor? If the voltage source is in parallel with the resistor, then why do they share the same current? If the voltage source is in series with the resistor, then why do they share the same voltage? They are both in series and in parallel, correct? <Q> A source and its loads always share the same current and voltage, because they are source and loads. <S> We speak about series/parallel connections of them separately, not together. <S> That means we say either that two (or more) loads are in series/parallel with each other, or <S> that two (or more) sources are. <A> Yes. <S> Your reasoning is correct. <S> They share the same current, so they are in series. <S> However, they are also in parallel, since they share the same voltage. <A> Two elements is basically a degenerate case. <S> The voltage across both elements is identical, so they are in parallel. <S> Most circuits consist of a lot more than two elements, though, so in most practical instances the distinction is more obvious. <A> In my opinion, you are creating the confusion by including the power source as an "element" <S> and thinking an "element" is the same as a load. <S> The configuration terms "series" and "parallel" <S> apply to two or more loads. <S> Since your circuit has only one load, neither term applies. <A> This is not a series or parallel circuit, because the definition is based on the loads, not the wires. <S> You have two wires connecting to only one element. <S> Also, the power source doesn't count as an "element" in the sense you meant. <S> Think about it like this: you have one line, like below: <S> _________________________ Is this line parallel or perpendicular? <S> Answer: <S> Neither. <S> You need two lines to determine whether they are parallel or perpendicular. <S> Now you have two lines: _________________________ _________________________ <S> They are parallel. 
<S> Similarly, you need at least two elements in a circuit for it to be parallel or series. <S> Series: |_______1_________2________| <S> Parallel: |____1____| and |____2____| as two branches across the same pair of nodes.
The current through both elements is identical, so they are in series.
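The degenerate two-element case can be checked numerically; a tiny sketch with arbitrary illustrative values:

```python
# One ideal 5 V source driving one 10 ohm resistor: the degenerate
# two-element loop discussed above (values are arbitrary).
V_source = 5.0
R = 10.0

I = V_source / R       # the single loop current, shared by both elements
V_resistor = I * R     # voltage across the resistor

# "Series" property: both elements carry the same current (one loop).
# "Parallel" property: both elements sit across the same two nodes,
# so they see the same voltage.
print(I, V_resistor == V_source)  # 0.5 True
```

Both tests pass at once, which is exactly why the series/parallel labels only become distinguishing with three or more elements.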
Current over CAT6 Ethernet cable: how much current can a CAT6 cable reliably handle? I want to use 3 of the cores for +5V and 3 cores for GND. I'm wondering at what current I need to think of another power solution. <Q> At 5V you'll probably run into voltage drop issues before you run into current limitations (if the length is more than a few meters). <S> Some CAT6 cable is rated as low as 60°C, and some is AWG 24, so if your ambient could be as high as 50°C, <S> the current limit might be as low as 2-3A. See, for example, this and this . <S> Edit: If the length could be as long as 10m, and assuming AWG 24 size, resistance is nominally 84 ohms/km, so 0.84\$\Omega\$ per 10 m; <S> three in parallel, round trip, would be 0.56 ohm at 20°C. <S> If a 5% voltage drop (250mV) were acceptable, that would be a current of 440mA maximum, so maybe 350-400mA maximum allowing for temperature. <A> 802.3at Type 2 PoE limits the current to 600mA per "mode" (pair of pairs), which is equivalent to 300mA per core. <S> So if you assume the IEEE got it right, then you can safely deliver about 900mA on a setup with three positive wires and three negative wires. <S> I expect the IEEE were pretty conservative, to allow for less-than-ideal installation methods, and that with a single cable in free air you could go somewhat higher. <S> However, that is not the whole story. <S> As Sphero points out, in a 5V system voltage drop is likely to become an issue before the cable rating does. <S> He came up with a current of 440mA for a 10m length and a somewhat reasonable voltage drop. <S> An obvious solution to this is to use a higher voltage supply and then step it down at the remote end. <S> However, this brings issues of its own. <S> It's not such an issue for "ghetto PoE" type systems, because Ethernet is isolated, but the fact that you are using three pairs for power (leaving one pair for data) makes me suspect that you are not planning to use Ethernet. 
<S> If you have a common ground for data and power interconnection, and you raise the power supply voltage significantly above the signal voltage, <S> then you need to think very carefully about the impact of voltage drop in the ground lead, and also about fault conditions where the ground is disconnected while the power and data lines remain connected. <A> Based on looking up power-transmission limits for 23 and 24 gauge wire, I would suggest roughly 1.5A to 2.2A depending on the quality of the cable, with the lowest end for the cheapest copper-clad-aluminium 24 gauge wire. <S> With a longer length of wire you suffer from voltage drop, because it is only 5V power; normally in PoE applications they step it up to 48V and then step down again to 5V. <S> There are ways to go beyond that if you are willing to go to the trouble. <S> Take off the outer layer and spread out the wires: they don't overheat as quickly and much more power can go through, since they count as "chassis wire" rather than power-transmission wire in the lookup tables. <S> You could also have 2 neutral wires, 2 wires at +5V and 2 wires at -5V for some applications such as LED lighting; under balanced load this would count similarly to 10V, with the current cancelling itself out in the neutral wires, similar to the idea of house power being split-phase 110V/220V in the USA over 3 wires. <S> That would give a 33% boost to power, and under nearly full/balanced load a big decrease in voltage drop, while allowing 4 different LED lights to be individually switched on and off from the 6 wires; half the time the "neutral" would be the positive side of the LED. <A> Please stick to the NFPA 79 standard, Table 13.5.1, which specifies the allowable ampacities for conductors with smaller sections (30 AWG to 10 AWG) than the NEC ampacities (14 AWG to 2000 kcmil). <S> This standard rates 2A for 24 AWG, 90 deg., single conductor. 
<S> And considering 8 current-carrying conductors inside the same cable, we should apply a 70% derating factor, giving an estimate of 1.4A for 24 AWG, 90 deg., 8 carrying conductors. <S> This could vary according to specific manufacturer certifications. <S> Also, this assumes a CAT6 cable of 24 AWG.
It depends on whether the cable is 24 or 23 AWG, on the length of the cable, and on whether it is pure copper or copper-clad aluminium.
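The voltage-drop estimate from the first answer can be reproduced in a few lines, assuming AWG 24 at a nominal 84 Ω/km and a 10 m run, as that answer does:

```python
# Rough voltage-drop sizing for 5 V over CAT6 (assumed AWG 24,
# nominally 84 ohm/km per conductor, 10 m run, three cores per rail).
length_m = 10.0
r_per_m = 84.0 / 1000.0        # ohm per metre per conductor
n_parallel = 3                  # three cores for +5V, three for GND

r_one_way = r_per_m * length_m / n_parallel
r_round_trip = 2 * r_one_way    # supply and return paths
print(round(r_round_trip, 2))   # 0.56 ohm

v_supply = 5.0
max_drop = 0.05 * v_supply      # allow a 5% drop (250 mV)
i_max = max_drop / r_round_trip
print(round(i_max * 1000))      # ~446 mA before derating for temperature
```

This matches the answer's "0.56 ohm, about 440 mA" figure; derate further for elevated ambient temperature, as the answer suggests.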
How long would an original Gameboy last on a modern lithium battery? The original Gameboy was famous for its battery life on 4 AA batteries. My HTC One smartphone can last about as long as the GB did back in the 80s. So how long would an original Gameboy last if powered by a modern lithium phone battery? <Q> Let's compare the energy densities of the battery types: Alkaline battery, 1.8 MJ/L; Lithium-ion, 2.36 MJ/L (best case); Lithium battery, 4.32 MJ/L. <S> Therefore, if a Gameboy lasted 10 hours on alkaline batteries, in theory it would last 24 hours on lithium (primary, non-rechargeable), or about 13 hours on lithium-ion batteries that occupied the same physical space. <S> If you used a phone battery, it depends on its size and capacity. <S> Most phone batteries aren't as large as four AA batteries, so you would have to go by the capacity of the battery. <S> Four AA batteries (in series for 6V) have a capacity of around 2500 mAh at a 100 mA discharge rate. <S> (The Gameboy Wikipedia entry indicates it requires about 0.7 watts, which is a little more than 100 mA at 6V.) <S> An HTC One battery is a lithium-polymer battery at 2300 mAh, which I'll assume is 3.7V. <S> Since the voltages are different, there would have to be some conversion, which would introduce efficiency losses, but we'll assume 100% for now. <S> The four alkaline AA batteries then are 2500 mAh * 6V = 15000 mWh. <S> The HTC One battery is 2300 mAh * 3.7V = 8510 mWh. <S> Based on this, you'd get roughly half of the runtime using your phone battery. <A> Very roughly, an alkaline AA cell is around 2Ah, and 4 in series are 6V, so 12Wh. <S> An iPhone 5 battery is about 1.5Ah at 3.7V, so less than 6Wh. <S> We might expect the Gameboy to run for roughly half as long on an iPhone 5 battery as it would on good-quality fresh alkaline AA batteries. <S> An iPad 3 battery, on the other hand, is about 43Wh, so we might expect it to last about 3.5 times as long. 
<A> A typical alkaline AA has about 3.9 watt-hours, so 4 AA batteries will have around 15.6 watt-hours. <S> A Galaxy S5 lithium-ion battery has 10.78 watt-hours.
So actually, the original Gameboy would have a shorter lifetime on a modern-day lithium-ion phone battery.
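The watt-hour arithmetic above can be sketched as follows, assuming the 0.7 W draw from the Gameboy Wikipedia entry and 100% conversion efficiency, as the first answer does:

```python
# Back-of-the-envelope runtime comparison (figures from the answers above).
gameboy_power_w = 0.7                 # per the Gameboy Wikipedia entry

aa_pack_wh = 2.5 * 6.0                # 2500 mAh at 6 V -> 15.0 Wh
htc_one_wh = 2.3 * 3.7                # 2300 mAh at 3.7 V -> 8.51 Wh

print(round(aa_pack_wh / gameboy_power_w, 1))  # ~21.4 h on alkaline AAs
print(round(htc_one_wh / gameboy_power_w, 1))  # ~12.2 h on the phone battery
print(round(htc_one_wh / aa_pack_wh, 2))       # ~0.57 of the AA runtime
```

Real conversion losses from 3.7 V up to the Gameboy's 6 V rail would shave these numbers further, so "roughly half the runtime" is the honest summary.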
Is there a benefit to charging a supercapacitor bank in parallel and then discharging in series? I'm interested in building a hand-crank charger for a 12V supercapacitor bank. I'll need multiple supercaps in series to hit 12V, but to my novice understanding it seems more efficient to charge in parallel and then switch to series for discharge. I get more capacitance and my resistance should be lower, thereby reducing the crank torque of the hand crank. I'm looking for someone who knows way more than me to tell me if I'm wrong. Thanks! <Q> With a given hand-crank X and supercapacitor Y, if you stack up several Y's to charge with X, this will be lighter work, but will take longer. <S> That last part is of course simply conservation of energy. <S> If crank X cannot supply the stacked voltage, you will not get them full, and parallel would be an option. <S> Your initial conclusion is correct: you get a higher visible capacitance and lower resistance, but to a generator/crank this is actually "more work". <S> Think of a low resistance as a short circuit and a high resistance as unconnected wires. <S> Try to turn the crank with nothing connected, then short the crank and try to turn it again. <S> You will soon find the short circuit is a lot more work. <S> Edit, due to your comment above: be aware that the K-Tor, as advertised, supplies 120VDC. <S> No 12V at all. <S> You will need a switching power supply to be able to transfer the 120VDC somewhat efficiently to a 12V bank of power buffers such as the caps you want to use. <S> In that case you are free to optimise for the capacitor bank you build <S> (13.5V for 5 caps stacked, for example). <A> This is a 'standard' technique. <S> There is a device called a Marx generator that is used to generate very high voltages. <S> It consists of a series of capacitors connected in parallel with resistors and in series with spark gaps. 
<S> The capacitors charge in parallel through the resistors, then discharge in series through the spark gaps, multiplying the input voltage by the number of capacitors. <S> It sounds like you want to do the same thing, but at a much lower voltage. <S> Adding the switches required to select between series and parallel will certainly add quite a bit of complexity, and it's not clear whether it is worth it or not. <S> The switches could also require quite a bit of power to actuate, depending on what sort of device you use; this power will not be available for your load. <S> It might be a good idea to look into the characteristics of your generator. <S> If you put the caps in parallel, the voltage will change more slowly with the input current. <S> This could be a good configuration if you have a low-voltage generator that can supply a lot of current. <S> Otherwise, it might be a better idea to put them in series. <A> When your capacitors are paralleled for charging, the total capacitance will be the sum of all the capacitances, the equivalent series resistance (ESR) will be the individual ESRs in parallel, and the cranking will be much harder than if they were in series. <S> In series, for discharge, the voltage across the string will be the sum of the individual capacitor voltages, the total capacitance of the string will be the capacitance of an individual capacitor divided by the number of capacitors in the string, and the ESR will be the sum of the individual ESRs of the capacitors in the string.
The same amount of energy will be stored in the capacitors in either configuration, so it's mainly a question of whether or not the parallel configuration will be better for charging.
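A quick numerical sketch of the series/parallel equivalents described above; the 10 F / 2.7 V / 20 mΩ per-cap values are illustrative assumptions, not from the question:

```python
# Series vs parallel equivalents for n identical supercaps
# (illustrative values: 10 F, 2.7 V, 20 milliohm ESR each).
n, c, v, esr = 5, 10.0, 2.7, 0.020

c_par, v_par, esr_par = n * c, v, esr / n       # parallel, for charging
c_ser, v_ser, esr_ser = c / n, n * v, esr * n   # series, for discharging

e_par = 0.5 * c_par * v_par ** 2   # stored energy at full charge, parallel
e_ser = 0.5 * c_ser * v_ser ** 2   # stored energy at full charge, series

print(c_par, c_ser)                        # 50.0 2.0
print(round(v_ser, 1), round(esr_ser, 2))  # 13.5 V string, 0.1 ohm ESR
print(round(e_par, 2), round(e_ser, 2))    # 182.25 182.25 (joules, equal)
```

The stored energy comes out identical either way, which is the point of the target sentence: the only real question is which configuration suits the crank/generator during charging.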
What happens to the capacitance of a system if capacitors are charged in parallel and then put into a series circuit? If you have 2 capacitors, 2V, 2F each, and you charge them in parallel, the system will have 2V and 4F when you attach a load to it. If you take the charged capacitors and then put them in series, does the system just change to 4V, 1F? What happened to the other 3F? Why does this happen? <Q> Each capacitor stores energy, which is conserved. <S> The energy stored in one of the capacitors is $$\frac{1}{2}CV^2 = \frac{1}{2}(2\:\mathrm F)(2\:\mathrm V)^2 = 4\:\mathrm J$$ for a total of \$8\:\mathrm J\$ of stored energy. <S> If the charged capacitors are placed in parallel appropriately, the voltage across the combination is \$2\:\mathrm V\$ and the energy stored is \$8\:\mathrm J\$. <S> Thus the equivalent capacitance is $$C_\text{EQ} = 2\frac{8\:\mathrm J}{(2\:\mathrm V)^2} = 4\:\mathrm F$$ <S> If the charged capacitors are placed in series appropriately, the voltage across the combination is \$4\:\mathrm V\$ and the energy stored is \$8\:\mathrm J\$. <S> Thus, the equivalent capacitance is $$C_\text{EQ} = 2\frac{8\:\mathrm J}{(4\:\mathrm V)^2} = 1\:\mathrm F$$ <S> Yes, there is a \$3\:\mathrm F\$ difference, but asking "what happened to the other 3 farads?" <S> is like asking "what happened to the other 3 ohms?" <S> when comparing series- and parallel-connected 2 ohm resistors. <S> No capacitance has 'vanished'. <S> Both capacitors still have \$2\:\mathrm F\$ of capacitance each. <S> What has changed is the configuration of the capacitors and, thus, the equivalent capacitance as seen by an external circuit. <A> Put very simply, the capacitance of a capacitor is related to the area of its metal plates storing the charge and the distance between them: $$C = \epsilon A/d$$ <S> where \$A\$ is the area, \$d\$ is their distance, and \$\epsilon\$ is a constant. 
<S> Which means for the purposes of this simplified discussion we can ignore \$\epsilon\$. <S> I am going to leave out units, because I am too lazy at 4AM to add superscripts and whatnot. <S> If we have one capacitor with A = 2, d = 1, we get C = 2. <S> Now we add another capacitor next to it. <S> This makes the two capacitors a system with effectively twice that area. <S> So we "see" a capacitor of A = 4, but still d = 1, and we get C = 4. <S> Now if we put one capacitor of A = 2 and d = 1 on top of another of A = 2 and d = 1, then on the outside, at the topmost pin and the bottommost pin, we only see A = 2. <S> We couldn't ever see anything other than A = 2, because our terminal has an area of 2. <S> But by adding them together we made the total gap between the two pins d = 2. <S> So now we have a capacitor of A = 2 and d = 2, which makes C = 1. <S> What little this means for charge and stored energy concerning Coulomb et al. I leave to another, as the above explains the effect, and I am off. <A> Yes, it does. <S> The energy contained is the same though, as the energy is \$\frac{1}{2}CV^2\$. <S> So putting them in series doubles the voltage, but the capacitance must decrease as a result.
Whether the capacitors are placed in parallel or series, the amount of energy stored is the same.
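The energy bookkeeping above can be checked numerically for the question's 2 F / 2 V capacitors:

```python
# Two identical pre-charged caps (2 F at 2 V each), reconnected two ways.
C, V = 2.0, 2.0
energy_each = 0.5 * C * V ** 2          # 4 J per capacitor
total_energy = 2 * energy_each          # 8 J in the system

# Parallel: still 2 V across the pair, so C_eq = 2E / V^2
c_parallel = 2 * total_energy / V ** 2
# Series: the voltages add to 4 V, so C_eq = 2E / (2V)^2
c_series = 2 * total_energy / (2 * V) ** 2

print(c_parallel, c_series, total_energy)  # 4.0 1.0 8.0
```

The stored 8 J never changes; only the equivalent capacitance seen from the terminals does, which is the answer's whole point.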
Figuring resistor wattage and value. I have a remote-controlled airplane that I am putting LED lights on. The power supply is a lithium-polymer battery rated at 22.2 volts and 4000 milliamps. I have an LED that is rated between 12-19 volts. How do I figure out what size (wattage) and what value of resistor to use? I have heard about Ohm's law and am familiar with the equation, but obviously don't know how to apply it. <Q> I'm going to take a WAG and assume this LED you have is designed for automotive use on a "12VDC" source (typically more like 14V), so it already has a resistor in there. <S> Be sure to verify this before hooking it up to a supply directly, or you could damage the LED! <S> In that case, you will only need to add to the resistor(s) that are in there. <S> You can attach it to a power supply that outputs something like 14V (preferably a bench supply) and measure the current Im. <S> Take that current Im, let's say it's 100mA, and calculate the value of the extra resistor: <S> Rs = \$\frac{22V-14V}{Im}\$, <S> so for our example it would be 80 ohms. <S> Pick a standard E24 value that's the same or a bit higher (say 82 ohms in this case). <S> Power will be \$\frac{(22V-14V)^2}{R}\$, so for our example 0.8W. <S> Pick one that's rated a bit higher, say 1W or 2W. Go to your favorite distributor and find a resistor with the required specs. <A> You state that the LED has a voltage between 12-19 volts. <S> All of the LEDs that I have used have been rated at one specific voltage, so I am not sure if you are mistaken with this voltage-range rating? <S> I think you should research this again. <S> The other piece of information that comes with the LED is its power rating, which should be labelled on the package directly; for example, for an LED of type X it could be given as a 40 mW, 2V LED. <S> Another indirect way of indicating the power rating of an LED is to give its rated current, for example, for the same type X LED: 20mA, 2V. 
<S> Power is then calculated via the formula P = V × I, i.e. power = voltage × current = 20mA × 2V = 40 mW. <S> Once you have the power rating, P, of your LED, substitute your values into the following equations to figure out: <S> a. <S> The current required by your LED. <S> b. <S> The size of resistor to insert in series with your LED and the 22.2 V battery. <S> a. I = (P of LED)/(voltage of LED). <S> b. R = (battery voltage - LED voltage)/I, where I is the current solved for in the first equation. <S> Finally, run a wire from the battery to the anode of the LED (tail of the arrow), then from the other (cathode) side of the LED to the resistor, and then from the other side of the resistor back to the battery. <A> (I suspect the 4000mA is not the max output but the full capacity of the battery, e.g. 4000mAh.) <S> The approximate proposed resistor wattage is P_R = (1/0.3) · (0.8 · 22.2V_BATT − √(12·19) V_LED) · I_LED. <S> Example: (30%)⁻¹ · (80% · 22.2V − 15.1V) · 100mA* (see note!) = 3.33 · 266 mW ≈ 887mW, i.e. 3×250mW or 1×1W. <S> R = U/I = 2.66V/100mA = 0.0266kΩ = 26.6Ω. <S> Check: 2.66V²/26.6Ω = 0.266 W, which must be doubled or tripled (or more) to keep the resistor cool enough (so as not to melt the solder in the worst case). <S> Note: 100mA is typical for some power LEDs (it is quite a heavy load for the battery, and may prevent your airplane from taking off** (see note 2!)). <S> You need to know the current, or determine it by applying 15V to the LED and measuring its actual current consumption with an ammeter. <S> Note 2: it's better to use a PULSED LED (which requires a driver circuit) at a standard 10% duty cycle, e.g. 1µs ON, 9µs OFF, for an average drain of approx. 10mA on the battery instead of 100mA (which requires a filter for the rest of the circuit and/or for the LED pulser circuit). Nothing is easy if you want to do it reasonably right...
Anyways, either way you need to inspect the package that your LED came in and look for a power rating (do a search on the internet with the LED's part number if you can't find it on the package).
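The first answer's resistor sizing can be sketched in a few lines; the 14 V drop and 100 mA figure are that answer's assumed bench measurements, and the full 22.2 V pack voltage is used here rather than the rounded 22 V:

```python
# Series-resistor sizing for the first answer's example:
# 22.2 V pack, LED module measured to drop ~14 V at Im = 100 mA.
v_batt, v_led, i_led = 22.2, 14.0, 0.100

r = (v_batt - v_led) / i_led          # required extra series resistance
p = (v_batt - v_led) ** 2 / r         # power dissipated in that resistor

print(round(r), round(p, 2))          # 82 ohm, 0.82 W -> pick a 1-2 W part
```

Conveniently, 82 Ω lands directly on an E24 standard value; as the answer notes, pick a resistor rated comfortably above the computed dissipation.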
Low output impedance vs high output impedance of an amplifier. I read that a low output impedance is desirable for an amplifier. I am unable to understand why from my analysis of the output side of the common-emitter amplifier. So, looking at the output of the common-emitter amplifier below, the output impedance is \$Z_{0}=r_{o}||R_{C}\cong R_{C}\$. The load and \$R_{C}\$ will be in parallel with respect to the current source. Let's consider the two extremes. \$R_{C}=\infty\$: the \$R_{C}\$ branch is open, so all the current will be flowing through the load; the load voltage is \$\beta I_{b}R_{L}\$. \$R_{C}=0\$: the \$R_{C}\$ branch provides a short for the current source, so no current reaches the load; the load voltage is 0. So what's going on? How come a low output impedance is desirable? Surely something must be wrong with the way I'm analyzing the circuit! <Q> In general, the output impedance of an amplifier is equivalent to a source impedance \$Z_{S}\$ from the perspective of a load with impedance \$Z_{L}\$. <S> Think of a voltage divider where \$V_{\text{out}}\$ is the output voltage of the amplifier without a load (i.e. \$Z_{L} = \infty\$). <S> The voltage delivered to the load is $$V_{\text{load}} = \frac{Z_{L}}{Z_{L} + Z_{S}}V_{\text{out}}$$ <S> If \$Z_{S} \gg Z_{L}\$ then \$V_{\text{load}} \approx 0\$, which is bad if you are trying to amplify a voltage for the load. <S> But if \$Z_{S} \ll Z_{L}\$ then \$V_{\text{load}} \approx V_{\text{out}}\$. <S> For current amplification you want the reverse: high output impedance from the previous stage and low input impedance from the next stage. <S> Think of a current divider: the current will mostly flow through the lower impedance, so a low input impedance in the next stage means most of the current will flow into the load. 
<S> Your case: \$R_{C}\$ actually forms part of the load for the common emitter -- the total load on the common emitter is \$R_{C} \| R_{L}\$, where \$R_{L}\$ is the input impedance looking into the load. <S> As in the general case, you want to maximize the input impedance looking into the next stage, so you want to maximize \$R_{C}\$. <S> In the limiting case where \$R_{C} = \infty\$, the only load is the input impedance looking into the load (i.e. \$R_{L}\$), which is the maximum load impedance you can get. <S> In the limiting case where \$R_{C} = 0\$, the collector is shorted to \$V_{CC}\$ and there can be no voltage gain (the collector, which is the output node, is just shorted to the supply). <A> For a current-mode output, you want high impedance. <S> For a voltage-mode output, you want low impedance. <S> For maximum power transfer, you want matched impedances. <S> An ideal current source has infinite impedance, while an ideal voltage source has zero impedance. <S> Generally people work with voltage-mode outputs, so that's why 'low impedance = good' is prevalent. <S> In RF, everything is matched, generally to 50 ohms (both inputs and outputs). <S> That output is current-mode, so you want high impedance. <S> Also, \$R_{C}\$ is not exactly the output impedance. <S> If you transform that resistor and the source into a Thevenin equivalent, then it would be the output impedance, and setting it to zero would be 'ideal'. <S> This is not equivalent to setting \$R_{C}\$ to zero in your circuit. <A> So what's going on? <S> How come a low output impedance is desirable? <S> Surely, something must be wrong with the way I'm analyzing the circuit! <S> Medwatt - the answer is simple: for a voltage amplifier (voltage output) a low output impedance is desirable; however, a simple circuit like the common-emitter stage cannot fulfill your desire. 
<S> If you want to follow the guideline of a low output resistance <S> (example: Rc = 10 ohms in the common-emitter circuit), <S> you will have practically no gain. <S> That means: <S> a good (low) output resistance in a circuit that cannot be used. <S> Hence, a trade-off is necessary between two conflicting requirements (gain vs. output resistance). <S> Note that such a trade-off is necessary in most analog electronic circuits. <S> As a consequence, more complicated circuitry is needed to get high gain with a low output resistance - for example, a two-stage amplifier (common emitter in series with common collector). <S> (Many years ago, there was a song: "You can't always get what you want.")
For voltage amplification you want low output impedance from the previous stage and high input impedance from the next stage to maximize the voltage gain.
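The voltage-divider argument above can be sketched numerically; the impedance values are illustrative:

```python
# Voltage-divider view of amplifier output impedance.
def v_load(v_out: float, z_s: float, z_l: float) -> float:
    """Voltage reaching a load Z_L from a source with output impedance Z_S."""
    return z_l / (z_l + z_s) * v_out

v_out, z_l = 1.0, 1_000.0   # 1 V open-circuit output into a 1 kohm load

print(v_load(v_out, 10.0, z_l))       # Z_S << Z_L: ~0.99 V delivered
print(v_load(v_out, 100_000.0, z_l))  # Z_S >> Z_L: ~0.0099 V, almost nothing
```

This is why "low output impedance = good" holds for voltage outputs: the lower \$Z_{S}\$ is relative to \$Z_{L}\$, the closer the delivered voltage gets to the open-circuit output.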
Frequency of a square wave. I have a hard time understanding the concept of frequency in square waves. With sine waves, it is straightforward: you increase the frequency and the signal repeats more often in the same time interval. That can apply to square waves too. But I know that in order for a pulse edge to appear immediately (for example, 0V to 5V as the rise time tends to 0), the frequency content must be infinite. So what's going on here? On one hand we have the straightforward frequency that you increase and you see more square waves over the same time. On the other hand we have the frequency harmonics that go up to infinity. Which is true? What would Fourier analysis give us? <Q> You're confusing bandwidth with the fundamental frequency, or repetition rate. <S> Square waves theoretically have infinite bandwidth. <S> (I seem to recall seven times the fundamental as a practical rule of thumb from school.) <S> Intuitively, more higher harmonics are needed to sharpen the rising and falling edges. <S> Plotting it out as a summation of sines is easy and will help with your understanding. <A> Forget Fourier analysis. <S> The fundamental frequency of a square wave, as measured for example by a frequency counter, an oscilloscope with a frequency-measurement capability, or a microcontroller with an input-capture module, is simply one over the period (the time measured between successive peaks of the signal). <S> The period may be measured from one rising edge to the next, as shown in the first diagram below, or from one falling edge to the next. <S> This works with symmetrical square waves (50% high and 50% low), but also with pulses where the on duty cycle is much less than the off duty cycle (or vice versa), as in the second diagram below. <S> Actually, this is true for any kind of periodic wave (e.g. sine waves, triangle waves) -- just measure the period from one positive (or negative) peak to the next and take the inverse. 
<S> In the case of sine waves or other slowly rising signals, a Schmitt trigger may be needed to create a suitable rising edge to measure. <A> In theory a square wave has an instantaneous rise and fall. <S> But it has a dwell time based on the frequency. <S> Let's take a 1 Hz square wave. <S> The signal goes from zero to 100% [1] in an instant. <S> Then it will remain at 1 for half of the period, or 500 ms. <S> Then it goes negative to -1 and remains there for 500 ms. <S> And the cycle repeats. <S> So, it is + for half the time and - for the other half. <S> It's the rise/fall rate that would have to be infinite. <S> Of course in real life that is impossible, but the lower the frequency, the less relevant the rise/fall time becomes. <S> Also, the power level will be calculated differently. <S> A logic-based flip-flop circuit is one way to produce a fair square wave, BTW. <A> I think it is more correct to speak of the period of a square wave instead of a square wave frequency. <S> A square wave is a periodic signal, where the period is the time interval after which the signal repeats the same pattern of values. <S> Moreover, we have Fourier analysis. <S> This mathematical tool allows us to express a signal that meets certain conditions as a series whose terms are trigonometric functions. <S> In the case of a square wave, the Fourier series representation contains infinitely many terms, of which the lowest frequency corresponds to the fundamental frequency of the square wave, whose period is the same as that of the square wave. <S> The speed at which a square wave rises is unrelated to its fundamental frequency; rather, it is related to limited bandwidth. <S> What does this mean? <S> If not all of the infinitely many terms of the Fourier series are included in the representation of a square wave, the sum represents a "roughly square" signal; the more terms we include, the more "square" the represented signal is.
<S> A band-limited signal is one that does not include all the harmonic components (the terms of the Fourier series), or rather, one that has a maximum value for the frequency of the harmonic components to consider. <S> This limit is usually due to system conditions. <S> Then, in a real system, a "square wave" cannot include all the harmonics that it theoretically should. <S> This means that the signal is "not so square" and needs a certain interval to go from one value to another. <A> Any periodic wave can be expressed as a combination of a sine wave at the basic frequency plus sine waves at harmonic frequencies. <S> A square wave can be expressed as a combination of a basic sine wave of the same frequency plus other sine waves at odd multiples of that frequency. <S> That is, a square wave of 60 Hz can be simulated by a combination of sine waves of: 60 Hz + 180 Hz + 300 Hz + 420 Hz + ... <S> For an approximation of a square wave, you can get a reasonable shape using the first few harmonics. <S> As you keep on adding more harmonics, the shape tends to become nearer to a perfect square wave.
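The partial-sum behaviour described above is easy to check numerically. A minimal sketch in Python (assuming only NumPy; the 60 Hz fundamental matches the example above), comparing a few-harmonic sum against a many-harmonic sum:

```python
import numpy as np

def square_partial_sum(t, f0, n_odd_harmonics):
    """Fourier partial sum of a unit square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k."""
    y = np.zeros_like(t)
    for k in range(1, 2 * n_odd_harmonics, 2):  # k = 1, 3, 5, ...
        y += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    return y

t = np.linspace(0, 1 / 60, 2000, endpoint=False)  # one 60 Hz period
ideal = np.sign(np.sin(2 * np.pi * 60 * t))       # ideal square wave

def rms_error(n):
    return np.sqrt(np.mean((square_partial_sum(t, 60, n) - ideal) ** 2))

# Adding harmonics makes the sum "more square" (smaller RMS error),
# even though the fundamental stays at 60 Hz throughout.
err_few = rms_error(3)
err_many = rms_error(100)
```

Note that the repetition rate of every partial sum is still exactly 60 Hz; only the edge sharpness (bandwidth) changes.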
A square wave behaves the exact same way as a sine wave, in that as its fundamental frequency increases, you will see more cycles in a given amount of time.
Charge 18V battery with charger rated at only 18.5V? I am wondering if an 18.5V, 3.5A charger will be able to charge an 18V battery without any bad side effects? The old charger (not functioning anymore) was 24V 0.2A. I know the charger must have a higher voltage rating than the battery to be able to completely charge it. <Q> Probably not. <S> The voltage spec written on the side of a battery pack is usually the nominal voltage during discharging, not the maximum voltage during charging. <S> The wall adapter input to the battery charger needs to be the maximum battery voltage during charging, plus however much drops across the battery charger voltage regulator, plus the drop in protection circuitry & wiring resistance etc., plus enough that if the regulator is operating at the low end of the spec variation range (e.g. +/- 5%) it's still high enough. <S> If the original wall adapter was 24V, you probably need to find another 24V adapter. <S> It's possible the charger will work with a somewhat lower (or higher) voltage, but as the only information we have is what was specified by the manufacturer, we have to take their word for it. <A> There are many dangers in haphazardly replacing one adapter by another when it comes to batteries. <S> Not only is enough voltage overhead important, but the charging current is important too. <S> If the battery pack has electronics inside to regulate the charge current and voltage, you can probably replace the old 24V adapter by a new 24V one of 0.2A or more. <S> But often in simple battery-adapter systems the adapter is all the advanced electronics in existence, even with Lithium Ion/Polymer, where there are many other reasons to make the battery itself intelligent. <S> In that case you are going to need something that delivers the same output curve as the original. <S> Same peak current, same voltage/current drop-off curves.
<S> For example if you'd just connect a 24V 4A supply and the battery is a 200mAh 18V Li-Ion pack without current-limiting electronics, you stand to create ionized gas or even a (violent) explosion, unless the cells are very, very high grade. <A> Look, you have to be really careful when charging batteries. <S> It looks like you are not even aware of what type of chemistry the battery uses. <S> Stop for a while and try to grasp the things I'm going to say below. <S> You have to know the battery chemistry. <S> Charging depends upon that. <S> Different types of batteries are charged in completely different ways. <S> I am not asking you not to charge. <S> I am asking you to first learn about your battery and its charging methods, and then try to use proper methods to charge it. <S> BatteryUniversity.com is a good website that teaches about the main battery chemistries. <S> The other thing is, there are certain phases of battery charging such as Constant Current (CC), Constant Voltage (CV) etc. <S> So your charger needs to have both current and voltage control capability. <S> If you do not use them properly, you might end up damaging the battery or critically decreasing its life. <S> So get to know your battery first. <S> Also go to ti.com. <S> They have charging ICs with reference designs you can easily get started with. <S> Good luck.
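The voltage-overhead argument above reduces to simple arithmetic. A rough sketch (the maximum charge voltage, regulator dropout, and wiring-drop figures here are illustrative assumptions, not datasheet values):

```python
def required_adapter_voltage(v_charge_max, v_dropout, v_wiring, tolerance=0.05):
    """Minimum adapter rating so the charger still has headroom even when
    the adapter sits at the low end of its tolerance band."""
    needed = v_charge_max + v_dropout + v_wiring
    return needed / (1 - tolerance)

# Assume the "18 V" pack needs ~21 V at end of charge (chemistry-dependent,
# purely an assumed figure), the charger's regulator drops ~2 V, and
# wiring/protection drops ~0.3 V.
v_min = required_adapter_voltage(21.0, 2.0, 0.3)
```

With these assumed numbers the required adapter voltage lands in the mid-20s, consistent with the original 24 V adapter, and shows why an 18.5 V supply is unlikely to fully charge the pack.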
They have different current limits and voltage limits; if those are exceeded, the battery may explode.
Closing two circuits from one button press, with a delay? From one button press, I want to momentarily close two circuits with a small (< 1s) tuneable delay in between. What term should I google for to find a schematic? (For clarity, I want to press the normally-open momentary pushbutton once, which will close one circuit momentarily, then after a delay—the length of which is adjustable via a trimpot, for example, up to about 1s—momentarily close another circuit.) <Q> If you want to implement it in electronics rather than relays, search for: monostable multivibrator, mono flop, single shot <A> Here is a circuit using two 556 (dual 555) timers that I believe meets your needs. <S> The 556 is available in a 14-pin DIP package. <S> The top half of the first 556 (IC1) acts as a Schmitt trigger to clean up any noise bounce from the pushbutton. <S> The bottom half of IC1 provides a 1/4 second output. <S> It goes high when the pushbutton is pressed. <S> Its output is fed to the solenoid K1. <S> I have the solenoid connected to 5v, but you can connect the top end to 12v or whatever voltage is needed. <S> When the output falls, it triggers the next timer. <S> The top half of the second 556 (IC2) provides a one-second delay between the release of the solenoid and the beginning of the pulsed output to the spark gap generator. <S> The bottom half of IC2 provides a 1/4 second output. <S> It goes high when the one second delay is over. <S> I didn't know what that interface to the spark gap generator looks like so I just show an output line ("To Spark Gap Gen"). <S> You could add another NPN transistor interface if you like. <S> The output to the spark gap generator is also set at 1/4 second. <S> As shown in the timing diagram, the pushbutton can be pressed either for less time than the 1/4 second delay for the solenoid, or greater; in the latter case the 556 timer will not re-trigger.
<S> The times are all easily adjustable by modifying either the resistor values or capacitor values or both. <S> C1/R1 controls the duration of the output to the solenoid (currently 1/4 second); R2/C3 controls the delay between the ending of the pulse to the solenoid and the beginning of the pulse to the spark gap generator (currently 1 second); and R3/C4 controls the duration of the output to the spark gap generator (currently 1/4 second). <S> I used this calculator to figure out the values of the resistors and caps needed. <S> By making any of these fixed resistors trimpots, you can adjust the timing to whatever you need. <S> I suggest using tantalum caps instead of electrolytic since you can get twice as good tolerance (5% vs. 10%). <S> You will also want to add 0.1 µF bypass caps between the Vcc (+5v) and the GND leads (14 and 7) of each IC. <A> This should work if I got what you meant, and here's how it works: <S> S1 is a momentary NO SPST (Form A) pushbutton switch, and when it's pressed the solenoid valve opens up and lets gas into a mixing chamber where it mixes with air and, a second or so later (adjustable by R4), the spark generator (igniter) is energized and ignites the mixture. <S> Thereafter, as long as the pushbutton isn't released, the solenoid will stay open and allow gas into the chamber and the igniter will generate an arc, assuring the gas-air mixture will stay ignited. <S> Then, when the pushbutton is released, the valve will close and the igniter will stop generating arcs until the switch is pressed again, starting the cycle anew. <S> The 4 ohm resistances for the igniter and the solenoid are based on your 3 ampere data, I assumed a 12V supply, and the 50 millihenry inductance for the solenoid coil was arrived at pretty much by WAG. <S> A simulation is here if you want to play with the circuit, and if you want to build it, DigiKey has all the parts in stock with the comparator and the MOSFETs going for about USD 5.00.
<S> Also, if you build it, set R4 at midrange before you power up for the first time, and then adjust it for the delay you want. <A> A relay is a coil-operated switch: by putting an appropriate current through the coil you can close or open one or more switches (many different types exist). <A> Here, you can check for time-delay relays. <S> With these devices, you can implement the activation of two or more circuits, defining a delay time between activations. <S> Time-delay relays normally provide two or four independent contacts activated by the same coil. <A> Use an RC filter. <S> On hitting the button, you apply voltage to the two circuits. <S> One is connected directly and reacts immediately. <S> The other is connected through an R and then a C to gnd. <S> The voltage will rise about 63% of the remaining way toward its final value every R*C seconds. <S> You can make the R adjustable using a trimpot. <S> This is a passive low-pass filter.
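To put numbers on the RC approach above, here is a minimal sketch (the 100 kΩ / 10 µF values are just an assumed example chosen to give a time constant of 1 second):

```python
import math

def rc_voltage(v_supply, r_ohms, c_farads, t_seconds):
    """Capacitor voltage while charging through R from a step supply:
    v(t) = V * (1 - e^(-t / (R*C)))."""
    return v_supply * (1 - math.exp(-t_seconds / (r_ohms * c_farads)))

tau = 100e3 * 10e-6                             # R*C = 1 second
v_at_tau = rc_voltage(5.0, 100e3, 10e-6, tau)   # ~63% of 5 V after one tau
```

Making R a trimpot, as suggested, scales tau directly, which is how you would tune the sub-second delay in the question.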
One simple implementation is with a 555 timer IC . It would depend upon what your two circuits are, but most likely you will find something useful under "Time Delay Relay Schematic". The delay will depend on the turn-on voltage and your RC values.
GPS Antenna | When is an Active Antenna really Necessary? Many vendors provide positioning modules that support active antennas as well as ones that don't. I am using 4 positioning modules, of which 3 support active antennas and the other, the CC4000 by TI, supports only passive antennas. To be honest I am testing modules for performance, in terms of TTFF for SGEE, CGEE and A-GPS. Take a look if it's necessary. http://www.u-blox.com/en/gps-modules/pvt-modules/max-m8-series-concurrent-gnss-modules.html https://www.linxtechnologies.com/en/products/gps-modules/fm-gps-receiver-module http://www.ti.com/tool/cc4000gpsem However all three claim to have almost the same acquisition gains, about 143 dBi. But let me make my question general so that it will help more people. For all 4 positioning modules that claim to have almost the same acquisition gains, how would the antenna type (active/passive) affect performance? I mean, with a passive antenna, if the module can have an acquisition gain of 143dBi, why bother fitting it with an active antenna? Is an active antenna necessary for applications that have really short antenna cables (about 2-3cm)? If I have gone wrong somewhere, kindly direct me. <Q> Active antennas contain a low noise amplifier and possibly a filter and line driver. <S> It is very important to put the LNA as close to the antenna as possible to get some gain before the cable loss. <S> The trick with GPS is that the signal is extremely small. <S> The GPS signal is actually below the noise floor of the LNA, so adding more gain really does not help at all beyond mitigating the loss in the cable. <S> What does help is the coding gain in the receiver when it despreads the signal. <S> This does not depend on the receiver, as the coding gain is dependent on the design of the code. <A> The only reason for an active antenna is to improve signal quality over long cable lengths. <S> This is true for any antenna if it's only being used for receiving.
<S> If you don't have long cable lengths, then a passive antenna will perform the same. <S> So given your cable length, passive is the way to go. <A> In addition to mitigating cable loss, most active antennas have narrowband SAW filters to reject interference.
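The "put the LNA at the antenna" point can be checked with the Friis cascade formula for noise factor. A sketch (the noise-figure, gain, and cable-loss numbers are illustrative assumptions, not from any module datasheet):

```python
def db_to_linear(db):
    return 10 ** (db / 10)

def cascade_noise_factor(stages):
    """Friis formula. stages is a list of (noise_figure_dB, gain_dB)
    tuples, in signal order: F = F1 + (F2 - 1)/G1 + ..."""
    total_f = 0.0
    gain_so_far = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_linear(nf_db)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1) / gain_so_far
        gain_so_far *= db_to_linear(gain_db)
    return total_f

lna = (1.0, 20.0)    # assumed: 1 dB noise figure, 20 dB gain
cable = (6.0, -6.0)  # a lossy cable: its noise figure equals its 6 dB loss

nf_active = cascade_noise_factor([lna, cable])    # LNA at the antenna
nf_passive = cascade_noise_factor([cable, lna])   # cable before the LNA
```

The LNA-first cascade stays near the LNA's own noise figure, while cable-first adds essentially the full cable loss to the system noise figure. With a 2-3 cm cable the loss term is negligible either way, which is why passive is fine there.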
If you don't have a long cable you will experience no improvement using active over passive.
Why do different colored LEDs interfere with each other when connected in parallel? My very basic electronics education has taught me that parallel circuits are equivalent to separated circuits. To my surprise, when I was playing around with some electronics I found the following: Essentially I connected two red and two blue LEDs in parallel. The red ones lit up, but the blue ones didn't. Only when I removed the red LEDs would the blue ones light. Why is this? <Q> In that way, the red ones light but the rest don't reach the voltage necessary to light. <S> Red LEDs have a voltage drop of about 1.8V. Blue LEDs have a voltage drop of about 3V. <S> You can see more colors and their corresponding voltage drops here in this table: <S> http://en.wikipedia.org/wiki/Light-emitting_diode#Colors_and_materials <S> To solve this issue, you need a separate current limiting resistor for each LED. <S> You could think about it as if you were putting two different zener diodes in parallel. <S> If you have a 2 volt zener and a 5 volt zener, the 2 volt zener will reach its voltage and prevent the 5 volt zener from ever passing any current. <A> Red and blue LEDs have different threshold voltages. <S> Red threshold voltages are lower, so the red LEDs are not "letting" the voltage get high enough for the blue LEDs to light. <S> To make your circuit work: <S> The LEDs could be in series (if you have a large enough supply voltage), or, if in parallel, use a separate limiting resistor for each LED. <A> The red ones have a lower forward voltage than the blue ones. <S> I.e. as soon as the voltage across the diodes is more than the "red" forward voltage, the red diodes start to conduct and use up all the current. <S> Even if you put two diodes of the same type (e.g. two red ones) in parallel it is not a good idea, because there may be a small difference in their forward voltages, causing the currents through the diodes to be quite unbalanced.
<S> If you want to make sure the current passing through all diodes is exactly the same you have to connect them in series (of course this works only if your supply voltage is large enough, i.e. larger than the sum of all forward voltages; and some voltage reserve must be left to drop at the resistor)
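The per-LED current-limiting resistors suggested above are a one-line calculation. A sketch (the 5 V supply, the 1.8 V / 3.0 V forward drops, and the 20 mA target current are typical assumed values):

```python
def series_resistor(v_supply, v_forward, i_led):
    """Resistor needed to set an LED's current from a given supply:
    R = (Vs - Vf) / I."""
    return (v_supply - v_forward) / i_led

r_red = series_resistor(5.0, 1.8, 0.020)   # red LED, ~1.8 V drop
r_blue = series_resistor(5.0, 3.0, 0.020)  # blue LED, ~3.0 V drop
```

Because the forward drops differ, each color needs its own resistor value; with individual resistors the red LEDs can no longer clamp the node voltage below the blue LEDs' threshold.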
The red LED has a much lower voltage drop for a given current.
How do metal cases work with connectors & isolation? I have trouble understanding how metal cases behave in some situations. Case 1: I made a passive attenuation box for an amplifier (basically an L-pad resistor), simple configuration: Amplifier => attenuation box => speaker. It has 1/4 jack sockets on input and output like this one, and a pot with a metal case. Everything works fine, but as soon as I touch the case or any metal part of the pot with my hand, a high-pitched scream comes from the speaker. Why does it do that? Other commercial equipment uses the same jacks with metal cases and has no problems with touching the case. And I can't ground it, obviously, because it is passive and the input is floating from a transformer. Case 2: On other powered devices without a ground connection (example - guitar FX pedals) you also have a metal case with metal jacks and it doesn't have any problems with touching either the metal case or the pots. In both of these cases I don't see what the trick is. At first I thought I had wired something the wrong way, but there is no way to wire the socket in such a way that the sleeve would not touch the case - since the outer plane of the socket touches the case, obviously the sleeve of the connector will touch it no matter how you wire stuff internally. So basically the question is - how to properly wire metal parts that touch the case in order to keep the case out of the circuit. I really tried to google that by myself, but it seems that most people have no trouble at all with it; maybe I am missing a very basic thing. Thank you for reading such a long question. <Q> The simple answer is to use a plastic enclosure. <S> The next simplest answer is to buy parts that don't make electrical connections to the case. <S> You may have to look at data sheets to figure this out. <S> With some connectors, like BNC connectors, there may be a description of "Isolated" in the part name somewhere.
<S> Other approaches would really require you to look at the data sheet for each part as you buy it, know what electrical connections get made to the case, and be ready to handle them. <S> For some parts, insulating hardware for case mounts is available. <S> For others, no such luck, and you have to jury-rig your own. <S> If every connection to the case is Ground, for example, you should be OK (but there may be electrical safety concerns). <A> You're upsetting the load seen by the amplifier's output stage, probably by adding shunt capacitance, so you're causing an oscillation, at the resonant frequency of the speaker inductance and the total shunt capacitance (or some other frequency derived more complexly). <S> That potentiometer looks like it may carry connections through to the metalwork. <S> Another possible solution may be to add a Zobel network across the output inside the amplifier, typically 10R in series with 1nF. Tube amplifiers generally don't have these: solid state ones almost invariably do, and some also have a small series inductor in the output path. <A> It's a common problem any time you get a high power output signal anywhere near the instrument level input signal. <S> It's an induction/radiation problem, and it basically causes feedback. <S> Always keep your instrument and its cabling away from your speaker wiring. <A> This is perhaps a comment, but I hate plastic enclosures. <S> Why can't you ground the metal box? <S> If the whole thing is floating with a lot of gain, then it should sing when you touch it... <S> (maybe it would help with the fluorescent lights turned off?) <S> Ground is both your friend and your enemy. <S> The good news is that if you make a nice metal box there is both an inside ground and an outside ground. <S> (above a certain frequency... and only electrically, not magnetically, unless it is a thick steel thing.) <S> Getting it all right is a learning experience....
<S> I could tell a story about a steel lock washer on the wrong side. <S> Edit: <S> I should add that I have seen EMI leaking down metal shafts when you touch them. <S> (our solution is a trade secret, and would cost at least a beer. <S> :^)
For best safety, Plastic enclosures and no metal parts that can be touched by a user is the way to go (double insulation). Solution: isolate the case completely from all connections and components.
How does a USB 2.0 Wall Charger negotiate current output? I'm trying to use a mobile phone charger for my projects, and want a high current output. I have read this, however my measurements show something else. Here is a link to the Battery Charging Specification Rev. 1.2. 1.4.7 Dedicated Charging Port A Dedicated Charging Port (DCP) is a downstream port on a device that outputs power through a USB connector, but is not capable of enumerating a downstream device. A DCP shall source \$I_{DCP}\$ at an average voltage of \$V_{CHG}\$. A DCP shall short the D+ line to the D- line. I verified on three different chargers, and all read \$R_{DCP\_DAT}\$ as ~1.5 Ohms. Now, if there is a short between the D+ and D-, there is no detection on the charging port side, and the charger should always output \$I_{DCP}\$ {0.5 - 5.0A max} on the VBUS line - is this correct? I tested the current output of three chargers, but they are all completely different. Charger 1 - Nokia Rated current output: 1.3A Measured current output: 1.34A Charger 2 - Asus Rated current output: 2.0A Measured current output: 0.7A - 1.1A (unstable) Charger 3 - HTC Rated current output: 1A Measured current output: 0.1A If all these dedicated charging ports have no current negotiation, how come only one charger is showing its rated output? N.B. All three chargers can charge a mobile phone in a reasonable amount of time. <Q> there is no detection on the charging port side, and the charger should always output \$I_{DCP}\$ {0.5 - 5.0A max} on the VBUS line - is this correct? <S> I don't understand this completely, because the BC spec is confusing to read, but yes, dedicated chargers (DCP) short the D+ and D- together to indicate what they are. <S> This doesn't indicate any particular current available, though; it just says that it's a DCP. <S> Different chargers supply different amounts of current.
<S> The charger has no brain in it; it just supplies 5 V until the current draw is too great, and then its voltage starts to droop: <S> It's the "portable device" (PD) which has to be smart about limiting its own current draw from the DCP to stay within the dark region of the plot. <S> So it can try to draw up to 1.5 A, but if the charger voltage drops below 2 V at 0.5 A, then you can't draw any more than 0.5 A from it. <S> "When the adapter's output voltage starts to collapse, it is an indication that the current limit of the device is reached." -- MAX8895 datasheet <A> I am not sure if this answer will solve your question, but Apple does a similar thing with their wall chargers to make sure that the devices are not being charged too quickly. <S> To accomplish this, the chargers have a voltage divider circuit comprised of 2 resistors that gives a reference voltage to one or both of the data pins. <S> Inside the iPhone, there is additional circuitry that reads the voltage on the data pins from the voltage divider. <S> Here is a diagram of an "Apple Compatible" charger featuring the voltage divider that I was talking about. I would assume that Apple has different voltage dividers for differently rated chargers. <S> This way, a phone can tell, for example from a reference voltage under 2 V, whether it can safely charge itself. <A> The question is ill-posed, conceptually. <S> Usual wall (and BC1.1/1.2) chargers do not negotiate anything. <S> They only "advertise" their capability by means of some signature on the D+/D- wires. <S> It is the DEVICE that decides to take the maximum necessary current based on the detected signature and the state of the internal battery. <S> As other respondents noted, there is a "Chinese signature" (with D+ shorted to D-), there is an "Apple signature" with a certain combination of voltages (using ~40k-70k resistors), and there could be BC1.2 with a sequential handshake.
<S> Due to the total awkwardness and complexity associated with BC1.2, this signature is hardly ever found. <S> In modern days things can be different with the advent of the new Power Delivery Specification (PD), where the "provider" (charger) actively advertises its capability to the "consumer" through a serial communication channel with a fairly sophisticated protocol. <S> Originally the VBUS wire was intended for this purpose in PD1.1 (the idea is now abandoned), and now the CC pins in the Type-C connector are used for this purpose. <A> I guess it's possible the charger is implementing data contact detection (to ensure full insertion; the power contacts mate first). <S> I suggest you try connecting the D+ to a plausible VDP_SRC voltage.
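The "Apple signature" mentioned above is just a pair of two-resistor dividers biasing D+ and D-. A sketch of the arithmetic (the resistor values here are assumptions for illustration, not taken from any Apple schematic):

```python
def divider_voltage(v_supply, r_top, r_bottom):
    """Output of a simple two-resistor voltage divider:
    Vout = Vcc * Rb / (Ra + Rb)."""
    return v_supply * r_bottom / (r_top + r_bottom)

# Example: bias a data line to roughly 2 V from the 5 V bus using
# assumed values in the ~40k-70k range mentioned above.
v_dplus = divider_voltage(5.0, 75e3, 49.9e3)
```

The device samples this voltage on D+/D- and maps the combination to an advertised current level; no actual negotiation takes place.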
For a dedicated charger or USB charger, the current limit is determined by loading the adapter.
What is a safe max. discharge rate for a 12V lead acid battery? I've got a 12V 2.4Ah lead acid battery which I plan to connect a water pump to. I've looked at various pumps, but the one I'm most interested in draws 2.2A. I'm not so interested in how long the pump can run, as it only will need to run for about 5 - 10 minutes/day. So, I'm assuming the battery is plenty for that. The battery will be charged via solar cell panels. However, I'm more concerned about the discharge rate. I've read that a lead acid battery should not be discharged too quickly, as this might result in overheating the battery (and cause damage to it). How do I figure out what a safe maximum discharge rate is for a 12V lead acid battery? <Q> A quick point: You mention you have a 12 V 2.4 A SLA (sealed lead acid) battery, but batteries are rated in amp-hours, not amperes. <S> Therefore I suspect you have a 12 V 2.4 Ah battery. <S> Now that we have that out of the way, a 12 V 2.5 Ah SLA battery from Power Sonic, as an example (a company that has datasheets for their batteries), shows several discharge rates that may be of interest: <S> Nominal capacities: 220 mA discharge rate = 10 hours (2.2 Ah); 400 mA discharge rate = 5 hours (2 Ah); 1.5 A discharge rate = 1 hour (1.5 Ah); 4.5 A discharge rate = 15 minutes (1.13 Ah). Max discharge current (7 min.) = 7.5 A. Max short-duration discharge current (10 sec.) = 25.0 A. <S> This means you should expect, at a discharge rate of 2.2 A, that the battery would have a nominal capacity (down to 9 V) between 1.13 Ah and 1.5 Ah, giving you between 15 minutes and 1 hour runtime. <A> An easy rule-of-thumb for determining the slow/intermediate/fast rates for charging/discharging a rechargeable chemical battery, mostly independent of the actual manufacturing technology: lead acid, NiCd, NiMH, Li... <S> We will call C (unitless) the numerical value of the capacity of our battery, measured in Ah (ampere-hours).
<S> In your question, the capacity of the battery is 2.4 Ah, hence C=2.4 (unitless). <S> The vast majority of the batteries in the market will safely charge/discharge at a rate of less than 1C amperes. <S> In an ideal world (without losses), this would translate into a 1 hour charge/discharge process. <S> In practice, the charging/discharging operation may require up to twice/half that time. <S> Without further information (datasheet), I would not charge/discharge any battery at a rate higher than 1C, for safety and endurance reasons. <S> In your question, less than 2.4 A would be a nice charge/discharge rate, as the manufacturer datasheet confirms. <S> Rates << 1C are commonly known as "SLOW" rates: 0.5C, 0.2C, 0.1C... <S> Charge/discharge rates higher than 1C are best avoided unless working with a properly known battery. <S> Rates > 1C are commonly known as "FAST" rates: 2C, 3C... <S> In the past, batteries designed for rates >1C were usually marketed as "high current" batteries, because not all batteries were capable of sustaining such rates safely or without compromising their endurance. <S> Nowadays, most batteries can safely be used at rates >1C, up to the rating specified by the manufacturer. <S> However, a reduction in the battery life is to be expected. <S> Forcing a battery to rates >5-10C involves serious risks. <S> Disclaimer: this is a rule-of-thumb, useful as a starting point when the datasheet is not available or when dealing with a no-brand/unknown battery. <A> Jose's answer states that the discharge rate isn't related to chemistry. <S> However, this is not correct. <S> It can vary by up to a factor of 1000 depending on chemistry. <S> Different battery chemistries have different properties. <S> Beyond the chemistry, this is also related to the battery design itself: the size of the electrodes, the thickness of the electrode coatings, the electrolyte; so it can also vary greatly with these.
<S> Some are designed for a lower self-discharge rate, some for higher energy density or higher instant power output. <S> Larger electrodes with a thinner coating will give a higher discharge rate, while the opposite will lead to higher energy density. <S> The best is to check the manufacturer datasheet if it is available. <S> Here is also a table with common values. <S> Concerning lead-acid specifically, there are also several types, but two are most common: the car starter battery and the stationary battery. <S> Because of its construction, a starter battery is only suitable for short loads with high current, which most commonly occur when starting the engine of a car, truck... <S> The main characteristic of a starter battery is that it has big, thin, flat plates. <S> Starter batteries are not suitable for cyclic use (continuous charging & discharging). <S> A starter battery is relatively cheap. <S> Source <S> With your pump, make sure to use a stationary battery; since you are below 1C, that is totally fine.
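The C-rate rule of thumb above is simple arithmetic; for the 2.4 Ah pack and 2.2 A pump in the question:

```python
def c_rate_current(capacity_ah, c_rate):
    """Current (amperes) corresponding to a given C-rate."""
    return capacity_ah * c_rate

def c_rate_of(capacity_ah, current_a):
    """Express a load current as a multiple of C."""
    return current_a / capacity_ah

i_1c = c_rate_current(2.4, 1.0)   # the 1C limit for this pack: 2.4 A
pump_rate = c_rate_of(2.4, 2.2)   # the 2.2 A pump, as a fraction of C
```

The pump draws just under 1C, which is why the answers above consider it acceptable, especially for runs of only a few minutes per day.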
By applying a charge/discharge rate much less than 1C , you usually extend considerably the life of a chemical battery. Ideally the manufacturer supplies the discharge rates on the battery datasheet. 125 mA discharge rate = 20 hours (2.5 Ah)
What causes old power supplies to start humming? I have a number of mains-powered devices that have begun humming (I assume at 50Hz, though I haven't measured it). These include clock radios, a speaker system, a lamp (20W halogen with a transformer before the switch), and a temperature-controlled soldering iron. With some of these devices, it can be really annoying (like the radio and lamp and the speaker that vibrate the floor, and my bed, and make it hard to get to sleep). I assume it's also wasting a small amount of energy. I have tried taking one of the clock radios apart, and cleaned some dust and crap off the PCB (I couldn't see any other obvious problems), and it did reduce the hum a little, but not much, and it started getting worse again. Is there a common cause for these hums? And is there anything that can be done about it? <Q> A transformer, as you well know, is made up of two or more coils around a core of ferrous material. <S> That ferrous material is not a solid lump of metal, but a series of plates laminated together with adhesive. <S> This is done because: Early transformer developers soon realized that cores constructed from solid iron resulted in prohibitive eddy current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires. <S> Later designs constructed the core by stacking layers of thin steel laminations, a principle that has remained in use. -- Wikipedia <S> So you have lots of steel plates stuck together, but not only that: <S> Each lamination is insulated from its neighbors by a thin non-conducting layer of insulation. <S> Lots of metal plates, each with an induced magnetic field. <S> That magnetic field acts between the adjacent plates, stretching and squeezing the adhesive and insulation between them. <S> Over time that adhesive starts to break apart and the laminated layers separate from each other slightly. <S> This is the humming noise you can hear.
<S> It's always present, but once the adhesive starts to break it gets louder. <S> These micro-fractures in the adhesive may not be visible to the naked eye, but in extreme situations they may be so bad the layers of lamination become loose and the transformer literally rattles as you shake it. <S> Also, the more current you draw through a transformer the larger the induced magnetic fields, and thus the louder the transformer hums (and the shorter its life span). <A> The laminations are moving with respect to each other. <S> They're generally stuck together at the factory and over the years the varnish gets brittle and the forces can cause the laminations to no longer be stuck together. <S> Magnetic forces from the field cause the hum. <S> If it's a valuable item, you can remove the transformer and take it to a motor rewinding shop and ask them to vacuum impregnate it for you. <A> I used super glue. <S> I applied it all over the transformer, used pliers to squeeze it in order to get the glue to penetrate deeply between the laminations, applied more glue where necessary, and then used a few C clamps to hold it together. <S> I put my amp back together 12 hours later and the hum is gone. <A> OK, I know very little about laminated transformer cores. <S> But it was my understanding that the hum is due to magnetostriction of the iron. <S> The size change depends on the strength of the B-field and so happens at twice the AC mains frequency. <S> This size change gets transmitted as sound to whatever holds the transformer. <S> I don't think it has to do with failing laminations. <S> (But I'm happy to learn something if I'm wrong.) <A> I had this happen in my Copal FP-220 flip clock radio. <S> The transformer was buzzing, and worked for 30 seconds, then shut down. <S> I replaced it with a generic 220V-9V transformer, and everything is working like new now!
A noisy transformer is often the sign of a failing transformer.
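The magnetostriction point above implies a simple check you can make with a scope or spectrum app: the mechanical hum fundamental sits at twice the line frequency, since the core expands on both half-cycles. A trivial sketch of that relationship:

```python
def hum_fundamental_hz(mains_hz):
    """Magnetostriction expands the core on both half-cycles of the
    AC waveform, so the hum fundamental is twice the line frequency."""
    return 2 * mains_hz

print(hum_fundamental_hz(50))  # 100 Hz in 50 Hz countries
print(hum_fundamental_hz(60))  # 120 Hz in 60 Hz countries
```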
Transistors: why does increasing base current increase collector current? I am new to this area so please can you keep your answers simple, thanks. From what I know, for an NPN transistor in the common-emitter connection the base-emitter junction is in forward bias and the base-collector junction in reverse. Electrons flow from the emitter to the base, where some leave the base as they recombine with holes, forming the base current, and some pass into the collector, forming the collector current. If we increase the base current we get an increase in collector current; why is this? My first thought would be that the collector current would decrease since more electrons are flowing into the base electrode rather than the collector electrode. I really am at a loss of how to explain this. <Q> Fundamentally, it is the voltage across the B-E junction that determines the amount of current flowing through it. <S> It is an exponential relationship that is described by the Ebers-Moll equations . <S> An increase in this voltage results in increased currents in both the base and the collector, and indeed, an increase in base current causes an increase in B-E voltage. <S> I don't know if this description helps at all — the relationship between cause and effect can sometimes be confusing in solid-state physics. <S> Let me know if I can improve this answer. <A> Dave is correct... <S> I'll try to clarify some more. <S> In an NPN: <S> The base-emitter voltage and the doping of the base determine the rate of emitter electron current injection into the base, which is swept into the collector due to the potential drop from base to collector and the narrowness of the base region. <S> The base-emitter voltage and the doping level of the emitter also determine the rate of base hole injection into the emitter, which does reduce the collector current. <S> The ratio of the dopant densities sets the current ratio between the base and collector (Beta).
<S> BJTs are designed with light doping in the base and a very narrow base width to maximize the diffusion of the emitter current to the collector. <S> As a result, the base current needed to develop the Vbe for a given rate of emitter current injection is very small compared to the emitter and collector current, and so BJTs have high current gain. <S> Here's an online reference that goes into some detail: Modern Semiconductor Devices for Integrated Circuits, Ch. 8 <A> Imagine a water tank. <S> Water is not going to jump out - that's the same as a standard base-emitter junction. <S> Now imagine... tilting the tank on its side, to fill a water cup (or a hose with a funnel on top). <S> Sure you get the water you want, BUT there will also be a LOT of other water spilling over the edge. <S> That "extra" water slipped over a slightly forward-biased base-emitter junction (the tilted edge), then found itself in a highly forward-biased zone (hanging in the air). <S> The more you tip the tank, the more water will splash over the edge. <S> Another image some find useful is: a large freeway, 100mph limit, and a sudden bend (most cars wouldn't make the bend). <S> A larger current is more cars trying to take the corner at the same time.
But the base and collector currents generally have a fixed ratio to each other for any particular transistor under a particular set of conditions, so the increase in collector current is β times the increase in base current.
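The exponential Vbe-Ic relationship mentioned in the first answer can be illustrated numerically. A sketch using the simplified Ebers-Moll expression; the saturation current and thermal voltage are illustrative values, not from any real device:

```python
import math

def collector_current(vbe, i_s=1e-14, v_t=0.02585):
    """Simplified Ebers-Moll: Ic ~ Is * (exp(Vbe/Vt) - 1).

    i_s (saturation current) and v_t (thermal voltage near 300 K)
    are illustrative placeholder values, not from a datasheet.
    """
    return i_s * (math.exp(vbe / v_t) - 1)

# A ~60 mV increase in Vbe raises Ic by roughly a factor of 10:
ratio = collector_current(0.66) / collector_current(0.60)
print(round(ratio))  # roughly 10
```

This is why small base-current (hence Vbe) changes produce large collector-current changes: the relationship is exponential, not linear.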
How to determine how much current and or voltage a given circuit "needs"? My apologies in advance if I make any wrong assumptions. Say you have an outlet that is rated at 120V and 10A. My power adapter for my MacBook says it is rated at 16.5V and 3.65A max. How was it determined that the power adapter needs to provide at most 16.5V and 3.65A? What is the process of actually figuring this out when designing electronics/circuits? The only thing I have been able to think of so far is that based on the physical properties of a circuit, you can only handle a certain amount of current before burning out which can be figured out empirically. In addition to that maybe the amount of voltage required has to do with power efficiency and driving the max amount of current you can handle. I understand Ohm's Law and I am not asking about how to calculate anything using it. This is more of a design question when you have a circuit in mind but need to figure out what resistors to apply and what voltage/current you will need to operate correctly. <Q> As you may have gathered from the other responses, this is generally not a trivial exercise. <S> At the individual component level, each manufacturer publishes a datasheet that lists, among many other things, the max and min supply voltages and the expected current for at least one recommended supply voltage. <S> Add up all the components on the same internal power bus (don't forget passives) and back-calculate the current going into each regulator at the voltage that feeds the regulator. <S> (more datasheets) <S> Add up all of those, back-calculating again through cascaded regulators, until you end up at the power input. <S> As for the input voltage selection, they may have chosen something just higher than the highest regulated voltage + dropout of that regulator. 
<S> I suspect though, that a laptop adapter would be just enough to charge the battery and the rest of the computer then runs off the battery terminals, even if there's not actually a battery plugged in. <S> Most of the time, this process is much more iterative in design than in testing. <S> There may be a total power budget to start with (probably specified in watts), then parts are chosen somewhat by experience to try and add up to less than the budget. <S> If it's over budget, then some of the parts are substituted for more efficient or less capable ones and the total power is calculated again. <S> Once the numbers work out, in many more ways than just power consumption, then it goes to the first prototype. <S> In some designs, the power budget is a major factor in the design, like an all-day netbook. <S> In others, it's more of an afterthought so long as it can still be cooled adequately, like a high-quality architectural drafting engine that is used both in the office and on a job site. <S> Of course, this is grossly simplified, but you get the idea. <A> The computer was probably designed with a power budget of 60 watts. <S> Most of this is consumed by the CPU, GPU, screen, and battery charger. <S> They also decided that the input voltage from the adapter would be 16.5 volts, requiring 60W/16.5V ~= 3.65 amps. <A> A computer is a very complex beast, with power management circuits that control the amount of power being fed to various subsystems. <S> A laptop computer is designed to limit the amount of power it consumes. <S> That is both to reduce the drain on its battery and to not exceed the heat dissipation limits of the device. <S> A laptop has one or more voltage regulators that convert battery voltage (or external power supply voltage) to the different voltages needed inside the computer. <S> It will also have a charging circuit to charge the internal battery.
<S> You would need to figure out the maximum current needed by the whole system when it was running at its max ( <S> Battery charging circuit at maximum demand, screen at full brightness, disk drive spinning and seeking, DVD drive spinning and seeking, all cores on the CPU powered up and at maximum draw, graphics subsystem maxed out, fans running at full speed, etc. <S> Plus power loss for the internal voltage regulators...) <S> Then you'd want to add a small amount as a margin of error. <S> Note that the power management subsystem might prevent all those things from happening at once. <S> For example, it might slow down or stop charging the battery if all those other things were going on at once. <S> It might throttle performance on the GPUs if they start to get too hot or exceed the total current limits on the power supply. <S> Etc.
The power draw of the components can be measured under full load to ensure the power budget is met.
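The 60 W / 16.5 V back-calculation in the answer above is simple arithmetic worth making explicit; a minimal sketch:

```python
def adapter_current(budget_w, adapter_v):
    """Required adapter current for a given power budget: I = P / V."""
    return budget_w / adapter_v

# The 60 W budget and 16.5 V adapter voltage from the answer above:
i = adapter_current(60, 16.5)
print(round(i, 2))  # 3.64 A, matching the adapter's 3.65 A rating
```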
I need to remove 100mV of ripple out of my 12Vdc source. What size and type capacitor do I need? I am building a very inexpensive power monitor. I need a clean 12Vdc power source for the current transducers. I currently have 100mV of ripple on the 12Vdc source I am using, which is causing issues related to the output signal from the CTs. I'm not an electrical engineer but more of a glorified technician. I am thinking a capacitor between the +12Vdc and GND will do the trick but do not know what size or type capacitor I need. <Q> If you have 100mV (however you're measuring it) of high frequency ripple from a switching power supply you may be better off adding a regulator or LDO. <S> Most regulators will let more high frequency stuff through than low frequency, so be sure to compare the regulation with an actual input from the power supply. <S> For example, the ubiquitous LM317 could be used to make (say) a 10V supply, but the ripple regulation is only specified at 120Hz (-65dB typical). <S> You will never be able to remove it entirely - only to attenuate it to some value. <S> Anything you add will also contribute some noise itself. <S> See EMF's answer for how to calculate the capacitance if you're just slapping a capacitor across the output, and select his answer if it's what you're looking for. <S> Please get into the habit of specifying how you are measuring the ripple and what it looks like in the frequency domain. <S> 100mV RMS of spikey SMPS noise might easily be 1Vp-p. <A> The formula relating ripple voltage and capacitance is: C = I*t/V, where C is the capacitance in farads, I is the load current in amperes, t is the period of the ripple frequency in seconds, and V is the desired amplitude of the ripple voltage, in volts. <A> You cannot calculate this capacitor from the voltage alone; this depends on the load. <S> With a simple rectifier, the voltage drops to zero 100 (or 120) times per second; at these moments the device is fed by the capacitor alone.
<S> The voltage always drops at these moments, but if the capacitance is sufficient, the ripple stays within tolerable boundaries. <S> A more advanced voltage converter will use a higher, generated frequency, requiring a smaller transformer and smaller capacitor. <S> If the ripple amplitude is much smaller than the voltage, we can assume that the load consumes the constant current I. <S> Then if V is the acceptable amplitude of the ripple, and t is the approximate duration of the "no power" phase, the required capacitance is equal to I*t/V . <S> This is rather approximate because the shape of the rectified current is not a rectangle, but it may give an idea which capacitor it could probably be. <S> It is generally enough to take about 70% of the calculated capacitance. <S> You can read more about the solution of this problem here . <S> However your power supply should already have an adequate capacitor. <S> If you observe unacceptable ripple, you are probably overloading it, and it will not serve for long anyway. <S> Check if it is rated for the current you need.
If your power supply is topped out at 12V under load and the ripple pulls it down even farther, you'll never be able to get a clean supply and the best you'll be able to do by adding capacitance across the load is to decrease the amplitude of the ripple.
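Plugging numbers into the C = I*t/V formula from the answers above shows why a capacitor alone is often impractical; a sketch with an assumed 1 A load and 120 Hz full-wave ripple (illustrative values, not from the question):

```python
def ripple_capacitance(load_a, ripple_hz, ripple_v, derate=1.0):
    """C = I*t/V, taking t as the full ripple period (worst case).

    The answers suggest ~70% of the calculated value is often enough;
    pass derate=0.7 to apply that rule of thumb.
    """
    t = 1.0 / ripple_hz
    return derate * load_a * t / ripple_v

# 1 A load, 120 Hz full-wave ripple, 100 mV allowed ripple:
c = ripple_capacitance(1.0, 120, 0.1)
print(round(c * 1e6))  # ~83333 uF: huge, which is why an LDO is often more practical
```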
Meaning of zero load and full load in circuits This might be straightforward but it has been tough to grasp for me. When a question says to calculate the power dissipated by the Zener diode under zero load and full load, what exactly does the concept of load mean here? I know a load can be thought of as whatever is connected to a well-defined output, or as the power it consumes. But then, what is zero load and full load? <Q> Essentially, a load receives power from a circuit while a source delivers power to a circuit. <S> So a zero load receives zero power while a full load receives full power (whatever that is in a particular context). <S> Since either an open circuit or short circuit receives zero power, neither of these is a full load . <S> For a Zener diode regulator, which I assume is the circuit context here, zero load means open circuit. <S> Since there is no power delivered to the load, the zener diode power is maximum. <S> Full load must mean minimum power for the zener diode, not zero power, since there is some minimum current through it to maintain the (more or less constant) zener voltage across it. <S> This is due to the fact that a zener diode regulator is a shunt regulator, which means that current is shunted (diverted) around the load in order to maintain a constant voltage across the output. <S> If the load is 'light' or open, more current must be diverted through the zener diode and thus, the power dissipation is more for the zener. <S> If the load is near 'full', less current is diverted through the zener diode and thus, the power dissipation is less for the zener. <A> In the first image, there is no load so this would be zero load. <S> All the current that flows in the circuit must go through R1 and the zener. <S> In the second image, a load is present. <S> At full load, that would mean the maximum current draw would go into the load. <S> Now the current will flow through the zener and the load.
<S> simulate this circuit – <S> Schematic created using CircuitLab <A> Zero load means no current is going through the load, i.e. an open circuit. <S> Full load means that all possible current is going through the load, i.e. either a short circuit or some arbitrary maximum current as defined by the situation or problem. <A> One might consider the Zener part of the load on the main voltage source, but it is not part of the load on itself. <S> The load as it is referred in your question means what is hanging off the Zener-regulated voltage. <S> Zero load effectively means there is nothing connected to the Zener-regulated voltage, or it's switched off by a FET, etc. <S> It does not mean there's zero load on the voltage source powering the entire circuit, because there's still current flowing through the Zener+resistor, which is a load on the source.
To be clear what is meant by load in your example: there is typically some voltage source (say a wall adapter) providing power to both the Zener+resistor and the other stuff hanging off the Zener-regulated voltage.
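The zero-load vs full-load behaviour of a shunt zener regulator can be checked numerically. A sketch with illustrative component values (12 V input, 5.1 V zener, 220 Ohm series resistor, 25 mA full load; none of these are from the question):

```python
def zener_power(v_in, v_z, r_series, i_load):
    """Power in a shunt zener regulator: the series resistor sets the
    total current; whatever the load doesn't draw flows in the zener."""
    i_total = (v_in - v_z) / r_series
    i_zener = i_total - i_load
    return i_zener * v_z

# Zero load: all current is diverted through the zener (max dissipation).
print(round(zener_power(12, 5.1, 220, 0), 3))      # ~0.160 W
# Full 25 mA load: the zener carries only the leftover (min dissipation).
print(round(zener_power(12, 5.1, 220, 0.025), 3))  # ~0.032 W
```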
Preferences: Male or Female connector on board On a prototyping board for use by students, should one place male (pin) headers or female headers on the PCB, considering that most of the pins will be unused most of the time? I'm inclined to place the female headers on the PCB and have male pins attached to all cabling respectively. This way touching unused, open contacts by accident is less of an issue. But checking through catalogues (Hirose, Molex, ...) I see lots of male headers for PCB mounting but few female ones. Is my reasoning backward? What am I missing? Conclusion: Considering all comments of this thread I am inclined to use male connector pins on the board. Doktor J has given a nice example of a shrouded pin header that would even provide some polarization, albeit I will probably use a 2mm pitch for compactness. Even better if those were side-stackable, that is if I could place like 2x20 holes in a row and have shrouded connectors of different width (say either 2x 2x10, or 4x 2x5, or a single 2x20) attached. However, this is not possible with the typical box-shape shrouding. Solution I just discovered FCI Minitek Headers . These are side-stackable and have polarization and shrouding. I'll go with these. <Q> Generally, proper design dictates that - for safety's sake - pluggable connectors supplying power do so using female contacts because, being shrouded, they're less likely to accidentally wind up causing a catastrophe when they're unplugged and hot. <A> Convention suggests that you should always have male pin headers on the board and female connectors on the cables. <S> If your concern is a stray tool accidentally coming into contact with the board connector, you can look into a shrouded pin header connection such as this one from SparkFun : <S> While it doesn't offer complete contact protection, a stray wire or screwdriver is much less likely to short the pins.
<S> Then, if the students leave the board side of the cable connected and it's hanging loose, you don't have to worry about bare pins bumping into something Bad. <S> Lastly, another consideration is convenience: not only is it easier to find male pin headers, it's much easier to find female IDC connectors for ribbon cable, or just premade cables to save you that much work. <A> Male connectors (comparisons to the human condition wisely avoided) tend to be relatively simple and trouble-free. <S> Damage is usually easily visible. <S> By their very function female receptacles tend to be more complex and prone to damage or contamination, so I prefer to put the male part on the board where damage is generally more costly, all other things being equal. <S> Of course if power is coming off you want to prevent trouble so that may dictate what gets used where. <S> You don't want live mains voltage on an exposed male pin (or female connector) where someone could get shocked or a short could cause damage. <S> Barrel connectors as used in wall wart cords, which are by most estimates female in nature, actually have an exposed outer surface which can short. <S> Not usually an issue, but I've seen non-isolated auto cigar plug adapters that have positive voltage on the barrel (and thus a voltage relative to chassis ground). <S> If it touches grounded metal, a short occurs. <A> For your simple pin-and-socket ribbon type connections, the pin can be rigidly attached to the motherboard, while a socket, by its nature, flexes and cannot be as rigidly attached. <S> On the other hand, since the socket must have a plastic surround, etc, it is well-suited for attaching to the end of a cable. <S> A secondary issue is that the socket is more easily damaged than the pin, and a cable is usually cheaper to replace than a motherboard. 
<S> A convenient feature of having the pin on the board is that jumpers can easily be used on the same "headers" that are used for cables, so fewer part numbers need to be stocked. <S> And another detail: <S> Pin headers are a lot easier than sockets to get installed on a board without damage using automated equipment, flow soldering, etc.
I imagine the idea behind this is that the board is generally mounted in some sort of enclosure that protects it from accidental shorts, whereas if a cable is connected to the header and the other end is left hanging, female connections on the cable will prevent it from flopping around and coming into contact with the metal chassis or something else that power or signal pins shouldn't touch.
Anti-windup scheme in the implementation of a PID controller I want to implement a designed PID controller. But I am facing the problem of how to limit the output saturation in both the positive and negative directions. I tried using a zener diode, but I would like to know if there are any design procedures to get an anti-windup scheme in the implementation of a PID controller using a zener/diode combination? EDIT: this is the designed PID controller and I am using TL084 op-amps which have +15 volts and -15 volts as their supply. When the error is generated, due to the integrator the control signal (Vc) output goes to a saturation point of nearly 15 volts. I want Vc to be in the range of 0.85-3.8 volts, so that I can give this to an SG3524 PWM IC to generate constant duty ratio PWM pulses. I tried putting in a zener diode of 3.3 volt rating, but now Vc is coming up to 3.5 volts (basically the saturation limit has come down to 3.5 volts). The problem is how to limit Vc to the specified range. Can anyone please suggest modifications for a proper design of an anti-windup scheme for this? Here VFb=-3 volts and VRef= 3 volts <Q> In a PID controller, the "D" term isn't really "derivative", but is actually the output of a first-order high-pass filter with a finite cutoff frequency [if it weren't, any 1GHz noise on the input would be amplified 1,000,000 times as much as a 1kHz signal]. <S> Likewise the "I" term need not actually compute a "pure" integral [which would be the nearly-infinitely-amplified output of a first-order low-pass filter with an infinitesimal cutoff frequency] but may instead be the output of a first-order low-pass filter whose cutoff frequency correlates somewhat with the machine's slowest plausible response.
<S> The gain of the filter may be set to control the DC behavior if the controller has been commanding a certain output for an arbitrarily long time but the system hasn't moved; the cutoff frequency may then be set to control responsiveness when things haven't gotten that far. <S> Unlike integrators, low-pass filters with a finite cutoff frequency have a limit to how far they can "wind up" with a given level of input. <S> An additional approach to prevent wind-up would be to--rather than low-pass-filtering the P term directly--either integrate the difference between the commanded output and what it would have been without the "P" term, or else low-pass filter the actual commanded output. <S> If the output stimulus is pegged to the point that the "P" term isn't able to have its full effect, the "I" term shouldn't operate on the "P" term, but only on its contribution to the output. <S> Using a low-pass filter with this approach will probably be easier than trying to use an integrator, since the filter can be set so that the loop feedback gain doesn't exceed one. <S> Otherwise, when using an integrator and trying to compute the difference between what the output would be with P and without P, it may be difficult to ensure that the integrator's output doesn't generate positive feedback to its input (which could destabilize the system). <A> Windup (as opposed to overshoot inhibition) is caused by the integrator continuing to integrate even though the output is saturated. <S> You can simply detect output saturation (for example with a comparator) and inhibit integration. <A> Limiting reset windup can be attacked in a few different ways. <S> Another is to reset the integrator when the output is beyond saturation, which can have the effect of not only preventing windup but also suppressing the overshoot which would occur at startup of a normally tuned PID controller that did not have anti-windup.
<S> For a 1970-1980 era approach, put a reed relay with a small series resistor across the integrator capacitor and trigger the reed coil with deviation from the setpoint. <S> If you're not trying to do very long integral time constants (eg. 30-60 minutes), an analog switch may have low enough leakage that it can be used. <A> A number of ways to do it. <S> The entire PID (do you really need D?) is operating over +-15V, but the resulting signal needs to be 0.85 to 3.8V, thus a final gain & offset stage to scale the output to this range would ensure a suitable signal. <S> This however will not solve your windup issue, especially considering you are driving the op-amps into saturation (and once in saturation their response is sluggish). <S> So... if you were to put 2 10V zeners in a back-to-back arrangement across the feedback capacitor of the integrator you would stop the op-amp saturating. <S> You could add a final clamping stage as well to help: How to modify this circuit for variable clipping without affecting gain?
One way is to use a velocity algorithm that automatically stops integrating when the output saturates.
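Although the question is about an analog implementation, the clamping idea (inhibit integration while the output is saturated) is easiest to see in software. A minimal sketch of a PI loop with conditional anti-windup; the gains and step count are chosen only for illustration, while the 0.85-3.8 V limits come from the question:

```python
def pi_step(error, state, kp, ki, dt, out_min, out_max):
    """One step of a PI loop with conditional (clamping) anti-windup:
    the integrator is frozen whenever the output is saturated and the
    error would push it further into saturation. This is a software
    analog of the comparator / reset-relay tricks in the answers."""
    raw = kp * error + ki * state
    saturated_high = raw > out_max and error > 0
    saturated_low = raw < out_min and error < 0
    if not (saturated_high or saturated_low):
        state += error * dt  # integrate only when not winding up
    out = max(out_min, min(out_max, kp * error + ki * state))
    return out, state

# With a persistent error the output clamps, but the integrator stays
# bounded instead of winding up without limit:
state = 0.0
for _ in range(1000):
    out, state = pi_step(1.0, state, kp=2.0, ki=0.5, dt=0.01,
                         out_min=0.85, out_max=3.8)
print(out, state)
```

Without the saturation check, `state` would grow to 10.0 over the same run, and the controller would take a long time to "unwind" once the error reversed.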
Automatically switching from 9V battery to DC wall adapter on insertion I have a simple circuit that runs off of a 9V battery. I'm re-designing it so that it can also run off of an external 12V DC source (i.e. a wall adapter). I want to design the circuit so that if both the battery and the wall adapter are connected simultaneously, the wall adapter is used, and the battery is effectively disconnected from the circuit. I've found a few circuits online that might work , but they unfortunately might allow a trickle of current into the battery , and since it could be a non-rechargeable (i.e. alkaline) cell, this could be disastrous. I've considered using a barrel jack with a normally-closed three-terminal contact configuration, but I'm not quite sure how to start. How would I go about designing such a circuit? <Q> Your circuit will use power from the one with the highest voltage. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> When the adapter is plugged in, V1 will be 11 volts (ish). <S> When the adapter is removed, your circuit will have 8 volts at V1 from the battery. <S> There is no risk of the battery being charged by the adapter as the battery diode will block all current in the reverse direction. <S> The diode part numbers are not critical. <S> Just select diodes that match the current needed by your circuit. <A> The NC (normally closed) terminals (2 & 3 in the sheet) must connect the battery. <S> When you plug in the adapter, these terminals open. <S> Try to determine on which pin (in addition to pin 1) <S> the adapter connects <S> (I can't determine the number from the sheet). <S> Edit : The battery connects between pins 1 & 2. simulate this circuit – <S> Schematic created using CircuitLab <A> Take a look at the PowerPath Controller <S> LTC4412 or the Prioritized PowerPath Controller <S> LTC4417 from Linear Technology. <S> They have some more of these PowerPath devices. <S> Or you can take a relay.
<S> The wall adapter controls the relay to open/close the line to the battery. <S> AC wall adapter plugged in, relay on and battery line disconnected, and vice versa. <S> Then you have no voltage drop. <S> With the use of diodes, even Schottky, you always have the disadvantage of the diode's voltage drop. <S> And if the circuit's current consumption is high, the size of the diodes will increase. <A> There's a DC 6V adaptor powering the load (resistor+LED) when mains AC power is available at home. <S> A 1K/10K resistor network biases the PNP transistor and holds it in the cut-off state when line power is available, and thus disconnects the battery. <S> But if there's a power cut, which is indicated by opening the SPST switch placed next to the 6V adaptor source, the PNP transistor's base is acted upon by the 10K resistor only, pulling the base voltage to GND level. <S> Hence the PNP switches ON and the load is now powered by the 9V battery. <S> PN diodes avoid interference between the two sources. <S> Now you may think "why is the 3.2V zener connected to the 9V battery?" Answer: During testing I observed that the battery voltage must be less than or equal to the adaptor's output voltage. <S> So the zener simply drops 3.2 volts across it and the circuit works fine. <S> Thus only one source is active at a time. <S> And the load is continuously kept powered up even when the mains supply cuts off unexpectedly. <A> I think Carpetpython's circuit is using a center negative DC barrel plug since pin 1 on the jack is the center post. <S> Flip the diode orientations. <S> With a center positive circuit, the load GND will be slightly above true 0V since there is the diode drop of ~200mV with an average Schottky diode.
Invert everything for a center positive DC barrel plug. All you need is 2 diodes for your 2 power sources. The problem with voltage drop will get worse.
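The diode-OR behaviour described in the first answer can be sanity-checked with a toy model; vf = 0.3 V here is a typical Schottky forward drop, not a datasheet figure:

```python
def diode_or_bus(v_adapter, v_battery, vf=0.3):
    """Two-diode OR-ing: the bus follows the higher source minus one
    forward drop; the lower source's diode is reverse-biased, so no
    current can trickle back into the battery."""
    return max(v_adapter, v_battery) - vf

print(diode_or_bus(12.0, 9.0))  # adapter plugged in: bus ~11.7 V
print(diode_or_bus(0.0, 9.0))   # adapter removed: bus ~8.7 V from battery
```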
Best way to control around 250 LEDs I have got the task of coming up with a solution for individually controlling around 250 LEDs. As I would be a complete newcomer in programming an Arduino or whatever else there is, the computing platform is relatively irrelevant for this job (*). It mainly should be portable and not overly large. The LEDs could be single colour but if there is a simple extension for RGB LEDs, this might be even better. If there is a hard limit in controlling such a number of LEDs and things would be much easier if we reduced this number, we might be able to put three or at most four LEDs on the same controller. (The LEDs will be behind a screen and not directly seen.) In that case, the number might be around 80. The question is mostly about what would be the best platform choice in this case? (*): I am mainly a programmer and not an electrical engineer, so diving into a C-like library is my least problem. Hardware is my problem. Edit: The LEDs should be variably positioned. There may be smaller clusters or strings of a few LEDs but otherwise it should be rather arbitrary. There should be ~< 2m between the outermost LEDs. <Q> What's the budget? <S> Do you already have the LEDs? <S> For that many, I'd probably try to use some "neopixels" (LEDs with WS2812 or similar controller). <S> It can make the wiring much simpler by allowing daisy chaining. <S> And you can stick on multiple power supplies if you need, based on the number of LEDs and the brightness. <S> You only need one data pin if they're all in one string. <A> You can stick to your Arduino but you need some extra electronics to handle all the I/O. <S> Your Arduino is excellent at sending out serial data, but has a limited number of outputs to handle parallel data. <S> Like a shift register: 74HC595. <S> Check out this link to a straightforward serial to parallel, rows and columns arrangement. <S> That solution could probably be scaled to fit your needs, except for the RGB thing.
<A> There is a method out there that many people use in those giant LED cubes. <S> Essentially what you do is wire the LEDs in a grid fashion. <S> On the X axis <S> (vertically) <S> you will have all the ground pins of the LEDs connected. <S> On the Y axis (horizontally) you would have all the positive LED pins connected. <S> You will end up with each column's grounds connected, and each row's positive pins connected. <S> You can use shift registers to turn on and off a column's ground, and a row's positive. <S> You get a coordinate system. <S> This is the method that uses the fewest components and pins. <S> You could then move the LEDs around throughout your project, as long as the wiring stays the same. <S> Example: <S> Column 1's ground is turned on. <S> Row 3's positive is turned on. <S> You get the LED (1, 3) to turn on.
So you could use some kind of serial to parallel register.
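The grid idea above can be sketched numerically. This is a hypothetical Python model (on a real build it would be Arduino C shifting these patterns into the 74HC595s); the 16x16 dimensions are just one way to cover ~250 LEDs:

```python
# Sketch of the row/column multiplexing scheme described above.
# A 16x16 grid covers 256 LEDs with a few shift registers; the MCU
# strobes one row at a time while shifting out that row's column bits.

ROWS, COLS = 16, 16

def led_to_coordinate(index):
    """Map a flat LED index (0..255) to its (row, column) in the grid."""
    return index // COLS, index % COLS

def frame_to_scan_patterns(lit):
    """Turn a set of lit LED indices into one 16-bit column pattern per
    row, i.e. the words you would shift out while scanning each row."""
    patterns = [0] * ROWS
    for index in lit:
        row, col = led_to_coordinate(index)
        patterns[row] |= 1 << col
    return patterns
```

The point of the exercise: 256 individually addressable LEDs reduce to 16 scan words of 16 bits each, which is exactly what a chain of shift registers can deliver.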
Wheel speed from Shimano dynamo hub using arduino I'm trying to figure out a way to extrapolate wheel speed from the Dynamo hub I just purchased. It outputs 6V 3W AC, and I would like to potentially measure it using an arduino if possible, I'm assuming I would have to measure the sinewave somehow. Any ideas? <Q> While the accepted answer works just fine, if you're using an arduino it can actually be slightly simpler. <S> Atmega chips have clamping diodes on the input pins, meaning that with the right current restrictions the voltage can be well out of range of what would otherwise be tolerated on the input pins. <S> Essentially you would just run the input signal through a large resistance, as in the following diagram from AVR182: <S> Using this method, D1, R3, Z1, and C3 from Asmyldof's answer are no longer necessary, but a very small value for C3 may still be helpful for some noise filtering. <A> You may want to add an extra zener diode from the I/O pin to ground at 4.7V or 5.1V, to protect from transients. <S> An extra capacitor on the I/O pin could help to catch some transient noise. <S> If Arduino still uses Atmels and they support the Analog Comparator, you could use an I/O pin that uses that to create even more noise immunity. <S> Compare the signal with the 1.1V reference and you'll get very little noise. <S> As such: EDIT1: Added a pull-down resistor to the schematics to make sure it fully turns off simulate this circuit – Schematic created using CircuitLab <S> EDIT3: <S> Added the capacitor I mentioned, but forgot to draw. <S> If they use an Atmel capable of Input Capture in their board design you can do the same with an external transistor on the input capture module to decode the signal frequency with maximum hardware support. <S> simulate this circuit <S> EDIT2: <S> Then, of course, finish the exercise with some arithmetic, to determine the number of pulses per wheel rotation. <A> What you want is a zero-crossing detector. 
<S> You will need to rectify the AC into DC, and then you basically set up a circuit where a transistor will turn on whenever it crosses zero. <S> There's more information in this article.
The dynamo usually outputs a simple full sine-wave, which means you can measure the frequency by looking only at the positive peaks - half-wave rectify the signal by feeding the raw AC voltage through a sufficiently large resistor and a single diode to an I/O pin.
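Once you have the AC frequency, converting it to wheel speed is just arithmetic. A quick sketch, with the caveat that both constants are assumptions - the cycles-per-revolution depends on the hub's pole count (check the hub's documentation) and the wheel diameter on your bike:

```python
import math

# Illustrative conversion from measured dynamo AC frequency to wheel speed.
# CYCLES_PER_REV is an assumed pole-pair count for the hub; WHEEL_DIA_M
# is an assumed ~700c wheel diameter. Adjust both for your hardware.

CYCLES_PER_REV = 14
WHEEL_DIA_M = 0.7

def speed_kmh(ac_frequency_hz):
    revs_per_second = ac_frequency_hz / CYCLES_PER_REV
    metres_per_second = revs_per_second * math.pi * WHEEL_DIA_M
    return metres_per_second * 3.6
```

So with these numbers, a measured 14 Hz would correspond to one wheel revolution per second, i.e. roughly 7.9 km/h.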
Does changing the gap between plates change the capacitor voltage? Consider an ideal capacitor which has a length of \$\ell_1\$ between its plates. The capacitor terminals are open; they are not connected to any finite valued impedance. Its capacitance is \$C_1\$ and it has an initial voltage of \$V_1\$. What happens to the capacitor voltage if we make the gap between the plates \$\ell_2=2\ell_1\$ without changing the amount of charge on the plates? My thoughts on this: Increasing the gap will decrease the capacitance. $$ C_2 = \dfrac{C_1}{2} $$ Since the amount of charge is unchanged, the new capacitor voltage will be $$ V_2 = \dfrac{Q}{C_2} = \dfrac{Q}{\dfrac{C_1}{2}} = 2\dfrac{Q}{C_1} = 2V_1. $$ Is this true? Can we change the capacitor voltage just by moving its plates? For example, suppose that I'm wearing plastic shoes and I have some amount of charge on my body. This will naturally cause a static voltage, since my body and the ground act as capacitor plates. Now, if I climb a perfect insulator building (e.g.; a dry tree), will the static voltage on my body increase? <Q> A Wimshurst machine works by that process. <S> It puts charge on plates which are close together, then moves the plates apart to generate a high voltage. <S> When I was at school, in the '70s, a kid made one using PCB material for the disks, and gramophone needles to create the initial charge. <S> The 'work' was done by an electric motor. <S> Based on the length of spark it generated, I think it produced over 200,000V. <S> His dad took it to work, where they designed telephones, and tested early electronic telephones with it. <A> Yes, the voltage increases. <S> It seems most of us learned of this in school. <S> My Physics professor had a setup with movable plates, and a very sensitive (actually, very high impedance) voltmeter. <S> As the plates were pulled apart, the voltage went up. <S> This comes from the elemental formula Q=CV. <S> Pulling the plates apart lowers the capacitance. 
<S> The charge didn't go anywhere, so the voltage must rise. <S> This may seem counterintuitive, but the charges on the plates attract each other, and you are doing work by pulling them apart. <S> You can reproduce the experiment described above if you have a voltmeter with an FET input (or an oscilloscope, if you're that fortunate). <S> Ground the negative lead and hold the other lead in your hand. <S> If your shoes are not conductive, and you don't have any ESD straps connected, you should be able to deflect the meter simply by raising and lowering your foot. <S> By the way, rubbing the carpet creates the charge and picking up your feet and moving away is what raises those static charges to such high voltage levels. <S> On a practical note, this is how an electret condenser microphone works. <S> As the diaphragm vibrates, the capacitance between it and a fixed plate changes, and the voltage changes with it. <A> Q = C * U. Since you decrease C by increasing the gap but Q stays the same, U will increase. <S> In my schooltime I did not want to believe it so my teacher sent me into the experiments room with a high voltage power supply, plates, cables, isolators and a galvanometer. <S> I've tested it, and it is true! <S> The voltage increases as you increase the gap. <A> The electric field between two parallel plates of area \$A\$ is roughly \$E = { Q \over \epsilon A} \$, hence the voltage at a distance \$x\$ apart will be \$V(x) = { Q x \over \epsilon A} \$. <S> So, doubling the distance will double the voltage. <S> The electric field approximation will degrade significantly as \$x\$ gets larger than some fraction of some characteristic dimension of the plates. <A> As we know, a capacitor consists of two parallel metallic plates. 
<S> And the potential between two plates of area A, separation distance d, and with charges +Q and -Q, is given by $$\Delta V = \frac{Qd}{\varepsilon_0 A}$$ <S> So potential difference is directly proportional to the separation distance. <A> You're correct. <S> You might notice that while charge is conserved, the energy stored in the capacitor after separating the plates has increased: $$E_1 = \frac{1}{2}C_1V_1^2$$ $$E_2 = \frac{1}{2}C_2V_2^2 = \frac{1}{2}\frac{C_1}{2}(2V_1)^2 = C_1V_1^2 = 2E_1$$ This extra energy comes from the mechanical work that you had to do to move the plates apart against the electrostatic force holding them together. <A> In the context described with plates not connected, the scenario and formulas indicate that for distance 2l you will require twice the voltage to polarize the same amount of charge.
The voltage definitely increases.
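The chain of reasoning in the question and answers above can be verified numerically. The values here are arbitrary; what matters is the relationships Q = CV and E = CV²/2:

```python
# Numerical check of the argument above: halving C at constant Q doubles V,
# and the stored energy doubles too - the extra energy is the mechanical
# work done pulling the plates apart.

C1 = 100e-12   # 100 pF (arbitrary example value)
V1 = 10.0      # volts
Q = C1 * V1    # charge is fixed once the terminals are open

C2 = C1 / 2            # doubling the gap halves the capacitance
V2 = Q / C2            # new voltage at the same charge
E1 = 0.5 * C1 * V1**2  # energy before
E2 = 0.5 * C2 * V2**2  # energy after
```

Running the numbers gives V2 = 2*V1 and E2 = 2*E1, matching both the questioner's derivation and the energy argument in the answer.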
Output of Inverting opamp with single supply I have the following circuit I'm looking into (it's actually a section of a larger design) simulate this circuit – Schematic created using CircuitLab V1 has an input range of 0 to 4V. Vout, I suspect, would be 0V for the entire duration. However, when I simulate both here and using LTSPICE, I get a non zero answer. I am unsure as to why. It would seem that I am at fault, since the two simulations do not give me a zero answer (although the simulation done here and on LTSPICE are also different from each other) and this circuit is "known" to work. What am I missing? edit Updated schematic with R3. <Q> Vout is not the output of your opamp. <S> So, the output of the opamp can be looked at like 0V, the same as if the diode's anode would be connected to ground. <S> The input of the opamp has almost no current flowing in <S> so it is almost an open circuit. <S> The diode is also reverse biased so can be looked at like an open circuit. <S> Finally, you get a voltage source with two resistors in series and you are checking the voltage on the farther away one; this will be the same voltage as the voltage source because all the voltage falls on these two resistors. <A> Start by assuming the diode is a short circuit.... <S> With a positive input voltage, the op-amp will try and force the inverting pin to be the same potential as the non-inverting pin (0 volts) by taking its output negative. <S> That would happen in normal circumstances if there was a negative supply for the op-amp. <S> Given you have no negative supply all the op-amp can do <S> is drive its output hard against the 0 volt rail. <S> Now add the diode back in.... <S> The op-amp output is at 0 volts and there will be a positive voltage from V1 (via the two resistors) applying itself to the cathode of the diode. <S> This reverse biases the diode; therefore, the op-amp output no longer has any influence on the actual circuit output. 
<S> The actual circuit output voltage then becomes the input voltage V1. <S> In reality there may be a tiny trickle of input current into the op-amp's input that very slightly reduces the output voltage by a few millivolts <S> BUT, your circuit might as well be V1 connected to Vout via 15kohm <S> - the op-amp does nothing for positive inputs. <A> For inputs < 0V, the output is -0.5*Vin <S> For inputs > 0V, the output is +0.5*Vin (if loaded with a 15K resistor to ground or a virtual ground, which I suspect your inverting stage has-- 15K and 30K to yield an output voltage of -|Vin|). <S> Edit: You always get a positive output voltage from this circuit fragment. <S> The 15K you mention is just to allow the op-amp to swing very close to the negative rail (ground). <S> If the input voltage is less than zero, it acts as an inverting amplifier, the op-amp output drives the diode anode to a voltage equal to -0.5*Vin (the op-amp output itself will be a bit higher to account for the diode drop). <S> If the input voltage is greater than zero, the op-amp saturates at the negative rail (ground) <S> the diode is reverse biased, and the circuit looks like a 15K resistor. <S> Hence, if you load it with 15K it will have an output of 0.5*Vin (voltage divider). <S> In the below top schematic, the shown (different) Vout = -|Vin|. <S> Of course the second op-amp requires a negative supply, but the first one does not. <S> In the below bottom schematic, Vout = +|Vin| and neither op-amp requires a negative supply. <S> simulate this circuit – <S> Schematic created using CircuitLab
If you add another probe and check the actual output of the opamp, you will see it is always at 0V, because you have an inverting configuration without a negative rail, so you force the opamp towards the ground voltage, which is 0V. This is actually a useful circuit - for low frequencies it acts as a precision rectifier.
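The behaviour described in the answers above (with the 15K/30K values and a 15K load assumed) reduces to a simple piecewise transfer function - which is why the fragment works as a precision rectifier stage:

```python
# Idealized model of the circuit fragment's behaviour as described above,
# assuming the quoted 15K/30K resistors and a 15K load to ground.
# Negative inputs: op-amp active, inverting with an effective gain of -0.5.
# Positive inputs: op-amp saturated at ground, diode reverse biased, and
# the 15K/15K resistive divider passes half the input.

def vout(vin):
    if vin < 0:
        return -0.5 * vin   # inverting path through the op-amp
    return 0.5 * vin        # passive divider path, op-amp out of the loop
```

Either way the output is 0.5*|Vin| - always positive, matching the "Edit" note in the answer above.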
Can I turn Radio waves into light? Wikipedia says that the frequency of light is 300 THz. I've made a radio waves transmitter that transmits at about 100 MHz. If I increase the frequency of the transmitter to 300 THz, will the antenna produce spark or light? Can I do this circuit practically o_O? Is there any transistor or IC that can oscillate at 300 THz? Can I find an inductance (coil) of 0.0025 pH and a capacitor of 1 pF? I know that it is a science fiction question but please, don't make fun of me :) <Q> Can I do this circuit practically o_O? <S> Is there any transistor or IC that can oscillate 300 THz? <S> Can I find an inductance (coil) of 0.0025 pH and capacitor of 1 pF? <S> Not quite, no, <S> and no. <S> But this is an area of active research: <S> The Truth About Terahertz . <S> The basic principle of the tuned LC radio emitter is resonance. <S> The techniques for producing high frequency tuned signals at higher frequencies are also based on resonance, but because the frequency is higher the resonant elements need to be much smaller. <S> You also need some system for amplifying the signal, bearing in mind that terahertz is above the operating speed of almost all transistors. <S> Intermediate frequencies can be produced by a device called a Klystron, which is halfway between a vacuum tube and a laser in its operation. <A> 300THz transmitter? <S> (the band between infra red and microwaves) - with a lot of technology and know how perhaps. <S> See http://www.rpi.edu/terahertz/about_us.html <S> 300THz transistor/IC - no. <S> Use discrete inductors and capacitors at these frequencies? <S> No. <S> At very high frequencies conventional capacitors and inductors are replaced by other devices (see resonant cavities) <S> In theory there is only one basic difference between a 'photon' of radio waves, light waves, far infra red waves, microwaves, ultra violet waves, x-rays etc. <S> and that difference is the energy of the photon. 
<S> This energy can be calculated using the simple formula: E = hf, where E = energy in joules, h = Planck's constant (6.626 × 10⁻³⁴ J·s), and f is the frequency of the photon. <S> If you crunch the numbers you will see that the photonic energy of a radiowave is millions of times smaller than that of a visible light photon. <S> Light emitting 'transmitters' (into optical devices) use electrons jumping from one energy level to another rather than using a 'tuned circuit'. <S> It turns out that the energy gap is just the right amount to give a visible light photon. <S> There is no 'one technology fits all' that can produce photons of different frequencies (energies) across the entire spectrum. <S> Even solid state devices become more exotic as you demand higher and higher frequencies and circuit boards start to take on the appearance of complex plumbing. <S> Can it be done? <S> Perhaps. <S> New developments in nanotechnology may well produce a single device capable of converting the energy from radio wave photons into TeraHertz, infra red or visible light photons etc.. <S> They've already developed nanotube transmitters and receivers using graphene. <S> see <S> http://berkeley.edu/news/media/releases/2007/10/31_NanoRadio.shtml <S> Unfortunately my crystal ball is on the fritz at the moment so I can't see into the future. <A> It may be possible, but I don't know of practical devices that work in this fashion. <S> If you search likely terms you'll find some work, but more along the lines of physics experiments than electronics. <S> Transistors tend to stop amplifying at under 100GHz even for really good SiGe IC transistors. <S> In the reverse direction, there are (sort-of) practical light detection devices that use a nano-antenna array. <S> I have seen some work in Germany that looked promising, and I'm sure they're not the only institute working on it. <S> It's easier to go from light to DC than from DC to light. 
<A> An electro-optic modulator does what I believe you are asking about. <S> Here's an extract from the wiki: - Electro-optic modulator (EOM) is an optical device in which a signal-controlled element exhibiting the electro-optic effect is used to modulate a beam of light. <S> The modulation may be imposed on the phase, frequency, amplitude, or polarization of the beam. <S> Modulation bandwidths extending into the gigahertz range are possible with the use of laser-controlled modulators. <S> As you can see, AM, FM or PM are achievable. <A> Hmm, Well there are non-linear crystals whereby you can mix "light" of different wavelengths. <S> Search for OPA's (optical parametric amplifiers). <S> But you have to start with light... a laser. <S> I guess in principle you could start with 100MHz and double up to 300THz, but that's a lot of doubling :^) <S> If I stretched your question a bit, and asked how to turn electrons into light... (not in an atom) <S> Then I would think about accelerators, where you get synchrotron radiation. <S> And at the end of an electron beam you can build a free electron laser. <S> (Years ago I worked at an FEL, not quite visible (3-10 um), but you could see it when it blew holes in things.)
You can get tuned light of a particular frequency using a LASER (Light Amplification by Stimulated Emission of Radiation), which is also a resonant process.
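The E = hf argument above is easy to check with the two frequencies from the question - it makes concrete just how far apart a 100 MHz radio photon and a 300 THz light photon are:

```python
# Photon energy E = h*f for the two frequencies discussed above:
# a 100 MHz radio signal versus ~300 THz (near-visible light).

H = 6.626e-34  # Planck's constant, J*s

def photon_energy(f_hz):
    return H * f_hz

radio = photon_energy(100e6)    # 100 MHz radio photon
light = photon_energy(300e12)   # 300 THz light photon
ratio = light / radio           # how many times more energetic light is
```

The ratio comes out to three million - which is the "millions of times smaller" figure quoted in the answer, and why no scaled-up LC tank gets you there.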
Electronic component for tapping I'm in need of an electronic component that is able to tap, giving a type of massage on the skin. I did some search and can't find anything that is as big as the tip of the finger with the nail. Do you know about something? I'm thinking of using some type of ATTiny controller to control the rhythm and the sequences. I was thinking of some electromagnetic type of a pump or servo that could be used but didn't find anything. Thanks a lot. <Q> I think a small enclosed vibration motor will suit your needs. <S> This type is ~ 10 mm diameter, 2mm thick. <S> The drawing in Spehro's answer made me curious, because the thingy he shows looks a lot like the motor I have, so I dissected one. <S> Conclusion: the motor I showed is definitely a rotating type, not a pulsing type. <A> You would want a solenoid. <S> A solenoid pulls or pushes the actuator shaft in or out when you energize (apply current to) the solenoid coil. <S> If the shaft is too thin you can put a rubber or plastic cap on it. <S> Look here for some examples of what I'm talking about. <S> I'm sure you can find more suppliers now that you know what you are looking for. <S> I used the search terms solenoid and braille to find that address. <S> On further search, I see that Sparkfun also sells a 5Volt solenoid with a 4 to 6 millimeter throw. <A> Haptics includes both vibration motors and resonant actuators used for tactile feedback. <S> This is a good TI application note on it. <S> From the same supplier, a typical pager motor:
One very useful keyword/phrase you could use for searches is "haptic actuator".
Double tap push button which resets if not pressed twice in quick succession? I'm new to electronics, so don't know where to begin - I'm trying to create a push button which needs to be pressed twice in quick succession, to actually power on a device. Pressing it just once, or having a long gap between presses should be equivalent to not pressing the button at all. Is there a way to do this? How would I get started? <Q> There are many ways to do this. <S> In most modern cases, this function would be a few lines of code in a microcontroller that is already present and is (for example) woken up by any closing of the switch. <S> A simple way to do it without a microcontroller is with a CD40106B hex Schmitt trigger gate and a CD4013B dual-D flip-flop that are both powered on continuously. <S> You'll also need three RC circuits (so about 7 resistors and three capacitors). <S> The time constants are: Debounce the switch. <S> Needs a pullup resistor, discharge resistor, series resistor to ST input (1K-10K is okay for this purpose) and a cap to ground (1uF ceramic is okay for all these). <S> Should be maybe 20-50msec, so 20-50K. Time the space between presses. <S> Needs a series resistor to output, series resistor to input and a cap to ground. <S> Time is up to you, probably < 2 seconds, so 500K-2M. Time power-on reset. <S> Needs a series resistor to Vdd, series resistor to input and cap to ground. <S> Time should be something like 100msec, so ~100K. Clock both FFs from the debounced pushbutton (rising edge, so you need two ST inverters). <S> D input of the first FF is tied to Vdd, so output goes high on the pushbutton press. <S> Feed the Q output through delay (2) and through two ST inverters to self-reset and to the D input on the second FF. 
<S> Delay (3) goes to the reset input on the second FF so that power is off when first applied (there is a brief instant after power is applied where a single press of the switch may turn the power on depending on the state of FF 1). <S> Another gate would get rid of that, or connect the cap to Vdd rather than ground. <S> Something like this fairly straightforward circuit: <S> (the ST "Not 5" is unnecessary, it's there because the FF symbol does not show the /Q output). <S> The unmarked inputs are Reset. <S> Set inputs must be grounded. <S> simulate this circuit – <S> Schematic created using CircuitLab R8 deliberately slows the turn-on of Q1 so the battery (generously bypassed by C4/C5) won't glitch if there are caps in the controlled circuit. <A> Of course there is a way to do this. <S> But in most solutions there must be some circuit that receives power while you are pushing the button, otherwise it won't be able to detect the rapid pushing. <S> I would use a micro-controller with suitable code programmed into it.... <A> Microcontroller, which is powered on by first push, then checks the second push, and in case of success, sends a signal to the main device. <S> Or the same algorithm in the device's microcontroller (if there is one already).
An IMO more complex solution could use a pulse generator that triggers a monostable, and a gate that detects a second pulse while the monostable is still triggered.
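The microcontroller answers above boil down to a few lines of logic: accept the power-on only if two debounced presses land within some window. A behavioural sketch (the 1-second window is an assumption; tune to taste):

```python
# Sketch of the double-tap detection the microcontroller answers describe:
# two debounced presses within the window count, anything else does not.

WINDOW_S = 1.0  # assumed maximum gap between the two presses

def detect_double_tap(press_times, window=WINDOW_S):
    """press_times: sorted timestamps (seconds) of debounced presses.
    True if any two consecutive presses fall within the window."""
    return any(b - a <= window for a, b in zip(press_times, press_times[1:]))
```

A single press produces no pair at all, and a long gap produces a pair outside the window - both behave "as if the button was never pressed", which is exactly the requirement in the question.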
Why is the capacitor short-circuited in this example? Please forgive me if anything is wrong in my question. I am a school student. I have the following circuit diagram: simulate this circuit – Schematic created using CircuitLab There are three capacitors with equal capacitance \$C\$. As they are in series combination, the total capacitance should be \$C/3\$. But my teacher said that the second capacitor (C2) is short circuited, so the output will be \$C/2\$. My question is: why is C2 termed as short-circuited in this diagram? I asked him again on why C2 is short circuited, but he told me that I should search about it. Unfortunately, I couldn't find anything helpful on google. <Q> The short-circuit in red puts the 2 points A and B at the same voltage, bypassing the capacitor C2. <A> Any element whose terminals are connected by a conductor, like the capacitor in the figure, is said to be shorted. <S> With its terminals shorted, the voltage across it is zero (more precisely, the potential difference between them), so this element is not operational in the circuit, and can be removed for analysis. <S> The other two capacitors are in series, hence: $$C_{eq} = \dfrac{C}{2}$$ <S> provided that the capacitors are the same value. <S> Another correction: <S> As they are in series combination, the total capacitance should be 3C. <S> If they were in series, the capacity would be: $$C_{eq} = \dfrac{C}{3}$$ <S> Take a look at this previous answer. <A> The vertical wire drawn next to the vertical capacitor shorts the two terminals of the capacitor. <S> Any current flowing through this circuit segment will flow through the vertical wire and completely bypass the vertical capacitor due to the short. <S> This means you can ignore the shorted capacitor -- it has no effect on the circuit. <S> The two remaining capacitors are in series because they have one terminal each connected directly to each other by a wire. 
<S> If they were in parallel then both terminals would be connected directly to each other by wires (i.e. they would be in parallel if you connected the two vertical wires on the left). <S> Also, the equivalent capacitance \$C_{eq}\$ of \$n\$ capacitors \$C_{1}\$, \$C_{2}\$, \$\ldots C_{n}\$ in series is $$\frac{1}{C_{eq}} = \sum_{i=1}^{n}\frac{1}{C_{i}}$$ <S> Since all the capacitors have capacitance \$C\$ and one is shorted here the equivalent capacitance is $$C_{eq} = \frac{C}{2}$$ <S> Capacitance adds when capacitors are in parallel. <A> From your question: "Why is C2 termed as short-circuited in this diagram?" <S> it seems you are asking <S> "What in this diagram indicates that C2 is short-circuited?" <S> But in your comment: "... <S> but why?... <S> I couldn't understand it. <S> " , it seems you could be saying: <S> "I (now) understand that it is short-circuited, but why is it short-circuited?" <S> as in, it makes no sense to short-circuit the capacitor, so why is it drawn this way? <S> If you are actually asking why as in "why is it drawn this way (short-circuited)? <S> " , then the answer most likely is that it was drawn that way to provide an example of a shorted capacitor in a circuit for the purpose of introducing the concept. <S> In "real life", a circuit diagram would not normally include a permanent wire connecting both ends of a capacitor. <A> This is an old post but worth saying that the confusion here is that you are assuming that since you are learning something that it has some use, some method of problem solving but your teacher was simply pointing out what a short circuit would look like if it were drawn on a diagram. <S> Absolutely worthless information regardless of how you look at it mostly because if something is short circuited it is essentially not there. <S> Typical schooling to teach problems and not solutions. <S> Sounds like your teacher doesn’t know the answer by telling you to search on google. 
<S> Either way the concept is simpler than it seems: circuits may be corrupted by short circuits.
A short circuit here means that there is no resistance (impedance) between the two terminals of the shorted capacitor.
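The arithmetic from the answers above is worth seeing side by side - three equal capacitors in series would give C/3, but with C2 shorted out only two remain in the chain:

```python
# Series-capacitor arithmetic for the example discussed above.

def series_capacitance(caps):
    """Equivalent capacitance of capacitors in series: 1/Ceq = sum(1/Ci)."""
    return 1.0 / sum(1.0 / c for c in caps)

C = 1e-6  # 1 uF, arbitrary example value
all_three = series_capacitance([C, C, C])   # C/3, if nothing were shorted
c2_shorted = series_capacitance([C, C])     # the short removes C2 entirely
```

The shorted element simply drops out of the sum, which is the teacher's C/2 result.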
Automatically measure voltage over a wide range There are voltmeters that can measure voltage over a wide range without the need to switch the range manually. I'm quite curious how they do it, because I'd like to make a tiny device capable of the same, up to 1000V. I was thinking about utilising a capacitor - if you connect it to voltage on one side, you'll get the opposite voltage on the other side, but high current will not flow. simulate this circuit – Schematic created using CircuitLab The change in potential should be measurable, shouldn't it? If that's not the way, what is? <Q> If you put aside automatic range adjustment for the moment, it gets conceptually much simpler. <S> Say that you have a knob, and each position of the knob activates a different amplifier: x1, x10, x100 etc. <S> Typically, one of these values will be optimal to make the best use of the ADC, while the lower settings will give too small output (therefore higher measurement error) and higher amplification will cause the signal to saturate and put the instrument out of scale. <S> You can of course go the other way around and use different types of attenuation, to use signals of amplitude greater than the range of the ADC. <S> Note that you can achieve the same effect by chaining amplifiers (e.g. 10x) and reading the signal at various stages (again, using switches), instead of having different ones in parallel. <S> Now, regarding the automatic switching, it gets more complicated because you have to add some intelligence to assess which is the best amplification to get the best measurement. <S> The simplest way (at least conceptually) is to read the ADC and compare the output value with thresholds: if it returns the highest value you can assume it's saturated, therefore you can activate a lower amplification; if it's lower than a certain threshold, you can assume that amplifying the signal will provide a better reading. 
<S> Of course you'll have to digitally multiply the ADC reading to compensate for amplification. <S> There is also a use for a serial capacitor ( AC coupling ), but it's not related to signal amplitude <S> and I'd like you to disregard it for now, else it would complicate things. <S> There is an interesting article describing the exact same concept, but using resistors (voltage dividers) as attenuators, instead of amplifiers. <A> As an alternative to the common approach that clabacchio has already explained well, you can use very high resolution A/Ds that require 10s of ms per reading when the result is only to display to a human. <S> You generally want to update a digital display in the 2-4 Hz range, so you have at least 250 ms per reading. <S> There are delta-sigma A/Ds available that claim over 20 bits. <S> Let's say 8 real bits is good enough, which gives you 1/2 percent resolution. <S> If you arrange the highest voltage of interest to maximize the output of a 20 bit A/D, then you can read a voltage 2^12 times lower and still get 8 bits. <S> For example, if you want the meter to read up to 1 kV, then it will still be able to read 1/4 volt with 1/2 percent resolution. <S> If that's good enough, then no range switching is required. <S> The only "auto ranging" would be in how the result is displayed to the user. <A> I'd like to add another option that's certainly the cheapest (but clearly not the most accurate): instead of measuring the voltage itself, measure its logarithm . <S> Here is the basic idea: simulate this circuit – <S> Schematic created using CircuitLab <S> The principle is the following (left schematic): the voltage across a diode is the log of the current flowing through it <S> (I = Is*(exp(V/Vth)-1), where Vth~25mV at room temperature). <S> So what you have to do is make a current flow through it that is proportional to the applied voltage: that's what resistors do! 
<S> The output voltage will stay below 0.7V at 1kV input voltage, which is perfect to measure with the arduino internal 1.1V reference. <S> Let's improve it (but keeping it simple) <S> The circuit on the right is basically just the same, but I split the 1M resistor in 4 series resistors (so that 1kV applied voltage is OK with common 1/4W resistors) <S> I replaced the diode with a diode-connected transistor (whose voltage is much more accurately the log of the current). <S> What are the drawbacks? <S> The current flowing through the diode will be proportional to the voltage across the resistor, Vin-Vout, not Vin itself. <S> However, that's easy to compensate for in software (and that can also be corrected in hardware, look for "transdiode amplifier" op-amp circuit). <S> The coefficient Vth is proportional to temperature (in kelvin) so expect ~0.3%/°C change. <S> This circuit won't measure very low voltages well (when input voltage gets close to thermal voltage) <S> Finally, the input impedance of this arrangement is only 1MOhm, much less than ADC input. <S> This may unacceptably load the circuit you're trying to measure. <S> All of those can be somewhat mitigated (1. is easy to do in software, 2. can be compensated for example using a second transistor in a transistor array IC, like LM3046...) <S> but it gets very complicated very fast if you want really high resolution. <S> So, finally... If you want the most simple circuit possible, without any (potentially unreliable) gain switching algorithms or special parts, and if the input voltage doesn't go very low, and if you're OK with limited accuracy, then this might be an option. <S> And above all don't forget... <S> take extreme care, high voltage IS lethal!
The way instruments can measure various ranges of a certain quantity is through amplification.
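The threshold-based autoranging described in the first answer can be sketched as a loop: try the most sensitive range first, back off if the ADC clips, then divide the amplification back out in software. All values here are illustrative (a hypothetical 10-bit ADC with a 5V reference):

```python
# Toy version of the threshold-based autoranging described above.
# Gains, ADC width and reference are assumptions for illustration only.

GAINS = [100, 10, 1]     # try the most sensitive range first
ADC_FULL_SCALE = 1023    # 10-bit ADC
SATURATION = 1000        # treat readings above this as clipped

def read_adc(voltage, gain, vref=5.0):
    """Idealized ADC model: amplified input quantized against vref."""
    code = int(voltage * gain / vref * ADC_FULL_SCALE)
    return min(code, ADC_FULL_SCALE)

def autorange_measure(voltage):
    for gain in GAINS:
        code = read_adc(voltage, gain)
        if code < SATURATION:
            # divide the amplification back out of the reading
            return code / ADC_FULL_SCALE * 5.0 / gain
    return None  # out of range even at unity gain
```

So a 10 mV input is read at gain 100 and a 3 V input at gain 1, each using most of the ADC's span - which is the whole point of range switching.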
Do transmission lines induce electric or magnetic fields in towers? Transmission lines have EM fields around them due to the high amount of current. Do these fields affect transmission towers (metal or otherwise)? If so, would it be possible to estimate the current flowing through the lines by measuring EM fields at the base of a transmission tower? <Q> Do these fields affect transmission towers? <S> To some extent. <S> You have electric field simply because of the voltage on the wire. <S> This static electricity has an effect on nearby objects. <S> Air ionization, attraction forces, etc. <S> BUT, the effect on towers will be minimal simply because power lines and comms towers are perpendicular to each other. <S> So pretty much no inductive interaction comes into play. <S> What's more, such structures are always grounded. <S> So there's no voltage on them. <S> The only thing you may be able to detect on the tower is its magnetic field originating from the 50Hz current flowing up and down the structure. <S> However I said this current will be minimal, <S> so <S> yeah. <S> If you'd be able to measure currents in power lines from any of that, I have no idea. <A> Transmission lines for power are balanced whether it is single, dual or three phase. <S> This means that any magnetic effect seen at some several metres away will be nearly zero. <S> Current travels down one wire and back up the other, therefore the mag fields tend to cancel at some distance away. <S> This means you can't easily estimate the current flow. <S> The electric fields also tend to cancel but E field has nothing to do with current flow. <A> I am a geologist, not an engineer. <S> That said, I inadvertently ran an experiment which is relevant to this thread. <S> I recently had a geophysical survey done under a power line (132 kV three phase 50 Hz). <S> The image below is a cross-section showing induced polarization of the soil and rock under the power line. 
<S> Note the spatial association of highly chargeable zones (red) at a depth of 100-200 m with the towers. <S> The measured range is 0-40 mV/V. <S> Bear in mind that the spatial resolution gets poor in the last 400 m at both ends of the line, <S> so the relationship is not as strong. <S> The survey uses surface electrodes spaced every 20 m to measure current. <S> At 40 m intervals, at a distance of 50 m perpendicular to the measurement array, a current pulse is injected and the decay after each pulse is recorded. <S> Some rocks are more chargeable than others and this is what we are looking for. <S> My geophysicist does not see 50 Hz spikes in the decay curves, so he is convinced these are real rock properties of the anomalous zones. <S> I'm not a big believer in coincidences when it comes to geology and suspect <S> the 50 Hz frequency is not even relevant here. <S> This sure looks like there is some sort of field induced in the towers. <S> Note that the transmission lines themselves do not seem to have an impact. <A> You mention transmission lines and EM fields, so presumably this is a question about RF. <S> Transmission lines are typically coaxial cables, balanced twin or hollow waveguides. <S> The coax and hollow waveguide theoretically have no external fields if made from perfect conductors; in practice some small leakage might exist. <S> A balanced twin (less common these days) will have an unbalanced near-field and this will cause local effects; for this reason they are separated from surrounding metal structures. <S> The unbalanced field can be measured (still small) and an SWR meter does just that. <S> However, the amount of power coupled onto a massive tower structure will most likely not give useful measurable values. <S> Voltages induced on the structure will more likely occur near the radiator (antenna) where the transmission line ends.
The magnetic field induces currents (and voltages) in conductive structures.
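The balanced-pair cancellation argument above is easy to illustrate numerically with idealized infinite straight wires (the current and conductor spacing below are assumed example values, not figures from the thread):

```python
# Net magnetic field of a balanced pair of conductors carrying equal and
# opposite currents (idealized long straight wires). Shows why the net
# field falls off roughly as 1/r^2 instead of 1/r, making current
# estimation at the tower base impractical.
from math import pi

MU0 = 4e-7 * pi  # permeability of free space, T*m/A

def b_single(current, r):
    """Field of one infinite straight wire at distance r (tesla)."""
    return MU0 * current / (2 * pi * r)

def b_pair(current, r, spacing):
    """Net field of two opposite currents, measured in their plane at
    distance r from the midpoint of the pair."""
    return abs(b_single(current, r - spacing / 2)
               - b_single(current, r + spacing / 2))

I = 500.0   # line current in amps (assumed)
d = 10.0    # conductor spacing in metres (assumed)

# A single wire's field drops only 10x from 20 m to 200 m;
# the balanced pair drops by roughly 100x over the same range.
ratio = b_pair(I, 20.0, d) / b_pair(I, 200.0, d)
```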
Difference in self discharge rate between Lithium iron phosphate battery vs Lithium Polymer Battery What is the difference in self discharge rate between a lithium iron phosphate battery and a lithium polymer battery? I have a remote application in which the self discharge rate matters a lot. <Q> You can look at similar parts' datasheets and white papers to get an indication of the typical self-discharge rates for the style of LiPo and LiFePO4 cells you are considering. <S> Typically, both the LiPoly and LiFePO4 types have self-discharge rates roughly less than 5%/month when stored under ideal temperature and state-of-charge conditions. <S> One more thing to consider is that things like self-discharge can be quite variable, depending on how well the manufacturer can maintain cell-to-cell and lot-to-lot consistency in the process. <S> Even with this, the conditions that your device will be subjected to may cause a cell to change, and you will either want to design to mitigate these conditions or factor in some allowance for capacity dropping over time, and self-discharge increasing. <A> Probably you have already found that LiFePO4 is more costly than Li Polymer. <S> If both meet your capacity specification, I recommend going with <S> LiFePO4 if weight is an important factor. <S> If not, buy the cheaper one. <S> For better autonomy, avoid regulators. <S> Try to use the voltage supplied by the battery directly. <S> If that's not possible, never use linear ones but switched ones (take a look at buck converters). <S> Also, take a look at some ways to auto-charge your batteries, like solar panels <S> (they are cheap if you are not handling big loads). <A> I have a motorcycle battery that is made up of eight A123 ANR26650M1A LiFePO4 cells in 4S2P configuration. <S> I was wondering about self-discharge and made some measurements with it connected/drawing about 2 mA and disconnected. <S> The pack was somewhere around 60% charged.
<S> Voltage measurements were made on a daily basis. <S> The A123 specs indicate that in the range where measurements were made, a drop of .02 V in the pack voltage is about 3% of charge capacity. <S> The results were: Connected: 13.404, 13.396, 13.388, 13.384, 13.379, 13.374, 13.371, 13.368 <S> Disconnected: 13.368, 13.373, 13.374, 13.374, 13.374, 13.375, 13.375, 13.375, <S> 13.375 <S> I'm guessing that the rise when first disconnected had something to do with being disconnected. <S> After that the voltage was very stable. <S> The 1 mV increase in mid-test may be associated with a heat wave that arrived during the test period and could be either the battery or my meter. <S> Things were so stable that I could only estimate that the self-discharge rate was no more than 6%/year. <S> To get a better estimate I'd have to make these measurements over several months <S> and I wasn't interested in waiting that long. <A> Generally the self-discharge factor is used when the battery is stored for long periods of time. <S> The higher the OCP of the battery the higher the self-discharge rate.
With the limited amount of information on the particular parts you are asking about, it is not possible to find the difference between the two chemistries with respect to self-discharge. Therefore, the SOC (state of charge) at which you store your battery will mainly determine its self-discharge rate, due to the OCP (open-circuit potential) at which the battery sits at that specific SOC.
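The measurement logic in the motorcycle-battery answer above can be turned into a small calculation, using that answer's rule of thumb that a 0.02 V pack-voltage drop corresponds to about 3% of capacity in this region of the A123 discharge curve:

```python
# Disconnected pack voltages from the answer, one reading per day.
disconnected = [13.368, 13.373, 13.374, 13.374, 13.374,
                13.375, 13.375, 13.375, 13.375]

PCT_PER_0P02V = 3.0               # ~3% of capacity per 0.02 V drop (rule of thumb)
days = len(disconnected) - 1

drop_v = disconnected[0] - disconnected[-1]   # negative: the voltage actually rose
pct_per_day = (drop_v / 0.02) * PCT_PER_0P02V / days
# A non-positive result means no measurable self-discharge over the test
# window, consistent with the poster's "no more than 6%/year" estimate.
```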
What is the cheapest sensor that can detect a wall regardless of color? I'm making a small toy that will use the sensor reading to avoid colliding with a wall. Before buying the sensors I want to confirm that they can detect green, pink, or white colored walls. <Q> Search for HC-SR04 on any shopping site. <S> Their price starts at about 1 GBP. <S> It appears to be the same part as at dx.com. <S> They will need something to drive the transmit pin, and time the 'echo'. <S> Most people use a microcontroller. <S> The sensor has an input pin to trigger the transmitted signal, and an output pin which it asserts to say when it detected the echo. <S> An alternative is IR. <S> The technique measures the brightness of light reflected from a surface. <S> It needs some calibration, and is usually only good for about 20-30 cm. <S> Further, they can be confused or swamped by sunlight, and some types of lighting. <S> So typically they are measured with the IR emitter on, and off, to get a difference. <S> Usually people use an MCU. <S> Search for 'micromouse distance sensor', and you'll find lots of information. <S> Silicon Labs make the Si1102 Proximity Sensor IC. <S> It uses IR. <S> However, it is autonomous, and does all of the processing itself. <S> Its sensing distance is programmed with a couple of resistors, and it raises a digital pin when the sensing threshold is crossed. <S> So it doesn't actually need an MCU. <S> It is under 2 GBP from distributors, and also needs an IR emitter (which are under 0.50 GBP). <A> An ultrasonic sensor should do the job quite okay. <S> You can buy some that are already easy to interface to a uC, just like the one here: http://www.dx.com/p/navo-ultrasonic-sensor-distance-measuring-module-green-270051 <A>
There are two simple ways: 1. Direct use of an ultrasonic sensor should do the job. 2. A basic IR sensor configured with filters can be used as an obstacle detector. So the total cost needs to include an MCU.
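For the HC-SR04 route, the host MCU's job is just to time the echo pulse and convert it to distance. A minimal sketch of that conversion (sensor-independent math, assuming ~343 m/s speed of sound):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_distance_cm(echo_seconds):
    """HC-SR04-style conversion: the echo pin is held high for the
    round-trip time of the pulse, so halve it before converting."""
    return echo_seconds * SPEED_OF_SOUND / 2.0 * 100.0

# An echo pulse of ~1.166 ms corresponds to a wall about 20 cm away,
# regardless of the wall's color (sound doesn't care about pigment).
d_cm = echo_to_distance_cm(1.166e-3)
```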
How to remove energy from an inductor using semiconductors? So I have this coil and I'm driving dc current through it. This coil has inductance and thus stores energy from the dc current. I have to remove this energy before I can change the polarity of my h-bridge to prevent dangerous voltage rises. How can I efficiently remove this stored energy from the coil immediately after it has been disconnected from the current source? Would just using a bipolar capacitor in series with a small resistor, in parallel with the coil, work, like this? simulate this circuit – Schematic created using CircuitLab <Q> In general, you dissipate the energy in an inductor by allowing the current to circulate through a resistance. <S> In the simplest (single-ended) form, you have a 'flywheel diode', which just circulates the current through the inductor. <S> The dissipation occurs as Vf <S> * I in the diode and Rl <S> * I^2 in the inductor, where Rl is the resistance of the inductor. <S> The voltage of the 'bottom end' of the inductor rises to Vf above the supply rail during circulation, so doesn't impose much extra voltage stress on the rest of the circuit. <S> To cause the current to decay faster, you can add additional resistance in series with the flywheel diode. <S> This adds R * I^2 to your dissipation, but increases the overvoltage by IR volts, which is the trade-off. <S> Pretty much <S> you're just trading off <S> voltage spike height against speed of dissipation. <A> Just parallel the inductor with back-to-back Zeners, or a TVS, like this: Or don't do anything at all if the MOSFETs you're using have parasitic diodes which can take the current hit from the inductor when you switch. <S> Or, if they don't, you could do this: <A> OK this also may work (in response to the parallel cap idea.) <S> simulate this circuit – <S> Schematic created using CircuitLab <S> So this might work, depending on L and R. L <S> is the coil inductance and R the coil resistance.
<S> You choose C such that RC = L/R, or C = L/R^2. <S> This then makes it a low-Q resonant circuit. <S> (Search for Zobel network.) <S> And it will decay with a time constant of RC = L/R. <S> If you have the voltage headroom you can add more series R to the coil and get it to switch faster. <S> (Is there some way to make the schematic smaller?)
Alternatively you can add a zener diode in series with the flywheel diode (but anode to anode) which allows the voltage to rise higher, and then dissipates Vz * I in the diode, while adding Vz to the over-voltage.
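The Zobel sizing rule from the answer (C = L/R², giving a decay time constant of L/R) is easy to sanity-check numerically; the coil values below are assumptions for illustration only:

```python
def zobel_cap(L, R):
    """Snubber capacitance per the answer's rule: choose C = L / R^2 so
    that R*C equals the coil's own time constant L/R (low-Q network)."""
    return L / R ** 2

def decay_tau(L, R):
    """Current-decay time constant of the damped coil, in seconds."""
    return L / R

L_coil = 10e-3   # 10 mH coil inductance (assumed example value)
R_coil = 5.0     # 5 ohm winding resistance (assumed example value)

C = zobel_cap(L_coil, R_coil)      # 400 uF
tau = decay_tau(L_coil, R_coil)    # 2 ms
```

Note how quickly C grows as the winding resistance shrinks; for very low-R coils the answer's suggestion of adding series R (at the cost of voltage headroom) keeps the capacitor practical.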
There is a missing component in this circuit and I can't figure out why. I'm designing a BA1404 Stereo FM transmitter and I found the following source: Source Link You can find the component list, circuit scheme and the other stuff on this page. But when I look at the scheme, I couldn't find the 10uH Inductor on it. There are some images of the assembled circuit board and you can see the 10uH Inductor on the board. I'm totally new at electrical circuits. Am I missing something in the scheme? By the way, I have seen the same circuit and similar assembled boards in some other sources on the internet. <Q> It's definitely on the board (the green component on the top right), but not in the schematic. <A> When in doubt look at the data sheet, and therein you can find a schematic that looks like this: From this <S> I reckon the missing inductor may be associated with the antenna tuning on pin 7. <S> As this is a transmitter, it's quite important to use a tuned circuit to remove unwanted harmonics that may spread RF "noise" (there are more appropriate words) into areas it shouldn't. <S> As usual I'd stick to the data sheet but if you want my best guess, it's in series with the 270R resistor and acts as an RF choke allowing the antenna signal to be a bit bigger <S> but I'd follow the data sheet. <A> I suspect the 10uH inductor is in series with the power input to filter the incoming power.
My guess, based on its positioning, is that it's part of the power supply filtering .
Voltage regulator (LM350) gives no current I am trying to use a Fairchild LM350 to get 8 volt output. I am using basic circuit (page 5 of datasheet) to do that. I get the voltage I want but whenever I try to measure the current (by putting the meter in series with a resistor) it gives me 0 A. I am giving it 8 volt and the resistor is 650kΩ. I tried smaller resistor also but still nothing. Why is this happening? <Q> There are a few possibilities: <S> The resistor is not pulling enough current for your meter to measure. <S> 8V across 650kΩ is 12.3μA, a fairly small amount. <S> You may have forgotten to move the meter's test probes to the appropriate ports on your meter. <S> Some meters require you to move one probe to another port to measure current, and often there may be more than one port: one for higher currents and one for lower currents. <S> You might be using a range that is too large, for example a 10A range, when a 200mA range would be more appropriate. <S> (Varies by meter.) <A> A voltage of 8V tied to a load of 650kΩ means you're trying to measure 12uA. <S> It's entirely likely that your meter can't measure less than 1mA accurately. <S> Which multimeter are you using? <S> Things to try in the meantime: Try 800Ω instead of 650kΩ for load and see if your meter picks up 10mA. With the meter in series with the 650kΩ load use a 2nd multimeter if available to ohm out the first meter. <S> If it reads 0Ω the fuse is not blown <A> Probably because you're connecting 8 volts to the input through a 650k resistor and expecting something to happen. <S> In the first place, if you expect to get 8 volts out of the output <S> you'll need to connect greater than 8 volts to the input, with no series resistor between the source and the regulator. <S> How much greater depends on your load current; read the data sheet. <A> So the problem was what Martin Petrei, Dan Laks and gbulmer suggested. <S> My fuse was bad. <S> I tried the LED and it was working. 
<S> Changed fuse and now I can read current just fine :)
Some meters may not be able to measure this small current.
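The arithmetic behind the "meter reads 0 A" answers above, as a quick check:

```python
# Ohm's law for the values in the question: 8 V across 650 kOhm is only
# ~12 uA, below the resolution of many multimeter current ranges.
def current_amps(volts, ohms):
    return volts / ohms

i_650k = current_amps(8.0, 650e3)   # ~12.3 uA: shows as "0 A" on many meters
i_800 = current_amps(8.0, 800.0)    # ~10 mA: easily measurable, per the answer
```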
DC Motor + 9 V battery + 270 ohm resistor = Nothing (Motor won't turn) I'm trying to wire up a DC motor to a 9-volt battery with a resistor in between, and the motor just won't turn. It works if I connect the motor directly to the battery, but not with the resistor in between. Why is this? Motor: 1.5-volt DC motor.* Resistor: 270 ohm. (Red-Violet-Brown-Gold) (Because it's what I had laying around) Battery: Normal 9-volt battery. * I'll try to find more specific details, like a datasheet, when I get home. Unfortunately, I don't have the motor with me while I'm typing this question. More specifically, I am trying to create something like this circuit: With the intention that, when the switch is flipped, the motor changes direction. However, when I make this circuit, the motor doesn't spin, so in troubleshooting, I reduced it down to a single resistor, the motor, and the battery, with no switch or direction-changing, but the motor still doesn't spin. <Q> As Ignacio said, use a DPDT switch in an H-Bridge configuration. <S> Also, you are just wasting power using resistors. <S> Change to a 1.5V battery <S> and you don't need a resistor. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> For a variation on your circuit, you could use two 1.5V batteries and stick with your SPDT switch: simulate this circuit <S> The resistors you are using are not only wasting power, but they are also limiting the current too much so that the motor will not turn. <S> If you must use a 9V battery, you could use a smaller value resistor per Saidoro's suggestion and put it in series with the motor. <S> Then either use the DPDT circuit above, or the SPDT circuit with two 9V batteries. <A> With a 270 ohm resistor, even if you took out R2, that motor can at max get 33 mA of current, at an even lower voltage due to the drop across the resistor. <S> Not a whole lot of motors can get by with just 33mA.
Even the little vibrating motor in a cell phone needs around 70 mA and up, and usually at 3 or more volts. <S> Give the motor some more juice. <A> The motor's datasheet probably lists its effective resistance, you'll want to hook it up in series with a resistor 5 times larger than whatever that is. <A> The circuit you posted is attempting to create a virtual ground. <S> (I know this because you said so in your comment!) <S> A summary of what the others are telling you is that the motor's resistance is much smaller than that of the resistors, and so it upsets the voltages. <S> But to follow your question further, there is a way to "amplify" a virtual ground using an op-amp. <S> A simple search on "virtual ground" will reveal some examples of how this is done. <S> But in your case the motor will still demand more current than the op-amp can supply, and so you would need to add a power amplifier (or a pair of transistors) following in order to make it work. <S> (You'll see examples of this, too). <S> There is one exception, that if your motor is tiny enough, it could work at low currents using just the op-amp. <S> This would be something on the order of the little vibrating motor from a cell phone. <S> You said it was just 1.5 volts, so I thought of this possibility. <S> You'd probably want a small resistor in series if the motor is 1.5v and the supply ends up being 3 or 4v. <S> The logical conclusion is to make use of methods suggested by the others. <S> But I thought it would be instructive to follow up on your original question.
The motor probably has a low resistance compared to the resistor, which would mean that it is not getting the 1.5 volts it needs to run while connected in series with it.
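The current-starvation point is simple to verify: even if the motor were a dead short, 270 Ω caps the current well below what small motors need (the 70 mA figure is the answer's phone-vibration-motor example):

```python
V_BATT = 9.0        # volts
R_SERIES = 270.0    # ohms

# Upper bound on motor current, treating the motor as a short circuit.
# Its real winding resistance and back-EMF only make this smaller.
i_max = V_BATT / R_SERIES            # ~33 mA

I_VIBRA_MOTOR = 0.070                # ~70 mA, small phone vibration motor
starved = i_max < I_VIBRA_MOTOR      # the resistor starves even a tiny motor
```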
Why does a headphone attenuator need separate resistance for each side instead of a shared resistor for both? I have a Shure headphone attenuator and opening it up it looks like there are separate pins on the pot for the left and right speakers (the pot has 5 pins, which I believe are left in/out, right in/out, and ground), like this: (I have no experience creating schematics and I'm not sure what I should be using to represent the input or how to represent the 5 pin pot, but I've tried to convey what I'm talking about) simulate this circuit – Schematic created using CircuitLab Why is the resistance applied separately to the left/right sides, instead of using a pot with 3 pins and applying between both speakers and the ground, like this? simulate this circuit My reason for wanting to understand this is that I want to modify a 4-pole TRRS headphone extender to create a fixed-resistance attenuator compatible with headphones with a remote/microphone and I'd like to understand the circuit on my existing attenuator. <Q> If you do as you say there will be interaction between the left and right channels. <S> Stereo systems go to great lengths to minimize crosstalk between channels. <S> Ideally there should be less than 1% of the signal in the left channel getting into the right channel and right to left (this is about -40dB). <A> It's easier to understand if you only drive one speaker at a time. <S> Let's assume the speaker amplifier's output resistances are zero (the ideal case). <S> Then the ground resistance (aka the common impedance) (R2) and the 8 ohm speaker (speakers aren't really 8 ohm resistors, but close enough for this purpose) will be a parallel resistor equal to something under 8 ohms. <S> That 8 ohms forms a nearly equal voltage divider with the left speaker so that half of the drive voltage for the left speaker appears across the right speaker, causing very bad crosstalk.
<S> As you lower the value of R2, the overall parallel combination is reduced and as it approaches zero, so does the amount of crosstalk. <S> In your schematic, the impedances are not common to the speakers' individual current flows, so they don't cause crosstalk. <S> This is similar to why star grounding (or ground planes) is used in power supplies: <S> minimizing the common impedance in the current return path. <A> It should be left signal in, right signal in. <S> (That's 2 pins.) Then there would be 2 pots (ganged). <S> So the left pot wiper and right pot wiper keep the signals separated and keep the stereo signals. <S> (That's 2 more pins.) <S> Then everything shares a ground. <S> (The fifth pin.) simulate this circuit – Schematic created using CircuitLab <S> So the actual pot has left and right IN, shared GND, and left and right OUTs.
Any signal in one channel will result in a voltage across the resistor that will then excite the other channel.
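A small model of the shared-ground problem described in the answers (ideal zero-impedance amplifier outputs, 8 Ω speakers treated as plain resistors, left channel driven alone):

```python
def parallel(a, b):
    return a * b / (a + b)

def crosstalk_fraction(r_shared, r_speaker=8.0):
    """Fraction of the left drive voltage that appears across the right
    speaker when both channels return through a shared resistance."""
    z = parallel(r_shared, r_speaker)      # right speaker || shared resistor
    return z / (r_speaker + z)             # divider against the left speaker

bad = crosstalk_fraction(8.0)     # shared 8 ohm: ~33% of the signal leaks
better = crosstalk_fraction(0.1)  # shared 0.1 ohm: ~1.2%, still short of the
                                  # ~1% (-40 dB) goal quoted in the answer
```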
How are crossing lines implemented on microchips? I always imagined the photolithographic microchip manufacturing to be a 2D layer creation process without layering, thus creating a topological problem for circuitry when you have some \$K_{3,3}\$ or \$K_5\$ in it, which would certainly be the case for any non-trivial design. And there are papers out there talking about producing "3D" chips with multiple layers to save space, thereby adding to the confusion. Yeah, that's sad, but that is what I learned in school, a bunch of mysterious riddles. It's no wonder people start conspiracy theories about aliens catering those technologies to us. So how can we build complex processors and chips just using a 2D topology? <Q> It turns out that there are layers, but people sometimes skip those when talking about how a microchip works. <S> The process that introduces layers is called back end of line, or BEOL. <S> It basically works like this: Create the 2D chip layer using photolithography. <S> Apply an insulating layer. Drill holes into that layer. <S> Apply a conducting layer, also filling the created holes, to create circuit paths or interconnects. <S> Repeat those steps as often as needed and as your manufacturing process and maybe other considerations such as thermal design allow. <A> There have always been at least two conductive layers on chips that can be used to route signals — the silicon itself and at least one metal layer. <S> Vias (holes) in the insulating silicon oxide layer allowed current to flow between the layers where needed. <S> Modern chips, especially high-density, high performance logic chips, have many layers of metal and oxide — 6 or 8 or more, similar to a multi-layer PCB. <A> Here is an SEM (scanning electron micrograph) showing a cross section across the width of a couple of transistors. <S> Labels on the right-hand side are function/position in the stack. <S> Labels on the left-hand side are materials.
<S> The black vertical structure connecting the gate to the 1st metal layer is called a contact. <S> It consists of a titanium seed layer, a TiN barrier layer and a tungsten plug. <S> Interlayer vias between M1, M2, M3 and M4 are not shown. <S> As a bonus, there is something very unusual about this structure. <S> Can anyone say what it is? <S> Reply in the comments.
In the earliest manufacturing processes that had only one layer of metal, "jumpers" that allow signals to cross could be created either by diffusing or implanting a conductive path into the bulk silicon, or by creating a path in the "poly" (polycrystalline silicon) layer that was used for the MOSFET gates in some processes.
How does a CPU choose a path? This is the most baffling question of all other concepts. I ask my teacher "How does the computer choose a path?" "They program it" "How do they program it?" "..." I have a basic understanding of how a transistor works, how the CPU handles things, and the latter, but how does the CPU physically choose a path!? I want to learn the college level stuff, but google is not helping!!! All I get are these novice translations! Please help because I'm crying over not knowing the answer. Literally. EDIT: I need a thorough answer explaining what is going on in the hardware please. <Q> What do you mean by saying "choose a path"? <S> All a modern CPU is, is a fancy hardware interpreter. <S> It starts like this: You issue a command in a high level language like: i = 5 + 6; <S> This gets translated to machine instructions (and pseudo instructions), commonly known as assembly, by the compiler: mov ebx, 5; mov eax, 6; add eax, ebx <S> This gets translated to bytes, the direct equivalent of assembly that the CPU can understand, by another class of translators known as assemblers: 01010111 00110101 00000101; 01010111 00110111 00000110; 01111111 00110101 00110111 <S> The control unit of the CPU then reads the first byte of the instruction, roughly known as the opcode. <S> For instance, in the above example, 01010111, or what you know as the mov instruction, might signal passthrough of the immediate value 5 through the ALU to the register known as 00110101, or ebx. <S> The internals of the interpreter, or the control store, or however it's called in different implementations, differ from implementation to implementation. <S> For example, in a microcode-storing implementation, there might be a small memory mapping opcodes to corresponding signals, like so: 01010111 -> enable_ALU_passthrough, reg, immediate_val; 01111111 -> enable_ALU_add, reg, reg <S> That's a rough example of how a CPU might work. <S> Values are approximate and almost certainly are not correct.
<S> If you want to dig through, I recommend the excellent Structured Computer Organization by Tanenbaum, which walks you through building a simple CPU interpreting Java bytecode in the first few chapters. <A> I'll make a brief attempt to explain the datapath implementation, since it is a large topic. <S> CONTROL WORD: <S> The control word is basically the input code (you can say, the master code) <S> which controls what operation the computer will perform. <S> A general control word will consist of an opcode, specifying a particular operation, like add or shift, followed by a few parameters like the location of the operands or the operand itself, etc. <S> In this figure, the control word won't be directly visible, so I have added another figure. <S> Be careful, the second figure is not directly related to the first one. <S> Here a simple control word is shown. <S> --> DA stands for Destination and specifies the location where the result of computation will be stored. <S> --> AA and BA specify the location of operands A and B. <S> --> MB, MD are the Mux B and Mux D <S> enable inputs (more on that later). <S> --> FS is the function select, and specifies what function the unit will perform. <S> Now back to figure 1. <S> --> A select and B select inputs are applied to Mux A and Mux B, which select the data inputs from the registers R0 through R3. <S> --> The input B is then passed to Mux B, to decide whether it is needed or not, because some operations only require a single operand, like shift and increment. <S> --> The A input and the output of Mux B (which consists of either input B or a constant, as seen in the figure) is then applied to the ALU. <S> Note that the B input is also applied to the shifter. <S> --> The opcode or function select determines what operation it will be. <S> At this point, the output of both the shifter and the ALU is applied to Mux F, which selects whether it is the output of the ALU or the shifter that is needed.
<S> The Mux F select may be part of the opcode. <S> --> Finally, the result passes through Mux D, and then it is applied to each of the registers for storage purposes. <S> Which register to store in is decided by the AND gates which enable the load operation, with the address of the registers applied via a decoder. <S> I hope this explains it. <A> Ken Shirriff has a number of blog entries which take apart microprocessors of the 80s in loving detail: http://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-how-it.html http://www.righto.com/2013/01/a-small-part-of-6502-chip-explained.html <S> http://www.righto.com/2014/09/why-z-80s-data-pins-are-scrambled.html (etc). <S> The 6502 is a good subject for this as it's one of the last microprocessors to be designed by a single engineer, by hand, on paper (well, acetate sheet with "Rubylith" tape). <S> There's also "NAND to Tetris": http://www.nand2tetris.org/ <A> If you have a single input signal which the MCU/CPU wants to route to different locations, as shown in "DEMUX" below, then the MCU forcing A/Bbar high will result in IN going to A, and by forcing A/Bbar low, IN will be sent to B. <S> In the MUX, if A/Bbar is forced high, A will be sent to C, and if A/Bbar is forced low, B will be sent to C. <S> So, you can see that in one case a signal can be sent to many locations sequentially, and, in the other, many signals can be sent, sequentially, to a single location.
Depending on the opcode, the control unit asserts control signals to different parts of the cpu, such as enabling one register to send its contents to the ALU, or reading from the instruction store to a register, etc, depending on the instruction.
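The "fancy hardware interpreter" idea from the first answer can be caricatured in a few lines of Python, with a dictionary standing in for the control store. The opcodes and register encodings are the answer's invented examples, not real x86:

```python
regs = {"eax": 0, "ebx": 0}

def mov_imm(reg, val):
    """Pass an immediate value straight through to a register."""
    regs[reg] = val

def add_reg(dst, src):
    """ALU add of two registers, result written back into dst."""
    regs[dst] = regs[dst] + regs[src]

CONTROL_STORE = {          # opcode byte -> micro-operation it triggers
    0b01010111: mov_imm,   # "mov"
    0b01111111: add_reg,   # "add"
}

def execute(program):
    """Fetch-decode-execute loop: look up each opcode, fire its micro-op."""
    for opcode, *operands in program:
        CONTROL_STORE[opcode](*operands)

# mov ebx, 5 ; mov eax, 6 ; add eax, ebx
execute([(0b01010111, "ebx", 5),
         (0b01010111, "eax", 6),
         (0b01111111, "eax", "ebx")])
# regs["eax"] is now 11
```

In real hardware the "dictionary lookup" is done by decode logic or a microcode ROM, and the "functions" are control lines enabling register file ports and ALU operations, but the control flow is the same shape.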
Why do microwave ovens, with metal walls, not blow up? Why does a microwave oven with metal (?) walls work fine, but if I (theoretically) put a metal spoon in it, "bad things" may happen? Maybe these internal walls are not conductive? <Q> Metal in a microwave is really not a big problem. <S> The walls of every microwave ever made are metal, the window contains metal mesh, mine has a metal shelf and a metal base for the turntable. <S> The catch is that metal inside the oven has to be of the right shape, size and position, or it will really do unpleasant things like arc and get dangerously hot. <S> The rules are complex and as the average microwave oven owner doesn't have a post-graduate degree in physics with at least a minor in high-energy radio <S> it's just easier to say "no metal." <S> People who really do know better will also know that they can ignore the note on the box, but the lawyers can point to the note on the box after your attempt to home-sinter aluminum powder burns the kitchen down. <A> The metal walls of the microwave oven reflect the microwave radiation. <S> A metal object in the middle of the microwave field can do several things. <S> It could reflect the radiation like the walls do. <S> That's bad if there is nothing else in the oven to eventually absorb the radiation. <S> All that microwave power ultimately has to end up somewhere. <S> It's better for the oven if it ends up heating your food. <S> Even if there are absorbers in the field, the reflections will make the field uneven, creating hot spots and cold spots. <S> A metal object could absorb some of the radiation itself, depending on the impedance of the material at microwave frequencies. <S> That would heat the metal object, which could possibly heat it higher than the temperature the floor of the oven is intended to handle. <S> Depending on the size and arrangement of metal objects, they can act as antennas and generate significant potential. <S> I have seen small metal objects arc to each other. <S> All in all, your microwave oven is intended to heat water molecules.
<S> Any deviation from that will make it less efficient, and possibly cause problems to the oven, depending on how much expense was put into protecting itself. <S> These are high volume consumer items, so I don't have much confidence that quality was a high design priority. <S> They probably did the absolute minimum they felt was necessary. <A> When they say "don't put metal objects in a microwave" what they really mean is "don't put the food in a metal container. <S> " Obviously the container will reflect the microwaves and <S> the food won't cook. <S> Now here's the problem. <S> If the energy is not going into the food, it has to go somewhere . <S> In general you should not operate a microwave without food in it, and equally you should not operate a microwave with food in a metal container. <S> Someone <S> I shared a kitchen with once tried to cook a burrito wrapped in aluminium foil in a microwave. <S> The burrito was unable to absorb the energy as it was shielded inside a very effective Faraday cage. <S> * <S> As a result, the plastic lining of the microwave was the next thing around that could absorb the energy, and it melted. <S> Despite the damage, the microwave remained functional. <S> It's certainly possible that a metal object could act as an antenna and generate sparks, but I wasn't present when the event happened, so I cannot say if there were any. <S> I can say there was no localised charring as you might expect if there had been sparks. <S> *if you're not familiar with the term Faraday cage, it's just a posh name for the (theoretically) perfect shielding an object gets from being fully surrounded by a conductor. <S> http://en.wikipedia.org/wiki/Faraday_cage <A> Metal in a microwave is not inherently a bad thing. <S> As others have mentioned, metal alone in the microwave isn't very good as there is no load to absorb the energy. 
<S> However, many microwaves do come with a metal rack, and if you examine the rack, it doesn't have any pointy edges which would be rather efficient arc sources. <S> Other metal items, especially alone, can also be arc sources. <S> However, many microwave manuals, in their sections on cooking and such, suggest that when defrosting meat you put small pieces of aluminum foil on the corners of the package so that the corners don't get cooked during the defrost process. <S> Now, you should try to make the foil as smooth as possible to avoid creating points for arcs to originate from, but even if it is a little crumpled it is okay, due to the much larger load of the meat needing to be defrosted absorbing the vast majority of the energy. <A> Yes, it's all about avoiding standing waves and directing the energy usefully. <S> The EM waves from the magnetron normally warm up the foodstuffs they can reach and that loading keeps the energy density in the cavity down. <S> If the EM cannot reach the food because of metallic packaging, the energy density throughout the coupled cavity/waveguide/magnetron rises until the cavity losses and the energy dissipated in the waveguide and magnetron anode/cathode balance out the energy generated. <S> This results in very high potentials at the various antinodes in the cavity (which can cause arcing), and excessive heat generation in the magnetron (which can reduce its lifetime/reliability). <S> All that power has to go somewhere:)
The general guideline of "do not put metal objects into a microwave" does make sense - metal in the oven has to have a certain shape, size, alloy, distance from other pieces etc.
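A quick back-of-the-envelope sketch of why the door mesh behaves like a solid metal wall. The 2.45 GHz figure is the usual consumer-oven magnetron frequency and the mesh hole size is an assumed typical value; neither number comes from the thread itself:

```python
import math

# Free-space wavelength at a typical microwave oven frequency.
C = 299_792_458.0          # speed of light, m/s
F_MAGNETRON = 2.45e9       # assumed: the usual consumer-oven frequency, Hz

wavelength_mm = C / F_MAGNETRON * 1000   # roughly 122 mm

# The holes in the door mesh are a few mm across, tiny compared with the
# ~122 mm wavelength, so the mesh reflects microwaves essentially as well
# as the solid metal walls do, while letting much-shorter-wavelength
# visible light pass through so you can watch your food.
mesh_hole_mm = 2.0                        # assumed, for comparison
ratio = wavelength_mm / mesh_hole_mm      # wavelength is ~60x the hole size
```

The same reasoning explains why a spoon-sized object interacts strongly: it is a sizeable fraction of a wavelength and can act as an antenna, as the answers above describe.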
What do ripples in the frequency response curve of filters depict? I am trying to understand the frequency response curve of various types of filters (Butterworth, Chebyshev etc). The curves are shown here for reference : One thing I do not understand is what the ripples in the passband show. The curve is clearly Gain vs Frequency. So all the ripples show is that the gain of the filter varies slightly with the frequencies in the passband. How is that supposed to create a problem? We will only get an output which is varying in amplitude. It would have created a problem had we been getting a distorted output, which was possible only if the filters introduced a distortion, and I don't think it has anything to do with gain at different frequencies. Based on the above assumption, why would one not simply opt for a filter with the steepest rolloff (elliptic in the figure), without worrying about the ripples in the passband? Edit : It seems I am not able to properly express my doubt. Here is another attempt : Many articles on filter design mention "Butterworth response is maximally flat, while others like Chebyshev and elliptic have ripples". My query is what does this "maximal flatness", or the presence/absence of ripples, have to do (if at all) with the purity of the applied signal. Purity in the sense that I apply a signal of a particular frequency, and I get an exact replica back. Will the situation be different for different filter types, i.e., will I get some spread-out or misshapen waveform if the filter response has ripples? If that is the case, then how can this be inferred from the frequency response curve alone, because frequency response curves only show that the gain of the filter varies with frequency; they say nothing about what the shape of the wave will become whether or not the curve has ripples.
My doubt arises because the texts generally differentiate between various filter responses by citing something like "Chebyshev response differs from Butterworth because it has ripples in the passband". Additionally, if all of the above is not true, i.e., ripples bear no relation to altering the shape of the input, then what do they signify? (One of the users made an attempt at that. If possible, please extend or elaborate a little.) I am talking of only a simple situation with just one input (let alone many inputs). Maybe someone is kind enough to point me to some resources which show the response of these filters to a single sine input. Thank You <Q> Bode plots show both gain and phase for good reason. <S> Looking at only the gain response without considering phase response, you're missing an important part of the system performance. <S> Butterworth gives the least gain ripple in passband and stopband, and has lowest phase distortion / group delay -- though higher-order filtering is needed to achieve a decent cutoff slope. <S> If your application cares about group delay or phase shift, then Butterworth gives the least distortion. <S> Unfortunately, to achieve both zero passband ripple and a very steep cutoff transition at the same time, a very high-order Butterworth filter would be required. <S> Near-ideal performance usually has a high cost -- in this case, a higher-order filter requires more components and thus more money and board layout space. <S> Chebyshev or elliptic improves the cutoff transition, making a very steep cutoff for comparable order. <S> Higher-order filters usually require more components, so this translates directly to saving money and board layout space. <S> However the real cost is that these types of filters require accepting some level of ripple in the passband and stopband (and the phase response is not so linear as Butterworth). <S> There is some design flexibility, you can trade off how much gain ripple is acceptable.
<S> Passband ripple does indicate that there will be some level of distortion in the signal -- the question is whether the level of distortion is acceptable. <S> If maximum passband ripple is 0.1dB and signal-to-noise ratio is good, the ripple may not make a difference. <S> But if zero passband ripple is required, then you need a Butterworth filter. <S> In an application requiring least distortion, Butterworth wins. <S> In an application requiring low component count but where neither group delay nor passband ripple is important, then Chebyshev or elliptic wins. <S> It all depends on what signal characteristics are important. <A> will I get some spread out or mis-shaped waveform if the filter response has ripples? <S> Yes, the waveform will be mis-shaped if the filter response has ripples. <S> The amount of distortion depends on the amount of ripple in the frequency response. <S> See the image below which shows the distortion in the waveform. <S> The plot shows output waveforms for 3 different filters with differing amounts of ripple in the passband. <S> You can clearly see that the blue signal is significantly different from the other two. <S> Filter design involves a tradeoff between the number of components and flatness of the response. <S> In general the list below gives filters in decreasing order of component count (and decreasing flatness of frequency response) for some given frequency response specifications: <S> Butterworth filter, Chebyshev filter, Elliptic filter <S> Note: <S> The outputs shown in the figure above are artificial. <S> In general you would never use a filter with a frequency response like the blue one for lowpass filtering. <A> If the ripples are too big and I'm using the filter for an audio application <S> I'll probably hear the shape of those ripples in the music so yes, mainly they are undesirable.
<S> The ripples do usually show something - they indicate to me that the higher/steeper filters are probably constructed physically (and mathematically) from a series of 2nd order filters.
The ripples in the pass band are typically an unwanted side-effect of producing a higher order filter that has a steep roll-off.
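The "maximally flat vs. ripple" distinction described above can be seen concretely with the standard closed-form magnitude responses. This is a small stdlib-only Python sketch; the order (4) and the 1 dB ripple figure are arbitrary example values, not from the question:

```python
import math

def butterworth_mag(w, wc=1.0, n=4):
    """|H(jw)| of an n-th order Butterworth lowpass, cutoff wc."""
    return 1.0 / math.sqrt(1.0 + (w / wc) ** (2 * n))

def chebyshev_poly(n, x):
    """Chebyshev polynomial T_n(x) for any real x."""
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    return math.cosh(n * math.acosh(abs(x))) * (1 if x >= 0 or n % 2 == 0 else -1)

def chebyshev1_mag(w, wc=1.0, n=4, ripple_db=1.0):
    """|H(jw)| of an n-th order Chebyshev type-I lowpass with the given
    allowed passband ripple in dB."""
    eps2 = 10 ** (ripple_db / 10) - 1
    return 1.0 / math.sqrt(1.0 + eps2 * chebyshev_poly(n, w / wc) ** 2)

# Sweep the passband (0 up to the cutoff).
ws = [i / 1000 for i in range(1000)]
mag_b = [butterworth_mag(w) for w in ws]
mag_c = [chebyshev1_mag(w) for w in ws]

# Butterworth gain falls monotonically ("maximally flat" at DC);
# Chebyshev oscillates between 1 and the ripple floor.
butter_monotonic = all(x >= y for x, y in zip(mag_b, mag_b[1:]))
cheby_monotonic = all(x >= y for x, y in zip(mag_c, mag_c[1:]))
```

The Chebyshev gain dips to exactly the specified ripple floor (10^(-1/20), about 0.89, for 1 dB) at several points inside the passband, which is precisely the gain variation the ripples in the plot depict.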
Mounting a PCB with velcro I need a lo-fi way of mounting a PCB (Tessel plus RFID card) under a table. The easiest method by far would be velcro but I'm worried about static discharge frying my circuitry. Should I be? <Q> But a big problem with Velcro is that it does not last very long. <S> The self-adhesive side will stick OK to begin with, but will dry out and let go in a comparatively short time. <S> I've seen this happen on a regular basis with electronic modules fixed to housings with Velcro. <S> They all came back with the modules dropped off but the Velcro-to-Velcro interface intact. <S> The Velcro had just come unglued from the inside of the housing. <A> Velcro-type material (hook and loop fastener) is available with many different adhesives. <S> The stuff that we use is from a company called Aplix and it comes with a VERY high strength adhesive. <S> The part numbers we use are Hook: A800R0107H000-R and Loop: <S> A800R0107L000-R <S> This particular material is white and is 4" (100mm) wide. <S> We use it because it is the only material permitted by Riddell for use in the football helmets they manufacture for Professional Football players. <S> The adhesive used is from a class called VHB (Very High Bond) and is similar to the stuff used to build, among other things, passenger aircraft. <S> This stuff is that good and that strong. <S> And, yes. <S> We have used it to mount circuit boards to various objects and surfaces. <S> Just be sure to trim <S> the through-hole leads on the bottom of the PCB so that they don't poke through the material. <A> Possibly use a conductive hook & loop material, (beware - the V word is trade marked). <S> With this material static dissipates quickly, if any is generated at all. <S> A bit more pricey <S> but it does exist. <S> See: http://www.lessemf.com/fabric.html#207 . <S> http://www.adafruit.com/product/1324 <S> If that doesn't seem well enough use a few simple snap together buttons. 
<S> One side riveted into the PCB, the other side screwed to the table surface. <S> http://www.artfire.com/ext/shop/gallery_item/DesireMeNow/104098 <S> http://www.coversuperstore.com/Snaps-Grommets-Hardware/ <A> Usually, for a properly designed fully assembled board, ESD isn't really a problem. <S> And if I were you, unless the desk is made of unsuitable material, I would just bolt the board to the desk with a few screws. <S> All my boards have mounting holes for this.
It may be a problem if you frequently rip the Velcro off, as you may cause the friction to generate a charge. The Velcro is probably no worse than any other plastic (nylon) that is close to your circuit, as far as ESD is concerned.
Square Wave input into Transformer Can a square wave generated using an astable NE555 be used as the input of a transformer instead of AC? Say I generate a square wave of 60Hz using an NE555 and a 24V at 6A supply and feed it into a transformer with a 50:1 turns ratio, will I get 0.48V at 300A out? <Q> In practice, if you fed a 60Hz "power" square wave to a transformer, the higher order harmonics in the square wave would mean that a regular AC power transformer wouldn't be as efficient as being fed with a sine wave. <S> A 24V 6A supply is capable of providing a power of 144 watts and your output requirement is also 144 watts, <S> so you need to be aware that you might only get 90% efficiency and expecting 144 watts from the transformer output is a little naive. <A> For every action there is a reaction, and think how a transformer can buzz with AC. <S> Now make it a square wave <S> and you could hammer it around a bit, even more so than with sinusoidal AC. <S> So, it depends on your frequency a bit, too, I'd say, whether you set up a noticeable (or worse) mechanical standing wave in your transformer itself. <S> However, depending on the size and robust quality of the transformer it should be OK to do at your power level, at least. <S> On the other hand, will you get a square wave out? <S> I think it won't be "as square" as the input. <S> The windings (and their proximity to the plates, if any) also create a changing inductive environment, so it may goof with the shape of the electrical wave(s) a bit. <S> And the fact that the transformer will have those mechanical waves (manifested as vibration or sound), assumedly worsening at certain frequencies, leads me to believe that it will not pass all frequencies at the same efficiency, thereby further distorting the "squareness" of the wave. <S> That may be saying the same thing twice in two different ways, but maybe at least for more than one reason. <S> It may be, though, that the two are just manifestations of the same condition.
<S> Not sure at the moment. <S> Either way, it's not going to be a perfect representation of the input. <S> Also, I wonder if you are going to get EMF (or reverse EMF) effects at the points of voltage reversal, which may grossly distort the output. <A> I don't see why it would harm anything other than your efficiency, and the fact that it may saturate if the core isn't rated for the frequency and pulse width you provide until it is tuned to function. <S> All car audio amplifiers pretty much use pulse width modulated power supplies and they could use tiny to massive transformers in the audio section. <S> It's the circuitry you use along with it that will aid its output: smoothing capacitors generally change the square wave to a sine wave under load, <S> so I think you'll be just fine <S> ; however, ensure you will not be blocking lower frequencies with the cap, as a cap in series is a crossover. <S> Measure the impedance of the circuit too. <S> Try adding a capacitor to your load: fire up a 100 ohm resistor with a capacitor of say 47uF on your fgen with a square wave of the same frequency you are thinking of using, then scope its output to see what happens. <S> I think you'll be pleasantly surprised. <S> Square wave inverters have been known to work much better than the choppy, spiky and really crappy modified sine wave inverters. <S> In fact, I've bought several of those clunkers only to find out that the oscillator circuit could easily be swapped with a true sine wave oscillator board in some of the most standard brands out there. <S> If you look in the circuit for the inverter, you'll see a PCB that normally plugs in vertically <S> and they are pretty standard in China and on Ebay. <S> 99% of the inverter and amplifier manufacturers produce these units for all of the manufacturers worldwide, so they make it easy on themselves <S> so swaps can be made in times of demand.
<S> If you are trying to invert DC to AC using this method, it's simple to change and buy, if you pull the oscillator out of the device before ordering! <A> This is exactly what happens in the ignition system of older cars with points. <S> The make-and-break of the points provided a 12 V square wave to the ignition coil (a transformer), which then output several thousand volts. <S> A similar method was also used in "the olden days" to produce high voltage AC from a battery, such as in spark-gap radio transmitters.
You'll need to couple the square wave to the transformer through a capacitor, because a standing DC voltage will just cause heat, but there is no problem in principle with feeding a transformer a square wave.
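For the numbers in the question, the ideal-transformer arithmetic and the harmonic content of a square wave can be sketched in a few lines of Python. A lossless transformer is assumed here, which the answers above point out is optimistic:

```python
import math

V_IN, I_IN, TURNS_RATIO = 24.0, 6.0, 50.0

# Ideal (lossless) transformer: voltage divides by the turns ratio,
# current multiplies by it, power in equals power out.
v_out = V_IN / TURNS_RATIO    # 0.48 V
i_out = I_IN * TURNS_RATIO    # 300 A
p_in = V_IN * I_IN            # 144 W in == 144 W out only if lossless

# A square wave is a sum of odd harmonics with amplitude 4/(pi*n).
# That is why a transformer designed for 60 Hz sine waves is less
# efficient on it: a chunk of the power rides on higher frequencies.
def power_fraction_up_to(n_max):
    """Fraction of a unit-amplitude square wave's power carried by
    harmonics 1..n_max (the square wave's total power is 1)."""
    return sum((4 / (math.pi * n)) ** 2 / 2 for n in range(1, n_max + 1, 2))

fundamental = power_fraction_up_to(1)   # ~0.81: only ~81% sits at 60 Hz
```

So even before core and copper losses, roughly a fifth of the drive power is in the 180 Hz-and-up harmonics, which the transformer handles less gracefully than the fundamental.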
How and why is a magnetic field transmitted with alternating current? I apologize for the basic question (or if it is incorrect in its assumptions,) but how does one account for magnetism in alternating current? If I generate alternating current to a motor, why do my coil poles become magnetized, and why does the magnetic field rotate? Looking at a graph of an AC waveform, can the magnetic properties be visually determined? <Q> Firstly, a point about the title of your question. <S> You initially ask why it is that a magnetic field is "transmitted" with alternating current. <S> In terms of transmission, electromagnetic radiation only occurs where a circuit has (or becomes) an antenna. <S> Otherwise, only an electromagnetic field is generated around the conductors. <S> This is called a "near-field" and it is not 'transmitted'. <S> This current can be D.C. too - <S> the phenomenon is not limited to A.C. current. <S> The direction of the magnetic field depends upon the direction of current flow. <S> Therefore, if you connect a D.C. source a magnetic field is generated with a constant polarity. <S> If you connect an A.C. source the magnetic field alternates in polarity. <S> It is interesting to note that a magnetic field is only measured or observed when a charge moves relative to you as an observer. <S> It is a relative property! <S> If you were able to move down the wire with the charge (or alongside the wire at the same speed as the charge), so that the charge is no longer moving relative to you, you would see that the magnetic field disappears completely! The electric and magnetic fields are related in a peculiar way, similar in some sense to space and time, in that they are relative properties and depend entirely upon the observer. <S> Here's a really good book which I think you would enjoy: <S> http://books.google.co.uk/books/about/Electromagnetics_Explained.html?id=MLzPNpJQz9UC&redir_esc=y <A> Magnetic fields are generated by electric currents.
<S> Alternating currents will produce alternating magnetic fields, so the waveform of the current really only tells you how the field strength of the magnetic field will vary with time; it says nothing about rotation without more information. <S> There are several different types of motors. <S> I am not really an expert on motors, but most industrial motors are 3 phase AC motors. <S> This means they are powered by three different AC waveforms of the same frequency that are 120 degrees out of phase. <S> Basically, this means that when one phase is at a maximum, one of the others is increasing and the other is decreasing. <S> If you build 3 electromagnets and arrange them radially about a point, then you can create a rotating magnetic field in between them. <S> If you then stick a permanent magnet in there, it will spin at 3600 RPM (1 revolution per cycle, assuming 60 Hz AC). <S> Other types of motors use the magnetic fields to create eddy currents. <S> The eddy currents create opposing magnetic fields which interact with the externally applied field to create a torque on the shaft. <S> It's not possible to create eddy currents without a time-varying magnetic field, so these motors require AC to run. <S> DC motors use a commutator to generate AC internally. <S> The rotor contains coils of wire which are connected to the commutator so that the coils generate magnetic fields of different polarities depending on the shaft angle. <S> The time-varying magnetic fields produced by the spinning rotor are precisely timed to interact with the stator magnets to generate torque. <S> In this case, the timing is determined mechanically by the commutator and not electrically by the applied waveform. <A> For the first question: a changing current has a varying electric field, and since the electric field is varying, it generates a magnetic field. <S> This is basically best understood if you review the Faraday and Ampere-Maxwell laws of electromagnetics.
<S> Here is a link for how the motor works; hope this helps. <S> http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/indmot.html#c1
With regard to magnetic fields, a magnetic field is generated when a current flows through a wire.
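The "three electromagnets 120 degrees apart" construction from the answer above can be checked numerically. A small Python sketch with unit coil currents at 60 Hz (amplitudes and sample count are arbitrary):

```python
import math

OMEGA = 2 * math.pi * 60            # 60 Hz supply

def rotating_field(t):
    """Net field vector from three coils spaced 120 degrees apart,
    each driven by a unit-amplitude current shifted by 120 degrees."""
    bx = by = 0.0
    for k in range(3):
        phase = 2 * math.pi * k / 3
        current = math.cos(OMEGA * t - phase)   # phase-shifted drive
        bx += current * math.cos(phase)         # along this coil's axis
        by += current * math.sin(phase)
    return bx, by

# Sample one full cycle: the magnitude stays constant while the
# direction advances -- a rotating field of constant strength.
mags, angles = [], []
for i in range(100):
    t = i / (100 * 60.0)
    bx, by = rotating_field(t)
    mags.append(math.hypot(bx, by))
    angles.append(math.atan2(by, bx))
```

The resultant magnitude comes out as a constant 3/2 of one coil's peak field, and the field angle tracks the supply phase, one revolution per cycle, which is exactly the 3600 RPM (at 60 Hz) figure quoted above.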
Why do op-amps have labeled terminals? For some types of op-amp circuits, the inverting terminal is placed on top, while on others it's at the bottom (i.e. inverting vs non-inverting amplifier). I don't understand the convention - how do you decide what to label each terminal? Isn't either label valid since both terminals are equivalent? <Q> An individual op-amp will have one non-inverting input (usually denoted with a + symbol) and one inverting input (usually denoted with a - ). <S> They are very much not equivalent. <S> As their description makes apparent, one inverts its input value, and the other does not. <S> Now, with regard to the drawn symbol, which is on top is generally a function of what will make the schematic clearer and/or easier to draw. <S> You decide what to label each terminal based on the part datasheet, which will tell you which physical pin maps to what function of the device. <S> Fundamentally, an op-amp's output is the difference between the input pins, multiplied by the op-amp's gain (ignoring the non-idealities of op-amps for the moment): \$V_{OUT} = GAIN * (V_{+} - V_{-})\$. <S> Now, let's take the two simplest possible configurations. <S> - input is grounded (e.g. 0V), input signal is on the + input: <S> Circuit behaviour is \$V_{OUT} = GAIN * (V_{+} - 0)\$, which simplifies to \$V_{OUT} = GAIN * V_{+}\$. <S> + input is grounded (e.g. 0V), input signal is on the - input: <S> Circuit behaviour is \$V_{OUT} = GAIN * (0 - V_{-})\$, which simplifies to \$V_{OUT} = -GAIN * V_{-}\$. <S> In the latter case the output follows \$-V_{-}\$, which has the effect of inverting the input voltage on the - pin about the voltage at the + pin (in this case ground, so the term simplifies out). <S> The fact that the sign of the input is inverted is why the inverting input is called the inverting input \$-\$.
<S> I don't know how you're analyzing an op-amp circuit such that you are not seeing a difference when you swap the inputs, but you're apparently doing it wrong. <A> They are not equivalent because the op amp has to know WHICH WAY to adjust the output if the pins are not sitting at the same voltage. <S> If you switch them, then the op amp output will simply zip over to one rail or the other and stay put. <S> Op amps are generally designed with a negative feedback loop. <S> In this case, the output of the op amp is fed back around, modified in some way, and then subtracted from the input. <S> If the result is negative, the op amp lowers its output voltage. <S> If you swap the inputs, the feedback will be positive instead of negative and the op amp output will be driven to one rail or the other. <S> This can be useful when you want hysteresis (e.g. a Schmitt trigger). <A> The terminals are not equivalent. <S> You are using one of the rules of op amps incorrectly. <S> One of the rules is that the inputs (inverting and non-inverting) sit at the same voltage. <S> That is only true if feedback is used. <S> The op amp will look at its inputs and then adjust the output pin accordingly such that the two terminals are at the same voltage. <S> If no feedback is used, it will swing the output voltage to the rails as it tries to "balance" the inputs, <S> but it never will. <A> A differential amplifier subtracts two signals. <S> Subtraction is a non-commutative operation. <S> You cannot swap the operands (or signals) and have the same result. <S> \$4 - 1 \neq 1 - 4\$ Commutative property in Wikipedia
They're both valid ways to draw an op-amp, though the circuit has to accommodate which connection is where.
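The "which way to adjust" point from the answers above can be demonstrated with a toy numerical model. This is a hypothetical unity-feedback follower; the gain, rail voltages, step count, and settling constant are all made-up illustration values, not a real op-amp model:

```python
def settle(v_sig, swapped=False, gain=1e5, rails=(-15.0, 15.0), steps=2000):
    """Toy op-amp follower: the output slews toward gain * (V+ - V-),
    clipped to the supply rails.  Normally the output feeds the
    inverting input (negative feedback); 'swapped' feeds it to the
    non-inverting input instead (positive feedback)."""
    v_out = 0.0
    for _ in range(steps):
        if swapped:
            diff = v_out - v_sig          # positive feedback
        else:
            diff = v_sig - v_out          # negative feedback
        target = max(rails[0], min(rails[1], gain * diff))
        v_out += 0.01 * (target - v_out)  # crude first-order settling
    return v_out

follower = settle(1.0)              # hovers near the 1.0 V input
railed = settle(1.0, swapped=True)  # slams into a supply rail and stays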
Waterproofing a GPS antenna I need to use this GPS antenna on a model boat. As a result of placement, it will be submerged periodically (though I don't expect it to work while submerged), and damp often. What can I encase it in that won't degrade its performance while dry? I've considered simple two-part epoxy (the translucent kind), but worry about signal attenuation. <Q> There are dozens of waterproof GPS antennas that have elements such as you indicate inside. <S> Most have an LNA amplifier stage as well to compensate for the cable losses. <S> A decade ago they were under US$ 5.00 each in quantities of 50. <S> When encasing your antenna I would keep a few mm of space between the antenna and adjacent dielectric parts to prevent detuning. <S> I would also keep all metal away from above and adjacent to the antenna for at least 10mm. <S> The thickness of the radome on these small commercial ceramic patch antennas is typically 0.8 to 1.2mm and may even be required to get the correct frequency when installed. <S> I would go with an already-encased unit unless there is a compelling reason to embed it into a covert location. <S> If hiding it, I would put a small dome over it to give it an air space and then cover it with fibreglass and polyester resin. <A> You can use any enclosure but metal, which blocks the RF signal. <A> If you want to encase something with a wire against submersion you need "soft contact" around the wire. <S> Something springy. <S> When choosing a default IP67+ box you should make sure there's a nice place to have the wire go out past a piece of rubber pushing all around the wire. <S> When choosing an encasing compound, you should try to avoid using only hard and shrinking materials such as epoxy and polyester. <S> If it isn't a huge block you're making, the attenuation should be minimal. <S> Best might be to make it a 3mm to 5mm layer of tough rubber or such.
<S> I have not actually researched the chemical make-up of my two-part silicone, or of epoxy and polyester, so I cannot say how much the chemicals will interfere with metals or electronics; as such, my advice is to use it only over a plastic box that is not already IP67+ rated. <S> If it hasn't got an outer box, first find out what the chemicals might do.
For safety, use IP-rated enclosures with a second digit of at least 7 for waterproofing (liquid ingress). If the antenna has no electrical bits reachable to you, you could try a layer of two-part silicone rubber, or some similar rubber-like compound.
How can a sensor apply a voltage? (from http://www.adafruit.com/blog/wp-content/uploads/2009/06/tmp36pinout.jpg ) Have a look at this picture. It's a simple heat sensor. Though it looks like it's really simple, I don't fully understand how the middle pin works. How can a voltage be applied? I thought it was merely an abstract concept for the difference in potential electrical energy. And how does it work for this heat sensor in particular? <Q> Here's a link to the spec sheet. <S> I'd call it a temperature sensor. <S> (heat has units of energy.) <S> You have to apply power so the internal circuitry provides an output voltage. <S> It looks to be a silicon band gap temp sensor described here <A> How can a voltage be applied? <S> I thought it was merely an abstract concept for the difference in potential electrical energy. <S> Your thinking has gone wrong here, somewhere around "abstract concept". <S> "Potential", "electrical", and indeed "energy" are all abstractions. <S> We reason with abstractions because they are much easier to reason with than trying to re-derive everything from first principles or observations every time. <S> Voltage is a measurable property of electric fields. <S> I can't work out what you mean by "apply current instead of voltage", especially since you seem to think that current isn't an abstract concept in the same way that voltage is. <S> Generally by "apply a voltage" <S> we mean "connect to some voltage source". <A> http://www.analog.com/static/imported-files/data_sheets/TMP35_36_37.pdf <S> -- see page 8. <S> The simple sensor actually has about 10 transistors on it, carefully constructed so that the VBE (a transistor parameter) of two of the transistors differ in a way that is largely dependent on temperature. <S> The difference causes some current across some resistors. <A> You seem to have a fundamental mis-understanding. <S> The principle in electronics is that you have a signal that is representative of a real world signal. 
<S> The term "analog electronics" really comes from "analogous", or similar to. <S> Instead of trying to build light and sound amplifiers you move the signal to the electrical domain, operate on it and then perhaps move it back. <S> This movement back and forth is accomplished through transducers which usually require power to operate. <A> The sensor really does supply a voltage. <S> It uses a couple of internal transistors to build a temperature sensitive voltage divider. <S> It then uses some more transistors to provide that voltage through a buffer amplifier. <S> The upshot of all this is that it provides a voltage proportional to the temperature, and you can draw up to 250µA of current when measuring the voltage. <S> It actually works based on the fact that the gain of a transistor is in part dependent on the temperature. <S> If you set your bias and base resistors to provide a certain current through the transistor, the actual current will depend on the junction temperature of the transistor. <S> Current flowing through a resistor results in a voltage drop across the resistor. <S> In this sensor, the two transistors play the part of the resistor - measuring the voltage drop is an indirect measure of the current.
In this device there is circuitry that converts the temperature of its substrate to an electrical signal.
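The voltage the TMP36 supplies maps straight to temperature; its datasheet specifies 10 mV per degree C with a 500 mV offset (750 mV at 25 degrees C), so the conversion is one line:

```python
def tmp36_celsius(v_out):
    """TMP36 transfer function per the datasheet:
    10 mV per degree C with a 500 mV offset (750 mV at 25 C)."""
    return (v_out - 0.500) * 100.0

# Sanity check against the datasheet's stated operating point:
room = tmp36_celsius(0.750)   # 750 mV reads as 25.0 C
```

The offset is what lets the part report temperatures below zero without needing a negative supply, which is why the formula subtracts 0.5 V before scaling.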
Recognising the output impedance of an amplifier through inspection The output impedance of FET amplifiers seems to be a hit-and-miss affair when I try to find it just through inspection. For example, sometimes it's easy to predict, whereas some other times I have to go through very tiring circuit analysis just to get it; and my answer through inspection seems to be way off from the actual value. For example, the common gate amplifier has an output impedance of \$1/g_{m}\$ which wasn't intuitive at all. So, I wonder if there's a way that one can tell from inspection, because it's possible that the way I'm inspecting things is wrong. <Q> I often deal with this problem, it's not trivial. <S> Many times there are local feedbacks. <S> In that case, you can avoid complicated equations. <S> Find the cold resistance \$R_{out}^0\$. That is, without considering the feedback. <S> By visual inspection, look at how the feedback behaves. <S> Will it increase the current drawn in response to your voltage stimulus? <S> If yes, that means it will decrease the resistance, so the output resistance will be $$R_{out} = \frac{R_{out}^0}{1 - G_{loop}}$$ <S> The reality is that you can't do the basic stuff all over again every time you analyze a circuit. <S> At some point, you just have to remember a few formulas, especially since there's only a few basic configurations. <A> I'm not a transistor level design guy. <S> (The only useful way for me to use a transistor is to wrap an opamp around it.) <S> Unless there's a big resistor in series on the output I can't guess the output impedance. <S> (Well, \$V_{thermal}/I_C\$ if bipolar.) <S> But I have measured it. <S> And you quickly find out that the output impedance of an opamp is complex. <S> And sometimes you care about the complex impedance and sometimes just the resistive (real) part. <S> Good opamps will sometimes have graphs, but then you have to look at the measurement circuit to figure out what they are showing you.
<A> I think your question mainly concerns non-linear circuits (with BJTs, FETs, ...). <S> In this case, it is indeed not a simple task to estimate the value of the output resistance because in many cases - as mentioned already by O. Lathrop - such circuits will contain feedback loops that are connected at the output node. <S> In such a case, the approximate value of the (dynamic, differential) output resistance \$r_{out}\$ can be determined if you know the output resistance without feedback (\$r_o\$) and the loop gain \$A_{loop}\$ of the circuit: \$r_{out} = r_o/(1 - A_{loop})\$. <S> (Note that for negative feedback the loop gain has a negative value). <S> In your example (common gate/base amplifier) you have mentioned \$1/g_m\$, which is the input resistance. <S> This resistance can be easily found by inspection. <S> Output current (\$I_{source}\$; \$I_{emitter}\$) = \$g_m\$ * input voltage (\$V_{GS}\$; \$V_{BE}\$). <S> In common gate/base configuration the input voltage remains the same and the "output" current turns into input current. <S> Hence: input voltage/input current = \$1/g_m\$. <S> (At the same time, this is the transistor's output resistance in common drain/collector configuration.) <S> The output resistance \$r_o\$ in common gate/base configuration (without feedback) is nearly identical to the output resistance in common source/emitter and is mainly determined by the resistor \$R_d\$ (\$R_c\$) that is connected at the output node.
You simply have to find the signal voltage-to-current ratio at the source/emitter node (gate/base grounded) which is known from the classical common source/emitter configuration:
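The two formulas from these answers are easy to put side by side numerically. The component values below (cold resistance, loop gain, gm) are made-up illustration values:

```python
def closed_loop_r_out(r_out_cold, loop_gain):
    """R_out = R_out^0 / (1 - G_loop): with negative feedback
    (negative loop gain) the output resistance drops."""
    return r_out_cold / (1 - loop_gain)

def one_over_gm(gm):
    """The 1/gm resistance seen looking into a source/emitter node."""
    return 1.0 / gm

r_fb = closed_loop_r_out(10e3, -99.0)  # 10 kohm cold, G_loop = -99 -> 100 ohm
r_cg = one_over_gm(5e-3)               # gm = 5 mS -> 200 ohm
```

So a loop gain of -99 knocks a 10 kohm "cold" output resistance down by a factor of 100, and a typical few-mS transconductance puts the 1/gm node resistance in the low hundreds of ohms, both quick inspection-level sanity checks.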
8V AC signal into 3.3V digital input pin I'm a noob and I'm doing this to learn, basically. I have a typical old-style door bell with a transformer that outputs 8V AC; the wire then goes through the push button on the door and finally hits a solenoid-based ding-dong thingy. I'd like to tap into this circuit and extract a signal that I can safely feed into one of the digital 3.3V input pins of my very delicate Raspberry Pi. I have a few diodes, capacitors, resistors, a couple of transistors and a lot to learn. <Q> You can use one of your pretty transistors and a couple of the diodes, resistors and capacitors in several ways; here's one: simulate this circuit – Schematic created using CircuitLab NOTE ABOUT VALUES: I just estimated the value of capacitor C1. <S> If the RasPi sees a 50Hz or 60Hz (depending on your location) on/off when you keep the button pushed, it's too small. <S> I guesstimate it to run empty in about 200ms, but since the diode doesn't do full rectification I may be off by too much... <S> If it is too large the signal will stay on much longer than the button press; up to you if you mind about that. <S> For this design it is very, very important to always only use AC power supplies that are unrelated to your Raspberry Pi. <S> If the Raspberry Pi is powered from the same AC power source through a rectifier and capacitor, don't connect BELL Wire 2, or this will cause serious problems! <S> The diode sends the current only into the capacitor. <S> The capacitor gets charged when BELL Wire 1 is higher than BELL Wire 2; when BELL Wire 2 is higher than BELL Wire 1 the diode blocks any current that wants to escape out of the capacitor. <S> The capacitor's "sort of DC" now feeds the transistor's base through R1, allowing it to turn on. <S> This then pulls the RasPi input pin down to its GND, away from the 3.3V power that it was fed through R2. 
<S> Once the BELL transformer's power disappears the capacitor will empty itself into the transistor and after a very short time (much less than a second) <S> the transistor will have depleted it so much it will switch off again, letting the RasPi pin go back high through R2. <S> So you do need to remember this: the signal you see at the Raspberry Pi will be inverted: when the bell goes, the input pin will be tied to GND (0), and when the button is not pushed it will go to 3.3V (1). <A> One way to go would be the following. <S> Your signal is half-rectified by \$D_1\$, which removes the negative voltage component. <S> It then passes through \$R_1\$, which effectively raises the impedance of your signal, so that it can be clamped without any major current draw. <S> \$D_2\$ and \$D_3\$ clamp your signal to levels roughly between GND and 3.3V. <S> \$C_1\$ is optional, as it would provide a stable-ish signal for you to sample (otherwise you'd end up with a half-wave "rectanguloid"). <S> This is an extremely crude solution. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> What I would recommend instead is using an optocoupler. <S> It was designed for this exact purpose, that is, isolating two domains safely against each other. <S> You can find plenty of info all around the place. <S> (Note that you can still do the \$R_1 C_1\$ filtering to get your ON-OFF signal.) <A> EDIT: <S> I realized I misread the question and thought the OP was trying to power his RPi from the doorbell circuit. <S> I'll leave this answer up for now in case it is useful to the OP, but it is not a direct answer to the question. <S> The fundamental AC-to-DC converter is called a full-wave rectifier. <S> It's made out of four diodes arranged in a specific way. <S> A bulk capacitor is attached to the output to smooth out the waveform. 
<S> You can never achieve a perfect DC voltage from this circuit as the natural sine wave from the AC source will manifest itself as ripple in the output. <S> But the larger the capacitor, the smoother it'll be. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The diodes and capacitor you choose are based on the current and voltage requirements of your circuit. <S> As long as the input voltage stays above the drop-out spec of the linear regulator, the regulator will output a nice 3.3V DC for your RPi.
To achieve the necessary regulated 3.3V for the RPi, assuming it's not doing anything that draws a lot of current, a linear regulator can be connected to the output of the rectifier. You shouldn't have too much trouble finding parts that will work at that small voltage and current.
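The "runs empty in about 200 ms" guesstimate for C1 in the transistor answer can be sanity-checked with the RC discharge equation. C1's actual value isn't given in the schematic, so the capacitance, effective discharge resistance, and voltages below are all assumed example numbers, not values from the original design.

```python
import math

def discharge_time(c_farads, r_ohms, v_start, v_end):
    """Time for an RC discharge to fall from v_start to v_end:
    v(t) = v_start * exp(-t / (R*C))  =>  t = R*C * ln(v_start / v_end)."""
    return r_ohms * c_farads * math.log(v_start / v_end)

# Assumed: 10 uF discharging through an effective 10 kohm base path, from a
# ~10 V peak-rectified bell voltage down to the ~0.7 V level where the
# transistor stops conducting.
t = discharge_time(10e-6, 10e3, 10.0, 0.7)
print(f"hold-up time ~= {t * 1000:.0f} ms")
```

With these assumed values the hold-up comes out in the couple-hundred-millisecond range, consistent with the answer's estimate; a bigger C1 stretches the "on" time past the button press, as the answer warns.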
Catch both sides of a clock change PIC interrupt I am trying to build an IR remote receiver using a PIC16F628A. To do that, I need to catch both the falling and rising edge of the external interrupt pin. From the datasheet it appears you can only receive an interrupt on one or the other edge. How would you recommend going about this problem? <Q> If you can't wire to one of those, I'd switch the edge of the INT interrupt early in the interrupt routine. <A> Here are a couple of simple ways if you don't need anything fancy: <A> The selectable edge you mentioned above is only for the RB0 pin, and PortB pins 4-7 are interrupt-on-change. <S> You don't need to sample both rising and falling edges; you can use just one. <S> Also you will need to use a timer or some counter in software for the width of the pulses. <S> I suggest you send data to a PC and analyse it there <S> so you can calibrate your PIC-side software better. <S> Buy some universal IR receiver circuit; they are cheap and simple to use (don't bother with phototransistors and homemade circuits). <S> There are lots of good tutorials on this subject, maybe even some complete projects; you should check them out.
If you want to interrupt on both edges, then you can use an interrupt on change pin.
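The "switch the edge in the interrupt routine" trick can be modelled to see that it really does capture both edges of each pulse. This is a pure-Python simulation of the idea, not device code: the `intedg` flag here mirrors the role of the PIC's edge-select bit, but the class and waveform are invented for illustration.

```python
class EdgeToggler:
    def __init__(self):
        self.intedg = 1          # 1 = armed for rising edge (like the reset default)
        self.events = []         # (edge, timestamp) pairs captured by the "ISR"

    def isr(self, timestamp):
        edge = "rising" if self.intedg else "falling"
        self.events.append((edge, timestamp))
        self.intedg ^= 1         # flip edge select so the opposite edge fires next

    def feed(self, samples):
        """Feed a sampled waveform; fire the ISR on the currently armed edge."""
        prev = samples[0]
        for t, level in enumerate(samples[1:], start=1):
            rising = prev == 0 and level == 1
            falling = prev == 1 and level == 0
            if (rising and self.intedg) or (falling and not self.intedg):
                self.isr(t)
            prev = level

sim = EdgeToggler()
sim.feed([0, 1, 1, 0, 0, 1, 0])   # a wide pulse, then a narrow one
print(sim.events)
```

Each captured pair of timestamps gives one pulse width, which is exactly what an IR decoder needs; on the real part you would flip the edge-select bit early in the ISR, as the first answer suggests, so a fast opposite edge isn't missed.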
Effect of non-inverting op amp on the AC and DC components of the input I was wondering what the effect of an op amp like the one in the image below is for an input signal such as \$0.5\sin(2000\pi t) + 0.5\$ V? Are the DC and AC components amplified with the same gain? And also, what would be the phase relationship between the input and the output? <Q> As long as the AC signal frequency is within the op amp's bandwidth the AC and DC gains are the same for the circuit you have drawn. <S> The gain is $$\frac{v_{O}}{v_{I}} = 1 + \frac{R_f}{R_i}$$ <S> If your op amp datasheet specifies its gain-bandwidth product <S> you can easily calculate the bandwidth if you know the closed loop gain you need. <S> I've drawn your circuit in CircuitLab with your input signal \$v_{I}(t) = 0.5\sin(2000\pi t)+0.5\text{V}\$, and I'm using \$R_{i} = 1\text{k}\Omega\$ and \$R_{f} = 100\text{k}\Omega\$ for a gain of \$101\$: simulate this circuit – Schematic created using CircuitLab <S> If you run CircuitLab's DC solver you will see that \$v_{O} \approx 50.5\text{V}\$. <S> Unless you happen to be using supply voltages greater than \$50\text{V}\$ <S> the op amp will not actually be able to force \$v_{O}\$ that high and it will saturate. <S> If you need a high gain like \$101\$ as I've simulated and you are not able to get rid of an undesirable DC offset like \$0.5\text{V}\$, you will need to add AC coupling. <S> For most op amp circuits you can simply add a capacitor in series with your input to block the input's DC offset (you just need to determine the appropriate capacitance for the frequencies of interest). <S> However, for this circuit that would be a bad idea since the op amp's non-inverting input bias current (which is very low but non-zero) would have nowhere to flow except into the AC coupling capacitor. <S> To avoid this you also need to add a resistor from the non-inverting input to ground. <S> Think of this as a simple \$RC\$ high pass filter. 
<S> The AC coupled non-inverting amplifier looks like this: simulate this circuit <S> If you run CircuitLab's DC solver on the AC coupled circuit you will see that \$v_{O} \approx 0\text{V}\$. <S> You can run a frequency domain simulation in CircuitLab for the Bode plot of the AC coupled circuit. <S> You can see that the gain is very low at DC and low frequencies, is \$101\$ in the midband (including your input frequency of \$1\text{kHz}\$), and then decreases at \$-20\text{dB/decade}\$ at high frequencies. <S> I don't know what frequencies are important to you <S> so you might need to choose different capacitor and resistor values for the \$RC\$ filter. <A> Most datasheets have a gain vs. frequency plot. <S> That should get you partway towards your answer. <S> At each frequency of interest, find the gain of the opamp itself from the plot and analyse the circuit as if it was DC with that internal gain. <S> Then add the components back together, keeping track of which result was for which frequency. <S> Note that the gain plot does not include any feedback. <S> It is the gain of the opamp itself: Vout/(Vin+ - Vin-). <A> The closed-loop gain of the circuit is 1+Rf/Ri. <S> That gain is flat until you start approaching the closed-loop bandwidth of the circuit, which is the gain-bandwidth of the amplifier divided by the closed-loop gain. <S> At that point the gain is 3dB less, and starts to roll off at 20dB/decade. <S> (For a dominant-pole compensated op-amp, which applies to the majority of general-purpose op-amps.) <S> That's a pretty good approximation of what goes on; if you need to be more exact you can write the open-loop transfer function from the graph in the data sheet and do the math to get the closed-loop transfer function expression.
The AC and DC components of the signal are both multiplied by the closed-loop gain, and the output will reflect that as long as inputs and outputs are within the voltage range, current and power output, and slew rate capability of the amplifier and its power supplies.
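The numbers quoted in the answers are easy to reproduce: closed-loop gain, the amplified DC offset, the closed-loop bandwidth, and the -3 dB corner of an input RC high-pass used for AC coupling. Ri and Rf are the values from the answer; the gain-bandwidth product and RC values are illustrative assumptions, not from a specific datasheet.

```python
import math

Ri, Rf = 1e3, 100e3
gain = 1 + Rf / Ri                      # non-inverting gain = 1 + Rf/Ri = 101
v_dc_out = 0.5 * gain                   # the 0.5 V offset amplified -> saturation

gbw = 1e6                               # assumed 1 MHz gain-bandwidth product
bandwidth = gbw / gain                  # closed-loop -3 dB bandwidth, ~9.9 kHz

R, C = 100e3, 1e-6                      # assumed AC-coupling R and C values
f_c = 1 / (2 * math.pi * R * C)         # high-pass corner frequency, ~1.6 Hz

print(gain, v_dc_out, bandwidth, round(f_c, 2))
```

With these assumptions the 1 kHz input sits comfortably between the ~1.6 Hz high-pass corner and the ~9.9 kHz closed-loop bandwidth, i.e. in the flat midband where AC and DC (if not blocked) see the same gain.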
Child-Safe setup for ESD protection environment on desk I need to set up an anti-static workbench in my spare office, and kids sometimes get into it ever since they learned how to pick the locks. My intended approach is to: Lay out an ESD mat on the desk. Punch a hole through the mat, and connect an ESD common point ground to the mat through the hole with a snap-cap. Connect my ESD wrist strap by removing the alligator clip and connecting the wrist strap to the common point ground. Connect the banana jack of the common point ground to the GND pin on the NEMA 5-15 wall outlet or equivalent on the power bar on my desk, i.e.: I have one problem with this setup, assuming it is correct: I have a small, flat NEMA jack connected to the GND of the wall outlet with an exposed banana plug sticking out of the wall. I'm worried that one of the kids could pull the banana plug out of the GND plug, and plug the banana jack into the hot or neutral blade slots of the wall outlet. Is there a safer way to go about this? Are there maybe 3-prong plugs that have two plastic blades for the neutral and hot, and a fixed metal plug for the GND so there's no way someone can electrocute themselves by playing with the connector to the mat? <Q> Get yourself one of these guys, and wire your ground into it. <S> For good measure, double insulate the hot and neutral lugs inside the plug. <S> The worst the kids will be able to do is unplug it, and they're less than $10 at most places electrical products are sold. <A> This is what you're looking for. <S> Anti-static grounding plug <S> Assuming the same thing is available in your local plug style, and your house grounds are correctly wired. <A> The metal screws in the plug's wall plate are also grounded. <S> Make sure the grounding wire has at least 1M ohm resistance between the common grounding point on the mat and the end of the wire, terminate the wire in a ring terminal, and screw the ring terminal to the screw in the wall plate. 
<S> This has worked very well for me for years - it doesn't tie up the outlets, nor does it get unplugged easily and left off. <S> Note that the common ground point (item 2 in your list) is already terminated in a ring terminal, <S> so you shouldn't have to modify it at all.
You can confirm grounding with a multimeter, but as long as the outlet is grounded, all metal parts accessible to the user, including the screw, must be grounded.
Record signal when two items touch each other I have a Carrera GO!!! race track. There's a plastic mechanical lap counter that adjusts every time a car passes it. What I would like to do is to have some sort of electrical connection so that when the car passes through the lap counter, the counter creates a connection between two wires or something. When this happens, I would like to send that signal to a computer/laptop which records it into some sort of database. The time between each connection would be the lap time, and the number of connections would be the lap count. When this is recorded, I would like to present it on a monitor in real time. There's more, but that would be after I figured out a way to detect a signal to work with... Has anyone done anything like that? Or similar stuff? I'm aware that there exists stuff like this on the market. But that would require buying a new track set etc. (that would probably be cheaper as well :P, but hey, who doesn't like to have something to do in their spare time ;)) UPDATE: I'm adding a couple of pictures of the counter: Beneath the car you can see a white plastic pin. And right in front of the car, you can see a pin sticking out of the track. This pin is pushed to either side, depending on which direction the car goes, and adjusts the counter by one. Here you can see how the counter works. (1) the car passes by, and pushes the pin (2) the pin is then pushing an arm of some sort to turn a gear (3) - which is the counter. Now, at the very bottom, it looks like there could be placed some sort of metal, or wires, that could send a signal to a computer when they touch... But it happens very fast. Hope this helps for clarification. <Q> An IR diode and IR transistor pair would work well. <S> Make or break the line of sight, and your microcontroller or other input method can talk to the computer. <S> Drill a small hole on each side, and adjust it so that when the center plastic gets pushed down, the signal breaks. 
<S> That's a count. <A> A magnet on the car is not a big deal and it should be able to cause a reed switch to operate just fine: <S> - Someone else mentioned using the soundcard input <S> so I'll also hijack that idea. <S> A small battery and a 100k resistor feeding the reed switch would be a basic idea for detecting a glitch from the magnet, <S> but you could use a 5V logic supply and virtually any value resistor in series with the reed to do the job. <S> Some form of debounce circuit (or code) will be needed but this is trivial. <S> You could position the reed switch anywhere on the circuit - in fact you could have several so you can record section times just like F1 - <S> OK, I'm ranting now. <S> Cool idea - think I might apply this to my train set LOL. <S> EDIT - another idea <S> Again, this disregards the current track counter and would use some form of overhead gantry with a light emitter that could "spray" light across the track width. <S> Sensors would be positioned in the grooves and normally they would be receiving a signal from the emitter except when the light is "broken" by a car passing. <S> Depending on the track width, you may actually be able to mount the sensors trackside and still get a decent "break" signal when a car passes - it's all about angles. <S> IR might be less effective than visible light but should also work. <S> The emitter needs to be pulsed in order for the receiver to be "decoupled" from ambient lighting effects. <S> Probably run at 10kHz to 100kHz. <S> This method also gives you the opportunity to measure the "blanking" time (using another counter) to determine car speed as it passes by. <A> If you can create a mechanical contact, for instance by using the spring of the lap counter, then you can use a pull-up or pull-down circuit to obtain a pulse signal. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Then with an Arduino or any other microcontroller board you can set up the communication with the PC. 
<A> I'd place very small flat metal contacts like this. <S> Solder wires to those contacts and then connect the soldered wires as the switch in the linked example below. <S> Make sure you test with a multimeter that it'll break continuity when the car passes through and maintains continuity when in the default resting position. <S> This is how you would use an MCU (Arduino) to monitor your contact count. <S> Except if you had it connected like the schematic in the link, with +5V at one end of the switch and ground at the other end of the resistors, you'd just want to test for a low/ground signal. <S> You'll HAVE to debounce the switch and maybe even add a small delay to allow for the open circuit time. <S> The circuit in your case would be high until the car came through and opened up the switch, which would cause the circuit to be pulled down to ground. <S> Depending on how robust you want it to be, at this point you could just write the code to compare lap times and send that data to the built-in serial terminal, and you're done. <S> You could view the lap times with just a little bit of comparison math.
I reckon that a reed switch positioned in the track groove and a lightweight magnet on the car would work nicely - forget about the current lap counter (although you could still use it). Another way would also be to record the lap time straight in the microcontroller and then send it to the PC via UART-to-USB.
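The "debounce and compare lap times" logic from the answers above can be sketched in a few lines: given raw timestamps (in seconds) at which the contact or reed switch fired, discard bounce re-triggers and compute lap times. The 50 ms debounce window and the sample timestamps are assumptions to be tuned to the real track and car speed.

```python
DEBOUNCE_S = 0.05   # assumed: ignore re-triggers within 50 ms (contact bounce)

def lap_times(timestamps, debounce=DEBOUNCE_S):
    """Debounce raw switch events and return the time between passes."""
    laps = []
    last = None
    for t in timestamps:
        if last is not None and t - last < debounce:
            continue            # bounce: still the same pass of the car
        if last is not None:
            laps.append(t - last)
        last = t
    return laps

# Raw events: each car pass produces a burst of bounces.
events = [0.00, 0.01, 0.02, 5.43, 5.44, 10.91, 10.92, 10.93]
times = lap_times(events)
print(times)          # lap durations between debounced passes
```

The lap count is simply `len(times) + 1` passes; the same function works whether the events come from a reed switch, the metal contacts, or a broken light beam.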
Is it possible to step-up the output of a laptop USB port to 12 watts? I've got a USB device that requires 12 watts to charge. My laptop appears to output less than that. My question is: is it possible to step up the output of a laptop USB port to 12 watts? <Q> No. <S> This is basic physics. <S> There is no free lunch (or energy). <S> If the laptop only puts out 500 mA at 5 V, for example, then you get 2.5 W. <S> You could convert this to a different combination of voltage and current, but the result can't on average exceed the 2.5 W you put in. <S> (It is possible to get higher power out for short durations, but that's clearly not what you are asking about. <S> The average out still can't exceed the average in.) <S> Since no conversion will be 100% efficient, you will actually get a little less power out, with the remainder getting dissipated as heat in the converter. <S> For example, let's say you can make a switching power supply that is 90% efficient. <S> That means with 2.5 W in, <S> you get 2.25 W out at some other voltage and current combination. <S> The remaining 250 mW will heat the switching power supply. <S> You could get, for example, 10 V at 225 mA, 24 V at 94 mA, 2 V at 1.13 A, etc. <A> As a practical thing, no, you cannot. <S> Olin is incorrect in stating that you can't get more power out than power in. <S> In actual fact, you can get more power out than power in. <S> You just can't get it continuously. <S> This is in fact done all the time in charging a battery. <S> You charge the battery with a low current (low power) and the battery can later deliver a much higher current (more power). <S> Power is voltage * current. <S> Energy is voltage * current * time. <S> 5V * 0.5A * 1 second is 2.5 joules of energy. <S> That is 2.5 watts for 1 second. <S> 5V * 2.4A * 0.2083 seconds <S> is also 2.5 joules of energy. <S> That is 12 watts for 0.2083 seconds. <S> So, you could charge a large capacitor from your USB port until it reaches (very nearly) 5 volts. 
<S> Then, you would let the device charge itself from the capacitor. <S> The capacitor charges slowly through the USB port (drawing only 2.5 watts, but for a relatively long time). <S> When you then connect the charger to the capacitor, it can discharge much faster (delivering more power, but for a very short time). <S> Switch back and forth (allowing proper amounts of time for the charge and discharge cycles) and you could deliver enough energy to charge your device - but it would take at least (12/2.5 = 4.8, so roughly) 5 times longer to charge than usual. <S> The diagram shows what I'm talking about. <S> If you switch the capacitor to the USB, it will charge at (maximum) 2.5 watts. <S> When you switch the capacitor to the load, it will discharge at a much higher rate - the capacitor calculator that I used (not the one in the simulator) says that R1 will discharge at a maximum of 52 watts - your charger would likely not draw that much, since it limits the charging current. <S> I doubt that your charger would like pulses, and I doubt that it would be worth your while to find out - a 1F capacitor costs over 50 US dollars. <S> Still, it could be done if there were some really serious need to do it. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> The USB 3.0 spec allows a special battery charging mode that increases the output to 1.5 A, while not allowing data transfer during that time. <S> With this option, you could use a USB Y-adapter to connect to 2 separate USB ports. <S> As long as your device is capable of making the request, this could supply up to 15W total. <S> The USB 3.1 spec adds power profiles allowing up to 5A at 12V or 20V, giving you significantly more than what you need. <S> However this is a fairly new spec, and not all devices will support these power profiles. <A> As a concise answer, no. <S> A USB port that sticks to the USB spec of 500mA output max provides 5V * 0.5A = 2.5 watts. 
<S> With real-world efficiency losses (nominally 80~90% efficient), you can provide a higher voltage with less current, or a lower voltage with more current, but you can't create power out of nowhere. <S> The output power will always be less than the input power (unless we talk about theoretically perfect systems, in which case the output will equal the input power). <S> That said, many USB ports don't adhere to a strict 500mA output. <S> 12W is roughly 2.4A at 5V, so basically a tablet charger. <S> Apple computers, with special drivers for the iPhone and iPad, will provide that for the iPad, but that is not to USB spec. <S> Other computers vary. <A> Since most laptops come with more USB ports, you may get it with a Y cable like this: <S> This doesn't allow a single port to increase its output power, but allows you to split the load over two of them. <S> I guess that you may also use more cables to distribute the power over more ports, although I'm not sure I'd recommend that. <A> While I agree with pretty much everything Olin has said, in this particular case he may not be entirely correct. <S> I finally went and checked Wikipedia because I had a vague memory that USB 3 might be able to deliver higher power; <S> check out the claimed power delivery capabilities of 3.0 and 3.1 in the last paragraph of each.
The thing you CANNOT do is get more energy out than you put in. Any device that you could build to do the job would cost more than it is worth - just get a separate charger.
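The power/energy arithmetic running through these answers works out as follows: how much longer a 2.5 W USB port takes to deliver the same energy as a 12 W charger, and how much energy the 1 F / 5 V capacitor from the burst-charging idea actually stores (E = ½CV²).

```python
usb_power = 5.0 * 0.5                 # 5 V at 500 mA = 2.5 W
charger_power = 12.0

# To deliver the same energy, the USB port needs proportionally more time:
slowdown = charger_power / usb_power
print(slowdown)                       # 4.8x longer, i.e. roughly 5x

# Energy stored in a capacitor: E = 1/2 * C * V^2
C, V = 1.0, 5.0                       # the 1 F / 5 V capacitor from the answer
energy = 0.5 * C * V**2               # 12.5 J per charge cycle
burst_time = energy / charger_power   # how long that runs a 12 W load
print(energy, round(burst_time, 2))   # ~1 second bursts, even ignoring losses
```

So even in the best case each capacitor cycle buys only about a second of 12 W operation, which is why the answers agree the practical solution is a separate charger (or a spec that genuinely allows more current, like USB 3.x).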
2.4G signals over single antenna I want to run two different 2.4 GHz radio cards off of the same antenna. Is this possible? I was thinking of running both into a splitter and running a single cable to the antenna. The two radio cards are operating on different channels. Is this possible or no? <Q> Yes, it's possible. <S> I'm going to assume that your "radio cards" are Wi-Fi radios or something else which must each receive and transmit. <S> However, you can't simply connect both radios to the same antenna as you describe. <S> If you do that, the transmitted power from one radio will render inoperable the receiver of the other radio. <S> It's possible it just won't work, but in all likelihood you will break your radios. <S> Receivers are designed to handle the extremely weak power received from a distant transmitter -- not the full power of a transmitter coupled directly to it. <S> What you need to solve this problem is a diplexer. <S> A diplexer is a 3-port device. <S> One port goes to the antenna, then each of the remaining two ports goes to one of your radios, each on a different channel. <S> The diplexer has within it filters which isolate the different frequencies of each channel, so each radio does not see the other. <S> Unfortunately, a diplexer with a frequency response steep enough to sufficiently isolate two 2.4 GHz <S> Wi-Fi channels will be very expensive. <S> A separate antenna is almost certainly a more viable solution. <A> Short answer: probably not as well as you would like, if at all. <S> And you'd greatly risk damage to your device. <S> Longer answer: a splitter's directionality is only as good as the reflectivity of the load (antenna). <S> The return loss of a common antenna could be as high as 9 dB. <S> So, you output from radio A at 20 dBm, you lose 0.5 dB through the splitter, you lose 9 dB at the reflection, and your reflected signal is split back through the splitter, losing another 3.5 dB towards <S> radio B. 
Which means you only have 17 dB of isolation <S> (and this is very offhand; <S> the actual directivity may be even worse). <S> So, your 20 dBm output A signal is hitting the front end of your radio B with a possible 3 dBm of power when it's in receive mode. <S> Even <S> if your radio survives, you need to look at the channel-to-channel rejection. <S> It's a safe assumption that if radio A transmits, radio B will still be swamped and unable to successfully receive a signal. <S> So, if you have these two radios and are willing to sacrifice them for science, try it out. <S> Edit: I'm assuming here that if you're trying to cut costs by using one antenna, you probably don't want to add the high, high cost of a diplexer. <A> Generally not. <S> Even if they are using different channels, the broadband noise from one that is transmitting will desensitize the receiver of the other radio. <S> For example, suppose one radio transmits at 1 W (+30 dBm) and has an adjacent channel power ratio of 50 dB. <S> Then the noise in the adjacent channel is -20 dBm. <S> The receiver may have a general sensitivity of -80 dBm. <S> But obviously if there is a million times more noise power than signal, <S> it's not going to receive anything!
My semi-professional opinion is that it will either not work or, worse, break some radios. It might be possible, depending on what channels/frequency you're talking about, to get a diplexer.
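The dBm bookkeeping behind these estimates is just subtraction of losses. The individual loss figures below are assumptions chosen so the totals match the answer's rough numbers (about 17 dB of isolation), not measurements of any real splitter or antenna.

```python
def leaked_power_dbm(tx_dbm, losses_db):
    """Transmit power minus a chain of losses, everything in dB."""
    return tx_dbm - sum(losses_db)

# Radio A at +20 dBm leaking to radio B: through the splitter, reflected
# off the imperfectly matched antenna, back through the splitter.
at_radio_b = leaked_power_dbm(20.0, [4.0, 9.0, 4.0])  # assumed loss split
print(at_radio_b)   # ~+3 dBm arriving at radio B's receive front end

# The broadband-noise example: +30 dBm TX with 50 dB adjacent-channel
# rejection, versus a -80 dBm receiver sensitivity.
adjacent_noise = leaked_power_dbm(30.0, [50.0])
margin_db = adjacent_noise - (-80.0)  # 60 dB = a million times the signal
print(adjacent_noise, margin_db)
```

Both calculations point the same way: a few dBm of leaked transmitter power, or a noise floor 60 dB above receiver sensitivity, is more than enough to swamp (or damage) the other receiver.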
Replacing the thermocouple for a multimeter? I bought a no-name but decent multimeter, and it came with a thermocouple. Let's say I broke it in some way. Can I just replace it with any other thermocouple, or is each thermocouple calibrated for one specific model of multimeter? I have been looking at thermocouples, e.g. on eBay, such as this one. They write some specs but they don't write which multimeter it fits with. So does any thermocouple fit with any multimeter (or digital thermometer)? <Q> That means that a K type thermocouple must be paired with a meter that is calibrated for K type thermocouples. <S> K type thermocouples are the most common so it is very likely that your meter can handle the one you looked at on eBay, which is a K type. <S> However there are much cheaper K types available than that one, which is specified for very high temperatures. <S> As far as calibration goes, there are standard tables of thermocouple voltage versus temperature for each type of thermocouple. <S> This is possible because each type of thermocouple uses the same composition of wires. <S> Thus your multimeter would have been designed to use the table for K type thermocouples. <S> The actual accuracy you can achieve is determined by the specified accuracy of the thermocouple and how well the meter conforms to the standard table values, which are nonlinear. <A> If the original had a yellow plug <S> it was probably type K (Chromel-Alumel) <S> and you can replace it with any other type K thermocouple. <S> The other common color codes are blue (T) and black (J). <S> I only say "probably" because there is no way of knowing what some random maker in China might do, but <S> those (ISA) color codes are very widely used despite other standards that existed in Japan and Europe. <A> There are a few different types of thermocouples. 
<S> In general you can replace a thermocouple with the same type, but you can't with different a different type because the calibration constants will be different.
In general any thermocouple can be used with any meter that handles thermocouples as long as they are compatible.
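The "standard tables of voltage versus temperature" mentioned above are what the meter works from internally: it measures millivolts and interpolates the table for the configured type. The (temperature, millivolt) pairs below are a few approximate K-type reference points for illustration only; a real meter uses the full standard table and also compensates for its own cold-junction temperature.

```python
# Approximate K-type reference points: (deg C, mV). Illustrative subset only.
K_TABLE = [(0.0, 0.000), (100.0, 4.096), (200.0, 8.138)]

def k_type_temp(mv):
    """Convert a measured millivolt reading to temperature by linear
    interpolation between adjacent table entries."""
    for (t0, v0), (t1, v1) in zip(K_TABLE, K_TABLE[1:]):
        if v0 <= mv <= v1:
            return t0 + (t1 - t0) * (mv - v0) / (v1 - v0)
    raise ValueError("voltage outside table range")

print(round(k_type_temp(2.048), 1))   # about the middle of the 0-100 C span
```

This is why types are not interchangeable: feed a J-type's millivolts into a K-type table and the interpolation still returns a number, just the wrong temperature.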
Do I need an OS for ARM Cortex-M0(3)? I'm a developer of control devices for switch-mode power supplies which need to generate PWM signal(s) with frequencies of about 100 kHz, measure analog signals, communicate via USART, make relatively simple calculations and so on. It is also important to start up fast and react very fast to some events like external interrupts (often within a fraction of a microsecond). Now I'm using Atmel Studio with the GNU C compiler. I'd like to move from 8-bit AVRs to ARMs. The main reasons are (hopefully): faster, more flexible, more powerful calculations, better community support. I'd really want a short learning curve and fast development cycle. I decided to use ARM Cortex-M0 and M3 processors. Will I be able to use an operating system on ARM? The reasons why I think it could be useful in my case: faster learning curve and development time. But I have some doubts that I will be able to run the application within several milliseconds after supply voltage is applied, and that I will be able to interact with ARM peripherals (like timers). So do I need an OS in my case? <Q> You ask two very different questions. <S> Will I be able to use an operating system on ARM? <S> Of course, but it will likely be an RTOS like FreeRTOS rather than <S> a desktop OS like Linux. <S> So do I need an OS in my case? <S> Strictly speaking: of course not. <S> But the question you should have asked is: <S> will I benefit from using an RTOS? <S> That is more difficult to answer. <S> The benefits of an RTOS are multithreading and the (additional) libraries, including hardware abstraction. <S> The downside is that it takes time to learn how to use these facilities. <S> My gut feeling is that for the application you describe using an RTOS will not help you much, so the learning will cost you more than it saves you. <S> But knowing an RTOS will probably be useful for some future project. 
<A> A general-purpose operating system, such as Linux, is a different thing from a real-time operating system (RTOS), such as FreeRTOS and ARM's own RTX. <S> You can search for more info about the differences. <S> From the description of your application I would say that you certainly don't need, and probably don't want, a general-purpose operating system. <S> I believe the ARM Cortex-M0 and M3 microcontrollers do not have a Memory Management Unit (MMU), <S> so that will make it difficult to even run a general-purpose operating system on them. <S> An RTOS does not require an MMU. <S> If you are already familiar with AVR programming but you're not familiar with RTOS programming, then using an RTOS on your new ARM project is probably going to increase your learning curve rather than decrease it. <S> An RTOS does not require a lot of time to boot up like a general-purpose operating system. <S> An RTOS application can be up and running within milliseconds. <S> An RTOS should not get in your way of accessing the microcontroller peripherals such as timers. <A> I think it depends on what exactly you are doing. <S> If the controlled system is simple then usually having an RTOS there just adds burden instead of decreasing it. <S> One reason you need an RTOS is when there is more than one thing going on in your MCU. <S> I have a rather complicated AVR project that called for an RTOS on an ATtiny85: a light dimmer that is controlled over fiber optics. <S> The MCU has two things to do at the same time: watch and time the AC phase sense signal and emit the TRIAC control signal at an appropriate time, and run a software serial port at 9600 baud for the optical interface. <S> I had to program both of them using timer-based interrupt-driven non-blocking code.
An RTOS could be used but it is probably not necessary. Any application can be written in a self-contained way, without an OS or RTOS.
Can I use this relay for DC? http://www.dx.com/fr/p/ssr-25da-25a-solid-state-relay-white-134494#.VDmPWPl_uap It says AC output, but how can this transform DC to AC? Shouldn't this act like a switch? If my circuit is DC and I close the switch, will it transform to AC or will it stay DC? <Q> According to one review on DX: by Docteh on 11/26/2012 Involvement: Expert (understands the inner workings) - Ownership: 1 week to 1 month Pros: Works perfectly. <S> This one also comes with a plastic cover. <S> The cover has holes that allow a screwdriver to go in and be used without the cover. <S> Cons: <S> Appears to use more power on the DC side than a similar, pricier SSR like the Crydom ones. <S> Another downside is that AC SSRs such as this only turn off at the zero crossing point, which means you have to use this with AC voltage on the output side. <S> Most likely (there is no datasheet or parts list to confirm) it uses a triac or similar for the actual load control switching, which means that no current can be flowing for it to turn off. <S> On AC, this is simple as the AC signal crosses the zero point between positive and negative. <S> On DC, the DC signal needs to be off for that to happen. <S> It's a catch-22. <S> So no, you need a DC-switching SSR. <A> KG-25DA Voltage Range (V) Input: 3-32 VDC, Output: 24-380 VAC, Rated Current (mA) 25000, Material: Plastic, Dimensions (cm) 6.2 × 4.5 × 2.5, Weight (kg) 0.098, … From here. You are asking two different questions (i.e. "Can I use this relay for DC?" and "can it transform DC to AC?"). <S> The answer to both is "No!". <S> Answers here. <S> Reasons below: <S> Can I use this relay for DC? DC output - no. <S> It says AC output, <S> but how can this transform DC to AC? <S> It cannot and does not, <S> and that is not <S> its job. <S> Shouldn't this act like a switch? <S> Yes. <S> It is a controlled switch. <S> See below for details.
<S> If my circuit is DC <S> and I close the switch, will it transform to AC or will it stay DC? <S> Neither. <S> It does not transform anything. <S> The control input accepts DC. <S> The switched output switches only AC. <S> They are not connected, transformed or changed. <S> Explanations: <S> A low-powered control or input signal is used to control (turn on and off) a high-powered output or "LOAD". <S> The load is intended to be a high-voltage, high-current AC-powered device. <S> Lower voltage and current AC-powered devices can be controlled down to certain minimum levels. <S> While it will work in some very special cases with a DC load, these are not normal or useful. <S> The control signal is intended to be a low-voltage, low-current DC signal. <A> I think you misunderstand quite how an SSR works. <S> Basically you provide a DC signal which "activates" the relay. <S> At that point a virtual switch inside the unit connects the two AC terminals together. <S> It is designed to switch AC power on and off. <S> You can think of it as a high-powered opto-isolator - you provide the "LED" signal and the AC connections are equivalent to the switched transistor collector and emitter.
This is an electronic device designed to act as an electrically controlled switch. There is a red LED on the DC side of the Solid State Relay. This one will not work well for you.
Smart ways to control an LED ladder? I have two sets of LED ladders (12 in each ladder, so 24 LEDs) that I want to control from my microcontroller. I don't have enough pins for 24 LEDs (and want to make it scalable), and the microcontroller has other tasks to attend to as well as controlling the LED ladders. I don't have a set number of pins I can dedicate to a ladder, but the fewer the better (if a number is required, a maximum of 8 pins per ladder). The LEDs will be updated very infrequently (every few hours), so I don't want to eat up processing power by multiplexing continuously. My microcontroller doesn't have a DAC. Currently, I am thinking of using shift registers (8-bit and 4-bit, cascaded). However, since shift registers are "one way" devices I'll have to clear the registers to set a lower value. I don't think this is a big problem (for me, as they get updated infrequently), but are there better methods to control an LED ladder? Edit: By an LED ladder, I mean a single line of LEDs (like a bar of an equalizer) that shows a level of something. The LEDs don't need to be controlled individually. <Q> If you want expandability, and to use almost zero digital I/O pins for the purpose of merely driving LEDs, think about using a proper LED driver chip. <S> You can do shenanigans with shift registers, but you will only get so far. <S> If you are going to invest in board space for these ICs, why not just use a good old I2C bus LED driver with 16 channels each? <S> Like the TLC59116 by Texas Instruments. <S> That gives you 14 * 16 total LEDs, all individually controllable (open-drain outputs; you can connect LEDs straight to the IC with a resistor from your voltage rail, which is rated up to 17 V too!). <S> Not only are they individually controllable, but because it's an I2C bus you can add/remove any device from your "motherboard" in a very easy way (power + I2C connector, done!). <S> Up to 224 LEDs with 8-bit PWM dimming, bus-wide commands, or individual LED commands.
<S> Quite amazing. <S> Try it! <S> I should point out that by all means, try other types of multi-channel LED driver ICs; however, the fact that there are I2C bus compatible ICs out there makes them EXTREMELY useful for expandability, letting you quickly add/remove large amounts of LEDs, merely changing some software to deal with the hardware changes. <S> The ICs are fairly cheap, only a few $ each, but they will be better than trying to administer many shift registers as fast as possible - let an IC deal with it, because that is what they are designed for. <S> Finally, you may use these more for "logic" than actual power driving, if the LEDs you are trying to drive are heavy duty (like, hundreds of mA to amps each), by using inverting buffers and push/pull totem-pole stages to operate the gate of a MOSFET (usually N-channel, in a low-side power switch circuit). <A> Use the TLC5955 or TLC5954. <S> Control up to 48 LEDs with 4/5 GPIO.
You can individually control 24 LEDs with 2 I/Os and 2 shift registers if you can stand a little blink when you update the display, otherwise it'll cost you 3 I/Os. Four hardware address pins allow up to 14 devices on the same bus.
Did I do this circuit diagram correctly? I just need some feedback on the circuit diagram I made (it's my first time): This is for a mini-car. There are two battery packs, one connected to an Arduino UNO, and the other connected to two motors. Both battery packs have a SPST switch. The battery pack connected to the motors has a 6V regulator, and an electromagnet connected to the Arduino (digital pin 8) has a 5V regulator. Any feedback would be fantastic! EDIT: Okay, I changed some stuff around. Instead of connecting the motors/electromagnet directly to the digital pins (stupidly), I added MOSFET transistors to control the motors/electromagnet from the Arduino. The motors have a diode and capacitor. I hope this is a step in the right direction. *the 6V regulator I mentioned earlier is not in here; I need to read more about it later and will add it in EDIT2: Okay, hopefully this is the finalized circuit. <Q> I'm sorry to say, you need to go back to the drawing board. <S> An Arduino Uno (i.e., ATMega328P) can only sink or source an absolute maximum of 40mA through any one of its IO pins, and Atmel only guarantees up to 20mA. <S> That is not enough current to power (or <S> sink power from) things like motors, electromagnets, etc. <S> You need to switch the motors etc. with transistors, and add flyback diodes to absorb the induced back-EMF from the collapsing magnetic fields. <S> Treat them like you would a relay - google "Arduino Relay" for how to do that. <S> Also, your regulators make no sense, and your batteries are backwards. <A> Battery pack 1 is backwards, and the '5V regulator' coming out of D8 is disturbing. <S> Please do not drive motors from digital microcontroller pins like that, and the battery there is also backwards. <S> The 6V regulator shown is also weird. <S> What is the SPST switch doing there? <S> Can you describe what the purpose of this whole circuit is? <S> I may be able to draw a diagram on paper to show what it should more resemble.
<S> Ah, a mini car, <S> sorry. <S> So you have two 7.2V battery packs on this car? <S> If you want bi-directional control of the two motors, you will need a proper H-bridge motor driver circuit, or buy one. <S> If you only want "forward" you can do it easily with an N-channel MOSFET for each one, and a flyback diode, and you CAN use a digital IO pin for each one in that case. <A> Both your batteries are shown backwards - the negative terminals of the batteries should be connected to the Arduino ground, with the positive terminals connected to the switches. <S> If your voltage regulators are intended to be Zener diodes, they should be connected across the load, with a resistor in series between the load/Zener pair and the positive supply. <S> However, Zener diodes make very poor regulators where the current can vary widely. <S> To determine an appropriate regulator, you will need to know what current the motors draw. <S> As others have said, you can't control high currents directly with an Arduino digital output - you will need to control transistors with the Arduino, and have the transistors switch the motor or electromagnet currents. <S> PWM can be used to control the motor speed, but it won't reverse the motor direction - you will need H-bridges to reverse polarity on the motors to provide direction control. <A> I have some additional notes. <S> When reading your questions I think you might well need some basic understanding of electronics. <S> As the others already pointed out, the pins of an Arduino are not fit to bear the current of a heavy load. <S> But why is this a problem for you? <S> This is because of basic laws of circuit theory. <S> Have a look at Kirchhoff's laws for deeper insight. <S> Important for you is the fact that any one-port (i.e. a part with exactly two wires coming out) like the motors, magnets and regulators shows the same current on both of its wires.
<S> And if you compare current with pressurised air and conductors with pipes, a connection between two one-ports like your magnet and the regulator resembles a closed pipe with no means for the air to leak off sideways. <S> Hence every liter of air which comes out of one part has to enter the next. <S> With current it's the same. <S> And a part with two connections (a one-port) has the same property. <S> What goes in on one side has to come out at the other side. <S> So, if your motor needs, let's say, 500 mA when driven with 6 V, the 500 mA will enter the Arduino pin in your schematic damn sure. <S> And 500 mA will heat up the tiny components of your Arduino until they melt or crack and fail. <S> The high current can be routed elsewhere by using parts with effectively more than two connections, like transistors, but this is too much to explain right here. <S> The next thing is your voltage regulators. <S> A one-port can only control voltage and currents between and at its own connections. <S> So the voltage regulators in your circuit may well enforce a voltage of 6 or 5 V at their own wires, but not elsewhere in this circuit. <S> Hence the voltages on the motors and magnets cannot be controlled like this. <S> Now go google some example circuits and basic information, and come back with more information about - the power consumption of your motors (current or power) - the model number of the voltage regulator you intend to use - an updated schematic; <S> then it may be much easier to help.
You should use a Low Dropout linear regulator, or better, a switching buck regulator, to reduce the voltage to the motors.
Protecting a circuit from the effects of a capacitor's charging During its first outing, I managed to burn out the control circuitry of two of the batteries connected to my LED jacket (see this , this and this question for context.) I thought I should have been well within spec for the total current I was drawing, so I'm now wondering if it wasn't the total, but just how quickly I was trying to draw it (the draw is quite "peaky".) In order to mitigate this in the next iteration of my design, I was thinking I could put a few hefty (10,000µF?) capacitors across my power bus. But I understand that they will draw a large amount of current themselves while charging. Can I prevent this by putting a resistor in series with each capacitor? What resistor values would be appropriate if the voltage across the bus is 5V and I am using four 10,000µF capacitors? Or is there a better way to limit the current drawn by the capacitors on startup? <Q> My money is on the fact that the "batteries" are in parallel. <S> If you open them up you will find a LiPo battery at (around) 3.7V. <S> That feeds a boost converter to raise the voltage to 5V. Putting voltage regulators (of any sort) in parallel is never a good idea, as they will basically fight each other for control over the target voltage, resulting in nasty things happening. <S> You would be better off splitting your system into separate power zones, with different groups of LEDs powered off different batteries. <S> That way they only share a common ground, not common power. <A> Guess I hadn't really shutted-upped after all ;-). <S> But that's more because the strips will be "transient-ing" like mad. <S> Try a smaller cap, with a slight series resistance, at each end of each strip. <S> Which values to use is a bit dependent on the strip length / expected peak current per strip. <S> The capacitor is only useful if its low series resistance can beat the resistance and inductance of the wiring.
<S> For example, with a 10 Ω resistor you will not be using the caps much, because the wiring is quite sturdy and short. <S> On gut feeling I'd probably go with 470µF ~ 1000µF with a normal (non-low) ESR rating on all ends and leave it at that. <A> For inrush current limiting, NTC inrush limiters are available. <S> When cold they present a relatively high resistance, limiting the inrush current. <S> Once the capacitors are charged, the current flowing through them has warmed them up and their resistance has dropped for normal operation. <S> Digikey should have these. <S> Seetharam
In actual fact, what you have there aren't just batteries, but batteries with boost converters. As another small answer to the question with many potential answers, I think it might not be an altogether bad idea to add some capacitance.
Communication between microcontrollers - I2C, SPI, UART? Basically I have two microcontrollers that need to communicate with each other. Both controllers send and receive data. The basic idea that I have: I2C, SPI - I think we can NOT use these protocols in this case, because both are master-slave based protocols. So if one controller is configured as master and the other as slave, then if the slave uC wants to transmit data it cannot initiate the transmission, and it is also not allowed to generate the clock. UART - I guess this should work as it's asynchronous. So nobody is bound to be slave or master. My question is whether the above assumptions are right. If not, then please correct me. <Q> Your assumptions are correct, yes. <A> If this is for your own setup, then the only thing that matters is consistency. <S> You control everything, so it's up to you. <S> You can go with a standard protocol, or make your own, or modify one to suit your needs. <S> If you already have a bus like I2C or SPI in use, you might as well continue to use it. <S> That said, while I2C and SPI are master-slave protocols, this can easily be worked around through the use of an interrupt/signal pin. <S> If the slave wants to talk to the master, it toggles the interrupt pin, and the master initiates an I2C/SPI session. <S> Or polling. <S> As for generating the clock, unless you have a specific need for both microcontrollers to do that, why bother? <S> Do you need to run one at a different speed? <S> Choose a clock speed ahead of time and stick to it in your implementation. <A> The devices on an I²C bus are not given predetermined roles of master and slave: <S> A device is a master if it is currently controlling a transaction with a slave, or is actively trying to do so. <S> A device is a slave if it has a slave address and is either communicating with a master or listening for masters. <S> Other devices on the bus are simply inactive.
<S> I²C supports multiple masters and multiple slaves. <S> If you have multiple devices that need to be able to send or receive data at will, you assign a slave address to each of them, and program them to assume the role of master when they need to transmit or request data. <S> If multiple masters are present, the masters take turns: before starting a transaction, a master waits for the current transaction to finish, if there is one. <S> If multiple masters begin a transaction at the same time, the first one to notice that the bus is not in the state it wants backs down (this is known as arbitration). <S> In short, what you want certainly isn't impossible with I²C. However, implementing I²C is a bit of a pain and likely overkill for just two microcontrollers trying to talk to each other. <S> I would choose UART in your case, thanks to its simple and flexible nature.
SPI and I2C are both normally master/slave protocols, though there are ways of "bending" them to be able to work either way around. But for simplicity, yes, UART is probably the easiest and most sensible.
Wiring up an LCD display with unknown controller I found an old LCD in my pile of junk. I would like to be able to use it. However, someone (possibly / probably me) sawed off the part of the PCB with the series code, so I can't know what the controller chip is. I'm trying to figure it out. My analysis so far: The display has backlight. There are 16 pins, so HD44780 would be my first guess (3 power pins, 11 data pins and 2 for backlight). The three pins that are wired up could be power, looking at the traces, and the last two LED anode and cathode. So I tried to wire it up with pin 1 to 0V, pin 2 to 5V, and pin 3 (probably contrast) to 0V. I got this result: This looks like HD44780 to me, with a 2x16 screen. But I've got a few questions about this: Are there other (not HD44780-compatible) displays that would show this pattern when only power is supplied (but no controls), or is this specific to the HD44780? The pinout I have used up to now for all LCDs with HD44780 I've used so far (as far as I can remember) was like this: Power (Vss, Vdd, Vcontrast), Control (RS (register select, command or data), R/W (read / not write), E (enable)), Data (DB0 - DB7), Possibly backlight (Anode, Cathode). But is this a standard? Can I assume that this uses the same pins for the same functions? And, if not, is it safe to just try it, or could things go wrong? If it could go wrong, is there a way to judge by the PCB traces which lines would probably be data? I don't think it's important, but in the end I intend to use this display with a PIC. I have used HD44780 displays with a PIC before, so I have working code to try out (and also other displays with HD44780 (or ~compatible) controllers to check the code and setup). <Q> If it looks like an HD44780 <S> and it smells like an HD44780, <S> then in all probability it is an HD44780. <S> Most people just stick to what is cheap and easy to use, which means HD44780. <A> I'm almost certain that I saw a character LCD just like that in the past.
<S> Google's image search has turned up several. <S> http://www.orientlcd.com/AMC0801AR_B_Y6WFDY_8x1_Character_LCD_Module_p/amc0801ar-b-y6wfdy.htm <S> S6A0069 controller <S> http://www.voti.nl/shop/catalog.html?LCD-16 HD44780 controller <S> http://blog.hydrotik.com/2007/09/04/making-things-as3-part-3-serial-lcd/ <S> http://www.open-electronics.org/an-ultra-customizable-lcd-shield-for-arduino/ HD44780 controller <S> On a different note: judging from the photos in the O.P., a portion of the PCB is broken off. <S> That may present problems for reverse-engineering. <A> Since you've already got power/ground/contrast mostly sorted out and - from your top photo - pins 15 and 16 are obviously power to the backlight, the heavy hitting's done and all that's left is data and control. <S> Since the controller is COB, I don't know of any way to visually tell the difference between data and control lines, but since the worst that could happen if you got any/all of them mixed up is a garbled display or no display, it wouldn't harm anything. <A> Are there other (not HD44780-compatible) displays that would show this pattern when only power is supplied (but no controls), or is this specific to the HD44780? <S> This is not specific to the HD44780 or similar controllers, but as a matter of fact most character LCDs are HD44780 compatible. <S> You can take it for granted. <S> The pinout I have used up to now for all LCDs with HD44780 <S> I've used so far (as far as I can remember) was like this <S> snip <S> But is this a standard? <S> A de facto standard; there can be variations, but again, market forces have consolidated that 14/16-pin straight header pinout. <S> There is also the dual-inline pinout. <S> Can I assume that this uses the same pins for the same functions? <S> And, if not, is it safe to just try it, or could things go wrong? <S> If it could go wrong, is there a way to judge by the PCB traces which lines would probably be data?
<S> Based on the looks, yes, go ahead, it should work. <S> The rest look like data lines into the chip-on-board blob. <S> They are all thin, with no caps or resistors on them.
You already verified the power pins, and the backlight pins. Yes, there are other display chips, but they are few and far between. That made me think that this might be a more or less standard module. You can assume anything you want, but in the end the proof is in the pudding, so the answer to all your questions is, basically, "Hack at it until you get it right."
Sublime Text editor to develop ARM-based software Until this moment I used Atmel Studio to develop software for AVR-based devices. Now I'd like to move to ARM Cortex-M. Additionally I'd like to try codesign and version control (I'm still not experienced in those useful things yet). I know that Atmel Studio has a plugin to work with GitHub repositories. But I have just become acquainted with the Sublime Text 2 editor, which is very nice to work with. It is also able to work with GitHub (as I have read on the Internet; I just haven't tried it myself), but I'm not sure that Sublime Text will be handy for compiling GCC and ASM sources to HEX files. Has anyone tried to use Sublime Text 2 as a source editing tool for such projects? Or will traditional IDEs be much easier to work with? <Q> Yes, you can use SublimeText to edit source code, and not just for ARM, but for anything; and many people do. <S> It's a very popular source code editor. <S> The tricky part comes when you want to do more than just edit. <S> Not least of those functions are: compiling the source into target binary files (usually ELF files), and <S> uploading the compiled firmware into the chip. How you would do that is somewhat dependent on your host OS and what scripting tools you have available for doing the tasks. <S> In general the compiler will be GCC, which is freely available for all the main operating systems, so <S> if you know the correct flags to use when compiling source for your specific target, then scripting the compilation shouldn't be too much of a problem. <S> When it comes to uploading the code, though, that all depends on how you do the uploading. <S> Do you use a hardware programmer? <S> Does your target chip have a bootloader installed? <S> For either of those, what tools are available that are scripting friendly that you can use to add functionality to SublimeText?
<S> So unless you can find a resource online where someone has actually done just what you want to do, for the same target chip or chip family as you are using, it's going to be a lot of work to get going. <S> You'd be better off starting with a traditional IDE and using that as a template (investigate how it compiles and uploads) and migrating it to SublimeText, or operating a hybrid environment where you do the editing in SublimeText but use the IDE to compile and upload the code. <A> As others have said, Sublime is an editor (a very powerful one), not an IDE, so using Sublime for debugging is cumbersome, though not impossible (see footnote at bottom). <S> I do a lot of embedded programming though, so describing my workflow may be helpful. <S> All of my coding is done in ST2 with a few key packages: <S> Package Control: <S> no description needed, I hope. <S> SublimeTODO: <S> for tracking TODOs in my projects. BracketHighlighter: for aiding in visualizing scope in C. SublimeClang: by far the most crucial package for me. <S> This excellent plugin provides real-time syntax checking, code verification, and definition/implementation finding. <S> In fact, I rely on this so heavily that since it is technically for ST2 only, it has prevented me from moving to ST3 until I can try to port it. <S> I also make use of custom snippets for code formatting. <S> Once my coding is ready to be tested on the hardware, I'll use an IDE for debugging. <S> Any code changes I need to make are still done in Sublime, however. <S> Some people may find that awkward, but I've grown used to it. <S> As for IDEs, many of the IC manufacturers provide their own IDEs (TI, Freescale, Microchip, etc.), however there are some good & free ARM-centric IDEs out there now. <S> Em::Blocks is the one I've used the most. <S> It's well maintained and a solid application. <S> CooCox is also quite good.
<S> Footnote: The same developer who created SublimeClang also created a GDB plugin ( SublimeGDB ); however, for debugging I think a dedicated IDE with memory views, variable watches, and expressions is hard to beat. <A> Since this question was posted, more than 2 years have passed (it is now Jun 2018), and a new full-featured editor has appeared on the scene: https://atom.io/packages/language-arm Atom, among other things, supports Git version control out-of-the-box.
A traditional IDE provides far more than just editing facilities, so you would have to either implement, or find someone on the internet who has already implemented, the functions you require for working with ARM.
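As a concrete starting point for the "hybrid" route, a Sublime Text build system that shells out to a Makefile might look like the fragment below. This is a sketch only: the Makefile targets `all` and `flash` are assumed to exist in your project, and the `file_regex` matches GCC-style diagnostics so build errors become clickable.

```json
{
    "cmd": ["make", "all"],
    "working_dir": "${project_path:${folder}}",
    "file_regex": "^(..[^:]*):([0-9]+):?([0-9]+)?:? (.*)$",
    "variants": [
        { "name": "Flash", "cmd": ["make", "flash"] }
    ]
}
```

Saved as a `.sublime-build` file, this keeps the toolchain-specific details (compiler flags, objcopy to HEX, the upload tool) in the Makefile, where they are editor-independent.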
ESD protection for pH measurement electrode input I want to build a pH meter which uses a normal glass electrode. Such an electrode behaves as a pH-dependent voltage source (< ±500 mV) with very high internal resistance (10 MΩ - 1 GΩ, depending on the electrode). As a consequence, the voltage measurement needs to be performed with very low current. Further, I read that current flow can have a significant impact on the lifespan of these electrodes, so keeping it as low as possible is crucial. I looked around and found a few promising amplifiers with input currents in the femtoampere range - e.g. the INA116 , LMP7721 , and LMC6001 . However, I'm wondering now how to protect these amplifiers against ESD, since the device will have to pass standard EMC tests for household appliances. Some of the amplifiers have limited protection built in, but as far as I understand, this is not enough to protect against the tests that are performed on external device connectors. Adding diodes to the power rails or a TVS diode seems to be out of the question due to the substantial added leakage of even low-leakage types. However, since the voltage source I want to measure has a very high resistance anyway, would the amplifier be adequately protected by a very high resistance (e.g. 10 MΩ) series resistor? Or do I have to use a connector / electrode design which mechanically prevents direct contact? <Q> I don't know about pH sensors. <S> I'd start by "cheating" and seeing how other people have done it. <S> If you do end up needing a low-leakage diode then you might want to look into using a small transistor. <S> The c-b diode (open emitter) is low leakage (1-10 pA) but slow (100's of ns). <S> The c+b - e (diode-connected transistor, c/b shorted) is faster, but has a lower breakdown voltage. <S> I've measured a few 2N3904/6's, <S> and Bob Pease recommends the 2N930, 2N3707 and 2N4250. <S> But you won't find these leakage numbers on the spec sheet. <S> (For B.
Pease references, search the web for B. Pease, "Bounding and Clamping".) <A> I've done some low-leakage ESD protection recently. <S> The scheme I used was fairly simple. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> It's hard to show in CircuitLab, but there are only 3 components here: <S> 2 0603ESDA bidirectional clamps, and 1 SMAJ14CA bidirectional TVS diode. <S> The TVS has too high a leakage current on its own. <S> The bidirectional clamps have a leakage current of less than 100 pA typical. <S> They keep the TVS out of the circuit until they clamp, and then the TVS takes over. <S> It satisfied IEC 61000-4-2 Level 4, criteria B. <A> You can use a simple RC network: <S> a 10 meg resistor and a 100 pF capacitor. <S> The cap must be low leakage; a C0G / 100 V ceramic will do.
With the right current limiting, the op-amp's own diodes will protect it.
Does ferric chloride etchant "go bad" after some period of time? I have some old circuit board etching solution (ferric chloride) that is unused but a few years old and has not been in a temperature controlled environment. It has been sitting, all but forgotten, on a shelf in my garage. I'd like to etch some small boards this weekend, but would rather not ruin them with old chemicals. Should I just go ahead and use it or buy some new etchant? <Q> It was stored in my house though, not the garage. <A> I have a really old bottle of FeCl as well, and wondered if it's still good. <S> I was able to find this post by Mike Blue on a forum : <S> Iron containing alloys consume the available HCl in ferric chloride solution. <S> The HCl is the major active etching ingredient. <S> The generally accepted shelf life of ferric chloride solution is approximately six months. <S> Adding HCl can restore the cutting ability of the FeCl3. <S> (Emphasis mine.) <S> Since I don't want to mess with obtaining or working with hydrochloric acid, I think I'll just get a new bottle next time I need to etch a board (which may never happen, given how easy it is to get prototype boards from the likes of OSH Park and others). <A> Calibrate the time of exposure to the solution; it may not be the same as new, but the solution also sits on a shelf before you buy it. <S> It may change from some sort of reaction with the bottle, and from atmosphere- and light-exposure-driven reactions. <S> I have used solutions that were hyper old, but the other guy says 6 months. <S> If I threw away everything 6 months old I would have a clean house and no materials to dink with.
I have successfully used ferric chloride from a ten year old bottle.
What is this component that is found in lots of phones? I was taking apart an old phone (an old LG GM360 ). It was always a terrible phone. Anyway, I found one of these inside when I was tearing it down: I have also found some of these in the back of my current phone (Galaxy S4 - i9505), where there are several of them. I do not think they are antennas, as according to the instruction manual the antenna is at the top right of the device (as you look down at it from the back), and yet these things are not really in that position. They have holes cut out for them in the case, so only the plastic, black part is visible. What are these things? EDIT: It turns out they are antenna connectors. Is it possible to actually connect a commonly available antenna to them? What connector would I need for this type? <Q> This is some sort of RF test connector with a built-in switch. <S> Possibly an MS-156 or MS-147. <S> There is a trace going under the connector from the bottom, and another one from the top. <S> One side goes to the radio in the phone, the other goes to the antenna. <S> Generally the test connector is installed so that when something is plugged in, the antenna gets disconnected and the external connector is connected to the radio in its place. <S> This allows easy testing of the radio during assembly to ensure that everything works correctly. <S> This is necessary because if the antenna is not bypassed, the whole phone would need to be placed in a Faraday cage to isolate it. <S> My phone (a Galaxy S3) has three of them under small round stickers. <S> They're not really designed for use with external antennas, but that doesn't mean it's not possible, just not convenient. <S> Generally they're designed to work with pogo pins in an automated tester, <S> so it may be very difficult to find the correct adapter. <S> And then you would need to know what frequencies the radio needs to use, and find an antenna that will work across those frequencies.
<S> Modern smart phones have radios for GPS, Bluetooth, and Wifi in addition to the cell network radios LTE, GSM, Edge, 3G, etc. <S> These radios operate in different bands and in some cases share the same antenna. <S> There may also be complex RF switching to select the correct antenna and/or switch antennas on the fly, complicating the process of properly adding an external antenna. <S> You may need more than one antenna to get everything to work correctly. <S> Samsung does an excellent job integrating everything, though, so they are not very obvious. <S> Looks like there are at least 4 antennas in the back cover - two on the top right, one on the top left, and one at the bottom. <S> Then there is another one that curves around the inside of the top right corner. <S> Then there is at least one more down at the bottom with the speaker. <A> If I'm not mistaken, that's an 'MS147' RF connector. <S> In its normal state it acts as a pass-through on the PCB and connects the phone's RF stage to the built-in antenna. <S> But when you plug a mating connector in, the pass-through is broken and connects to your external antenna instead (or whatever else is on the end of your cable). <A> They are often used for testing the boards before the antenna in the plastic back-shell is snapped on. <S> If you know what you are doing, you can use them to build a phone into a metal display and still have reception through external antennae. <S> Edit1 <S> : As I said above: Yes you can use them, IF you know what you are doing, what with impedance, wavelength, antenna design, and such. <S> They are probably {EDIT2:} NOT {/EDIT2} SMA connectors, though I am not sure.
They are auxiliary antenna connectors for every RF device the designer thought to put one in for. From looking at the images of the S4 in ifixit, there is definitely more than one antenna. You can see two of them after taking the battery cover off; they are under the two small plastic covers on the sides at the top.
When does the AA battery voltage drop? I have a device which is running on two AA Duracell batteries. The batteries (brand new) together are producing 3.2 volts (1.62 volts individually), and the device is estimated to run for at least 2 years. The device is always running and consumes 300 micro amps, which isn't a lot. Could someone tell me how long the batteries can last on 3.2 volts (1.62 volts individually) before dropping to 3.0 volts (1.5 volts), in years/months/weeks/days/hours/minutes? So basically, how long until the batteries' voltage begins to drop? <Q> I don't have good news for you. <S> Below is the discharge rate for Duracell batteries done by some company. <S> Look at the DC label; this is a Duracell coppertop battery. <S> The complete test can be found here: link . <S> There was no µA test (it would take too long), but I guess the voltage will drop below 1.5 V after about 1-2% capacity discharge. <S> There will also be battery self-discharge (very slow). <S> You can test this; it's not that long. <S> I would suggest using a different kind of battery. <S> Alkaline with a 3.0 V requirement is just the wrong battery for this application. <S> You can also consider a different battery chemistry . <S> If you use some lithium AA battery, for example the Energizer L91 - they have much more energy "available" before the voltage drops below 1.5 V, but be careful - <S> they also have a higher initial voltage (about 1.7 V) and they are expensive ( <S> in my country they cost 2-4x more than alkaline Energizer or Duracell). <S> Image from Energizer Ultimate Lithium L91 datasheet: link <S> However if you want to pull 0.3 mA (300 µA) from the battery for two years (over 17000 hours) - you need more than 5 Ah (5000 mAh) before the voltage drops below 1.5 V. <S> This is probably too much for any AA battery available on the market. <S> There are also nickel-zinc AA batteries ; they have a nominal voltage of 1.65 V, but they have less capacity (50% less than the Energizer L91). 
<S> Another idea might be three batteries with a low-dropout linear regulator . <S> Three AA batteries in series will provide more than 3.0 V until they are completely empty, but a linear regulator may be necessary for some devices - three new alkalines may have 4.95 V initially. <A> The batteries (brand new) together is producing 3.2 volts (1.62 volts individually), which is estimated to run for at least 2 years. <S> The device is always running and consumes 300 micro amp which isn't a lot. <S> Your math is off: a high quality AA cell has about 3000mAh, which gives you about 10,000 hours run time, or a bit more than one year. <S> But the discharge voltage will quickly drop below 3.0V - the end voltage <S> is about 0.9V/cell. <S> I would expect the drop from 3.2 to 3.0V to occur within one or at most two weeks. <A> The load the battery sees depends on the regulator used, unless your load is a plain resistor. <S> To calculate battery life you must follow the manufacturer's datasheet: with a linear regulator the battery sees roughly constant current, while with a switching regulator it sees roughly constant power. <S> To follow either curve, put a test load on the regulator's output and read the voltage during the test. <S> A single measurement gives <S> varying results, <S> so keep at least 20 measurements in a buffer and compute a moving average (shift <S> the oldest out, push the newest in, divide by 20). If your design is meant to run for years, the time between measurements can be between 30 minutes and one hour. <S> This way the measurements give good, stable results; if we are using only DC current in the circuit, the average and RMS are the same. If you are measuring voltage with a microcontroller <S> you must use a moving average to get a good reading, otherwise you have noise. 
<S> For the chart, here is the polynomial that I have developed: <S> Voltage(t) = <S> [ -(60.10778E-2 · CRNT · t)⁵ <S> + <S> (84.63365E-2 · CRNT · t)⁴ - <S> (10.91466E-1 <S> · CRNT · t)³ <S> + <S> (12.58045E-1 · CRNT · t)² - <S> (10.32172E-1 · CRNT · t) <S> + 1.600119 ] <S> · n <S> where CRNT is the current in mA, t is the time in hours, <S> and n is the number of batteries in series. <S> Generally an electrical circuit contains different currents; if the design relies on the average only, there can be a big discrepancy between reality and measurement.
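The buffered moving average the answer describes (keep the last 20 readings, shift the oldest out, push the newest in, divide by the count) can be sketched as follows. This is a minimal illustration, not tied to any particular microcontroller; the class name and the sample readings are made up for the example.

```python
from collections import deque

class MovingAverage:
    """Fixed-window moving average, as suggested for smoothing noisy
    battery-voltage readings: the deque drops the oldest sample once
    the window is full, and each update returns the current mean."""
    def __init__(self, n=20):
        self.buf = deque(maxlen=n)

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

ma = MovingAverage(n=20)
smoothed = 0.0
for reading in [3.20, 3.18, 3.22, 3.19, 3.21]:  # illustrative ADC readings, volts
    smoothed = ma.update(reading)
print(round(smoothed, 3))  # 3.2 -- mean of the five samples so far
```

With measurements taken every 30 to 60 minutes, as the answer suggests, a 20-sample window averages over roughly half a day to a day of readings.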
My guess is that the time at 0.3 mA (300 µA) before dropping to 1.5 V will be somewhere between 50 and 150 hours.
Pot and Knob shafts, Ds and Flats Is it standard in D-shaft, right-angle potentiometers for one endpoint to point the flat face of the shaft to the right of noon, sweep clockwise, and have the other endpoint to the left of noon? And for the corresponding knobs to align the flat face of the shaft with the indicator line? I'm looking for the right combination of knob and potentiometer for my application. From the user perspective I want a front-panel knob that indicates low at 7:30, mid at noon, and high at 4:30. The board is perpendicular to the front-panel so the pot needs pins perpendicular to the shaft. Before noticing the note "shaft shown in CCW position", I bought this pot ( https://www.bourns.com/data/global/pdfs/PTV09.pdf ) and this knob ( http://www.newark.com/pdfs/datasheets/spc/TA-790.pdf ). They look and feel great except for this one problem: the angles are wrong by 180 degrees. I looked around a bit more for alternatives: Either a knob with the indicator line opposite the flatted side of the shaft, or a potentiometer with opposite endpoints (and keep in mind the knob needs to point parallel to the board, not orthogonal to it). So far this seems impossible. Everyone I've looked at seems to be in agreement that this is the standard. Some alternatives I would like to avoid: mounting the pot on the other side of the board (there's no space) using a round shaft with a set-screw knob (it needs to be fairly precise and not change with time) panel-mount pots The best alternative I can think of is to use a knurled knob/shaft but I really like the D shaft because it's so hard to screw up alignment. <Q> It does look like some potentiometers are offered with options of where the full CCW position is. <S> But anything other than the standard (30deg right of up, or about 1:30) is a special order part, or <S> at least I couldn't find any in stock anywhere. <S> Here's an example: http://www.mouser.com/ds/2/414/p160-20748.pdf <A> We use spendy Selco collet knobs. 
<S> You can set them wherever you want. <S> (Well, and then you need another ~$1 per instrument to include the (cheap) Selco spanner tool for your customer.) <S> ($1 is cheap compared to the cost of the knob.) <A> Do you have any possibility to think about orienting your circuit board parallel to the back of the panel? <S> Or at least making a separate parallel circuit board to accommodate the pot? <S> If you could engineer either of those concepts into your packaging you could easily use the straight up style of the Bourns pot (PTV09A-1) and orient it as needed to make your chosen knobs work. <S> On the other hand if you must stay with the right angle
You could also use a knob with a movable pointer on the face.
How should I test a coaxial cable? I'm a ham radio operator, but haven't been very active on HF bands for a while. I have an inverted-V antenna and a long run (approximately 25m) of coax that runs from my upstairs "shack" down the wall, through a conduit that I buried in a trench and up near a tall oak tree in which the antenna is mounted. Other stations don't seem to be able to hear me very well, and so I am suspecting that the coaxial cable may be waterlogged. Other than the obvious tests with an ohm-meter, how can I test the coax to see if it is still OK? I don't have an antenna analyzer, but do have an antenna tuner and a dummy load. <Q> You can measure a lot of things (impedance, velocity factor, distance to short-circuit, distance to open-circuit, ...) with a TDR (Time Domain Reflectometer) as shown and explained in multiple tutorials. <S> For example: Cheap and simple TDR using an oscilloscope and 74AC14 Schmitt <S> Trigger Inverter; How to measure coax velocity factor VF and impedance Z; "TDR" or Time Domain Reflectometer, build and use this circuit; Determining Velocity Factor of coaxial cable; Understanding DTF or Distance To Fault, using a TDR; Determining Coax Impedance with a TDR <A> Have you thought that mounting your antenna in a tree may be 90% of your problem? <S> Most hams do everything in the realm of feasibility to get their antennas mounted on towers that are higher than surrounding buildings, vegetation and trees. <S> Consider this. <S> Try seeing how bad a GPS receiver works in a woods or forest with a tree canopy overhead. <S> First hand experience will show that it works pretty badly. <A> Using a transmitter, you shouldn't be able to transfer any power into it. <S> Anything that looks like a load is actually loss ( <S> heating the coax). <S> There are a number of downsides to this test. <S> The transmitter isn't necessarily going to like operating into this. 
<S> Not all transmitters have an adjustable output network ("tune" and "load"--do any of them have this anymore?). <S> And, of course, you shouldn't be radiating any of this while testing. <S> A better instrument would be a noise bridge. <S> This lets you measure the impedance looking into the feedline. <S> Again, it shouldn't look like a resistor. <S> And again, I hope I haven't dated myself with something else you can't buy. <S> But it's an ingenious little gadget.
If you disconnected the antenna, and shorted the end of the coax (or left it open-circuit), you should be able to verify that from the other end.
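For the TDR approach mentioned above, the distance to a fault (or to the shorted/open far end) follows from the round-trip time of the reflected step and the cable's velocity factor. A minimal sketch; the 250 ns round-trip time and the 0.66 velocity factor (typical for solid-polyethylene coax) are illustrative values, not measurements from the question:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_to_fault_m(round_trip_s: float, velocity_factor: float) -> float:
    """One-way distance = propagation speed in the cable * round-trip time / 2."""
    return C * velocity_factor * round_trip_s / 2

# A reflection seen 250 ns after the outgoing step, in VF ~0.66 coax:
d = distance_to_fault_m(250e-9, 0.66)
print(round(d, 1))  # ~24.7 m -- close to the ~25 m run in the question
```

A discontinuity partway along the run (e.g. water ingress changing the impedance) shows up as a reflection earlier than the far-end one, and this same formula locates it.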
Is it detrimental to an SMPS to turn it on and off frequently? I've got a 5V SMPS powering two separate servos. The SMPS is a TPS562200. The application is an autonomous sail boat (3 feet long). One's a winch, the other a rudder. I have code that scans various sensors and when appropriate makes adjustments, anywhere between every 2 seconds and every 30 seconds (depending on how stable things are). The two servos never need to be active at the same time. To not waste power on the switcher's quiescent current, I pull its EN line low whenever not in use. I'm looking at two possible options: disable the switcher in between each servo access be a little smarter about it, and keep it enabled during the autopilot cycle From a power savings standpoint, turning off the switcher more than makes up for the energy lost in recharging the caps on the servos, but does this much cycling do any damage to the switcher / associated circuitry? <Q> EDIT: <S> Oh damn! <S> I see you have NFETs as low side switches already... <S> I guess my post is not so useful to you. <S> As tcrosley mentions the whole operation of the converter is "ON/OFF" constantly anyway, so if you force it off for longer it will be okay. <S> MY SUGGESTION WAS: <S> I highly suggest you just use a spare digital IO line for each, and some P Channel MOSFETs as high side load switches. <S> Let the switching converter idle with no load, maybe keeping a nice fat 1000-2000uF capacitor charged nearby, before the P FETs. <S> Then when it comes time to turn on the rudder/winch servos, in firmware you should first turn on the high side P FETs and wait 100ms for the power supply to settle just in case, and then continue with your signals/movement. <S> I have made a fancy schematic diagram for you to explain what I mean. <A> In fact, internally, that is how a buck converter works: <S> The switch between the supply and the rest of the TPS562200 buck converter is being turned on and off 650,000 times a second. 
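The enable/settle/move/disable sequence described above can be sketched as firmware pseudologic. The GPIO helpers here are hypothetical stand-ins (a real build would use the boat MCU's HAL), and the 100 ms settle time is the figure suggested in the answer:

```python
import time

# Hypothetical GPIO layer -- a dict standing in for real pin registers.
pins = {}
def gpio_write(pin, level):
    pins[pin] = level

SMPS_EN = "EN"  # the TPS562200 enable line

def with_servo_power(move_fn, settle_s=0.1):
    """Raise EN, wait for the 5V rail to settle, run the servo move,
    then drop EN again so the switcher stops drawing quiescent current."""
    gpio_write(SMPS_EN, 1)
    time.sleep(settle_s)
    try:
        move_fn()
    finally:
        gpio_write(SMPS_EN, 0)  # always re-disable, even if the move fails

with_servo_power(lambda: None, settle_s=0.0)
print(pins[SMPS_EN])  # 0 -- switcher disabled again after the move
```

Wrapping each servo access this way keeps the enable policy in one place, so switching between the "disable every time" and "keep enabled per cycle" options is a one-line change.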
<S> I have connected up the enable lead of a SMPS boost converter to a PWM output of a LCD controller, thus turning it on and off several times a second, without any issues. <A> It shouldn't hurt the switcher, but are you sure it's worth it? <S> The TPS562200 has an idle current of less than 1mA. <S> Of course the servos also draw some idle current (for example the Hitec HS-785HB sail winch servo draws 8.7mA at 6V). <S> But how much do you really save by switching them off? <S> Is your battery so small that an extra 20mAh would significantly reduce operation time? <S> When servos are not powered their holding torque is reduced (since it then relies purely on gear-train friction and magnetic cogging in the motor). <S> You may find that the sail arm and/or rudder will not hold position under heavy loading.
No, there is no problem in using the enable lead to turn the SMPS on and off as often as you want to.
Level vs edge triggering, usefulness of level triggering Many processors / µCs / dev-platforms (BeagleBoard, Arduino,...) use interrupts. These can be triggered by the detection of: HIGH level RISING edge CHANGING level (either FALLING or RISING edge) FALLING edge LOW level Now either of two things must be true: FALLING and LOW (/ RISING and HIGH) are virtually the same When a LOW (/HIGH) level is applied over a non-trivial time, the controller is stuck repeating the interrupt service routine over and over Both of these don't make sense to me: The first cannot be true, since it would be totally useless to make the difference in the first place then. So if the second one is true: how could this be useful? What application is there that is not better off with a combination of RISING and FALLING instead of HIGH? Research so far: This question is just a stub, so it didn't help: https://electronics.stackexchange.com/questions/92833/what-is-the-difference-between-level-triggering-and-edge-triggering This one is also not too useful as it is about when those interrupts are triggered, not the implications of the differences: What does edge triggered and level triggered mean? This one mainly elaborates on the differences in detection of the different trigger events: Why edge triggering is preferred over level triggering? <Q> As an addition to the other answers, another look from a practical perspective: <S> The level triggered interrupt is an indication that a device needs attention. <S> As long as it needs attention, the line is asserted. <S> A device may want the master to clock data out of the devices buffer. <S> It may need immediate attention to prevent buffer overflow (so using interrupt is a good choice vs polling) but it wouldn't be practical if the device has to keep switching edges while the buffer still contains data. <S> The master does its processing as fast as possible, clears the interrupt flag when ready and immediately recognizes that there is more to do. 
<S> When some particular thing happens, the device generates a pulse on the interrupt line and it's done (fire and forget). <S> The master just takes notice of it and goes on. <S> My point is, there is practical use for both. <A> An edge-sensitive interrupt only fires when it detects the appropriate edge. <S> That means only a single interrupt will happen. <S> If the interrupt is enabled AFTER the transition it will not react and the message is lost. <S> Usable when a certain event must be captured. <S> A level-sensitive interrupt will fire whenever it is enabled and the appropriate level is present. <S> Thus the request will be serviced even if the interrupt is enabled some time later. <S> Furthermore, multiple devices can be attached to a single interrupt line. <S> Usable when a state or condition is important. <S> Your nr. 2 statement is correct. <S> Either the interrupt must be disabled or the cause of the level serviced. <S> In other words, you use edge sensitive when you are waiting for something and level sensitive when something might be already waiting for you. <A> Level triggered (high or low) can allow the source to say "nevermind" or to keep the trigger active until the ISR gets around to it. <S> Interrupt latency is not guaranteed on a single core with multiple triggers, though it's usually pretty fast. <S> Generally, the signal for a level-triggered interrupt is itself edge-triggered <S> and you have to clear it in the ISR <S> or else you'll come right back into it again. <S> As Ignacio said, level triggered can also do something continuously while active, though you should write your software to not get stuck in an "interrupt loop". <S> Not getting to your main code can be somewhat difficult to debug. <S> Edge triggered is good for things that happen once on some event. <S> If the event happens again, then your response will happen again, so you'll need to be careful about repeated events like switch bounce. 
<A> In this case the ISR would be responsible for energizing and deenergizing a solenoid only once; the fact that the interrupt trigger is still present would be responsible for extending the ring duration.
Edge triggered interrupt is an event notification. Level triggering is useful when a continuous action requires firing off repeated events, e.g. ringing a bell while an alarm is active.
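The distinction drawn in these answers can be made concrete by simulating a sampled interrupt line. This is a simplified model (real hardware latches asynchronously, not per sample): an edge-triggered detector fires once per transition, while a level-triggered one keeps firing for as long as the line is held asserted.

```python
def edge_events(samples, rising=True):
    """Edge-triggered: fire once per matching transition between samples."""
    events = []
    for i in range(1, len(samples)):
        if rising and samples[i - 1] == 0 and samples[i] == 1:
            events.append(i)
        if not rising and samples[i - 1] == 1 and samples[i] == 0:
            events.append(i)
    return events

def level_events(samples, level=1):
    """Level-triggered: fire on every sample where the line is asserted."""
    return [i for i, s in enumerate(samples) if s == level]

line = [0, 0, 1, 1, 1, 0, 1, 0]
print(edge_events(line))   # [2, 6]       -- two rising edges, one event each
print(level_events(line))  # [2, 3, 4, 6] -- keeps re-firing while held high
```

The repeated indices 2-4 in the level-triggered output are exactly the "stuck repeating the ISR" behavior the question asks about, and why the ISR must clear or disable the source.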
Why do we need 3 phase power supply? Why do we need 3 phase?For the reason that voltage will never be zero in 3 phase? OR we need high voltage? <Q> 3-phase delivers 3x the power with 1.5x <S> the copper (3 wires instead of 2). <S> This doubles the usefulness of each pound of copper (aluminium etc) <S> Over long distances, this is a significant cost saving. <S> 2) <S> Three phase motors run smoothly with no additional complexity to define their running direction. <S> Power delivery is approximately constant as the rotor follows the rotating field (with some slip, in an induction motor) with no torque variation or vibration. <S> Reversing can be as simple as interchanging any two phases. <S> The advantage of smoothness also applies to the 3 phase generators : they absorb power from e.g. the turbine smoothly : a large single phase generator would probably be shaken apart from the torque variations. <A> There are several reasons 3-phase is desirable over 1-phase power. <S> One advantage of 3-ph over 1-ph has to do with instantaneous power (i.e. power generated or consumed at any instant in time within the power cycle). <S> For example, consider a heating element (power resistor) in a 1-phase circuit. <S> Voltage and current are in phase. <S> Both cross zero twice during a cycle as they go positive and then negative. <S> Their product is power which is a sinusoid at 2x the fundamental frequency (multiply two sine waves <S> and you get a new sine wave with twice the frequency). <S> The power dissipated by the heating element has a sinusoidal waveshape that sits above the zero line (because the two V & I sinusoids, when multiplied together, always give a positive value). <S> The power sinusoid also reaches zero twice during the fundamental cycle at the same time V or I cross zero. <S> The resistor doesn't produce heat (consume power) at these zero-crossing instants in time (resistor remains hot due to its thermal mass). 
<S> Now replace the resistor with a 1-ph induction motor. <S> For similar reasons, despite V & I being out of phase, there are times in its power cycle when the motor doesn't produce mechanical power (it remains spinning due to its own and its load's inertia). <S> In a 3-ph system, the phases are staggered by 120 electrical degrees. <S> If a 3-ph heating element is connected (Y or delta; doesn't matter for the purpose of this discussion), each individual resistor "sees" a zero crossing but collectively the three are producing heat at all times. <S> There is no instant in time when heat is not being produced. <S> Similarly, with a running 3-ph motor, there is no instant in time when it isn't producing mechanical power. <S> The result is a simplified motor (no starting winding needed as is the case with a 1-phase motor), smaller frame for same horse-power because instantaneous power is never zero. <S> Unlike the 1-ph motor, the 3-ph motor doesn't need the larger bulk to "coast through" a zero crossing. <A> Total power in three phase system is almost constant and so creates less vibration. <S> Total power in single phase system is pulsating and so creates more vibrations. <A> 3-phase is easy to generate, easy to transmit, and easy to manipulate in many different ways. <S> In a delta configuration it makes for very easy long distance transmission, since there is no "ground" needed - the ground equivalent functionality for one phase is provided by the other two phases. <S> In a star configuration it provides three individual single-phase power circuits for feeding to consumers. <S> Switching between delta and star arrangements is as simple as using a transformer with the primary and secondary windings in the right arrangement. <S> It's has to be by far the most flexible of all the arrangements you can come up with. <A> The most power efficient <S> large AC generators and AC motors are three phase (or some multiple of 3). 
<S> Another reason is that in 3-phase transformers the core is much less liable to saturate for a given size - this means better power transmission efficiency.
Comparing 3-phase with single phase transmission, 3 phase has a couple of significant advantages: 1) More efficient use of conductors : given the same peak voltage between conductors and same current in each conductor,
Very wide range high voltage measurement with optical isolation I have a voltage source I'd like to measure, it is from 0V to 1kV, any value in that range. The accuracy isn't that important, but measuring with an accuracy with 1-5% would be perfect. This has to be then optically isolated, so I could read the voltage on a microcontroller. The second part is not hard at all, when using regular optoisolators. The problem is scaling the voltage down so that I can light up the diode of the optoisolator, and keeping the results semi-linear or semi-logarithmic so I could calculate the output voltage. Is using voltage dividers with a diode the only available option? With such a wide input voltage range getting good results with that method is not possible, because the voltage defines what resistor types and values to use. Any advice? <Q> If you're REALLY careful with your choice of parts, you can resistively divide the 1kV input into something you can read directly. <S> It won't be isolated that way, so you'd need to figure out some kind of protection in case a resistor comes loose or you drop a wire across it or something like that. <S> Maybe a sacrificial buffer amp? <S> That might be a good idea by itself. <A> If you scale your 1kV down to something manageable, you could then use it as the input to a voltage-to-frequency converter on the HV side. <S> Then take the output of the VFC to your opto-isolator as a simple digital pulse-train. <S> On the other side of your opto, use your microcontroller to do a frequency measurement. <S> If you prefer to stay in the analog domain, then you could try a 'linear optocoupler' like a Vishay IL300. <S> Internally it has 1 emitter and 2 matched detectors, and you use the output of one of the detectors to linearise the output of the other one. <A> You have to use resistor divider to scale down, and then isolation amplifier. <S> Simple ones are available from TI. <S> Then use your MCU's ADC. 
<S> Alternatively you can use some simple I2C or SPI ADC and isolate its communication channel. <S> This way you will use standard and interchangeable parts. <S> The last option I can see is linear optocouplers. <S> Personally I really don't like them. <A> This will be crude, but you could measure the photo-current from an LED. <S> (HV->resistor->led-> PD->current.) <S> The light is fairly linear with current above 1mA or so... <S> below that you'd have to measure the LED. <S> (it's like a 3/2 power law or something.) <S> Here's a plot, https://www.dropbox.com/s/atyo4uvsb09fgd7/LED-PD.BMP?dl=0 <S> But it would be easy to do a few calibration points (with low voltage and a smaller resistor). <S> For the current measurement my DMM has a µA current range whose smallest digit is 10 nA. <A> It sounds like you want a linear optocoupler. <S> Given your accuracy requirements the following circuit lifted from http://www.avagotech.com/docs/5954-8430E should be a good tradeoff (there are more sophisticated solutions, with more complex & costly schematics) <S> Claimed performance is as follows: <S> Typical Performance of the Wide Bandwidth AC Amplifier: <S> • 2% linearity over 1 V p-p dynamic range <S> • Unity voltage gain <S> • 10 MHz bandwidth <S> • Gain drift: –0.6%/°C <S> • Common mode rejection: 22 dB at 1 MHz <S> • 3000 V DC insulation <S> You still need a voltage divider, power supplies etc. <S> to complete that. <S> Regarding the divider, someone said above you need to be really careful (TM). <S> Well, here's a part suggestion from R.B. Northrop's Introduction to Instrumentation and Measurements [3 ed.] <S> Such a probe makes an external voltage divider with the DC voltmeter’s input resistance. <S> For example, the Fluke Model 80K-40 high-voltage probe is designed to be used with any DC voltmeter having an input resistance of 10 MΩ. <S> It makes a 1000:1 voltage divider, so 1 kV applied to the probe gives 1 V at the meter. 
<S> The probe and meter present a 1000 MΩ load to the high-voltage CUT [Circuit under test]; hence, the probe and meter draw 1 <S> μA/kV from the CUT. <S> So yeah, <S> 1000:1 dividers for 1kV (actually up to 40kV) are done (by Fluke); datasheet: http://www.farnell.com/datasheets/1816372.pdf claims +/-2% accuracy for DC. <S> Even has fan/user videos: <S> https://www.youtube.com/watch?v=kXq4FCQ0C38 <S> You can actually buy much more accurate ones (up to 0.01%) from places like Ross Eng. , <S> but I suspect they cost accordingly.
An external resistive voltage divider probe is generally used to measure DC voltages from 1 to 40 kV. If you use an active buffer between the source and the ADC, you can also apply some gain and offset so that the expected range to measure fills up the entire range of the ADC (with some headroom for out-of-range detection).
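The loaded-divider arithmetic behind a 1000:1 probe can be sketched as follows. The resistor values here are chosen so the numbers come out matching the figures quoted from Northrop (1000 MΩ total load, 1 µA/kV drawn, 1 V out for 1 kV in, with the meter's 10 MΩ as part of the bottom leg); they are illustrative, not Fluke's actual internals.

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def divider_output(v_in, r_top, r_bottom):
    """Output of a resistive divider: v_in * r_bottom / (r_top + r_bottom)."""
    return v_in * r_bottom / (r_top + r_bottom)

r_top = 999e6                    # high-voltage leg
r_bottom_internal = 10e6 / 9     # ~1.11 Mohm internal bottom resistor
r_meter = 10e6                   # DMM input resistance the probe assumes
r_bottom = parallel(r_bottom_internal, r_meter)  # exactly 1 Mohm effective

v_out = divider_output(1000.0, r_top, r_bottom)
print(round(v_out, 3))                      # 1.0 -- 1 kV in, 1 V at the meter
print(round((r_top + r_bottom) / 1e6))      # 1000 -- total load, Mohm
```

Note how the meter's input resistance is part of the design: with a different meter the ratio, and thus the reading, shifts, which is why such probes specify the meter they expect.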
Why is there a 0R resistor linking GND and AGND in analog voltage reference circuit? This is related to another question I've just posted ( What's the purpose of a ferrite bead inductor on this circuit? ), regarding the battery charger described in the AVR450 Application Note - Battery Charger for SLA, NiCd, NiMH and Li-Ion Batteries , which one day I hope to build. On page 40, there's a schematic showing the MCU connections (picture below). Marked in red is a 0Ω resistor that is puzzling me. I suspect that it is just a wire jumper linking AGND and GND . But I don't understand why there's a jumper there. My questions: What does the jumper represent? Why are AGND and GND separated like that? <Q> Digital circuits are noisy, but they can (mostly) handle their own noise without noticing. <S> Analog circuits notice noise a lot; in fact, they have to pass noise just like a signal because they really can't tell the difference. <S> The best way to keep digital noise out of analog circuits is to keep them separate, both physically and electrically. <S> But they have to be connected somehow in order to convert from one to the other, hence the jumper in exactly one spot , which is probably next to the converter on the physical board. <A> I'll repeat what I wrote in your previous question and expand on it a bit: R33 may have been a thought to allow for a ferrite bead in the analog ground (usually not a good idea) or it may be used as a net tie to enforce a single point connection between the analog ground nets and the ground. <S> In communication between the engineers and layout persons it's sometimes easier to give two nets that are connected together separate names and join them at a single point, either with a physical 0R (0\$\Omega\$) resistor or with something called a net tie , which appears as copper on the board but looks like a component joining two nets in the schematic. <S> Obviously the net tie costs less than a 0R resistor (shorting jumper). 
<S> and that's almost always a bad idea. <S> If the difference between the two grounds exceeds more than a few hundred mV at any time, bad things will happen. <S> It's usually best to tie them together closely and to a solid ground plane if possible. <A> The jumper represents a zero ohm resistor. <S> It is used on a PCB like a jumper. <S> If it is installed then AGND and GND are shorted together. <S> If it is not installed then they are isolated. <S> The two grounds are isolated from each other like that to deal with potential noise issues. <S> If you have problems with the MCU injecting noise into the ground (which can upset sensitive analog circuits) then you can remove the 0 Ohm resistor and isolate them. <S> This can have other problems (such as ground loops) <S> but sometimes it is necessary. <S> It is simpler from a PCB design standpoint to just short the grounds together (by installing the 0 Ohm resistor). <S> It is more complicated (and error prone) to design your board with two ground planes. <S> The jumper is there so that both configurations can be done (separate or unified ground planes).
The other idea is to try to put some impedance between analog and digital ground-
Why is an op amp's bandwidth higher at lower gains? If I build a resistor network where the op amp has a lower gain, it is able to maintain its gain for a larger bandwidth. Why? <Q> This is called constant gain-bandwidth product but it isn't true for every op amp. <S> It is only true for voltage feedback op amps which use dominant pole compensation for stability. <S> Such op amps can be approximated as a first order system since one pole dominates all others and the others can be ignored. <S> (However, this is not true of current feedback op amps since current feedback op amps do not have a constant gain-bandwidth product .) <S> A first order system has a transfer function of the form $$H(j\omega) = <S> \frac{H_0}{j\omega\tau + 1} = <S> \frac{H_0}{j\omega/\omega_c <S> + 1}$$ <S> where \$H_0\$ is the DC and passband gain, \$\tau\$ is the time constant of the dominant pole and \$\omega_c\$ is the cutoff frequency (bandwidth). <S> The gain of this system is $$|H(j\omega)| <S> = \frac{H_0}{\sqrt{(\omega/\omega_c)^2+1}}$$ <S> For \$\omega << \omega_c\$ the gain is approximately \$H_0\$ and the bandwidth does not come into play. <S> If \$\omega >> \omega_c\$ the gain-bandwidth product can be approximated as $$|H(j\omega)|\omega <S> = \frac{H_0}{\sqrt{(\omega/\omega_c)^2+1}}\omega \approx \frac{H_0}{\sqrt{(\omega/\omega_c)^2}}\omega = <S> H_0\omega_c$$ which is a constant. <S> Since it is a constant, increasing the gain requires a decrease in the bandwidth while decreasing the gain allows an increase in the bandwidth. <A> Op amps are compensated with a dominant pole. <S> That means the open loop gain rolls off at a constant 20dB/decade vs. frequency. <S> Negative feedback increases the input impedance, decreases the output impedance and increases the bandwidth. <S> Because of the single pole rolloff, the product of noise gain (or non-inverting gain) and bandwidth are constant. 
<S> Another nice feature of dominant-pole compensation is that the amplifier will be stable at any closed-loop gain. <S> So if your amplifier has a dominant pole at 10 Hz and an open loop gain of 100 dB, your gain-bandwidth product will be 1 MHz (10 × 100,000). <S> So at a gain of 1000 you will have a 1 kHz bandwidth. <A> In a first-order system, the product of the gain and bandwidth is constant. <S> So, increasing R makes the gain go up, but the bandwidth go down, in equal proportion. <S> Simple as that. <S> (Please note this is only true for a first-order system, or for a system such as a closed-loop op amp amplifier that can be well approximated as a first-order system.)
This is simply a consequence of the fact that the gain is proportional to R (typically it is some kind of gm times R) and the bandwidth is inversely proportional to R (the bandwidth is some variant of 1/RC).
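The constant gain-bandwidth trade-off described above is easy to check numerically. A minimal sketch, where the 1 MHz GBW figure comes from the 10 Hz pole / 100 dB example in the answer and the function name is our own:

```python
# Dominant-pole op amp: closed-loop bandwidth = GBW / closed-loop gain.

def bandwidth_hz(gbw_hz, closed_loop_gain):
    """-3 dB bandwidth of a first-order (dominant-pole) op amp stage."""
    return gbw_hz / closed_loop_gain

GBW = 1e6  # 10 Hz dominant pole * 100,000x (100 dB) open-loop gain

print(bandwidth_hz(GBW, 1000))  # gain of 1000 -> 1000.0 Hz (1 kHz)
print(bandwidth_hz(GBW, 10))    # gain of 10   -> 100000.0 Hz (100 kHz)
```

Halving the gain doubles the available bandwidth, exactly as the first-order model predicts.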
Industry standard ways of connecting boards / modules? There are literally thousands of different types of connectors when I'm searching on a distributor such as element14, and I'm not sure what the "standard" ways of connecting boards together are. Currently, when I need to connect two boards together, I use a 2.54mm spaced pin header and some ribbon jumper wires. While this is fine for prototyping, I want a more standard solution for connecting modules together (such as USB, D-Sub, HDMI connectors, etc). Digital Communications The protocols I normally use are I2C and SPI, so Vcc, GND, and the bus wires (SDA, SCL, MOSI, etc). For these protocols, are there any standard physical connectors which I should use when I want to connect boards together? e.g. RJ45 or something? I've always associated RJ45 with telephone comms, so I'm not sure if there are more appropriate standard connectors. Analogue Communications For example, if I want to drive a motor or fan, what type of socket should I use on my board? On computer fans, hard drives, toys, etc. I have seen them use Molex or some tiny white connector. Is this a standard way of connecting analogue inputs/outputs? AFAIK Molex is a brand of connectors, but nowadays is it just a name for the type/shape of connector? Protocol-less digital communications For example, push button switches, LED lights, etc. How should I connect these between boards? Finally, in my latest project I need to make around 20 different connections (these include both Vcc/GND lines, analogue inputs, switch inputs, as well as an I2C line and some multiplexer line-select lines) between two boards. Should I have different connectors for each of these, or is there one standard connector I can use for all the pins? The distance of communication I'm talking about is 1 - 2 meters at most, however any information regarding longer range communication is also appreciated. <Q> That is fairly standard.
<S> For higher currents, I'd use a molex MTA connector with crimp connectors. <S> I don't believe I've ever had to move a low-level analog signal between boards, so no idea what I'd do there. <S> Shielded cable, <S> but I don't know what connector I would use. <S> Perhaps a mini SMA ; I have an instrument that uses an LVDT and the connector to the LVDT signal conditioner uses SMA connectors. <S> Ribbon cable is pretty standard for digital signals and can be purchased premade off the shelf. <S> Going outside the enclosure, well, things get complicated. <S> My preference is always some form of circular connector because it's easier to drill/punch a couple of round holes than mill a rectangle. <S> What I choose really depends on how many signals are needed. <A> Back in the day when I was doing this type of thing, I used to use the D-style connectors a lot for standard digital logic signals. <S> They were cheap, came with pigtails already, and were a lot more durable than standard headers. <S> That said, there's not really a standard way to do it, because a lot depends on the electrical, environmental and mechanical needs. <S> High power or low power? <S> Do you see connecting and disconnecting it a lot? <S> Does it need to be mechanically strong or can it be fairly flimsy because there isn't a lot of vibration or movement in the chassis? <S> How many signals need to cross? <S> How high speed are the signals...will they need a ground plane through the middle of the connector, or match a specific impedance? <S> Examples off the top of my head: <S> The edge connectors that PCIE cards use (like a graphics card in your PC) are cheap, but aren't really made to connect and disconnect a ton of times, so probably not good to use in a lab environment. <S> The molex connectors are usually used for power connections because they can carry a ton of current, so they would probably be good in other high-current situations like driving motors. <S> Are you fairly space constrained? 
<S> There are a bunch of teeny, tiny connectors that cost a bundle. <S> The best story is from when I was working for a defense contractor and we were getting a bunch of Hellfire missiles back from the first Gulf War in the 90's. <S> A bunch of them came back because the connectors that were supposed to connect the missile to the helicopter were getting a ton of sand in them and failing to make contact. <S> And how were they tested? <S> The ground crew had to put each missile on the helicopter rail and slam it back HARD to see if that particular missile was going to lock into place and make contact through the connector, or if it went into the crap pile to be cleaned out later and tried again. <S> Anyway...there are no real standards. <S> Just figure out what works best for your needs. <S> Good luck!! <A> When it comes to consumer products, I found that a nice way to get some ideas is to simply disassemble existing products and see what kind of connectors are used. <S> For at least board-to-board low frequency communication and some power transfer, I've found that FPC/FFC (flat flex) connectors are fairly popular due to the very small and thin form factor. <S> The cables are off-the-shelf, with a lot of options on length, pitch and number of channels, and IMHO are very easy to assemble and disassemble. <S> Furthermore, it is possible to directly plug custom "flexible boards" into these connectors without a cable at all by building the cable directly into the flexible board, as regular off-the-shelf flat flex cables are essentially the same thing as such boards, though this is definitely not as popular as simply using an off-the-shelf cable.
For board to board digital connection within the same enclosure, whenever possible I use ribbon cable with IDC connectors.
Simplest possible way of latching the voltage peak? I have an input signal like this. I wish to display the highest voltage for a period of time with an LED. So I need to retain the highest voltage until the next new highest voltage comes. I tried a D flip flop, but being a digital component, it makes everything 0 or 1. But in my (analog) case, say, the 1st highest voltage is 5V, the next could be 7V, and I need to preserve the values! In addition, if the next peak is 4V (<5V), I need to latch 4V. So what is the simplest possible analog way of latching the highest voltage until a new peak (possibly lower or higher) comes? <Q> I think the type of circuit that you are looking for is called a peak detector . <S> The simplest peak detector consists of a diode and a capacitor. <S> The diode prevents the capacitor from discharging, so it retains the max voltage less the diode drop. <S> The diode drop is a shortcoming of a simple circuit like this. <S> It doesn't "see" voltages that are less than the diode drop. <S> (That page where this snippet came from is worth skimming through.) <S> An op amp peak detector is more precise. <S> It compensates for the diode voltage drop. <S> ( source ) <S> There is a fair amount of information about peak detectors: EEVblog video , Planet Analog article . <A> Another way of handling this is to use a simple microcontroller, and feed the input signal into its ADC (analog to digital converter). <S> With this arrangement you can easily determine the peaks in firmware. <S> With the other schemes using the capacitor, there is no method shown to discharge the capacitor in order to measure the next peak (your example of 7V -> 5V -> 4V). <S> With firmware, this is trivial. <S> Another advantage of using a microcontroller is the low parts count -- just the microcontroller, a decoupling capacitor, and a pull-up resistor on the reset line. <S> One microcontroller to consider is the PIC16F1786 which has an 11 channel, 12 bit ADC.
<S> It is available in a 28-pin DIP package from Digi-Key for $2.54. <S> It has an internal oscillator so no external crystal is needed. <S> Here is a suggested circuit: <S> You could use three of the microcontroller's outputs (or any number, e.g. 8 if you want) to directly drive a bank of LEDs without any additional circuitry (except for the LED resistors), since the I/O pins can source or sink 25 mA. <S> It has an internal 4.096V reference, so dividing the 8V input signal in half is ideal. <S> The combination of a 4.096V reference and 12-bit ADC means each count corresponds to exactly 1 mV. <S> The program (in C) would be quite simple, probably not even a full page of code. <A> One sort of hybrid analog-digital way would be to have a series of comparators fed from the input signal and a resistor ladder (a DIY flash ADC converter) and some SR flip-flops to hold the states. <S> For example: simulate this circuit – Schematic created using CircuitLab <S> The LM339s have open-collector outputs which allows for easy level shifting down to 5V with just the pullup resistor. <S> When any comparator output goes low, the latch is set and the /Q output goes low, turning the respective LED on. <S> The D inputs should be grounded. <S> You can either ground the clock inputs or use the clock inputs as an edge-sensitive reset input (rather than the level-sensitive reset as shown). <S> You can expand this to more outputs as required, though at some point (even though you specified an analog approach) you should consider just doing all this in a micro, which would be a rather simpler approach (only one IC package and perhaps half a dozen resistors for many LEDs). <S> You would have to write a bit of code though (it's a pretty easy starter project for a micro, so I'd recommend that approach if you're so inclined). <A> In fact, googling for "inverting peak detector" yielded this picture:
If it's a fairly periodic signal, you should be able to use an inverter circuit that feeds into a peak detector circuit using op-amps. You will need to use a voltage divider to divide the maximum input of 8V to fit the maximum voltage of the ADC.
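To show how trivial the firmware side really is, here is a sketch of the peak-latching logic the microcontroller answer describes, operating on a list of ADC samples; the function name and the sample values are invented for the example:

```python
def latched_peaks(samples):
    """Return the displayed (latched) value after each sample: the most
    recently seen local maximum of the input, so a lower peak (e.g. the
    question's 7V -> 4V case) replaces a higher one."""
    held = samples[0]
    out = [held]
    # Scan each interior sample with its neighbors to find local maxima.
    for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
        if prev < cur >= nxt:
            held = cur          # latch the new peak, higher or lower
        out.append(held)
    out.append(held)            # last sample: nothing new to latch
    return out

# 5V peak, then 7V, then a lower 4V peak -- the 4V still gets latched:
print(latched_peaks([0, 5, 3, 7, 2, 4, 1]))  # [0, 5, 5, 7, 7, 4, 4]
```

On real hardware the loop body would run once per ADC conversion, and `held` would be mapped onto the LED bank; the analog circuits in the answers above cannot latch the lower 4V peak without a deliberate discharge path.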
Assembler coding for ARM (Cortex-M0 and M3): is it possible/practical? Unfortunately there are no questions on Stack regarding ARM and assembler at all. My concern is time-critical devices. Let's take for example one of my AVR-based devices (program compiled with GCC) which should do something upon an INT0 interrupt. It runs off the 8 MHz internal oscillator (one machine cycle is 125 ns), but it takes up to 5 microseconds to react to the interrupt. After investigating the code I came to the conclusion that at the beginning of the interrupt service routine the processor does a lot of work to save its state, which is almost uncontrollable from high-level programming languages (such as C). If I used assembler I could, for example, toggle a pin at the very beginning and do the rest of the necessary calculations after that. Or I could have much more control over the registers' use and therefore spend much less time saving those registers. If I go to ARM (which I'm planning to do soon) I will have a much faster processor core with many more registers and more memory, which looks promising. But will I ever be able to have enough control over such time-critical processes to obtain, for example, reaction times within, let's say, a hundred nanoseconds? <Q> It's a straightforward RISC architecture with few surprises and plenty of registers. <S> You can mix C and assembler provided you have a good understanding of the calling conventions. <S> There is a special low-latency ARM interrupt mode called FIQ , which swaps out some of the registers to a bank in hardware so they do not need to be saved in the ISR. <S> 100 ns latency to doing something useful is still going to be hard - at 100 MHz that's 10 clock cycles, and FIQ takes up to 12 before it executes the first instruction. <A> Yes, it's certainly possible - all the startup code your C program uses will typically be written in assembler (.s files).
<S> Many of the things that people want to do with ARM processors lean on existing infrastructure of protocol stacks and graphics libraries. <S> If you're writing stand-alone applications, using it like a super 8051 or PIC, you can certainly use assembler for everything (or write your own UDP stacks etc). <S> You can hand code critical sections, of course, and use C for the bulk of the programming. <S> I looked at the ARM7TDMI core assembler coding some time ago, and it looked fairly pleasant to work with - I estimated it would take no longer to get up to speed than with any other new processor in assembler (but in fact we're using C exclusively with ARM cores - it's a very suitable lingua franca for domain experts, junior folk and expert programmers alike). <S> Keep in mind that typical ARM implementations are not as tightly coupled as simple processors - there is a peripheral bus that may run at a different frequency than the processor bus. <S> You may not be able to, say, toggle a port pin anything like as fast as you might expect from the clock frequency. <S> Generally, if something is really time critical it's best to have a peripheral handling it autonomously (or to use a helper FPGA). <A> This one is a bit old, but deserves a better answer. <S> The original question was about interrupt latency. <S> Since the original platform is an AVR, the ARM-based replacement part is going to be a Cortex-M3/M4 or M0. <S> Both of these devices have interrupt latency of at most 12 instruction cycles. <S> That's the time from stimulus to running your code. <S> In practice, it will take longer to do anything useful. <S> It's hard to write to an IO in much less than 3-5 instruction cycles (load the address, load the value, store the value). <S> It takes longer if the device buses, RAM, or flash have additional latencies. <S> If you truly need latencies in the 0.1 µs range, you need peripherals or custom logic rather than software.
<S> If the actual need is bounded/fixed response times, you can get that with proper interrupt system configuration. <S> Cortex-Ms have features that can reduce the interrupt latency to 6 cycles under the right circumstances (late arrival and tail chaining). <S> That can be turned off if you need a fixed 12-cycle latency.
It's very reasonable to program ARM in assembler.
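The cycle counts quoted in these answers translate into wall-clock latency as follows; a quick back-of-envelope sketch (the 48 MHz and 100 MHz clock values are example figures of ours, not from the answers):

```python
# Interrupt response time: entry latency plus a few cycles of useful work.

def latency_ns(cycles, clock_hz):
    """Convert a cycle count into nanoseconds at a given core clock."""
    return cycles / clock_hz * 1e9

ENTRY = 12      # worst-case Cortex-M interrupt entry, per the answer above
WORK = 4        # ~3-5 cycles to load an address/value and store it to an IO

print(latency_ns(ENTRY + WORK, 48e6))    # ~333 ns at 48 MHz
print(latency_ns(ENTRY + WORK, 100e6))   # ~160 ns even at 100 MHz
```

Even at 100 MHz the software path lands well above the 100 ns target, which is why the answers steer hard-real-time signals toward autonomous peripherals or custom logic.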
What does short to ground mean? How do I do that? I've just started a simple project using a Raspberry Pi and a capacitive touch sensor breakout board. I had a few problems using I2C, but after some time of research I managed to get stuck on one only... On one of the forums I found an answer that solved the same issue for another user, so I'd like to try it out: Turns out I forgot to short the ADD to GND to set address 0x5a. The thing is, I don't really know what that means... Am I just supposed to solder ADD to GND? There's GND already on this board (apart from ADD). Do I just connect them both to GND? What does shorting to ground actually mean? Thanks a lot for your help and sorry for the trouble :) <Q> Short to ground just means to have a direct connection to ground. <S> A "short" is any direct connection between two nodes. <S> So a direct connection between ADD and GND would be a short. <A> Short is an English word and is used almost in a slang/jargon type of way in Electrical Engineering. <S> Short in English means that the length or distance of something is close by, or it is over a relatively small distance. <S> The use of the word "short" in "short to ground" is actually short (lol!) <S> for the full term "short circuit". <S> To create a short circuit in electronics, you basically remove everything in the way between one node and another, and directly connect them. <S> It can be done numerous ways, but the simplest is of course a [short!] <S> wire or PCB trace. <S> The use of language in Electrical Engineering must make it very hard for non-English speakers to understand the strange things we say, and it can be quite a problem with international design/product collaboration. <S> It's even a problem for simple tutorials or online 'instructables' [not a real word, by the way!] and causes grief all the time, I'm sure of it!
<A> As an addition to the other answers, and on the use of English more than anything: "short" usually implies a "short circuit", which usually means an unintentional (faulty) condition or temporary setup (as in "use a shorting link" (jumper or wire), for example), as in the connection is shorter than it should be, and something is being bypassed or overridden by the short. <S> If the state is intended and by design, then it's usually "connected to ground", "wired to ground", etc., <S> rather than "shorted".
In any circuit, technically, you have shorts everywhere, but the term "short to.." is generally used for ground or some power node.
Using a bottle of water as a resistor Today, while drinking some water from a \$500mL\$ bottle, I started reading the info about the water and found out that the conductivity (\$\sigma\$) at \$25°\$C is \$147.9\mu S/cm\$. So it came to my attention that maybe I could calculate the resistance of the water bottle, from top to bottom. After some measuring, I found out that the bottle can be approximated as a cylinder with \$18cm\$ height and \$3cm\$ base radius. So we can do the following: \$R_{eq} = \frac{\rho L}{A}\$, where \$\rho = \frac{1}{\sigma}\$ is the resistivity, \$L\$ is the bottle's height and \$A\$ is the base area. By doing this, I got \$R_{eq} \simeq 4.3k\Omega\$. Then, I bought a new full bottle, made a hole in its bottom (of course avoiding leaks) and measured the resistance (with a digital multimeter) from this hole to the "mouth", at first making it so that only the tips of the probes touched the water. The measured resistance was really high, ranging from \$180k\Omega\$ to even \$1M\Omega\$ depending on how deep in the water I positioned the probes. Why is the measured resistance so different from what I calculated? Am I missing something? Is it possible at all to use a bottle of water as a resistor? Edit #1: Jippie pointed out that I should use electrodes with the same shape as the bottle. I used some aluminum foil and it actually worked! Except this time I measured ~\$10k\Omega\$ and not the \$4.3k\Omega\$ I calculated. One thing I was able to notice while lighting an LED with water as a resistor was that the resistance was slowly growing over time. May this phenomenon be explained by the electrolysis that happens while DC current travels through water (the electrodes slowly get worse because of ion accumulation at their surfaces)? This would not happen for AC current, right? <Q> The formula you use is valid for a certain area, but the size of your probes is nowhere near the area you used in your calculation.
<S> If you want a closer approximation, you'll have to use electrodes similar in size to the area you calculated the water column for: one flat on top, one flat at the bottom. <A> I agree with @jippie. <S> For instance, take this cross-section of a good old-fashioned carbon rod resistor: <S> You notice the wires don't just stick into the carbon rod - instead they attach to metal plates the same diameter as the carbon rod. <S> The same with a more modern carbon film resistor: <S> Here the wires attach to nickel caps which connect with the carbon tube right around its circumference, not just at one point. <A> As Jippie already pointed out, one of the issues is that your electrodes were much smaller than what your calculations assumed. <S> They seem to assume the entire top and bottom areas of the cylinder will be the electrodes. <S> However, the resistivity of "water" varies widely. <S> Very pure, deionized water has very high resistivity. <S> Even tiny amounts of impurities can make a large difference to resistivity. <S> Another issue for making a resistor from water is that there will be electrolysis at the electrodes. <S> With no impurities and inert electrodes (like graphite), you will get hydrogen released at one electrode and oxygen at the other. <S> With impurities and chemically active electrodes, lots of things can happen. <S> For example, if you electrolyze salt water, you will in part get chlorine gas. <S> Most metals will corrode at one end or the other if used as electrodes. <S> Water simply isn't a good substance to make resistors out of. <A> I've tried to measure the conductivity of water a few times with a DMM without much luck... or reproducible results. <S> (using big flat probes.) <S> Reading this, http://en.wikipedia.org/wiki/Conductivity_(electrolytic) <S> I think the problem may be DC electrolysis in the water/probe ends. <S> Now I'll have to try it AC some day! <S> Edit addition: (Friday Fun.)
<S> So I was motivated to measure the resistance of water. <S> I put some 1/2 inch diameter SS posts in a plastic tub with ~1" of Buffalo tap water in the bottom. <S> (A picture and data are here.) <S> Signals from a function generator were sent through the probes to an op amp TIA. <S> (R = 1 k ohm) <S> I moved the probes around and got ~1 k ohm of resistance (see TEK000). <S> Then I stuck the probes into a DMM (resistance scale). <S> The resistance changed rapidly at first (starting at ~3 k ohm), then slowly rose up to ~50 k ohm, at which point the DMM auto-ranged and went to ~300 k ohm, and then the resistance dropped to ~200 k ohm. <S> I then played some: looked at step response, changed voltage drive amplitude. <S> (Again, data is in the dropbox link.) <S> I then sprinkled in a pinch of salt. <S> The resistance dropped quickly to ~100 ohms (closer to 150). Trying to measure with a DMM, the resistance was 40 k ohm! <S> The time constant was a lot faster with salt in the water. <S> To measure the resistance of water you need to do it AC, with a frequency that is faster than the time constant of the water. <S> (The time constant of the water changes with electrolyte concentration.) <A> I did my high school physics project on the DC conductivity of pure water (32 years ago) and found that increasing the current decreased the resistance linearly at first and then quite dramatically, the former and latter possibly caused by electrolysis at the electrodes (as mentioned by Olin Lathrop) causing ionization - the opposite of what you have found. <S> Hydrogen and oxygen gas at the electrodes will reduce their conductive surface area, increasing resistivity, but the hydrogen and oxygen travelling to each of the electrodes will conduct electricity, so you may have reverse/competing effects that may depend upon the shape and size of the electrodes. <S> Perhaps my electrodes were large enough to discount the former effect (reduction in surface area), leaving only the latter.
The resistivity of any real water you likely have access to is all about what impurities are in it.
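The question's \$R_{eq} \simeq 4.3k\Omega\$ estimate can be reproduced directly from the label figures; a short sketch of that arithmetic:

```python
import math

# Figures from the question: conductivity at 25 C, cylinder approximation.
sigma = 147.9e-6          # conductivity, S/cm
L = 18.0                  # bottle height, cm
r = 3.0                   # base radius, cm

rho = 1.0 / sigma                   # resistivity, ohm*cm
R_eq = rho * L / (math.pi * r**2)   # R = rho * L / A, in ohms
print(round(R_eq))                  # ~4304 ohms, i.e. the ~4.3 kOhm estimate
```

The calculation itself is fine; as the answers explain, the discrepancy in the measurement comes from the tiny electrode area of the probe tips, not from the formula.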
Power an op amp when having only higher supplies I have a ±24 V supply and I want to power an op amp. I initially tried an LM324 using a single +24V supply, but I then discovered that I want the op amp to be allowed to output negative voltages too (which the 324 can't). I could not find an op amp that can take e.g. max ±25 V so that I can connect it immediately to my supply. Would just a voltage divider do? <Q> If you want enormous voltage swings on the output, then you could use a high-voltage op-amp which you can power directly off the ±24V rails. <S> Don't do this if you're going to put the output into something which can't take such large voltages though. <S> Or if you're doing AC-only (audio, for example), you can couple the signal via a capacitor and bias things up off your 0V rail. <S> "High-voltage op amp" in google found me this: <S> LM143 http://www.ti.com/lit/an/snva516/snva516.pdf which will take ±40V supplies. <S> Never used it, don't know what it's like, but there are clearly suitable products around. <S> You should say something about the type of signal you're trying to handle. <A> Using just a pair of voltage dividers to produce the supply rails for the op amp is a bad idea, as the supply current drawn by the op amp will vary and will result in varying supply voltages. <S> You should use a pair of linear regulators instead, such as the 7812 for the positive rail and 7912 for the negative rail. <S> (More modern equivalents also exist.) <S> Or, as mentioned by Will Dean, if you want to be able to produce large output voltage swings, use a special purpose high-voltage op amp IC. <A> Here are 1000+ op-amps on digikey that can do ±24V rails. <S> Alternatively you could bias the opamp to halfway between your power supply rails (12V on a single +24V supply) to have a similar effect. <S> In this scenario "negative" values of output would be on one side of 12V and "positive" values on the opposite side.
<S> Finally, no, you do not want to use a voltage divider as a power rail for an IC, for a variety of reasons. <S> Fluctuations in current consumption will cause the supply to dip or rise, and the chip won't operate properly. <A> An LM324 can handle up to 32 V supply, so ±15 V would work and give you most of the voltage swing an LM324 can have. <S> An op amp takes relatively little current, so basic linear regulators will do. <S> You've got plenty of headroom, so the 78xx series will be fine. <S> You can use a 7815 to make +15 V from the +24 V supply, and a 7915 to make -15 V from the -24 V supply. <A> You can use a resistor and a zener diode to provide a limited-voltage version of your 24 volt rails, maybe ±15 volts. <S> It won't be as rock-steady as a linear regulator, but would certainly suit most applications. <S> Most op-amps only need a few milliamps or less, so 1k resistors to 15 volt zeners can supply up to 9mA. <S> In fact that will be the current going into the zener when not connected to an op amp. <S> If space is an issue and leaded components are the only option, this might be a decent solution.
If you're trying to output a signal moving a few volts either side of 0V, then you could produce +ve & -ve rails from your ±24V using a positive and a negative regulator, then power any old op-amp off those.
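The 9 mA figure in the zener answer falls straight out of Ohm's law across the series resistor; a one-line sanity check of that arithmetic:

```python
# Zener dropper from the answer: 24 V rail, 15 V zener, 1 k series resistor.
V_rail, V_zener, R_series = 24.0, 15.0, 1000.0

I_max = (V_rail - V_zener) / R_series   # current shared by zener and op amp
print(I_max * 1000)                     # ~9 mA, matching the answer
```

Whatever the op amp doesn't draw flows through the zener, so the 9 mA is also the zener dissipation budget when the op amp is disconnected, exactly as the answer notes.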
How do I analyze this circuit with diodes? If the diodes in this circuit are ideal diodes: I'm making assumptions to solve this circuit for I and Vo, but I have some questions: 1. What is the voltage at the cathode of the upper diode if the other two diodes are off (i.e. open circuit)? 2. If all diodes are on (i.e. short circuit), then will the circuit look like a resistor with voltages 16 and 12 at its ends? <Q> In the case you describe, yes, you can just tie Vo to 16 V. <A> What is the voltage at the cathode of the upper diode if the other two diodes are off (i.e. open circuit)? <S> Here's how to reason to the correct answer. <S> (1) If the other two diodes are off, the current through the upper diode is zero. <S> (2) If the current through the upper diode is zero, the voltage across the upper diode is zero or negative. <S> (3) The voltage across the diode is the difference of the anode voltage and cathode voltage: $$V_{diode} = V_{anode} - V_{cathode}$$ <S> From (1), (2), and (3), it follows that the voltage at the cathode of the upper diode is 16V (or more). <S> But this is inconsistent with the other two diodes being off; if the cathode of the upper diode is at 16V (or more), the voltage across the other two diodes is positive, since their cathodes are at 12V (there is no current through the resistor). <S> Since we've reached a contradiction, we know that the other two diodes must in fact be on rather than off . <A> As shown in the original post, the lower diodes can never be turned OFF because they'll never be reverse biased. <S> "Ideal" diodes can be likened to switches with zero resistance between the contacts when ON, and infinite resistance between the contacts when OFF. <S> Redrawing the circuit from that point of view, with perfect switches and with the original post's context allowing us to turn OFF the two bottom diodes, we have:
If the diodes are ideal (note, it also applies for the threshold model) there is enough voltage difference to keep them on, therefore they will behave like short circuits (or, in the threshold model, as voltage drops).
Driving 48 LEDs with 1 Single Current Source I need to drive 48 UV LEDs for a UV paint curing system. (LED: 3.5V, 20mA) I found a simple current source, modified it, and set the current to 20mA. The voltage across 3 LEDs is 10.50V (3 x 3.5V, exactly what I need). When I add another 3 LEDs in parallel to the existing 3 (having 6 LEDs total), the current stays the same but the voltage across each line drops slightly. When I finally have 18 LEDs (6 parallel lines), the current is still ~20mA but the voltage across each parallel line drops to 9.5V. Somehow it must be okay for the LEDs because they did not lose their brightness. My questions: When I finally add all 48 LEDs, the current source will keep the current at 20mA and the voltage will drop more. Will that drop in voltage be OK for the LEDs? Adding new LEDs does not change the current drawn from the power source, so is it safe to drive 48 LEDs with one single current source, or will it burn things out (LEDs, transistor or power supply) in the long run? simulate this circuit – Schematic created using CircuitLab <Q> So each time you add another string of parallel LEDs, the 20mA is getting split even more and the brightness of all of the LEDs is going down a little. <S> The relationship between current and apparent brightness of an LED is complicated. <S> But I promise they are. <S> There's another complication as well. <S> LEDs have a negative temperature coefficient. <S> As they heat up, their forward voltage drops. <S> Due to manufacturing variability, one string of LEDs will inevitably drop its forward voltage a little more than the others, which will cause it to grab a larger chunk of the 20mA, which will cause it to heat up more, which further decreases its forward voltage, and so on. <S> The end result is that one string will tend to "steal" most of the available current and appear brighter. <S> The easiest way to prevent that is to put a small resistor in series with each string.
<S> Regular resistors have a positive temperature coefficient (their resistance increases with temperature), so that works to balance out the current in each of the parallel strings. <S> If you truly want 20mA going through every LED, you will need to decrease the value of R2 to an appropriate size. <S> For example, two strings of LEDs will require 40mA of current. <S> So R2 will need to be half the value. <S> Four strings will require R2 to be one quarter the value. <S> And so on. <S> By the way, the resistor value of R2 should be 35 Ohm, not 25 Ohm. <S> I got this by taking the voltage at which the base of Q1 will start to conduct (0.7V) and calculating the resistance that will result in 20mA at that voltage: $$\frac{0.7V}{20mA}=35\ \Omega$$ <S> Your equation for R2 based on the number of LED strings (to maintain 20mA for each string) is $$R2=\frac{0.7V}{n \times 20mA}$$ where n is the number of LED strings. <A> I assume you have a separate current source for each triplet of LEDs? In that case you should measure the voltage across your +12V source. <S> Maybe it's dropping. <S> However, if you use the same current source for 2x3 LEDs and so on, you have to adjust it to 40mA (adding 20mA per row), since they're in parallel. <S> The current splits. <S> Keep in mind that LEDs don't behave the same as resistors. <S> Half the current through one LED won't result in a 50% voltage drop. <S> The loss in brightness could be subtle as well, meaning you simply don't see it, but it is there. <A> If you want the same brightness for all the LEDs as in the original (3-LED) circuit, you'll have to let 20mA flow through each parallel string of your LEDs. <S> To achieve that, you'll have to duplicate your circuit (R1, R2, Q1, Q2) for every string of LEDs. <S> Simply connecting the parallel strings together and increasing the total current (to a multiple of 20mA) is not a safe way to go, as the current will not be properly shared between the strings.
<S> If the voltage source provides a constant voltage then this will provide an acceptable solution.
You may think the LEDs are the same brightness as you add more strings in parallel, but the 20mA is actually splitting itself between all of the parallel strings. Or, instead of this moderately complicated circuit, you may use a simple series resistor (one resistor in series with each parallel string). It's not actually linear, which is probably why you don't think the LEDs are getting any less dim.
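The R2 sizing rule from the first answer is easy to sketch numerically (a minimal sketch; the 0.7 V base turn-on threshold and the 20 mA per string both come from the answer above):

```python
# Sense-resistor sizing for the current-limit transistor, per the answer:
# R2 = 0.7 V / (n * 20 mA), where n is the number of parallel LED strings.
V_BE = 0.7            # volts at which Q1 starts to conduct and steal base drive
I_PER_STRING = 0.020  # 20 mA per LED string

def r2_for_strings(n):
    """R2 that limits total current to n * 20 mA."""
    return V_BE / (n * I_PER_STRING)

for n in (1, 2, 4, 16):
    print(f"n={n}: R2 = {r2_for_strings(n):.2f} ohm")  # n=1 -> 35.00 ohm
```

For the full 48-LED array (16 strings of 3), R2 comes out just over 2 ohms.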
Why is a lower/higher-than-resonant capacitor required in Tesla coil design? People are telling me that the primary tuned LC circuit should have a capacitor a little higher or lower than resonance, but not at it. Why is this the case? Don't you want the most power you can get? <Q> The reason is the output streamer is part of the topload capacitance. <S> The streamers will lower the secondary resonance. <S> I have heard a foot is equal to 2pF. <S> So you want your primary tuned a little low, so the streamers don't detune it too fast. <A> Tesla coils are homebrew projects, since they are not commercially made. <S> Sometimes re-purposed components are operated near or beyond their capabilities. <S> Apparently neon sign transformers are operated near their breakdown voltage. <S> So, de-tuning the circuit is sometimes done to prolong the life of the components. <S> Use better parts, and you can operate it at resonance. <A> The reason is, say you have a 15000 volt neon transformer to charge your capacitor to 15000 volts. <S> That will work fine if you use an LTR (larger-than-resonant) cap. <S> If you use a resonant cap, you get a condition of resonant rise, where in just a few microseconds your once 15000 volts is now 75000 volts or higher. <S> What is your transformer and cap rated for...? <S> They will fry very quickly. <S> You can avoid this with safety gaps in place, but if they miss a beat, well, you know what happens. <S> So an LTR cap prevents this: 15000 in, 15000 out, roughly.
It's true that at resonance, you will get the most "power" (or voltage, or whatever you seek) out of it, but your parts could burn out at this point.
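As a rough illustration of how coilers size that off-resonance primary cap, here is a back-of-envelope calculation (the 15 kV / 30 mA NST, 60 Hz mains, and the ~1.5x static-gap rule of thumb are assumed example figures, not values from the answers above):

```python
import math

# "Resonant" cap: the capacitance whose mains-frequency reactance matches the
# NST's output impedance (V/I). An LTR cap is deliberately larger than this.
V = 15000.0   # NST output voltage (assumed example)
I = 0.030     # NST output current, 30 mA (assumed example)
f = 60.0      # mains frequency in Hz

c_resonant = I / (2 * math.pi * f * V)  # cap that resonates with the NST
c_ltr = 1.5 * c_resonant                # common static-gap rule of thumb, ~1.5x

print(f"resonant cap ~ {c_resonant*1e9:.1f} nF, LTR cap ~ {c_ltr*1e9:.1f} nF")
```

With these numbers the resonant size is about 5.3 nF, so an LTR tank cap would be roughly 8 nF, avoiding the resonant-rise overvoltage described in the answer.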
How to calculate Vo in this circuit? In this circuit: The diode is an ideal diode. The diode is on since V(anode) > V(cathode), and then we replace it by a short circuit. Now, how to calculate Vo when two sources exist? I = (10 + 2) / (2k + 4.7k) = 1.79 mA. I tried using KVL: -10 + 2k (1.79 mA) + Vo - (-2) = 0 => Vo = 4.42 Or: Vo = 4700 * (1.79 mA) = 8.413 Which answer is correct? And why is the other one not correct? <Q> Neither answer is correct, but you mostly had the right idea. <S> You correctly calculated the current through the whole circuit: \$\dfrac{10 - (-2)}{2000 + 4700} = 1.791 mA\$ <S> The next step you could take would be to calculate the voltage drop across each resistor: <S> Across the 2K resistor: 2000 * 0.001791 = 3.58 V <S> Across the 4.7K resistor: 4700 * 0.001791 = 8.42 V <S> If we start at the 10V end: Vo = 10 - 3.58 = 6.42 V <S> If we start at the -2V end: Vo = -2 + 8.42 = 6.42 V <A> You don't "ignore" the other source(s) to use superposition, you connect them to ground. <S> So the voltage from the 10V source is 10V * (4.7/(4.7+2)), and the voltage from the -2V source is -2 * (2/(4.7+2)). <S> You can also use a quick and general method that works for any number of resistors and corresponding sources: <S> Vo = (V1/R1 + V2/R2 + ... + Vn/Rn) * (R1 || R2 || ... || Rn) <S> Where || represents the parallel resistance 1/(1/R1 + 1/R2 + ... + 1/Rn) <S> So in this case, Vo = (10/2 - 2/4.7)(4.7 || 2) <S> Either method should give the same result if you don't make any errors. <A> The diode is ideal and forward biased, therefore it becomes a short. <S> The current in the resistors is \$\dfrac{12V}{4k7 + 2k0}\$ = 1.791mA (which you have already calculated). <S> The voltage across the 4k7 is 1.791mA * 4k7 = 8.418 volts. <S> But the bottom of the 4k7 resistor is 2 volts lower, hence the output voltage is 6.418 volts.
<S> Neither of your answers is correct. <A> The first task at hand is to identify whether the diode is forward or reverse biased. <S> You can think of it this way: imagine the diode is reverse biased; in that case the anode receives 10V and the cathode receives -2V, which means your assumption of the diode being reverse biased is wrong and it should be forward biased. <S> Now that we know the diode is conducting, apply a simple nodal equation at the output node: (Vo-10)/2k + (Vo+2)/4.7k = 0; <S> Solve for Vo, which will be approx 6.42V. <A> Since you have more than one source of voltage in the circuit, why not use the Superposition Theorem ( http://en.wikipedia.org/wiki/Superposition_theorem ). <S> Put the -2V source to zero and apply the voltage divider rule to find: Vo(-2V grounded) = 10V x 4.7k/(4.7k+2k) = 7.0 V. <S> Next, put the +10V source to zero and apply the voltage divider rule to find Vo(+10V grounded) = -2V x 2k/(4.7k+2k) = -0.6V. <S> Sum these two results to get the actual Vo: Vo = Vo(-2V grounded) + Vo(+10V grounded) = 7.0 - 0.6 = 6.4V. <S> You only need to use 2 digits of precision because that's all you have to start with in your circuit data. <S> Note: The concept of grounding each independent voltage source is for analytical purposes only. <S> You wouldn't do it in practice. <S> In practice, you would measure Vo with a voltmeter relative to ground (or the [-] terminal of the +10V supply OR the [+] terminal of the -2V supply). <S> Notice this solution doesn't require the need to calculate circuit current. <A> Apply KVL around the closed loop: <S> -10 + 2k (1.79m) + V1 - 2 = 0, so V1 = 8.42 Volt. Or V1 = 4.7k (1.79m) = 8.41 Volt. <S> Note that Vo is the voltage between the 4.7k resistor and the 2 Volt DC supply. <S> Hence, Vo = V1 - 2 = 6.42 Volt
To find the voltage at Vo, start at either end and add or subtract the voltage drop across the resistor.
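Both solution methods above are easy to check numerically; this sketch just mirrors the arithmetic already shown in the answers:

```python
# Ideal diode treated as a short; two sources and two resistors in series.
R1, R2 = 2000.0, 4700.0   # 2k on the +10V side, 4.7k on the -2V side
V1, V2 = 10.0, -2.0

# Method 1: series current, then walk down from the 10V end.
i = (V1 - V2) / (R1 + R2)      # 1.791 mA
vo_kvl = V1 - i * R1

# Method 2: superposition, grounding each source in turn (two voltage dividers).
vo_sup = V1 * R2 / (R1 + R2) + V2 * R1 / (R1 + R2)

print(round(vo_kvl, 2), round(vo_sup, 2))  # -> 6.42 6.42
```

Both routes land on the same 6.42 V, as the answers promise.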
Analyzing a circuit with diodes In this circuit: Assuming ideal diodes, we have 8 combinations of assumptions. I have proved that 5 of them are wrong. The three remaining assumptions are: All diodes are on. D1 is on, D2 is off, and D3 is on. D1 is on, D2 is on, and D3 is off. When all diodes are on, the circuit will look like this: I = (16 - 12) / 4.7k = 0.85 mA. Vo = 16V. When D1 is on, D2 is on, and D3 is off, the circuit will look like this: I = (16 - 12) / 4.7k = 0.85 mA. Vo = 16V. 3 is identical to 2. So, is it possible that we have three valid assumptions in this circuit? And if not, what is wrong with this solution? <Q> Your options 2 and 3 are not valid, because the voltage across the OFF diode is not negative. <S> For your ideal diode to be OFF it must have a negative voltage and zero current, while it should have a positive current and zero voltage when it is ON. <S> Always check for these 2 conditions. <S> You are stating that your ideal diode has zero current and zero voltage, which is none of its 2 available states when you are analyzing your circuit. <S> (Please note that this is being theoretically strict (as we should be when analyzing a circuit of ideal components). <S> Practically your diode can have V = 0, I = 0, but it is not of much use when you are analyzing your circuit) <A> All the diodes will be on. <S> Your conditions 2 and 3 are false. <S> How would 2 conduct and 3 not conduct, and vice versa? <S> The diodes are in parallel. <S> For this problem the best approach is to short all the diodes and calculate the current in all the paths. <S> If you get a result with negative current, then you can infer that the diode in that particular path is reverse biased. <S> But in this case you get positive current in all the paths while calculating under short-circuited conditions. <S> So all the diodes must be, as assumed, forward biased. <S> And, yes, the output voltage will be 16V. <A> All diodes are "on."
<S> Assuming that these are silicon diodes then the voltage drop across any one diode follows an exponential function. <S> As a rule of thumb you can estimate this voltage as about 0.7 volts for a silicon pn diode when the current varies from about 0.1 milliampere to 10's of milliamperes. <S> This is what they mean by "forward drop. <S> " It is about 0.3 volts for a silicon Schottky diode or a germanium pn diode. <S> Since there can be 16-12=4 Volts across the three diodes then the diodes will be in forward bias and conduct. <S> Now, whatever current is in D1 will split and be equally shared by D2 and D3. <S> So the voltage across D2 and D3 will be slightly less than that in D1. <S> But roughly, D1 will have 0.7 volts of drop and D2 and D3 will have 0.7 volts of drop. <S> The net drop is 1.4 volts. <S> The output terminal must be 16-1.4 volts lower in electric potential, so the output will be about 14.6 volts. <S> Since we now have an estimate for this voltage we can find the current in the resistor. <S> It is (14.6-12)/4700=0.55 milliamperes. <S> Our earlier assumption about diode current is valid and the above estimates will be accurate. <S> Lastly, no diode can be truly off as long as the lower voltage source is less than 16 volts. <S> Some folks will argue that "on" means "appreciable current." <S> Then "on" versus "off" is a matter of assertion. <S> But there is definitely current if there is a positive potential difference as shown. <A> We appear to be talking here about an ideal diode, which is a mathematical abstraction. <S> The question is: what state is such a diode in at zero volts? <S> I'd like to assume that it is undefined. <S> Because of the voltage sources there is no question that D1 is on. <S> The question then becomes which of D2 and D3 turns on first? <S> Assume D2 turns on first and has zero volts drop across it. <S> Zero volts across D3 keeps it in an indeterminate state in which you don't know if it is on or off. 
<S> If you were to place a resistor in either leg to measure current in that leg, you upset the current in that leg to assure the diode in that leg stays off. <S> If you place an identical resistor in both legs you would still not be able to predict which diode turns on first, nor would you be able to tell after the fact because the voltage at the junction of the two sense resistors will rise enough to instantaneously turn on the other diode. <S> (Heisenberg's Uncertainty Principle?) <S> The bottom line is that you know for sure that at least one of them is on giving you zero volts and that's all you care about. <S> So if I change my mind that zero volts is an on state rather than <S> being indeterminate as I assumed at first, then the only state that can exist is for all three diodes to be on and the other two states are invalid. <S> And how many angels can dance on the point of an ideal pin?
Or if you do care, you could say that all ideal diodes are identical, therefore they have identical turn-on time and both will turn on simultaneously, resulting in zero volts and putting them both in an indeterminate state, which conflicts with the fact that at least one has to be on.
Does it matter which way you plug stuff into the wall socket I have a European socket and if you saw one you will know that you can plug stuff in either way. But American sockets can only be plugged in one way. So does it matter which way it's plugged in, because it works no matter what? I heard somewhere they use more power that way, but I don't know if that's true. <Q> There are two ways to answer this. <S> No. <S> Voltage is a relative measurement, and in any AC power system (American or European), one voltage oscillates above and then below the voltage of the other. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> However, yes. <S> There are things in the world besides those two prongs. <S> For example, the Earth. <S> In the US, an electrical outlet really looks something like this: simulate this circuit <S> The connection between one half of the outlet and Earth is made near the electrical distribution panel: <S> Now consider, if someone should touch the half of the outlet connected to Earth, probably nothing bad will happen. <S> However, if he should touch the other half of the outlet, he's likely to get shocked by completing the circuit through some accidental 2nd connection with Earth, or something else touching it (which is about everything). <S> So although flipping the plug around and inserting it backwards will probably be no problem with regard to the electrical operation of the appliance, it may create a safety hazard by exposing the "hot" half of the outlet, the half not connected to Earth, such that someone might touch it and be shocked. <S> National regulatory and licensing bodies have different requirements for an appliance to be certified as safe. <S> In particular, the home's wiring can be backwards, or users can force plugs in backwards. <S> Thus, things like double insulation have been developed to render appliances safe even in the presence of a fault like backwards home wiring.
<A> Your question was about the European socket, not about other, polarised sockets. <S> No; the equivalence of both ways of plugging a device into a socket of this type is mandated by law. <S> CE regulations state the same creepage distances and other means of safety for both pins and all connected circuitry in your home appliances. <S> This is pretty obvious, because you have to rely on the behaviour stated on the rating plate and/or in the manual of your device regardless of the orientation of the plug. <S> As the electronics in your device will see only the AC voltage between N and L, and they cannot discriminate between the two, as there is no way to do so, power consumption will also be exactly the same. <S> There's only a minor difference when it comes to failures inside your appliances. <S> And only in Class I devices. <S> Suppose you have a washing machine, which has a housing connected to PE (which can't be interchanged when you turn the plug, because it's connected to both prongs protruding from the outlet) and a defect causing one wire of the two potential live/neutral wires to short to the housing. <S> If the now grounded wire was by accident the live one, your fuse or circuit breaker will blow. <S> If it was the neutral one, it won't, and your device will seemingly operate rather normally. <S> An RCD will probably trip in either possible orientation. <A> There is a pretty good answer here: <S> Why are some AC outlets and plugs polarized? <S> Quoting the accepted answer: <S> Some systems will employ switches and fuses and such in the input power, and this is best handled by the live connection, not the neutral, so a polarized plug helps ensure this. <S> -Majenko <S> There is also a good discussion about grounding vs. neutral in the comments. <S> Essentially, the neutral is for returning current to the supply while ground is more for safety at the location. <S> I'm paraphrasing here.
Thus, from the perspective of the appliance, which has connections only to the two prongs of the outlet, it's impossible to tell one from the other, because the system is symmetrical. There are also grounding and 'commoning' issues to be taken into consideration.
How do I know if a transformer will output AC or DC? I recently bought a small transformer to transform 230V AC mains to 7.2V DC to power a small IC and relay. I wired it all up, but as far as I can tell, the output is AC rather than the DC I expected. Here is the datasheet for the transformer: http://www.mantech.co.za/datasheets/products/P58.pdf I have the P01172. I can't see anything on that datasheet that specifies one way or the other. If I don't see anything specified, should I assume that it does not convert? <Q> I think you have been led astray by terminology. <S> As others have noted, a transformer converts one AC voltage to another AC voltage level, while providing some electrical isolation. <S> In your case, it converts 230V AC to 7.3V AC. <S> What you may have been thinking of is a small power supply that sits in a plastic container and plugs straight into a wall. <S> Some people call these "wall warts" and some call them... "transformers". <S> Yes, they CONTAIN a transformer, but they also have rectifiers and (in the better ones) voltage regulators to give you a nice steady DC voltage. <S> If you use a big capacitor, they have a polarity marking. <S> WARNING: You are now playing with circuitry powered by mains electricity! <S> While 7.3V sounds tame, you have a 230V input, and that is DANGEROUS. <S> With that in mind: Buy A Multimeter <S> A decent one will have AC and DC voltage ranges. <S> In fact, having one means you would have been able to answer your own question (AC or DC voltage present). <S> UNPLUG your stuff from mains if you can - it's a basic safety precaution. <S> Put everything in an enclosure, to keep out curious children / pets / etc. <S> Electronics can be fun, but the mains isn't really the best place to start. <S> You're trying to power something else - a chip and a relay. <S> Whatever they do, you should be focusing more on that than on mains power supplies as an introduction to electronics.
<S> Maybe you know all about safety already. <S> However, none of the other answers I saw addressed this point, and it's really not something we should "assume" everyone "just knows". <A> Transformers always output AC. <S> You need rectification after that to get positive halfwaves followed by a capacitor(s) to smooth it out decently. <S> If you power integrated circuits, you need a linear regulator also, to get smooth DC, say 7805. <A> Yes, and also from the schematics labelled with INPUT and OUTPUT you can see that this is just a plain transformer without fancy rectification circuit: it's really just two coils. <S> Something else that you can see this from is the fact that the datasheet doesn't specify which output is positive and which negative.
If you want DC, add a good power diode to one of the output pins, then place a capacitor across the result.
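If you do add that diode-and-capacitor step, a back-of-envelope sizing sketch helps (half-wave rectification as in the tip above, 50 Hz mains assumed; the load current and acceptable ripple below are made-up example values):

```python
# Half-wave rectifier smoothing-cap estimate: the cap must supply the load for
# one full mains period between peaks, so ripple dV ~ I * T / C = I / (f * C).
I_load = 0.2   # amps drawn by the IC + relay (assumed example)
f = 50.0       # mains frequency in Hz
ripple = 0.5   # acceptable peak-to-peak ripple in volts (assumed example)

C = I_load / (f * ripple)        # rearranged for the capacitance
print(f"C ~ {C*1e6:.0f} uF")     # -> C ~ 8000 uF
```

A full-wave bridge halves the required capacitance, since the cap is topped up twice per mains cycle; either way a linear regulator after the cap smooths out the remaining ripple.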
Locking up a DC motor with constant supply a bad thing? I am working on a little pen plotter project and for one of my motors (which supports the pen) I have used a small DC motor obtained from an old DVD drive. I have noticed that if I supply a constant 5V to this motor, it "locks up" when the pen comes into contact with the paper and stays there whilst keeping a slight pressure on the pen (perfect for the application), but I am now starting to think, is this a bad thing? If the DC motor has a constant supply to it and is held in the same position, should it be OK? It seems to be and isn't getting hot, so I just wanted to confirm. <Q> If you stall the motor and let it draw as much current as it wants, the motor will be dissipating more energy as heat. <S> What happens next depends on the design of the motor. <S> Some motors can dissipate all of the heat when stalled. <S> Such a motor can stay stalled indefinitely. <S> Some motors can't dissipate all of the heat generated when they're stalled. <S> The heat builds up, the temperature of the motor rises. <S> That can lead to a permanent failure of the motor (possibly a fire too). <S> For this reason, some of the motors have built-in thermal protection in the form of a bimetal strip or a fusible link. <A> Presumably this "constant" supply is a constant voltage. <S> At a fixed current the motor supplies a fixed torque, which might be nice and gentle to your pen. <A> When a DC motor is stalled there is no "back EMF" (which only appears when the rotor spins), so the current is set simply by the supply voltage and winding resistance. <S> Still, as it has been suggested, you may want to limit the current to a value that holds the pen in place with the required torque, but not too much beyond that.
You can improve safety (to avoid possible overheat) and gentleness of the motor by limiting the current that the supply will put out.
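A quick way to sanity-check stall heating is the V²/R estimate (the winding resistance below is an assumed example; measure your own motor's terminal resistance with a multimeter):

```python
# With no back EMF at stall, the winding looks like a plain resistor,
# so all of the electrical input power becomes heat in the copper.
V = 5.0          # supply voltage
R_winding = 10.0 # ohms, assumed example for a small DVD-drive motor

I_stall = V / R_winding    # stall current
P_stall = V * I_stall      # equivalently V*V/R; dissipated entirely as heat
print(f"stall current ~ {I_stall:.2f} A, heat ~ {P_stall:.2f} W")
```

If the computed wattage is more than the little motor can shed, add a series resistor or current limit to bring the stall torque (and heat) down.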
What is the advantage of a switching over a linear power supply? I want to build my own switching power supply. I already know how to make a 10 Ampere Linear supply, and I'm wondering if I should bother. What do I have to learn to do a switching supply? What makes a switching supply better if they both end up giving me DC? What I don't get is the "inefficiency" argument. Maybe linear supplies get hot, yes, but so does every laptop switching power supply I have met. Looking at a schematic of a switching power supply shows that it has at least 3 times more components; that means 3 times more work and cost to build a power supply. Why would I feed a circuit using an expensive switching power supply that gets hot and that ends up being more expensive than a linear one? Don't both just end up giving me plain regulated and filtered DC power? I should be able to use either for every application shouldn't I? Also, if i wanted to make 10A one, how or which component can manage 10 Amps in a switching supply? (Darlington arrays are used in linears) <Q> The answer to which one you use depends on the application, and the efficiency needs. <S> For example, you're asked to make a phone charging dock. <S> The dock is powered via a 12 V wallwart, and powers the phone with 5V of power at 500mA. Using a linear regulator, 3.5W is dissipated. <S> That's quite a bit of waste, but you're connected to the mains, and a charging dock is a big enough device, where a properly heat sunk regulator wouldn't cause a lot of heating issues. <S> On the flip side, suppose you're building a wearable device that operates on a small Li-Po battery, even if you designed a LDO circuit that only wastes about 1W of power, a switching circuit would be more desirable as if designed properly, you could reduce your wastage to <10% that of the linear regulator <S> Note <S> : Pay attention to the efficiency curves of switching regulators. 
<S> They normally only have high efficiency for small ranges of current usage, and it helps to understand what current usage your application operates on in different conditions to design the most efficient power circuit. <S> Also - laying out switching regulators on a PCB can be hit/miss - I've seen a lot of incidents where tiny layout issues can mess with the desired voltage out. <A> As DoxyLover pointed out, it's not just a matter of "getting hot". <S> The efficiency of a linear regulator is Vout / Vin, which is really bad when there's a large difference between input and output. <S> Consider a modern desktop CPU running at 0.9V for an extreme example. <S> Another advantage of switching regulators is that they can boost or invert the input voltage. <S> If you need a positive and negative voltage from a single battery, or 12V from a 1.2V solar panel, a linear regulator won't work at all. <A> Linear supplies can be about as efficient as switching supplies--and on rare occasions even more efficient--in cases where the input voltage will always be slightly above the required output voltage. <S> Unfortunately, if the input voltage is only slightly above the required voltage, then a small dip in the input voltage will leave a supply unable to maintain the required output voltage, and a small increase in the input voltage will cause a huge relative increase in the amount of power a linear supply will have to dissipate. <S> Even though, as darudude notes, many supplies have a somewhat narrow range of conditions under which they will achieve optimal efficiency, in most cases such limits stem from the fact that many supplies have a certain minimum amount of power that they will draw whether or not the load requires it, as well as a certain amount of power that they will consume beyond what the load takes.
<S> If a 12V to 5V converter is 90% efficient when supplying one amp, but draws a minimum of 1uA from the source, then its efficiency when driving a 10nA load would be pretty pathetic (less than 1%), but the amount of power it would waste in that situation would be only 12uW--far less than the losses when supplying a full amp (where it would waste about 0.56W). <S> If a battery-powered device will need to supply a load that consumes a full amp for one second each week and otherwise draws 10nA, the average current draw would be about 1.7uA, of which 1.0uA would be a result of the baseline current draw, making the overall efficiency about 40%. <S> If, however, the load consumed a full amp for ten seconds each week, the average current draw would be about 8uA and efficiency would improve to be about 80%. <S> With the one second-per-week load, reducing the current drawn during the idle times might potentially double battery life. <S> With the ten-second-per-week load, however, battery life would be limited by the need to supply real current to the load even if the idle power consumption could be reduced to nothing.
The advantage of a switching power supply is that it will be able to offer good performance over a wide range of input voltages.
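The Vout/Vin rule from the answers is easy to tabulate against a switcher; the flat 90% switching figure below is an assumed typical value for illustration, not a measurement:

```python
# Linear regulator efficiency is at best Vout/Vin (the headroom is burned as
# heat); a decent buck converter stays roughly flat across input voltage.
def linear_eff(vin, vout):
    return vout / vin

V_OUT = 5.0
for vin in (6.0, 9.0, 12.0):
    lin = linear_eff(vin, V_OUT)
    print(f"Vin={vin:>4} V: linear {lin:.0%} vs switching ~90% (assumed)")
```

At 6 V in the linear regulator is competitive; at 12 V in it is throwing away more than half the power, which is the core of the efficiency argument.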
How does a Line Driver actually work? So I've searched for days for information about the 74HC244 line driver. As always, the datasheet provides raw and not very good information about the actual component :) (I mean, it gives us the temperatures, but not how the actual component works...) Anyway, there is very little information about Line Drivers in general. So, how do they actually work? And how can the 74244 act as a line driver? All I see in the inner structure is enables and input to output wires. EDIT I'm using the 74244 to connect between my Altera (FPGA) and a servo motor. I think that in order to reduce the current that the FPGA supplies to the servo, I need to put the 74244 in between them, thus reducing the "effort" from the Altera. Is that necessary? Or can the Altera provide a good current flow to the servo without damaging its abilities? <Q> A line driver is simply a buffer. <S> What you put in one end comes out the other. <S> However, it is typically able to sink and source much more current than, say, a normal GPIO pin on an MCU. <S> The increased current is able to overcome the capacitance caused by having many devices on a bus. <S> You can think of it as the logic gate equivalent of an amplifier. <A> I'm not sure what you mean by "how it works". <S> The datasheet tells you pretty much everything you need to actually use the part, though NXP does tend to be a bit sparse on internal details. <S> Anyways, if you look around a bit, you can find some slightly better datasheets that have more detailed internal diagrams. <S> The datasheet also states: Chip Complexity: 136 FETs or 34 Equivalent Gates, which tells you that (at least ON Semiconductor's implementation of the 74HC244) the actual chip uses a combination of 136 transistors. <S> If you want more detail, you should start reading about how transistors work, and how they're used to make the logic gates in the shown diagram. <A> There are two principal functions of this line driver.
<S> The first is to provide a power gain. <S> The circuits that are driving the inputs may not have the power to drive the impedance of a longer wire length. <S> This chip does not perform any logical function; it is just giving the signal a strength boost. <S> The second functionality is that it has a tristate output, which means that when the enable pin is not asserted the chip does not drive the output signal at all. <S> It's as if the chip were disconnected from the wire. <S> This allows you to have multiple line drivers on the same wire, and as long as they properly take turns driving it they can share the single connection. <S> Tristate mode also saves power by not driving the line when there is nothing to send. <A> It should be noted that the device identified above is a "tri-state" driver. <S> This means that it can set the line "high", "low", or can effectively disconnect itself from the line. <S> This is used in situations where any of several components can "drive" the bus (though hopefully not more than one at a time). <S> It can be used, eg, for a "bi-directional" bus between processor and memory or between processor and devices: The processor, when control lines are set in one state, can output data onto the bus to be read by the memory controller or I/ <S> O device controllers, and when the control lines are in another state the processor "listens" on the bus and the memory or device controller "drives" the bus.
The line driver also serves to isolate the local circuits from electrostatic discharge from connections that might go off the board.
Stepped Sine Wave in Spice I'm interested in simulating a circuit using a stepped sine wave input. I want to sample a sine wave at regular intervals and have the output constant during each interval, making an output that looks like: (pardon my silly Excel plot!) What's the best way to do this? Is there an easy way to create a source that generates an output like this? Am I better off carefully looking at the frequency response of my circuit and comparing it to the frequency content of this input? A solution relevant to LTSpice or Orcad would be ideal. <Q> E.g. the below circuit gives this waveform: <A> I am not sure about Cadence Orcad but <S> LTSpice will do it for you. <S> LTSpice allows you to use the PWL (piecewise linear) format to plot customized waveforms. <S> As a suggestion you can use a PWL file, which is a text file; store your data, i.e. "what voltages at what time", in the text file. <S> Take a general voltage source on the schematic. <S> Right click on the voltage source to add your PWL file using the PWL tab. <S> As an example I plotted a waveform using the PWL.txt shown below. <S> > 0 0 1n .1 2n .5 3n 1.0 4n 1.5 5n 2.0 6n 2.5 7n 2.5 8n 2.0 9n 1.5 10n 1.0 11n 0.5 12n 0.1 13n 0 14n .1 15n .5 16n 1.0 17n 1.5 18n 2.0 19n 2.5 20n 2.5 21n 2.0 22n 1.5 23n 1.0 24n 0.5 25n 0.1 <S> Waveform: <S> Similarly you can plot your customized waveform. <S> I don't see exact data points on your waveform, or I would have tried to plot a waveform exactly like yours. <S> For more detail about PWL please follow the link from Linear. <S> Hope this helps. <A> Micro-cap has a component called "sample and hold" <S> so, if you feed in a sinewave, the output will be a sampled version of the sinewave. <S> Micro-cap also has the facility to have user-defined signal sources. <S> Like AKR I cannot give recommendations for OrCAD because I don't use it.
LTspice has a ' sample ' block which implements a simple sample-and-hold.
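Rather than typing PWL points by hand, you can generate the zero-order-hold file with a short script (the signal frequency, sample rate, amplitude and output filename below are arbitrary examples; point LTspice's PWL FILE option at the result):

```python
import math

# Emit "time value" pairs that hold each sample until the next sample instant,
# producing the staircase sine the question asks for.
f_sig, fs, amp, cycles = 1e3, 16e3, 1.0, 2   # assumed example parameters
ts = 1.0 / fs
eps = ts / 1000          # tiny step so each level change is a near-vertical edge

lines = []
t = 0.0
prev = amp * math.sin(0.0)
lines.append(f"{t:.9g} {prev:.6g}")
n = int(cycles * fs / f_sig)                  # number of samples to emit
for k in range(1, n + 1):
    t = k * ts
    v = amp * math.sin(2 * math.pi * f_sig * t)
    lines.append(f"{t:.9g} {prev:.6g}")       # hold the old level up to the edge
    lines.append(f"{t + eps:.9g} {v:.6g}")    # then jump to the new sample
    prev = v

with open("stepped_sine.txt", "w") as fh:
    fh.write("\n".join(lines))
print(len(lines), "PWL points written")
```

Each sample becomes two PWL points (hold, then step), which keeps the waveform flat between sample instants instead of ramping linearly.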
Subtracting from LiPo cell voltage! I just got a 3S LiPo battery. The LiPo battery has a 4-pin connector to plug it into a charger or voltage checker. But I don't want to buy a LiPo voltage checker, I want to make one. But I have a problem. Let's say a=GND, b=1st cell, c=2nd cell, d=3rd cell. So if I read the voltage from b it's b, but if I read the voltage from c it is c=b+c, and if I read the voltage from d it's d=c+d. I have an LM339 IC, and I need b, c-b and d-c. How do I subtract voltages? Thank you for ANSWERS! <Q> It seems you are asking how to determine individual cell voltages in a 3 cell pack when you have access to each end and the points between the cells. <S> The simplest answer is to use a microcontroller with A/D and do the subtraction digitally. <S> I've done exactly that in an 8 cell stack once. <S> The problem with this method is that resolution goes down for the cells higher up in the stack. <S> However, what matters is whether the worst case is still within spec for your purposes. <S> Our A/D was 12 bits and we only needed to know the cell voltages well enough for charge balancing and discharge limiting. <S> You should be able to easily do the same with your 3 cell stack. <S> Note that just measuring the voltage of each cell is only half the solution. <S> For charge balancing you also need to do something about it when some cells charge to a higher voltage, as will inevitably happen. <S> If this is a one-off, then I'd probably go with the conceptually simplest method, which is to use an opto-isolator and resistor per cell. <A> Maybe something like this is what you need. <S> The circuits will output the difference between the voltages on each cell's terminals, as illustrated. <S> Does it help? <S> simulate this circuit – <S> Schematic created using CircuitLab <A> I found out that if I directly connect from the cell to the IC it will work, because the voltages only add up if you measure from GND to that cell; if you measure between b and c, it will display the real voltage of that cell.
<S> THANK YOU GUYS ANYWAY!
If this is for volume production where component cost matters, you can get more clever and use directly wired FETs to turn on the bleeder resistors for each cell.
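The subtraction itself is trivial once the tap voltages are digitized (e.g. by a microcontroller ADC, as the first answer suggests); the tap readings below are made-up example values:

```python
# Cumulative tap voltages a..d as described in the question (a = 0 V reference).
taps = {"a": 0.0, "b": 3.85, "c": 7.62, "d": 11.31}  # example ADC readings

# Per-cell voltages are just differences of adjacent taps.
cell1 = taps["b"] - taps["a"]
cell2 = taps["c"] - taps["b"]
cell3 = taps["d"] - taps["c"]
print(f"cells: {cell1:.2f} V, {cell2:.2f} V, {cell3:.2f} V")
# -> cells: 3.85 V, 3.77 V, 3.69 V
```

Note the resolution caveat from the answer: the top cell's value is the difference of two large readings, so ADC error on the d tap hits cell 3 hardest.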
Why do they crimp capacitor leads? Most manufacturers produce crimped and straight lead versions of their capacitors which have exactly the same capacitance and voltage rating. Why do they bother crimping the leads? What advantage does it offer? In which cases should a crimped lead capacitor be preferred? <Q> As well as your pictured film capacitors, you'll find similar kinks in ceramic disc capacitors, thermistors, MOVs and similar parts. <S> Here you can see it clearly called a "hold off" kink on a disc capacitor. <S> Some other parts- NTC (Negative Temperature Coefficient) thermistors: <S> The capacitors you pictured are epoxy dipped, but the early lacquer dipped ones were even more susceptible to damage from too much force on the leads. <S> This should bring back some bad memories for oldsters here: <A> Ceramic capacitors are rather brittle and so they do not like their leads getting tugged on. <S> Adding these crimps forces the capacitor to sit off the board with a few mm of relatively flexible lead in between. <S> This will isolate the capacitor from forces that it would otherwise experience during vibration, board flexing/bending, thermal expansion/contraction, etc. <S> By providing the crimped leads at the factory, the board house does not require a machine to add those in-house. <A> As the capacitors pictured are film capacitors, using the crimp to stand them off the board will help to keep their temperature below the melting point of the film during soldering.
It's to space the capacitor up off the board so that undue stress is not placed on the ends of the capacitor (for example, if the lead spacing in the board holes is not exactly the same as the lead spacing on that particular capacitor, or if the thermal coefficient of expansion is different from that of the PCB).