White glue on circuit board? Took apart an Apple TV due to no power from the SMPS, and it's like 80% covered in a white rubbery adhesive. I'm used to seeing this stuff (or similar) on bigger components like electrolytic caps to minimize vibration, but I'm curious about its purpose on the bottom of the board where it's all SMD, other than to make it really annoying to chip away to test components. <Q> It's a type of RTV silicone adhesive that helps prevent component vibration; larger components like capacitors are normally secured by it, as any excess movement in them can increase the chances of broken solder joints. <A> It's white RTV glue/silicone (room-temperature-vulcanizing silicone). <S> The covering of the SMD components on that board is unintentional; they wanted to secure the transformer and capacitor. <A> It's partly for mechanical reasons, and partly thermal. <S> RTV silicone is not a great thermal conductor, but it's a lot better than air. <S> It helps conduct the large amount of waste heat in a power supply to the inner walls of the case. <S> If you replace it, do not use acetic-acid-cured consumer silicone; use an electronic-grade encapsulant.
I can only imagine it's been added here to help secure the transformer etc., but it has been applied in a really shoddy way. It's used to mechanically fix through-hole components to the board.
Using 50 Hz frequency in AC current. Why do we use exactly 50 Hz and 60 Hz, instead of 100 Hz or even 400 Hz? 1. Is it for lighting? 2. Is it for the speed of AC motors? Can I use a machine that works on a 50 Hz AC network in a 60 Hz network? This picture shows a motor of 15 kW, 220-230 V / 50 Hz; this motor runs a water pump. My question is: can this motor work on a 60 Hz network without damage? Will its speed increase? <Q> Transformers - these would prefer both a lower frequency and a higher frequency. <S> A lower frequency would mean eddy-current losses in the laminates reduce. <S> So is 50 Hz the Goldilocks value for power transformers? <S> No, <S> because other parts of the world use 60 Hz. <S> Is 100 Hz too high? Probably on the verge of being too high for power-transformer cost effectiveness. <S> What about induced voltages from those overhead cables? <S> The higher the frequency, <S> the greater the voltage induced in other objects, and this in turn can cause problems. <S> Faraday basically said this: for a given current in an overhead conductor (for instance), <S> the rate of change of flux increases in proportion to frequency, and the induced EMF increases in objects placed close by. <S> Would this cause a problem at (say) <S> 100 Hz? <S> Maybe it would - maybe AC motors would suffer increased losses in ironwork due to induced EMFs causing eddy-current flow. <S> I guess this is pretty much related to laminate eddy currents in transformers. <S> Skin effect in conductors carrying high current is also of significant importance. <S> Wikipedia shows a picture of overhead cable like this: note the bundles of conductors formed into a triangle. <S> Wikipedia says: the 3-wire bundles in this power transmission installation act as a single conductor. <S> A single wire using the same amount of metal per kilometer would have higher losses due to the skin effect.
<S> Increasing the frequency increases the skin effect, and this reduces the power-delivery capability of feed cables because AC current tends to flow in the outer part of a conductor. <S> How much will higher frequencies affect copper losses? <S> Since AC resistance scales with the square root of frequency, if frequency doubles the AC resistance increases by \$\sqrt2\$. <A> Lower frequencies are problematic for lighting because of flicker. <S> Regarding your question, as others said, we have to know more about the specifics of your machine to answer. <S> Some devices, for example electromechanical clocks, cannot work at frequencies different from those they were designed for. <S> https://en.m.wikipedia.org/wiki/Utility_frequency <A> 50/60 Hz is used to allow transfer over long distances. <S> If you go to, say, 400 Hz, the power-line inductance becomes significant and you will need a vast amount of compensation along a long transmission line. <S> Transformer size would be smaller, but the overall cost would be much higher. <S> For very long distances, you need to go even lower in frequency, i.e. HVDC, for sea cables (~500+ km) or very long overhead lines (10,000+ km)
A higher frequency means fewer primary turns because primary inductive reactance increases, and this means less copper loss under load. The reasons to use 50 or 60 Hz are mainly historical.
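The \$\sqrt2\$ claim above follows from the skin-depth formula. As a sketch (assuming copper's standard resistivity and a conductor much thicker than the skin depth, so that AC resistance scales as the inverse of skin depth):

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)); mu taken as mu0 for copper.
RHO_CU = 1.68e-8        # resistivity of copper, ohm·m
MU0 = 4e-7 * math.pi    # permeability of free space, H/m

def skin_depth(f_hz):
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0))

d50, d100 = skin_depth(50), skin_depth(100)

# For a conductor much thicker than the skin depth, AC resistance scales
# roughly as 1/delta, so doubling the frequency multiplies R_ac by sqrt(2).
ratio = d50 / d100
print(f"delta(50 Hz)  = {d50 * 1000:.1f} mm")        # about 9.2 mm
print(f"delta(100 Hz) = {d100 * 1000:.1f} mm")       # about 6.5 mm
print(f"R_ac(100 Hz) / R_ac(50 Hz) = {ratio:.3f}")   # 1.414 = sqrt(2)
```

The ~9 mm skin depth at 50 Hz is also why the bundled conductors mentioned above beat a single fat wire: most of a very thick conductor's cross-section carries little current.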
SIM cards: are they using a microprocessor or a microcontroller? I'm getting different answers on various websites (Wiki, online university courses, fan pages, etc.) and can't decide. Would that mean there are SIMs with microprocessors and some with microcontrollers? If so, which ISO standard or reference does it follow? Thanks <Q> A SIM card is closer to a microcontroller than a microprocessor, as it has all the required logic blocks including memory etc. <S> It differs from a microcontroller because it is a dedicated application circuit specifically designed with the security implications in mind. <S> It is not a general-purpose circuit like a microcontroller that can be used in many different types of applications <S> (even if microcontrollers are more or less targeted to a field of applications). A microcontroller cannot provide the level of security that a SIM card provides. <A> It definitely is a microcontroller, since it integrates a microprocessor, RAM, persistent memory, timers, I/O (7816 and optionally SPI, SWP, USB), security features and - often - a cryptoprocessor (DES, AES, ECC, RSA, hash). <S> So both are correct: it uses a microcontroller, and it uses a microprocessor. <S> The applicable ISO standard is 7816 for smart cards. <S> ETSI defines the SIM standards. <S> The relevant reference is ETSI TR 102 216. <S> ETSI - Smart Cards
SIM cards are UICCs, which integrate a CPU, memory, and I/O on a single device.
Schema for protection system. I have a client that has equipment in an area where the electricity sometimes goes on and off many times a minute, and this can damage his equipment, so he asked me for a protection system that works as follows: 1. The system needs to start with an ON button (push button or whatever). 2. If the power goes off, the system will shut down (that is normal). 3. But if the power comes on again, the system will not start automatically. 4. It needs some sort of timer that will check whether the electricity has been back for, say, 5 minutes without any further cut before starting the system. 5. And of course an OFF button. NB: I don't think my client can afford a smart relay. Thank you <Q> This is a very simple thing to do with a small microcontroller. <S> The on and off buttons would just be inputs to the micro. <S> The micro would sense AC power via an opto-coupler and control a relay that switches the AC on/off to the equipment. <S> Even a small PIC with an internal oscillator can do this easily. <S> Of course, if your client can't even afford a "smart relay", then he can't afford to pay you to create something different. <S> This kind of client is way more trouble than the little profit you might make. <S> He'll always be trying to push you on price and second-guess everything you're doing, thinking you're trying to take him for a ride. <S> Run away. <A> Press the green button to switch on power. <S> When power goes off, the contactor will drop out. <S> When power comes back on, start the timer. <S> When the time is reached, press the green button. <S> It has an off button too. <A> What you need is a direct-on-line starter with a timer. <S> Since nothing is free in this world and everything comes with a price, get your "client" to agree to buy the simple starter with red + green push buttons on it and get it linked to the timer unit. <S> This starts the load with a momentary press on the green button, and it drops the load off during a power failure.
<S> When power is restored, the timer runs, and when it reaches 5 minutes the timer shorts out the contacts of the green button, ensuring an automatic self-start.
Your client needs a start-stop contactor and timer.
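The timer behaviour described above (any dropout restarts the 5-minute count) can be sketched as a small piece of logic; this is only an illustration of the rule, with `restart_logic` and the sample format being my own invented names, not firmware for a real PIC:

```python
HOLD_OFF_S = 300  # 5 minutes of uninterrupted mains before restart is allowed

def restart_logic(mains_samples, hold_off):
    """Given a list of (timestamp_seconds, mains_ok) samples in time order,
    return True once mains has stayed up continuously for `hold_off` seconds."""
    stable_since = None
    for t, ok in mains_samples:
        if not ok:
            stable_since = None            # any dropout restarts the timer
        elif stable_since is None:
            stable_since = t               # mains just came back: start timing
        elif t - stable_since >= hold_off:
            return True                    # safe to close the contactor
    return False

# A dropout at t=200 restarts the count; power is then solid from t=210 on.
samples = [(0, True), (100, True), (200, False), (210, True), (400, True), (520, True)]
print(restart_logic(samples, HOLD_OFF_S))  # True: 520 - 210 >= 300
```

On a real microcontroller the same rule would run in a loop against the opto-coupler input, with the green-button and contactor outputs gated by it.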
Electrical load increase on generator = more fuel consumption. Why does drawing more electrical current from a generator lead to more fuel consumption? I understand that it's due to conservation of energy; I just want to know what the phenomenon or reaction is called that leads to more power being transferred from the engine via the gears. <Q> The generator has a speed governor. <S> As the electrical load increases, the mechanical load increases. <S> If the throttle position were fixed, the speed would drop. <S> The governor opens the throttle to allow more fuel in to maintain the speed at the setpoint. <S> Figure 1. <S> Mechanical speed governor. <S> Source. <S> Figure 1 shows an old-fashioned mechanical governor. <S> As the shaft rotation speed increases, the flyweights are thrown outwards, lift the control sleeve against the speeder spring, and drive the crank to shut off the fuel valve. <S> If the shaft slows down, the speeder spring pushes the control sleeve back down and the crank opens the fuel rack again. <S> Spring pressure is controlled by the speed-control lever to set the running speed. <S> Modern speed regulators are electronic. <S> They monitor the output frequency and adjust the fuel valve electrically. <A> The increased output current in the generator windings increases the magnetic field strength, which opposes rotation of the generator shaft. <S> To maintain the same RPM (and thus maintain frequency and voltage), <S> the engine throttle must be opened further, using more fuel. <A> Here is a more detailed account of the energy conservation involved. <S> The generator output power is given by Power (watts) = Voltage (volts) × Current (amps). <S> For AC, there are two additional multipliers: power factor, to account for the phase relationship between voltage and current, and the square root of three for a three-phase generator. <S> Voltage is regulated to a constant value. <S> Current and power factor vary with load.
<S> The generator converts mechanical power to electrical power. <S> Mechanical power is given by Power = Torque × Rotational Speed. <S> An additional multiplier may be required depending on the units of measurement used. <S> Speed is normally regulated to a constant value in order to keep the voltage and frequency constant. <S> Torque varies as required to supply the mechanical power that is converted to electrical power. <S> The engine speed is regulated to maintain a constant generator speed. <S> If the generator requires more torque, the fuel flow must increase to supply the additional power. <S> Power available from burning fuel is proportional to fuel mass per unit of time. <S> The throttle regulates the mass rate of fuel flow. <A> For any electromechanical device, the electromagnetic torque or force produced is directly proportional to the current flowing through it, if the excitation field is unchanged. <S> A generator has its torque opposing that produced by the engine (or whatever is turning the generator). <S> The engine requires energy to turn the shaft against the torque of the generator, much as you need energy to lift a weight vertically against earth's gravity. <S> When the load draws more current, more generator torque is induced due to the higher currents and the engine faces more resistance to its motion, just as you would feel when you are lifting a 10 kg mass and someone adds another 5 kg to what you already have. <S> The speed (of the generator, and of you) would drop as a result (which is not good), so the control system sends additional fuel to bring the speed back. <A> This increased armature current results in an increased magnetic field in the armature. <S> The rotor (field magnet) located in this armature magnetic field will then experience more attractive force, which in effect reduces the speed of the rotor, which in turn reduces the voltage. <S> As the speed reduces, the frequency in the system also reduces.
<S> N = 120f/P. <S> This reduction in rotor speed is sensed by the turbine governor. <S> To keep the speed and frequency constant at 50 Hz, the governor valve is opened and the turbine output power is increased to restore the generator voltage and frequency. <S> Unloading: <S> if the load on a power system is reduced, the load current on the system reduces, which results in decreased armature current in the generator. <S> This reduced armature current results in a reduced magnetic field in the armature. <S> The rotor (field magnet) located in this armature magnetic field will then experience less attractive force, which in effect increases the speed of the rotor, which in turn increases the voltage. <S> As the speed increases, the frequency in the system also increases. <S> N = 120f/P. <S> This increase in rotor speed is sensed by the turbine governor. <S> To keep the speed and frequency constant at 50 Hz, the governor valve is closed and the turbine output power is reduced to restore the generator voltage and frequency.
Loading: If the load on a power system is increased, the load current on the system increases, which results in increased armature current in the generator.
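The power bookkeeping in the answers above (P = √3·V·I·pf on the electrical side, P = τ·ω on the mechanical side, N = 120f/P for speed) can be run through with some assumed example numbers; the 400 V / 100 A / 0.8 pf / 4-pole values here are illustrative, not from the question:

```python
import math

# Electrical side: three-phase output power.
V_ll = 400.0    # line-to-line volts (assumed example)
I = 100.0       # amps per phase (assumed example)
pf = 0.8        # power factor (assumed example)
P_elec = math.sqrt(3) * V_ll * I * pf        # watts

# Mechanical side: a 4-pole machine at 50 Hz spins at N = 120 f / P = 1500 rpm.
rpm = 1500.0
omega = 2 * math.pi * rpm / 60               # rad/s
torque = P_elec / omega                      # N·m the engine must supply

# Consistency check: recover the frequency from N = 120 f / P.
f = rpm * 4 / 120
print(f"P_elec = {P_elec / 1000:.1f} kW, torque = {torque:.0f} N·m, f = {f:.0f} Hz")
```

Doubling the load current doubles P_elec, and since ω is held constant by the governor, the required torque (and hence fuel flow) doubles with it.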
How can I sense if a circuit is open on 2 wires and, if so, close the circuit on another 2 wires? I have a 4-wheeler that I converted from a mechanical locking differential to an electric one. The mechanical one had a 2-wire spring-loaded switch: as the mechanism rotated, it pushed the pin in, closed the circuit, and the 4WD light came on. Now that I have converted it to an electric differential, there's nowhere for that sensor/switch to go. Instead it has an electric plug on it, and I isolated 2 wires that indicate whether it is in 2WD or 4WD. In 2WD the circuit is closed on those 2 wires; in 4WD it is open. It would be easy if it were closed when in 4WD - I could just hook the light sensor/switch wires to it - but it's not. How can I sense if a circuit is open on 2 wires and, if so, close a circuit on another 2 wires? Assume I know how to use a multimeter and how to solder... that's about it. :) Thanks for any help. <Q> Pass a current through the wire that energizes a relay. <S> A SPDT relay can be used to direct current through two different indicators to show the current mode. <S> Schematic created using CircuitLab. <A> You have a few options, but we need to bear in mind that losing the 4WD indication can be dangerous, as steering response will be affected. <S> Figure 1: two LED circuits (schematic created using CircuitLab). <S> (a) <S> The LED defaults to ON, giving 4WD indication. <S> When in 2WD mode the LED is shorted out. <S> This will result in the continuous 'waste' of 20 mA through R1 (but this is tiny in comparison to the output of your engine). <S> R1 should be 1/4 W or greater. <S> (b) <S> A slightly more devious circuit: the combination of the red LED and D1 will require about 2.5 V to get a decent light out of it (1.8 V for the LED and 0.7 V for the diode). <S> The green LED will require about 2.0 V.
<S> If the green is turned on, it will drop the voltage at the junction of the two LEDs and the red should go out. <S> If there's still a bit of a glow from the red, add a second diode in series with D1. <S> This circuit has the advantage that one LED should always be on and, should the switch get disconnected, it will fail to red, prompting more caution (than is required in the circumstances). <S> Use high-brightness LEDs. <S> Relay circuit: Figure 2 (schematic created using CircuitLab). <S> Note that this circuit does not fail "safe". <S> If the switch or relay coil gets disconnected, the lamp will never turn on. <A> Writing from an iPad so this is not easy, but if there is a flow when it is closed, you can do this. <S> Maybe I have not understood correctly?
Using a relay to invert the switch logic.
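The 20 mA and 1/4 W figures for R1 in Figure 1(a) can be checked with the usual series-resistor arithmetic; the 12 V vehicle supply and ~2 V LED drop here are my assumptions for illustration, as the answer doesn't state them:

```python
# Sizing R1 for the always-on LED branch of Figure 1(a).
V_SUPPLY = 12.0   # assumed vehicle battery voltage
V_LED = 2.0       # assumed red-LED forward drop
I_LED = 0.020     # the 20 mA quoted in the answer

R1 = (V_SUPPLY - V_LED) / I_LED       # series resistance, ohms
P_R1 = (V_SUPPLY - V_LED) * I_LED     # power dissipated in R1, watts

print(f"R1 ≈ {R1:.0f} ohm (use the nearest standard value)")
print(f"P_R1 = {P_R1:.2f} W, so a 1/4 W part is marginal; 1/2 W is safer")
```

This also shows why the answer flags R1's rating: 0.2 W is close to a 1/4 W resistor's limit, especially in a hot engine bay.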
Is this a good method to detect 120 VAC? I need an embedded microcontroller to detect some 120 V AC signals. I decided it would be safest, and most likely best, to use an opto-isolator. Below is my circuit: I rectify the incoming 120 V and limit the maximum current. My question is: even with the blocking diode, could the LED still be exposed to a high enough reverse voltage to ruin it? The LED is only rated for a 5 V reverse voltage, but would the blocking diodes D3 and D4 take the line voltage, or would it be split in some manner by the LED? I'm thinking it would be split based on the leakage currents of the diodes. I was going to add diodes in reverse across the LED, or a Zener could be used. So, are diodes D5 and D6 needed? And do people agree this is a good, or the best, method of detecting 120 VAC? Note that R3 and R4 will be rated for > 300 V. Edit: I did examine this post, but it does not touch on this question. Update: after searching more, I found this post and, based on what Spehro Pefhany said below, AC isolators exist with LEDs in both directions on the input. That seems to be the correct direction to go; it reduces the parts needed to a single resistor on the input to limit current. <Q> There are AC optocouplers, like the TCMT1600, that will remove the need for this reverse-blocking diode. <A> D5 and D6 allay my fears. <S> I'm the guy who gets stuck with hideously slow recovery diodes when I'm counting on them opening up instantly on voltage reversal. <S> All diodes begin to conduct fast. <S> Not all diodes go back off again fast. <S> PIN diodes function as RF AC switches because they take time to recover: <S> they intentionally remain conductive when forward biased - in both directions - which is needed to deliver both halves of each RF AC cycle. <S> Meanwhile I'm doing a pulser <S> and I'm battling hockey-puck SCRs with a 400 microsecond turn-off time.
<S> The manufacturer made my 3200 Volt SCRs stay latched ON extra long, hoping to "ride through" any inductive ringing (typical of motor loads). <S> So these were deliberately given extra long recovery time. <S> It's an example and not too relevant here. <S> Line voltage probably wouldn't get too high 30 microseconds after a zero crossing. <S> So LED reverse voltage may remain less than five volts; D5 and D6 probably aren't doing anything. <S> But hypothetically let's consider a world where all diodes take 400 microseconds to go back off. <S> Again line voltage rises quickly from a zero crossing. <S> Reverse LED voltage skyrockets given 390 microseconds. <S> Persistent diode conduction lasting 400 microseconds would exceed optoisolator LED reverse voltage rating: if it weren't for D5 and D6, saving the LED again every cycle. <S> AC input optoisolators render this discussion moot. <A> It should be fine without the reverse diodes. <S> Think of how many applications have a diode in series as a rectifier which has any (unknown) load the other side, which may include LEDs or other sensitive components. <S> The reverse leakage current will be too small to harm the LED. <S> The reverse breakdown voltage is the point at which current will flow in a reverse direction through the LED... except that it can't (or rather only a tiny amount can) because of the rectifier diode. <S> If you see what I mean.
It's the reverse current that damages the LED, not the voltage per se. Also there are optocouplers dedicated for your application, like ACPL-K370. I did see a case where 1N4007 diodes took 30 microseconds to cease conduction after current reversal.
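For the single-resistor AC-input optocoupler route mentioned in the update, the series resistance is set by the peak line voltage and the LED current you want; the 10 mA peak target below is an assumed example, not a value from the question:

```python
import math

# Sizing the series resistance (R3 + R4 in the question's circuit) for a
# 120 V RMS line feeding an optocoupler LED.
V_RMS = 120.0
V_PEAK = V_RMS * math.sqrt(2)    # ~170 V
I_LED_PEAK = 0.010               # assumed 10 mA peak LED current

R_total = V_PEAK / I_LED_PEAK    # ohms, split across two resistors as in the post

# With half-wave conduction (one LED direction), resistor dissipation is
# roughly half the full-wave V_rms^2 / R figure.  The ~2 V LED drop is
# neglected here since it is small against 170 V.
P_total = (V_RMS ** 2 / R_total) / 2

print(f"R3 + R4 ≈ {R_total / 1000:.0f} kohm, dissipating ≈ {P_total:.2f} W total")
```

Splitting the resistance across R3 and R4, as the original circuit does, also shares the voltage stress and the dissipation between two parts.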
A question about a step-down transformer with its secondary winding shorted. Above is a transformer with its primary and secondary windings. I wrote my understanding of a transformer very briefly before my question. Neglecting losses, we can write the voltage and power-balance equations as: Vs = (Ns/Np) * Vp and Vp * Ip = Vs * Is. It seems that as long as Vp and the (Ns/Np) ratio are the same, whatever the load R is, Vs will be the same; only the current drawn will change. And if the above argument is true, the power dissipated in the secondary part is Ps = (Vs^2)/R, and if R goes to zero - or should I say the secondary winding is shorted - Ps goes to infinity, which means the secondary winding would burn. I have the following questions: 1) Since there is power balance, i.e. Pp = Ps, would that mean that if the secondary winding is shorted, the primary winding would burn as well? (I'm asking because the interaction between the windings is electromagnetic, which could be a different phenomenon.) 2) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding but not the secondary winding? <Q> Under real-world conditions no transformer is 100% efficient at converting power. <S> If the secondary is shorted, then the primary 'sees' more watts being dissipated than the secondary. <S> In fact the primary always 'sees' more power dissipated, regardless of the load. <S> Many fused transformers have fuses on the primary only, and they are usually the slow-blow type because of inrush currents when the transformer is turned on. <S> Most 50/60 Hz transformers are only 40% to 60% efficient at converting power, so for a given known maximum continuous load the transformer is likely over-rated by 50%. <S> Some transformers have short-circuit protection, usually those called 'wall-packs'. <S> Those with extremely high power/high voltage may have secondary fuses as well.
<S> I have seen pole-mounted transformers explode from a lightning strike maybe 50 yards from me, only to see a huge fuse explode about 100 yards away on another pole. <S> For major power distribution, it pays to fuse both sides of a transformer. <A> With any transformer (or any other load connected to a voltage source), always place the fuse before the unit. <S> In step-down transformers, shorting the secondary will stress the primary winding more than the secondary. <S> The reason is that since both windings are in tight magnetic coupling, a step increment of current in the secondary is followed by a step increment of current in the primary, according to the inverse of the Np/Ns ratio. <S> But since in a step-down transformer the primary is made from longer and thinner wire, its resistance is higher, so it dissipates more power and heats up more quickly. <S> Thus, if your transformer is the step-down type, protecting the primary is sufficient to prevent your transformer from catching fire. <S> If your transformer is a step-up type, it is more appropriate to have a fuse on both sides of the transformer. <S> Derating the fuse according to the winding current limits will protect the transformer. <S> A fuse on the secondary is normally used to protect the load connected to the transformer, not to protect the transformer itself. <A> The maximum amount of current a transformer can handle depends on several things: the resistance of the windings and the reactance of the windings (current through an inductor cannot change instantaneously, and eventually you reach a point where the current just can't ramp up fast enough to supply the load). <S> The current is also dependent on the quality of the coupling between the coils; there is a magnetic equivalent of resistance (called reluctance), and it acts like a series inductor which limits the current (old-fashioned fluoro ballasts are just a series inductor).
<S> Old arc welders would control the peak output current by changing the coupling between the primary and secondary windings (they'd be moved away from each other; the greater separation increases the reluctance of the magnetic circuit). <A> With any ideal transformer, or one with low leakage inductance and resistance, you will need a fuse or your transformer will burn with a shorted secondary. <S> With a transformer that is inherently protected against short circuit (very few are), there is enough leakage inductance to limit the current to a safe level which the transformer can withstand continuously.
So-called power transformers and industrial transformers have a fused primary.
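The questioner's own ideal-transformer equations can be run numerically to show that both winding currents blow up together as R → 0; the 230 V / 1000:100 turns values below are assumed example numbers:

```python
# Ideal (lossless) transformer from the question:
#   Vs = (Ns/Np) * Vp      and      Vp * Ip = Vs * Is
Vp, Np, Ns = 230.0, 1000, 100      # assumed example values
Vs = (Ns / Np) * Vp                # 23 V secondary, independent of the load

def currents(R_load):
    Is = Vs / R_load               # secondary current into the load
    Ip = Vs * Is / Vp              # primary current, from the power balance
    return Ip, Is

for R in (100.0, 10.0, 0.1):       # R -> 0 approximates a shorted secondary
    Ip, Is = currents(R)
    print(f"R = {R:>5} ohm: Is = {Is:7.1f} A, Ip = {Ip:6.2f} A, Ps = {Vs * Is:7.0f} W")
```

So in the ideal model, a shorted secondary drags the primary current up with it (scaled by Ns/Np), which is exactly why, as the answers note, a primary-side fuse protects a step-down transformer against this fault.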
I need the full range from an LM34DZ, but I don't have a 1N914 diode. Is there a replacement? I want to read the full range on my LM34DZ, but I can't seem to find any 1N914 diodes. Is there anything I can replace them with? <Q> The purpose of the ground diodes in the circuit is to lift the LM34 above ground so that its output voltage can go negative. <S> As such, any device that drops voltage can be used as a replacement. <S> However, using a silicon diode such as a 1N4148 <S> or 1N400x is likely the most appropriate choice because the voltage drop is 1) not grossly dependent on the current (unlike with resistors), and 2) high enough to be generally useful (unlike with germanium or Schottky diodes). <A> The 1N914 might not be available, but you can always find replacement components by looking at the datasheet. <S> This advice applies to any component, by the way. <S> For instance, the 1N4148 might be a suitable alternative if cost does not matter. <A> Aside from the many other valid suggestions - the 1N4148 and BAV99 being the most common - you could simply use a voltage divider (which wastes more power, so it is less suitable for a battery source). <S> The dynamic resistance of a couple of 1N914 diodes at the 75 µA current of an LM34 is around 1500 ohms. <S> Suppose your Vs is 5 V and you want -1.2 V for the pseudo-ground; you could use something like 4.7 kΩ/1.5 kΩ. Or use two diode-connected transistors, or one transistor connected as a VBE multiplier (requires two resistors).
The 1N914 is a very generic "jellybean" small-signal switching diode -- the 1N4148 or any other jellybean small-signal diode (say a BAV99) can be freely substituted for it.
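The 4.7 kΩ/1.5 kΩ divider suggestion can be checked with the standard divider formula (a quick sketch; the 5 V supply and the -1.2 V target are the values quoted in the answer):

```python
# Voltage-divider check for the LM34 pseudo-ground suggestion.
VS = 5.0                       # supply voltage from the answer
R_TOP, R_BOT = 4700.0, 1500.0  # 4.7 k / 1.5 k as suggested

# Voltage across the bottom resistor: this is how far the LM34's ground pin
# sits above the real negative rail, i.e. the available negative headroom.
V_pseudo = VS * R_BOT / (R_TOP + R_BOT)
print(f"pseudo-ground sits {V_pseudo:.2f} V above the negative rail")  # ~1.21 V
```

Note the divider's ~800 µA bleed current dwarfs the LM34's 75 µA, which is what makes the divider stiff enough here, and also why the answer flags it as wasteful for battery use.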
Does fibre-optic cabling have any potential for noise? After Google searching "Do fibre-optic cables attract any noise?", most results say they attract virtually no noise. Is this the case, or are there some exceptions? <Q> You are right, this is the case, but fiber optics can still have problems that can be perceived as noise and lead to incorrect data. <S> Intersymbol interference: <S> this is a kind of noise because the previous symbol that was sent interferes with the symbol currently being sent. <S> Thus the previous symbol acts as noise. <S> Well-known techniques to mitigate it are orthogonal frequency-division multiplexing (OFDM) and orthogonal frequency-division multiple access (OFDMA). <S> You can find whole books about intersymbol interference. <S> Chromatic dispersion [ps/(nm·km)]: <S> the refractive index of fibers varies slightly with the frequency of light, and light sources are not perfectly monochromatic. <S> This has the effect that, over long distances and at high modulation speeds, the different frequencies of light can take different times to arrive at the receiver, ultimately making the signal impossible to discern and requiring extra repeaters or special cables with adjusted indexes for every wavelength (so they arrive at the same time). <A> To make a proper comparison between fibre and cable, you have to consider the photodiode at the end of the fibre to be part of the fibre, and this is the weak link in terms of noise. <S> Typically the Hamamatsu S5973 photodiode produces a noise equivalent power (NEP) of \$1.5 \times 10^{-15}\$ watts per Hz, and given that the device is good for 1 GHz, the noise power is going to be about 1.5 µW. <S> This photodiode converts watts to amps at approximately 2:1, <S> therefore the noise current is about 0.75 µA RMS. <S> You then have to ask yourself how much "signal" current the photodiode is producing and how this compares to the noise current.
<S> I'm just trying to point out that you need to compare apples with apples. <A> There is potential for OTHER kinds of artifacts which may be mistaken for "noise". <A> Noise in optical communication typically refers to the deviation from an ideal signal, and is usually associated with random processes. <S> In general, the noise sources in a fiber-optic link include noise from the RF amplifiers in the transmitter, the laser diode, and the photodiode and RF amplifiers in the receiver. <S> Laser noise arises from random fluctuations in the intensity of the optical signal. <S> The two main noise contributors are fluctuations in light intensity, which come from the laser diode, and interferometric noise, which arises because of multiple light reflections in the optical fiber. <A> Fibre-optic communication can be said not to attract any noise. <S> But this is because the sensitivity of the receiver is on the lower side [for the noises]. <S> Optical fibres are highly sensitive to different kinds of disturbances, like temperature, vibration, chemical changes, bending, etc. <S> These can be inferred from fibre-based sensors. <S> Fibre sensors have been developed, and some are being developed, for direct detection of the above parameters or dependent parameters. <S> In reality the receivers for these sensors are much more complex, as the detector or receiver module has to be highly sensitive and the signal is complex and needs to be decoded/analysed. <S> In communication systems, optical losses, dispersion, ISI etc. have more effect than induced noise, so power budgeting and other considerations keep communication data unaffected by other noises.
Assuming you are talking about DIGITAL optical cabling, there is virtually no possibility of "noise".
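The photodiode noise figures quoted in the answer above can be reproduced with its own arithmetic (treating the quoted NEP as watts per Hz of bandwidth, as the answer does, and using its ~2:1 watts-to-amps conversion):

```python
# Noise budget for the receiving photodiode, per the answer's own numbers.
NEP = 1.5e-15    # noise equivalent power, W per Hz, as quoted for the S5973
BW = 1e9         # usable bandwidth, Hz

P_noise = NEP * BW        # ~1.5e-6 W  (the answer's "about 1.5 uW")
I_noise = P_noise / 2     # ~0.75e-6 A (2:1 watts-to-amps conversion quoted)

print(f"noise power   ≈ {P_noise * 1e6:.2f} uW")
print(f"noise current ≈ {I_noise * 1e6:.2f} uA RMS")
```

Whether the link is "noiseless" then reduces to comparing this 0.75 µA RMS floor against the signal photocurrent the received optical power produces.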
LED lamps keep glowing when dimmer is turned off. For starters, I am not electrically capable, so bear with me. But I did read similar questions that did not address, as far as I could tell, what I see happening (they may have, but I was too illiterate to understand). I hired an electrician to replace an existing chandelier (19 years old; the dimmer switch is also 19 years old). I bought 6 expensive ($10 each) Energetic LED soft-white 3000K bulbs. These are marked 'Dimmable'. The chandelier is on a dimmer switch where off has meant off, in the past. I installed 6 bulbs into the chandelier and turned them on bright. All is well. Turned them off, and they remained 'on' (rather dim, but still not all that dim). I also bought $9 bulbs (not dimmable) for another fixture. I also have a lamp nearby with an old-fashioned light bulb. I unscrewed 1 of the 6 on the chandelier. It, of course, went out. The other 5 stayed lit. I then screwed in the old-fashioned light bulb. All 5 LED lights went out!!! I turned on the light switch, and all 6 turned on. I turned it off and then unscrewed the old-fashioned bulb - all 5 LEDs came back on!!!! I then replaced it with a $9 LED bulb, non-dimmable. The $9 bulb did not light, but the 5 LEDs that were still glowing continued to glow. So, I have a problem. I want the lights to go off when turned off. Might the problem be the bulbs? The wiring of the new chandelier? The age/wiring of the dimmer switch? Something else? Hopefully, you can tell from the content of this that I know very, very little. If you start talking to me about MOSFETs, capacitors, snubbers and the like, it will likely go over my head. I will ask my electrician to come back out, but thought I would see what enlightenment I could obtain here. <Q> Buy a dimmer designed for LEDs. <S> Your typical light dimmer puts out pulses of power; the brighter the setting, the wider the pulses. <S> This is OK for incandescent lamps, as they draw a lot of current and need a brief time to turn on and off.
<S> LED lamps draw about 1/10th as much current and turn on and off in millionths of a second, so the same narrow pulses that make an incandescent light dim will make an LED light at least medium bright, maybe even flicker a bit. <S> Almost any hardware store should have them. <A> I very much suspect that your dimmer is intended for use with incandescent bulbs and is not turning off completely. <S> This is not a problem with regular bulbs, as you've found out. <S> However, your LED bulbs have a power-supply circuit which is able to operate on the small amount of power being passed by the dimmer, and this creates your problem. <S> You have two solutions, I think. <S> The first is to try replacing the dimmer - it may be defective. <S> The other solution you've already discovered: <S> leave one incandescent bulb in the chandelier. <A> It sounds like the old dimmer switch is the problem. <S> In the OFF position it is still leaking a small amount of current. <S> The old-fashioned bulbs are so inefficient that that small amount is not enough to light them. <S> The new bulbs are very efficient, so the small amount of leakage is lighting them up. <S> Placing a single old-fashioned bulb in the fixture shunts the current away from the new bulbs, causing them to shut off. <A> An easy fix: just fit a 240 V neon bulb (with integral resistor) across the lamp input; this mops up any residual current. <S> Inside the ceiling rose or the light fitting is a handy place <S> where it cannot be seen. <S> It worked every time. <A> Here is how a traditional switch is wired. <S> Note the switch is in series with the load. <S> Black is supply hot; red is switched hot; white is return (neutral). <S> A dimmer is a type of powered switch. <S> That would include dimmers, PIRs, lighted switches, smart switches - any case where the switch needs power for its own onboard electronics. <S> Here's how an ideal powered switch would be wired.
<S> Note that it has access to supply hot and neutral, and can power its own internal electronics indefinitely from these. <S> However, in many physical installations, the switch is off on a spur, fed by a 2-wire cable (supply and switched hot - no neutral present). <S> That's outlawed as of 2011, but many installed switches are wired this way. <S> As a result of this, many powered switches wire like this: <S> How do they do this? <S> The powered switch places itself in series with the load. <S> When on, it adds a tiny amount of voltage drop. <S> When off, it leaks a small amount of current through the light bulb. <S> This relied on a feature of incandescents: <S> When cold, they resemble a dead short, and some current will flow before an incandescent starts to light. <S> So in fact, Off did not mean off in the past with the incandescents. <S> Current leaked through them too. <S> They were just so inefficient they didn't glow. <S> LEDs are more efficient than that, and do indeed glow. <S> Once you screw in one old incandescent bulb in parallel with the LEDs, its "near dead short" provides a low impedance path for the dimmer's leakage current. <S> It also greatly reduces voltage across the LEDs, so they too extinguish. <A> You have several options. <S> If your chandelier has six sockets, can you run one incandescent bulb and five LED? <S> That may keep the dimmer happy, since the TRIAC inside it needs a minimum holding current and the LEDs have too low a current consumption to satisfy it. <S> You can change the dimmer to one made for LEDs, made with MOSFETs inside and hopefully designed with a very low "holding" current in mind. <S> Also, when you say 'on', how bright is that? <S> I have some LEDs which shine even when the switch is off and <S> no dimmer <S> whatsoever. <S> If the plates inside the switch are too close (lamp lead switches mostly, and not wall mounted), you will capacitively couple a few µA of current which will bleed through the LED and make it glow.
<S> At 230 V, this effect is twice what you would see in a 115 V country. <S> Far from bright, but still not off. <A> I had a glow issue with a newly installed LED fixture and new dimmer switch recently. <S> I found that the switch was wired after the load (on the neutral) instead of on the hot side. <S> Switching the leads fixed the issue.
You could "cheat" a bit by plugging in an incandescent bulb off in a corner to act as a minimum load, which helps the LEDs behave better, but in the long run it is best to install the correct dimmer for the LEDs.
How to measure high-voltage and high-frequency ripple? I have a ferrite transformer outputting 1.2kV RMS on its secondary. I have added a diode to do half-wave rectification, and have added a sufficient capacitor (10nF) in parallel to deal with ripple. It's more than enough, according to the formula \$dV = \frac{i}{fC}\$. I'm using 32kHz on the transformer, and I plan to draw at most 500 µA. This should give me a very small ripple. However, to measure ripple, I tried using a 1000:1 high-voltage probe. It has 1GΩ impedance, and it's rated for 60Hz only. Measuring my rectified / filtered voltage shows a lot of garbage and a huge ripple, around 800V, but I suspect this is false, given the probe's inability to deal with higher frequencies, probably. I also tried a voltage divider using 10MΩ and 100kΩ regular 1/8W resistors and a regular scope probe on the 100kΩ resistor, but results are quite similar. Changing the capacitor or removing it completely changes the ripple just a little, about 10% better or worse. So, my question: is it normal to get false results with those probes (or voltage dividers) on high-frequency voltages? Is there a safe and reliable way to measure ripple in this particular scenario? UPDATE: Here is the distortion I got on the secondary, after rectification and filtering, to illustrate the question a little better: After reading the suggestions of everyone, I decided to do a test I didn't do before: I made another transformer, with a secondary of about 170V. That way I could use a regular 10x probe (without any voltage divider), and compare its performance with the 1/1000 voltage divider, but keeping the 32kHz frequency. 170V is good because it's not too high for a 10x probe, but not too low for a 1000x divider. Here's the result. First, the secondary after rectification and filtering, measured with the regular 1/10 probe. A very acceptable DC signal: However, here's the same signal measured with the divider.
If we ignore the heavy noise, we can see the same pattern seen in the first picture: I don't know if this is the only problem with my circuit, but it's clear the measuring method is the biggest one. I will build a 1/1000 probe, using high voltage resistors and compensation capacitors shielded from noise. I can't rely on simple voltage dividers for this thing. <Q> is it normal to get false results with those probes (or voltage dividers) on high-frequency voltages? <S> Yes, without a proper frequency compensation. <S> It happens because resistors have small parasitic capacitance, which can be modelled as a capacitor in parallel with a resistor. <S> These parasitic capacitors form a capacitive voltage divider for high-frequency signals. <S> If the ratio of the parasitic divider differs from the ratio at DC, you will get wrong measurements, since the overall ratio becomes frequency-dependent. <S> Usually this is not a problem at the kHz range. <S> But not in the case of high voltage, which implies high-value resistors. <S> The capacitance of a typical resistor is approximately 1.5 pF, which gives 3.3 MΩ at 32 kHz for a pure sine wave. <S> Because you are using high-value resistors, the parasitic capacitance becomes the dominant factor even at kHz-range frequencies. <S> If a signal is not a pure sine wave, i.e. it contains high-frequency harmonics, the parasitic capacitance dominates even more. <S> To deal with the problem, add a compensating capacitor (typically, it is a variable capacitor). <S> To get frequency compensation, the following condition must be met: $$\frac{R_2}{R_1 + R_2} = \frac{C_1}{C_1 + C_2}$$ <S> This can be derived from the ratio for a capacitive divider: $$\frac{\frac{1}{j\omega C_2}}{\frac{1}{j\omega C_1} + \frac{1}{j\omega C_2}}$$ <S> The easiest way to test a divider is to look at a divided square wave signal via an oscilloscope. <S> With the right compensation, the square wave looks like the scaled square wave.
<S> Without the right compensation, you will see a signal with a strange shape. <S> That's because the ratio of an uncompensated divider depends on the harmonic number, and after the division the harmonics do not sum up to the square wave. <S> I'm not sure that the frequency compensation is the only problem; probably there are other issues related to noise in the measurement circuit. <S> Also, typical 1/8W resistors are not suitable for 1.2 kV RMS. <S> The maximum allowed voltage for such resistors does not exceed 100 V RMS, if I remember correctly. <S> Consult the datasheet for the exact value. <S> edit <S> One way to get proper division is to use the 10 nF capacitor as a part of the divider simulate this circuit – <S> Schematic created using CircuitLab Note that $$\frac{100\,\text{kΩ}}{10000\,\text{kΩ} + 100\,\text{kΩ}} = \frac{10\,\text{nF}}{10\,\text{nF} + 1000\,\text{nF}}$$ <A> Your voltage dividers, assuming these resistors are safe for these voltages (yes, resistors have maximum operating voltages, too, simply because you can "arc" through/around them), should be correct. <S> I don't see any reason why they shouldn't be. <S> 32kHz is really not that high, so parasitic effects (capacitance of the resistor and traces) should be pretty much irrelevant, and since you're not running a couple hundred meters of wire, you probably also don't have a significant radiation of power. <S> I'd thoroughly measure all the individual signals involved here – <S> is the current going into the low-voltage side of your transformer really shaped like you expect it? <S> If you don't do the rectification, what does the output voltage look like? <A> I run RG58U coax to the BNC connector on the scope. This <S> means that I don't need to use the probe.
<S> I terminate the coax with 50 ohms when the scope does not have a 50 ohm button. The <S> errors of scope probes are much worse on the 1:1 setting, <S> so people generally use the 10:1 setting. The pulse transformer setup <S> is 1:1, <S> so you have better measurement sensitivity <S> and it is safer.
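The question's \$dV = \frac{i}{fC}\$ ripple estimate and the accepted answer's compensation condition can both be sanity-checked numerically. This is only an illustrative sketch; the 0.5 pF stray value assumed for the uncompensated low side is a guess, not a measured figure:

```python
import math

# Expected ripple for the question's numbers: dV = I / (f * C)
ripple = 500e-6 / (32e3 * 10e-9)   # ~1.6 V on a 1.2 kV rail, i.e. very small

def divider_ratio(r1, r2, c1, c2, f):
    """|Vout/Vin| of a divider with R1 || C1 on top and R2 || C2 on the bottom."""
    w = 2 * math.pi * f
    z1 = 1 / (1 / r1 + 1j * w * c1)
    z2 = 1 / (1 / r2 + 1j * w * c2)
    return abs(z2 / (z1 + z2))

R1, R2 = 10e6, 100e3               # the question's 10 MΩ / 100 kΩ divider
C1 = 1.5e-12                       # typical parasitic capacitance across R1
dc = R2 / (R1 + R2)                # ideal ratio, about 1/101

# Uncompensated: parasitics alone set the AC ratio, which drifts away from DC
uncomp = divider_ratio(R1, R2, C1, 0.5e-12, 32e3)

# Compensated: choose C2 so that R2/(R1+R2) == C1/(C1+C2), i.e. R1*C1 == R2*C2
C2 = C1 * R1 / R2                  # 150 pF
comp = divider_ratio(R1, R2, C1, C2, 32e3)

print(ripple, dc, uncomp, comp)
```

With exact compensation the ratio equals the DC value at every frequency; uncompensated, it is already roughly three times too large at 32 kHz, consistent with the strange waveforms in the question.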
Scope probes are easy to use but can't always be trusted. I use a capacitively coupled 1:1 high-frequency pulse transformer.
What is a PA/LNA? I saw a comparison of two similar radio receiver modules. They used the same IC, but one had a greater range due to the inclusion of a "PA/LNA", which I understand to be an abbreviation for "Power Amp / Low Noise Amp". What is a PA/LNA? How does the PA/LNA work to increase RF range? Are the PA and LNA typically used together? (update) The module with greater range has this IC, which includes the PA and LNA functionality: SE2431L 2.4 GHz ZigBee/802.15.4 Front End Module <Q> PA: (power amp) <S> amplifies when transmitting. <S> LNA: (low noise amp) <S> amplifies when receiving. <S> both sit between circuitry and antenna. <S> for a duplexed signal, a passive duplexer shifts between the two on <S> Rx/Tx. <S> The PA stands for power amplifier, in this case an RF or microwave amplifier used for transmission of a signal. <S> PAs and LNAs are not always combined. <S> It depends on the application. <S> I found this article on the web which covers the basic details. <S> Understanding the Basics of Low-Noise and Power Amplifiers in Wireless Designs <S> By Bill Schweber <S> Contributed By Electronic Products 2013-10-24 1) <S> In a wireless design, two components are the critical interfaces between the antenna and the electronic circuits: the low-noise amplifier (LNA) and the power amplifier (PA). <S> However, that is where their commonality ends. <S> Although both have very simple functional block diagrams and roles in principle, they have very different challenges, priorities, and performance parameters. <S> 2) <S> The LNA functions in a world of unknowns. <S> As the "front end" of the receiver channel, it must capture and amplify a very-low-power, low-voltage signal plus associated random noise which the antenna presents to it, within the bandwidth of interest. <S> In signal theory, this is called the unknown signal/unknown noise challenge, the most difficult of all signal-processing challenges.
<S> 3) <S> In contrast, the PA takes a relatively strong signal from the circuitry, with a very high SNR, and must "merely" boost its power. <S> All the general factors about the signal are known, such as amplitude, modulation, shape, duty cycle, and more. <S> This is the known-signal/known-noise quadrant of the signal-processing map, and the easiest one to manage. <S> Despite this apparently simple functional situation, the PA has performance challenges as well. <S> 4) <S> In duplex (bidirectional) systems, the LNA and PA usually do not connect to the antenna directly, but instead go to a duplexer, a passive component. <S> The duplexer uses phasing and phase-shifting to steer the PA's output power to the antenna while blocking it from the LNA input, to avoid overload and saturation of the sensitive LNA input. <A> The PA and LNA will be at opposite ends of an RF link - and in a duplex link the role switches depending on the direction of the signal. <S> The two components (along with the 2 antennas) go a long way toward determining the link budget; this affects the combination of transmit range and bit rate. <S> At the receive end, for a given modulation scheme and acceptable error rate, you will need a specific ratio of signal power to noise power. <S> Signal power is determined by transmit power (from the PA), antenna gain, and transmission loss. <S> However, more power is expensive both in components and supply (a PA is usually well less than 50% efficient). <S> The LNA amplifies <S> both the wanted signal and the thermal noise at the LNA input, plus a little more noise. <S> For a good LNA, this will be around 1dB of extra thermal noise. <S> The LNA also needs to be linear to avoid distortion caused by unwanted (often strong) signals that can be filtered out later in the receive chain. <S> A good LNA is the first thing to invest in; this buys you 1-2 dB fairly easily. <S> Then good antennas, then finally a more powerful PA.
<S> There are lots of small details that also contribute - these two components on their own can't rescue a bad design. <A> When trying to understand PA/LNA, you may also want to understand how they are related to duplexers. <S> However, it was surprising to me to see how difficult it is to find a simple-to-understand diagram of a basic duplexer, one that shows both the signal and schematic properties. <S> Of course, nowadays you don't even need duplexers, as there are different solutions, as often used in mobile phone baseband transceivers. <S> In this regard, one descriptive picture I found is this one from a patent. <S> and from the YateBTS site. <S> Also note that, loosely speaking, a duplexer is used to TX and RX on the same antenna but (often) not on the same frequency, whereas a diplexer is used to either TX or RX on the same antenna, but (often) on different frequencies.
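The link-budget arithmetic described in the answers can be sketched with the free-space (Friis) path-loss formula. The 0 dBm transmit power, 0 dBi antennas, and 100 m at 2.4 GHz below are illustrative assumptions, not values from the question:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis) in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def rx_power_dbm(tx_dbm, gtx_db, grx_db, distance_m, freq_hz):
    """Received power for a line-of-sight link: gains in, path loss out."""
    return tx_dbm + gtx_db + grx_db - fspl_db(distance_m, freq_hz)

# 0 dBm transmitter, 0 dBi antennas, 100 m at 2.4 GHz -> roughly -80 dBm.
print(rx_power_dbm(0, 0, 0, 100, 2.4e9))
```

This also shows why a good LNA is the cheap first investment: lowering the receiver's effective noise figure by 1-2 dB buys the same margin as nearly doubling PA power.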
LNA stands for low noise amplifier, normally used for high RF bands or microwave signals as a sensitive signal receiver.
What challenges restrict the resolution of spacefaring digital cameras? I've been reading up about NASA's Juno mission, and came across the Wikipedia article about JunoCam , which is Juno's onboard visible-light camera. In the article, it's mentioned that the resolution of the sensor is 1200x1600 pixels, which comes out to just under 2 MP. Obviously, sending any camera into deep space and establishing a stable orbit around Jupiter is no small feat -- but seeing as Juno launched in 2011, why is JunoCam's sensor's resolution so low? I'm assuming - maybe too optimistically - that design changes like sensor selection would be finalized 4-5 years before launch. In 2006-2007, entry-level consumer DSLRs often sported 10 MP sensors. Basically: is it more difficult to harden a higher-resolution sensor against hazards in space? If not, what reasons could NASA have to avoid using higher-resolution sensors? <Q> There is one overriding requirement for deep-space missions: reliability. <S> In general, NASA Preferred Parts are quite stodgy, because the overriding need is for a mature, well-understood technology. <S> Cutting-edge technology that doesn't work is frowned upon under the circumstances. <S> So 10-year-old image sensors are about what you'd expect. <S> Additionally, if you read the JunoCam article you linked, you'll see (second paragraph, first sentence) that data transfer rates are quite slow, on the order of 40 MB per 11 days. <S> Increasing image size cuts down the number of images which can be acquired, and I expect that a lot of effort went into determining the tradeoff between number of images and image resolution. <S> For what it's worth, NASA has been pushing for better data rates for its programs, but the limited power and long ranges involved make this a non-trivial problem.
<S> The LADEE mission a couple of years ago incorporated the LLCD (Lunar Laser Communication Demonstrator), which worked quite well, and this holds great promise (optical communication limit of 1 bit/photon at the receiver), so future missions may be able to do a lot better. <A> You seem to be under the impression that the quality of photos taken in space is limited by the sensor resolution, which is not the case. <S> Equally important factors are the sensor sensitivity, which gets worse as you increase the pixel count, and the robustness of the optical system. <S> Simply put, if you were to send a 10 MP DSLR camera to Jupiter, it wouldn't be able to focus properly (or at all) after the vibrations it experienced during launch, to the point where the actual sensor resolution wouldn't matter. <S> Plus, it wouldn't get enough light to make quality photos. <A> Think more like 10 years before launch. <S> Once it's designed, it's designed - changing components is a major risk factor and they're unlikely to want to do that. <S> A massive amount of that time will have been spent on testing. <S> This is the appeal of small, semi-disposable satellites with cheap launchers going into Earth orbit - if you lose one then it isn't such a big deal. <S> With massive investment in money and time getting this thing to Jupiter though, adding risk is generally Not A Good Thing. <A> The details are worth a few minutes of web research, as they also limit the effective resolution possible with the fine pixel pitch common in digital cameras, including DSLRs. <A> The data transmission rate needs to be considered. <S> It costs time and battery energy to send back whatever images you do collect. <S> To your first question: <S> Yes: protecting micro-electronics from hard radiation will be much more difficult as you reduce the size of a pixel and increase its susceptibility to ionizing radiation.
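The downlink-budget tradeoff mentioned above can be put into rough numbers. The bytes-per-pixel and compression figures below are illustrative guesses, not mission data; only the ~40 MB budget comes from the quoted JunoCam article:

```python
BUDGET_BYTES = 40e6   # ~40 MB per 11 days, per the JunoCam figure quoted above

def images_per_downlink(width, height, bytes_per_pixel=1.5, compression=4.0):
    """Rough count of images that fit in one downlink budget.
    bytes_per_pixel and compression are assumed, illustrative values."""
    compressed_size = width * height * bytes_per_pixel / compression
    return int(BUDGET_BYTES // compressed_size)

print(images_per_downlink(1600, 1200))   # the ~2 MP JunoCam-class sensor
print(images_per_downlink(3872, 2592))   # a ~10 MP consumer sensor of that era
```

Under these assumptions, the 2 MP sensor fits about five times as many images into the same downlink budget as the 10 MP one, which illustrates why image count vs. resolution is a real tradeoff.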
Also, diffraction at the optical aperture limits the usable physical pixel size to a relatively large value.
Measuring state of charge of a non-rechargeable CR123A lithium battery Is there a method (voltage, resistance, other...) of measuring leftover charge in a non-rechargeable lithium battery? Background: I'm trying to get a general overview of power consumption for a few battery-powered devices which have been running in different conditions for a few days. The devices have a battery level percentage indicator, but I don't know how it's implemented and would like to do a side measurement. There are suggestions to measure the state of charge by checking the voltage level of the battery. The problem is there are no characteristics for the low power my device is using (5 milliamp-seconds every 5-10 minutes). <Q> Standard CR123A comes in a variety of brands and chemistries. <S> Their standard loaded voltage is 2.5 volts at 700 mA load current. <S> This data can be easily obtained from the internet. <S> Now, the easy way to tell how usable the battery is (not actually gauging leftover charge) is by loading it with a 3.3 ohm resistor while measuring its terminal voltage. <S> If the voltage drops below 1.5 volts, you can assume the battery can be sent for recycling. <A> The thing is that batteries when fully charged have a higher voltage and when fully discharged - lower. <S> For example, a 12V battery: <S> charged - more than 12.6V, fully discharged 11.6V <S> - 11.8V. <S> A 3.7V battery: <S> (fully) charged - 4.2V, fully discharged - 2.6V <S> - 2.8V. <S> You need to test the voltage. <S> Here is some short information from Wikipedia: as we can see, its nominal voltage is 3.6V, so basically it should behave the same (or very similarly) as a 3.7V battery (an 18650, for example). <S> Fully charged at 4.2V and fully discharged at about 2.5V-2.6V. <S> However, not all batteries are the same. <S> Some can give off current better when they are discharged (hence last longer) and some fail to give even 20% of their 'advertised' current when they are near the discharge point.
<S> You should check this out for more information: <S> http://www.powerstream.com/cr123a-tests.htm But basically - 2.5V is the '0%' point of the battery. <A> The method I have come up with so far is to completely drain one battery with a constant 5 milliamp current and draw a voltage-time characteristic from that. <S> After that I could check if the voltage drops are measurable and, if they are, assume that all the other batteries from the same vendor in similar conditions would behave the same.
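The drain-and-characterize approach above can then be turned into a lookup table: record the loaded voltage vs. charge-used curve once, then estimate the remaining charge of sister cells by interpolation. The curve points below are placeholders, not real CR123A data; you would substitute your own measurements:

```python
# Hypothetical (loaded voltage in V, fraction of charge used 0..1) points
# for one CR123A at the 5 mA test load -- placeholders, measure your own.
CURVE = [(3.2, 0.0), (3.0, 0.2), (2.9, 0.5), (2.7, 0.8), (2.5, 1.0)]

def charge_remaining(v_measured):
    """Linearly interpolate the remaining charge fraction from loaded voltage."""
    pts = sorted(CURVE)                      # ascending voltage
    if v_measured <= pts[0][0]:
        return 0.0                           # at or below the '0%' voltage
    if v_measured >= pts[-1][0]:
        return 1.0                           # at or above the fresh-cell voltage
    for (v_lo, used_at_lo), (v_hi, used_at_hi) in zip(pts, pts[1:]):
        if v_lo <= v_measured <= v_hi:
            frac = (v_measured - v_lo) / (v_hi - v_lo)
            used = used_at_lo + frac * (used_at_hi - used_at_lo)
            return 1.0 - used

print(charge_remaining(2.9))   # mid-curve reading
```

This is essentially what the devices' built-in percentage indicators do, just with a vendor-supplied curve instead of a home-measured one.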
All devices get "leftover" battery charge percentage by simply measuring the voltage.
What is the schematic symbol for resettable fuses? I have seen symbols for one-shot fuses and circuit breakers, but not for resettable fuses which reset themselves when the fault goes away. What is the recommended schematic symbol to use? <Q> The symbol for a PPTC <S> (AKA resettable fuse) is: <A> Sparkfun has the following on their website: Which is close to what I've seen, but NOT a standard per se. <A> This is actually a more difficult question than it would seem. <S> This is because there are a variety of PTC (positive temperature coefficient) resistor devices. <S> A fuse; heating causes the resistor to go open circuit. <S> An overload limiter; similar to the fuse but does not go quite as high impedance when hot. <S> A current limiter; increasing resistance is intended to limit current in the circuit but keep the circuit functional. A temperature sensor, a.k.a. <S> a thermistor; the resistance of the PTC device is measured to infer its temperature. <S> For example, this is a PTC fuse datasheet Bel PTC Fuse datasheet , and this Murata POSISTOR datasheet is a datasheet for other types of PTCs. <S> I believe it is important for the schematic to show as much about the circuit functionality as possible. <S> The difference between fusing and current limiting is big. <S> In the first case the circuit stops operating; in the second <S> it does not. <S> My understanding is that using the rectangle or the zig-zag for a resistor depends on which side of the Atlantic you are on. <S> Putting a diagonal line across the resistor seems to be a common way to indicate a value change. <S> The lines at the end of the diagonal appear to be used to indicate achieving an end state. <S> Hence the other answer has a comment pointing out the difference between a PTC thermistor and PTC fuse symbol. <S> The thermistor is not supposed to reach an end state of open circuit; the fuse does.
<S> Thus the PTC fuse symbol is, as already answered: or While the other PTCs that do not go to an end state but rather limit current should use a diagonal line with only one horizontal line: <S> or I like indicating the direction of the temperature coefficient without using an initialism, which is tied to English, but I think it should be "\$+T^\circ\$" rather than with a lower case "t" to differentiate temperature from time. <S> I suggest that the difference between a current-limiting function and a thermistor be noted with text on the symbol. <S> Something like "cur. <S> lim. <S> " or just "limit" on the current-limiting devices. <S> I don't use these often, <S> so please let me know if I am incorrect. <A> This is from one of my schematics. <S> It's like a hard-edged version of a regular fuse.
A line with an arrow across a resistor is the common way to show an adjustable resistor -- typically mechanically adjustable.
Where do these grounding wires go? In the following schematic, the two Vss pins, which are supposed to be the grounding pins, don't connect to anything. Does this mean that the circuit is self-grounding (if that's even a thing)? I'm planning on attaching this to a breadboard, but I'm not sure how to power it since I don't have a direct connection to the TSL1402 chip and have to go through the accessible pins. Here is the schematic: <Q> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> Various "ground" symbols. <S> One of the most common examples is the automobile electrical system, which uses the chassis as the return path to the battery negative. <S> Most circuits use the power supply negative as the common rail but many don't (and cars used to use positive earth many years ago). <S> Using a symbol for ground makes schematics more readable as it eliminates many wires. <S> The signal ground symbol is commonly used in electronic schematics. <S> The 'earth' symbol is used for a real earth connection on an electrical power distribution diagram but is often seen in auto electrical schematics and electronics circuits to signify a connection to electrical ground. <S> The 'chassis' symbol will appear in electronic schematics to indicate connection to the metal frame or chassis of the device. <S> Does this mean that the circuit is self-grounding? <S> No. <S> It just means a common "ground" or reference rail. <A> The symbol you highlighted is the ground symbol. <S> On a breadboard, all these symbols are attached to each other. <S> This is the reference for the ground of your D.C. power supply. <S> So for any voltage measurement, you take this point as the reference, so the 5V point will be 5 volts higher than this reference. <A> The symbol with the three short horizontal lines (see below) is an indication of a GND connection. <S> It is normally understood that all symbols like this on a schematic are tied (a.k.a. connected) together.
<S> One of these can be the place you can connect in the GND from your breadboard. <A> Since you claim you don't have access to the chip <S> I assume the displayed schematic is from some kind of module? <S> The ground symbol (3 horizontal bars stacked on top of each other in a triangle shape) is a graphical tool to connect all ground lines/pins with one another without drawing traces all over the schematic. <S> So every time you see this ground symbol it is logically connected with every other ground symbol. <S> To connect the ground of your power supply to this schematic, use the one pin of J1 or the one pin of J3 that is connected to the same ground symbol.
You will note that there are also GND connections at the connectors J1 and J3. In most circuits there is a common rail which is used as a reference for all other points in the circuit. In an isolated system - your mobile phone, for example - the "ground" is isolated from everything else until you plug it into a charger.
What are the underlying physical principles of a constant current power source? I'm currently looking for an LED driver to drive a 34.25 W high-power LED at 38.06 V / 900 mA. Browsing through various online shops I found a constant current source and I don't get how it works. The datasheet says the rated output voltage is 27 to 54 V, and the rated output current is 1050 mA. What are the physical principles that would allow this device to drive my LED? I don't understand how the voltage would adapt based on the needed voltage of the LED. <Q> The device outputs a constant current as long as the voltage is in the compliance range of 27 to 54 volts. <S> The current is measured and used as feedback in the same manner that the output voltage is measured and used as feedback to control voltage sources. <S> If the current is too low, the output is increased and vice versa. <A> A constant current power supply will have a current shunt. <S> This is a low-value, but accurate, resistor. <S> When current flows through it, you get a voltage drop that is fairly small, but measurable. <S> This is used as feedback to push the voltage up or down to reach the set current. <S> This can be done with a switching or a linear regulator. <S> The output is adjusted such that the reference pin is at 1.25V. <S> This is typically a voltage divider with resistors to make the proper ratio of the output voltage to be 1.25V. <S> In a constant current supply, you would amplify the voltage of the current shunt such that it would reach 1.25V at the correct current. <S> When you run a shunt that gets to 1.25V directly, the design is very simple: simulate this circuit – Schematic created using CircuitLab <S> The current is coming through the OUT, then the 12.5 ohm resistor. <S> There is a very small current flowing in ADJ. <S> The current is changed to keep the voltage drop between OUT and ADJ at 1.25V, thus giving 100 mA. (This can only change within the driving capabilities of the supply voltage and the LM317.)
<S> Can't figure out how to show current with DC simulation, but Ohm's Law gives it to you. <S> The best designs are a switching supply that uses the feedback to generate that output voltage. <S> This has a much higher efficiency than a linear design, but is slightly harder to understand. <A> You don't give a link to the datasheet, but in general the voltage adapts through current feedback. <S> The current through the LED string is measured, compared to a reference (one that is equivalent to 1050 mA), and the voltage is adjusted to maintain the current equal to the reference. <S> A servo loop acts to increase or decrease the voltage to keep the current at the desired setpoint.
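The LM317 arithmetic in the answer above generalizes directly: the regulator holds 1.25 V between OUT and ADJ, so the constant current is simply I = 1.25 V / R. A quick check of the answer's 100 mA example, plus the resistor that would suit the question's 900 mA LED (the latter is an extrapolation, not from the original answer):

```python
V_REF = 1.25  # LM317 reference voltage between OUT and ADJ, in volts

def lm317_current(r_ohms):
    """Constant current set by a program resistor from OUT to ADJ."""
    return V_REF / r_ohms

def lm317_resistor(i_amps):
    """Program resistor needed for a target constant current."""
    return V_REF / i_amps

print(lm317_current(12.5))    # the answer's 12.5 ohm example -> 100 mA
print(lm317_resistor(0.9))    # ~1.39 ohm for a 900 mA target
```

Note the resistor dissipates 1.25 V times the load current (over 1 W at 900 mA), and the LM317's own dropout and power limits apply, which is why the answers recommend a switching design for real LED drivers.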
In the simplest sense, a common variable linear regulator, like an LM317, can be used.
Are there any open / unrestricted radio frequencies that are free for any use? The only 'free radio frequency' that I could think of are the HAM radio bands. But they really aren't free, for they are regulated by laws that require you to hold a license, restrict what purpose you can use them for, and are governed by some organization or body. Are there any useable radio frequencies that are 'undefined'? i.e. I could use them any way for any reason without breaking any laws or being unethical. <Q> There are the ISM bands that you can use for whatever you want, as long as you stay within certain restrictions, mainly power level. <S> The 2.4GHz band is one such ISM band, which is why there is so much traffic on it (WiFi, Bluetooth, analog video cameras, ZigBee). <S> For that band, I believe that the power limit is 1 milliwatt for continuous transmission, and 10 milliwatts for low duty-cycle transmission. <S> No license or certification is required, but the FCC would come after you if you exceed the limits. <S> Wikipedia has a list of ISM frequencies: <S> https://en.wikipedia.org/wiki/ISM_band <S> The two bands listed as "Amateur" are usable, but you need to know what you are doing. <A> Sorry, no, at least not in the US. <S> The FCC has allocated everything between 9 kHz and 275 GHz. <A> Think about it. <S> If there were such a frequency range, lots of people would be abusing it, thereby making it unusable. <S> That's why we have a central authority that allocates space. <S> Doing so is in everyone's interest, even if not in individual interests. <S> It's a lot like the rules we have for driving on a specific side of the road. <S> If there is a traffic jam one way and little traffic the other, you as one person would be better off driving on the other side of the road. <S> However, when everyone does this, as they would if there weren't any rules, we'd have a dangerous mess, and everyone would be worse off.
<S> You can use much shorter wavelengths, like IR and visible light, pretty much any way you want for communication. <S> Even then, you can't go lighting your neighbor's house on fire with a strong IR beam or pointing a laser at the eyes of airplane pilots.
No, not for normal interpretations of "radio".
Is there any symbol for electrical insulation? Is there any symbol for electrical insulation? Is there any symbol for insulated wire? Is there any symbol for non-insulated wire? <Q> Is there any symbol for electrical insulation? <S> As mentioned in other answers, the answer is no; or, better, the whitespace indicates no connection. <S> You can see, for example, the difference between the gate in the JFET (or base in the BJT) symbol and the insulated gate in the MOSFET and IGBT symbols. <S> Is there any symbol for insulated wire? <S> Is there any symbol for non-insulated wire? <S> The only thing I can see in the Standard is the following symbol: <A> Dotted lines around conductive lines indicate shielding, and full lines may indicate an enclosure. <S> I am not aware of a symbol indicating insulation. <S> You can add a comment/text to indicate that this wire needs a particular insulation. <S> That way you are certain that this critical information is visible in the schematics, which are referenced more often than the design or repair manual that may accompany them in the future. <S> Add information like voltage, thickness, and kind of insulation material as you see fit [indicating voltage will help emphasize to the future reader that the insulation is indeed important]. <A> No. <S> A schematic diagram shows the logical interconnection of components. <S> Whether any of the interconnections use insulated wire, or uninsulated wire, or a circuit board trace (or NO wire at all, for that matter) does not affect the actual CIRCUIT diagram. <S> Details like HOW components are interconnected are part of the NEXT step AFTER the schematic diagram: namely the IMPLEMENTATION or construction of the circuit. <A> It depends on whether you're talking about a cable diagram or a schematic diagram. <S> Typically cables are not specified in schematic diagrams because there are connectors on the board.
<S> You can find an example of IEC or other standardized symbols here (you can also find examples of the IEEE symbols). <S> You can look at the IEC standard if you want to buy it here. <S> Even building-schematic standards like NEMA do not provide a cable symbol. <S> As far as cable diagrams go, most places do them in house and have their own markings for them. <S> All of the cable diagrams I've encountered usually have a physical representation of the connector and/or wire <S> and then call out the material or part number if it is an orderable item. <S> So do what makes sense; anything in a diagram that is atypical should be noted, and that is up to the designer. <S> The most important things for diagrams: <S> all information for the design is documented, and other people understand the diagram <A> we study electronics and electricity, which is the flow of electrons in conductors and semiconductors
The answer is no: there is no symbol for electrical insulation, because
Getting Clock/Time information from PC to an STM32F4 Assume I connect a microcontroller board (like an STM32F4) to a PC via USB. Is there anything in the USB communication protocol that contains host clock data? In other words, can I sync the microcontroller to the PC clock just by hooking up to USB? If that's not possible, could you suggest some clever minimal-effort way to get the clock info from PC to the STM32F4? I guess I could always write some software to run on the PC in the background and provide that info to the STM32F4, and I'll do that if I must, but I'd like to avoid that. I could also use a network shield and have the STM32F4 query the PC over the network, but I'd like to keep cost and complexity down by not using any extra shields. I could also have the STM32F4 run its own clock base, but that's not an option - it must be synced to this particular PC down to the second, even if the PC itself is out of sync with NTP. By the way, the synchronisation needs to be at the microsecond level. EDIT: I found this for the Arduino board: http://playground.arduino.cc/Code/DateTime <Q> I think microsecond, or even 10-microsecond, accuracy between a host PC and a microcontroller over USB will be very hard. <S> Even with a program running on the host PC with access to a 1 µs accurate clock, there is no guarantee when your program will be able to write down the USB port to the STM32F4. <S> It could easily be 250 µs or more after the host program gets the time before the STM32F4 receives the time. <S> So you are going to have to implement an algorithm which works out the errors, and corrects them. <S> There is an internet Network Time Protocol (NTP) which uses an algorithm to synchronise time, but it is only accurate to milliseconds. <S> There is also the Precision Time Protocol (PTP) which aims for microsecond precision, so this is the one to understand.
<S> As you want to do all of this in software, you'll need to find an implementation which is understandable enough that you can extract the portions you need. <S> I searched the web for "Precision Time Protocol over USB" and found a bunch of potentially useful articles and application notes. <S> I suggest you look at them and come back with specific questions. <S> I would be surprised if you find any implementations of PTP for STM32. <S> An alternative approach might be to use GPS time signals. <S> With a GPS attached to the PC, and another to the STM32F4, you may be able to measure how far the host PC is from GPS time, based on a shared time base. <A> Actually I believe the USB spec also has a provision for Interrupt Transfers with a maximum latency of 1 ms. <S> Assuming your PC was the host and your STM32F4 was the device, I would think the driver on the PC could access the real-time clock and initiate an Interrupt Transfer with the current host time. <S> Upon receipt the STM32F4 would compare the host time with its current device time, and respond to the host with the time difference. <S> Fundamentally you will need to keep track of 4 values: <S> host time, transport time to device, <S> device time, transport time back to host. <S> Depending on which side you want to be "smarter", <S> the host can track these and send the expected device time to the device. <S> Or the device side can track these times. <S> My thinking is that the host side tracking them <S> would be most accurate. <S> With software <S> I think you will be able to achieve microsecond accuracy or below. <A> Read it. <S> Briefly, the USB host sends a frame sync message at accurate regular intervals. <S> This is 1 ms for low and full speed USB. <S> I vaguely remember it is faster, like 125 µs microframes on high speed USB. <S> Again, read the spec. <S> These frame sync messages have quite accurate timing requirements, and can be used to get relative time in the device.
<S> There is nothing in USB that requires transferring absolute time from the host to a device.
If USB controller chips had hardware timestamping, you could ideally get to low-nanosecond levels of synchronization. The answers to your questions are, of course, in the USB spec.
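The four-timestamp bookkeeping described in the answers is essentially the NTP/PTP offset calculation. A minimal sketch in Python (the function name and the example timestamps are hypothetical, just to show the arithmetic):

```python
def sync_offset(t1, t2, t3, t4):
    """NTP/PTP-style clock offset and round-trip delay.
    t1: host send time, t2: device receive time,
    t3: device reply time, t4: host receive time.
    All times in microseconds, each read on its own clock."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # device clock minus host clock
    delay = (t4 - t1) - (t3 - t2)         # total USB transport round trip
    return offset, delay

# Made-up example: device clock runs 500 us ahead of the host,
# with 250 us transport each way
offset, delay = sync_offset(1000, 1750, 1760, 1510)
```

The estimate is exact only if the transport delay is symmetric, which is why the answers worry about scheduling jitter on the host side.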
how should I label components changing between PCB versions? I have a current version of my PCB, and I'll make a new one where I will delete some resistors here and add some more there. What's the common rule for numbering? Should I re-use the numbers of deleted resistors for the new ones, should I re-number everything to avoid holes in the numbering sequences, or should I leave the holes in the sequence for the deleted resistors and add the new ones at the end of the sequence ? <Q> Leave the holes, it lets the schematics and the PCB indicate a mismatch if someone has to repair or diagnose such a device one day in the future. <S> Many consumer circuit producers number their resistors starting at round numbers to indicate the general circuit area involved. <S> So radio parts starting in 100 may be the RF circuit, those in 200 may be the pre-amps, and the ones at 500 may be the audio amp components. <S> It lets you reuse circuit sub sections easier. <S> These days renumbering is much easier but some of the old reasons still have utility. <S> Basically premeditated holes in the numbering system. <S> EDIT: <S> It also lets your Bill of Materials (BOM) maintain internal consistency over the versions. <S> It will also generate more human readable file comparison diffs if resistor R203 is removed and R209 is added instead of resistor R203 removed and R203 added with a different value or watt rating (or what if the value and power stay the same the BOM diff will show no change after the circuit version was revised). <S> If you have made a change to the value only then the diff will be descriptive too as it will indicate the change and you will know that no parts were removed or replaced, just the value or the particular circuit component R203 was changed in value. <S> Also if you have to provide replacement components to a service agent in future you will have to have each part number classified by revision as they may be different. 
<S> If you were to add or remove just one component and renumbered ALL the parts following it, your administrative overhead would become very expensive. <S> Once the circuit is published (leaves your office/factory) it would be prudent to lock as much of the documentation down from spurious changes. <S> If it is the Mark 2 with vector field nullifiers and no longer the proton precession coils, you could have a totally new product and should keep the documentation separate. <A> It keeps continuity between the revs so that a technician can still follow the old schematic when troubleshooting the new board (test points, connector pins, etc. will still stay the same) and positional references in documentation ("the pad immediately to the right of D14") will still be accurate. <S> However, if the old documentation has never, and will never, be released, and will never be needed for reference (this is a very rare case), <S> then you can probably re-annotate the entire board. <S> Personally I recommend just leaving the gaps in annotation though. <A> Generally I prefer to use numbering of components based on the functional block, starting with 100, 200, and so on. <S> As an example, the power section will have components numbered 1XX (i.e. R101, C101, U101), the RF will have 3XX (i.e. R301, L301, U301, Q301), and so on. <S> Having it this way also helps me with the BOM and future service/debugging. <S> I never reuse a component identifier (number) in a new version of the schematic. <S> It helps everyone that has to look at the PCB/schematic in the future and keeps documentation consistent.
Changing a part in one section will not affect the others; even if you were to add another resistor, it may increment the component numbering only in the local group, and everything else stays as before. Where I work I generally leave the designators the same after a new rev if there is even a slight chance that someone might refer to old documentation while looking at the new board.
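The BOM-diff argument above can be illustrated with a small sketch. The BOM format here (a dict of reference designator to part description) is an assumption for illustration only:

```python
def bom_diff(old, new):
    """Compare two BOM revisions keyed by reference designator.
    Returns parts removed, parts added, and parts whose entry changed."""
    removed = {d: old[d] for d in old.keys() - new.keys()}
    added = {d: new[d] for d in new.keys() - old.keys()}
    changed = {d: (old[d], new[d]) for d in old.keys() & new.keys()
               if old[d] != new[d]}
    return removed, added, changed

# Leaving a hole in the numbering: R203 removed, R209 added, reads as
# two clean entries instead of a confusing in-place change of R203
old = {"R203": "10k 0603", "R204": "1k 0603"}
new = {"R204": "1k 0603", "R209": "4k7 0603"}
removed, added, changed = bom_diff(old, new)
```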
Best way to amplify the photoresistor signal I have the following schematic. simulate this circuit – Schematic created using CircuitLab And it graphs to an almost straight line. It is supposed to be a heartbeat sensor. From what I have read, I have come to know that amplification is a must. But my knowledge of electronics is very limited and I need guidance. I am emitting red light. My photoresistor responds to IR signals very poorly, so I had to use the red one. I am working with an Arduino Uno board. <Q> I suspect R2 is much too low; the 100 \$ \Omega \$ of the LDR is probably under high light conditions, and you probably have much less light. <S> I suggest empirically choosing R2 to center the A0 point with nominal light. <S> You may still need amplification, but this should give you the best performance without changing the topology. <A> Biological signals are generally very small (in the uV range). <S> You should construct an amplifier of some sort. <S> general info on instrumentation op-amps Before you construct a circuit though you should have an idea about the signal. <S> Get a good oscilloscope and you should be able to confirm you have a signal. <S> It can be helpful to simulate a signal using a function generator rather than your heartbeat to test the circuit beforehand. <A> You can use a simple op-amp circuit (non-inverting configuration). <S> No need for, or value in, an instrumentation amplifier. <S> Try this: simulate this circuit – <S> Schematic created using CircuitLab <S> Adjust R3 to get a couple of volts at the output. <S> If there is not enough range, reduce R1 a bit. <S> This circuit gives 10x the voltage of yours (due to the 1K R1) and has a gain of +11, so 110 times the output. <S> You can increase R5 to increase the gain further.
An instrumentation op-amp would do fine.
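For reference, the "+11 gain" quoted in the non-inverting op-amp answer follows from the ideal gain formula 1 + Rf/Rg. A quick check; the resistor values here are hypothetical and not necessarily the ones in the CircuitLab schematic:

```python
def noninverting_gain(r_feedback, r_ground):
    """Ideal non-inverting op-amp voltage gain: 1 + Rf/Rg."""
    return 1 + r_feedback / r_ground

# Hypothetical 10k feedback over 1k to ground gives the quoted +11
gain = noninverting_gain(10e3, 1e3)

# Combined with the 10x larger divider voltage mentioned in the answer,
# the overall improvement over the original circuit is about 110x
overall = 10 * gain
```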
brushed motor torque and efficiency? What causes the rpm difference between peak brushed motor torque and peak brushed motor efficiency? typical brushed motor performance http://members.toast.net/joerger/pic2/motorcurve.gif source: http://members.toast.net/joerger/AskAaron/motors.html page 15 of this brushed & brushless motor performance summary talks about efficiency but doesn't mention a connection between torque and efficiency. http://www.dtic.mil/dtic/tr/fulltext/u2/a577582.pdf <Q> Peak efficiency means the best ratio of mechanical power (velocity * torque) to electrical power (I * (I*R + BEMF)). <S> The ability to keep high torque is lost towards the highest RPM, which is why there is a peak instead of constant growth. <A> The DC motor performance curves posted in the question are drawn for a fixed voltage and variable load torque. <S> At zero speed, the load has been increased to the point that the motor is stalled. <S> The motor is producing its maximum torque, but the shaft will not turn because the load is so high. <S> Since mechanical power is speed multiplied by torque, the output mechanical power is zero. <S> 100% torque X 0% speed = 0% power. <S> However, the motor current is high, so electrical power is going into the motor even though no mechanical power is coming out. <S> All of the power going into the motor is producing heat inside the motor. <S> Efficiency is output mechanical power divided by input electrical power. <S> No output divided by some input = zero efficiency. <S> At 100% speed, the load has been reduced to zero so that the motor has nothing holding it back and runs at the maximum possible speed. <S> Here again, there is no mechanical power being produced. <S> 0% torque X 100% speed = 0% power. <S> Here again, there is some current going into the motor, so power is going into the motor, but no power is coming out. <S> Here again, the efficiency is zero.
<S> The speed at which efficiency is maximum is someplace between zero and maximum speed. <S> The losses in the motor are mostly losses due to the resistance of the winding. <S> Losses in resistance are proportional to the current squared, and increase rapidly as the current increases in proportion to the torque. <S> The output power of the motor rises as the torque increases, but reaches a peak when the speed has decreased as much as the torque has increased. <S> At that point, the losses have risen to equal the output power of the motor. <S> Half of the input power to the motor is going to losses and half to output, so the efficiency has fallen to 50%. <A> Put simply, the rpm difference is caused by current causing a voltage drop across the motor's internal resistances (brushes, commutator, armature windings). <S> Torque is proportional to current. <S> When the motor is free running it speeds up until the back-emf (almost) equals the supply voltage, and it draws just enough current to overcome internal losses. <S> As loading increases the current must increase to supply a torque to match the load. <S> This current causes a voltage drop across the motor's internal resistance. <S> The motor slows down because the driving voltage (supply voltage minus voltage drop across internal resistance) is lower. <S> Peak torque occurs at maximum current. <S> This occurs at zero rpm because at that point there is no back-emf, <S> so current is limited only by resistance. <S> However power output = speed * torque, so efficiency is zero. <S> loss reduces. <S> However as speed increases so too do frictional and magnetic ('iron') losses. <S> rpm is at a maximum when free running, but then torque is zero so power output and efficiency are also zero. <S> Peak efficiency occurs at the point where copper loss = iron loss, typically at 80~90% of no-load rpm.
Efficiency improves at higher rpm because then the current is lower so resistive ('copper')
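The shape of those torque and efficiency curves can be reproduced with a simple model combining copper loss with a fixed no-load current standing in for friction and iron losses. All parameter values below are made up for illustration:

```python
def motor_point(V, R, k, i0, w):
    """Brushed DC motor operating point at fixed supply voltage V.
    R: winding resistance (ohm), k: motor constant (V·s/rad = N·m/A),
    i0: no-load current modelling friction/iron losses, w: speed (rad/s).
    Returns (net shaft torque, efficiency)."""
    i = (V - k * w) / R            # current set by back-EMF and resistance
    torque = k * (i - i0)          # net torque after fixed losses
    p_out = max(torque * w, 0.0)   # mechanical output power
    p_in = V * i                   # electrical input power
    return torque, (p_out / p_in if p_in > 0 else 0.0)

# Stall, mid-speed, and (near) free-running points for a made-up motor
t0, e0 = motor_point(12, 1.0, 0.01, 0.5, 0)      # stall: max torque, 0% eff
tm, em = motor_point(12, 1.0, 0.01, 0.5, 575)    # mid: useful efficiency
tf, ef = motor_point(12, 1.0, 0.01, 0.5, 1150)   # no load: 0 torque, 0% eff
```

Torque falls linearly with speed while efficiency is zero at both ends and peaks in between, matching the curves in the question.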
Receive inaudible sound at ~20 kHz I'm looking into receiving data to toggle an LED. The data would have to come from a microphone that picks up sound at inaudible frequencies transmitted by standard speakers (TV, PC). I've looked at ultrasonic sensors but they work at 40 kHz instead, which cannot be relied on because the speakers might not support it. Microphones ranging from 20 Hz to 20 kHz can be had cheaply. Looking at this page, it seems quite a burden to check the frequencies with Arduino timers. I figure there must be a simpler way to do it instead. Is there? Can I do it without an Atmel processor? I'm not very familiar with the workings of sound in electronics, so excuse my ignorance. <Q> Most of those very inexpensive "jelly bean" commodity electret microphone capsules have frequency response above 20 kHz. <S> So finding a sensor for 20 kHz should not be a significant issue. <S> They typically don't publish specifications for frequency response above 20 kHz because most applications don't require it. <S> If you used an amplifier stage and a resonant circuit detector (or PLL, etc.), <S> then you could feed the "demodulated" signal directly into your microcontroller (Arduino, etc.) input pin. <S> Making the MC detect 20 kHz would not be anybody's first choice of solutions. <S> A commonly used chip for tone detection is the LM567. <A> However those with onboard digital conversion may block it, so an analog one is preferred. <S> Common audio sources and speakers can typically produce a useful signal even though they are not designed to, but analog broadcast formats and the lower-bitrate digital ones may not be able to carry one. <S> You will probably want an MCU with an audio-class ADC if you intend to pursue digital decoding that works in the presence of background noise. <S> You will probably also want a modulation and demodulation scheme more sophisticated than on-off keying. <A> This is a start:
While ultrasonic electrets exist, common MEMS microphones potentially work a lot better than common electrets at these near-ultrasonic frequencies.
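On the decoding side, a software alternative to the LM567 tone decoder mentioned above is the Goertzel algorithm, which measures the power at a single frequency bin and is cheap enough for small MCUs. A sketch (the 48 kHz sample rate and tone frequencies are arbitrary examples):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Goertzel algorithm: power of one DFT bin nearest target_hz."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest integer bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# A 20 kHz tone sampled at 48 kHz scores far higher in the 20 kHz bin
# than an unrelated 1 kHz tone does
fs, n = 48000, 480
tone = [math.sin(2 * math.pi * 20000 * t / fs) for t in range(n)]
other = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
p_tone = goertzel_power(tone, fs, 20000)
p_other = goertzel_power(other, fs, 20000)
```

Thresholding the bin power gives a simple on/off keying detector without computing a full FFT.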
Does a switching supply always need load I'm looking to use a switching supply from an old computer for a project. The project runs a motor that turns on and off based on a timer, powered by the switching supply. Do I run any risk running the switching supply while not under load (ie: when the motor isn't running?) <Q> Generally, switching power supplies have a minimum percentage load to keep their output(s) in regulation. <S> For example: a wall-wart type power supply rated for 12V may have a 16V output when measured with a DMM and no load. <S> At very low loads, the power supply may also exceed its rated output ripple. <S> This problem, particularly for re-using computer power supplies, is generally solved by placing a power resistor that will draw the required minimum current across the affected supply rail. <S> However, this decreases the system efficiency. <S> Additionally, depending on the configuration of a multi-output power supply's internal rails, the minimum power requirement may apply per-rail or to only a specific rail. <S> For example, if the PC power supply generates 5V from the 12V isolated converter, externally loading the 5V rail would stabilize both the 5V and 12V rails. <S> Check your datasheet, if available, or engage in trial-and-error testing for your specific supply. <A> You should have no problem using a switch mode PSU without a load. <S> However something to consider, a lot of modern switching supplies nowadays at low output power/load operate using "pulse" mode modulation. <S> This means that below a certain output power, the power supply is not switched in a continuous PWM operation as they are normally, but the PWM is pulsed on and off at a much lower frequency than that of the PWM (below a few hundred Hz). <S> This can present low-frequency ripple on the output, which for your application shouldn't be an issue. <A> It depends entirely on the individual supply. 
<S> Older or low quality supplies can have significant overvoltage, significant undervoltage, massive ripple and other very bad behaviour.
Modern supplies from reputable vendors generally have controllers that can maintain regulation under no load conditions though there can often be a low frequency ripple on the output which is difficult to filter.
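Sizing the dummy-load ("bleeder") resistor mentioned in the first answer is simple Ohm's-law arithmetic. A sketch; the 5 V / 100 mA figures are hypothetical, so check your supply's datasheet for the real minimum load:

```python
def bleeder_resistor(v_rail, i_min):
    """Resistor guaranteeing a minimum preload current on a rail,
    plus the power it must continuously dissipate (wasted as heat)."""
    r = v_rail / i_min    # ohms
    p = v_rail * i_min    # watts
    return r, p

# Hypothetical example: 100 mA minimum load on a 5 V rail
r, p = bleeder_resistor(5.0, 0.1)
```

The dissipation figure is why this trick hurts efficiency: the preload burns power whenever the supply is on.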
Is it possible to build a grid of silicon photosites to build a digital camera at home? So I know a digital camera takes light and focuses it via the lens onto a sensor made out of silicon. It is made up of a grid of tiny photosites that are sensitive to light. Each photosite is usually called a pixel, a contraction of "picture element". I was wondering, even if it is as small as 100x100 pixels or even smaller, is it possible to build my own grid of tiny photosites at home out of silicon and use that to make a digital camera? I know you can go out and buy them but I am trying not to use premade parts for this project only raw materials, machinery and other thing(transistors, diodes, resistors etc.) Is it possible to build one of these silicon grids at home? If so how? <Q> Because discrete photo-sensitive components (LDR or photo-transistor, or PV cells or whatever) have significant physical size, you will end up with VERY LOW effective resolution because the space between the sensors will likely be much larger than the sensitive areas themselves. <S> Not to mention the issue of scanning/multiplexing, etc. <S> 10,000 or more sensors. <S> If you mean actually creating an array of photo-sensitive diodes (or whatever) into a MONOLITHIC silicon substrate, then it would appear that the old rule applies: If you have to ask, you probably can't do it. <S> It takes millions of $$$ worth of very specialized equipment to process IC wafers. <S> Not to mention cleanroom environment, and knowledge of design and manufacturing techniques. <S> Not to mention raw materials (wafers, dopants, gases) that would be very difficult to source if you are not in the industry and have big piles of money. <A> You then need to find a way of reading them. <S> Surprisingly you can buy blank ones on ebay, but the chemical processing is nasty and requires a clean room and at least a chemistry degree. 
<A> As an alternative approach you could consider using one decent sensor and moving it with an X-Y motion to "scan" the focal plane behind the lens. <S> This would, of course, be rather slow and, <S> so, only suitable for static scenes. <S> The beauty of it is that you read only one pixel at any time. <S> How to: Make a "camera" box big enough to hold your X-Y motion. <S> Paint the inside black to prevent reflections. <S> Position a lens <S> the correct distance away to focus the image on the plane of the sensor. <S> You will need a view port to check this. <S> Mount the sensor on the X-Y and connect it to your micro's analog port. <S> Have your micro move the X-Y stage in a back-and-forth scanning motion and record the brightness reading from the sensor to memory. <S> It sounds like a lot of work. <A> A DIY photolithography-in-your-garage guide. <S> You can build a plasma-induced Hg machine for photolithography yourself. <S> Nothing ultra-hardcore about it! <S> You need an Hg lamp. <S> Acquire the appropriate skills in chemistry and glass blowing or... better, just buy one. <S> 180 nm is sufficient. <S> Building a machine for injection of metal into the surface is the real problem, because you need to modulate a magnetic field for that. <S> A lot of physics is required from you for that part! <S> Some solenoids and you are good. <S> Etching is the real task... <S> even more.... <S> You need to make a strong laser for evaporation. <S> But the best way is to use plasma. <S> Create your own plasma controlled by a field. <S> Hardcore? <S> This one is a NIGHTMARE <S> if you didn't finish a QED course.
Building your own wafer from scratch is not feasible. The most practical approach is to find small SMD photo-diodes and build a grid of them.
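The single-sensor X-Y scanning idea from the answer reduces to a nested loop over stage positions. A sketch where `read_pixel` stands in for "move the stage, then read the ADC"; the gradient lambda is a fake sensor just to exercise the loop:

```python
def scan_image(read_pixel, width, height):
    """Single-sensor scanning camera: visit each X-Y stage position
    behind the lens and record one brightness sample per position.
    read_pixel(x, y) abstracts the stage move plus the ADC read."""
    return [[read_pixel(x, y) for x in range(width)]
            for y in range(height)]

# Fake sensor returning a brightness gradient, purely for demonstration
image = scan_image(lambda x, y: x + 10 * y, 4, 3)
```

On real hardware each `read_pixel` call would take stage settling time, which is why the answer notes this only suits static scenes.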
How is this trackpoint supposed to work? I have salvaged a track-point device from an old laptop that ceased working. I couldn't find any information on it in the hardware manual. However, the labels (see below) suggested that I could apply a voltage between VCC and GND, and read out signals from X and Y, so I decided to solder a pin header to it and give it a try: I tried to apply 5V on the VCC, and found that the X and Y pins would settle at about 2.5V each. However, pushing the track-point in all directions did not have any measurable effect on the voltage on X and Y pins. Does anybody have any information regarding such device, or ideas of what I could try out to get a response from it? <Q> The Wikipedia page has more information. <S> The TrackPoint needs to be treated as a pair (X-Y) of strain gauges. <S> That Wikipedia page also includes a link to a relevant patents with more info. <S> Updated - Here is some more info I found from my bookmarks: <S> The A-to-D conversion (encoding) from the analog TrackPoint signals into either a standard serial or (later) <S> PS/2 mouse interface, was originally done by a separate MCU. <S> Later this functionality was built into the keyboard controller. <S> The Sprintek SK7100 is one of the devices produced by various vendors, which interfaces a TrackPoint sensor to a serial or PS/2 mouse connection. <S> That datasheet includes this reference schematic - the TrackPoint sensor is connected to J1 (upper right). <S> The PS/2 mouse interface signals are on J2 (lower right): <S> I remember that some earlier interface devices were less integrated and used external op-amps between the TrackPoint sensor and the MCU. <S> If you really want to make your own TrackPoint interface, rather than use one of the existing devices, then PCB layout is even more critical as the smallest amount of noise interferes with accurate force-detection. 
<S> This problem is sometimes seen when people use non-original PSUs for their ThinkPad, and then find that the cursor starts to move on its own, due to the increased EMI (from the lower-quality PSU) affecting the analog TrackPoint signals. <S> As an aside, this interview on the Microsoft Research site with one of the original TrackPoint inventors, gives some interesting background regarding its development. <A> Patent <S> US6509890 shows the following circuit. <S> It looks like a wheatstone bridge strain gauge. <S> I have absolutely no idea if your device is similar. <S> Try measuring the difference between X and Y. <S> I'd expect it to be small. <A> You didn't mention whether you connected the GND pin? <S> For example here is how one would connect to an Arduino: https://www.arduino.cc/en/Tutorial/JoystickMouseControl
This is a rather common thing to interface to a microcontroller.
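Since the patent circuit resembles a Wheatstone bridge, the expected signal size can be estimated from the bridge equation. A sketch with hypothetical excitation voltage and resistances; note how small the output is for a small strain-induced resistance change, which is why the answer stresses noise and PCB layout:

```python
def bridge_output(v_exc, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge.
    r1/r2 form one voltage divider, r3/r4 the other;
    the output is the difference of the two divider midpoints."""
    return v_exc * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Balanced bridge: zero output
balanced = bridge_output(5.0, 1000, 1000, 1000, 1000)

# One arm changed by 0.1% (hypothetical strain on the gauge):
# output is only around a millivolt for 5 V excitation
strained = bridge_output(5.0, 1000, 1001, 1000, 1000)
```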
Two 12V power supplies in series? Is it possible to use my 3D printer's 12V 20A power supply in series with a 12V unknown-amperage PC power supply to get at least 24V 20A for my printer's new hotend? <Q> Probably. <S> The PC power supply (-) lead will be connected to the Earth terminal of the AC mains plug. <S> The other supply may be isolated from the Earth terminal - or may not. <S> If the (-) terminal of the other supply is in fact connected to the Earth terminal, it's usually pretty easy to open the case and separate the (-) terminal from the Earth terminal. <S> Then just connect the two supplies in series. <S> I'd be tempted to install large diodes in anti-parallel with each of the outputs of the power supplies. <S> That is: cathode to (+), anode to (-). <S> The diodes should be rated for the full short-circuit current rating of the power supplies. <S> The purpose of the diodes is to limit the amount of reverse voltage applied to the output of whichever power supply goes into current limit first. <A> Can you connect two power supplies in series to add their voltages? <S> Yes, IF at least one of them is FULLY FLOATING, i.e. not referenced anywhere to ground. <S> Can you use a PC power supply for one of the sources? <S> Seems very doubtful. <S> Virtually all PC power supplies are GROUND referenced and you would have to be very careful to make <S> a ground-referenced power supply the "lower" (0-12V) half of the series string, with the "upper" half (12-24V) being the fully floating supply. <A> You are not going to get 20 amps out of that arrangement. <S> You will get 24 volts OK, but the current will be equal to the lower of the two, which will only be around 2 or 3 amps, depending on the PC supply. <S> Check the label on the PC supply to determine how much current you can draw. <S> It might be as high as 18 amps, but might also be under one amp.
Since we don't know anything about your unidentified existing printer power supply, this seems rather doubtful.
What is the identity of this shiny insulator? fig: Shiny layer I've seen many times, on metallic objects, that a kind of insulator layer is applied (I've tested its resistance with a multimeter and it shows infinite ohms). The layer is greenish-gold in overall color, but has colored patches that may vary from red to green. It is seen on small transformers (above image), switches, fan regulators, etc. The layer changes its color, and the colored bands shift their place, when the parts get heated. Now, what is the identity of this golden insulator? What is its resistivity and what are its other characteristics? And also, what is the identity of the other, reddish-colored insulator used in electromagnetic coils? What is its resistivity and what are its other characteristics? <Q> That is a common chromate surface treatment. <S> Seen in a great many metal utility items. <S> Not intended to be an "insulator" but perhaps non-conductive as a side effect. <S> The red surface of magnet wire is the enamel insulation, which is applied as a coating (vs. being an extruded plastic outer sheath as most other wire uses). <S> They use a very thin enamel insulation to get many windings into the space available in a transformer, coil, solenoid, motor, or whatever. <S> It is called "enamel", which it probably was in early days. <S> But in modern times, it is a more sophisticated plastic coating of perhaps several layers. <S> Note that red is only perhaps the most popular color. <S> Magnet wire comes in several other insulator colors also. <S> Ref: <S> https://en.wikipedia.org/wiki/Chromate_conversion_coating <S> Ref: https://en.wikipedia.org/wiki/Magnet_wire <A> Chromating is an anti-corrosion surface treatment for metals. <S> It is not used for insulation. <S> That's normal. <A> The layer is greenish-gold in overall color, but has colored patches that may vary from red to green. <S> Chromate conversion coating. <S> More commonly known as yellow chrome.
<S> Wikipedia article <S> I've tested its resistance with a multimeter and it shows infinite ohms <S> Push harder with your multimeter pins, or scratch the yellow surface with a knife first, and you will measure a short circuit instead, through the steel underneath the yellow chrome. <A> As others have said, the yellow color is chromate treatment on top of mild steel. <S> It is less common these days, I think; there are some environmental issues with some versions of it. <S> It's actually quite conductive electrically, to the extent that a version of it is used on aluminum, for example, when we actually require electrical conductivity. <S> Similar parts from the Wikipedia link above: <S> The entire transformer is probably also vacuum impregnated with enamel by dunking it into a liquid in a vacuum chamber. <S> This leaves an almost clear coating on top of the chromate (where it is thicker, as in drips, it will appear more brown). <S> It improves the insulation of the windings, and bonds the laminations together <S> so they are not as likely to buzz at 100 Hz or 120 Hz from the mains (photo of equipment from above link):
The colour can vary a bit, depending on the process and base metal condition.
Is it possible to identify what magnet has crossed over a hall effect sensor? We are currently doing a project with DCC locomotives, involving an LPC1768 microcontroller. The scenario is that we have a railway, and under the railway we have hall effect sensors which the locomotives cross over. Since we will have two different locomotives running on the track, is it possible to identify which one is currently crossing a particular sensor? Since the trains do have their specific addresses in an address byte, would it be possible to measure how the magnet on each train (one magnet is in the form of a cylinder and one of a cube, and we are assuming they should affect the voltage or current differently) affects the hall sensor? Another solution we have already thought of is to run the trains and track where they are based on the magnets they have crossed. But a solution like the one asked for would be nicer and far easier to implement. <Q> Look at RFID tags. <S> They can be microscopically small (and lightweight). <S> And they cost only a few pennies for guaranteed unique identification. <S> You could put a tag in each piece of rolling stock and use inexpensive sensor kit(s) to monitor/record/control the complete makeup of trains. <S> Ref: http://www.pcrnmra.org/pcr/clinics/RFID-in-Model-Railroading-20130123.pdf <A> It's possible, assuming a few things: that the magnets are different (your question wasn't quite clear on that). <S> That your hall effect sensor can pick up the change in magnetic field without saturation and with sufficient sensitivity. <S> If the magnets have different field strengths then you could differentiate the magnets by the difference in voltage from the sensor (assuming it's not saturated, the voltage output of the sensor is proportional to the magnetic field strength).
<S> You could detect the voltage difference with a voltage discrimination circuit built from comparators. <S> If the magnets are like a 'barcode' (like your post might suggest) then it would be more complex. <S> That solution wouldn't be easy to implement, because you would have to monitor the voltage from the hall effect sensor and sample fast enough with an analog to digital converter to see the magnet go by, and you would have to develop an algorithm to do this. <A> Single magnets, no. <S> However, you might try something like placing a pair of magnets in line, with sufficient spacing that a double magnet would produce a double pulse. <A> You need three magnets. <S> Mount one of them under the first locomotive. <S> Place the other two under the second one at some small distance from each other. <S> Now count how many magnets were sensed in the 2 seconds after sensing the first one. <S> If 0, it's locomotive 0; if 1, it's the other one. <S> If you need more locomotives use more magnets, and also use different distances between them. <A> You are getting into the overall problem of occupancy detection. <S> There are no easy answers. <S> Some solutions are: Light detection. <S> IR or visible light detectors are put under the track in various places. <S> Either some ambient illumination is assumed, or IR LEDs are put overhead. <S> Rolling stock on the track is detected by the shadow it casts. <S> This has the advantage of working with any rolling stock without modification, but is susceptible to noise and changes in ambient illumination, and has problems with steady-state levelling. <S> Accumulated dirt is a never-ending problem. <S> It also only measures occupancy at specific points. <S> Current sensing. <S> This measures the current draw from different sections of track. <S> Some friends of mine are into model railroading, and I designed them an advanced DCC current sensor. <S> It worked so well that I even made a product out of it.
<S> Advantages are that it senses whole sections of track regardless of where the rolling stock is. <S> Problems are that unpowered cars aren't sensed unless you put deliberate resistors across wheel sets. <S> 10 kΩ is sufficient for the OC1. <S> Magnetic sensing. <S> This is probably the least popular from what I can tell. <S> It doesn't have the problem of adjusting to the ambient level as light detection does, but not all cars have a sufficiently large magnetic signature. <S> It also requires more circuitry at each point, and is point-based like the light method. <S> All these methods as usually employed only tell you something is there, not what is there. <S> The most common method I've seen to know what is where is to track it in clever software. <S> As long as the software initially knows what's where, it can track it as it moves between sensed segments or points. <S> There is a module in JMRI that already does this, and I've seen it work. <S> However, if I wanted to do that I'd probably use IR pointing down with sensors below the track. <S> That makes both the transmission and detection circuitry simpler. <S> With good software tracking, you only need occasional identification. <S> I'd look into a combination of the two to keep track of where things are.
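The software-tracking approach described above (knowing what's where by following trains between sensed segments) can be sketched in a few lines. This is a hypothetical illustration: the block names, adjacency map and API are all made up, and it assumes each train's starting block is known.

```python
# Hypothetical sketch: attribute each occupancy event to the train that
# could plausibly have moved into the newly occupied block.
class TrainTracker:
    def __init__(self, initial_positions):
        # initial_positions: {train_id: block_name} - must be known up front,
        # as the answer above notes.
        self.position = dict(initial_positions)

    def on_block_occupied(self, block, adjacency):
        """A block sensor fired; credit the first train sitting in an
        adjacent block (adjacency: {block: set of neighbouring blocks})."""
        for train, current in self.position.items():
            if block in adjacency.get(current, set()):
                self.position[train] = block
                return train
        return None  # no train could plausibly have moved here

# Example layout: three blocks in a line, A - B - C
adjacency = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
tracker = TrainTracker({"loco1": "A", "loco2": "C"})
mover = tracker.on_block_occupied("B", adjacency)  # attributed to loco1
```

A real implementation (like the JMRI module mentioned above) has to resolve ambiguity when two trains are both adjacent to the fired block; this sketch simply takes the first candidate.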
You could theoretically have each locomotive pulse a magnet with a unique ID, then detect that with the Hall sensors.
What is the metallic coating on these stranded copper wires? The 5-colored wires' strands look like aluminium, but each is basically copper wire at the centre, with a thin conductive coating (my teacher showed it to me) of silvery colour, so that the copper doesn't get weathered. If the rubbery insulator is removed, and the metallic strands are scraped with a knife or a shaving blade, the silvery-colored layer is removed and the red copper interior comes out. Now, I want to know: what substance(s) are used in this coating? These wires are sold as 5-color electronic wire (for low voltages). I've given a scale also. Since nothing is printed on the insulator, and I buy in retail amounts (1 or 2 yards) from a large spool, I've also no clue to the datasheet right now. (However, these wires are not very costly, around Rs. 10 per yard.) The upper strands (with respect to the photo) of the upper, yellow wire are scraped with a blade and show the red, coppery colour inside; the blue one is shown as a reference or control. <Q> What is it? <S> The wire is similar to that used in Amphenol's Spectrastrip. <S> Figure 1. <S> Amphenol's Spectrastrip. <S> Figure 2. <S> The Spectrastrip datasheet lists the conductors as "tinned copper". <S> Why is it? <S> Figure 3. <S> Despite severe oxidation her popularity remains untarnished. <S> Copper is a great conductor but is prone to corrosion - see the Statue of Liberty. <S> Tin plating brings the following benefits: Corrosion resistance, including marine environments. <S> At high temperatures (> 100°C) the corrosion resistance of copper declines. <S> Soldering is easier as tin is a primary component in solder. <S> Tin plating strengthens the copper wire underneath. <S> But ... Steve Lampen, Belden, in his blog post In Defense of Tinned Copper makes some very interesting observations. <S> Tin coating prevents copper from tarnishing. <S> The green copper oxide is a semiconductor and is generally to be avoided in electrical connections.
<S> At high frequencies, when skin effect comes into play, the tin layer, if used, becomes more prominent. <S> Tin has a resistivity 6.5 times that of copper (\$1.1 \times 10^{-7}\$ and \$1.7 \times 10^{-8}~\Omega \mathrm{m}\$ respectively), so having the signal predominantly in the tin layer results in higher impedance. <S> High frequency cables are not tin plated for this reason. <S> Certain formulations of Teflon (PTFE) are very caustic and can cause oxidation of copper during extrusion. <S> Having a tinned copper conductor reduces this effect. <S> (Silver is an expensive alternative.) <S> In summary ... <S> For low frequencies use tinned conductors for ease of soldering and resistance to corrosion. <S> For high frequencies use bare copper. <A> 1) Because of the (somewhat new) RoHS standards, many wires are now nickel plated and cannot be soldered. <S> You must use crimp connectors and headers for these wires. <S> 2) The Restriction of Hazardous Substances directive has changed the manufacturing process in almost every plant that uses wires in their products. <S> 3) It has been a very expensive conversion process costing thousands of dollars, but it has settled in as the 'norm'. <S> Those that must use tin/lead solder or a silver mix must declare it on the documents for that product, and label it on the product. <S> 4) Lead car batteries would be one example. <S> Doppler radar front-end boards would be another. <S> Pure silver plating is mostly used by the military along with Teflon insulation for higher currents in small gauge wires. <S> I forget the mil-spec number for that but it exists. <S> You can still buy tin/lead solder and tin-plated wire as a hobbyist for personal use, or for in-house test equipment. <S> 5) Be careful of taking cables out of old PCs and appliances for general use. <S> If they are nickel plated you cannot solder them; you must use crimps or acidic fluxes. <S> I found this link and PDF about nickel plated wires.
<S> Lots of details. <S> Nickel plated wires: <S> It is estimated that over 10 000 tonnes of copper wire are plated worldwide per year with silver or nickel. <S> These plated wires are used principally for stranded conductors in high performance electric cable for the aerospace, airframe, defense, computer, telecommunication and professional electronics industrial sectors. <S> In addition, plated wire is used for high temperature cable, spark ignition leads and fuses. <S> Nickel plated copper wires can resist temperatures up to 750°C. <S> They are corrosion resistant and weld easily. <S> Stranded conductors in this material are coated with suitable temperature resistant materials for cables. <S> This coating process requires high temperatures, making it unsuitable for silver plated wires, which would oxidize. <S> A drawback with nickel plated copper is its reluctance to solder easily without special fluxes and the need to plate the nickel under carefully controlled conditions in order to give a pore-free and suitably ductile deposit for drawing. <A> It's almost always tin. <S> Silver is plated on Teflon insulated wire and some others (rare), and silver has a little different look than tin. <S> The Chinese have been known to use aluminum for shielding braid for things like USB cables. <S> I have never seen aluminum as a plating on wire, but it could be done. <S> Lead-tin plating was once common on stranded wire for easy soldering back in the day when TVs were made in this country. <A> It could be aluminium, tin or perhaps even silver. <S> If your teacher said that the purpose is to protect the copper core it could be aluminium, as it conducts electricity really well but doesn't corrode because it forms a protective oxide layer. <S> There's no definite answer with such little information though.
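To see why tin plating hurts at high frequency, you can estimate the skin depth from the resistivities quoted above. This is a minimal sketch using the standard skin-depth formula, assuming non-magnetic conductors and a 100 MHz signal (the frequency is an arbitrary choice for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(resistivity, freq_hz):
    """delta = sqrt(2*rho / (omega*mu)) for a non-magnetic conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * MU0))

# Resistivities quoted in the answer above, in ohm-metres
RHO_CU = 1.7e-8   # copper
RHO_SN = 1.1e-7   # tin

# At 100 MHz the current crowds into the outer few microns of the strand,
# so a tin plating only a few microns thick carries much of the signal.
d_cu = skin_depth(RHO_CU, 100e6)   # roughly 6.6 um
d_sn = skin_depth(RHO_SN, 100e6)   # roughly 16.7 um
```

Since the skin depth at this frequency is on the order of the plating thickness, the higher-resistivity tin dominates the loss, which is exactly why high-frequency cables use bare copper (or silver).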
The coating is most likely tin plating.
Connect anti-static wrist strap to earth wire? I have started doing some hobby electronics at home and got myself an anti-static wrist strap recently. I am unsure as to where to attach it. Most topics I found online regarded PC building and attaching it to the PC case, but I want to use it while handling other things as well; mostly I am scared of killing my Raspberry Pi. Since I live in Europe, one idea I had was to attach the clip to the earth connector on a Schuko (3-prong) outlet. But I'm not sure if it would be at the same potential as my Pi or other things. And what happens if that clip comes off? So is it safe to attach it to the ground wire? If not, where else can I ground myself? <Q> There are several considerations for your wrist strap. <S> A) Safety for you B) Safety for your fragile components and kit C) Process, use them correctly D) Grounded? <S> A) Your safety <S> Your wrist strap should include a large amount of current limiting before connecting to a real earth. <S> In practice, this is a 1 MΩ resistor in the strap, and/or where you clip it to, and/or a connector that you plug into an earthed outlet to contact the ground pin. <S> B) Your kit <S> There is no point grounding yourself if the stuff you're working on can float to any potential. <S> Use a conductive sheet, place all your tools, components and work in progress on it, and ground the sheet (via a safety resistor) as well. <S> Metal foil, sheet or a tray will do. <S> In industry they tend to use conductive plastic, which is nicer to work on. <S> C) Process <S> Having the right equipment doesn't help if you don't use it correctly. <S> When your Pi arrived through the post, it was (obviously) not connected to your grounded sheet. <S> At some point, you have to connect them, and it's at that point that a damaging charge transfer could occur. <S> When you connect them, make sure the point that connects first is a grounded point of the Pi, a connector shell for instance.
<S> Before you unwrap a component from its conductive bag, or pull it from the conductive foam, touch the bag or foam to bring it to the same potential as you (ground). <S> D) Grounding? <S> Once you, your tools, your components and your work in progress are all at the same potential, it doesn't matter whether the whole equipotential group is actually connected to ground or not. <S> In practice, it's a whole lot easier to keep track if it is, and as soon as you use a grounded soldering iron, 'scope or power supply you have that earth connection, so you may as well start off with it. <A> If the resistor is present, it is safe to connect the wrist strap to earth. <S> If not it is dangerous, because any voltage present in the equipment you are working with would easily discharge through you and the wrist strap. <A> The idea is specifically to bring yourself to the same electrical potential as the ground point of the device that you're operating on. <S> If you've got a way to secure the ground of the Pi to your wrist strap, do that; otherwise you can make a quick-and-dirty ESD mat by putting a piece of thin cardboard or cotton sheet over a sheet of aluminum foil. <S> You can also ground to a metal pipe or any other large metal object, but I wouldn't connect to your home's electrical ground -- you can get shocked that way if there is any defect in your home's wiring. <A> I have always thought the workbench should have a single earth ground attachment point, with the mat, power supplies, electronics and wrist strap all connected to that one point. <S> Most wrist straps include a bleeder resistor so you won't get zapped too hard if you touch anything hot. <S> That protects both you and the circuitry. <S> Whether you connect the wrist strap to the mat or back to the central ground isn't that much of an issue. <S> The only real concern I've had in the past is when I've used wrist straps that have an alligator clip to connect to ground.
<S> If that clip comes off, then whatever it was connected to (you, in this case) no longer has a reliable ground. <S> So in my later years I started hard-wiring everything. <S> Which is good. <S> But the lazy part of me doesn't want a wrist strap on at all, so I figure simply touching the mat is usually enough to discharge any static I have in my body. <S> My soldering iron is plugged into the same ground source, as are the workbench power supplies. <S> One fallacy I think people need to dispel involves working inside PCs. <S> All the manuals tell you to first unplug it. <S> I humbly suggest this is not a good idea, because if you leave it plugged in the entire chassis is at ground potential, so resting your hands on the metal discharges you enough that it is safe to install RAM and such. <S> It has always worked for me. <S> When I work on very high-voltage circuitry (like broadcast transmitters), I always follow the left-hand rule: <S> Keep your left hand behind your back. <S> That way any accidental discharge will most likely just go through a hand or maybe an arm, but not through your heart. <S> Same concept as a wrist strap, without having a wire hanging from your arm.
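As a sanity check on the bleeder resistor mentioned above, here is the worst-case current through your body if the strap were accidentally connected to live mains. The 230 V figure is an assumed typical European mains voltage:

```python
# Why the ~1 Mohm resistor in the strap matters: it limits the current
# through your body if you touch a live conductor while strapped to earth.
MAINS_V = 230.0   # assumed European mains voltage
R_STRAP = 1e6     # typical wrist-strap bleeder resistor, ohms

fault_current_ma = MAINS_V / R_STRAP * 1000   # about 0.23 mA
# That is well below the roughly 1 mA perception threshold, while still
# bleeding static charge away in a fraction of a second.
```

Without the resistor the same fault current would be limited only by skin resistance, which is why a strap lacking the resistor is dangerous.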
Your anti-static wrist strap should include a built-in resistor in the order of 1MOhm.
Why reverse polarity causes damage Is it because it can exceed the reverse breakdown voltage of semiconductors? For instance, in low voltage devices, say 1-5 V, is the rated reverse voltage a lot lower than that, so that damage occurs? Also I watched a video where a car battery was accidentally connected backwards and the main 80 amp fuse blew, yet no other damage occurred. Makes me wonder what failed that drew so much current? <Q> So speaking to the semiconductor / silicon side of things. <S> The expectation is that people who use the device will operate it correctly, so reverse wiring really isn't designed into the product (with some exceptions). <S> Most of the reverse bias (forward conduction in the protection diodes) arises from parasitic structures in the die, and as such they will easily conduct at very low resistances, and if you try to drive those structures hard the metal layers or bond wires are going to lose big time. <S> Overheat, melt and be irrecoverably damaged, i.e. "magic smoke". <S> The examples that need to handle this reverse bias and survive will be designed to do so. <S> But typically you don't design to protect people from their own stupidity unless it's endemic. <S> One concern that will always come up is the event of latch-up with parasitic SCRs in the substrate. <S> These can be activated by reverse bias or very high \$\frac{\partial V}{\partial t}\$ spikes. <S> However, as much as this is a reality in the bare process, a lot of time is spent in ensuring that this cannot happen if the product is designed properly. <S> In short, during qualification of a new process this is analyzed and studied extensively, and then parameters are put in place that are used to ensure that latch-up cannot happen in the final product. <S> These are manifest in the DRC (Design Rules Check) and have to do mainly with spacing of structures and implants.
<S> Some designs may require that these DRC be waived; if this is the case then this would be noted in the datasheet. <S> But for most products this is not going to happen. <A> Is it because it can exceed the reverse breakdown voltage of semiconductors? <S> That's one of the reasons, but you'll have to look at the device as something much more complex. <S> For example, think of a transistor that would normally be "off", but negative biasing puts it into conducting mode. <S> In that mode, it might carry a much higher current than it was designed for, leading to the destruction of the device. <S> Then you've got protection diodes, whose job is to break down in case of ESD, or in case you suddenly stop supplying current to a motor, and that causes, by the inductive properties of that motor, a negative voltage spike of very large amplitude (but only for a very short duration), which you need to "short away" as quickly as possible. <S> Those diodes would normally be in "non-conducting" mode, but if you wire them up reversely, they will constantly conduct electricity, overheat and die. <S> More examples: <S> In classical switch mode power designs, there's often a diode antiparallel to the excited coil; in normal bias without the switch mode controller active, it doesn't let any current through. <S> If you plug that circuit in reversed, the diode will constantly carry high current and evaporate. <S> Electrolytic capacitors (typically, the round cans) are voltage sensitive and will quickly be destroyed by reverse voltage, reducing their oxide dielectric layer, leading to them gassing, high currents flowing, liquid blowing up, fires, the four apocalyptic horsemen and more Justin Bieber albums. <A> I watched a video where a car battery was accidentally connected backwards and the main 80 amp fuse blew yet no other damage occurred. <S> Makes me wonder what failed that drew so much current? <S> Nothing 'failed'.
<S> Luckily rectifier diodes can handle very high peak currents, so provided the fuse opened quickly they should have been protected. <S> Here is a typical alternator circuit. <S> The arrows on the rectifier diodes show the direction of current flow. <S> Normally the diodes steer current from the stator coils into the battery, but block current from going from the battery back into the coils. <S> However if the battery is connected in reverse then its positive terminal is connected to chassis ground, so it can push current straight up through the diodes. <S> Most solid state devices have components that are polarity sensitive, whether by design or as an inherent part of their makeup. <S> If the device is not specified to handle reverse polarity then you must assume that it can't. <S> Don't plug a power supply into a device it was not designed for without first verifying that the voltage and polarity are correct. <S> Never connect an electronic device directly to a car battery without a fuse!
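As a rough illustration of why the fuse blew, you can estimate the fault current through the forward-biased rectifier path. All the component values here are assumptions chosen for the arithmetic, not measurements from the video:

```python
# Illustrative numbers only: a reversed 12 V battery forward-biases the
# alternator's rectifier diodes, so the only things limiting the current
# are the diode drops and the wiring resistance.
V_BATT = 12.0     # nominal car battery voltage (assumed)
V_DIODE = 0.7     # forward drop per rectifier diode; two in the path assumed
R_WIRING = 0.05   # ohms; assumed cable + connector resistance

i_fault = (V_BATT - 2 * V_DIODE) / R_WIRING   # roughly 212 A
blows_80a_fuse = i_fault > 80                  # comfortably above the fuse rating
```

Even with generous assumptions the current is several times the 80 A rating, so the fuse opens quickly, which is consistent with the diodes surviving thanks to their high peak-current tolerance.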
With the battery connected in reverse the rectifier in the alternator would have been forward biased, causing a very high current to flow because it is connected directly to the battery.
Is there any impact of the electron's spin in a CRT? fact-1. An electron's (or any other particle's) magnetic behavior depends upon mass, charge and spin. fact-2. All electrons are not the same. Though all electrons have the same mass and charge, an electron can have either of the 2 possible spins, (+1/2) and (-1/2) (though later on the electron can flip to the opposite-spin state). fact-3. Moving electrons in a CRT TV are deflected using magnetic coils. Now, my question is: are all the electrons coming out from the cathode in a CRT TV deflected irrespective of spin (i.e. does deflection in a CRT not depend on spin)? Or does the cathode somehow release all electrons of the same spin? Or does deflection indeed depend upon spin? <Q> So there are aspects of your question that are wrong; let's first get that out of the way. <S> Your "fact 2": wrong. <S> All electrons are the same. <S> A more correct statement is that they are indistinguishable from one another. <S> In fact it is this very behaviour that allows Fermi statistics to arise, and thus gives us a model that is very powerful and accurate in predicting the behaviour of an ensemble of particles. <S> In a quantum system the various states that these fermions can take on are determined by the system, but you cannot say which electron is actually in a given state. <S> Another assumption you are making: you are conflating quantum effects with macroscopic effects. <S> In a CRT you're not going to see any quantum effects. <S> The stream of electrons looks like a current, and such currents respond to magnetic fields. <S> Spin effects do take place, but they are such a small effect that other effects are much much more dominant. <S> For example the natural repulsion electrons have for each other is very dominant in determining focus size. <A> See Lorentz Force on a Moving Charge in a Combined Electric and Magnetic Field. <S> https://en.wikipedia.org/wiki/Lorentz_force <A> In principle I think you could sort electrons by their spins.
<S> It's a twist on the Stern-Gerlach experiment with electrons. <S> You need not a magnetic field, but a magnetic field gradient. <S> You can google Stern-Gerlach with electrons and see more.
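The Lorentz force referenced in the answer above can be computed directly, and spin appears nowhere in it. A small sketch with made-up field and velocity values (only the electron charge is a physical constant):

```python
# Lorentz force F = q(E + v x B): CRT deflection depends on charge,
# velocity and the fields; there is no spin term.
Q_E = -1.602e-19  # electron charge, coulombs

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# Electron moving along +x through a deflection field along +y
# (velocity and flux density are arbitrary illustrative values):
F = lorentz_force(Q_E,
                  (0.0, 0.0, 0.0),      # no electric field
                  (1e7, 0.0, 0.0),      # m/s
                  (0.0, 1e-3, 0.0))     # tesla
# v x B points along +z, so the negative charge is pushed along -z.
```

Swapping a spin-up electron for a spin-down one changes nothing in this calculation, which is the formal version of the accepted answer's point.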
No, the magnetic deflection force on a moving charged particle depends on the electron's charge and velocity, and the electric and magnetic field strengths, but does not depend on electron spin.
Explosion in a battery I recently joined a 9 volt cell and two 1.5 volt cells to get an output voltage of nearly 6 volts. For 15 minutes or so, the battery worked perfectly. It did not even show any sign of heating, and in the next 2 minutes the two 1.5 volt batteries hissed out and left a lot of smoke. I am not even sure if it's called an explosion. All the batteries were general purpose Walmart batteries. The batteries were connected in the following way: simulate this circuit – Schematic created using CircuitLab My guess is that these particular cells were faulty/defective from the very beginning. Can there be any more possible and realistic reason for this kind of explosion? PS: All the values in the circuit have been measured using a DMM and rounded off to the nearest integer or 10th of an integer. <Q> If your circuit is drawn correctly, then the current is flowing backwards through the 1.5V cells. <S> This is like trying to (re)charge them beyond their design voltage. <S> It is very bad for them, and yes, they may catch fire. <S> An actual explosion is possible but very unlikely, as the case is designed to vent off the pressure and stop the explosion - and it sounds like this is what happened here. <A> Do not try to subtract battery voltages: "subtraction" amounts to charging, and charging is something that must only be attempted with care to confine it to what is appropriate for the chemistry and charge state of the cells in question, which for non-rechargeable types (or rechargeable types that are already "full") amounts to "don't do that". Do not place an ammeter across a battery or power supply: an ammeter is intended to be inserted into a break in a legitimate circuit.
<S> If you apply one directly across a source (or for that matter a load resistor) you short-circuit it and end up essentially measuring the internal impedance of the battery in comparison to that of the meter, which is both something of little relevance, and something that can lead to potentially dangerous levels of current flow. <A> As Chris says, you were charging the 1.5V cells in a way they weren't designed to be. <S> If those cells were normal alkaline, they are not designed to be charged at all. <S> Never put alkaline cells in a position where current flows into their anode. <S> Also, never charge a cell beyond its nominal voltage. <S> Charging batteries is a dangerous science depending on the type, and the current and voltage have to be carefully controlled.
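A back-of-envelope loop calculation shows how hard the 1.5 V cells were being reverse-charged. The internal resistances below are assumed typical values for a 9 V battery and AA cells, not measurements of the actual parts:

```python
# Illustrative only: why "subtracting" a 9 V battery against two 1.5 V
# cells forces a large reverse-charging current through the cells.
V_9V = 9.0
V_CELLS = 2 * 1.5          # the two cells oppose the 9 V battery
R_INT_9V = 1.7             # assumed internal resistance of the 9 V battery
R_INT_CELL = 0.15          # assumed internal resistance per 1.5 V cell
R_METER = 0.1              # assumed ammeter shunt resistance

r_total = R_INT_9V + 2 * R_INT_CELL + R_METER
i_loop = (V_9V - V_CELLS) / r_total    # roughly 2.9 A driven backwards
```

Several amps forced backwards through alkaline cells rated for a few hundred milliamps is ample to make them vent, consistent with the hissing and smoke described in the question.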
What you are trying to do here is conceptually flawed, and as you were fortunate to discover without injury, potentially dangerous. If you want a 6v battery, don't try to subtract voltages, rather get a collection of matching cells which will total to that range, for example 4x 1.5v-nominal cells, or even an old style lantern battery (which is typically just four large cells in a common overcase).
Detecting a wire break I'm trying to design a circuit which is essentially a motor controlled by a microcontroller with some feedback. The motor will have its own isolated power supply and the actual on/off switch will be a relay + transistor. What I'm looking for is to have some sort of feedback if the motor side is not working (power supply is dead or a wire break, etc.), as currently if the controller switches the transistor/relay, there is no way of telling if there is current going to the motor. The motor may also be far away (let's say a meter or two), increasing the chance of a wire break. Currently, the only way I can think of doing this is to use an optocoupler with a current divider in parallel with the motor line. I have no idea if this is a proper way, but I have gotten it to more or less work (with some guesswork + trial and error on the resistor values). An issue though is that it is not perfect, as I'm guessing when the motor is turned on/off (spinning up or spinning down or stalled), the current draw changes, which affects the reading a bit. What would be a better/proper way to do this? Or what should I look up, as trying to search this in Google just returns many results on how to measure current using a meter or using the continuity test. simulate this circuit – Schematic created using CircuitLab Side note: My knowledge of electrical engineering is pretty limited and my schematic is pretty rough, just to get the idea across: I've omitted some parts, like the diode across the relay coil/motor, and the transistor might be the wrong one, etc. <Q> These sensors basically give you a measure of current through a wire. <S> You will need to find one which fits your current range. <S> One example of such a sensor is the ACS712T by Allegro. <S> You can even find these sensors ready-mounted on breakout boards (at low prices), making them very easy to use.
<A> If you're only concerned with an open circuit, try this: <S> Wind enough turns around a reed switch so that there'll be enough of a field to hold it closed when there's current through the motor, and use a wire size that won't starve the motor. <A> The best way to do this is to get a motor with an optical encoder attached to its shaft. <S> As the motor turns, the A and B quadrature waveforms from the encoder can be fed back to the microcontroller to detect that the motor is turning. <S> You can also detect the direction the motor is turning and also the motor rotational speed. <A> You've answered your own question already. <S> The best way to do this is to monitor the current. <S> I'll go so far as saying this is the only way you should consider doing it (being an engineer of 20 years standing working on electronics and embedded software, including fault detection on automotive systems). <S> You've even mentioned this in your question. <S> But then you've asked people "please can you help me, because Google has told me the answer already. <S> What should I do?" <S> Answer: you should follow the answer you already got out of Google!!! <S> ;) <S> You may actually be trying to ask "Google has told me how to measure current using a resistor and a meter. <S> How do I use that technique to get the current measurement into my microcontroller?" <S> For that, I'll give you a Google search with several hits for useful tips. <S> I will note that you also want to add diodes on the ADC input to protect against voltages higher than +V or lower than 0V. <A> You can put in a relay with two contacts. <S> When the motor is off and the circuit is OK you will get a signal from the optocoupler. <S> In this way, when the motor is on, the optocoupler circuit will not interfere with the motor. <S> In your design the optocoupler diode and R2 are troubling, since the diode will "steal" up to 2.5 V from the motor's power supply and R2 will additionally decrease the motor voltage.
<S> However, in my modified design you cannot detect if something happens while the motor is running, only when the relay is switched off. <S> I've calculated R1 based on an LTL-307EE and a 9 V power supply, with the condition R1 >> Rmotor, and it is roughly 250 Ω. <S> In order for this circuit to work, R1 >> Rmotor must be satisfied. <S> I guessed 10 Ω for the motor, but you need to measure it. <S> Also, around 30 mA of current will flow through the motor when the test circuit is on. <S> That current must not be enough to spin the motor.
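A quick check of the numbers in the answer above. The LED forward drop is an assumed value for an optocoupler input LED; the supply voltage, R1 and the guessed motor resistance come from the answer itself:

```python
# Back-of-envelope check of the optocoupler test path with the relay off.
V_SUPPLY = 9.0     # supply voltage from the answer
V_LED = 1.2        # assumed optocoupler LED forward drop
R1 = 250.0         # ohms, value calculated in the answer
R_MOTOR = 10.0     # ohms, motor resistance guessed in the answer (R1 >> R_MOTOR)

i_test_ma = (V_SUPPLY - V_LED) / (R1 + R_MOTOR) * 1000   # about 30 mA
```

The result matches the "around 30 mA" figure, and because R1 dominates the loop, nearly all of the supply voltage drops across R1 rather than the motor, which is why the test current shouldn't spin it.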
Another way to do it would be to use a hall effect sensor. Of course there are numerous other ways you can detect faults in the system, but the gold standard is monitoring current.
Why is the average voltage across an ideal inductor zero? In some texts it is mentioned that the average voltage across an ideal inductor is always zero. How can we derive this conclusion by using V = L (dI/dt)? Why is the average voltage across an ideal inductor always zero in steady state? Can you give an example when the source voltage is a PWM signal and a sinusoidal signal? edit: They sometimes give buck converters as an example. But how can they assume the current will be constant before making this assumption? <Q> The average voltage across an ideal inductor is zero just as the average current into an ideal capacitor is always zero. <S> If the average current into a capacitor were not zero it would charge to infinite volts. <S> If the average voltage across an inductor were not zero it would take infinite amps. <A> The correct answers were already given. <S> As a supplement, I will try to give a formal mathematical derivation. <S> To begin with, you can integrate both sides of the equation to get a formula for the current through an inductor at time \$t\$: $$V(t) = L\frac{\mathrm{d}I(t)}{\mathrm{d}t} \implies \int_0^t V(\tau)\,\mathrm{d}\tau = LI(t) \implies I(t) = \frac{1}{L}\int_0^t V(\tau)\,\mathrm{d}\tau$$ Here we assume that \$I(0) = 0\$; otherwise add \$I(0)\$ to the expression for \$I(t)\$. <S> "The average voltage across an ideal inductor is always zero" actually means the average voltage over a period is zero (otherwise it is meaningless to impose such a condition). <S> That is, here we assume that the voltage across the inductor is periodic. <S> Assume that the voltage across the inductor is a periodic function with period \$T\$. <S> "Is periodic with period \$T\$" is just another way to say that \$V(t) = V(t + T)\$ for any \$t\$. <S> Let's calculate the current after \$n\$ periods, i.e. when \$t = nT\$ (\$n\$ is an integer): $$I(nT) = \frac{1}{L}\int_0^{nT} V(\tau)\,\mathrm{d}\tau$$ <S> Here we can use the property of an integral \$\int_0^{x+y} f(t)\,\mathrm{d}t = \int_0^x f(t)\,\mathrm{d}t + \int_x^{x+y} f(t)\,\mathrm{d}t\$ to break the integral into a sum: $$\begin{split}I(nT) &= \frac{1}{L}\left(\int_0^T V(\tau)\,\mathrm{d}\tau + \int_T^{2T} V(\tau)\,\mathrm{d}\tau + \dots + \int_{(n-1)T}^{nT} V(\tau)\,\mathrm{d}\tau\right)\\&= n\cdot\frac{1}{L}\int_0^T V(\tau)\,\mathrm{d}\tau \quad \text{because } V(\tau) \text{ is periodic with period } T\\&= n\cdot I(T)\end{split}$$ <S> From this expression you can see that if the integral over a whole period $$I(T) = \frac{1}{L}\int_0^T V(\tau)\,\mathrm{d}\tau$$ is not zero, then after \$n\$ periods the current through the inductor will be \$n\$ times larger: $$\boxed{I(nT) = n\cdot I(T)}$$ As \$n\$ goes to infinity, so does the current. <S> Thus, the only way to keep the current from going to infinity is the condition \$I(T) = 0\$, which is equivalent to $$\int_0^T V(\tau)\,\mathrm{d}\tau = 0$$ because \$\frac{1}{L}\$ is just a constant factor. <S> Just to remind you, \$T\$ is the period of the voltage. <S> The lower limit of integration is \$t = 0\$. <S> "Zero time" can be an arbitrarily chosen instant of time, because the process is periodic. <A> Think about what would happen if the average (DC component) of the voltage across an inductor were not zero. <S> The current would build up linearly. <S> For an ideal inductor, this would continue as long as the voltage was applied, and the current could become arbitrarily large. <S> Real inductors have some resistance, which can be thought of as being in series with the pure inductance. <S> Eventually the current will reach a steady state where all the voltage is across this resistance.
<S> Even in real inductors, that resistance can be small, so currents resulting from a steady applied voltage would be large. <A> So when using ideal inductors to model real-world problems, the current is finite and the average voltage (over all time) is zero. <S> In thought experiments this rule can be ignored, but you won't get results that are applicable to the real world. <A> The fact that the average voltage across a pure inductor is zero can be proven from the equation $$V = L \cdot \dfrac{d}{dt} I$$ <S> We can do this by considering what current would be flowing otherwise: $$I = \frac{1}{L}\int^{+\infty}_{-\infty} V \,\text{d}t$$ From this, no matter how small the average of \$V\$ is, other than 0 V, the current \$I\$ would eventually become infinitely large, which clearly is impossible. <S> This does not mean you can't ever measure a non-zero average voltage across an inductor, but that's due to all real inductors having series resistance. <S> A similar argument explains why the average current in a capacitor must be zero. <S> This answer is essentially that given by Andy and others, except for my added mathematical notation.
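The periodic-voltage argument above can also be checked numerically. The sketch below (all values are arbitrary, chosen only for illustration) integrates \$di/dt = v(t)/L\$ for a square wave whose average over one period is deliberately nonzero, and shows that the current after \$n\$ periods is \$n\$ times the current after one period, i.e. it grows without bound:

```python
# Numerical check: for a periodic voltage with nonzero average, the inductor
# current after n periods is n times the current after one period.
L = 1e-3                  # inductance [H] (arbitrary)
dt = 1e-7                 # integration step [s]
STEPS_PER_PERIOD = 1000   # so the period T = 100 us

def v_at(step):
    # +2 V for the first half of each period, -1 V for the second half:
    # the average over one period is +0.5 V, nonzero on purpose.
    return 2.0 if (step % STEPS_PER_PERIOD) < STEPS_PER_PERIOD // 2 else -1.0

def current_after(n_periods):
    i = 0.0
    for k in range(n_periods * STEPS_PER_PERIOD):
        i += v_at(k) / L * dt   # forward-Euler step of di/dt = v/L
    return i

i1 = current_after(1)   # 0.05 A after one period
i5 = current_after(5)   # five periods -> five times the current
print(i1, i5)
```

Only a zero-average waveform keeps the simulated current from ramping up period after period, matching the derivation above.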
Because the current through an ideal inductor is proportional to the time integral of the voltage it sees, and if the average voltage is not zero the time integral is infinite.
How to speed up Modelsim simulation How can I get Modelsim to run faster for simulation rather than something in the picosecond range (time interval)? Are there any other methods for speeding up simulation? It takes 45 minutes to get to 1 ms as of now. I want the simulation to run for 20 ms to check on certain counters, timer modules and events. The system clock runs at 50 MHz. And if there is an option, will there be any drawbacks, for example missing events etc.? <Q> This is a really common issue for all FPGA developers. <S> Here is my advice (there are probably many other methods). <S> You just need to separate your design into smaller modules (or only look at one process at a time). <S> Or you can define different constants for simulation like this: CONSTANT MY_CONSTANT : integer := 50; -- for simu -- CONSTANT MY_CONSTANT : integer := 500; -- for real Doing this for every counter can really save you time. <S> And finally you can obviously accelerate your clock too. <A> Ensure that your timescale and time precision are set appropriately for your design. <S> If the system clock is 50 MHz, you do not need 1 ps resolution. <S> By reducing the time precision the simulator will evaluate fewer events and it should help the simulation speed. <S> For Verilog, use the timescale directive: `timescale 1ns/100ps <S> The first argument is the timescale - this will be used as the unit when using delays such as #10 . <S> The 2nd argument is the time precision. <S> For a 50 MHz design where that is the highest frequency you need to simulate, 1ns/100ps would be appropriate. <S> For VHDL, I don't recall how this is controlled, and if it's a language construct or tool specific. <S> But the same concept holds. <A> Depending on your install, your simulator resolution may be picosecond by default. <S> Check your modelsim.ini and look for the Resolution variable under the [vsim] header. <S> Alternatively you can force the resolution on the command line.
<S> You are very close with your example. <S> The syntax is vsim -t ns for nanosecond resolution. <S> Note that Verilog's timescale works very differently from VHDL's handling of time. <S> Since time is a unit in VHDL, the time reference nature of timescale isn't meaningful (all wait for ... have an explicit time, not implicit like Verilog's # ). <S> The resolution parameter for Modelsim is more analogous to the precision in timescale , but rounded down to the smallest precision. <S> So if you use a Verilog timescale like the one dwikle suggested, Modelsim will use picosecond resolution. <S> Now, if your clock is 50 MHz, then you'll need at least nanosecond precision. <A> I just came here for the issue of simulation speed. <S> I disabled showing errors in the Transcript section during simulation, and it got much faster. <S> Just try it.
First you can decide to watch only few signals, so that the calculation will run faster.
Frequency Synthesis from 1MHz to 800MHz I've been trying to plan out a new project for a while. It's a wideband antenna analyzer for radio applications. I've figured out most of the details, but the most important part - the frequency source - has me stumped. I'm trying to figure out the best way to cover the frequency range of ~1MHz to ~1GHz, which would cover pretty much all of the communications frequencies in use today. I was originally planning on using a DDS chip like the AD9910, which is around $40 and about the most I want to spend on a single component. The problem with that chip is that even with a 1G reference clock it still goes downhill pretty quickly after 400MHz. Analog has more options, but the price ramps up pretty steeply after this chip. The Si570 series from Silicon Labs was another idea I was tossing around, but the output of the 10M-1.4G chip is LVDS which is not easily converted to a sine wave without a lot of filtering. A PLL was something I was tossing around, but I haven't done much research into them and it seems that most chip offerings are designed for higher microwave applications. I know regardless of what I use I'll be including switchable filter banks to cover the entire range. I would love to hear input from some people who have more experience with RF design and see what you guys think I should be using to generate these frequencies. And by all means, feel free to tell me that what I'm doing is a very tall order. And yes, I'm aware there are commercial options that I could buy instead of building an antenna analyzer myself, but where's the fun in that? This will be used for mostly amateur radio applications, so I'd love to homebrew something together. UPDATE: I'm now considering using the AD9910 coupled with a frequency multiplier of some kind to reach the desired range of 1 to 800 MHz. This could be the best option, if I can find a suitable chip for a reasonable price. Thanks for the help.
<Q> At around $12, quantity 1, on Digikey, this will successfully implement 10MHz to >1GHz signal synthesis. <S> It also has an internal VCO, so its output range will not be dependent on an external VCO's frequency range. <S> Along with a high-quality DDS, though not necessarily one as high-spec as the AD9910 that you suggested, you could probably cover audio (kHz) to 50MHz effectively for HF radios, then cover 30m/10MHz and higher frequencies with the LMX RF synthesizer. <A> You might want to try a combination of a high frequency RF synthesiser, say the ADF4360-7, and your DDS for the low frequency tranche. <S> The RF synth can generate the clock signal for the DDS. <A> As usual it's the higher frequencies that will cause the most heartache. <S> Yes, DDS is good into the low hundreds of MHz and you can have a nice little low pass filter on the output to keep sinewave quality. <S> As you drop in frequency the sinewave quality improves because more samples are used in the generation of the sine wave. <S> However, don't rule out that some "variable" sinewave purity filtering may be needed all the way down to the low tens of MHz and this will still be a little tricky. <S> The problem with hundreds of MHz is that to obtain decent waveform purity, sine-wave voltage-controlled oscillators are used and these are limited in their control range to about 2.5:1. <S> This is because they use varactor diodes and they usually have a maximum capacitance tuning range of only about 6:1. <S> It's the square root of the capacitance that controls the frequency change, hence why I mentioned the overall VCO tuning range to be about 2.5:1. <S> So, using a PLL (or maybe two) create two "locked" oscillators that cover the range 1 GHz down to about 160 MHz. <S> That's how I would approach it.
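The "about 2.5:1" VCO tuning range quoted above follows from the LC resonance formula \$f = 1/(2\pi\sqrt{LC})\$: with L fixed, the frequency ratio is the square root of the capacitance ratio. The sketch below uses the 6:1 varactor range mentioned in the answer; the concrete tank values are made up for illustration.

```python
# Frequency ratio of an LC VCO whose tuning capacitance spans cap_ratio : 1.
import math

def lc_frequency(l_henry, c_farad):
    # f = 1 / (2*pi*sqrt(L*C)) for an ideal LC tank
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def tuning_range(cap_ratio):
    # f_max / f_min when C swings from C_min to C_max = cap_ratio * C_min
    return math.sqrt(cap_ratio)

print(round(tuning_range(6.0), 2))   # ~2.45, i.e. roughly the 2.5:1 quoted

# Sanity check with concrete (made-up) tank values: 10 nH with 1..6 pF
f_hi = lc_frequency(10e-9, 1e-12)
f_lo = lc_frequency(10e-9, 6e-12)
print(round(f_hi / f_lo, 2))         # same ratio
```

This is why a single varactor-tuned VCO can't span 160 MHz to 1 GHz (a 6.25:1 range) and multiple locked oscillators or dividers are needed.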
Texas Instruments has a component that can do the majority of your desired RF output range: LMX2571 .
Exactly how much programming will I be doing if I work as a hardware design engineer? I have a question for current hardware design engineers or others, closely related to that field. Say I want to be a computer hardware engineer for the automotive industry. I very much like working with hardware, but much less fond of writing software. I heard however that I will have to write the software to command the hardware I build/test. How much programming skills do I need? Approximately what part of my job (in %) will consist of writing computer programs? How advanced will my programming ability need to be (for example, will the skill level acquired in CS I, CS II and Data Structures courses offered in most universities be enough)? Thank you in advance. EDIT: Thank you all for the replies. I did get an idea what the job will be like in regard to programming. I already have decent skills in C++, Java and Bash scripting, so I was wondering how much of my time will consist of pure programming, but as I see, it varies greatly. Again, thanks for the replies, they were really helpful. <Q> Python and Tcl (plus bash) come to mind -- <S> lots of tool-flows have scripting interfaces, and they often leverage these two languages. <S> Will you be writing the best, most performant, algorithmically pure code? <S> No. <S> Will you sling something together that won't win a beauty contest but end up saving your team hours in the long run (assuming you don't have a tools guy)? <S> Heck <S> yes. <S> Personally I have a bit of a software background, so I'm happy to sling around C, Python and Tcl. <S> When it comes to designing boards with SoCs and such, it is very helpful to have a blending / cross-over at low-level where the HW guy knows enough to work with that code and modify it as necessary. <S> Specifics of course will vary based on your job title, but I feel 100% comfortable saying knowing how to write basic scripts in Python or bash can only help you in your career. 
<S> That knowledge can be picked up in a couple of hours from some great tutorial websites, assuming you've got the basics down (don't most EE degrees now at least require CS 101?). <A> If you are a hardware designer in a normal-sized company you will almost never code anything, apart from your own personal test code (time a response, flash some LEDs, make sure the different interfaces are functioning, etc.). <S> If you are in a small company and they can only afford to hire you, you will be doing what you can and people will be screaming at you for no reason, thinking you are a one-man army. <S> I wouldn't recommend it. <S> The responsibilities of a hardware designer are to elaborate the schematics from the problem analysis documents, construct prototypes, test, debug, optimize cost, test more, debug more, and figure out how to package it all in the smallest enclosure possible, while respecting constraints. <S> You will have more than enough to do without ALSO having to develop a full software stack. <S> If they insist you have to do it all, I would suggest asking them to pay you both salaries anyway. <A> If you want to do real, conceptual design and advance in your career, I would recommend you to become capable in embedded firmware as well as hardware. <S> Someone has to make the trade-offs when designs are initiated, and knowledge of what is practical, processor speed vs. overhead vs. power vs. packaging requires a good grasp of all of the facets. <S> There are almost no designs left without a processor of some type. <S> I know there are firmware specialists who are quite happy staring at the screen all day, and hardware guys who try to avoid all firmware. <S> But if you are a systems guy, you will need to be able to do both. <S> My experience is that the firmware takes longer if it is part of your job, but even if you have a firmware guy assigned to a project you need to be able to troubleshoot at the code level.
It's always useful for a HW engineer to be able to write code, not necessarily in the sense of architecting some grandoise software application, but it's extremely useful to be able to sling scripts around.
How to convert 700V AC to 5V DC 500mA? I have a device that requires approximately 500 mA at 5 VDC, and I am required to get this power from two legs of a three phase power supply. The voltage between these legs can be between 110 and 700 Vrms. This device and its power supply will be enclosed in a small, hot area which will be exposed to fairly large amounts of vibration. There is no need for isolation, and the transformers I have found are too large anyways. I had previously used this design: http://www.ti.com/lit/an/slua721/slua721.pdf as a starting point for a supply that operated up to 525 Vrms. With some small modifications and a simple DC/DC buck supply at the output I was able to get the required 500 mA at 5V DC, but now of course the bosses say we need to push the input range up a bit. I went back to try and redesign this to work with 700 Vrms, and I have run into some problems. For one, my rectified DC voltage is going to be up to 1000 V. Since this design uses BJTs, this seems to limit my choice in transistors to those which can only withstand 200 mA max or are too large for my application. I am thinking that an IGBT would probably be a better choice, but I am unsure of the implications of replacing the BJT in this circuit: Can anyone tell me if it would be possible to replace Q1 with an IGBT to handle higher voltages? If there are better suggestions on how to get from 700 Vrms to 5 VDC I am open to suggestions. [Edit 1:] This device will be located in the same physical location as the high voltage load that is being powered by the three phase supply. It is essentially inside of the casing of a three phase motor. There is no realistic possibility of a human coming close to this device while it is powered. I have isolated DC/DC supplies coming off of the 5 VDC output for powering the microcontroller and sensors. Given this, is isolation really necessary?
[Edit 2:] I have not been able to locate any potential transformers that might be small enough for this application. Could someone point me in the right direction? <Q> Is the supply power at a consistent AC frequency? <S> If so, use a small transformer to take the worst of the bite out of the high voltage. <S> You need like 3 VA (not 3 kVA, just 3 VA) so any transformer will do. <S> So for instance if you can find a 575V to 48V transformer, that will re-range your voltage to 9-60VAC give or take. <S> Sorry, didn't realize I essentially duplicated Transistor's comment, but I'm saying don't brew your own transformer, buy one that is UL/CSA/CE listed for at least near the high voltage. <S> And you really, really want the isolation. <S> Seriously. <S> And that's why to use listed parts on the HV side. <A> You should get a low-power potential transformer (PT) that will step down your 700VAC to ~120VAC. <S> This will enable you to safely generate an isolated supply using normal 5V offline power converters. <S> DO NOT attempt to directly do non-isolated power conversion from your AC supply. <S> With highly variable input voltage, however, it may be necessary to rectify the PT output and use a high-input-voltage buck converter to produce the required 5V. Possible small PTs could be found from Hammond Manufacturing (240 to 600 VAC nominal inputs), FASE (100 to 1500 VAC nominal inputs), and Langer-Messtechnik (100 to 1000 VAC nominal inputs). <S> If there is a variable-frequency input, though, these transformers may not be rated, even for non-precision use, with such an input. <A> Thank you everyone for the responses. <S> I have contacted TI, and found a solution that should work for my needs. <S> I will be using a rectified flyback topology similar to this: <S> The UCC28704 will operate up to Tj = 150C. <S> My biggest challenge now will be finding the transformer.
<S> This topology allows for a much smaller transformer, but I'm not sure I will be able to find something off-the-shelf that will work. <S> I'll update this answer as I proceed with this design. <A> You could start with a circuit that handles the global range of consumer circuit voltages (such as the one shown below for 85VAC thru 265VAC). <S> And then throw in some series dropping components (resistors, capacitors, whatever). <S> And use some voltage-sensitive relay circuits to "un-short" the series elements as the incoming voltage rises. <S> Ref: http://www.edn.com/design/led/4368306/Driver-circuit-lights-architectural-and-interior-LEDs
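For reference, the "up to 1000 V" rectified bus figure in the question comes straight from the peak of the line voltage: a full-bridge rectifier with a filter capacitor charges to roughly \$\sqrt{2}\cdot V_\text{rms}\$. The sketch below is only a back-of-envelope calculation (ideal diodes, ripple ignored):

```python
# Peak DC bus voltage after an ideal full-bridge rectifier with a filter cap.
import math

def dc_bus_voltage(v_rms):
    # ignores diode drops and ripple -- rough upper bound only
    return math.sqrt(2) * v_rms

for v in (110, 525, 700):
    print(f"{v:4d} Vrms -> about {dc_bus_voltage(v):4.0f} V DC bus")
```

At 700 Vrms this gives about 990 V, which is why the original BJT-based design runs out of voltage headroom.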
You need a transformer certified for near the actual voltages you'll encounter, that is also commercially available at sane prices, such as the 575V popular in Canada. It will need to have very high isolation ratings, 10s of kV, usually.
Techniques for driving inductive load, e.g DC motor, with PWM and constant current I'm having trouble getting full torque out of a DC motor at anything other than full speed with a PWM driving circuit. Assuming the problem is inductance of the coils, what can be done besides decrease the driving frequency? What is usually done? Nothing? If something, what? I was thinking of using a constant current source, but at the beginning of each pulse, this would apply a voltage across the coils higher than the rated voltage. So, two questions: Is applying a higher voltage OK? Or would it have to be a constant power source, rather than a constant current source? Or despite the counter-emf of the coil, is any voltage applied beyond specs going to reduce life or safety of the motor? And would the additional circuits have to be active, or would some clever passive circuit increase the voltage when each pulse energises the coils, without even needing a higher voltage power supply? <Q> Torque is directly proportional to current. <S> So if you want to control the torque, you need to close the current-loop. <S> Higher than the rated voltage is fine, as long as you don't exceed the breakdown voltage (determined by the insulation). <S> I typically drive motors from a 160 VDC bus, regardless of their rating. <S> I just need to be sure I don't exceed the current limits (peak and continuous) or temperature limit. <S> A higher voltage allows for better speed control, as you can easily counter the BEMF at maximum rated motor speed. <A> @Mark is correct, you can run from a higher voltage if it is available. <S> This will mean running with PWM all of the time. <S> However, if you are wanting to run near full torque and still be able to run at the top speed at a given voltage, try using a fixed "off" time in your PWM instead of playing with the duty cycle. <S> Trigger your "off" pulse from a threshold against your current sense. 
<S> When operating properly, the number of pulses per cycle will decrease as you go and you will be able to get down to a single pulse per cycle. <S> During the "off" cycle, switch off the high side driver and switch on the complementary low-side driver to give all that current someplace to go. <S> You can narrow down the fixed pulse width and get as close to full torque as you need. <S> Be aware that with this approach, the motor will run at full speed until the load is applied, then it will run at your specified torque. <S> So it is a torque control rather than a speed control. <A> At full speed (which is only achieved when the motor is running free) torque is zero - so I think what you really mean is that you only get maximum torque at 100% PWM . <S> Torque is proportional to current, but you may wonder why current drops when the PWM ratio is reduced. <S> The reason is that the average motor voltage drops, and maximum current draw is equal to voltage divided by resistance (at stall, when there is no back-emf so all the voltage appears across the resistance). <S> At 'full speed' (100% PWM) <S> the motor voltage is highest, so if you want maximum torque you have to apply 100% PWM. <S> But perhaps you want the motor to spin slower and still get high torque? <S> In that case you need to increase the PWM ratio under heavy load. <S> Speed drops as loading increases because the higher current causes a greater voltage drop across the resistances in the circuit. <S> Increasing the voltage (by raising the PWM ratio) compensates for this loss while it keeps the rpm constant. <S> Is applying a higher voltage OK? <S> Generally not. <S> The motor is rated for a particular voltage based on speed and power loss. <S> At higher speed there is more risk of bearing failure, excessive brush arcing and thrown windings (in a brushed motor) or thrown magnets (in a brushless motor), and higher magnetic losses. 
<S> If you want to run at the same speed range as before then you must lower the PWM ratio - and you are back where you started except <S> now you have higher switching losses. <S> Bottom line - if you are getting sufficient torque and rpm at 'full speed' (100% PWM) on your present setup then there is no need for higher voltage. <S> If you aren't, and you would need to exceed the motor's voltage rating to get it, then you need a more powerful motor. <S> If you already have more rpm than you need then consider using a gearbox, which trades speed for torque.
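The "torque is proportional to current" point made in the answers can be put into a tiny steady-state model. Everything below is a sketch with made-up motor constants: the average applied voltage is duty × Vs, the back-EMF is Ke·ω, and torque is Kt·I.

```python
# Steady-state average-value model of a PWM-driven brushed DC motor.
# All motor constants are invented for illustration.
#   I = (D*Vs - Ke*w) / R      winding current
#   torque = Kt * I
Vs = 12.0    # supply voltage [V]
R  = 0.5     # winding resistance [ohm]
Ke = 0.01    # back-EMF constant [V*s/rad]
Kt = 0.01    # torque constant [N*m/A]; numerically equals Ke in SI units

def torque(duty, speed_rad_s):
    current = (duty * Vs - Ke * speed_rad_s) / R
    return Kt * current

# At stall (speed = 0) there is no back-EMF, so torque scales directly
# with duty cycle -- which is why reducing PWM reduces available torque:
for d in (0.25, 0.5, 1.0):
    print(f"duty {d:4.2f}: stall torque {torque(d, 0.0):.3f} N*m")
```

The same function also shows the answers' other point: at a given speed, raising the duty cycle (average voltage) restores the current, and hence the torque, lost across R.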
One way to maintain full torque capability at low speed is develop a negative feedback loop based on rpm, which 'cancels out' the current limiting effect of the motor's internal resistance.
Why is a capacitor a linear element? Why is a capacitor a linear device? One property for linearity is that the capacitance or some such parameter must not change with voltage or current. Is this enough to make a device linear? A few sources say that \$Q=CU\$ has a linear characteristic with voltage and so it is a linear device, but wouldn't there be at least one such parameter in a MOSFET/diode that does change with respect to voltage or current in a linear manner - for example the voltage of a diode decreases linearly with the temperature. So what should I exactly consider for linearity? <Q> First of all, an I-V curve does not make any sense for a capacitor. <S> This is because a capacitor follows the following equation: $$i = C \frac{dV}{dt}$$ <S> Note that the current depends on the rate of change of voltage. <S> So you can have the same current at two different voltages, if the rate of change is the same. <S> The reason a capacitor is a linear device is because differentiation is linear. <S> Superposition becomes: $$i_1 + i_2 = C\frac{d}{dt}(v_1 + v_2) = C\frac{dv_1}{dt} + C\frac{dv_2}{dt}$$ <A> Your assumption is wrong: "It has non-linear I-V characteristics". <S> An ideal capacitor, just like an ideal resistor, has linear I/V characteristics. <S> Since you're obviously learning linear circuit analysis (judging by your knowledge of the superposition principle), I'm absolutely certain you've learned (or will very soon learn, by reading your course's material) about representing harmonic currents by complex currents. <S> With complex currents and voltage representations, it's really easy to see that a capacitor is a linear device.
<S> $$\begin{align}I(t) &= C\frac{dU(t)}{dt} & &\text{the elementary capacitor formula, hinting at linearity}\\ I_\text{sum}(t) &= C\frac{dU_\text{sum}}{dt} & U_\text{sum} &= U_1+U_2\\ &\overset{!}{=} C\frac{dU_1(t)}{dt}+C\frac{dU_2(t)}{dt}\end{align}$$ which is the case because the differentiation \$\frac{d}{dt}\$ is linear. <S> You might really just be confused by "linear" as a term. <A> The formal definition of a "linear" function, as in a linear system, is that if you scale the input of the function by some amount, the output is scaled by that same amount: \$y = f(x) \implies f(Ax) = Ay\$. <S> Note that \$f()\$ adding an offset inside violates this. <S> As you said, one way to describe a capacitor is V = Q / C. <S> This says that the voltage on a capacitor is proportional to the charge it is holding, and that proportionality constant is the inverse of the capacitance. <S> In the parlance of a linear equation as above, V = f(Q). <S> Since f(Q) = Q/C, it should be clear that this equation is linear because \$A \cdot Q / C = A \cdot V\$ for arbitrary values of A. <A> In the context of relations of two functions (of time) to each other (and not just values at one instance of time) linearity means that the principle of superposition holds (as Neil_UK has pointed out). <S> The principle of superposition says that the function of a linear combination equals the linear combination of the functions, i.e. \$f(ax + by) = a f(x) + b f(y)\$. <S> This is the case not only for multiplication by a constant but also for the differentiation operator and integration operator. <S> I.e. not only multiplication by a constant like \$u(t) = R\,i(t)\$ is a linear operation upon a function, but also integration and differentiation: \$u(t) = \frac{1}{C} \int i(t)\,dt\$ and \$u(t) = L \frac{d}{dt} i(t)\$. <S> Therefore not only resistors but also (ideal) capacitors and inductors are linear components.
<A> If we look at the capacitor connected across an AC supply, then it can easily be said that it can be treated as a linear element. <S> Linear elements are those whose current-voltage relationship is linear. <S> V is proportional to I. <S> For an AC supply: \$v = v'e^{j\omega t}\$ (here \$v'\$ = amplitude of the applied AC voltage). <S> Now for a capacitor: \$q = Cv\$, so \$i = dq/dt = C(dv/dt) = C \cdot j\omega \cdot v'e^{j\omega t} = j\omega C v\$, and therefore \$v = i/(j\omega C) = iZ\$, where \$Z = 1/(j\omega C)\$ is the impedance of the capacitor. <S> Hence linear.
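The superposition property that the answers above derive can be checked numerically: differentiate two voltage waveforms separately and summed, and compare the resulting capacitor currents. Waveforms and component values below are arbitrary illustrations.

```python
# Numerical check of superposition for an ideal capacitor, i = C dv/dt.
import math

C = 1e-6   # 1 uF (arbitrary)
h = 1e-9   # step for the central-difference derivative

def cap_current(v, t):
    # i(t) = C * dv/dt, approximated by a central difference
    return C * (v(t + h) - v(t - h)) / (2 * h)

v1 = lambda t: math.sin(2 * math.pi * 1e3 * t)          # 1 kHz sine
v2 = lambda t: 0.5 * math.sin(2 * math.pi * 3e3 * t)    # 3 kHz sine

t = 1.23e-4
i_sum      = cap_current(lambda x: v1(x) + v2(x), t)
i_separate = cap_current(v1, t) + cap_current(v2, t)
print(i_sum, i_separate)   # equal (to rounding): differentiation is linear
```

The same check fails for a genuinely nonlinear element such as a diode's exponential I-V law, which is the distinction the question is asking about.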
A capacitor is a linear component because voltage and current as functions of time depend in a linear way on each other.
What does "600 : 600 Ω" mean for an audio transformer? I found audio transformers online having the following specification:600 : 600 ohms. My question is what does it mean?Does it mean that it is simply an isolation transformer?Or is it in some way used for impedance matching? How can it be used for impedance matching if both the primary and secondary have the same resistance? Lets say for example I have constructed an amplifier with output impedance of 600 ohms. The speaker has an impedance of 4 ohms. How can this transformer be used to match the impedance? <Q> Impedance is just the ratio between voltage and current, like a resistor. <S> A transformer can change the ratio between in- and output voltage (and current as well) for AC signals. <S> So a 600 ohms to 4 ohms transformer lowers the voltage (and increases the current) <S> so that 4 ohms at the output behaves as 600 ohms at the input. <S> That is useful when you want to connect a 4 ohms speaker to an amplifier which can only handle 600 ohm loads. <S> A 600 to 600 ohm transformer can indeed be an isolation transformer for an audio distribution system or a telephone line. <S> The transformer is 1 : 1 meaning in- and output voltage stay the same (and the current as well). <A> It means this transformer is intended to isolate a audio signal between two different common mode voltages. <S> It otherwise is intended to alter the audio signal as little as possible. <S> This transformer is not intended to have a speaker connected to one side. <S> 600 Ω is the official impedance of "line" audio, although line audio drivers are often lower impedance. <S> The 600 Ω spec is giving you a clue that the secondary should be loaded with that resistance. <S> That is the load resistance <S> the frequency response and other specs are valid at. <S> Anything else will probably yield a less flat frequency response over the audio range. <S> One use for such a transformer is at the receiving end of long cables. 
<S> The driving and receiving equipment can easily have ground offsets between them, which would be added directly to the audio if it was a single-ended signal with ground as the common. <S> At the end of the cable, both the signal and common are applied to the primary of the transformer. <S> The common mode voltage then is largely cancelled out (only a little capacitive coupling between the transformer windings remains). <S> The receiving equipment can then turn the result into a ground-referenced signal, if that's what it wants, just by grounding one side of the secondary. <A> My question is what does it mean? <S> Does it mean that it is simply an isolation transformer? <S> Or is it in some way used for impedance matching? <S> It is an isolation transformer. <S> It is for isolating signal lines with a characteristic impedance of 600 Ω. <S> This may include audio and telephony circuits. <S> Figure 1. <S> Using a 1:1 transformer to "unbalance" a balanced microphone for an unbalanced input while maintaining balanced operation over the length of the cable. <S> Signal level transformers have many uses such as isolation, ground loop prevention, differential signals, etc. <S> Professional microphone and audio signal applications are among the common ones where various schemes are used to reduce any common-mode noise. <S> As can be seen in Figure 1, any common signal on the two mic lines will be rejected by the transformer as it will only pass differential signals. <S> How can it be used for impedance matching if both the primary and secondary have the same resistance? <S> The impedance matching is required when both circuits are 600 Ω and isolation is required. <S> Let's say for example I have constructed an amplifier with an output impedance of 600 ohms. <S> The speaker has an impedance of 4 ohms. <S> How can this transformer be used to match the impedance? <S> It can't.
<S> In any case, a 600:600 Ω transformer would be for mW signal levels and not 0.5 to 20 W levels, for example. <A> Magnetic coupling between the primary (input) and secondary (output) provides the function of electrical ISOLATION. <S> A transformer with the SAME impedance for both primary and secondary is clearly serving the function of isolation. <S> You are correct: if the primary and secondary impedance is the SAME, it is not really used for the purpose of "impedance-matching". <S> IMPEDANCE-MATCHING typically implies that the transformer has DIFFERENT primary and secondary impedances. <S> A 600 ohm transformer is not typically suitable to connect to a 4 ohm load. <S> You would find a 600 ohm (primary) to 4 ohm (secondary) transformer used for impedance matching in that example.
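The impedance-reflection rule behind these answers is that an ideal transformer presents \$Z_\text{in} = n^2 Z_\text{load}\$ at its primary, where \$n = N_p/N_s\$ is the turns ratio. The numbers below come from the question; the helper functions are just a sketch.

```python
# Why a 600:600 (1:1) transformer cannot match a 4-ohm speaker.
import math

def reflected_impedance(turns_ratio, z_load):
    # Z_in = n^2 * Z_load for an ideal transformer
    return turns_ratio ** 2 * z_load

def turns_ratio_for_match(z_source, z_load):
    # ratio needed so the source sees its own impedance
    return math.sqrt(z_source / z_load)

# A 1:1 (600:600) transformer leaves the impedance unchanged:
print(reflected_impedance(1.0, 4.0))    # the amplifier still sees 4 ohms

# Matching 4 ohms to a 600-ohm source would need about a 12.25:1 ratio:
print(round(turns_ratio_for_match(600.0, 4.0), 2))
```

So a 600:4 Ω matching transformer is really a statement about turns ratio (about 12:1), not about the winding resistances themselves.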
A transformer is typically used for isolation and/or impedance matching.
Empty Solder Points On PCB? First time here. Today, I was looking at my motherboard/graphics card and I noticed tiny little pin-width points of solder with nothing soldered to them. The distance between most of them is just right for a small rectangular capacitor or resistor, but I can't seem to think of why the manufacturers would put the solder there yet not do anything with it. Just curious, thanks. <Q> There could be several reasons for having solder pads that aren't used: <S> Test points - these are pads that are used to connect test equipment to the board to test it in the factory. <S> Unneeded parts - the board was designed, then it was discovered that you could leave out some of the passive parts (resistors, capacitors, inductors) without causing problems. <S> Most often these are parts that were intended to reduce emissions (radio frequency "noise" that can interfere with other devices.) <S> Testing showed the whole board to be "clean" enough that they could be "dirtier" and still meet all requirements. <S> Leaving out specific parts can reduce the price of building the board. <S> They don't bother to change the board in such cases. <S> Alternatively, the plans include extra parts to remove emissions, and these are only used if the board isn't clean enough. <S> The board can be used for multiple models. <S> It has sub-sections that can be left off to produce a cheaper board with less functionality. <S> You design one board that can be populated in different ways to produce different models. <S> These may be as simple as a jumper or as complex as a read-only memory device containing a list of usable sections. <S> There are probably other reasons as well. <S> These are the ones that pop into mind. <A> It is not at all uncommon for parts to be in a schematic, yet not be fitted on a PCB; this is usually not because it was found to be unnecessary in many cases.
<S> As an example, it is not uncommon to have extra positions for decoupling capacitors, just in case <S> the total electrical noise in some part of the PCB is higher than expected. <S> It is also not unusual to have strapping option resistors for a device such as this one. <S> This is where a pin (or group of pins) are tied either high or low (and in some cases floating) and will be sampled at reset to set certain functionality for the device. <S> It is not uncommon for all possible options to be on the schematic (i.e. both a pullup and a pulldown are provisioned) but only a subset of those are actually fitted. <S> Many of those pads may also be test points for ICT. <S> In other cases, different variants of a PCB may utilise a different mix of components; this is alluded to in the comments. <S> It means a manufacturer has a single base PCB and simply loads a different mix of components to have two different resulting items. <S> There are more reasons, but the above are certainly very common. <A> It could be a point for testing/debugging, one of multiple alternate layouts, or a design change after all of the PCBs were ordered. <A> "Tiny little pin-width points of solder" might be plane interconnects. <S> It's also common to plan for a lot of blocking capacitors on the supply lines and then figure out in testing that you'll get along with a small and/or strategically chosen subset just fine while using circuits from a particular supplier. <S> Then there are also testing pads for measurements (hard to get at signals otherwise on a modern PCB) and/or temporary test configurations (you actually solder "0 Ohm" "resistors" across them to activate) or for permanent configuration.
In these cases, there may also be parts whose only job is help the electronics detect which sections can be used.
How to test USB to TTL cable is working I bought this USB to TTL cable: https://www.amazon.co.uk/dp/B01DC0S13M Since it apparently uses the PL-2303HX chip, I downloaded the driver for Windows here: http://prolificusa.com/portfolio/pl2303hx-rev-d-usb-to-serial-bridge-controller/ After that, I connected the USB to the PC and the TTL pins to a single board computer. Unfortunately, my PC hasn't detected the cable, because Device Manager doesn't list a COM port for it. Is there a way to check if the cable is faulty? I don't have any other USB to TTL cable, so it could be something besides the cable. <Q> I have 2 of exactly the same cable, which I bought here; both work like a charm. <S> Of course, that doesn't prove anything, but fake cables usually come from el'cheapo manufacturers who don't bother to replicate the exterior looks so precisely. <S> I wouldn't be surprised if your cable was not counterfeit, but simply defective, especially if it doesn't show up in device manager at all (does it?). <S> Either way, time to open a dispute with the seller. <A> The world is awash with fake USB->Serial products, both FTDI and Prolific, and they're both engaged in an arms-race with the fakers to stop them camping-out on their (expensively written) drivers. <S> I would throw it away and buy one from a reputable source. <A> If so, you will have to try to get a driver from Wingoneer. <S> They probably don't supply one, however. <S> They probably depend on the (older) Prolific driver that didn't block counterfeit devices. <S> This site has some info on how to tell if you have a fake PL-2303. <S> Basically: Check the device manager. <S> If the USB-Serial converter has an error and a code 10 <S> then it is fake. <S> The better choice would be to buy a better converter with a real Prolific chip. <S> The knock-offs tend to be pretty lousy and will drive you nuts - they work most of the time, except when they don't <S> and then you don't know why your communications drop out.
<S> That's pretty bad if you are developing the device on the far side of the converter. <S> You don't know if it is your error or the converter's error.
You may have bought a board with a counterfeit PL-2303. You might be able to dig up and use an older Prolific driver.
op amp + MOSFET = current source. Why do we need a feedback resistor? Is the feedback resistor needed to compensate for the error of the input currents? How do I choose the resistance R2? Circuit source. Resistor R2. Can I use this circuit with an op-amp whose differential input voltage range = +/- 0.6V? I'm not sure. I think not. <Q> R2 (10k R4 in my diagram) is there to form, together with C1 (1nF capacitor), a Miller integrator to prevent unwanted oscillation. <S> And yes, this circuit will sometimes oscillate, mainly due to poor PCB/breadboard design. <S> And here you have a real world example (the breadboard one). <S> Without the Miller capacitance: <S> And after I add the Miller capacitance into the circuit: http://www.ecircuitcenter.com/Circuits_Audio_Amp/Miller_Integrator/Miller_Integrator.htm <S> EDIT <S> Today I tested this circuit again. <S> And the results are: for RG = 0 Ohms; RF = 10k Ohms without Miller capacitance, the circuit oscillates (I_load from 1mA to 1A). <S> But, surprise surprise, if I short the RF (10K) resistor the oscillations magically disappear (even if RG = 1K ohms). <S> So, it seems that the main cause of oscillation in my circuit was the feedback resistor. <S> I suspect that RF together with the op-amp input capacitance and some parasitic capacitance adds a pole (lag) to the circuit and the circuit starts to oscillate. <S> I even changed the op-amp to a "much faster one" (TL071). And the results were almost the same, except that the frequency of oscillation was much higher (713kHz). <A> You don't need a feedback resistor and neither do you need C1. <S> I guess the "designer" has some strange perception that the circuit will oscillate without them <S> but it won't. <S> Oscillation will occur if Q1 provides gain - it won't because it is a source follower. <S> Oscillation will occur if Q1 produces significant phase shift and this is more of a possibility but still unlikely if R1 (gate resistor) is kept low in value.
<S> In fact, because of R3's presence, R1 is likely superfluous to requirements. <S> Here's an example circuit from Analog Devices: <S> - I don't see the two resistors and the capacitor in this schematic. <S> If you were using a poor op-amp for this application (because of input offset voltages causing inaccuracies in the current) like the LM358 <S> then you should consider using a bipolar transistor as shown in the data sheet on page 18: <S> - However, I believe it will work with a MOSFET providing you don't use a gate resistor (or a very small one). <S> There are plenty of examples of the LM358 being used with MOSFETs without all the "extras": - <A> This is a standard configuration for handling a capacitive load such as long cables (inside a standard current sink configuration). <S> It's unnecessary if R3 is significantly large compared to the op-amp open loop output impedance (between 8-70 ohms for common ordinary op-amps <S> ** with supply currents in the ~1mA range per amplifier) or the MOSFET has low input capacitance, or if the op-amp is designed to work with a large or unlimited capacitive load (if any of those three conditions are true). <S> R1 isolates the load, while C1/R2 provides a second feedback path (aka "in-loop compensation"). <S> If you have R1, you should have C1/R2. <S> R1 alone makes the situation worse. <S> ** <S> You have to be very careful with low power op-amps, which often recommend isolating capacitive loads in excess of only 100pF. <S> Edit: @G36 has provided a real-world measurement illustrating the effect (+1). <S> It would probably not oscillate with R2 = 0\$\Omega\$ rather than 330 but that depends on the MOSFET used and on the load in the drain circuit. <S> In any case, it will reduce the phase margin, leading to overshoot/undershoot of current. <S> Edit': Regarding choosing the values for a given situation, see this reference. 
<S> R2 should be a value such that it's a lot higher than R3 and <S> not so low that it unduly causes offset or other bad effects. <S> Say in the 1K-10K range normally, but it could be higher or lower for very low power or high frequencies respectively. <S> So pick a value for C1. <S> The minimum value of R2 is: \$R_2(min) = C_L \frac{R_O + R_1}{C_1}\$ where \$R_O\$ is the open-loop output resistance of the op-amp and \$C_L\$ is the load capacitance. <S> So if the load capacitance is 10nF including the Miller effect, R1 is 100 ohms, RO is 100 ohms, and C1 is 100nF, then R2(min) = 20 ohms. <S> So the circuit as shown (if my assumptions are reasonable) is grossly overcompensated and will respond much more sluggishly than necessary. <S> If we pick C1 = 100pF then R2 = 10K. Or you could use 1nF and 1K. <A> Well, it is an odd circuit. <S> Not necessarily bad. <S> Keep in mind that the output of the op-amp is small signal ground and you'll see that R2 & C1 form a low pass filter. <S> R1 acting against the transistor gate also acts as a bit of a filter too. <S> C1 also injects changes on the op-amp output back into the inverting input and thus speeds up its response to step changes on the control input. <S> This has the impact of slowing down the response of the op-amp output. <S> The optimization of the circuit will depend, amongst other things, on the input impedance of the op-amp. <S> Interestingly this all combines to allow this circuit to be optimized for dynamic changes in the load and in the input reference somewhat independently. <A> The capacitor in this circuit prevents a current spike when the circuit turns on. <S> When the circuit is off, it is fully discharged, and when it turns on the output will be VC and the current will be either off or lower than the target. <S> The negative terminal of the op amp will be driven up with the op amp output. <S> The output will then rise until the target value is reached.
<S> If not present, the negative terminal of the op amp will be at ground while the op amp output increases to a voltage higher than the target as it drives the gate capacitance through 100 ohms and may possibly saturate. <S> When the FET turns on, overshoot may occur as the op amp recovers from saturation.
The purpose of R1/R2/C1 is to decouple the op-amp output from the capacitive load presented by the MOSFET gate/source capacitance in series with R3 .
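The compensation rule of thumb quoted in the discussion above can be checked with a quick sketch. The values below are the answer's assumed ones, not measurements:

```python
def r2_min(c_load, r_out, r1, c1):
    """Minimum in-loop compensation resistor per the rule of thumb quoted above:
    R2(min) = C_L * (R_O + R1) / C1."""
    return c_load * (r_out + r1) / c1

# The answer's assumed values: 10 nF effective load, R_O = R1 = 100 ohms, C1 = 100 nF
print(r2_min(10e-9, 100, 100, 100e-9))  # 20.0 (ohms)
```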
Electricity discharge into water My question has to do with a piece of equipment in my home, but I thought I would post my question here because I am interested in the behaviour of the electricity rather than anything else. There is a submersible electric sump pump in my basement. It sits in a pit dug out of soft rock, where water accumulates. The water is fresh water, in the sense that it is rainwater and not seawater, though it is muddy. When the water reaches sufficient depth, a float switch causes the pump to switch on and pump the water out of the house. Yesterday, I was cleaning sediment out of the pit with my hand. The pit had a substantial amount of water in it. I felt a tingle: Electricity. I did a quick experiment with my multimeter. I put one probe in the soil and one probe in the water. With the pump plugged in, I measured a small voltage (about 0.5 V AC). When I unplugged the pump, I measured no voltage. So yes, this is a dangerous situation. The pump is dangerously defective. I have unplugged the pump and am arranging for its replacement. This is not the point of my question here. My question is what is actually happening with the electricity. I think it’s safe to assume that somewhere in the pump, the 240 VAC mains current is exposed to the water. So: Why did I feel a small tingle and not the full force of 240 VAC? In the same vein, why do I measure only 0.5 VAC in the water? Why did I feel anything? Why wouldn’t the exposed wire short directly to the nearest ground point through the water? Would I have felt a stronger shock if there had been less water in the pit? What happens in situations like this , in which a whole swimming pool becomes electrified? Thanks. <Q> Fresh water is a pretty poor conductor of electricity, though muddy water will be a better conductor than clean drinking water. <S> I'll assume that there really is a problem with the pump, and it's not just some trivial leakage current. 
<S> The exposed wire inside the pump is at 240V AC, and the rock around the edge of the pit will be more-or-less 0V (ground). <S> So a current will flow through the water to ground. <S> Since it's fresh water, this won't be a very big current. <S> There will be a voltage gradient between the live wire and the rock. <S> If you stick a probe into the water at some point (either the multimeter or your hand), then it will be at some voltage between 0V and 240V. <S> Essentially, you're probing at some point in the middle of a resistor. <S> But since you only measured 0.5V, it suggests that the leakage current was actually very small. <A> It's more than likely capacitance of a few nanofarads between the submerged cable and the surrounding water. <S> This can generate voltages anywhere up to several tens of volts, but the amount of current it can deliver is small and will only tingle a bit. <S> However, if you are unsure you did the right thing (you will probably find that a new one would do the same), call an electrician or somebody who understands this answer to take a second look. <S> Also, double-check that it has an ELCB or MCB breaker in the spur that feeds it. <S> This is law in the UK for submersible pumps - basically, if an earth fault appears then it trips the breaker. <S> You can get the same effect from fluorescent lamps under a power grid cable: - <S> Also, see this. Please double-check everything though, to be safe and secure. <A> Since the pump is in the window well, and the window well ground is close to ground potential, testing the water vs ground in the window well is not a great way to find leakage current. <S> There is very little impedance between the water and ground and it wouldn't tell you how much current is leaking or if there is a dangerous voltage. <S> The pump needs to be isolated from the ground when you are doing your test. <S> First off, be careful, don't use yourself to do the testing.
<S> Put the pump in a plastic 5gal bucket, and measure the potential between the water and mains ground. <S> Secondly, I'll bet you're not feeling electricity but vibration from the pump. <S> I've had a few times when I was sure I felt an electric tingle <S> but it turned out to be a mechanical vibration. <S> After measuring it with a voltmeter there was no potential. <S> Thirdly, depending on your meter (I'm guessing it's a cheaper one), you could probably find 0.5 volts between anything and anything else.
If it were live parts touching water you'd feel more than a tingle.
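The "probing a point in the middle of a resistor" picture above can be illustrated with a simple divider model. The resistance values below are purely hypothetical, chosen only to show how a 240 V fault can read as 0.5 V near the grounded end:

```python
def probe_voltage(v_fault, r_fault_to_probe, r_probe_to_ground):
    """Voltage read at a probe point between a live fault and grounded rock,
    treating the muddy water as a plain resistive divider."""
    return v_fault * r_probe_to_ground / (r_fault_to_probe + r_probe_to_ground)

# Hypothetical resistances: almost all of the water path lies between the
# fault and the probe, so a 240 V fault reads only ~0.5 V near the ground end.
print(probe_voltage(240, 479_000, 1_000))  # 0.5
```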
LEDs inside of solar-powered calculators Why do a lot of solar-powered calculators contain red or infrared LEDs that are hidden from the user? So far I have only seen this in Texas Instruments calculators. Here are some examples of this in Texas Instruments calculators, taken from user-contributed guides on iFixit: Older TI calculators, from datamath.org: Here's a newer TI-106II, with what appears to be an infrared LED: I think I first saw this in a Youtube video from either EEVBlog or bigclivedotcom and I remember whoever doing the video was confused as to why there was an LED inside. If anyone has seen this video and could provide the URL that would be good. Any ideas? <Q> I remember a label printer I dissected as a kid having the same. <S> My best guess hence is: This might actually be a way for the manufacturer to, at low cost, add a programming header to the devices that allows for different functionalities being programmed in a manual labor assembly line, where you just want to decide at last minute, before you put the back cover on, into which market you'll sell that product. <S> Why solar calculators? <S> If they ran QA on those, they'd want a way to test them, so one would assume you just want to put your calculator in a light box, and run a few tests. <S> Now "run a few tests" is a bit complicated with a device that runs at wildly varying voltages, and probably also wildly varying clock rates: you either end up somehow electrically coupling the device with sensitive connectors (which are expensive) to something that does the level shifting in your test stand, or you do the level shifting in your calculator and end up having an unnecessary level shifter in every calculator. <S> Or, you just add some wireless link, like, in the simplest case, two LEDs, one used as a photodiode (receiver), <S> one as a Light emitting diode (transmitter). 
<S> You place the half-assembled calculator manually on a frame that is being lit from below, and communicate with the LEDs in the shadow of the calculator itself. <S> My other best guess is: LEDs have become a mass product, and they come with a relatively well-defined band gap. <S> So if you need a 2.15 V voltage reference for whatever, why not go and buy the cheapest yellow LED you can find? <A> Since these tend to be near the keyboard another explanation might be as ESD protection devices. <S> As to why it would be an IR or red LED, the forward drop on those diodes is lower and therefore you have a breakdown device that can swing +/- ~1 V. <S> A lot of LEDs are not actually ESD sensitive because of this. <A> I'd presume the LEDs inside solar calculators <S> are actually there as a means of regulating the power coming from the solar panel. <S> It is possible when used under extremely bright light (i.e. in direct sunlight), the solar panel could produce a voltage in excess of what the calculator requires to operate, perhaps even high enough to potentially damage it. <S> The LED is likely used as a method of dissipating the excess voltage in these situations. <A> I found such an LED in the solar powered calculator CASIO SL-801, made in 1981. <S> It is just a cheap 4-function model with no EPROMS. <S> So the explanation of Cristopher Sturtz seems the most likely to me.
From the position of the LEDs, it looked like they were pretty close to the EEPROMs on the device. The LED dissipates excess incoming solar power at a well-defined voltage level.
Why does stepping up the voltage during transmission consider I^2*R but not V^2/R? During transmission, the generated electric power is delivered after being stepped up to hundreds of thousands of volts or more by transformers. In that case, since P = I*V, increasing V reduces I in the secondary of the transformer. The reason given is to reduce Ploss = I^2*R losses. Here I decreases and so does the power loss. Is that the real reason to step up? I'm asking because we can write the power loss equation as: Ploss = V^2/R Or if we use both I and V in the power equation: Ploss = (V/R)*I It seems like if we step up the voltage, I decreases but V increases. How about the power loss? Does the power loss decrease? Or is the real reason to step up the voltage to significantly reduce the cross-section area of the transmission lines? <Q> For these transmission lines, it is not the voltage that results in a power loss, but the voltage drop along these lines. <S> I think it's easiest explained with an example: let's say your transmission line has a resistance of R = 100 Ohm and you want to transfer P = 1 kW. <S> With "P = U*I" you get: at 1000 V you need to transfer 1 A; at 100 kV you need to transfer 0.01 A. <S> By "dU = R*I", the voltage drop across your transmission line will be: 100 V at 1000 V; 1 V at 100 kV. <S> As the voltage drop in your transmission line determines the power lost, you can now calculate the power loss by "Ploss = dU * I", which results in: 100 W at 1000 V, which is 10% of your original 1 kW; 10 mW at 100 kV, which is 0.001% of your original 1 kW. <S> Thus: the higher the voltage, the lower the power loss in your transmission line (which of course also has its upper limits due to, for example, the isolation of these lines and of the transformers, but that's another topic). <A> I'm asking because we can write the power loss equation as: Ploss = V^2/R <S> Well... no.
<S> A more accurate representation would be: \$P_{loss} = {{{\Delta V}^2} \over R}\$ <S> That is to say, you only lose power when you have a voltage (i.e. energy) drop in the resistance. <S> Increasing the voltage decreases the voltage drop due to the now lower current passing through the same resistance. <A> V^2/R holds but you need to be careful about what you mean by V; in particular, V is the voltage drop due to cable resistance, not the total supply voltage... <S> Let's say we have 1MVA @ 1,000V, so 1,000A flowing, and our line is say 0.01 Ohms, so our I^2R losses calculate to be 1000^2 * 0.01 = 10kW. <S> Let's see what V^2/R makes it: well, the V in this case is the voltage drop due to the line resistance = 1000A * 0.01 ohms = 10V, and 10^2/0.01 = 10kW, exactly the same as the calculation done the other way. <S> Now let's raise the line voltage to 10kV; the current is now 100A for the same power delivered, so if the cable is still 0.01 ohms, we get I^2R losses as 100^2 * 0.01 = 100W. Doing the same calculation for V^2/R, we get the voltage drop due to cable resistance as 100A * 0.01 = 1V, and 1^2/0.01 = 100W. <S> It should be no surprise that both calculations come out the same, as to find the voltage term for the cable losses we do V = IR, then we can substitute that into P = V^2/R = (IR)^2/R = IR*IR/R = I^2R. <S> Regards, Dan. <A> When using such equations we must be careful which voltage, which current and which resistance we are talking about. <S> Let's assume a simple case: the supply is DC so there are no capacitive or inductive effects, and the cable has perfect insulation but the conductors have some resistance. <S> Now let's write some equations. <S> $$V_{load}=V_{source}-V_{drop}$$ <S> where \$V_{load}\$ is the voltage delivered to the load, \$V_{source}\$ is the voltage supplied by the source and \$V_{drop}\$ is the voltage dropped in the cable.
<S> By Ohm's law we can write $$V_{drop} = I * R_{cable}$$ where \$I\$ is the current flow in the circuit and \$R_{cable}\$ is the total resistance of the conductors (both positive and negative) in the cable supplying the load. <S> We can now write an equation for the power loss in the cable: $$P_{loss} = V_{drop} * I = I^2 * R_{cable} = V_{drop}^2 / R_{cable}$$ <S> So to reduce \$P_{loss}\$ we need to either reduce \$R_{cable}\$ or reduce \$I\$. <S> To reduce \$I\$ while keeping the power delivered to the load the same, we have to increase \$V_{load}\$ and hence \$V_{source}\$. <S> Now this example isn't a perfect reflection of the real world. <S> In reality insulators aren't perfect and systems are usually AC, so capacitive and inductive effects have to be considered. <S> The result is that increasing the voltage helps up to a point, but eventually further voltage increases are not helpful.
As to reasons to do it, it is an optimisation problem, you trade off insulation hassle and cost for less weight in the wires and possibly less line loss (or some combination of both).
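The worked example above (1 kW over a 100 Ω line at two different voltages) can be reproduced with a short sketch:

```python
def line_loss(power_delivered, line_voltage, r_line):
    """I^2 * R loss in a transmission line carrying a given power at a given voltage."""
    current = power_delivered / line_voltage
    return current ** 2 * r_line

# The example's numbers: 1 kW over a 100-ohm line
print(line_loss(1_000, 1_000, 100))    # 100.0 W -> 10 % of the 1 kW is lost
print(line_loss(1_000, 100_000, 100))  # ~0.01 W = 10 mW -> 0.001 % lost
```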
How often does EMC certification have to be repeated (especially in Australia)? I realise that this may be largely legal in nature and may be off topic, but I think that the users of this site are most likely to be able to answer accurately, so I shall ask and risk being shot down! We have a device (which I helped to design) that we have sold globally for a long time with no issues. It was tested against the following standards in 2014: EN 55022:2010 incorporating corrigendum October 2011 EN 55024:2010 incorporating corrigendum 2011 47CFR15.109 ICES-003 Issue 5 August 2012 However, an Australian reseller that we have just shipped an order to asked to see test reports (including some which were not relevant, which made me question their understanding of the matter). We sent them over and they have accepted them, but were not happy that the test results were over a year old. They suggested that the next shipment will require test results taken in the previous 12 months. Has anyone else ever heard anything like this? Sounds like rubbish to me - we haven't changed the design, we haven't changed any suppliers and I don't remember physics having changed in the last 12 months - but is it another weird legal requirement? Is this particular to Australia, or is it also the case in other countries? Secondarily, thinking about this, it strikes me that these standards may be updated in the future. Will we be required to retest every time that a standard has an update, or can we continue in perpetuity once a product has been certified provided that no additional legislation is passed which precludes the sale (I'm thinking about things like the RoHS and WEEE legislation which changed the landscape for a lot of products, but had a grace period built in). <Q> You generally don't need to do new tests. <S> The test you have done before putting the product on the market are what applies. 
<S> The age of the test protocol is not relevant, as long as the test complies with current legislation. <S> If the standards or directives change in the future, you only need to carry out new tests if there are significant changes to the technical requirements. <S> If so, there is usually a grace period of a couple of years, during which you can state compliance either to the old or the new directive. <S> This only applies to products that are in production (still put on market). <S> You don't have to make new tests for older products that you don't sell any longer, even if they still exist on the market. <S> Though of course, you will have to carry out new tests if you have made significant changes to the product. <S> Particularly if you changed things that may have impact on EMC and radio. <S> (Most notably changes to voltage regulators, clocks/oscillators or changes to anything radio-related.) <S> However. <S> In this specific case, I think your customer might have a valid point - it would appear that these standards are superseded(?). <S> See this, note "Superseded by EN 50561-1:2013, EN 55032:2012". <S> I did some very brief research and it appears that they were updated together with the new European EMC directive 2014/30/EU. <S> EN 55022:2010 was listed as expired at 1.12.2013 by the "EU Official Journal" (European law) and it is not part of the current list of harmonized standards for the EMC directive. <S> Apparently you should now follow EN 50561-1:2013 instead, so there might have been significant technical changes. <S> When it comes to EN standards, the rest of the world that accepts such standards typically updates its legislation accordingly whenever the EU changes the standards/directives. <S> Anyway, I only did a brief check <S> and I don't know these standards, <S> so if I were you, I'd go double-check this with some EMC expert at your nearest test house.
<S> If you did tests in 2014, you'd think a professional test house should have told you about these on-going/upcoming changes. <A> First of all, for importing product to Australia, an immunity test as per CISPR24 is not required. <S> Secondly, CISPR22 has been replaced by CISPR32; however, the limits do not change. <S> CISPR32 simply combines CISPR22 and CISPR13, with several additional clarifications regarding test methods etc. <S> What could be beneficial for you is to write a rationale that addresses the difference between CISPR22 and CISPR32 and describes that it does not affect your current product. <A> Australia may be different in some ways from the rest of the world <S> but I do know this: even though the product was tested at some particular point in time, the manufacturer bears the responsibility to make sure the product continues to be compliant with current laws and regulations if it continues to be sold. <S> A common way to be assured of this is to re-test the product from time to time - whether that be every year, every two years or whatever. <S> There are a number of reasons to re-test periodically even though you think you are building the same product: Manufacturing procedures change and some minor details can change things. <S> Manufacturing personnel change and new folks may interpret notes and documentation differently. <S> Materials change such that the composition of plastics, metals, plating or coatings can change behavior with respect to ESD or RF emissions. <S> Legal requirements can change over the years. <S> Electronics components can have improvements made to their internal design that you the buyer may not even be aware of that may make your product change its sensitivity to noise or cause it to change its emissions characteristics. <S> Software running in the product may be changed and induce new behavior in the component interactions that can change sensitivity or emissions.
Generally you don't need to retest your product.
How can I test the power supply across a load? I want to test my power supply at the 12 V 10 A specification. I tried connecting a 100-watt 0.7-ohm load resistor across it, but the voltage went down to 3 to 4 volts. How does this happen? And how can I test this power supply? <Q> how can i test this power supply? <S> Start by using the right load. <S> The supply is rated for 10 A maximum at 12 V. By Ohm's law, that means the smallest valid load resistance is (12 V)/(10 A) = 1.2 Ω. <S> By connecting a 700 mΩ resistor to the supply, you violated its current spec. <S> Again by Ohm's law, (12 V)/(700 mΩ) = 17 A. <S> The supply dropping its output voltage when you attempt to draw more than rated current from it is a totally reasonable thing for it to do. <S> You really need to look up Ohm's law and understand what it means. <S> Also consider the power the load resistor must dissipate. <S> If you manage to load the supply to its maximum rating, then the power into the resistor will be (12 V)(10 A) = 120 W. <S> Even if your resistor were the right resistance, it doesn't have enough power handling capability. <S> You could get a second 700 mΩ 100 W resistor and put them in series. <S> That effectively makes a 1.4 Ω 200 W resistor. <S> That is within what the supply can drive and the combined resistance can handle. <S> By Ohm's law again (yes this comes up a lot and is really useful), (12 V)/(1.4 Ω) = 8.6 A. <S> That's what the current will be thru the combined resistor. <S> It doesn't test the supply to the limit, but it's a good start. <A> Is that 12V @ 10A? <S> Your 0.7 Ohm resistor will try to draw 17A, which will overload the power supply. <S> Try loading your power supply with 1.2 Ohm. <S> That should give you 10A. <A> If your power supply were going to be able to drive the full output voltage of 12V into a 0.7 ohm resistor it would have to source over 17A.
<S> Since you said your design was for 10A, it was unable to source the 17A and something inside the supply is causing the output to be dragged down to the 3 to 4V. <S> This may be caused by current limiting or source impedance. <S> To test the supply you need to think about realistic loads. <S> Use multiples of the 0.7 ohm resistor in series or find resistors with a higher ohms value. <S> For testing power supplies I have built myself an electronic load which is basically a controllable current sink. <S> The load is powered from a separate low current 12V wall wart. <S> The circuit consists of a mongo power MOSFET in series with a small value sense resistor to GND. <S> An opamp compares the voltage drop across the sense resistor to the setting of a multi-turn pot and then drives the gate of the mongo FET to keep the load current through the FET constant as per the pot setting. <S> Separate small digital panel meters monitor the load current and load voltage. <S> This is a picture of the type of N-MOSFET used for the electronic load. <S> A device like that can handle 10A to 20A easily and support a power level well over 100W. FETs of this type are very expensive <S> but I got lucky and happened upon a box of them at a surplus store for $4 to $5 each. <S> At mail order retail a part of that type could be over 10X that price. <A> If we consider your power supply to be ideal, you get about 17.1 A and 12 V between the terminals if you connect 0.7 Ω across your power source. <S> But in reality you have a physical power source with an internal resistance of about 2 Ω, and in this case you get about 3 V at the terminals of your power supply
When testing at high currents and higher load voltages the mongo FET can generate an appreciable amount of heat so the FET is mounted to a good size heatsink which is then cooled by two 12V fans which are powered off the wall wart supply. Most power supplies have a current limit of some sort which will reduce the output voltage until the current is at a safe level.
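The load-sizing arithmetic from the answers above can be reproduced in a few lines; all numbers are taken from the thread (12 V, 10 A rating, 0.7 Ω 100 W resistors), nothing else is assumed:

```python
# Sketch of the load-sizing arithmetic from the answers above (values from the thread).
V = 12.0          # supply voltage, volts
I_max = 10.0      # rated output current, amps
R_single = 0.7    # the 100 W resistor actually used, ohms

R_min = V / I_max                 # smallest safe load resistance
I_overload = V / R_single         # current the 0.7 ohm load tries to draw
R_series = 2 * R_single           # two 0.7 ohm resistors in series
I_series = V / R_series           # current with the series pair
P_series = V * I_series           # power the pair must dissipate

print(f"R_min = {R_min:.1f} ohm")              # 1.2 ohm
print(f"I with 0.7 ohm = {I_overload:.1f} A")  # ~17.1 A (overload)
print(f"I with 1.4 ohm = {I_series:.1f} A")    # ~8.6 A
print(f"P into 1.4 ohm = {P_series:.0f} W")    # ~103 W, within a 200 W pair
```

This confirms the answer's point: the series pair stays inside both the supply's current rating and the resistors' combined power rating.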
What determines the output voltage of this SMPS? This website http://danyk.cz/impulz4_en.html says that the output voltage is due to the zener diode that has a forward voltage drop of 18V. But a zener diode is just a clipper that regulates the output voltage, so what is the component that really produces (or specifies or determines) the output voltage? Is it the transformer or the IC? Also, what is the 180V of the transil? Is it the maximum forward voltage or the reverse voltage drop? I think it is the reverse voltage drop due to the diode. Can I replace BA159 with 1N4007? Thank you very much, <Q> The zener DOES control the output voltage - at about 19V across the output terminals, the zener begins to conduct and starts to turn on the opto-isolator. <S> This in turn signals the TNY267 that the correct level of output voltage has been met <S> and it's time to start backing off dumping too much energy into the transformer. <S> If loading increases and the output voltage starts to drop, the zener reduces its conduction and starts to turn off the opto-isolator. <S> The transil "snubs" out back-EMFs from the transformer <S> - it needs to be rated in volts below the level at which the MOSFET in the TNY267 might suffer damage. <S> Given that there could be a DC voltage of 240 x 1.414 volts (339 volts), the MOSFET will be protected to no more than 339V + 180V = 519 volts. <S> Note also that the BA159 prevents it conducting in normal diode mode, hence the voltage rating of 180V is the reverse breakdown voltage. <S> I suspect the BA159 needs to be a fast recovery type and the 1N400x series is certainly not that! <S> BA159 is specified as having a 500 ns reverse recovery time whereas the 1N4007 is about 30 µs. <S> When a diode goes from forward conducting to reverse blocking there is this time during which, even though the diode is reverse biased, it acts as a forward conducting device.
<S> Clearly if you are switching at (say) 100 kHz, 500 ns represents 20% of the time that it could be reverse conducting. <S> 30 µs would be a disaster! <A> The zener, here, is not used as a "clipper", as you say. <S> In this situation, it is used as a kind of voltage reference, used to provide feedback to the TNY267 IC and maintain regulation. <S> When the voltage on the output rises above the zener voltage, the zener starts to conduct. <S> This allows current to flow through the two resistors (100R and 470R). <S> This creates a voltage difference across the 470R, and, at some point, enough so that the opto diode starts to illuminate. <S> When this happens, the TNY267 sees, through its pin 4 (feedback), that the output voltage has reached its target, and adjusts (lowers) the duty cycle so the output voltage does not rise any more. <S> Conversely, when the output voltage drops, the zener stops conducting, which leads to an increase of the duty cycle. <S> The 180V transil clamps the voltage across the transformer, and thus also limits the voltage across the internal TNY267 mosfet, in order to protect it. <S> The leakage inductance in the transformer leads to rather high voltage spikes when the mosfet switches, so you need some kind of clamping across the primary winding <S> (there are several alternatives: either a zener clamp like here, or an RC snubber, ...). <S> Although the 1N4007 and BA159 have the same reverse voltage rating and current rating, only the BA159 is fast switching. <S> So the substitution is not a good idea. <A> But a zener diode is just a clipper that regulates the output voltage <S> In this case it is not <S> - it just drops about 18 V. Suppose 100V AC came out of the transformer: there would be 18 V across the zener but around 80 V across the 100R and the 470R resistors. <S> There would still be almost 100 V at the output.
<S> Together they will conduct at around 19 V, the LED will illuminate the phototransistor and, via the TNY267, provide feedback. <S> The ZD/transil is a transorb, or high-voltage suppression diode. <S> When the switch in the TNY267 switches off but there is still magnetic energy inside the transformer, a high voltage is induced on the primary (TNY267) side of the transformer. <S> The transorb diode clips this to a safe value so the TNY267 is not damaged. <S> No, you cannot replace the BA159 with a 1N400x because these are much slower diodes. <S> This circuit switches at a couple of hundred kHz or so. <S> Way too fast for a 1N400x series diode. <S> Only use 1N400x for mains rectifiers at 50 or 60 Hz. <S> Maybe they can handle 1 kHz <S> but I would not go above that.
The output voltage is regulated by the zener and the LED in the optocoupler.
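The voltage-stress and recovery-time numbers quoted in the answers above can be checked directly; the 100 kHz switching frequency used here is an assumption for illustration (the thread only says "a couple of hundred kHz or so"):

```python
# Rough numbers from the answers above: MOSFET stress with a 180 V transil clamp,
# and why diode reverse recovery matters at switching frequency.
import math

V_mains = 240.0                  # RMS mains, volts
V_bus = V_mains * math.sqrt(2)   # rectified DC bus, ~339 V
V_clamp = 180.0                  # transil clamping voltage
V_fet_max = V_bus + V_clamp      # worst-case drain voltage, ~519 V

t_rr_ba159 = 500e-9              # BA159 reverse recovery time, seconds
t_rr_1n4007 = 30e-6              # 1N4007 reverse recovery time (approx., from the answer)
f_sw = 100e3                     # assumed switching frequency
period = 1 / f_sw                # 10 us

print(f"Drain stress ~ {V_fet_max:.0f} V")
print(f"BA159 recovery = {t_rr_ba159 / period:.1%} of a {period * 1e6:.0f} us period")
print(f"1N4007 recovery = {t_rr_1n4007 / period:.0%} of the period")  # longer than the whole period!
```

The last line is the point: the 1N4007's recovery time is several times the entire switching period, so it would effectively never block.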
How to increase mosfet switching speed, and decrease switching losses? I hope this is not too broad a question, but what are the best practices to achieve fast switching on a MOSFET driven by a PWM signal? My current knowledge tells me I can do two things: 1 - Use the lowest possible PWM frequency, because switching losses are higher at higher frequencies. 2 - Drive the gate with the maximum possible current, to overcome gate capacitance as soon as possible. To do this, I avoid adding a resistor between MCU and gate, or add a general purpose transistor between MCU and mosfet, so I can drive the gate with higher current. Currently, I have a PWM that must run at least at 100kHz using an N-channel IRLZ44 mosfet, so the first point is not applicable, and the second point is not enough to give me acceptable switching losses. My mosfets are overheating and I would like to find a better solution than using a bigger heatsink. Should I look for a better mosfet? Or perhaps, should I try adding a capacitor somehow to kick in when the PWM signal rises, boosting current through the gate? Or are there other ways to achieve faster switching? Update: I thought the question didn't need an example circuit diagram, but here it goes: I got to this circuit based on other questions I asked here. I'm using 5V and the load is about 1A. As you can see, I'm driving a transformer. In this configuration, I have 10 Vpp on the transformer primary, and the secondary elevates this to 1500 Vpp. Based on current comments and answers, it's already pretty clear to me that using a driver is the easiest, cheapest and simplest way to achieve lower switching losses. But if there's a way to improve the circuit without a driver, I would be interested in learning about it. <Q> Either choose a better MOSFET or use a push-pull driver like this: - Notice that this chip uses identical MOSFETs in the output stage.
<S> Here's another using the FAN7842 from Fairchild: - You should also make sure there is enough deadtime between one turning off and the other turning on. <S> Both devices can be used to drive single MOSFET outputs if needed. <S> Here's one that drives a high-side MOSFET: - Avoiding P-channel devices will earn you a couple of percent more efficiency (generalism alert). <S> This is a useful set of images to give other ideas. <A> As Andy aka advises, there are tons and tons of integrated MOSFET drivers available, and they work really well with a minimum of parts. <S> But in case you want a one-off design with discrete parts, here's a starting point: <S> (The switch represents your microcontroller, or whatever is driving this arrangement.) <S> Q1 and Q2 are a push-pull pair of emitter followers. <S> Their output (at M1's gate) is held at approximately the same voltage as the input (modulo the base-emitter voltage), but the BJTs' current gain multiplies the current available from the input. <S> Consequently, you'll need something connected to the input which can get up to the gate voltage you'll want to use. <S> If you are using a microcontroller its output voltage will probably be 3.3V or 5V. <S> You can find MOSFETs designed to work at these gate voltages, but most power MOSFETs work best with something more like 12V, so you'll want to add additional circuitry to perform the voltage conversion. <S> See driving low side of a mosfet bridge with 3.3V which also includes a more complex discrete MOSFET gate driver. <A> Correctly choose your gate resistor w.r.t. the gate charge curve (or total gate capacitance). <S> Too high and you will switch slower, with more switching losses. <S> Too low and there is a chance of power circuit ringing (which increases your losses) and, worst case...
setting up a Pierce oscillator. <S> If you are switching an inductive load, KEEP the stray inductance between the cathode of the freewheel diode and the FET very, very low (not as low as convenient - as low as you can; re-layout if needed). <S> Again, if you are switching an inductive load, do not overlook the reverse recovery of the diode; choose an appropriate diode. <S> Minimise the gate-source lead inductance (twisted pair, short) - again, not short for convenience, but as short as possible. <S> If you are power switching, minimise the stray inductance to the bulk DC-link capacitor. <S> Again not short for convenience, but as short as possible. <S> Consider some form of laminated busbar w.r.t. 5. <A> Good gate drive is a step in the right direction and has been stated in other answers. <S> Now it is time to look at T1. <S> There will be some leakage inductance between each leg of the CT primary. <S> When you turn off Q5 or Q6 the current is broken. <S> Energy stored in leakage inductance will go into horrible high-voltage spikes in your circuit. <S> You must deal with this to stop MOSFET failure. <S> When you plug in ballpark figures for the inductive energy your circuit is wasting and multiply by frequency to estimate power loss, you will find that these losses are bad. <S> So try to recover the wasted power to limit the voltage spikes and keep the MOSFETs cool. <S> One straightforward way to recover this energy is to build a passive snubber that burns power into a resistor so the FETs do not blow anymore. <S> Then optimise the waveforms. <S> Now decide if you want to put the energy into the input or the output or some auxiliary device (what I did was power the cooling fans). <S> Now all you need to do is build a small DC/DC converter to do this. <S> You should be able to get 90% back without too much effort. <S> You could also try an active clamp system. <S> Active clamps are easy to drive. <S> I have not implemented an active clamp.
provide a suitable gate drive circuit that can sink/source a high enough current at a decent slew rate (others have posted about a dedicated gate drive)
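A back-of-envelope estimate makes the thread's conclusion concrete. The gate charge figure below is a datasheet ballpark for the IRLZ44 and the drive currents are illustrative assumptions, not measurements from the asker's circuit:

```python
# Triangular-overlap approximation of switching loss: E ~ 0.5*V*I*(tr+tf) per cycle.
# Gate charge and drive currents are illustrative assumptions, not measured values.
V_ds = 5.0        # drain voltage being switched (from the question)
I_d = 1.0         # load current, amps (from the question)
f_sw = 100e3      # PWM frequency (from the question)
Q_g = 48e-9       # total gate charge, ballpark for an IRLZ44
I_gate = 0.02     # what a bare MCU pin might source, ~20 mA (assumption)

t_switch = Q_g / I_gate                         # time to move the gate charge
P_sw = 0.5 * V_ds * I_d * 2 * t_switch * f_sw   # loss over both edges

I_driver = 1.0                                  # a dedicated gate driver (assumption)
t_fast = Q_g / I_driver
P_fast = 0.5 * V_ds * I_d * 2 * t_fast * f_sw

print(f"MCU-pin drive: t = {t_switch * 1e6:.1f} us, P_sw ~ {P_sw * 1e3:.0f} mW")
print(f"Gate driver:   t = {t_fast * 1e9:.0f} ns, P_sw ~ {P_fast * 1e3:.0f} mW")
```

Under these assumptions the switching loss drops by the same factor as the gate current rises, which is why every answer points at a proper driver.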
Is it safe to define a memory section over multiple physical memories? I am working on the memory mapping on my MCU. Let's say I have this mapping for the physical memories: 0x1000 RAM 0 0x2000 RAM 1 0x3000 Then I have the memory allocation. In this part RAM 1 is not used as a whole: RAM0 : origin = 0x1000, length = 0x1000; RAM1 : origin = 0x2000, length = 0x400 // <= Only 1 KB used Then I have sections: .stack : > RAM0; .ebss : > RAM1 This is just an example, since it is not the reality of my project. To avoid wasting memory in RAM 1, I'd like to create a memory allocation over both memories: RAM0_M1PART1 : origin = 0x1000, length = 0x1C00 // <= Added 3 KB here; RAM1_PART2 : origin = 0x2C00, length = 0x400 // Moved to the end of the range (2C00 -> 3000) The question is quite simple: is it safe? Thank you! <Q> There are several interpretations of 'safe'. <S> One is, can the processor access the RAM blocks as contiguous blocks? <S> That will be described in the datasheet for the part. <S> I would expect microcontrollers with separate but adjacent RAM to have some restrictions. <S> For example there may be no restrictions for the CPU, but the bus might prevent access to some blocks by DMA. <S> So, you might develop a program, then later discover it needs to use DMA. <S> That would be a time when the behaviour might become 'unsafe'. <S> Tracking down the bug might be hard if the memory needed in DMA transfers is crossing and uncrossing the memory boundary as the program evolves. <S> A second use of different segment names for the same memory is to 'reuse' memory at different stages in a program's lifecycle. <S> For example some blocks of variables may be needed during an early phase of the program's lifetime, say for initialisation, but never later. <S> So you could have the linker 'overlay' variables with disjoint lifetimes. <S> This can be tricky to debug, and can make maintenance significantly harder because your program is managing its variables' lifetimes.
<S> What are you going to use the alternative memory segments (RAM0, RAM1, RAM0_M1PART1, and RAM1_PART2) for? <S> Are you planning to ask the linker to place variables or code in all four memory segments within one program? <S> As explained above, that is unlikely to be safe unless you manage when the alternative memory segments are 'alive', so that their lifetimes don't overlap. <A> For many kinds of memory access, the processor will have no reason to care about the boundaries between memory sections. <S> There are some times when it might matter, however. <S> On some TI DSPs, for example, there are regions of memory (called DARAM, Double Access RAM) that can - using a special instruction - simultaneously read out a word and copy it over the preceding word. <S> Those instructions will fail if used on the first word of one of those memory regions, however, since there is no mechanism for selecting one region for the read and another for the write. <S> Unless one is using such mechanisms, however, most individual operations will take place entirely within a single memory region, and the processor won't care whether different operations in a sequence come from different places. <A> Depending on the architecture of the particular MCU, treating different blocks of physical memory as a contiguous area may or may not be safe. <S> The blocks may be specified separately simply because that is how they are defined in the hardware, or they may be different types of memory that require different procedures to access (eg. <S> Flash vs RAM, internal RAM vs external RAM) or have restricted access (eg. <S> stack pointer can only address a small fixed area of the RAM, DMA controller is wired to a particular block). <S> The purpose of memory mapping is to tell the compiler how to use the memory. <S> The compiler normally expects different sections to be separate from each other, whether in the same or different blocks of physical memory. <S> (eg.
<S> stack running into bss and corrupting variables). <S> This is obviously not safe! <S> The only time it is safe to map one section over another is if you intend the memory to be used for different purposes at different points in the program, and you know that the compiler won't get confused by this dual use. <S> So in your example where you want 3 KB more in the first section, first check that the hardware is capable of using the two RAM blocks contiguously for that purpose (eg. <S> stack pointer not restricted to the first block). <S> Then map the second section into the free area that is left, like this: RAM0_M1PART1 : origin = 0x1000, length = 0x1C00 // <= Added 3 KB here; RAM1_PART2 : origin = 0x2C00, length = 0x400
If one section is mapped over another then it will try to use that same memory for different purposes, which could cause the program to fail
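The "don't overlap sections" rule from the answers is easy to check mechanically. Here is a small sketch using the exact origins and lengths from the question's proposed map; the check itself is my addition, not part of any linker:

```python
# Sanity check mirroring the question's memory map: verify the redefined
# segments stay inside the physical RAM and never overlap each other.
segments = {
    "RAM0_M1PART1": (0x1000, 0x1C00),  # (origin, length)
    "RAM1_PART2":   (0x2C00, 0x0400),
}
physical_end = 0x3000  # end of RAM 1 in the question's map

def overlaps(a, b):
    """True if two (origin, length) ranges share any address."""
    a0, alen = a
    b0, blen = b
    return a0 < b0 + blen and b0 < a0 + alen

names = list(segments)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        assert not overlaps(segments[n1], segments[n2]), f"{n1} overlaps {n2}"
for name, (origin, length) in segments.items():
    assert origin + length <= physical_end, f"{name} runs past physical RAM"
print("memory map is consistent")
```

For the question's values this passes: RAM0_M1PART1 ends at 0x2C00 exactly where RAM1_PART2 begins, so the two segments tile the RAM without overlap.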
Using a supercapacitor as backup for an MCU I have this project that requires some kind of backup power supply, and I am planning to use a 5V 4F supercap. Here are my questions: I am planning to charge the cap through a diode and 100 ohm resistor from a 5V VCC (good idea?). How can I connect the cap to the MCU? A direct connection will not work because it will take some time for the cap to charge up. Normally the circuit will consume 20mA; in power-off mode, it will use about 200uA. How long will this 4F cap last? <Q> Assuming ideal conditions, i.e. no leakage current in the capacitor and other parts of the circuit. <S> Case 1: Your microcontroller is running and drawing 20 mA. Let's assume your microcontroller will work fine until the voltage reaches 4V. <S> However for an ATmega328, you can make it run at even lower voltages if you choose to run it at a lower clock frequency. <S> Assuming 20 mA at 5V, your load resistance will be 5V/0.02A = 250 ohms. <S> Here is the complete theory in one image: <S> Initial Vo = 5V and final Vc = 4V. <S> Solving for time gives 225 seconds. <S> It means your microcontroller will keep functioning for another 225 seconds after you lose power, provided the capacitor was charged to 5V. Case 2: Your microcontroller is in power-off mode consuming 200 uA. <S> R = 25000 ohms. <S> Solving for time gives 6.25 hours. <S> This is the theoretical maximum time you can get. <S> Things can't get better than this unless you are planning to run your controller at a lower clock frequency. <S> Just for your reference, the ATmega328 can run from 1.8V. <S> For this you get a time between 17 minutes and 28.33 hours. <S> These are theoretical values. <S> Practical values will be even less due to leakage in your diode, the capacitor itself and other circuit elements.
<A> This will mean that as VCC is lost, the cap will stop charging and the diode input for VCC will drop, but the diode from the cap to the MCU's VCC will continue to conduct until the discharge curve shown by Whiskeyjack reaches a critical point where the ATmega's brown-out detection circuit kicks in and it shuts down. <S> You may want to check your brown-out-detection voltage fuse settings, by the way; it's pretty important. <S> Note: Part numbers for the diodes are just default values in the circuit maker tool. <S> Find some 300-400mV forward drop diodes. <A> To build your circuit, I would suggest using a super-capacitor charger IC. <S> LTC makes great products, and something similar to the LTC4425 would serve you well. <S> This will do a great job of super-capacitor management. <S> Also, 20mA is a reasonably high current draw from a super-capacitor, so you have to watch out for ESR, or Equivalent Series Resistance. <S> All real capacitors have a parasitic resistance inside them which is modeled into the circuit as being in series. <S> At 30 ohms and 20 mA, you would see a 0.6V drop, which is quite a waste. <S> Be sure to find something in the 30 mΩ range.
For the connection of the cap to the MCU's VCC pin, you can use a simple dual-diode "OR" with low forward-drop diodes.
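The hold-up times worked out in the accepted answer follow from the RC discharge model t = R·C·ln(V0/V1), treating the load as a constant resistance (as the answer does). A quick reproduction with the thread's numbers:

```python
# Reproduce the discharge times from the answer above using t = R*C*ln(V0/V1),
# with the load modelled as a constant resistance (same simplification as the answer).
import math

C = 4.0             # supercap, farads
V0, V1 = 5.0, 4.0   # starting voltage and assumed brown-out threshold

def hold_time(load_current_at_v0):
    R = V0 / load_current_at_v0      # equivalent load resistance at full voltage
    return R * C * math.log(V0 / V1)

t_run = hold_time(0.020)     # 20 mA running -> ~223 s (the answer rounds to 225 s)
t_sleep = hold_time(200e-6)  # 200 uA sleeping -> ~6.2 h

print(f"running:  {t_run:.0f} s")
print(f"sleeping: {t_sleep / 3600:.1f} h")
```

As the answer notes, these are best-case figures; diode and capacitor leakage will shorten them in practice.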
Why do Flash and SRAM have a "hold" signal? I have read datasheets on SRAM and Flash devices. They seem to have a "hold" signal that can be used to prevent the device from registering the input data/instructions until the hold signal is deasserted. When would one need to use such a thing? The 23LC1024 SRAM datasheet says: "The HOLD pin is used to suspend transmission to the 23A1024/23LC1024 while in the middle of a serial sequence without having to re-transmit the entire sequence over again" These memory devices also have dual and quad SPI modes in which the hold signal is replaced with a data IO signal instead. This makes me think that the hold signal is not really important, or is it? <Q> As well as what the other answers have stated, the pin also has a use in DMA transfers. <S> In one design I had an SPI-based MP3 codec IC which required data to be copied from Flash memory (could also be SRAM) to the IC using the same data bus (due to having only one hardware SPI bus on the MCU I was using). <S> The process here is essentially: (1) set the start address in the memory, (2) read a small chunk of data, (3) put the memory in suspend mode, (4) write the chunk of data back out to the MP3 codec, and (5) bring the memory out of suspend mode and jump back to (2). <S> The advantage of this process is it is a lot faster to toggle the suspend pin between each chunk of data than it is to start a new read from the memory. <S> To start a read you usually have to send at least 3 bytes (command + address) to the memory, which takes many, many more clock cycles than just toggling the hold pin between data chunks while you use the bus for something else. <A> I suppose it is possible to do that if you design everything very carefully, but it raises more issues than it solves. <S> If interrupt code really needs to have low latency access to a particular device, you should probably put that device on its own bus.
<S> However, in general, interrupt code shouldn't be trying to do something as slow as SPI transactions anyway. <S> A much better strategy is to employ a proper software architecture so that multiple tasks can use the SPI bus serially by use of a mutex. <S> I have done this several times, and it makes multiple asynchronous SPI transactions pretty easy and reliable without stepping on each other. <S> Basically, bus sharing shouldn't be done at such a low level that individual chips need to be suspended in the middle of a transaction while something else uses the bus. <S> That's a hardware band-aid to deal with poorly thought out software. <A> The only way I've used the hold signal is to attach an RC network to prevent the FLASH from reading incorrect data during power-up. <S> I think one of the FPGAs I was using at the time had unspecified behavior on its configuration pins before the power lines had settled. <S> Effectively this is a second chip select. <S> Another useful application is when you need to read information from a secondary memory chip on the same bus mid-transmission in time-critical applications. <S> You do not need to wait for the SPI to finish its message and can resume it later.
I think the purpose is so that exception software can do other things with the SPI bus while your code is in the middle of a transaction.
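The mutex approach recommended above can be sketched in a few lines. This is an illustration of the scheduling idea only (the `spi_transfer` function is a hypothetical stand-in for a real bus driver, and Python threads stand in for RTOS tasks):

```python
# Sketch of the mutex approach from the answer: each task completes a whole
# SPI transaction under a lock, instead of suspending a chip mid-transfer
# with its HOLD pin. spi_transfer is a hypothetical stand-in for a bus driver.
import threading

spi_lock = threading.Lock()
log = []  # records (device, frame) in the order they hit the "bus"

def spi_transfer(device, data):
    log.append((device, data))      # placeholder for the real bus transfer

def transaction(device, frames):
    with spi_lock:                  # the whole transaction is atomic on the bus
        for frame in frames:
            spi_transfer(device, frame)

t1 = threading.Thread(target=transaction, args=("flash", [1, 2, 3]))
t2 = threading.Thread(target=transaction, args=("codec", [9, 8, 7]))
t1.start(); t2.start(); t1.join(); t2.join()

# Each device's frames stay contiguous in the log: no interleaving occurred.
flash_idx = [i for i, (d, _) in enumerate(log) if d == "flash"]
assert flash_idx == list(range(flash_idx[0], flash_idx[0] + 3))
print("transactions did not interleave")
```

The lock guarantees each transaction runs to completion before the other task touches the bus, which is exactly what the HOLD-pin trick tries to avoid needing.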
Does oxidized copper conduct electricity? When the surface of copper turns that greenish oxidized color, does the resistance to current flow increase, or isn't it affected? For instance, if the point of contact is secure and clean but the bare visible copper wire has oxidized, will it still conduct electricity without a problem? Like this ground wire in my car. The resistance to the negative of the battery reads 0 \$ \Omega \$ as well as from both ends of the cable itself. <Q> The oxides are non-conductive as they have full valence bands, but if you "dig into" the wire, you'll get to metal that isn't covered with an oxide. <S> CuO is pink, but does not complete the valence rings, so you get Cu2O after a time, which is black. <S> The green is either from a sulfate or carbonate. <S> You have CO floating near the engine, so you'll have some green after the Cu2O reduces. <S> If you see green near the battery, it's because the sulfur from the battery has electromigrated up to the connector and gone into a lower energy state there. <S> You see this when you have a "bad cell". <S> I'm pretty sure it's Cu4SO4 with some (OH) hydrated state. <S> Anyway, you need to take some steel wool and clean off the wires in order to remove the oxide if you want to remount them. <S> You could put them in a glass of Coke and have the phosphate reduce the copper oxides. <S> Everything just wants to be at a lower energy level, and if you are there, you don't conduct. <A> When cars and trucks burn fuel they emit sulfur dioxide and other pollutants (all bad for your health). <S> The sulfur dioxide mixes with the moisture in the air to make a very mild sulfuric acid. <S> This mild acid reacts with the exposed copper and turns it green. <S> This green oxide is non-conductive. <S> However the copper wire that is crimped by the terminal connection or covered by insulation is protected from this mild acid mixture and thus stays copper-bright and totally conductive. <S> I have seen gold, silver and iron turn black.
<S> Aluminum turns powdery white from this same mild acid. <S> You could use a white light grease to seal the metal contacts from the environment and oxidation. <S> We used to use that grease in factory environments where petroleum lubricants (high in sulfur) were used. <A> In your photo the corrosion on the outside does not really matter, and it tends to be self-limiting: once a layer forms the corrosion slows greatly. <S> What matters is the connection between the plated copper or brass lug and the copper wire, and that is a crimp connection. <S> A proper crimp joint is gas-tight and will not allow corrosion to occur within the joint. <S> To get a reliable gas-tight crimp, proper tools should be used in accordance with the manufacturer's directions. <S> A cheap crimp tool that just mashes the lug barrel against the wire is a recipe for unreliability. <S> Good ones are made with precision, hardened dies and ratchet so that once a crimp is started it must be completed before the tool opens. <S> Here is a photo (from here) of some properly crimped connectors that have been sectioned to show the wire-lug interface. <S> As you can see it's become pretty much a solid mass: <S> If you sliced open your automotive lug you would likely see a similar wire-lug interface that is a solid mass. <A> Metallic materials, in certain conditions, can develop a thin film of ceramic material for various reasons (it is usually an oxide of the underlying metal). <S> Ceramic materials do not conduct electricity, but the surface thin film is usually confined to a few atomic layers, so it will not significantly affect the properties of the bulk metal (provided the thickness of the metal is not subnanometric). <A> You have proved that for yourself with your zero-ohm measurement.
The surface oxidation does NOT affect the conductivity of the wire unless the degradation is MUCH deeper.
How to minimize hash noise in an amplifier mounted near a boost regulator I've designed a low power (20W) audio guitar amplifier circuit whose pre-amp section has a somewhat high input impedance, as is best for a magnetic guitar pickup. Since it operates from a battery supply, I'd hoped to employ a boost regulator so that the maximum output I have available before clipping would stay more constant as the battery voltage faded. My circuit also employs a simple compressor, which means there is more gain at lower volumes than at higher ones. Unfortunately, while everything was quiet and worked very well on the bench, I'm getting what I guess I'd call hashing noise as the components are brought closer together. The culprit is obviously the boost regulator, and the enclosure size makes it impossible to place that board more than about 6 inches away from the sensitive pre-amp circuits. Not surprisingly, the noise is worst when the amplifier's active treble control is maximized, and/or my compressor circuit is active. The boost regulator's oscillator operates at around 100kHz and of course generates additional harmonics, but I was surprised how much audible noise it added. Which of the following do any of you think has the best chance of minimizing this problem (feel free to add more or tell me which ones likely won't help): Shielding around the boost regulator (what material... grounded or floating); Adding more low value ceramic caps across the boost reg output; Adding inductors to the boost reg output; Building a simple shield completely surrounding the amplifier's pre-amp electronics. At this point I'm strongly considering just ditching the boost regulator idea, if I can't drastically decrease this problem. FYI, the boost regulator is a fairly common one good for moderate power, max 6A, 4.5-32V in to 5-60V out, often sold on eBay or AliExpress. You may recognize it from the picture...
As requested by Andy, I've added a picture of my basic wiring diagram, but of course it's not truly "physical". Ignore the BTEST board, which is just a battery test board only active when a button is pushed. Note that my power amp board is separate from the pre-amp/control board, which is also where the PCB-mount controls are soldered down. Addendum: user96037 had the most helpful info here. This photo of the boost converter output is very telling... There is 20mV (!!!) of noise at 176kHz here, much more if you count those spikes. The harmonic content is huge, so all my wires are behaving as antennas. It is obvious to me that if the amplitude can be cut and the waveform softened, I just may be able to salvage my use of this boost regulator. Already I've seen that adding a "random" pi-filter helps a LOT. (By random I mean my inductor was just 100 turns around a nail.) I've ordered some better (toroidal) 100uH inductors and will try to come up with the best capacitors. Of course my next challenge will be keeping my inductor far enough away from the coil on the boost converter, and figuring out the highest value capacitors that still offer decent high frequency behavior. Thanks again everyone... I'll update this thread with my progress for the benefit of anyone else dealing with this issue. <Q> There are two main types of electromagnetic interference: conducted emissions and radiated emissions. <S> CONDUCTED EMISSIONS: <S> If you see lots of ripple then I would add a PI filter at the output of the regulator using a small inductor and two ceramic caps, and a small damping resistor in series with the inductor, where R > SQRT(4L/C). <S> The resistor prevents resonance that could boost the noise at the resonance frequency of the filter, 1/(2*pi*SQRT(L*C)) Hz. <S> The cutoff frequency 1/(2*pi*SQRT(L*C)) Hz should be several times lower than the 100kHz frequency of your regulator.
<S> Remember that the PI filter is second order, so if you make the cutoff 10x lower (at, say, 10 kHz) you would squash the noise by a factor of 100x. <S> Also, keep the inductor in your PI filter away from that large toroid or they may couple together, which defeats the purpose of the filter. <S> RADIATED EMISSIONS: <S> If you think that electromagnetic fields are radiating, then steel sheet metal will block both electric and magnetic fields. <S> It needs to be a magnetic type of steel. <S> You basically just need to form it into a box around your noise source (in this case the power supply). <A> Typical practice is two steps: <S> Fully ENCLOSE the offending circuit (your switch-mode booster) in a Faraday cage, i.e. a metal enclosure/box. <S> Use FEEDTHROUGH FILTER CAPACITORS where any voltage goes INTO the Faraday cage, or comes OUT of it. <S> The exact value of the capacitors depends on the operating frequency of your switch-mode booster. <A> I got nervous just by reading "boost converter" and "preamp" in adjacent sentences. <S> Use a linear regulator for the preamp and leave the power amp directly powered. <S> You will get both less noise and more battery life. <S> They make rather good batteries nowadays so you can get some 10% voltage drop from a fresh battery to an 80% depleted one, especially if you use rechargeables (and you should).
The first thing I would try is to look at the output of the boost regulator on an oscilloscope and see how much ripple voltage there is.
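The PI-filter design rules from the answer above can be evaluated with concrete numbers. The 100 µH inductor matches what the asker ordered and 176 kHz is the measured noise frequency; the capacitor value is an assumption for illustration:

```python
# PI-filter numbers per the answer above. The 10 uF capacitor is an assumed
# value; the 100 uH inductor and 176 kHz noise frequency come from the thread.
import math

L = 100e-6        # henries (the toroidal inductors the asker ordered)
Cf = 10e-6        # farads (assumption)
f_noise = 176e3   # measured noise frequency from the scope shot

f_c = 1 / (2 * math.pi * math.sqrt(L * Cf))   # filter cutoff frequency
R_damp = math.sqrt(4 * L / Cf)                # answer's damping criterion R > sqrt(4L/C)
atten = (f_noise / f_c) ** 2                  # rough 2nd-order rolloff factor

print(f"cutoff ~ {f_c / 1e3:.1f} kHz")   # ~5 kHz, well below the 100 kHz switcher
print(f"R_damp > {R_damp:.1f} ohm")
print(f"attenuation at 176 kHz ~ {atten:.0f}x")
```

With these values the cutoff sits far below the switching frequency, so the second-order rolloff gives roughly three orders of magnitude of attenuation at the measured 176 kHz noise peak.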
What is this cap-like part? I found a huge cap-like part in an old parts bin of mine. I tried making sense of its markings and I don't really understand what they mean. I googled for the part number and found eBay listings and such, suggesting it's indeed a cap according to the eBay category, but even there I didn't find further information about the specs of this thing (somewhere it was suggested it's a 40uF 1250V cap, but I don't see 40 written anywhere on the part). Testing it with my component tester delivers very weird results: it's detected as a diode but also with 1.8mF capacitance...?! (Usually the tester works nicely and rarely lies to me.) Can you help me identify this? Here is the picture: Picture http://share.cherrytree.at/showfile-24709/img_20160611_122403.jpg EDIT: Interestingly, while playing with it again, the tester detected it once as a capacitor, with 1759uF and 7.9 ohms ESR. But I can't reproduce it; now it's back to the diode rubbish... Anyway, I don't get how to read these markings correctly. EDIT2: Multimeter gives me 1755uF. <Q> "1250 MFD" is 1250 µF - it's a capacitor. <S> "50 W.V." is 50 working volts. <S> If your tester is feeding it AC, it might identify it as a diode, because of the reverse leakage current. <S> When it does identify it as a capacitor, it isn't surprising that the value is off by quite a lot. <S> Tolerances weren't tight to begin with, and this unit has some age to it, which probably means a lot of drift in the value. <S> You could try connecting it to a power supply of about 25V or so for a while to see if this re-forms the oxide layer. <S> Pay attention to the correct polarity! <S> If the electrolyte hasn't dried out completely, this might allow your tester to identify it more reliably. <A> The Cornell Dubilier Electronics logo is marked below the 50 W.V. <S> That company was established in 1909: CDE History <S> The company is apparently now part of Kemet Corporation.
<S> I suspect that the "R" in a circle to the right below "Computamite" indicates that word is a registered trademark. <S> It probably designates "computer grade electrolytic." <S> I don't think that term is recognized as a standard by any standards-publishing organization, but it may have an informal "industry consensus" (some would say honorific) meaning. <A> It looks like a capacitor to me. <S> It says on the can: 50 V (working), 1250 microfarads.
Specifically, it's an electrolytic capacitor.
How do you document bodges? After making the PCBs, we decided on some bodges and implemented them on the PCBs. What's the standard way to document that on the schematics? For now, I hand drew with the mouse, paint style, on the schematics PDF files, to convey the idea that it's a bodge, but is there a better way? <Q> The modification is a proper part of the design of the product. <S> So, no, don't "hand draw that on the PDF". <S> Add it to the schematics. <S> You should have proper versioning control for your schematics anyway <S> (Try git. <S> It's designed for source code, but it works with any file type – you just won't get useful differences displayed between revisions, but that's not a problem here). <S> If your EDA suite allows that, change color, stroke, whatever of the modification to make clear it's a modification. <S> If your modification is just a wire bridge: it's not uncommon for schematic design programs to have some kind of special "external connection" component. <S> Since you probably still want to be able to generate a valid board from that schematic, make sure the "external connection" component you have has a small single pad that you can place on a wire in your board design. <S> That way, you also get a guarantee you have something to solder to in the end. <S> Again, I don't know what software you use, but it will still make sense to "formalize" even the crudest hack into some kind of footprint/component: your wire bridge could simply consist of two components with one pad each, which share the same net name. <S> That way, you could actually "place" the fix in your board design, and hence, have it documented. <S> Also, your board might fail design rule checks. <S> Which would obviously be correct. <A> Create a new minor revision for the intermediate changes and indicate them however you like as long as they can be printed out or copied reliably. 
<S> Hopefully they will not be used to create new faulty PCBs in the future, so it is a dead-end path. <S> The next more significant revision should incorporate those changes and any others that may have occurred, which will allow new boards to have missing parts placed and traces to be routed correctly. <S> That version does not need to indicate the history of how you got there as it is in the version history already. <S> Avoid reallocating part designators if you have released any documents outside the design office. <A> The classic way is to designate a (paper) copy of the schematic as a working master. <S> All changes get entered on the schematic in red. <S> When you're ready to fab the next rev of the board, you hand over the master to whoever is doing the schematics. <S> At the same time, you need to start keeping a notebook. <S> Sketch out your design ideas as you think of them, and record your design experience. <S> When it comes time to start troubleshooting, record all changes you make in your notebook, as well as marking up the schematics. <S> Then, when you're ready to update the design, cross-check the master schematic with the notebook, and make sure that they agree.
Clearly annotate it in your schematics.
What is a weak transistor? In implementation section of C-element at Wikipedia website: https://en.wikipedia.org/wiki/C-element there is a diagram that points to a weak transistor. What is a weak transistor? Below is the diagram that I am talking about: <Q> I'm pretty sure they mean a transistor that is designed to have a deliberately high on resistance. <S> This allows other transistors to overpower it. <A> A "weak" transistor has a lower transconductance, which could be done by making something longer, current starved or a threshold implant. <S> I would encourage you to avoid "weak" transistors for a few reasons: <S> we really cannot use them on aggressive processes due to doping spacing <S> you cannot just make "long" devices because it moves the threshold toward the bulk, and you need to have a biased FET or resistor. <S> Significant power is needed to overdrive the other FET. <S> You have Muller C-elements there, and there are ways to make static Muller C-elements without weak transistors. <S> Edit <S> As mentioned in the comments, weak transistors are used in I/O, and I was focused on the datapath, but here's a feel for making these devices with resistors on a commercially available CMOS process that I'm allowed to talk about. <S> As devices have gotten smaller, you find that you have greater space between implants; also, devices cannot be made "longer" without tiling standard devices in a chain, and this makes a mess of your datapath; however, I/O drivers are always the exception. <S> The I/O drivers are huge, so you generally use resistors to make your weak devices as you have space. <S> There's a yellow box in the picture above for the "digital minimum" size, and the size for a driver is 2x that on this process. <S> I use the n+ diffusion resistors for my FPGA reset pin because I also use it for a double bus fault notification. <S> You bring the pin low to reset, OR I can wire logic up to it to bring the pin low when I have an unrecoverable timing issue. 
<S> If you want to compare this to the length of a FET, I spec my I/O drivers for 20mA, which makes them 4 micrometers wide at a minimum length for I/O FETs, which gives me 25 "squares" of equivalent space. <S> This would give me 8.7k of resistance if I ran a p+ poly up the side in parallel with my driver. <S> There's the high altitude view of where weak transistors are actually used, and we (well, me anyway) use resistors to make them instead of playing with threshold implants or length. <A> This is done primarily with a lower \$ \frac{W}{L} \$ <S> although in most digital processes you don't change the L so much, so that means that the W must be smaller than those other transistors. <S> In this circuit these weak transistors are being used as bus holders and need to not cause as much conflict on the input to that last inverter.
A weak transistor is one that has lower current \$ I_{ds,sat}\$ relative to others in the circuit.
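The answers above say a weak transistor has a lower \$ \frac{W}{L} \$ and hence a lower drive current. A minimal sketch of that scaling, using the textbook square-law MOSFET model; all device parameters (k', W, L, Vgs, Vth) are generic illustrative assumptions, not from any particular process:

```python
def ids_sat(k_prime, w, l, vgs, vth):
    """Square-law MOSFET saturation current: Ids = (k'/2) * (W/L) * (Vgs - Vth)^2."""
    return 0.5 * k_prime * (w / l) * (vgs - vth) ** 2

# Same width, but the "weak" device has a 4x longer channel,
# so its W/L (and hence drive current) is 4x smaller.
strong = ids_sat(k_prime=100e-6, w=2e-6, l=0.5e-6, vgs=1.8, vth=0.5)
weak = ids_sat(k_prime=100e-6, w=2e-6, l=2.0e-6, vgs=1.8, vth=0.5)
```

With these assumed numbers the strong device sinks four times the current of the weak one, which is exactly what lets it overpower the weak bus holder in the C-element.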
I cannot understand this circuit diagram I have many questions about this circuit diagram. I found it on the datasheet of NCP1012 . It is a switching mode power supply SMPS. Are E1 and E2 electrolytic capacitors and C1 and C2 ceramic capacitors? What is the value of C2? Is it 2.2 nF? And what is the meaning of /Y next to its value? What are the numbers that are around transformer pins? Is number 5 connected to ground? What are J1 CEE7.5/2 and J2 CZM5/2? I think they are simply output and input. What is the turns ratio? If I need a 5 V output, should I change the ratio? <Q> Are E1 and E2 electrolytic capacitors and C1 and C2 <S> ceramic capacitors? <S> E1 and E2 are polarised electrolytic capacitors. <S> C1 & C2 are unpolarised & could be ceramic <S> What is the value of C2? <S> Is it 2.2 nF? <S> And what is the meaning of /Y next to its value? <S> The value of C2 is 2.2nF. <S> The Y means the safety type of capacitor. <S> X are good across lines and Y are good for line-to-chassis. <S> It's all associated with their failure modes. <S> What are the numbers that are around transformer pins? <S> Is number 5 connected to ground? <S> A larger grounding scheme would be needed to confirm this. <S> What are J1 CEE7.5/2 and J2 CZM5/2? <S> I think they are simply output and input. <S> These are input and output connectors/headers. <S> What is the turns ratio? <S> If I need a 5 V output, should I change the ratio? <S> There isn't enough information to state either way unfortunately. <A> /Y is a "Y safety rating". <S> Y capacitors are usually metallised film and will be marked with class Y. <S> The other 2.2 nF capacitor needs to handle several hundred volts at hundreds of kilohertz, so it is probably a high voltage ceramic capacitor. <S> The 10u 400V is probably electrolytic, the others probably solid-electrolytic. <S> If you buy the right transformer the numbers will match. <S> J1 and J2 are connectors (screw clamping terminals); the input side has 7.5mm spacing, the output side 5mm spacing. 
<S> the datasheet should list the transformer you need, and name the maker. <S> A quick scan suggests Coilcraft <S> A9619-C <S> is the one you want. <S> for 5V output you could go to a steeper turns ratio (but this is not needed unless you need more current); you will however need to modify the feedback circuit, as the one presented is designed to produce approximately 12V output. <S> probably you want the TL431-based feedback; check the TL431 datasheet for examples. <A> I will answer your questions in the order you asked them. <S> 1) <S> Yes, E1 and E2 are electrolytic capacitors designed for SMPS. <S> They have short stubby leads for short length connections for power and ground. <S> 2) <S> 2n2/Y is 2.2nF (just a shorthand description). <S> The "Y" marking defines <S> its 'safety' grade as being allowed for use from chassis or signal to earth ground. <S> 3) <S> Pins 1 and 4 are the transformer primary winding, usually with 40 to 60 turns depending on the core properties. <S> Pins 7 and 6 are the secondary outputs. <S> Pins 8 and 5 are not used, just shown as pins with no winding connections. <S> 4) J1 is an AC power input with a range of 100 to 240 Vac. <S> The internal mosfet is rated to 700V per its datasheet, so it has a wide range of operation. <S> J2 is the DC output, set to 12 volts as it is. <S> Due to a lack of inductive filtering, the output may have some AC noise and ripple of 100mV or so. <S> In 'pulse-skipping' mode (used for very light loads) <S> the output noise can be 1 volt P-P, in short bursts about every 100 ms. <S> 5) <S> The turns ratio is impossible to know or calculate exactly because it is custom 'fit' to that type of core. <S> You cannot change the turns ratio because the windings are sealed with epoxy to prevent "buzzing", and because the secondary goes on first, so it is closest to the core to pick up maximum energy transfer, and may have extra windings to make up for core losses. 
<A> E1/E2 are indeed electrolytics (although I would use a ceramic instead of E2 -- there's no reason to use electrolytics at that CV-point these days). <S> C1 and C2 may be ceramic or film types. <S> C2 is indeed a 2.2nF capacitor -- <S> the Y is a safety rating used for mains-to-ground capacitors (i.e. it sits between the mains and something you can poke with your finger, so it cannot fail in a way that'd lead to you getting shocked). <S> There are different "grades" of Y capacitors by the way; you'll want a Y1 for this application as there's only 1 capacitor between you and the mains -- if there were two in series, you could make them both Y2s. <S> The numbers on transformer pins are the pinout of the transformer -- SMPS transformers often have mildly complicated pinouts. <S> Pins 5/6 are connected to the output negative terminal (which isn't "ground" as this is a Class II/floating supply). <S> CEE7.5 is a type of mains plug (European), while the CZM5 is a type of DC power connector (likely a barrel-plug) -- they are indeed input and output respectively. <S> You can compute the turns ratio from the equation $$\frac{V_o}{V_i} = n\frac{D}{1-D}$$ <S> where Vo is the output voltage desired, Vi is the rectified DC bus, D is the duty cycle of the converter, and n is the turns ratio. <S> You may be able to get 5V out of the converter without changing n by changing the Zener, which changes D -- however, that will have other effects on the converter as well.
C1 and C2 are typical ceramic capacitors. They are the pins of the transformer, and yes, pin 5 is connected to some reference point that may be ground, maybe chassis.
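The flyback transfer function quoted in the answer above, \$\frac{V_o}{V_i} = n\frac{D}{1-D}\$, can be checked numerically. The bus voltage, turns ratio, and duty cycle below are illustrative assumptions (not values from the NCP1012 datasheet), chosen to land near the 12 V output the circuit is designed for:

```python
def flyback_vout(v_bus, n, duty):
    """Ideal (lossless) flyback output voltage: Vo = Vbus * n * D / (1 - D)."""
    return v_bus * n * duty / (1.0 - duty)

v_bus = 325.0   # roughly 230 Vac mains rectified: 230 * sqrt(2)
n = 0.06        # assumed secondary:primary turns ratio
d = 0.38        # assumed duty cycle

vout = flyback_vout(v_bus, n, d)  # close to 12 V with these assumptions
```

This also shows why you could reach 5 V either by lowering n (a different transformer) or by letting the feedback loop settle at a smaller D, as the answer suggests.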
Gates and energy loss I read in a book that when we have for instance a NAND or NOR gate, the processor doesn't lose energy because we do not have direct current transfer from Vdd to Vss so the processor does not consume energy (apart from some leakage currents). I didn't understand it so much, so my question is: why don't we have that? (If someone could be more specific.) <Q> This is not quite clearly formulated, but I guess what the book meant is that typical gate chips are not drawing current at steady state. <S> This wasn't true in the older days (when TTL ruled the world), but nowadays, this is almost true with the CMOS chips. <S> Take a look at the implementation of a NAND gate, for example: <S> Whatever the state of the inputs, you see that there is either a path from output to ground (Q3 and Q4), or a path from Vdd to the output (Q1 or Q2), but current can never go from Vdd to ground (More details on how CMOS basic functions work can be found here ). <S> Moreover, the inputs being FET gates, the current drawn from the inputs is null. <S> So, this gate, at steady state, does not draw any current (well, almost). <S> However, when the output state changes, during the time the FETs change state, there can be a current flowing directly from Vdd to ground (when the FETs are in their linear region, i.e. half-blocking, half-passing). <S> So the more state changes, the more power consumption. <S> Which is why power consumption of processors heavily depends on their frequency of operation. <S> Now, there is also some tiny leakage current, even when the FETs are blocking. <S> And there is a gate leakage current also, which makes the input currents not exactly null. <S> This leads to a few hundred nA wasted, usually. <A> this energy can never be recovered so that gets dissipated as heat. <S> Energy loss is therefore proportional to switching frequency. <A> simulate this circuit – Schematic created using CircuitLab Figure 1a. <S> CMOS output configuration. 
<S> 1b. <S> Hard-wired switch representation. <S> ... <S> we do not have direct current transfer from Vdd to Vss ... <S> We do this by ensuring that only the pull-up or pull-down transistor (M1 or M2) is turned on but never both together. <S> Figure 1b might make this a little clearer. <S> If the HI switch is closed the output is pulled high. <S> If the LO switch is closed the output is pulled LO. <S> If both are closed we would have a high current between Vdd and Vss <S> (so we try to avoid it!). <S> This is true for the static condition <S> but when we switch there is capacitance in the output stage (gates of the transistors, etc.) <S> and the input stages of the following devices. <S> This takes some current to charge and discharge and so power consumption rises with increasing frequency.
For sure there is energy loss: each time the gate output or input changes state there is capacitance charged or discharged.
Why are 'modern' high-efficiency LEDs easier to damage than 'old' LEDs? I have a few 'old time' LEDs from the late 80s and early 90s. The red and green 5mm LEDs (amber was a rarity, blue was 'impossible' back then). Not being very smart, I used to test them with a 9V battery without any resistor and, surprisingly enough, they always outlived the experience. Fast forward to the third millennium: I bought several dozen transparent 'high-efficiency' (or should I call them 'high brightness', hard to tell without a datasheet) LEDs on the Web. They are superbright, but the one time I tried to " Oh, here's a 9V battery: let's see what color this is " one of those, they almost instantly died after a faint flash that told me " I was blue, you #@@#! " (Like this: https://www.youtube.com/watch?v=7IoyYj6BJlc ) Now, it is clear to me that a resistor is required to limit the current, but my question is about what exactly kills the LED, or put in another way: why do 'old technology' LEDs survive? Is it related to the fact that 'old' LEDs exploited recombination between conduction and valence band of a sturdy PN junction, while 'new' LEDs are based on more exotic heterostructures that create quantum wells? Or is it because the 'old' manufacturing process used bigger dies, or thicker bonding wires, or materials that were so lossy that they provided enough series resistance by themselves? I think I owe an answer to my two dead blue LEDs. EDIT : just re-did the experiment with an 'old' red LED: I can leave it on for seconds without a problem using the same battery that zapped the 'new' LED. New EDIT : While I can let the LEDs light up for a few seconds, I managed to blow one up when trying to measure the current. So, they are harder to damage, but not immortal after all. Tried three or four more old LEDs and I can confirm that for at least one second they survive (apparently) unscathed. 
New LEDs die almost instantly. I will try later to measure the current in a more controlled setup, possibly with short pulses. I love the smell of burning GaAs in the morning. <Q> Yes, newer LEDs are also static sensitive. <S> I learned this the hard way when testing a batch of blue SOIC chips with (unknown to me) a soldering iron with defective ground which was later found to be floating at >30V. I can assure you that the LEDs did not work after this experience, and it wasn't heat, as a single touch to one side of a diode at even 100C ruined it. <S> Some started flashing like demented strobes, some just died completely. <S> The white LEDs in their caving lamps would often begin to flicker and eventually fail, at survivable (for humans) radiation doses. <S> Silicon carbide ones are less so <S> but still eventually fail; rumor has it that Cold War era LED technology is still used today on the ISS Zvezda module and the Progress spacecraft. <S> I did also find that some GaN based blue LEDs can be used as varicaps, in some cases with no effect from brightness loss. <S> The mechanism can generate 100pF changes comparable with an expensive part. <A> If you really want to know, you have to repeat the experiment with LEDs <S> you got a datasheet for and with current measurement. <S> Compare the measured current with the datasheet, especially with absolute maximum ratings. <S> Search the datasheet for different durations of maximum current. <A> There are many different reasons why they might fail more quickly. <S> But it all comes down to overheating in the semiconductor junction in the device; let's enumerate some of these: 1) <S> The new devices are brighter, so that could mean the die is larger to give a bigger emissive area; a larger device will flow more current and thus will look like a lower resistance device. <S> When connected to 9V it will appear as a larger load dumping more heat into the same thermal structures. 
<S> Its temperature rises more quickly and thus it dies sooner. <S> 2) <S> The new devices flow more current due to a difference in process; higher current means more power consumed which means that ... well, you do the simple reasoning. <S> 3) <S> That means that it will be more sensitive to overcurrent. <S> Essentially there is nothing to be learned here. <S> The "experiment" is not controlled, you're not examining the device before or after the "test" and more importantly you've only characterized these devices as "old" and "new". <S> Which manufacturer is that? <A> Modern 'high brightness' LEDs are designed to be strobed at high frequency. <S> This helps cool the junction as they are not on 100% of the time. <S> Usual control of modern LEDs, as well as strobing, is by current, not voltage control. <S> All (modern) LED driver circuits are current control, not voltage control. <S> Due to this they are far less tolerant to over-voltage than old LED packages (some of which had broad voltage ranges across the diode junction), and far more volatile when a fixed DC voltage is applied across the junction without being strobed. <S> Check the datasheet for the maximum forward current on modern LEDs; <S> they are quite low resistance and will try to draw more current than they can dissipate as heat and consequently fry themselves very quickly. <S> Check out these datasheets to see the V/mA levels - you will see quite clearly that luminosity is controlled by current. <S> Give a modern LED unlimited current and it works too hard and as you saw after a brief flash is pffffttt! <S> http://docs-europe.electrocomponents.com/webdocs/0026/0900766b80026dcc.pdf <S> http://docs-europe.electrocomponents.com/webdocs/14a3/0900766b814a37c6.pdf
The higher efficiency LED is designed for lower current limits because ... well, it doesn't need higher current to generate the same amount of light. Incidentally, newer LEDs based on quantum wells are also highly sensitive to ionizing radiation; I learned this by reading about folks venturing into the ruins at Chernobyl and Fukushima.
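The series resistor that would have saved the blue LED follows from the usual rule-of-thumb calculation, R = (Vsupply − Vf) / I. The forward voltage and target current below are typical assumed values for a blue indicator LED, not from any datasheet:

```python
def led_resistor(v_supply, v_forward, i_led):
    """Series resistor to limit LED current: R = (Vsupply - Vf) / I."""
    return (v_supply - v_forward) / i_led

# 9 V battery, ~3.2 V forward drop (assumed for blue), 20 mA target current
r = led_resistor(9.0, 3.2, 0.020)  # -> 290 ohms; a standard 330 ohm part is fine
```

With no resistor at all, the only things limiting current are the battery's internal resistance and the diode itself, which is exactly the overheating failure mode the answers describe.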
Neutral wire diameter and phase wires diameter (3 phase current) I have nothing to do with electrical engineering so this may be lame question. According to Kirchhoff's law sum of currents of L1+L2+L3 should be same as current of neutral wire. What puzzles me is why neutral wire has the same diameter as phase wires in a cable. Can anyone explain or point what may be stupid about this question? ADDITIONAL INFO L1/L2/L3 do not have to be equal. I know a building with say 3 flats can be connected to "outer world" with 3 phase cable and each phase is connected to different flat. In such scenario phase currents differ but neutral is common. Doesn't neutral wire conduct sum of all 3 phase currents then? <Q> What happens in the case of a symmetric, non-harmonic generating load is that the fundamental currents cancel each other out when added to form the neutral current -- this is a result of the 120 degree offset between the phases. <S> (Think of it as current flowing from hot to another hot and returning that way instead of returning on the neutral.) <S> There are two cases where this does not hold though: asymmetric loading and triplen harmonics. <S> In the worst possible asymmetric loading case, the circuit's rated ampacity is placed entirely on a single phase. <S> (Putting the rated ampacity on two phases doesn't change things, either.) <S> What's worse, though, is when you have harmonic generating loads spread across the phases. <S> While the fundamental and most of the harmonics cancel, odd harmonics that are also multiples of 3 (3, 9, 15, and so on, called "triplen harmonics" in the electrical world) <S> do not cancel out, leading to a situation where the effective neutral current is higher than expected -- it can be even higher than the circuit ampacity as these harmonics are coming from loads on all three phases and returning on the neutral. 
<S> The resulting high currents can overload the neutral alone, leading to a fire hazard and the need for a doubled or oversized neutral to counter this. <S> (They can also overheat the common delta-wye type of power distribution transformer, but that's neither here nor there for this problem.) <A> ADDITIONAL INFO L1/L2/L3 do not have to be equal. <S> I know a building with say 3 flats can be connected to "outer world" with 3 phase cable and each phase is connected to different flat. <S> In such scenario phase currents differ but neutral is common. <S> Doesn't neutral wire conduct sum of all 3 phase currents then? <S> The individual neutral connections to each flat carry the return current back to the building distribution board. <S> If the loads are exactly balanced the neutral currents will be 120° out of phase with each other and will sum to zero. <S> The ammeter shown in Figure 1 will read zero. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> Alice, Bob and Charlie's flats are fed from different phases from the building incoming three-phase supply. <S> On the other hand, consider the three flats situation when Alice and Bob are on holiday and Charlie's is the only occupied flat. <S> Normally their loads would balance reasonably well but now Charlie is the only one pulling power from the phase that he's connected to. <S> All his current must return to the transformer via the neutral wire. <S> Therefore: The reading on AM1 will be non-zero. <S> It will be the same as the current into his flat from L3. <S> Assuming the L3 wiring is rated for maximum capacity, the neutral wire must be the same gauge as the supply phase wire as it will have to carry the same maximum current. <A> For 3-phase power normally there is NO current on the neutral wire if phase currents are equal. <S> There is only current on neutral if a phase is not the same current as the other 2 phases. 
<S> I have done several installations of main service panels where 2 neutral wires were run, also known as a 'double-neutral'. <S> This is to account for some imbalance in the loading of the 3 phases. <A> Since the phase currents are shifted by 120 degrees, it is a little different. <S> If the load is symmetric, then zero current will flow in the neutral. <S> If only one phase is connected, then the neutral current will be the same as the phase current. <S> This is also the max current that will flow in the neutral, so if the neutral has an equal cross section to the phase it is properly sized. <A> The currents in the three phases are 120 degrees out-of-phase with each other. <S> If the three currents are equal, the sum flowing in the Neutral wire will be zero due to the phase differences.
Assuming that the load is non-harmonic-generating, this means that the neutral current equals the load on the circuit, allowing for equal size hot and neutral wires to be used, and the breaker on the hot side to protect the entire circuit's wiring adequately.
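The phasor argument above — balanced fundamentals cancel in the neutral, triplen harmonics add — can be checked numerically. The current amplitudes are illustrative:

```python
import cmath
import math

def neutral_current(i_phase, harmonic):
    """Magnitude of the sum of three equal phase currents for a given harmonic.

    The phase shift between conductors is 120 degrees at the fundamental and
    scales with the harmonic number, so triplen harmonics end up in phase.
    """
    total = 0j
    for k in range(3):
        angle = harmonic * k * 2 * math.pi / 3
        total += i_phase * cmath.exp(1j * angle)
    return abs(total)

fund = neutral_current(10.0, 1)  # balanced fundamentals: sum is ~0 A
trip = neutral_current(1.0, 3)   # 3rd harmonic: all three add, giving 3 A
```

This is why a circuit feeding harmonic-rich loads can see a neutral current larger than any single phase current, motivating the doubled or oversized neutral mentioned above.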
Cooling an outdoor electronics enclosure I built an outdoor enclosure which contains electronic devices (cable modem, router, battery backup, MCU, etc) for a long-distance internet connection . The enclosure is made of insulated metal panels (approx R-15 insulation value). Volume of enclosed space is approx. 3 cubic feet. I currently have a squirrel-cage blower pushing hot air out the top, with a cool-air vent at the bottom on the opposite side, to generate cross-component air flow. However, when the sun hits this enclosure from about 10am until 2pm, it's heating up to 105+F. (Outdoor ambient is about 85F, 50% RH) The outside of the enclosure, which is painted dark green to blend with surroundings, is too hot to touch after 10am. I am looking for suggestions to cool this enclosure to 80F or less, while hopefully keeping costs at $50 USD or less. I realize miniature thermoelectric coolers exist, but they are cost-prohibitive, starting at around $1,000 USD. There is a 110VAC, 20A circuit available in the enclosure. I can't paint the unit white, due to aesthetics etc. One possibility is to install a sun shield board a few feet from the enclosure to block direct sunlight. I've also considered building a thermoelectric air conditioner , but I'm not sure how to calculate the BTU output to determine size needed for this space. Any suggestions appreciated! <Q> I would try 3 solutions, possibly 4: 1) Implement a rigid sun shade as described by user113791. <S> 2) Add pvc pipe to intake vent <S> so it is pulling in the coolest air possible, at ground level or below, or from the garage which usually has cool air at ground level. <S> 3) Add extra insulated fan protected from rain that blows air onto the enclosure as broadly as possible to cool off the metal enclosure. <S> These 4 steps combined are low cost and should push the temperature down by a useful amount. <S> Refrigeration would cost so much you might as well move the enclosure indoors. 
<S> NOTES: <S> A) <S> If temperature drops below freezing or dew points become high then condensation of moisture on crucial parts is possible. <S> As long as power is ON and some internal fans run, self-heating should keep moisture from forming internally. <S> B) <S> Under freezing conditions the external fans should be cut off. <S> You can do this with an insulated switch or install thermal cut-offs that open at 45 °F or below. <S> C) Make sure the intake vent cannot pull in rain, heavy moisture or snow. <S> This would corrode exposed contacts and circuit boards over time. <A> Given your fairly restricted circumstances (aesthetics, budget), your best ROI might be to add a matching color sun shade (if it must also be green. <S> Otherwise paint it a more reflective, UV-resistant color) and position it relative to the solar patterns of the hottest 6 months of the year so that you get the actual protection you are needing. <S> One other suggestion: make sure the sun shade does not force rain right onto/into your enclosure and make sure it doesn't become a sail; <S> you might cut a few slits into the protective sun shade to allow wind to pass through. <S> My initial two cents. <A> Think about burying the electronics enclosure in the ground. <S> 1 or 2 meters deep you will get friendly temperatures nearly independent of the sun. <S> Of course you need a really watertight enclosure.
You do not want the fans to be blowing snow around or have power when they freeze and cannot rotate. 4) Add an internal fan to blow air on just the crucial components-those that are damaged by extensive heat.
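The question asked how to size a thermoelectric cooler in BTU. A rough heat-load sketch: add the electronics dissipation to the conduction through the R-15 walls, then convert to BTU/h. Every number below (30 W electronics load, ~13 ft² of wall area for a 3 ft³ box, 25 °F temperature difference) is an illustrative assumption, not a measurement from the post:

```python
def conduction_watts(area_ft2, r_value, dt_f):
    """Heat conducted through insulation, converted from BTU/h to watts.

    US R-value units are ft^2 * degF * h / BTU, and 1 BTU/h = 0.293 W.
    """
    return (area_ft2 * dt_f / r_value) * 0.293

electronics_w = 30.0                      # assumed modem + router + MCU dissipation
wall_w = conduction_watts(area_ft2=13.0,  # rough surface area of a 3 cu ft box
                          r_value=15.0,
                          dt_f=25.0)      # ~105 F skin vs 80 F target inside
total_w = electronics_w + wall_w
btu_per_hr = total_w / 0.293              # cooler rating needed, in BTU/h
```

Note the wall conduction is small compared to the electronics' own dissipation here, which supports the shade-plus-airflow approach in the answers: blocking the solar gain and moving the internal heat out is cheaper than refrigerating it.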
How does electric current flow through a potentiometer? I'm very new to electronics so I'm still grasping some of the fundamentals. I'm trying to figure out exactly how a potentiometer works and how the current flows through a circuit with a potentiometer in it. In the picture, if I move the wiper (B) down - further from A - will the resistance increase (and hence voltage drop increase) between A and B meaning a lower voltage will be supplied to the power amplifier. I'm trying to visualize the path that the current is taking within this circuit. Does the current flow as I have drawn on the picture in blue? <Q> In the picture, if I move the wiper (B) down - further from A - will the resistance increase (and hence voltage drop increase) between A and B meaning a lower voltage will be supplied to the power amplifier. <S> Yes. <S> Perhaps it would help to consider the input impedance of the power amplifier to be very large (much greater than the potentiometer). <S> As such, any small amount of current will develop a sizable voltage (labeled above as Vin). <S> To control this we could place a potentiometer (consider points A and B only and disconnect point C from ground) in series between the preamplifier and power amplifier. <S> However, since the resistance of this potentiometer (50Kohms) is likely much smaller than the input impedance of the power amplifier, the effect of turning the potentiometer is negligible. <S> So, instead, let's set up a path for the current from the preamplifier with a lower resistance. <S> We can use the potentiometer (now consider points A and C) for this lower resistance path. <S> Now, if the wiper of the potentiometer was moved all the way to the "A" side of the potentiometer, Vin would be about (ignoring the impedance of the power amplifier's input) <S> the current from the preamplifier times 50Kohm. <S> This is our maximum volume setting. <S> Let's call that Vmax. 
<S> If the wiper were moved half way between the "A" and "C" sides of the potentiometer, Vin would be cut in half. <S> That is: Vin = Vmax * (25Kohms / 50Kohms) = Vmax * 1/2 <S> And if we move the wiper all the way to the "C" side, Vin would be zero: Vin = Vmax * (0ohms / 50Kohms) = Vmax * 0 <S> This would be the minimum volume setting. <S> You shouldn't hear a thing out of the power amplifier. <A> You've nearly got it. <S> The part you are missing is that in a good design the input impedance (or resistance, if you like) will be much higher than that of the potentiometer. <S> The result is that most of the current flows through the potentiometer and the amplifier just "listens-in" on the potentiometer. <S> The effect is that the amplifier lightly-loads the potentiometer. <S> If the amplifier impedance is low relative to the potentiometer you might find that the output is lower than expected when the potentiometer is turned down. <A> The full audio signal voltage amplitude appears across A-C. <S> If you insist on analyzing current flow, then essentially all of the current is flowing from A to C through the potentiometer. <S> The audio signal current is actually heating up the potentiometer, but so slightly it is probably not measurable. <S> When B (the wiper) is all the way up at the top (A) end, the signal at B is essentially the full amplitude of the preamp output. <S> When the wiper (B) is down at the C end (ground), then the signal at B is essentially zero. <S> Your circuit demonstrates that potentiometers used as "volume controls" don't really operate in "current mode". <S> Because the input impedance of the destination Power Amplifier is likely much (5-10X or more) higher than the impedance of the potentiometer, we don't analyze a circuit like this as "current flow", but rather as signal amplitude voltage. <S> to the "A" end (full signal amplitude). 
<A> A simplified circuit can be shown as follows. Resistor <S> dR is in series with the set of resistors <S> Rt-dR and Rin of the amplifier. <S> The input impedance of the power amplifier can be neglected as it is comparatively large, simulate this circuit – <S> Schematic created using CircuitLab <S> If the sliding contact is at A, <S> dR becomes equal to zero. <S> Hence Rin = <S> Rt-dR. <S> As the input impedance of the power amplifier is quite high, the current in the Ramp branch will be very low, hence Ramp has no significance and can be neglected. <S> It can also be seen that by moving the contact towards A the resistance <S> dR is reduced, which reduces the voltage drop, hence the voltage at the contact increases; at A the voltage is max and the volume is MAX. <S> As the contact is moved towards C, <S> there <S> will be a large voltage drop across dR <S> as its value increases, and this will eventually lead to a decrease in the contact voltage. <S> Think of the Rt-dR <S> resistance as a way to shunt current to ground: if its value is decreased then more current is shunted to ground, and vice versa.
Generally speaking, a potentiometer is used to "sample" the signal anywhere between the "C" end (ground = no signal) and the "A" end (full signal amplitude).
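The divider arithmetic in the answers above can be checked with a few lines of Python (a hypothetical sketch; the 50 Kohm value and the assumption of a negligible amplifier load come from the answers, not from a measurement):

```python
# Voltage-divider model of the volume pot: output is proportional to the
# wiper-to-ground (B-to-C) resistance. Amplifier loading is ignored, as the
# answers assume its input impedance is much larger than the pot's.
def vin(vmax, r_bc, r_total=50e3):
    return vmax * (r_bc / r_total)

print(vin(1.0, 50e3))  # wiper at A: full signal
print(vin(1.0, 25e3))  # wiper half way: half the signal
print(vin(1.0, 0.0))   # wiper at C: silence
```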
What is the purpose of a tri-state pin in an Oscillator I'm trying to interface this clock ( 32.768kHz Ceramic Surface Mount Crystal Oscillator datasheet ), but I'm confused as to what to do with the tri-state pin. What is its purpose in the oscillator and should I care about it? Can I just leave it unconnected or pull it to ground? <Q> There is a line in the Parameters table on p.1 of the datasheet, which describes the function of the Tri-state. <S> It's a kind of an enable-disable pin. <S> If the Tri-state pin is logic "1", then the oscillator is connected to the output pin. <S> Same happens if Tri-state is left unconnected (there's probably an internal pull-up). <S> If the Tri-state pin is logic "0", <S> then the oscillator is not connected to the output pin, and the output is floating. <S> The fact that it's floating may be useful if you need to switch between several clock sources. <S> You would enable one of them, and Tri-state the others to prevent contention between multiple outputs. <S> edit <S> : Here's an example where an enable/disable with tri-state is used with an external clock source. <S> When connecting an external clock source to J1, the J2 jumper should be installed. <S> The resistor R1 protects from a direct short between clock outputs. <S> (schematic from p.22 in this user guide and datasheet for that oscillator ) <A> There are already a slew of answers that explain the functioning of the enable pin on the oscillator part. <S> Let me provide some reasons that pin can be useful in real world situations. <S> Sometimes a circuit board is tested with an automated test fixture with pogo pin test points all over the board. <S> In such test environments the clocks for the circuit are often supplied from the tester equipment so it is necessary to shut off on-board oscillators so that the tester clocks can drive the board. <S> In similar test situations like #1 above the test fixtures with pogo pins have 100's of long wires connecting the test fixture to the test equipment.
<S> On-board oscillators can create a problem with overall noise if they are allowed to run. <S> A test point access to disable the oscillators can be beneficial to shut the oscillator off. <S> Some board designs may have logic tied to the oscillator output. <S> That logic consumes power when the clocks are running. <S> If the design is one that needs to save power by going to sleep for periods of time a GPIO can be attached to the oscillator enable to allow it to be disabled when entering the sleep state. <S> There are some complex ICs that have multiple power rails that must be sequenced on and off to ensure proper operation. <S> There are cases where sequencing requirements also require that clocks to the part are held off until the appropriate point in the power sequence. <A> When the tri-state pin is High or not connected, the oscillator will produce its normal output. <S> If the tri-state pin is connected to Ground, the oscillator output will be high impedance, effectively disabling the output. <S> This is described in the second-last line of the "Standard Specifications" table. <S> You might want to use this feature if you wanted to switch between two or more clock sources. <A> It's an enable pin. <S> From the data-sheet. <S> Tri-state function: Logic "1" or open: Oscillation Logic "0" (VIL < 0.8 Vdc): <S> Hi z <S> Ground the pin, and the Oscillator stops ticking or the output is disabled. <S> Same difference in practice, but without the warm-up instability time that it would take if power was cut instead. <S> If you don't need to use this feature, the normal usage is to tie it high, or leave it open/unconnected, for the oscillator to keep clocking. <S> The use of the phrase "Tristate" here is odd to the point of obfuscation, but seems technically correct. <A> The specification you pointed to says: Logic "1" or open: Oscillation Logic "0" (VIL < 0.8 Vdc) : Hi z <S> So, if pin 1 is unconnected or connected to Vdd the device produces a clock.
<S> If pin 1 is grounded the device's output will look like an open circuit. <A> Figure 1. <S> Extract from datasheet. <S> The term "tri-state" is used to describe outputs which can be: <S> Low / off / 0 <S> V. <S> High / on / <S> V+. <S> Disconnected or floating. <S> Hence there are three states. <S> In this case the datasheet shows us that the chip will oscillate when the tri-state pin is at logic 1 or open circuit. <S> If it is pulled to logic 0 (less than 0.8 V DC in this device) <S> the output will be disabled and present a hi-z (hi impedance) to the rest of the circuit. <S> i.e., It will appear to be disconnected.
Clocks provided by oscillators can use the enable pin to hold the clock off until the right point in the power sequence.
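The enable behaviour from the datasheet table can be summarised as a small truth-table model (Python used here purely as illustration; the pin states are the ones quoted in the answers above):

```python
def oscillator_output(tristate_pin, clock_high):
    """Tri-state pin at logic '1' or open: oscillation; at logic '0': Hi-Z."""
    if tristate_pin in ("1", "open"):
        return "HIGH" if clock_high else "LOW"
    # Output floats, e.g. so another clock source can drive the same net.
    return "Hi-Z"

print(oscillator_output("open", True))  # internal pull-up keeps it enabled
print(oscillator_output("0", True))     # output disconnected
```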
Altium PCB Layout: The Difference of Through Via, Micro Via, and Buried Via This is my very first time doing a 4-layer PCB layout. What is actually the difference between Micro Via and Buried Via? Say I have Layer 1, Layer 2, Layer 3, Layer 4. What I know is that I use the Through Via with Layer 1 as start layer and Layer 4 as stop layer. And how do I use the Micro Via and Buried Via? Last question, what about the Drill Pair Properties, should I connect every layer there? <Q> Blind vias connect an exposed surface with an inner layer, but do not go through the entire board. <S> Buried vias connect inner layers with each other, and do not extend into the top or bottom layer. <S> Microvias are drilled with a laser instead of a regular drill bit, usually in a process step before the layers are laminated together. <S> While there is no technical reason why blind and buried vias couldn't be drilled regularly, their main use case is in BGA breakout, where high density is important, and mechanical drills would be too fragile. <S> Thus, blind and buried vias are microvias. <S> A four layer board is constructed by laminating two thinner two layer boards together, and connecting these together. <S> Blind vias can then be placed between layer 1 and 2, and between layer 3 and 4, and do not affect the other half of the board. <S> Buried vias become possible starting from four layers on -- 1-2, 1-2-3, 4-5-6 and 5-6 are possible options for blind vias, and 2-3 is a possible option for a buried via. <S> It is perfectly possible to use a microvia through all layers as well. <S> Drill pair properties define which layers are manufactured together. <S> Usually, 1-2, 3-4, 5-6 etc. <S> go together, unless you have a strange stackup. <S> In general, try to make microvias only between drill pairs, as manufacturing can become more expensive if there are additional drilling and plating steps even for the in-between substrate (here, normal vias should probably work fine).
<A> One thing about microvias is that each via connects two adjacent layers, only. <S> The laser cannot punch through more than two layers. <S> To connect layers that are not adjacent, or several layers, you need to set up staggered or stacked microvias. <S> Stacked are just that: stacked on top of each other. <S> Staggered are placed next to each other but not on the same layer. <A> If you route a 4 layer PCB with 2 layers for signals and two layers for GND and VCC, you will not need buried vias when the two signal layers are top and bottom layer and the power layers are between them. <S> Ask your manufacturer about the cost of normal vias and blind, buried and micro vias. <A> To add to Lars's comment. <S> Some manufacturers will support the use of "skip" and "core" vias. <S> This is a type of via of which the hole is filled with copper or epoxy, which connects tracks on 2 different layers that don't have to be adjacent or close to each other.
Using microvias and restricting the layers touched by the via mainly defines which drill file the coordinates go into, and which layers get a copper blob around these coordinates.
Can 50 Arduinos be daisy chained? I have multiple users on seats (50), each with a small keypad for input. I need to collect inputs from all the users; I want to use Arduinos for each user, which will also display responses on a 7 segment display. I intend to connect all Arduinos using I2C but I fear the signal might drop due to long distance and fan-out limitations, so I'm considering daisy-chaining them. Is this a good idea, or is there a better approach to do this? <Q> As the application note posted by RedGrittyBrick says: Because the original I 2 C-bus applications were internal to a piece of equipment, for example in a PC or radio/TV/audio equipment, I 2 C-bus is rarely considered for systems when long distances with large numbers of drop-off points are required. <S> The solution in the application note is to use specific driver circuits to convert the signals into something that can be driven over longer distances. <S> This is a tried and trusted standard for bidirectional transmission over long cables. <S> It uses a normal UART on your Arduino, and the driver circuits can be found in many shapes and forms. <S> If you don't want to make your own PCB there are adapter boards available that take a TTL RX/TX signal from a UART and convert it to RS-485 levels. <A> If you're not particularly tied to the idea of using Arduinos for this, you could try some other microcontroller boards. <S> I'm quite fond of the various boards that are based on the ESP8266 chip; these would have the advantage that they have an integral wireless networking connection so they can all talk directly to your central system that stores the input. <A> Have you considered arduino Ethernet with PoE?
<S> It solves the issue of powering the devices and allows 2-way comms <A> There is no limit to the length of the chain you can achieve if each board regenerates the signal it passes on, however each node will add some delay before it passes on a message - in the simplest implementation, the per-node delay might equal the transmission time of the message contents. <S> It sounds however like you may need to pass messages in both directions along the chain. <S> The challenge in that would be the presence of only one hardware UART on each board. <S> You can augment that with a software UART, though to get more than one of those you have to use a more sophisticated implementation than the default which ships with the IDE. <S> Or if you can keep any on-board USB-serial out of the way, you can use the hardware UART to communicate in one direction and the software one in the other. <A> You can overcome I2C limitations by using the DS28E17 1-Wire to I2C bridge. <S> Of course, the solution will depend on your budget and the required bandwidth.
If the distance between your nodes is not great, and you do not need to get particularly rapid communication, a daisy-chain where messages are propagated from one to the next via serial UARTs is likely one of the few things you can do without adding hardware. Since you will require driver circuits anyway, let me instead suggest that you take a look at RS-485.
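To get a feel for the per-node delay mentioned in the daisy-chain answer, here is a rough back-of-the-envelope calculation in Python (the 10-byte packet size and 9600 baud are hypothetical example values, not from the question):

```python
# Store-and-forward daisy chain: each node re-transmits the whole message,
# so per-hop delays add up. Assumes 8N1 framing, i.e. 10 bits per byte.
def chain_latency(nodes, message_bytes, baud):
    per_hop = message_bytes * 10 / baud  # seconds per re-transmission
    return nodes * per_hop

total = chain_latency(50, 10, 9600)
print(f"{total * 1000:.0f} ms end to end")  # ~521 ms for 50 seats
```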
240 V instead of 230 V on a true RMS multimeter I'm a beginner in electronics, and I've recently got myself a UNI-T UT61E multimeter, which is marked as true RMS. I stuck its probes into a mains socket and I was expecting 230 V (Europe/Poland) but the DMM showed 240 V. Is it possible that my meter is wrong, or is it a consequence of a true RMS measurement? I don't have any voltage standard nearby to check the meter. <Q> Is it possible that my meter is wrong or is it a consequence of a true RMS measurement? <S> It shouldn't be an issue. <S> The sine wave purity of most "mains" supplies is usually pretty good and this will be registered correctly on an RMS meter anyway. <S> Cheaper (rectifier type) meters will start to show a discrepancy as the distortion increases but this would be hardly noticeable given the "very reasonable" quality of most AC supplies but this doesn't apply to a lot of battery powered (solar) inverters. <S> It might be a 4.3% error of measurement or it might be that your AC supply is not quite exactly right. <S> Read the data that came with your multimeter to find out. <S> According to this wiki page an over-voltage or under-voltage is classified at the 110% and 90% level. <A> The European standard is now 230V +/- <S> 10%. <S> So anything between 217 and 253V is acceptable. <S> Conveniently, that almost exactly matches anything within the old 220V +/- <S> 6% previously used by some countries in Europe and the 240V +/- <S> 6% used by others. <S> Here in the UK, most substation transformers are still wired for 240V. <S> So it's actually quite unusual to see a 230V supply here. <S> I'm currently getting about 243V here. <A> I think 240 Volts is well within the accepted tolerance - it is only 4.3% greater than 230 Volts. <S> The AC line voltage will vary over time, depending on loading.
<S> The voltage will likely be lower around dinner time, when many people are using electric stoves, and higher mid-day, when everyone is out at work, so very little power is used. <A> You say you're using a UNI-T UT61E meter which, according to its manual , is a 22000 count meter with 1.2% basic accuracy + 10 counts of error on the 750 VAC range, which is the one the meter would use for that measurement. <S> This comes from page 48 of that manual. <S> Although that meter claims to be a 22000 count meter, it is only that good on the lower ranges. <S> It only has 0.1 V resolution on the 750 V range, which effectively makes it a 7500 count meter on that range. <S> That resolution spec means the reading could be off by as much as 1 V and still be in-spec by that error source alone. <S> But the bulk of the error comes from the basic accuracy spec, which adds another 1.2% of the measurement to the error spec, or 2.88 V in this case. <S> That means your meter would still be in-spec if it gave a value ±3.88 V of the correct value. <S> That means your wall power is within its proper range, unless your meter is very badly out-of-spec.
Given the 230 V ±10% value from other answers, your wall voltage could be as high as 253 V, while your meter's 240 V reading implies a true value of no more than about 244 V, rounded off.
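The ±3.88 V figure quoted above follows directly from the meter spec; a quick Python check:

```python
# UT61E, 750 VAC range: error = ±(1.2% of reading + 10 counts), 0.1 V/count.
def error_band(reading, pct=0.012, counts=10, resolution=0.1):
    return pct * reading + counts * resolution

err = error_band(240.0)
print(f"A 240.0 V reading is in spec within +/- {err:.2f} V")  # +/- 3.88 V
```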
A mosfet switch for PWM signal I was wondering if my circuit will work. The 5V signal is coming from a microcontroller, based on a condition. If it's 1 (5V) I want the PWM signal to be received by the motor. If it's 0, the motor shouldn't get any signal and thus not rotate. What specs of the mosfet should I look into? Thanks! EDIT: I think I am using the wrong term for "motor", I am sorry, I don't know what else to call it. I will be using it to either drive an ESC or a servo motor. The signal comes from the PWM output of Erle Brain 2, which has a 25 mA current sink capability at 5V. P.S. Thanks everyone for your inputs! Really appreciate it! :) <Q> Forgive me if you already know this, but the configuration for your Nchan is a source follower, which is typically used as a current buffer. <S> More plainly, a circuit which will produce the same signal (minus VGS) on the source that is on the gate, but with more current, which is limited by what the Nchan can supply (drain to source) and also limited by the capability of the power supply, which would normally be coupled to the drain. <S> In this case you substituted a PWM signal for the power supply. <S> What you are doing is using an Nchan as a gate, to "gate" an on/off signal from a micro-controller to pass a PWM signal to a motor. <S> First of all, this circuit will work, however this depends on the input current requirement of the motor at the PWM input, and also, what kind of Nchan you use. <S> There are basically two types of Nchans, which are lower power, and higher power. <S> I would suggest you NOT use a higher power Nchan because the Vgs turn on voltage is usually in the neighborhood of 2VDC. Using <S> something like a 2N7000 would work (Vgs turn on = 0.7VDC), however it can only supply a maximum of 200mA from drain to source. <S> If the PWM input of your motor requires more than that, this approach will not work. <S> Specifically I suggest using an AND gate.
<S> Put your PWM signal into one input of the AND gate, and the on/off signal from the micro-controller to the other input of the AND gate. <S> The motor's PWM signal is essentially a digital input (1 or 0). <S> It's just the duration between the 1 and 0 change (duty cycle). <S> If you choose this approach, please be cautious about choosing an AND gate that can supply a sufficient amount of current to the motor's PWM input. <S> Lastly, if you stick with using an Nchan, I would be cautious about swapping the PWM and on/off signals to the Nchan. <S> If you choose to do that, please make sure your micro-controller's output can supply enough current to the motor's PWM input. <S> Most micro-controllers (Arduino) supply very low output currents. <S> Hope this helps. <A> It should work, but I would switch the on/off and PWM signals the other way around. <S> So PWM is on the gate and on/off on the drain. <S> Regarding the mosfet, it looks like that line does not have high current consumption? <S> If that is the case, then any N-channel mosfet with a threshold gate voltage low enough (<= 2V) would do the job. <A> That's not how motors work. <S> They don't have a control input like that. <S> A motor with a specific controller could be lumped into a box and have three terminals, but then you no longer have a "motor". <S> In that case, it depends what the specs of the control input are. <S> Otherwise, what you need in between depends on what exactly the controller input requires. <S> In any case, using a FET in the way you show is iffy at best. <S> There will be some voltage drop from the gate to the source. <S> We can't tell without specs whether the result can still be reliably detected as a logic high by the controller. <S> It seems what you really want is an AND function. <S> This can be done by an AND gate, but much better would be to perform it as part of the programmable logic in the micro. <S> Shut down the PWM output in the micro when you want the motor off.
If it's an ordinary digital input, then you don't need a transistor between the microcontroller and the motor controller at all. I would like to suggest you use a logic gate instead of an Nchan.
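The AND function recommended in the answers is trivial to express, which is why doing it in the micro's firmware (as the last answer suggests) is attractive; a minimal Python model of the gating:

```python
# Gate the PWM with the enable line: the motor/ESC input only sees the PWM
# waveform while 'enabled' is true, otherwise it sits at a constant low.
def gated_pwm(pwm_high, enabled):
    return pwm_high and enabled

print(gated_pwm(True, True))   # PWM edge passes to the motor
print(gated_pwm(True, False))  # output held low, motor idle
```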
Measure delay of sound in wood with piezo sensor without an oscilloscope First of all, excuse my layman terminology but electrical engineering is not my speciality. I am currently trying to measure the delay of a sound wave travelling through different pieces of wood. I have already read on here that this is obviously possible with two piezo sensors (and a knocking device, e.g. a hammer) and an oscilloscope. Since I'm on a tight budget I am not willing to buy one just for this experiment. Now I was wondering if I would get accurate measurements if I was to use a Raspberry Pi and an ADC (I have a MCP3008 lying around) as well as two piezo sensors. Is the sampling rate high enough to measure delays in the µs range? If this is not possible, I would appreciate other low-budget suggestions. I was maybe thinking of a circuit which first subtracts the two signals which I can then convert via my ADC. <Q> Rather than building counters and threshold devices, if your software foo is up to it and you have a PC, then using a sound card oscilloscope or a recording application like Audacity (which can show you waveforms just like a 'scope) could give you all you need. <S> Building a thresholding device to get a nice logic edge from the sensors is a job that will probably need an oscilloscope to be successful. <S> At most, all you need is a pre-amplifier per channel, but with a sufficiently sensitive input, even this might not be required. <S> Connect the two sensors to the two input channels. <S> As soon as you hook the sensors up to the PC, you can make a recording and see what you have. <S> As you would be recording analogue waveforms, your temporal resolution is not limited to the sampling rate. <S> Programs like Audacity allow you to shift time waveforms with sub-sample resolution. <S> Shift one waveform until it lines up with the other, and the shift you needed is the answer. <S> Often PC hardware is capable of higher rates than 48kHz, 192k is not uncommon. 
<S> Using an existing PC and a free program like Audacity is about as low budget as you can get. <A> Per the datasheet, the max sampling rate of the MCP3008 is 200 ksps. <S> 1/(200kHz) <S> = 5 us. <S> That means, at best, you will have a temporal resolution of 5 us. <S> I suspect this is much too slow for your purposes. <S> However, you don't need an ADC, and you might be able to do this with a microcontroller. <S> The Raspberry Pi is not a great solution for this because it is not really designed to operate in realtime. <S> Basically, by designing an amplifier/peak detector circuit, you can convert the analog-ish piezo signals into digital peaks, and use those to trigger interrupts on a low-cost microcontroller. <S> Depending on the speed of the microcontroller, I'd expect you'd be able to get a temporal resolution of at least 1 us, and likely much faster. <S> You could also design a counter circuit with discrete logic that would run very fast. <S> Basically, use a free-running counter, with one piezo triggering reset and one piezo triggering a capture (or a stop, for that matter). <S> Running the counter at say, 20 MHz, would be fairly easy, giving you a theoretical temporal resolution of 50 ns. <S> Again, convert the piezo pulses to digital, capture the edges with the logic analyzer, and then measuring the delay is trivial. <A> I presume all the wooden blocks will be identically shaped to make good comparisons and, if so, why don't you use the idea behind such musical instruments as the marimba. <S> The wooden blocks resonate when hit and give off a distinctive pitch that is related to the speed at which sound travels through the medium. <S> Use a microphone and a sound card to capture the signal and analyse the fundamental frequency produced. <S> From this calculate the speed of sound in the block.
One final suggestion: there are very inexpensive USB logic analyzers out there with relatively high sample rates that you could also use for this purpose.
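The timing figures quoted in the answers are easy to verify with a couple of lines of Python arithmetic:

```python
# Temporal resolution of the two approaches discussed above.
adc_dt = 1 / 200e3      # MCP3008 at its max 200 ksps -> one sample every 5 us
counter_dt = 1 / 20e6   # free-running 20 MHz counter -> one tick every 50 ns

print(f"ADC sample period:   {adc_dt * 1e6:.0f} us")
print(f"Counter tick period: {counter_dt * 1e9:.0f} ns")
```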
USB cables combining shield and power ground conductor I see a lot of USB "charging" cables on the market with cutaways showing two small data conductors, one large power conductor, and shield. The implication is that power is conducted in the shield. Is that really appropriate for USB cables? Specifically USB, with its wide variety of devices. I realise that there are plenty of application-specific phantom power configurations where the devices are designed appropriately for that situation. <Q> In fact, in most kinds of shielded cables, that is common practice. <S> There are rather complex "handshaking" protocols established for USB peripherals to negotiate for how much power they require from the host. <S> And for "dumb" hosts like simple chargers, Apple has a scheme for setting the D+ and D- pins at fixed voltages to indicate how much current is available. <S> For example... <A> There should be no problem when power is conducted through the shield. <S> The shielding will most of the time have a significantly higher maximum sustainable current than the very thin wire inside today's average USB cable. <S> So using the shielding for 0 V (black), a thicker wire for +5 V <S> (red), alongside the two thin <S> data transmission wires, is both a cost-efficient and technically sound (appropriate) choice. <A> Many high-current charging devices (both supplier and consumer) detect a ground connection on the shields on the two ends and interpret that to mean that the supply device can and should provide more than 500mA of usable current. <S> These devices also use the shield connection as a second ground wire. <S> So is it appropriate to omit the normal ground wire and use the shield exclusively? <S> It is debatable as to whether or not it is, but there are no popular devices that would operate incorrectly in such a case.
Considering that shielding the data wires is desirable to keep interference away, and considering that shielding is virtually always grounded, it should come as no surprise to find the ground conductor and the overall shield combined.
Do capacitors increase voltage? Just got a (220<-->12-0-12) transformer hooked up with a bridge rectifier and it measured 13 volts DC output from the rectifier, but when I added a 1uF capacitor it just jumped up to 20 volts, and the same reading (20 volts) from a 0.1uF capacitor, how is that even possible!? Note: Nothing is connected in the circuit more than a transformer and a bridge rectifier in the first case, and only a capacitor added in the second case along with the voltmeter. <Q> \$V_{rms}\$ vs. \$V_{peak}\$. <S> The peak voltage is \$\sqrt{2}\approx 1.4\$ times larger than the average (RMS) voltage. <S> If there is no load, it will smooth it out to around the peak voltage of the supply. <S> A 220V AC supply is 220V RMS, which is equivalent to 311V peak. <S> If you bring that down to 12V AC, that is equivalent to about 17V peak. <S> If you add a load, the voltage will drop because the average voltage supplied is the RMS value - the capacitor can't sustain a current at the peak voltage because that would require it to deliver more power to the load than is being delivered by the transformer. <S> The rectified supply is varying between 0 and \$V_{peak}\$, with the average voltage being \$V_{rms}\$. <S> If you are drawing a current, the capacitor will smooth this out to be around \$V_{rms}\$, but if you aren't drawing any current, then it will keep getting "topped up" with charge until it reaches the peak voltage. <S> As a demonstration of this, try putting, say, a 1k resistor at the output of your supply, you should see the voltage drop. <A> The "220VAC" and "12VAC" are RMS ("average") values. <S> That is normally how Alternating Current is measured and specified. <S> By adding rectification and capacitive filtering, you have converted the RMS AC voltage into "peak" DC voltage. <S> This is completely normal and predictable.
<S> For a bridge rectifier and "capacitor-input" filter we typically use the square-root of 2 (1.414) as the multiplier to predict the resulting DC voltage. <S> Of course other factors enter into the equation in Real Life. <S> For example, you won't see the same voltage under load as you are measuring "open-circuit". <A> When speaking of AC voltage, we normally refer to the RMS value - a "sort of" average. <S> The peak AC voltage will be about 1.4 times the RMS value. <S> When you rectify an AC voltage you will get successive half-waves of the sine wave - this should (I think) <S> give an apparent DC voltage about equal to the RMS value of the AC. <S> When you add a capacitor, the capacitor will charge to the peak voltage each half-cycle, and, if there is any load current, will discharge between the AC peaks. <S> With no load, you should measure a DC voltage equal to the AC peak voltage (possibly minus 0.7 volts or so lost in the rectifier diodes). <S> It appears that your transformer is producing a bit more than the advertised 12-0-12 volts - not uncommon if you are not drawing any significant load current.
If you put a capacitor on a rectified AC waveform it will smooth out the supply. As to how you got 20V, either your meter is dodgy (unlikely) or the supply voltage is higher than you thought, or the transformer is not the ratio you think it is.
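The RMS-to-peak arithmetic from the answers, sketched in Python (the ~15 V unloaded winding voltage in the last line is a plausible assumption to explain the 20 V reading, not a measured value):

```python
import math

def peak(v_rms):
    """Peak of a sine wave is sqrt(2) times its RMS value."""
    return v_rms * math.sqrt(2)

print(f"{peak(220):.0f} V")  # ~311 V peak for 220 V RMS mains
print(f"{peak(12):.0f} V")   # ~17 V peak for a nominal 12 V winding
# A winding running a bit hot unloaded (~15.1 V RMS here, assumed) minus
# two bridge-diode drops (~1.4 V) lands near the measured 20 V:
print(f"{peak(15.1) - 1.4:.1f} V")
```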
Electronic switch with negative and positive voltage I want to use a micro-controller to send data through a max232 that will convert the 0v/5v signals into -12/+12v signals. The problem with this IC is that when there's no signal, it sends a +12v signal which I don't want. I want to be able to control exactly when to send a signal. So I would like to use a transistor (or something else) to make an electronic switch and connect an additional pin of my micro-controller to open and close the switch when I need to. I've made a diagram so it makes more sense. This diagram is an idea I've had, but I don't know how to make it work; I read the data-sheet of the 2n2222 transistor and it seems like negative current won't go through. Here's the simplified schema: Basically, it sends the data to the receiver when and only when the Switch is turned on; the rest of the time, the switch is off and no data transits. I'm just looking for an electronic equivalent of that switch that I could control with a microcontroller. <Q> Here is how I would do it (well, I don't like relays, but it's a personal preference...). <S> The upper signal is the +12/-12 coming from MAX232. <S> The lower signal is a +3.3/0V signal coming directly from the MCU to enable the "switch". <S> When the enable signal is high (3.3V), it drives the P channel FET gate low. <S> Then, if the MAX232 signal is high, it will pass (because the mosfet gate sees a positive voltage). <S> If the MAX 232 signal is low (-12V), the gate is not triggered, but the signal will still pass, because of the body diode (there will be a slight voltage drop, but not of big consequences at these levels). <S> When the enable signal is low and the MAX232 signal is high, the output is driven high impedance. <S> There is one big constraint: <S> The MAX232 signal must always be high (+12V) when you disable the switch. <S> Otherwise, if it is negative (-12V), the body diode will make the signal pass anyway.
<S> But this can easily be made sure in the firmware. <S> Note <S> : Circuit has been updated <S> In the previous circuit, I was using a BJT NPN at the bottom. <S> I realized that the negative voltage could have been destructive to the transistor, so an additional diode would have been necessary to protect it. <S> So, actually, I changed it to an NFET, because then, there is no need for such a diode, and the base resistor can also be avoided. <S> So it's only three components, now. <S> And you can even get the Nfet and Pfet in a single package, so that would be two components. <A> You could investigate the possibility of switching off the MAX232 chip with a PNP transistor in the positive supply line. <S> There will be a short delay while the voltage doubler capacitors discharge. <A> It is not clear what your goal is. <S> Assuming it is to place more than <S> 1 RS232 device on one communication line <S> it is assumed you need to tri-state the output instead of driving the output to 0V. <S> Also, it is assumed you have an additional control signal to do this. <S> As going from only 2 states (0V and 5V) to 3 states (0V, 12V and -12V) is impossible without additional information. <S> Given all this, I would suggest you use a dual coil latching relay similar to the G5AK-234P . <S> Only needing to drive 1 of the 2 coils momentarily is a great advantage to this type of relay. <S> Momentarily drive 1 coil to put the MAX232 on line, and momentarily drive the other coil to take it off line. <S> Under normal conditions, the relay will retain the current state even through a power cycle. <S> However, if the assumption made here (putting multiple RS232 devices on a single bus) is true, it is suggested to switch to the RS485 alternative. <S> In the above link it is stated: Multiple receivers may be connected to such a network in a linear, multi-drop configuration. <S> Which realizes connecting multiple serial devices to 1 bus without the need for additional arbitration hardware.
<S> added later... <S> Alternatively, you might consider solid state relays. <S> The specifications and applications need to be looked at much more closely when considering solid state relays as opposed to normal mechanical relays. <S> I quickly found this one, but will leave it up to you to do the research and make the final decision on whether this will work for you. <A> If I understood correctly, you don't want +12V when the input signal is low, right? <S> But AFAIK the MAX232 acts like an inverter: if the input (driver) is 0V <S> the output is +12V, <S> and when the input is +5V the MAX232's output is <S> -12V. <S> Alternatively, if you need a sort of "enable" you could consider using the MC1488 (or 75188) instead of the MAX232.
Maybe you could add an inverter before the max232.
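The pass/block behaviour described in the first answer above can be summarised as a small truth-table model (a toy sketch, not a circuit simulation; the function and argument names are mine, and the body-diode drop is ignored):

```python
def switch_output(enable_high, max232_level):
    """Toy model of the P-FET pass switch described in the answer.

    enable_high: True when the MCU drives the enable line high
                 (which pulls the P-FET gate low).
    max232_level: MAX232 output in volts (+12.0 or -12.0).
    Returns the receiver-side level, or None for high impedance.
    """
    if enable_high:
        # FET is enhanced for +12 V; for -12 V the body diode conducts
        # (minus a small diode drop, ignored in this sketch).
        return max232_level
    # Switch "off": +12 V is blocked, but -12 V still passes through
    # the body diode -- the big constraint called out in the answer.
    if max232_level < 0:
        return max232_level
    return None  # high impedance
```

Running through all four input combinations makes the firmware constraint obvious: the only way to get a true high-impedance output is to disable the switch while the MAX232 output is at +12 V.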
How to remove DC offset of the input signal without using an opAmp? Below is a pulse train with a DC offset: This will go to the circuit below as input. The thing is this works well in simulation, but in real life I need to remove the offset of the input signal. My question is: How can I remove this DC offset without using another opAmp, i.e. just by adding a component to the circuitry etc.? Is there an easy way? <Q> Careful - my answer is only valid for low duty cycle signals as shown above. <S> You can use a simple RC high-pass filter in front of R3, as your comparator input resistance is quite high. <S> A DC offset is basically a voltage with zero frequency, <S> so the high-pass will filter it out! <S> Something like 47n / 220k will do the job (depending on other design considerations the values may vary!) <A> There has been a similar question recently: Solution for adding around 60V dc-offset on digital signal(0 and 5v) of 10MHz frequency <S> Except you want to remove <S> some offset (not add), so <S> the circuit is a little bit different, but the principles are exactly the same (and well described in this other answer - the only difference <S> is that the capacitor is charged when the signal is at high level, not low). <S> Here it is: <S> That will lead to a signal that will swing between 0.5V and 5.5V (due to the diode voltage drop). <S> The advantage compared to an RC filter is that the levels won't change depending on the duty cycle. <A> Well, it's crude and less than 100% accurate, and your results will vary with load, but you might get away with just adding 5 or 6 silicon diodes to the output, or maybe even a yellow LED or similar combination along with one diode. <S> That new "floating" ground would then have to be the reference for any additional circuits using that pulse. <S> And finally, though you said you didn't want to resort to an op amp, consider that 1/2 of a dual op-amp might take the place of the LM317 you're already using.
If you need a little more stability and fine tuned accuracy, an LM137 negative voltage regulator along with a trim pot for setup might offer a way to create a "return path" for the pulse that sits right at the desired offset.
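The 47n / 220k suggestion in the first answer can be checked against the standard first-order corner formula f = 1/(2πRC); a quick sketch:

```python
import math

def rc_highpass_cutoff(r_ohms, c_farads):
    """Corner frequency of a first-order RC high-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# The 220k / 47n values suggested in the answer:
fc = rc_highpass_cutoff(220e3, 47e-9)  # roughly 15 Hz
```

A corner around 15 Hz passes the pulse train essentially unchanged while rejecting the DC offset, which is consistent with the answer's "low duty cycle" caveat: very long flat stretches of the signal would droop through such a filter.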
What is T1 in this circuit for 'Mains power failure alarm'? I know it is a transformer, but is it a transformer that looks like an IC? Or just a regular step-down transformer? (Transformer 12V (1A)). This will be built on a breadboard. Also, can I use 1N4007 diodes for the full-wave rectifier? I would also appreciate any feedback on this circuit. <Q> A little bit of googling would have helped here. <S> Here is the main page for the design: http://www.circuitstoday.com/mains-failure-alarm-circuit <S> And the relevant component choice notes: <S> T1 can be a 230V primary, 6V secondary, 500mA transformer. <S> B1 can be a 1A bridge. <S> You can make the same using four 1N4007 diodes. <S> All capacitors are rated 25V. <S> You can use any general purpose PNP transistor (like BC158, BC177 etc.) as Q1. <A> Yes indeed, that is a transformer. <S> There are no markings to indicate it, but most certainly it is a step-down transformer. <S> You could use the 1N4007 for the bridge as well; in fact it is way over-rated for its use as D1 and D2. <S> Once this power fails the charged capacitor C1 supplies all the current for the buzzer K1. <S> It likely will not last very long. <S> To make it last longer you could use a supercap, but you'd have to regulate the voltage a lot better. <A> Even the smallest power step-down transformer will be significantly larger than an integrated circuit. <S> It is very unlikely that you can find a power transformer small enough to plug into a breadboard. <S> It may even be easier to substitute a 9V "wall wart" power source vs. building the entire power supply from scratch. <S> In most parts of the world it is not difficult to locate a surplus, redundant, or discarded power supply that could be used for this project. <S> The capacitance value of C1 will determine how long the piezo buzzer will sound before running out of energy.
The circuit shows that is a 230V to 6V stepdown transformer. While there are some transformers that are potted in solid plastic blocks, and with pins out the bottom, they can't really be confused with integrated circuits as they are physically much larger and heavier (because of the iron core).
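The hold-up time governed by C1, mentioned in two of the answers above, can be estimated from the capacitor discharge relation t = C·ΔV/I. A rough sketch (all component values here are illustrative assumptions, not taken from the schematic):

```python
def holdup_seconds(c_farads, delta_v, load_amps):
    """t = C * dV / I : how long the capacitor can supply the buzzer
    before its voltage sags by delta_v (constant-current approximation)."""
    return c_farads * delta_v / load_amps

# e.g. a 1000 uF cap, 3 V of usable sag, a 10 mA piezo buzzer:
t = holdup_seconds(1000e-6, 3.0, 10e-3)  # about 0.3 s
```

This back-of-envelope number illustrates why the answers suggest a supercap: electrolytic-sized capacitance gives only a fraction of a second of alarm.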
Precision voltage reference divider vs. constant current source to drive an RTD sensor I need to measure temperature with an RTD between 0-300 °C. I believe I have two options to make this work: A constant current source feeding the RTD A precision voltage reference and a resistor forming a voltage divider with the RTD. It seems like I could use a precision voltage reference with an op-amp constant-current source circuit to complete option 1. I could also use something like an LT3092. This approach seems more complicated than option 2, where I could connect an ADR02 5V reference in line with a 5k 0.01% resistor to form a divider with the RTD. I plan on using a 1k platinum 2-terminal RTD. Which is the better option, and why? EDIT: I've come up with this circuit after reading through the various suggestions. Does it look like this will work? Should this reduce noise reasonably well? The 10V clamp is for protection of the ADC. I think I also need an inline resistor on the output before the clamp, but not sure how to size it. <Q> RTDs are non-linear, so let's just say that using a resistor to feed it adds a bit more non-linearity you have to cope with. <S> But if you are feeding the RTD signal to an ADC then it makes sense to tie the top of the resistor to a reference voltage that is also used by the ADC - this is called a ratiometric measurement and, to a significant extent, that voltage reference can drift this way or that without numerically affecting the ADC conversion number. <S> Using a current source may make things a bit more linear, <S> but you lose the benefit of a ratiometric measurement. <S> Using three more resistors to make a bridge isn't worth the hassle in my opinion - that's three resistors that have to be precision instead of one, and any quarter-bridge circuit will be non-linear anyway. <A> A current source ideally has infinite resistance. <S> A resistor has positive resistance.
<S> There's a third choice - use a current source with a large but negative output resistance, which will linearize the RTD so that it has an 'S'-shaped residual error curve. <S> Probably not worth it these days if you are going into the digital domain anyway. <S> Since a platinum RTD will only change from 1k to about 2.14k over 0°C to 300°C, you will lose about half your resolution if you use a simple resistor. <S> A bridge can offset the voltage at 0°C and give you approximately full resolution. <S> Something like this <S> (you'd want to add other parts in a real implementation, typically for EMI and so on). <S> simulate this circuit – Schematic created using CircuitLab <S> The output voltage is: Vo/5V = R4/(R4+R1) - R3/(R2+R3) = R4/(R4+20K) - 1/21 <S> So the output voltage would be 0 to 245.05mV for 0 to 300°C. <A> If you make your measurement ratiometric, then you can get the best of both worlds (current source, and a "precise" voltage reference). <S> An ADC measurement is a ratio of the sampled voltage in comparison to the reference voltage. <S> The ADC will not care if the reference voltage is 1.0V or 2.424V (assuming you are within the spec of the ADC). <S> You can do the following if you have an ADC with an external ref pin and the ADC can be differential. <S> simulate this circuit – Schematic created using CircuitLab <S> If your current fluctuates, the ratio will always be the same, since the fluctuation will be applied to both Vrtd and Vref. <S> So now you have a "precise" voltage reference for your ADC.
In any case your resistor should be connected to the same reference as your ADC so that reference voltage error cancels out and only the resistor values matter.
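The bridge output formula given in the answer above is easy to tabulate; a quick sketch using the answer's values (R1 = 20k, R3/(R2+R3) = 1/21, 5 V excitation):

```python
def bridge_out(r_rtd_ohms, v_exc=5.0, r1=20e3, ratio_ref=1.0 / 21.0):
    """Vo = Vexc * (R4/(R4+R1) - R3/(R2+R3)), with R3/(R2+R3) = 1/21
    as in the schematic described in the answer."""
    return v_exc * (r_rtd_ohms / (r_rtd_ohms + r1) - ratio_ref)

v_at_0c = bridge_out(1000.0)    # 0 V when the RTD is at its 0 degC value (1k)
v_at_300c = bridge_out(2140.0)  # about 245 mV at the ~2.14k end of the range
```

This reproduces the answer's point: the bridge nulls out the 1k "offset" resistance so the full 0-245 mV swing maps onto the 0-300 °C span rather than sitting on top of a large constant voltage.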
Sensor to rapidly sample light intensity of a small area at a distance? I'm looking for a sensor or method of rapidly sensing light intensity of a 1-inch x 1-inch square of an LCD monitor at a distance of 1-1.5 feet away. Does anyone know if such a thing exists, or has experience with this challenge? I've considered using a standard photocell with some sort of zoom lens, but I don't have enough experience with sensors to put one together, and haven't found anything pre-built. <Q> Buy a cheap webcam with a screw-thread mount for the lens. <S> Cover the lens with something opaque like aluminum foil, leaving a small hole in the foil to restrain the field of view to 1 square inch. <S> Focus the lens at 1.5 feet distance and test. <S> When you're satisfied with the results, you can actually scrap the webcam and put the optical system you've made on a photodiode or a similar light sensor with a desired interface. <A> Avalanche photodiodes have comparatively lower rise and fall times than LDRs; these APDs can be coupled with an SiPM (silicon photomultiplier), which works in the Geiger region to amplify the input through silicon and is the successor of vacuum-tube devices. <S> But I guess these are going to be costly, as the OP seems to be doing a DIY project. Using a lens, the light you can concentrate will be proportional to the aperture of the lens. <S> The attenuation factor of the light can be calculated by various models and the output can be scaled at the photocell; if possible you can also use a photocell just near the LCD surface and compare the results for various distances. <A> An example is a Pritchard Photometer.
If you are looking for a commercial product that will do this job, look for photometers that are made to image the target from a distance.
Why is zero represented by 4mA in 4-20mA industrial control systems? Ideally, to drive a controller the required current must be just above 0 mA. However, practically we consider readings only taken from 4 mA as the valid data sets. Now, my question is: why do we take 4 mA and not 3 mA or 2 mA? Is there any particular reason or is it a randomly chosen point for the sake of an ideal graph? <Q> This allows them to operate with no additional power supply at the far end, saving the extra wiring that would be needed. <S> Often the transmitter will be a pressure sensor, or optical gate, or thermometer. <S> The 4mA is a compromise between low power consumption for the system, and enough power for the sensor to operate. <S> There is no more magic behind the exact figure of 4mA than (say) 240V for mains voltage. <S> It is a reasonable value, which over the course of time has been found useful, so has been supported by many different players, and become a standard. <A> In addition to @Neil_UK's answer, the following extract from Wikipedia's Current loop article may help. <S> (Emphasis mine.) <S> For industrial process control instruments, analog 4-20 mA current loops are commonly used for analog signaling, with 4 mA representing the lowest end of the range and 20 mA the highest. <S> The key advantages of the current loop are that the accuracy of the signal is not affected by voltage drop in the interconnecting wiring , and that the loop can supply operating power to the device . <S> Even if there is significant electrical resistance in the line, the current loop transmitter will maintain the proper current, up to its maximum voltage capability. <S> The live-zero represented by 4 mA allows the receiving instrument to detect some failures of the loop , and also allows transmitter devices to be powered by the same current loop (called two-wire transmitters). <S> Such instruments are used to measure pressure, temperature, level, flow, pH or other process variables.
<S> A current loop can also be used to control a valve positioner or other output actuator. <S> An analog current loop can be converted to a voltage input with a precision resistor. <S> Since input terminals of instruments may have one side of the current loop input tied to the chassis ground (earth), analog isolators may be required when connecting several instruments in series. <S> Note that the live-zero feature can be used in a number of ways: <S> On transmitters: Deliberately sending a current of, for example, 3 mA to indicate a sensor fault. <S> On receivers: If the received signal goes below 4 mA the actuator can move to a preset safe position. <A> I believe that the 4-20mA loop standard long predates electronics that can operate from 4mA, so I think it was simply an arbitrary choice based on older pneumatic control systems that use 3-15 PSI etc. <S> as the signal (note the same ratio). <S> 10-50mA was also used in some cases. <S> The choice of 20% of full scale as the live zero is just an arbitrary pragmatic engineering choice. <S> Of course the live zero allows the receiver to distinguish between a broken wire (or out of range) and 0, just as it can detect >20mA as out of range on the high side. <A> The 4-20mA protocol is designed for communication over long distances. <S> When the signal travels a long distance it is important not to lose accuracy; that is why a protocol based on current rather than voltage is used: over a long distance a voltage signal changes, but the current does not. <S> And the protocol begins at 4mA because we need to distinguish the situation where there is no connection from the situation where the signal is zero. <A> With a 4-20mA current loop it is possible to have signal loss detection. <S> For example, when the cable is cut the signal is 0 mA. <S> This won't work with 0-20 mA or 0-10V signals, since both 0 mA and 0 V are valid readings. <S> You can often find these signals in non-stationary systems where cable wear can be a problem.
<S> Or in high-reliability or safety systems. <A> A current sensor needs to use 4mA for a zero reading rather than 0mA. <S> If 0mA were used for the zero value, there would be no way to detect a sensor malfunction vs. a broken wire. <S> This is a very old technique to read sensors, and at that time the chips which were used to read pressures or other data used to consume 3mA. <S> This was a two-wire protocol, so the current has to be greater than 3mA to make it work. <S> The second question is: why didn't they start with 5, 10, 20 or something else? <S> In this protocol the values change linearly, so they needed something like 4-20mA, 5-25mA, 10-30mA or 10-40mA to make the calculations really easy. <S> The human body can take up to 30mA of current; above that it could damage the heart, so they had to keep it below 30mA. <S> That leaves two choices: either 4-20mA or 5-25mA. <S> In the case of 5-25mA, the 25mA was still really close to the max limit of 30mA, so they went with the 4-20mA standard. <S> https://ncd.io/how-to-read-4-20ma-current-loop-sensors/
In a 4-20mA current loop (which appears to be what you are talking about), the minimum 4mA current is set not for any measurement reasons per se , but to provide a guaranteed operating current for the electronics at the far end of the loop. This allows the receiving end to differentiate between zero measurement, sensor disconnection and sensor fault.
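The live-zero scaling described in these answers can be sketched in software on the receiving side (the 3.8 mA and 20.5 mA fault thresholds are illustrative choices of mine, not part of the standard):

```python
def loop_to_value(ma, lo, hi):
    """Convert a 4-20 mA loop reading to engineering units, using the
    live zero to flag faults (thresholds are illustrative)."""
    if ma < 3.8:
        raise ValueError("loop fault: broken wire or sensor failure")
    if ma > 20.5:
        raise ValueError("loop fault: over-range")
    # 4 mA -> lo, 20 mA -> hi, linear in between.
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

# e.g. a 0-10 bar pressure transmitter reading mid-scale:
pressure = loop_to_value(12.0, 0.0, 10.0)  # 5.0 bar
```

This makes the key point of the thread concrete: 0 mA is unreachable in normal operation, so any reading near zero is unambiguously a fault rather than a valid "zero" measurement.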
Why isn't this opamp working correctly? I am working on an adjustable current source. In a thread awhile back, various circuits were discussed: simple adjustable current source for LED string ... but as I have settled on one option, and it's not working correctly, I'm starting a new thread to focus on my conundrum. Here is the circuit: The resistor divider (30K resistor and potentiometer) provide a reference voltage on 'set' (the DC sweep of v1 just rotates the pot shaft). The opamp should servo 'gate' so that 'sense' equals 'set', and thus the current (in milliamps) pulled through the load 'Rload' equals the voltage of 'set' (in millivolts). Simple as that. The 12v supply which powers the 'set' circuit and the opamp is a 7812 powered off the 24v supply. And the mosfet is actually a FQP10N20C (a fairly vanilla power nfet). I've simulated with LTspice and it behaves as I'd expect. But on the breadboard, as 'set' is increased from 0 to about 400mV, 'sense' tracks 'set' less and less well. At one point I'm seeing 257mV on 'set' but only 226mV on 'sense'; so only 226mA is flowing through Rload and R1. 'Gate' is at 3.53V and 'down' is at 11.7V. If one just examines the opamp in isolation, it seems that 'gate' should be driven higher (until, presumably, at some point enough current flows that 'sense' equals 257mV). The opamp is meant to be used with a single-ended supply, and should easily be able to drive its output above 3.53V (with a 12V supply voltage). The gate of the FET should not be sinking any current (verified with meter). I'm stumped. Datasheet for the opamp (LT1006) <Q> The problem is evidently that there is some sort of oscillation on the output of the opamp. <S> Putting a 10uF capacitor on the 'gate' node more or less fixed the problem, but putting a 1K resistor between the opamp output and the fet gate doesn't help much. 
<S> I'm now seeing no more than about 7mv discrepancy between 'sense' and 'set', over the whole current adjustment range (now 0 to 300ma) and a voltage (required to drive that current through the load) between about 3 and 23v. <A> I only saw this question just now, and your answer that the opamp was oscillating. <S> That was my first guess from the schematic and the symptoms. <S> However, I don't like the way you fixed it. <S> It may not work with the same model opamp from a different batch or some future batch. <S> A better solution is to put a little resistance in the feedback path, between the top of the current sense resistor and the negative opamp input. <S> Then add a small compensation capacitor directly from the opamp output to the negative input. <S> The cap provides immediate negative AC feedback to keep the amp stable. <S> The resistor raises the impedance of the signal so that the cap can have some effect without having to be too big for other considerations. <S> Try 1 kΩ <S> and maybe 100 pF. <S> You can use a larger capacitor if response time doesn't need to be fast and you want to err on the side of more stability. <S> Added <S> I hadn't looked at the datasheet of the opamp before, and just answered for a ordinary opamp. <S> The LT1006 is optimized for very low offset voltage and low power. <S> That means compromises were made in other areas. <S> One of those is apparently stability. <S> The datasheet does show the amp used as a unity-gain voltage follower, so it is apparently unity-gain stable. <S> However, look carefully at the typical application schematics on page 11. <S> Note how one has 1 kΩ in series with a 680 nF compensation capacitor, and the other 2 kΩ with 330 nF compensation. <S> This means my guess above of 1 kΩ and <S> 100 pF was way too little. <S> Try a combination more like what they use. <S> Since you've already got 1 kΩ series resistance, try 1 µF directly between the opamp output and the negative input. 
<S> The other thing you need to do is actually look at the signal over time, not its average voltage. <S> Put a scope on it already and see what's really going on. <A> I recently came back to this project after a hiatus, and continued to have trouble with the stability of the opamp. <S> However, I've discovered there's a simpler solution to the problem: the linear regulator LT3080; it essentially integrates the op-amp and power transistor of my original circuit, and seems to be very stable in my testing. <S> http://cds.linear.com/docs/en/datasheet/3080fc.pdf <S> My new circuit is essentially that shown in the figure titled "Low Dropout Voltage LED Driver" on p.17 of the datasheet. <S> But instead of putting a fixed resistor from the SET pin to GND, I drive a variable voltage into the SET pin (one could also use a variable resistor, but a voltage works better for my application). <S> The voltage signal simply needs to be able to sink the 10µA of the internal current source. <S> It works like a charm.
Simply loading the opamp output with a lot of capacitance may work now in this case at this temperature, with this phase of the moon.
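The compensation values discussed in the answer above can be sanity-checked by computing the frequency at which the capacitor's reactance equals the series resistance, f = 1/(2πRC); a quick sketch comparing the two R-C pairs mentioned:

```python
import math

def rc_corner_hz(r_ohms, c_farads):
    """Frequency where |Xc| = R for a series R-C network: 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

f_small = rc_corner_hz(1e3, 100e-12)  # ~1.6 MHz: the "way too little" guess
f_big = rc_corner_hz(1e3, 1e-6)       # ~160 Hz: the heavy, datasheet-style value
```

The four-orders-of-magnitude gap between the two corners is why the answer revises its initial 100 pF suggestion: the LT1006's application circuits compensate far more aggressively than an ordinary unity-gain-stable opamp would need.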
Remove spikes from Load Cell readings I have developed a load indicator interfaced with a load cell to display the value of load. The device is to be used in on-line weighing of a bag that is moving over a conveyor. Due to the mechanical movements there is a spike in the load reading, and in order to remove those spikes I need to implement a software-based filter to stabilize the readings. Can anybody point me in the right direction on where to start? I am using an 8051 microcontroller and an ADS1231 ADC for converting the analog voltage levels to a load value. <Q> You don't need to 'remove the spikes' so much as 'give the right reading'. <S> There are at least two good possibilities for what's happening, and they require different software filters. <S> Then there are bad possibilities, that may require a rethink of the mechanical arrangement. <S> The difference between the two that can be handled in software depends on the sample rate and the sensor bandwidth. <S> If you are sampling above the Nyquist rate for a low-pass bandlimited sensor, and your sensor is linear, then the correct filter is the mean. <S> The spikes are part of the correct reading, and are required to balance the low readings you get either side of the spikes. <S> If you are sampling well below Nyquist on a wideband sensor, and the correct reading is 'most of the readings', then you do indeed need to reject the spikes. <S> This will be slightly biased, but not as much as a mean filter. <S> If you can identify the spikes and remove them from your data set before filtering, then the median will be much less biased, and still less sensitive to any errors in the spike classification process than the mean. <S> If you have a situation which is neither of these extremes, then it will be very difficult by straightforward software filtering to recover the true forces, as you have contaminated the measurement at the sensor.
<A> These things happen in checkweighers all the time, and the common way around this is for the software to simply ignore the glitch, i.e. throw away those readings that look suspicious. <S> A lot of checkweighers also use an optical device to sync the software up with the position of the "thing" to be weighed; thus the software knows when it should be using readings to calculate weight. <S> This problem is usually due to cantilever resonance as the "thing" initially slides onto the edge of the weigh-part of the conveyor. <S> I reckon you should show a picture of weight results versus "thing" position as it passes over the weighing section. <S> This will allow further analysis by me and others. <S> I recommend that you have an ADC rate that would take several tens of readings (if not hundreds) as the thing passes through the weigh section. <A> Use a FIR filter or a moving-average filter: when your bag strikes a photocell, you take an average of the measurements. <S> The averaging time is the time the bag travels on the weighing platform before striking the photocell. <S> Another possibility is to have a FIR low-pass filter followed by an averaging filter. <S> You sample at a high frequency, use the FIR to eliminate HF noise, then pass this filtered signal into the averaging filter. <S> Perhaps you should see the DSP forum. <A> Filtering problems are best handled by first capturing the data that comes in on your ADC and then analyzing the nature of your signal. <S> A graph of your data in a spreadsheet is invaluable for analysis because you can apply proposed filters in the spreadsheet and see what the outcome will be and if it is what you want, thereby saving you a ton of code-and-churn. <S> Once you determine what aspects of the data stream are desired and which are not, you should select and tune your filter (or data validation algorithm) for that application. <S> This is a standard engineering approach to a problem - find, analyze, simulate, implement, repeat until done.
<S> I find that embedded engineers are usually light on "analyze" and forget "simulate" entirely, thereby forming a find-implement-repeat churn. <S> In embedded systems, the first problem is usually getting the data out for analysis. <S> I learned to slip a high-speed data port into prototype designs for the purpose of telemetry to help with these problems, and FTDI is usually what I turned to - you just need a spare UART on the micro and a bit of board space for the FTDI chip and a micro-B connector. <S> You can depopulate the port for production, and if you wire USB power up, you've made your desktop debug kit a bit easier to power up. <S> A lot of your filter options and tuning will depend on the relationship between your sample rate and the duration of the glitch waveform. <S> You have to determine when a glitch is no longer a glitch but a real phenomenon that you want to respond to. <S> I think that if you can see your data stream in a spreadsheet, the solution will quickly present itself.
As long as the number of spikes is well below 50% of the readings, the simplest filter to use is the median, that value for which 50% of the readings are above and below.
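The median approach suggested above is easy to prototype before porting to the 8051; a minimal sliding-window sketch in Python (the window size and sample values are illustrative, and a C port would use a small fixed-size buffer instead of list slicing):

```python
def median_filter(samples, window=5):
    """Sliding-window median: a simple spike-rejecting filter for the
    load-cell readings. Edges use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(sorted(samples[lo:hi])[(hi - lo) // 2])
    return out

raw = [100, 101, 100, 350, 99, 100, 101]  # one mechanical spike at index 3
clean = median_filter(raw)                # spike replaced by a neighbouring value
```

Note the caveat from the first answer still applies: if the spike is genuinely part of the bag's weight transient (sampled above Nyquist), a median will bias the result, and a mean over the settled region is the correct filter instead.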
Issues with NPN Transistors and RGB LEDs (Common Anode) I have a question regarding the use of NPN transistors in an RGB LED (common anode) circuit connected to an Arduino Uno. I have spent the past several months developing LED color control software that communicates over serial to an Arduino board, which then produces color in LEDs over three PWM pins (a fairly common setup). I am now ready to install 10 LEDs instead of the single LED I was using during the testing process. I came across this Fritzing diagram in a solution, and have built the circuit with my NPN transistors and two LEDs to start. Here are my issues: Typically in common anode setups, writing a value of 255 to a PWM pin indicates OFF and writing 0 is full brightness. These controls seem to be inverted now with 0 being OFF. Is this normal behavior? The brightness steps are out of control. Writing 1 to a RGB pin creates a fairly bright light, and anything above 3 is maximum brightness. For my color reproduction to be effective, I need at least 100 brightness steps as I had in my previous setup where only 1 LED was used and no NPN transistors were in the circuit. Have I overlooked some aspect of NPN transistor electronics? I definitely am in need of guidance. I have only gotten into LED electronics recently and have a lot of learning to do. Fundamentally, I need a circuit that will allow 10 common anode RGB LEDs to be controlled as a group by three Arduino PWM pins. <Q> Thanks for not posting the Fritzing diagram as they drive some of the regulars here apoplectic. <S> There's a schematic button on the editor toolbar if you want to sketch something out for appraisal. <S> Typically in common anode setups, writing a value of 255 to a PWM pin indicates OFF and writing 0 is full brightness. <S> These controls seem to be inverted now with 0 being OFF. <S> Is this normal behavior? <S> 0 is normally off. <S> 1 is on. <S> 100% PWM would be "on all the time". 
<S> You can think of it as the % of time power is applied. <S> The brightness steps are out of control. <S> Writing 1 to an RGB pin creates a fairly bright light, and anything above 3 is maximum brightness. <S> For my color reproduction to be effective, I need at least 100 brightness steps as I had in my previous setup where only 1 LED was used and no NPN transistors were in the circuit. <S> Post the relevant bits in your question and be sure to use the "code" tag. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> Q1 lights the red channel. <S> Triplicate for green and blue. <A> Yes, this is normal. <S> An NPN is "positive logic", <S> i.e. 1 is on, by connecting the LED's cathode to ground. <S> If you were directly connecting the LED to your GPIO pin, you would need to provide the ground by writing the pin itself to 0. <S> I would try increasing your current limiting resistor to increase the brightness steps. <A> From what I can deduce you did not take into account that the transistor is an inverting device. <S> This would invert your PWM signals as you indicated, on being off and vice versa. <S> I will venture to guess you connected the test LEDs directly to the Arduino without a driver. <S> I think you can complement your duty cycle in software to solve the problem. <S> You can also invert the output signal with another transistor or a logic gate such as the 74HC04. <S> To get your board space back use a MOSFET; no resistor is needed.
It sounds as though there's something wrong with your PWM output configuration or code.
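The software-complement fix suggested in the last answer amounts to a one-line mapping (a sketch in Python for clarity; the 0-255 range matches Arduino's 8-bit analogWrite(), and the function name is mine):

```python
def duty_for_brightness(brightness, through_npn=True):
    """Map a 0-255 brightness to the PWM duty value to write.

    Driving a common-anode LED straight from the pin, 0 means full
    brightness; through the inverting NPN low-side switch the sense
    flips, so the firmware must complement the value.
    """
    if not 0 <= brightness <= 255:
        raise ValueError("brightness out of range")
    return brightness if through_npn else 255 - brightness
```

So existing code written for direct common-anode drive keeps working behind the transistor stage if every write is passed through this complement.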
Rationale for operating the diode in the (reverse biased) breakdown region For the circuits I've studied thus far involving diodes (which admittedly are not that many), they have been nominally used in the forward-bias mode. For example, the LED only lights up when it's operated in the forward-biased region, and is not designed for reverse bias, let alone the burn-out breakdown region. However, I recently read about the Zener diode, and I found that this particular diode is predominantly used in the reverse-biased, breakdown region, with the following regulator circuit being a popular example: Although this circuit works, why can we not achieve the same voltage regulation functionality by operating the diode in the forward-biased mode, like this: This is the I-V curve I am assuming for the diode: <Q> Three reasons: First, operating in the forward orientation only allows operation at a single voltage, nominally about 0.7 volts for a silicon diode. <S> Diode construction can be tailored to produce a wide range of breakdown voltages, with a consequent choice of different regulator outputs. <S> Second, <S> your V-I curve overstates the sharpness of a forward-biased junction. <S> There is no relatively flat portion other than in the vicinity of zero, and that's not very useful. <S> Third, with an exponential V-I curve, the forward-biased junction cannot be operated at useful current levels with good regulation. <A> The obvious answer is ... that they were designed to do that. <S> Not being facetious here: rectifying diodes are designed for forward operation and high withstand in reverse; regulating diodes are designed for a relatively accurate reverse breakdown voltage to be used in regulation circuits. <S> If we want to continue, you could say that photodiodes are optimized for reverse operation without breakdown too ... <S> In short, a diode is not a diode is not a diode; the type really matters.
<S> Interestingly, there is a very precise mechanism that dictates the operational voltage of a Zener diode. <S> There are other regulator diodes that operate similarly to a Zener but use a different effect. <A> It still has the 0.6-0.7V drop typical of most silicon rectifiers. <S> It's only when reverse-biased that it exhibits its nominally high voltage drop. <A> Because they are manufactured to have a very specific and sharp curve in reverse polarity around a particular voltage value; so while it may have a typical diode forward voltage, it can, for example, break down at around -5V. <S> And that value is the one we use. <S> It is all about the desired properties of your diode. <A> It depends on the application, <S> but I'll give two examples, including the Zener: Zener Diode <S> This is operated in reverse bias because, simply put, that's where the interesting 'Zener' property is. <S> In forward bias, Zeners look like regular diodes. <S> That is to say that their voltage changes a lot with current and isn't very tune-able. <S> That means you can put a lot of current through it and the voltage remains relatively constant. <S> Photodiode <S> These are often operated in reverse when used to measure light input. <S> That's because the IV curve is extremely flat at negative voltages. <S> That means that the negative voltage you apply to the thing can be noisy and unstable and you will still get a steady current for a given light level. <A> On the other hand, Zener diodes break down due to the quantum tunnelling effect, <S> and this breakdown is reversible. <S> That's why Zener diodes can be used in both current directions and the standard diodes cannot. <S> The actual values of breakdown and opening voltages, conductivities etc. <S> define the application of the diode - Zeners are used for stabilisation of voltages near 5 V, avalanche diodes are used to stabilise higher voltages, power (standard) diodes are used in rectifiers and LEDs are used as light sources.
The reverse-bias breakdown is both tune-able (which is why you can buy Zeners in all manner of voltages) and sharp. The rectifying and LED diodes break down thermally and their breakdown is irreversible . Because there's nothing special about forward-biased operation.
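The regulator circuit discussed above hinges on choosing a series resistor that keeps the Zener in breakdown across the load range. The following is a minimal sketch of that sizing step; all component values (12 V supply, 5.1 V Zener, 20 mA load, 5 mA minimum Zener current) are illustrative assumptions, not taken from the question:

```python
# Sizing the series resistor for a simple Zener shunt regulator.
# All numbers below are assumed for illustration.
V_in = 12.0      # supply voltage (assumed)
V_z = 5.1        # Zener breakdown voltage (assumed)
I_load = 0.020   # worst-case load current in A (assumed)
I_z_min = 0.005  # minimum Zener current to stay in breakdown, A (assumed)

# The series resistor must pass the load current plus the minimum Zener
# current while dropping V_in - V_z across itself.
R_series = (V_in - V_z) / (I_load + I_z_min)
print(round(R_series))  # -> 276 (ohms)

# Worst-case power dissipated in the resistor (load disconnected):
P_r = (V_in - V_z) ** 2 / R_series
print(P_r)  # roughly 0.17 W
```

This is exactly the property the first answer describes: the Zener holds its breakdown voltage over a wide current range, so the output stays near V_z as the load varies.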
LM386 no output with input signal below 2V I'm trying to implement volume control using a digital potentiometer (MCP4131) and audio amplifier (LM386). The way I have it set up, I'm outputting square wave audio signals out of a PWM pin on the Particle P1 board. The amplitude of the square wave is adjusted using a digital potentiometer (acts as a moving voltage divider) before it enters a LM386 audio amplifier. Where I'm having trouble is, when the amplitude of the square wave is adjusted below 2V (feeding this into pin 3), the LM386 audio amplifier stops outputting a signal (pin 5). Everything works fine above 2V and I get a good amplified square wave out of pin 5. This is a problem because it limits the desired range of volume control. I'd like to get softer audio as well. I've read the LM386 datasheet back and forth and can't seem to find a specification that would explain this. I think I may be missing something pretty basic... The schematic is attached. Any help would be appreciated. If there are any questions about the set-up, let me know! http://i68.tinypic.com/15grneo.jpg Datasheets:LM386- http://www.ti.com/lit/ds/symlink/lm386.pdf <Q> With the addition of "R17" (C17) before the digital pot, the voltages present on pin 5 of the pot, for any typical input signal, will settle to an average of zero volts since capacitors block DC. <S> So the AC signal being fed into the pot, if it were 0-3.3v from a microcontroller, is now -1.65 to +1.65v, which is outside the pot's allowed range. To quote the MCP4131 Datasheet, The terminal A pin does not have a polarity relative to the terminal W or B pins. <S> The terminal A pin can support both positive and negative current. <S> The voltage on terminal A must be between Vss and Vdd. <S> So you are placing as low as -1.65v on a wiper pin with these maximums as defined in the datasheet: <S> Voltage on all other pins (PxA, PxW, PxB, and SDO) with respect to VSS .................. -0.3V to Vdd + 0.3V <S> Try it without C17 and it will likely work. 
<S> If it still misbehaves, then keep C17, but bias pin 5 (add a resistor divider to it from +3V3 to Vss) to around +1.65v or slightly over. <S> Doing so should prevent the signal from going negative into the pot. <A> Have you looked at the signals with an oscilloscope? <S> (R??? <S> It's a capacitor!). <S> The potentiometer nodes can not be outside of the potentiometer supply voltages, and when you block the DC with the capacitor, this is exactly what you're getting. <S> Since you are now operating the potentiometer outside its specifications, it can act up in unpredictable ways, but it's likely that you're just getting half the expected voltage out. <S> I suggest that you simulate your setup, and observe the voltages involved at any node. <S> They should not go below zero for the circuit to operate within specifications. <A> Ok, here's a different answer: it is because your R17 is a resistor and not a 1 uF capacitor like you show in your schematic. <S> I bet if you change it back to a 1 uF capacitor it will work fine. <S> Let me know if that was it <S> and then I can explain in more detail.
I think your problem is on the input side, more specifically R17
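The numbers behind the accepted answer are easy to check: a 0-3.3 V signal re-centred around its average by a series DC-blocking capacitor swings well below the MCP4131's absolute-minimum terminal voltage. A small sketch (signal levels taken from the question, the -0.3 V limit from the datasheet quote above):

```python
# Why the DC-blocking cap before the MCP4131 causes trouble:
# a 0-3.3 V PWM signal re-centres around 0 V after the capacitor.
v_high, v_low = 3.3, 0.0
v_avg = (v_high + v_low) / 2           # DC component removed by the cap
swing = (v_high - v_avg, v_low - v_avg)
print(swing)  # roughly (1.65, -1.65)

# MCP4131 absolute maximum on any terminal: Vss - 0.3 V to Vdd + 0.3 V.
vss_limit = -0.3
assert min(swing) < vss_limit  # the -1.65 V excursion violates the spec
```

This is why removing C17, or biasing the pot terminal up to mid-rail as suggested, keeps the signal inside the Vss-to-Vdd window.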
Equal voltage, but not shorted? This is a fundamental circuit question. Suppose two nodes of a circuit are always kept at the same voltage by some unknown mechanism. If I now connect these two nodes with a wire, would it introduce any changes to the operation of the circuit? For example, in an ideal op-amp, the two input terminals have equal voltage. However, I've heard some say it's incorrect to think these two terminals are shorted. Why is that? If these two terminals are always at the same voltage (as we are assuming an ideal device), shorting them wouldn't make a difference, would it? <Q> For the question in the first paragraph A clarification to my earlier statement. <S> You have to be careful how you explain things. <S> If you assume that "two nodes are always kept at the same voltage by some unknown force", then the problem is simply, no it will not have an effect - because as you stated, the two nodes are always kept at the same voltage no matter what - they must be independent of each other for this to be the case. <S> However, just because two nodes are the same voltage, doesn't mean that a wire will have no effect. <S> In a circuit where two nodes are dependent on each other, then just shorting them out may or may not have an effect. <S> Why? <S> Because the wire may change the transfer function of the circuit and thus the relationship between the two dependent nodes. <S> This is certainly the case in op-amp circuits where you have feedback. <S> In regards to the second paragraph Op-amp terminals aren't shorted internally (you can short them externally, but it's not particularly useful to do so). <S> The voltage is not always the same at each terminal - for example you can connect one to one supply rail and the other to the other supply rail. <S> However the effect of having them non-equal is that your op-amp output clamps to one or other of the supply rails. 
<S> This is because op-amps typically have very high gains - a small difference in input voltage results in a large difference in output voltage. <S> I think you are confusing concepts. <S> There is an approach with op-amps of negative feedback in which the output feeds back into the negative input terminal. <S> Any changes made to the input terminal will result in the output value trying to change, but because you have feedback, the output then affects the input which is typically designed so as to bring the input difference back to zero at which point the output stabilises at its new value. <A> The stable non-saturated solution to an op-amp circuit satisfies equality of voltage across the inputs, but it also satisfies other requirements such as near-zero current into the inputs. <S> If you add a wire, you replace the restriction with a much weaker one -- that the currents at both inputs are equal and opposite. <S> This greatly increases the set of possible solutions. <S> In particular, your assumption that the current through the wire is zero is flawed. <S> By Ohm's Law, $$V = IR$$ <S> You want to divide both sides by \$R\$, leaving $$I = V \frac{1}{R} = 0 \frac{1}{R} = 0$$ <S> However, this equation is NOT valid. <S> Deriving \$I = \frac{V}{R}\$ from \$V = IR\$ is only permitted when \$R \neq 0\$. <S> For a wire, your derivation causes division by zero. <S> Ultimately, Ohm's Law is satisfied by any arbitrary value of \$I\$. <A> The nodes being at the same voltage does not mean that they can be shorted without any ill effects. <S> Consider this circuit: <S> simulate this circuit – Schematic created using CircuitLab <S> Now, assuming the opamp is ideal, the voltage on nodes A and B should be the same (equal to 1V). <S> The current isn't though. <S> The current flowing through R2 is much lower than the current flowing through R1. 
<S> If you took out the opamp and just connected nodes A and B together, the voltage would become lower (about 9.9mV). <A> In an op-amp circuit, a feedback network maintains the two inputs at (very nearly) the same voltage. <S> Usually the feedback network around an op-amp assumes two independent KCL equations, where the input current drawn by the op-amp is negligible (on the order of 1nA). <S> But each node has its own independent KCL in this design. <S> At first, very little current would flow (because the feedback network causes the two inputs to nearly match). <S> But even though very little current would flow at first, shorting the inputs would interfere with correct operation of the feedback network, and the circuit would become unstable. <S> There would be no way for the op-amp to correctly determine the right output voltage to continue to make the two inputs match. <S> So the output would likely saturate at either the positive or negative supply rail. <S> About the ideal op amp model: As the open-loop gain approaches infinity, the difference between the two inputs approaches zero. <S> But there still must be some difference between the inputs, otherwise how can the op amp determine its output voltage? <S> The idea of an ideal op-amp is just a heuristic to make the circuit analysis a bit easier, by looking at just the feedback networks and not worrying about the circuitry inside the op-amp. <S> But it is just a heuristic for analysis, it is not something that could ever exist. <A> Regarding shorting two same-voltage nodes: Suppose you can manage to keep two nodes at the same voltage; still, if you short them there are many effects you have to face: <S> Your transfer function may change. <S> Your equivalent impedances from those two nodes will change. <S> Your circuit's feedback will be affected. <S> Current finds a new path to flow, so the operating voltage may change. <S> Now regarding your op-amp: if you short A & B, there are some consequences too. 
<S> Like: <S> Ideally, op-amp input current is zero, so there will be no current flow through resistance R2. <S> But now it found a new path and current will flow. <S> A real op-amp doesn't have the same voltage at nodes A & B; there is always a small difference (maybe in the millivolt range), but when you short them, you force the same voltage at those nodes. <S> So the output of the op-amp may oscillate.
If you left the opamp in and just shorted A and B together, shorting both inputs of an opamp would make it output 0V and since it has low output impedance, the voltage on both A and B would become zero.
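The point made above, that the "virtual short" is the limit of finite open-loop gain rather than a literal short, can be illustrated numerically. A sketch of a non-inverting amplifier with an assumed feedback fraction of 1/2 (ideal closed-loop gain of 2):

```python
# How the "virtual short" emerges from finite open-loop gain.
# beta is the fraction of the output fed back to the inverting input.
beta = 0.5   # assumed feedback network: ideal closed-loop gain of 2
v_in = 1.0   # input voltage, volts (assumed)

for A in (1e3, 1e5, 1e7):             # increasing open-loop gains
    gain_cl = A / (1 + A * beta)      # closed-loop gain with feedback
    v_out = gain_cl * v_in
    v_diff = v_out / A                # residual voltage between the inputs
    print(A, v_out, v_diff)
# As A grows, v_out approaches 2 V and v_diff approaches 0: the inputs
# become *nearly* equal, but a small difference must remain for the
# op-amp to determine its output voltage.
```

This matches the answer above: equality of the inputs is a consequence of the feedback acting through the gain, not a wire you could substitute for the op-amp.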
Convert PWM to Analog using a DAC chip in order to emulate a Potentiometer for audio I'm trying to control audio level/gain (from line or amplified signal) using an Arduino. I do not want to use SPI, for this project I can only use the PWM outputs, thus I do not want to use a digital pot. I found some related questions here, but they do not fully explain how this approach applies to audio applications. From the PWM I know I can use a low pass filter, but I want to save time and space using a DAC chip . One option is the TDA1543 ( http://www.docethifi.com/TDA1543_.PDF ). So my questions are: How do I connect the PWM and audio in/out using the DAC TDA1543? Will this approach work as an audio pot controlled by PWM or is there a more straightforward option? The TDA1543 has 8 pins: 1: bit clock input 2: word select input 3: data input 4: ground 5: voltage 6: left channel voltage output 7: reference voltage output 8: right channel output Where do I connect the PWM, and audio in and out? I believe I also need to indicate the resistance somehow or add resistors such as in a 10K pot (amplified) or 100K pot (line). Any help will be very much appreciated!! <Q> The output is fed to a low pass filter with a corner frequency somewhere between the top of the audio band of interest (say, 20kHz), and the PWM switching frequency (say, 100kHz). <S> For cleanest waveform, a corner frequency of just over 20kHz - or a high order brick wall filter as used in early CD players. <A> You want something like this chip that converts pwm to an analog signal http://www.linear.com/product/LTC2644 <A> Why don't you want to use SPI? <S> What are your real constraints? <S> What is the context of what you're trying to build? <S> Where do I connect the PWM, and audio in and out? <S> There is no audio in connection and no PWM in connection. <S> The device takes 3 digital pins of input in I2S format, and outputs a voltage. <S> This is not on its own sufficient to control a line level signal. 
<S> What you want is a programmable gain amplifier of some sort. <S> It ought to be possible to use PWM into an analog low-pass filter with a large time constant to drive a voltage-controlled amplifier. <S> You'd need to select a suitable VCA chip. <A> simulate this circuit – Schematic created using CircuitLab <S> There are ways to do this with analog switches ('4066), but they may inject noise through pulse injection, and performance depends on circuit impedance, a switching rate above 20kHz, Nyquist filtering, etc. <S> Generally, digital pots are harder to emulate with low distortion from discrete parts; by contrast, PWM as used in class D/E amplifiers relies on a switching half-bridge and filtering. <A> Although there are various clever ways to implement a variable gain amplifier, you may have trouble finding the best one and implementing it if your knowledge of analog circuits is limited. <S> How fast do you need to be able to change the volume? <S> One solution to this problem is to employ a digital potentiometer . <S> Here is one example. <S> It is basically a resistor whose value can be programmed over a serial interface. <S> If you drop one of these into a simple feedback amplifier , you can adjust the gain by programming the resistor according to the laws of the op-amp configuration.
To control audio signal gain with a PWM channel, simply connect the audio input and ground to the inputs of an analog SPDT switch, and connect the PWM signal to the switch's control input.
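The low-pass filtering step mentioned in the answers above is easy to sanity-check numerically. A sketch with assumed values (R = 1 kΩ, C = 4.7 nF, a 100 kHz PWM rate), placing the corner frequency between the audio band and the switching rate:

```python
import math

# Picking an RC low-pass corner between the audio band (~20 kHz) and an
# assumed 100 kHz PWM rate. R and C values are illustrative assumptions.
R = 1_000    # ohms (assumed)
C = 4.7e-9   # farads (assumed)

f_c = 1 / (2 * math.pi * R * C)   # first-order RC corner frequency
print(round(f_c))  # -> 33863 (Hz), i.e. ~33.9 kHz

# Above the audio band, below the switching frequency:
assert 20_000 < f_c < 100_000
```

A single RC pole gives only gentle attenuation of the PWM carrier; as noted above, a higher-order ("brick wall") filter is needed for the cleanest waveform.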
Op Amp in buffer configuration is decreasing output voltage I have a 4.24V DC signal to which I have to apply a buffer for the purpose of having a high impedance between the 5V signal and the Op Amp output. My circuit is as follows: The problem is that my output voltage is approximately 3.8V. Why is the Op Amp in buffer configuration decreasing the output voltage? PS: I'm using a single-supply Op Amp (I tried with LM324 and LM358) <Q> Because you're trying to have the op amp output a voltage higher than it's capable of. <S> A typical upper limit is V+ - 1.5V, so the fact that you're getting 3.8V out on a 5V supply is already better than that. <S> Either pick an op amp with a rail-to-rail output or use a supply with a higher voltage. <A> You have to actually read the datasheet before using a part. <S> Here is the relevant snippet from the LM324 datasheet: <S> The middle column of numbers applies to the LM324. <S> With 30 V power, it can only go to 26 V output, meaning it requires 4 V of headroom. <S> The datasheet isn't very clear what the headroom requirement is with 5 V power, but the 1.2 V you see should be no surprise. <S> There is a similar limitation on the input common mode voltage range, which is 0-28 V with 30 V power. <S> Your 4.24 V in may be violating the limit for 5 V power. <A> As others have already mentioned, the output voltage of a real op amp can never reach (be equal to) the voltages on the power supply rails (+VCC or -VEE). <S> Looking at the schematic diagram for a real op amp, one can understand by inspection why this is. <S> Figure 1 is a schematic diagram I copied from the National Semiconductor LM324 data sheet (dated August 2000): <S> Figure 1. National Semiconductor LM324 Op Amp Schematic <S> The op amp's output stage comprises the Darlington transistor pair Q5 and Q6, resistor \$R_{SC}\$, and transistor Q13. 
<S> When current flows through resistor \$R_{SC}\$, Ohm's Law applies—i.e., there will be a voltage drop across \$R_{SC}\$ that is directly proportional to the amount of current flowing through it. <S> If the input signal tries to drive the OUTPUT voltage toward \$V^{+}\$, and if the load circuit connected to the OUTPUT pin pulls a lot of current, then Q6 is driven into saturation (or close to it), presumably Q13 is cutoff (or close to it), and the maximum voltage at the op amp's OUTPUT pin (relative to ground) is approximately $$V_{OUTPUT} = V_{load} = V^{+} - V_{Q6} - V_{R_{SC}}$$ (n.b. I am assuming the load circuit is connected between the op amp's OUTPUT pin and ground.) <S> For the conditions mentioned above, the voltage drop across transistor Q6 (\$V_{Q6}\$) is the collector-to-emitter saturation voltage, and the voltage drop across resistor \$R_{SC}\$ (\$V_{R_{SC}}\$) is determined by the amount of current flowing through that resistor (Ohm's Law). <S> If the load circuit connected to the op amp's OUTPUT pin pulls lots of current, then by inspection one can see that the maximum voltage level at the OUTPUT pin will be considerably less than \$V^{+}\$. <S> If you look at @OlinLathrop's answer to your question, note that on his data sheet example the Output Voltage Swing \$V_{OH}\$ spec—i.e., the maximum positive ("high") voltage swing at the OUTPUT pin—is defined as a function of \$V^{+}\$ AND load resistance \$R_{L}\$ (see Fig. 2). <S> Figure 2. LM324 "Output Voltage Swing" specification <S> This is because the load resistance plays a big role in determining the amount of current that flows through resistor \$R_{SC}\$, and therefore the voltage drop in the op amp's output stage. <A> Both of the op amps you used -- LM324 and LM358 -- have an input common mode range requirement that the input can't be closer than \$1.5\text{V}\$ to the \$V_{\text{CC}}\$ rail. 
<S> Additionally, these op amps don't have a rail-to-rail output -- they can't drive the output to closer than \$1.5\text{V}\$ from \$V_{\text{CC}}\$. <S> You can see both of these specifications on the LM324 datasheet, for example: With \$V_{\text{CC}} = 5\text{V}\$, both your input and output exceed these ranges. <S> If that's not possible you need to find an op amp which can deal with these ranges.
The easiest way to fix this may be to increase your supply voltage.
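The headroom argument in the answers above reduces to simple arithmetic. This sketch uses the ~1.5 V figure quoted for the LM324/LM358 as a rough typical value, not a guaranteed datasheet limit:

```python
# Rough output-headroom check for an LM324/LM358-style op amp on a 5 V rail.
# The 1.5 V headroom figure is the approximate value cited above.
v_cc = 5.0
headroom = 1.5
v_out_max = v_cc - headroom
print(v_out_max)  # -> 3.5

# The 4.24 V the asker wants to buffer exceeds what the output can reach:
assert 4.24 > v_out_max
# Raising the supply to 6 V or more (or using a rail-to-rail op amp)
# puts the target voltage back inside the achievable range:
assert 4.24 <= 6.0 - headroom
```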
ATmega - Why is the prescaler factory defaulted to 8? Why do ATmega (e.g. 328P or 644P) have CKDIV8 (or CLKPS = 0011 ) factory programmed along with a default internal 8 MHz oscillator? From the 644P documentation: 6.12.2 CLKPR - Clock Prescale Register , Page 40: The CKDIV8 Fuse determines the initial value of the CLKPS bits. If CKDIV8 is unprogrammed, the CLKPS bits will be reset to “0000”. If CKDIV8 is programmed, CLKPS bits are reset to “0011”, giving a division factor of 8 at start up. This feature should be used if the selected clock source has a higher frequency than the maximum frequency of the device at the present operating conditions. [...] The Application software must ensure that a sufficient division factor is chosen if the selected clock source has a higher frequency than the maximum frequency of the device at the present operating conditions. The device is shipped with the CKDIV8 Fuse programmed. Is it just a precaution to ensure the CPU clock does not exceed say a 16 MHz limit when configuring the MCU to run with an external oscillator of too high frequency (and forgetting to change CLKPS accordingly). Or are there other reasons? <Q> Note the dependency of the maximum allowed clock speed on the supply voltage: <S> E.g. the 644PV can only reach 4 MHz when running at 1.8V (similar for other chips) <S> If the controllers were programmed to a default 8 MHz you could not program them in a circuit running at such low supply voltage. <S> 1 MHz is a safe default frequency that any AVR can reach at any supply voltage within its specifications. <S> You could change the internal oscillator to a 1 MHz one and leave the clock divider unprogrammed, but this forbids to run the controller at a higher clock rate without an external clock source. <A> It's for out-of-the-box compatibility with and easy migration from the ATmega163, which only has a 1MHz internal RC oscillator (with other clock rates available via an external crystal or clock). 
<S> Of course, no one uses/should be using the '163 anymore this decade, but the legacy continues. <A> You can deduce it logically from a number of parameters, all which have to do with guaranteeing that the chip can be programmed safely under all allowed voltage conditions, from the factory settings. <S> AVR devices require a valid clock to be programmed. <S> If you want to use the RC oscillator, you shouldn't have to add a crystal or external clock signal just to program the device. <S> Conclusion: <S> The fuse bits must be programmed to run the chip from the RC oscillator from factory. <S> The RC oscillator should calibrated at a fast enough value to be generally useful, while having a good enough precision. <S> Atmel probably decided at some point that 8 MHz was an optimal point. <S> Not all devices can run at 8 MHz under all voltage conditions. <S> Conclusion: <S> Atmel standardized on dividing the clock by 8, which yields a 1 MHz clock, which is safe for any device. <S> There may also be a value in standardizing on a single frequency across all devices, which seems to be the case, even if it may not be strictly required for some of the devices.
Conclusion: 8 MHz was chosen as the calibration point for the RC oscillator in most AVR devices.
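The relationship between the CLKPS bits and the resulting CPU clock described above can be sketched as follows (for the CLKPS values discussed here, the division factor is a power of two, 2**CLKPS):

```python
# Map CLKPS prescaler bits to the effective CPU clock frequency.
# For the values discussed above, the division factor is 2**CLKPS.
def f_cpu(f_osc_hz, clkps_bits):
    """Effective CPU frequency given oscillator frequency and CLKPS value."""
    return f_osc_hz // (1 << clkps_bits)

# Factory default: 8 MHz RC oscillator with CKDIV8 programmed (CLKPS = 0011)
print(f_cpu(8_000_000, 0b0011))  # -> 1000000 (1 MHz, safe at any voltage)

# CKDIV8 unprogrammed (CLKPS = 0000): full 8 MHz
print(f_cpu(8_000_000, 0b0000))  # -> 8000000
```

This makes the rationale concrete: 1 MHz is reachable by every AVR across the whole supply-voltage range, while 8 MHz is not.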
Does N-type or P-type semi-conductor show electrical effect? I am reading the book Electrical Engineering 101 . It's a book of basics for not-so-newbie. It contains below description in Chapter 3: A diode is made of two types of semiconductors pushed together. They are known as type P and type N. They are created by a process called doping...Some dopants will create a type N structure in which there are some extra electrons simply hanging out with nowhere to go. Other dopants will create a type P structure in which there are missing electrons , also called holes. So, if I have a piece of P-type or N-type semiconductor in my hand, does it show any electrical effect? Say, electrostatic field? And a similar question: How is a semiconductor electrically neutral? <Q> The section you cite is misleading. <S> As Ignacio already said, the atoms in both P-type and N-type semiconductors are neutral. <S> The difference lies in the distribution of electrons between valence band and conduction band. <S> In simple words: in N-type semiconductors there is an excess of electrons that are able to move relatively freely in the bulk of the crystal. <S> For P-type semiconductors <S> the situation is reversed, there are less free electrons than in an intrinsic (i.e. undoped) crystal. <S> This also enhances conduction, even if it seems counter-intuitive, since those "missing" electrons leave "holes" in valence band that can move as if they were positive charges. <S> To recap: doping enhances conductivity of the crystal by altering the equilibrium of free electrons with respect to the intrinsic crystal, not by putting more or less charges in the crystal itself. <S> Keep in mind that what I explained in basic terms is explained rigorously only by quantum physics applied to the crystal structure. <S> Not an easy subject. <S> I think even many undergraduate courses in electronics around the world don't delve into that subject too much. 
<S> Even the concept of valence and conduction band cannot be explained quantitatively without formulas obtained from quantum physics. <S> I don't know your goals, but if you are an electronic enthusiast or an undergraduate student(*), usually you don't need to understand the subject much more deeply to design electronic circuits and understand the external behavior of electronic components. <S> (*) unless you aim at becoming an IC designer, in which case you must know very well how the components behave "inside the chip". <S> BTW, prompted by your comments to Ignacio's answer, I'll add some extra points: semiconductors are called that way because the conductivity of the intrinsic crystals is intermediate between insulators and metals, but doped semiconductors can have very high conductivity (especially N-type ones). <S> As an example consider a power MOSFET in its ON state: it can reach a resistance between drain and source of a few milliohms, just the kind of resistance level of a common relay's contacts, which are made of metal! <S> See, for instance, the datasheet of the IRF3709 : <S> Moreover, free electrons are called that way because they are free as they are in a metal: they are in the conduction band and that means that they can move freely across the entire crystal lattice, like in a metal. <S> They are not bound to a specific atom. <A> Some dopants will create a type N structure in which there are some extra electrons simply hanging out with nowhere to go. <S> Other dopants will create a type P structure in which there are missing electrons, also called holes. <S> A better way to state this is that an n-type semiconductor has extra mobile electrons, and a p-type semiconductor has a deficit of valence electrons. <S> Why a deficit of valence band electrons produces an effect identical to a positively-charged carrier called a hole is a bit of an involved topic. 
<S> But as an analogy you can consider that when a bubble of air flows upwards in a pool of water, there is a corresponding net downward flow of water. <A> No, since the atoms themselves in the material are neutral. <S> The extra electrons or holes are carriers that allow a current to flow when a voltage is applied to the material.
As the other answers point out, the structure as a whole (considering conduction band and valence band electrons, bound electrons in lower bands, nuclear protons, and ionized and unionized impurity sites) is electrically neutral.
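The charge bookkeeping behind that neutrality argument is simple enough to write down. This is an illustrative tally, not a physical simulation; the counts are arbitrary:

```python
# Charge bookkeeping for an n-type sample: each donor atom contributes one
# extra proton in its nucleus AND one extra electron, so the doped crystal
# stays electrically neutral even though it has more *mobile* carriers.
# The counts below are illustrative, not real doping densities.
n_donors = 1_000_000          # ionized donor sites (+1 charge each)
n_free_electrons = 1_000_000  # electrons donated to the conduction band

net_charge = (+1) * n_donors + (-1) * n_free_electrons
print(net_charge)  # -> 0: the bulk shows no net electrostatic field
```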
Do several PIC12F683 pull-ups cause a short circuit I am connecting several PICs together by the GP4 and GP5 pins, and I used their internal pull-ups to make the pins high by default. If pull-ups are enabled in all PICs, does connecting a lot of PICs together (i.e. 25 PICs) make a low pull-up resistance that causes a short circuit when one pin in one PIC goes low? Do I need to disable the pull-ups and simply use two resistors for all PICs? <Q> According to the datasheet for PIC12F683 , the max pull-up current is 400μA (see D070 on p.121). <S> The 25 PICs pulling up together would pull 10mA. <S> The nominal max GPIO sink current for this PIC is 8.5mA (see D080 on p.121), while the absolute max is 25mA. <S> There is also a more detailed chart fig.16-24 on p.149, which goes to 10mA. <S> You can make do [without a comfortable margin] using only internal pull-ups and not exceeding max sink current for a GPIO pin. <S> But I would use external pull-up resistors in this case. <A> I have to disagree with the previous answers. <S> In the PIC12F683 Datasheet the pull-up current for 5V is given as typically 250μA and max. 400μA (see D080 on p.121). <S> This lets you calculate the typical pull-up resistance to something around \$R=\frac{U}{I}=\frac{5V}{250μA}=20k\Omega\$ <S> And the minimum pull-up resistance to \$R=\frac{U}{I}=\frac{5V}{400μA}=12.5k\Omega\$ <S> Basically no current will flow. <S> When some device drives this line to 0V there will also be no problem for your PICs, as each individual PIC will still have to source only typically \$I=\frac{U}{R}=\frac{5V}{20k\Omega}=250μA\$ and at maximum \$I=\frac{U}{R}=\frac{5V}{12.5k\Omega}=400μA\$ <S> However, whatever drives this line of 25 parallel PICs to 0V will have to pull the signal down. 
<S> To do that it must be able to drive: 25 pull-ups in parallel will result in typically \$R_{total}= \frac{1}{\frac{1}{20k\Omega}*25}=800\Omega\$ and \$I_{total}=\frac{U}{R_{total}}=\frac{5V}{800\Omega} = 6.25mA\$ (or simply: 250μA x 25), and at maximum \$R_{total}= \frac{1}{\frac{1}{12.5k\Omega}*25}=500\Omega\$ and \$I_{total}=\frac{U}{R_{total}}=\frac{5V}{500\Omega} = 10mA\$ (or simply: 400μA x 25). <S> In case GP4 and GP5 are connected as well (for whatever reason) the current will double to 12.5mA typical and 20mA max. <S> On page 115 of the same datasheet you can see that the max. sink current per I/O pin is 25mA. <S> This means you can still drive the line of 25 pull-ups (or even 50 pull-ups in case GP4 and GP5 are connected) with a single output pin. <A> It will not create a short circuit condition because there is still a lot of resistance. <S> Assuming the internal pull-ups are around 10k and are the same value on each chip then you would have a load of around 400 Ohms (25 ICs).
But yes, adding pull-up resistors will increase current into the pin. Assuming all pins are inputs, nothing will happen, as the pull-ups will pull up against the input resistance (typically in the two-digit megaohm range).
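The parallel pull-up arithmetic from the answers above can be expressed as a small helper (5 V supply, and the typical/worst-case per-pin pull-up resistances derived from the datasheet currents):

```python
# Total load seen by one pin pulling a shared line low against N internal
# pull-ups. Per-pin resistances (20 kohm typical, 12.5 kohm worst case)
# follow from the 250 uA / 400 uA datasheet figures at 5 V.
def bus_load(r_pullup_ohm, n, v_supply=5.0):
    """Return (total parallel resistance, total sink current) for n pull-ups."""
    r_total = r_pullup_ohm / n      # n equal resistors in parallel
    i_total = v_supply / r_total
    return r_total, i_total

print(bus_load(20_000, 25))   # typical:    (800.0, 0.00625) -> 6.25 mA
print(bus_load(12_500, 25))   # worst case: (500.0, 0.01)    -> 10 mA
```

Both results stay below the 25 mA per-pin sink limit quoted above, which is why a single output pin can still drive the shared line low.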
Why do some circuits use an additional resistor with the pot in dividers? Are there any benefits to using the first circuit for a divider instead of the second one? simulate this circuit – Schematic created using CircuitLab <Q> The circuit on the right will have the full range of the voltage connected. <S> The circuit on the left will have the maximum given by the voltage divider of \$R_1\$ and \$R_2\$. <S> This is if you want to limit the maximum voltage to a lower value than the maximum voltage in the circuit. <A> Yes: it has a different range. <S> Depending on the application, that may be what you want. <A> The primary reason one does this (left hand side) in the real world is to prevent hazardous conditions from arising. <S> In some pots if you move the wiper off of the end of the track it can short out or open circuit (it very much depends upon the construction). <S> Some op-amps don't want their inputs being driven to the rail, so you'd put a series resistor in there to ensure that it always stays within the operational range of the op-amp input. <S> In some cases if the pot shorts you want to prevent high currents from running through the pot and collapsing the rail, overheating, etc. <S> Of course there are uses where it makes sense to only vary resistance between certain values. <S> In that case it is not a problem. <A> Safety. <S> A series resistor is connected so that the load resistance cannot become \$ 0\Omega\$. <S> simulate this circuit – Schematic created using CircuitLab <S> If you connect a load to the wiper of your pot, a portion of the pot (top) is in series with the load (wiper) and the remainder is in parallel with the load. <S> As the pot is adjusted, Load Voltage varies from 0V to 12V. <S> But if you connect the wiper to one end of the pot, you effectively have a resistance which varies from 0\$\Omega\$ to rated resistance. 
<S> simulate this circuit <S> As you move the wiper to the top, the series resistance will decrease to \$ 0\Omega\$, which would mean \$\infty\$ current. <S> So to protect the pot from burning up or overloading the source, a series resistance is added. <A> In your example, consider what the output would be: <S> In the first circuit, the output ranges from \$0 \Leftrightarrow (V - I_{R1}R_1)\$ <S> In the second circuit, the output ranges from \$0 \Leftrightarrow V\$
So, if you need a variable divider with some hard limits, external resistors are the way to go. So yes, it can be bad and limit your range of operation, but it is primarily used to protect against faults.
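The range difference between the two divider circuits can be sketched numerically. All values here (12 V supply, 10 kΩ pot, 1 kΩ series resistor) are illustrative, not taken from the schematics:

```python
# Output range of a pot wired as a divider, with and without a fixed
# series resistor R1 above it. Component values are assumed.
V = 12.0          # supply voltage (assumed)
R_POT = 10_000.0  # pot end-to-end resistance (assumed)
R1 = 1_000.0      # fixed series resistor (assumed)

def v_out(wiper_frac, r_series):
    """Wiper voltage for a pot divider with r_series above the pot."""
    r_top = r_series + (1 - wiper_frac) * R_POT
    r_bot = wiper_frac * R_POT
    return V * r_bot / (r_top + r_bot)

# Without R1 the output spans the full rail; with R1 the top is clamped:
print(v_out(1.0, 0.0))   # -> 12.0
print(v_out(1.0, R1))    # -> ~10.9  (12 * 10k / 11k)
print(v_out(0.0, R1))    # -> 0.0
# And R1 bounds the worst-case current if the wiper ties the rail to ground:
print(V / R1)            # -> 0.012 (A), instead of a dead short
```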
CE-compliant Schuko mains power switch: do I need to switch both poles? I am currently revising a plug-in power meter (i.e. a power meter that you plug into mains, then plug the DUT into it) for CE compliance. It features power switching using a latching relay, only on one pole. Because Schuko is reversible, do I need to add an extra relay so that both poles are switched? <Q> Assuming that EN60950 applies to your power meter, in a case where the mains plug is reversible <S> then EN60950 requires that a disconnect device "shall disconnect both poles simultaneously" (section 3.4.6). <S> However, pulling the power meter out of the mains socket will disconnect both poles simultaneously and so it is quite valid to consider this to be a disconnect device. <S> For CE compliance your relay only needs to switch one pole. <A> Your power meter has a Schuko (wall side). <S> It ALSO has a switch. <S> Then EN60950 says, in section 3.4.6, that: "Three examples of cases where a two-pole disconnect device is required are: [...] on PLUGGABLE EQUIPMENT supplied through a reversible appliance coupler or a reversible plug (unless the appliance coupler or plug itself is used as the disconnect device ); [...]". <S> Then if you use the plug as the disconnect device it's ok, as it unplugs both poles at the same time; otherwise, if you use switches as disconnect devices, the switch must be bipolar. <S> I agree with Turbo J's and zebonaut's comments above. <S> The heart of the matter is: if I have a switch, does this switch power off the device connected to my power meter or not? <S> I think that it is not so obvious (for the user) that the switch is a functional switch and not a disconnecting device for the downstream device. <S> If the switch removes the power to the device connected to the power meter it is a disconnecting device and must be bipolar; if it only stops the power measurement, leaving the device powered, then we can say this is a functional switch. 
<S> The user must be aware that if he opens the switch, and this is only a functional switch, the downstream device may still be connected to the hot wire, and there's a risk of electric shock. <A> Disclaimer: <S> Sorry, my copy of EN60950 is not in English, so the exact words are not a quote, but my translation. <S> However, I come to the conclusion that you must switch off both poles, because you say that your meter "features power switching", which sounds much like a true disconnect switch and is not just some sort of stand-by or functional switch: 3.4 Disconnection from the supplying branch <S> [...] <S> 3.4.6, Number of poles for [...] <S> single-phase devices: [...] <S> Three examples of cases where a two-pole disconnect device is required are: [...] 2) on PLUGGABLE EQUIPMENT supplied through a reversible appliance coupler or a reversible plug (unless the appliance coupler or plug itself is used as the disconnect device) 3) on equipment supplied from a socket-outlet with indeterminate polarity <S> There must not exist any ambiguity whether or not your relay may act as a disconnect device. <S> If there is a chance that a user may think it is possible to switch off the outlet using the relay (because: "Why is it there in the first place, anyway?"), you are, IMHO, required to use a two-pole relay or two relays acting simultaneously. <S> Example for a very simple device that does not require a two-pole switch: power strip or extension cord without a switch. <S> Example for a very simple device that does require a two-pole switch: power strip with a switch. <S> Also, quoting Steve G's comment below his own answer: "if you go down the path of considering that the relay is an EN60950 "disconnect device" then the relay has to meet more stringent requirements than a purely functional relay. <S> For example it will need a contact separation of at least 3mm (see 3.4.2)." <S> Late edit (sorry!)
<S> : I have just learned that there are power strips with functional switches (as opposed to disconnect switches). <S> These must be labeled: "Disconnected only when unplugged from outlet!" <S> (my own translation of "Spannungsfrei nur bei gezogenem Stecker!"). <S> So yes, it's all about building something that leaves no room for error on the user side.
Most domestic electrical equipment sold in the EU has a mains switch which only switches one pole of the mains supply and relies on the mains plug as the EN60950 disconnect device.
Connecting a Camera Hot Shoe to a homemade strobe The Situation simulate this circuit – Schematic created using CircuitLab I need to trigger a strobe from a Sony Hot Shoe. A 3.5mm jack lead comes out of the hot shoe. Using a multimeter I measured 6V across the tip and sleeve of the jack, and when the camera triggers the ring and tip become connected like a switch closing. Two wires come out of the strobe. Using a multimeter I measured 12V across them. When you touch them together, the strobe triggers. If I connect the hot shoe to the strobe, it all works fine. When the camera triggers, the ring becomes connected to the sleeve, completing the circuit and effectively shorting the strobe trigger to ground and firing the strobe. The problem is that when I plug the camera into a PC through USB, the strobe stops working. Cutting the power wires on the USB does not fix this, but cutting one of the data wires does. We have assumed that the hotshoe circuitry must be somehow connected to the USB circuitry. Off-the-shelf flashes work with the USB connected, but they also have 12V across the trigger leads, so we assume that off-the-shelf flashes are opto-isolated. We are hoping that opto-isolating the strobe trigger will fix the USB issue. The ideal solution simulate this circuit Because there is normally 6V out of the hot shoe, and this goes to 0V when the short is created, the LED in the opto-isolator will normally be on. Therefore I need an opto-isolator that lets the phototransistor allow a current to flow when the LED is off, and stops current when the LED is on. Does this type of "dark on" opto-isolator exist, and what should I search for to find one? Alternatively simulate this circuit If a "dark on" opto-isolator doesn't exist, my idea is to use two opto-isolators. Opto-isolator 2 will be powered from a separate 5V voltage source that will be shorted out when opto-isolator 1 is closed.
(i.e., no current will flow through R1 because there is a short to ground through opto-isolator 1). The problem is this requires a new separate power supply, and I'm unsure if it will work. Will this work and is there a better way to do it? Thanks! <Q> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> A very simple opto-isolator with normally-off output. <S> How it works: D1 allows C1 to charge up from the 6 V supply. <S> See note on current limiting below. <S> When the strobe contact closes, D1 is reverse biased and C1 discharges through R2, D2 and SW1. <S> With R2 = 390 \$\Omega\$ the current will be limited to 10 mA. <S> While the LED is on, Q1 will 'close its contacts'. <S> The beauty of this, apart from its simplicity, is that the LED is normally off, saving power. <S> Current limiting <S> We don't know exactly what is protecting the 6 V supply. <S> I suggest you connect a 1k resistor across the jack and measure the voltage across it. <S> From that you can calculate the internal voltage drop and figure out the effective series resistance. <A> That was our first thought, but <S> any other off-the-shelf flash works fine when the camera is connected to the computer. <S> It is only the strobe that doesn't work in this situation, which is why we want to modify it by adding opto-isolators. <S> In that case, you can probably get by with a much simpler level-shifter, like this: simulate this circuit – <S> Schematic created using CircuitLab <S> This works just like the common 3.3V-to-5.0V level shifters seen on I²C busses, except that we need the diode and capacitor to capture and hold the gate bias during a trigger event. <S> The MOSFET allows the hotshoe to pull the strobe trigger low, but prevents the 12V pullup from the strobe from back-feeding the camera. <A> The circuit shown below appears to meet the specific need. <S> Operation: C1 stores energy from the input which is used to drive the LED when the trigger line is taken low.
<S> With Vtrigger high, C1 charges via R3 and D1. <S> R3 may not be needed - it is provided to minimise the load of the initially uncharged capacitor on the trigger line. <S> D1 prevents discharge of C1 during triggering, allowing energy to be saved for the next initiation and preventing possible adverse effects from C1 discharging into the trigger input. <S> If the length of the trigger-low period is shorter than desired (unlikely but possible), <S> the FET on-time can be extended by adding a suitable capacitor Cg from M1's gate to ground such that the time constant R1·Cg holds the FET on as desired. <S> MOSFET M1 is held off by R1 when Vtrigger is high. <S> When Vtrigger goes low, M1 is turned on via D2 to its gate. <S> The LED is operated by the discharge of C1 via R2. <S> R2 may not be necessary - it limits current to the LED and prolongs the discharge time of C1. <S> The sizing of C1 and R2 depends on the length of the trigger pulse and the optocoupler characteristics. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Notes: <S> Values shown for R2, R3 are arbitrary and need to be designed to suit. <S> The MOSFET needs to have a suitable Vgsth so that the available turn-on voltage is sufficient. <S> The MOSFET can be driven by a comparator for sharper on/off drive.
It does not allow ongoing drive of the optocoupler LED but does provide a pulse to the LED when the input trigger line is taken low. If this is less than 1k (indicated by > 3 V across the test resistor) then I'd be inclined to add a 1k resistor between D1 and the top of C1 to limit the surge current on connection. This circuit is 'out of my head', may well work 'as is' but also may need some refining.
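As a rough numerical check of the pulse stage described above, the sketch below works through the LED current limit and the discharge time constant. The 6 V supply and R2 = 390 Ω come from the answer; the LED forward drop and the C1 value are assumptions for illustration only:

```python
# Back-of-the-envelope sizing for the C1/R2/LED pulse stage.
# V_LED and C1 are assumed values, not taken from the original circuit.
V_SUPPLY = 6.0   # V, measured across the hotshoe jack
V_LED = 2.0      # V, assumed forward drop of the opto-isolator LED
R2 = 390.0       # ohm, series current-limiting resistor from the answer
C1 = 100e-6      # F, assumed storage capacitor

# Peak LED current when the trigger contact closes and C1 discharges
i_peak = (V_SUPPLY - V_LED) / R2   # roughly 10 mA, matching the answer

# R2 * C1 sets roughly how long the LED pulse lasts
tau = R2 * C1

print(f"peak LED current: {i_peak * 1000:.1f} mA")
print(f"discharge time constant: {tau * 1000:.0f} ms")
```

With these assumed values the LED sees about 10 mA at contact closure and the pulse decays with a time constant of a few tens of milliseconds, comfortably longer than a typical trigger event.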
Understanding Lead in a causal system I've gone through school and done all the work but looking back at it I still don't quite get it. The rule is that current in a capacitor leads and in an inductor lags. The inductor makes perfect sense but the capacitor doesn't. We live in a causal world. Nothing from the future can affect the present. That said, the inductor lags, but then why does the capacitor not lag even more? Is this only a mathematical construct? <Q> The voltage of a capacitor can't change instantaneously; it needs a certain time. <S> A good way to visualize this behavior is by charging the capacitor with a current source. <S> First we have the current, then the voltage builds up. <S> The voltage lags the current (or the current leads the voltage). <S> For an inductor the current needs some time to build up. <S> A voltage is applied and a current starts to flow. <S> The current lags, the voltage leads. <A> This is a mathematical construct for when we are dealing with sinusoidal waveforms, and to say that the current in a capacitor leads the voltage is just an equivalent way of saying that the voltage lags the current. <S> For a capacitor \$i = C \cdot \dfrac{dv}{dt} \Rightarrow v = \dfrac{1}{C} \int i \, dt\$. <S> For an inductor \$v = L \cdot \dfrac{di}{dt} \Rightarrow i = \dfrac{1}{L} \int v \, dt\$. <S> Now if we differentiate \$\sin\$ we get \$\cos\$, which appears to lead, and if we integrate \$\sin\$ we get \$(-\cos)\$, which appears to lag. <S> If we want to know the current in any component we can use \$i = \dfrac{v}{Z}\$. <S> For a capacitor \$Z = \dfrac{1}{j \omega C}\$, thus \$i = \dfrac{v}{1 / (j \omega C)} \Rightarrow \dfrac{i}{v} = j \omega C\$, and we can see that the current leads the voltage.
<S> For an inductor \$Z = j \omega L\$, thus \$i = \dfrac{v}{j \omega L} \Rightarrow \dfrac{i}{v} = -j \dfrac{1}{\omega L}\$, and we can see that the current lags the voltage, or the voltage leads the current. <S> But it must be remembered that this is a steady-state response, after the system has had time to settle. <A> Lead or lag does not imply transient analysis. <S> Go back to the basic formula for a capacitor, Q = CV. <S> Then differentiate both sides to get dQ/dt = C dv/dt, and of course dQ/dt = current, so: \$I = C \dfrac{dv}{dt}\$. <S> If voltage rises at a certain rate the current will be constant - nothing to do with leading or lagging here until you apply a sinewave, and the differential of a sinewave voltage is a cosine wave, <S> hence current leads voltage by 90 degrees. <S> BUT we're talking steady-state AC analysis, <S> and that's when the terms leading and lagging apply - they don't make sense when talking about transient analysis. <A> It may seem capacitors are clairvoyant since the current appears to lead the voltage. <S> However, what they're really doing is making the current follow the derivative of the voltage. <S> When the voltage is a sine, then the current can be said to lead the voltage. <S> That's only because the derivative of a sinusoid is another sinusoid, so the signals appear the same with one leading the other by ¼ cycle. <S> The "leading" part is just one way to look at this special case, although it is a common and useful special case. <A> Put another way: in the steady state, a leading phase of 90° is actually a lagging phase of 270°. <S> The lead of one cycle is the lag from the previous cycle. <A> Sorry, I won't give any equations because I have not dabbled enough with the maths of passive components in AC. <S> I can somewhat understand first-order equations and a lot less second-derivative ones.
<S> But what we have to understand is what is causal and what is not. <S> While this might seem like an unresolvable philosophical question, this one is not. <S> The equation and description are not causal. <S> It is a relationship. <S> Which means as long as the voltage has a given value at a given time, then the current should exist and have this certain value. <S> You can turn the question into a causal one. <S> You can ask, "OK, I have a reliable voltage source (which can supply any amount of charge to maintain the voltage across it) that is AC; what's the current?". <S> Which is one side of the question. <S> One can also ask, "If I have a current source that is AC, what would the voltage across the capacitor be?". <S> By the way, a current source charging a capacitor is hardly ever given consideration, but we could implement a setup like that. <S> You can have a high enough voltage source (to provide the peak current) and then have a transistor (BJT or MOSFET) provide the current. <S> Incidentally, you can also say "the voltage drop across the transistor varies to create the voltage across it and the rest across the capacitor", as the equation tells us. <S> You can see it from different points of view. <S> Equations only give us relationships into which we can plug numbers and values, but hardly ever the context. <S> ==== <S> I'm gonna get a lot of fire for posting this simple idea, but if it wasn't for this, I wouldn't have resolved a system that I was solving in a purely mathematical sense a while back.
To use terms like lead or lag is to imply that your are refering to the AC sinusoidal analysis of capacitors and inductors and that means the steady state AC situation in which currents can lead voltages or voltages can lead currents. In a PERIODIC environment "lead" is essentially indistinguishable from a very long "lag".
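The "derivative of a sine leads by 90°" argument from the answers can be checked numerically. This is a minimal sketch (arbitrary capacitance, 50 Hz sine) that differentiates the voltage to get the capacitor current and then extracts the phase difference:

```python
import numpy as np

# i = C * dv/dt: with v = sin(wt) the current is proportional to cos(wt),
# i.e. it leads the voltage by a quarter cycle in the sinusoidal steady state.
C = 1e-6                         # F, arbitrary for the demonstration
w = 2 * np.pi * 50               # rad/s (50 Hz)
t = np.linspace(0, 0.04, 4001)   # exactly two full cycles

v = np.sin(w * t)
i = C * np.gradient(v, t)        # numerical C * dv/dt

def phase(x):
    # Project onto sin/cos at the drive frequency to extract the phase
    a = np.sum(x * np.sin(w * t))
    b = np.sum(x * np.cos(w * t))
    return np.arctan2(b, a)

lead = np.degrees(phase(i) - phase(v))
print(f"current leads voltage by about {lead:.1f} degrees")
```

The measured lead comes out at 90° (up to small numerical-differentiation error at the edges), confirming the \$\dfrac{i}{v} = j \omega C\$ result without any clairvoyance: the capacitor only ever responds to the present rate of change of its voltage.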
Plugging the same device into a jack of lower voltage will draw more or less current? I'll put a set of ''rights'' numbered based on my current assumptions, so if one of them is wrong, you can just point it out. Let's say I have a device that is labeled ''660W/220V'', so it delivers 660W on a jack of 220V, right(1)? Then, let's move on: Based on Ohm's law: P = U²/R, applying the values, we get an INTERNAL RESISTANCE of the device of about 73.3 ohms, right(2)? With U = R * i, the device draws a current of about 3A, right(3)? So, if the internal resistance doesn't change, plugging the same device into a 110V jack will now only produce 165W and will only draw 1.5A, right(4)? Thing is, I've seen a video of a dude plugging 2x 60W lightbulbs into 2x outlets, one on 110V and one on 220V. The one on 110V drew twice the amount of current compared with the one on 220V. That doesn't make sense. Maybe the bulbs were different and designed to operate on the voltage he plugged them into? So in that case, a 60W/110V bulb has a smaller internal resistance, so it needs to draw way more current to produce 60W, is that it? Extreme stupid question: I don't understand the logic behind P = V * i. The amount of voltage in an outlet should only determine the capacity of current it can provide. More voltage should equal more current. More voltage equals less current makes sense in the formula, but not in my head lol. <Q> if the internal resistance doesn't change, plugging the same device into a 110V jack will now only produce 165W and will only draw 1.5A, right(4)? <S> Correct. <S> But there are very few loads that really act like pure unchanging resistors. <S> Even an incandescent bulb's resistance changes value as the filament heats up. <S> Maybe the bulbs were different and designed to operate on the voltage he plugged them into? <S> So in that case, a 60W/110V bulb has a smaller internal resistance, so it needs to draw way more current to produce 60W, is that it? <S> Correct.
<S> You could also see this when using a device powered by a switching power supply. <S> The supply will adjust its current draw to supply the same power to its load. <S> So on 110 V mains it will need to draw about twice as much current as on 220 V mains. <A> The resistance of the filament depends on the temperature. <S> As soon as current flows, the temperature rises and the resistance increases. <S> A certain power is required to heat up the filament. <A> This affects how they behave on different voltages, but in the video example that you cite there is almost certainly a simple explanation: the bulbs were almost certainly in series on 220 VAC and in parallel on 110 VAC. <S> This would fully explain the current draws that were seen. <S> If they did not explain this then they were trying to trick you. <S> Longer: DO NOT believe everything technical (or even most things) that you see in videos and/or on the internet. <S> If you can provide a link to the video we may well be able to provide a better answer. <S> Otherwise the best answer is - "In a video?? ?????? !!!!". :-). BUT do note that <S> two 110V 60 W bulbs IN SERIES on 220V will draw about 120 W / 220 V ≈ <S> 0.545 A, <S> AND the same two bulbs IN PARALLEL on 110V will draw 120 W / 110 V <S> ≈ 1.09 A, or twice as much as on 220 VAC. <S> _____________________________ <S> Ohm's law is the first formula here, and the three examples are simply rearrangements of the same formula. <S> Resistance = Voltage / Current: R = V/I. <S> Current = Voltage / Resistance: I = V/R. <S> Voltage = Current x Resistance: V = I x R. <S> Power dissipated in a resistance can be expressed by the following formulas. <S> The three examples are simply the same expression rearranged with different variables substituted.
<S> Power = Volts x Amps: P = V x I. <S> Power = Voltage drop squared / Resistance: P = V^2/R, as P = V x I = V x (V/R) = V^2/R. <S> Power = Current squared x Resistance: P = I^2 x R, as P = V x I = (I x R) x I = I^2 x R. <A> Depends. <S> That's an average or max that shouldn't be exceeded. <S> What device are we talking about? <S> Because, if this is a power supply (for example), it can go from a few watts to 660W, yes. <S> But it can also go a bit higher, if we put something such as a motor on it. <S> Internal resistance (as you call it) is called impedance. <S> But that's only as long as it is a single device, not a power supply, which just converts voltage (AC-AC, AC-DC, DC-AC or DC-DC). <S> In that case, its impedance is much higher. <S> Yes, about. <S> During operation it probably falls down (at least a bit, but theoretically... yes). <S> The factor P (power) is actually just an energetics thing. <S> Electronics doesn't care about it (unless choosing, for example, what resistor to use: 1/4, 1/2, 1W), because we ALWAYS use a determined voltage and a determined current. <S> Yes, together they combine into power, but it's just not that simple. <S> More voltage equals less current; why would you want to draw more current? <S> It's the same effect. <S> If you need to boil a jar of water, you can put it on a stove that is preheated <S> to 100°C for 1 minute, or on a stove preheated to 200°C for 30 seconds. <S> The effect is the same, the time is different. <S> P = U*I is similar. <A> If we neglect that you are talking about AC currents and voltages (which make calculations a bit more complex for non-resistive loads), you are correct with (1), (2), (3) and (4). <S> Then both will deliver 60W in the voltage system they've been designed for. <S> Things would be as you expect only if the same type of 60W/220V bulb were to be used in a 110V and 220V system.
<S> Then, in the 110V system, the current would be half the current of the 220V system.
The light bulbs seemed to be rated for the corresponding voltage, so in order for both to achieve 60W, the 110V version has to draw twice the current than the 220V bulb. I also would like to add that often there is a small frequency difference between different mains systems (most commonly 50Hz vs. 60Hz), which adds small changes in current for inductive loads (motors), because the Impedance ("internal resistance") changes with frequency. Short: Light bulbs are not well behaved resistors as they change resistance as the filament heats. As long as the temperature is low the resistance is low as well. For linear pure "ideal" resistors you can apply Ohms law and the related power formulae:
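The arithmetic behind points (1)-(4) and the two-bulb comparison can be written out explicitly. This is a minimal sketch treating the loads as ideal fixed resistors, which, as the answers note, real filaments are not:

```python
# Idealized model: loads are fixed resistors, with R = V^2 / P from the rating.

def resistance(power_rated, volts_rated):
    # From P = V^2 / R  =>  R = V^2 / P
    return volts_rated ** 2 / power_rated

# (1)-(4): a 660 W / 220 V device moved to a 110 V outlet
R = resistance(660, 220)    # ~73.3 ohm "internal resistance"
i_110 = 110 / R             # 1.5 A: half the rated current
p_110 = 110 ** 2 / R        # 165 W: a quarter of the rated power

# The two-bulb comparison: each bulb rated 60 W for its own mains voltage
i_bulb_110 = 60 / 110       # ~0.55 A
i_bulb_220 = 60 / 220       # ~0.27 A -> the 110 V bulb draws twice as much

print(R, i_110, p_110, i_bulb_110, i_bulb_220)
```

Halving the voltage across a fixed resistance halves the current and quarters the power, while bulbs designed for different voltages draw whatever current their rating demands, which is why the 110 V bulb in the video drew double the current of the 220 V one.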
What are some common applications of the MOV? (Metal Oxide Varistor) The owner of a broken washing machine recently asked me to examine a damaged circuit board he had found inside. Using the schematic I was able to determine that the charred area formerly contained a device labeled MOV. I found several of these devices on the board and now gather they were Metal Oxide Varistors, which can be used for over-voltage protection. Considering this board appeared to be a low-power supply of some sort (transformer, rectifier, transistor etc.) and also contained a blown 0.5A fuse, what was the most likely function of the blown MOV? In general, what are MOVs used for in PCB designs? Real-world circuit examples would be great. <Q> A varistor after the fuse ensures that when the voltage crosses a certain value, the fuse blows and current flow stops. <S> Generally fuses are rated for a current limitation, not a voltage limitation (as in your example). <S> It is possible that the voltage across the fuse's circuit is such that it doesn't create more than the rated current through the fuse, yet is still high enough that it could harm the circuit (or is just unwanted). <S> In that case the varistor is used to increase current through the fuse, causing it to blow and stop the current. <S> When the voltage crosses the upper limit, the varistor resistance is reduced, increasing current through the fuse as in this circuit: <S> Many PCB circuits contain inductors and capacitors which will create transient states and surges (switching spikes). <S> Too much of this kind of occurrence will harm the device, so varistors are used for protection. <A> In the case of your circuit board, it sounds like your MOV decided to go low-impedance, meaning its internal resistance went close to zero and caused a big current to go through it, thereby overheating it and blowing it up. <S> The current surge caused by the MOV doing this probably caused your fuse to blow.
<S> This was caused either by a large voltage surge that the MOV tried to shunt to ground, or the MOV was defective (either through manufacturing or over-used). <S> When used for circuit protection, MOVs are used to shunt excess energy to ground. <S> Other devices that do this are gas discharge tubes and TVS diodes. <S> Each has their own method of dumping energy, which is usually a trade-off between the accuracy of the voltage threshold (Vtrip) before dumping energy to ground versus how much energy the device can dump before it explodes. <S> A rule of thumb for these circuits is delay-dump, where circuit blocks try to alternately delay or retard the surge (through inline devices like transformers, resistors, etc), <S> followed by dumping it to ground through a gas discharge tube, MOV or TVS, and repeating these stages until the sensitive circuits behind the protection stages are reasonably safe. <S> It's all about trying to handle and manage the excess energy while the protection circuit tries to shed it. <S> MOVs can get "old" due to exposure to repeated excess voltage and either fail or no longer function properly. <S> Think of it as if the MOV has a counter inside of it for how many joules it can dump through itself before it is finished. <S> I recall from my avionics years that MOVs were being avoided because of their indeterminate lifespan and lack of test methods to see if it was still functional. <A> The varistor is used to suppress transients such as surges, switching spikes, and ESD events -- usually, they are found on power lines. <S> Varistors absorb some energy every time they suppress a surge, and this reduces their voltage withstanding capability slightly -- too much of this, and the varistor turns on all the time, leading to an effective short across the line, no more varistor, and a blown fuse. <S> Better that than sacrificing more costly components downstream, though!
You may see combinations of tubes, MOVs and TVSs on input circuits to protect them from surges, whether it be a power mains surge or lightning-induced effect.
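To illustrate why an MOV conducts almost nothing below its clamping voltage and a large current above it, varistors are commonly approximated by the power law I = k·V^α with a large exponent. The sketch below uses made-up k and α values (not from any datasheet), chosen so that roughly 1 mA flows at 275 V:

```python
# Illustrative power-law varistor model: I = K * V**ALPHA.
# K and ALPHA are invented for demonstration, not datasheet values.
K = 6.61e-77     # scaling constant, chosen so ~1 mA flows at 275 V
ALPHA = 30.0     # nonlinearity exponent (real MOVs are often in the tens)

def mov_current(volts):
    return K * volts ** ALPHA

for v in (230, 275, 320):
    print(f"{v} V -> {mov_current(v):.2e} A")
```

At a nominal 230 V mains this model leaks only microamps, while a surge to 320 V raises the current by several orders of magnitude. That sudden jump in current through the series fuse is exactly the "MOV crowbars, fuse blows" failure mode described for the washing-machine board above.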