How do thermal tapes transfer heat despite their low thermal conductivity?

I found myself needing to attach small heatsinks to MOSFETs, constant-current power regulators, and some high-power diodes. Today I learned that thermal compounds, even the best on the market, are dozens of times less conductive than aluminium or copper. I understand thermal compound is used only to fill microscopic gaps. But what about thermal adhesive tapes? My question is: using a thermal tape, I might completely cover the surface between the heat source and the heat sink in order to join them together. If thermal tape is 100 times less conductive than aluminium, how does it manage to do its job and transfer the heat, even in high-power applications?

Answer 1: Heat-sinking compounds and tapes are less conductive than metals, but they are orders of magnitude better than air. Their main job is to fill the gaps in the solid-to-solid interface between your heat source and heat sink. Without such compounds, air pockets would form and the thermal interface would be poor. See the image below (source): the compound takes up any space between the rough surfaces. Compare the thermal interface options using the table below and you will see that the compounds make sense. When cost is no issue, even more novel thermal interfaces can be used: I have heard of chemical vapor deposition (CVD) diamond being used in certain packages, since diamond is thermally superior to an epoxy interface while still being electrically insulating (example source, though I have never personally used such devices).

Thermal conductivity table:
- Air: ~0.026 W/(m·K)
- EMC packaging compound: ~2-4 W/(m·K)
- High-grade thermal compound: ~8.5 W/(m·K)
- Aluminium: 205 W/(m·K)
- Copper: 385 W/(m·K)
- Diamond: 2200 W/(m·K)

Answer 2: "If thermal tape is 100 times less conductive than aluminium, how do they manage to do their job?" They do their job because they are far better than air, whose conductivity is only about 0.026 W/(m·K) — a good thermal compound is several hundred times better than that. If you're using an adhesive or thermal compound, it needs to be thin: the thinner the layer, the lower its total thermal resistance. (Think of Teflon pans: Teflon is one of the least thermally conductive materials, but applied in a thin layer it still conducts enough heat to cook food.) Many adhesives have thicknesses in the tens to hundreds of µm. You also need a filler between two pieces of metal because they are never perfectly flat (the flatter you want metal, the higher the machining cost). Roughness lets air get between the two metal surfaces, and again, air has a very low conductivity while adhesive is much better. By the way, there are newer adhesive materials made of graphite or graphene, with conductivities of 400 W/(m·K) to 1000 W/(m·K) in the x-y direction, that you might want to check out: https://industrial.panasonic.com/ww/products/thermal-solutions/graphite-sheet-pgs/pgs — you can get the pads at major distributors.

Answer 3: The total thermal resistance is length × resistivity / Ac, where length is the length of the conductive path, Ac is its cross-sectional area, and resistivity is the thermal resistivity of the pad. The conductive path length is simply the thickness of the pad. So even though the resistivity may be high, because the pad is very thin it is an acceptable thermal conductor overall. It is a design goal to make the pads out of the best all-around material, but it is hard to find anything conformable enough to act as a heat-sink pad that is also extremely conductive like copper or aluminium. Sometimes the pads also need to provide electrical insulation. "How do they manage to do their job?" I am more familiar with pads than with adhesive tape, but they are similar. Although they are not as thermally conductive as metal, they are more thermally conductive than air, they assure good physical contact, and they are very thin. A typical thermal pad has a thermal conductivity of 1 W/(m·K) and a thickness of 0.2 mm; one square centimetre of such a pad has a thermal resistance of about 2 °C/W. Not bad.
A single solid piece of copper or aluminum mounted directly to the chip die would certainly sink heat better, but it would short out all the circuitry; thus epoxy encapsulation must be used as an isolating interface, and as a result some compound is then needed to interface to a metal heat sink.
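The pad arithmetic in the answers above can be sketched in a few lines of Python. The 0.2 mm thickness, 1 W/(m·K) conductivity, and 1 cm² area come from the answers; the function itself is just the R = L/(kA) formula they quote:

```python
def pad_thermal_resistance(thickness_m, conductivity_w_mk, area_m2):
    """Thermal resistance of a pad: R = L / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

# 0.2 mm pad, k = 1 W/(m*K), 1 cm^2 contact area
r = pad_thermal_resistance(0.2e-3, 1.0, 1e-4)
print(r)  # 2.0 K/W: high resistivity, but the path is so short it hardly matters
```

Doubling the contact area or halving the thickness halves the resistance, which is why thin, full-coverage tapes work despite their modest conductivity.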
How are electric water heating elements isolated?

From what I've seen and understood, electric water heating elements are usually copper tubes/pipes with electricity running through them to heat up the surrounding water. I'm not sure if the electricity goes directly through the tubes/pipes, though. How do the heating elements make sure the electricity only heats up the water and doesn't run through the water, causing unwanted currents in the water instead?

Answer 1: The element is not made of copper (the resistance would be way too low). Normally it is nichrome (nickel + chromium), and then enclosed. Quoting Wikipedia directly: "Tubular (sheathed) elements normally comprise a fine coil of nichrome (NiCr) resistance heating alloy wire, that is located in a metallic tube (of stainless steel alloys, such as Incoloy, or copper) and insulated by magnesium oxide powder."

Answer 2: The conducting element is isolated from the exterior tube by a ceramic powder spacer layer that serves as an insulator. More here: https://en.wikipedia.org/wiki/Heating_element

Answer 3: From the MgO Wikipedia entry: "It is used extensively as an electrical insulator in tubular construction heating elements. There are several mesh sizes available and the most commonly used ones are 40 and 80 mesh per the American Foundry Society. The extensive use is due to its high dielectric strength and average thermal conductivity. MgO is usually crushed and compacted with minimal air gaps or voids. The electrical heating industry also experimented with aluminium oxide, but it is not used anymore. It is also used as an insulator in heat-resistant electrical cable." The coefficient of thermal expansion (CTE) of the insulator ought to match, or lie between, that of the metals used (Cu and NiCr).

Answer 4: Figure 1 shows the heater construction looking from the heater towards the terminal (source: Omega, electric tubular heaters). The magnesium oxide provides electrical insulation while giving reasonably good thermal conduction. The helically coiled wire helps when bending elements and avoids buckling due to thermal expansion. This low-tech construction method YouTube video may help. Note in the video the annealing of the tube, using very high current at low voltage to make it very hot. (Anneal, /əˈniːl/, verb: heat (metal or glass) and allow it to cool slowly, in order to remove internal stresses and toughen it. "Copper tubes must be annealed after bending or they will be brittle.")
There is no electrical contact between the tube and the element (or at least, there shouldn't be.)
Safety and isolation transformers

What safety measures can be taken when using a 1:1 isolation transformer in order to avoid electric shock even if someone touches both ends of the secondary? Would an insulation monitoring device be a safety measure in this case, or does it only detect leakage between primary and secondary?

Answer 1: "What safety measures can be taken... even if someone touches both ends of the secondary?" There's very little you can do if someone might touch both ends of the secondary. You can keep the secondary voltage below about 50 V, or ensure there's not enough energy present at the secondary to harm a person — for example, by not having any capacitive load on the secondary and by feeding the primary through a high resistance, so that the secondary voltage drops when a "low" resistance load is attached. The challenge is that a person touching the terminals doesn't look particularly different, to the circuit, from whatever load you are powering from the secondary. If there's something special about the usual load (for example, it's purely capacitive, or it never draws more than a few nanoamps), then perhaps you could design a useful safety circuit.

Answer 2: You can't take any effective countermeasure against touching both secondary wires. The isolation transformer is not a device designed to prevent that; rather, it isolates a circuit from mains. If the device's insulation breaks down, a short circuit to earth results. In a portable device, the supply wire on the chassis could be either live or neutral, depending on how the plug is wired. An isolation transformer prevents you from getting a live wire on the chassis if the device breaks. This kind of isolation is indicated on a device with the double-square symbol: a double-insulated, Class II appliance.

Answer 3: A 1:1 isolation transformer isolates the secondary from the ground (earth) line, so an accidental connection between the isolated line and earth won't be a problem. Touching both ends, however, will still harm or kill the person. Is there any safety precaution? On the primary side, there are residual-current safety devices which detect leakage current (current taking a path other than the neutral) and trip the circuit within tens of milliseconds — but even that can be enough time to make a heart stop or skip a beat. Other than that: keep high-voltage parts inside a plastic housing, make openings only in the low-voltage sections of devices, and follow good safety practices and education.
Main safety precaution to be taken: Do not touch both line and neutral of secondary.
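The ~50 V limit in the first answer can be motivated with a rough touch-current estimate. The 1 kΩ body resistance is a commonly used worst-case figure, assumed here for illustration:

```python
def body_current_ma(voltage_v, body_resistance_ohm=1000.0):
    """Rough touch-current estimate across the body: I = V / R_body, in mA.
    1 kohm is an assumed worst-case wet-skin body resistance."""
    return 1000.0 * voltage_v / body_resistance_ohm

print(body_current_ma(230.0))  # 230.0 mA -- far above the ~30 mA level
                               # generally considered dangerous
print(body_current_ma(50.0))   # 50.0 mA -- why <50 V is treated as safer
```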
Hold time of a D flip-flop

What is the physical cause of the hold time of a D flip-flop? Why is it necessary to keep its input data constant for a certain amount of time?

Answer 1: In addition to @Bimpelrekkie's answer, you should know that the clock signal may be buffered and inverted inside the flip-flop. So there are internal clock signals that may not yet be in their final stable states at the instant that the external clock rises.

Answer 2: Clocked logic elements are built from pairs of latches; we'll call them Stage 1 and Stage 2. (Often they're called master and slave, but I dislike those terms and avoid them when I can.) A basic clocked flop works like this: the Stage 1 latch passes its input during the clock-low time and holds during clock-high; the Stage 2 latch passes its input during clock-high and holds during clock-low. You may recall that latches work by selecting between the input and self-reinforcing feedback. To reliably catch and hold the input, the Stage 1 latch input has to be stable long enough that the feedback state is settled when the Stage 1 latch closes and Stage 2 opens. Setup time is the maximum of this feedback delay; hold time is the minimum. To keep things simple, most logic designers try to arrange the relative max/min delays of clock and data to give zero hold time, but this isn't always the case. Sometimes the hold window ends after the clock edge, sometimes before, depending on the delays of clock and data to the flop.

Answer 3: All circuits have delays due to (parasitic) capacitances which need to be charged and discharged. The (dis)charging is done through switches (usually transistors) which do not have zero series resistance, so the speed of any change is limited by at least some RC time constant. The hold time is needed because the flip-flop isn't infinitely fast: it needs some time to settle in the desired state. (If you know a circuit or component that is infinitely fast, then please let me know!)

Answer 4: For most devices the required hold time is specified as 0 seconds. That doesn't mean those devices are infinitely fast; rather, their internal logic doesn't need the data to be stable after the clock edge. Hold times can even be specified as negative, in which case the data need not be stable during the clock transition itself. (Examples of these cases are welcome.) Propagation delay, rise time, asymmetrical sub-sections, and temperature effects all demand that the input signal be stable for a certain duration so the system can reliably sample the data (i.e., see stable data). Finally, follow the specifications in the supplier's datasheet.

Answer 5: One additional issue: if the data is in the "illegal" voltage range between "high" and "low" when the clock changes, you will experience metastable behaviour; see its Wikipedia entry.

Answer 6: The cause is the capacitances that create RC time constants within the flip-flop. This capacitance usually comes from the gate or base of transistors to somewhere else in the circuit, but also, to a lesser extent, from wiring, packaging, etc. The capacitances that create hold-time requirements can be in multiple places: in the clock buffers, delaying the clock to the state-holding circuits; in the transistors preventing or enabling the feedback that holds the state; or in the state-holding pair itself. Both the clock path and the data buffers have finite rise/fall times due to intrinsic capacitance, so if the input changes before the clock can enable state-holding via feedback — that is, before the end of the hold time — it can corrupt the state that the clock edge is trying to establish, for example by partially charging or discharging a capacitance below the threshold needed for the feedback to reliably hold the state indefinitely. The hold time varies with the design of the clock, input, and state-holding circuitry, as well as with component characteristics and their variation (manufacturing, temperature, aging, etc.).
The "hold" time ensures that the input data remains valid until all of the internal clock signals have become stable.
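The hold check that static-timing tools perform on the situation described above can be sketched in one inequality. The nanosecond figures below are made up for illustration:

```python
def hold_violation(clk_to_q_min_ns, logic_delay_min_ns, hold_ns, clock_skew_ns=0.0):
    """A hold violation occurs when new data can race through and reach the
    capturing flop before its hold window closes:
        t_cq(min) + t_logic(min) < t_hold + skew."""
    return clk_to_q_min_ns + logic_delay_min_ns < hold_ns + clock_skew_ns

print(hold_violation(0.2, 0.1, 0.5))  # True  -> data arrives too early, state corrupted
print(hold_violation(0.2, 0.5, 0.5))  # False -> hold requirement met
```

Note that adding clock skew tightens the hold requirement, which matches the answers' point that delays in the internal clock path are part of the physical cause.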
How can I realize this nonlinear resistor?

I want to know if it's possible to realize a circuit described by the equation $$I = \sin(w V)$$ where \$I\$ is the current, \$V\$ the potential, and \$w\$ a parameter which characterizes the particular circuit.

Answer 1: One such implementation is described here, regarding the HP3311A function generator.

Answer 2: A bit like a SQUID, which has an (ideally) sinusoidal response. If your function is monotonic you could use some simple nonlinear circuit arrangement. If your allowable values of wV make the current function multivalued, you could use a voltage-driven lookup table (like an arbitrary function generator) and a current source, but of course the resulting current would then depend on the voltage history and initial conditions.

Answer 3: If what you truly want is a resistor, consider using a motor with a lever arm. The motor position (which can be servo-controlled to a number of clockwise turns equal to V·w/(2π)) will then put the end of the lever arm at an excursion which can be linkage-connected so that it drives a variable resistor. The negative-resistance cases will be a bit of a problem, but a current-driven positive or negative resistance is a relatively easy thing to arrange (the resistor can span +1 to -1 volts, driving a voltage-controlled current source). A related problem, building a sinusoidal voltage-controlled oscillator, requires only making the frequency proportional to the input voltage, without holding any absolute phase relationship.
A common way to implement this is to use a piecewise-linear approximation to a sine curve, built using multiple resistors and diodes.
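The piecewise-linear idea can be checked numerically: how many straight segments does a diode/resistor ladder need before the error is negligible? The sketch below approximates a quarter sine with evenly spaced breakpoints (the segment count is an assumption, not from the original answers):

```python
import math

def pwl_sine(x, n_points):
    """Piecewise-linear approximation of sin(x) on [0, pi/2] -- the same idea
    a diode/resistor network uses to bend a straight line into a sine."""
    xs = [(i / (n_points - 1)) * (math.pi / 2) for i in range(n_points)]
    ys = [math.sin(v) for v in xs]
    for i in range(n_points - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside [0, pi/2]")

worst = max(abs(pwl_sine((k / 100) * (math.pi / 2), 8) - math.sin((k / 100) * (math.pi / 2)))
            for k in range(101))
print(worst < 0.01)  # True: 7 segments already track the quarter sine within 1%
```

In hardware, each breakpoint corresponds to one diode turning on and changing the effective divider ratio.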
How to reduce voltage from shift register to ESP32 MISO pin?

In this circuit I have sensors rated for a minimum of 6 V connected to a 74HC165, which passes its output to the MISO pin of an ESP32. I was hoping a simple divider to convert the 6 V from the shift register to 3.3 V for the ESP32 would work ... but no. The ESP32 is unable to process the input consistently, delivering erratic results. Interestingly, it all works perfectly if I remove the resistors for the voltage divider and power everything with 5 V. The sensors can apparently handle the lower voltage fine ... but they need to be operated at 6 V minimum. Should I use a linear regulator or switching regulator instead of a voltage divider? Or is there something else I'm missing? Many thanks.

Answer 1: While the datasheet is not clear on the absolute maximum ratings for the I/Os, I am certain it's 3.6 V. Usually "+0.3 V" means that the inputs are protected with a diode that turns on at Vdd + 0.3 V. So don't tie Qh directly to the port of the ESP32; it could burn out that diode. Source: https://www.espressif.com/sites/default/files/documentation/esp32-wroom-32_datasheet_en.pdf. The resistor divider isn't working because it's not actually dividing the output voltage from Qh; it needs to look like this: [schematic]. The output voltage specs for the SN74HC165 are Voh = 5.99 V and Vol = 0.1 V; after the resistor divider this would be 2.7 V for Voh and 0.045 V for Vol, which is compatible with the ESP32's Vih of 2.475 V and Vil of 0.825 V.

Answer 2: You have to use a logic-level shifter IC. The MOSFET can be a logic-level N-type MOSFET. Image and many other solutions from this link: https://next-hack.com/index.php/2017/09/15/how-to-interface-a-5v-output-to-a-3-3v-input/. As other answers have mentioned, also check the overall design and supply-voltage options against the recommended specifications in the respective datasheets. A particular sample may work, but not always.

Answer 3: Chances are high that, in addition to feeding 6 V into a pin that's not even 5 V tolerant, you're trying to run a 74HC part with woefully inadequate input voltages. The 74HCT line is made to run from 5 V and accept TTL input levels — which, happily, are about the same as what comes out of CMOS driven from 3.3 V. If you absolutely must run your sensor at 6 V, then you'll need proper voltage translation between it (or them) and your 74HCT165. For transitions slower than about 100 Hz (and maybe up to the low kHz), the circuit below should work well — and you can collect Q1, R1 and R2 into a pre-biased transistor for more savings. Choose R3 to be around 1 kΩ to 10 kΩ. Choose R1 and R2 so that, as an unloaded voltage divider, they'd sit at around 1.4 - 2 V when the sensor output is on (i.e., \$\frac{R1}{R1 + R2} \simeq \frac{1.4\mathrm{V}}{V_{out}}\$); that uses the transistor's Vbe to make a really soft threshold at around half the sensor output voltage. Also choose R1 and R2 so that their parallel equivalent is no more than about five times R3 (i.e., \$\frac{R1\,R2}{R1 + R2} \le 5 R3\$); that makes sure the transistor is well into saturation when it's on. Don't sweat getting the numbers for R1 and R2 exact. simulate this circuit – Schematic created using CircuitLab
The easiest solutions: use a proper voltage divider; run your logic at 5 V and use a 74HCT165 (note the 'T' in the part designation); use a dedicated IC that translates the signal from one voltage level to the other; or, if there is only one signal to convert, use a single-transistor level-shifting circuit.
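The divider arithmetic from the first answer is easy to verify. The 12 kΩ/10 kΩ values are a hypothetical pair that reproduces the ~2.7 V figure quoted there; the Vih/Vil thresholds come from the answer:

```python
def divider_out(vin, r_top, r_bottom):
    """Unloaded resistor-divider output: Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return vin * r_bottom / (r_top + r_bottom)

# 5.99 V logic high from the 74HC165, divided down for a 3.3 V ESP32 input
voh = divider_out(5.99, 12e3, 10e3)  # hypothetical 12k/10k divider, ratio 10/22
vih_min = 2.475                      # ESP32 Vih (0.75 * 3.3 V)
v_abs_max = 3.6                      # ESP32 absolute-maximum input voltage
print(vih_min < voh < v_abs_max)     # True: high level lands in the safe, valid window
```

The same check on the low level (0.1 V in, ~0.045 V out) clears the 0.825 V Vil threshold with plenty of margin.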
Interpreting results of a thermal camera - what should I expect to be hot?

I'm not a trained electrician, but I got my hands on a thermal camera to play with. Its documentation didn't quite help me understand what's going on when I see a hotspot. What components can I ignore when they get hot? What components typically heat up on a PCB as soon as power goes on? I know that heat is caused by current, so I kind of expect anything current flows through to be heating up. Should ground leads be hot (in the temperature sense) where they touch your chassis or whatever you're using as a ground? Let me get more specific: a small amplifier (based on the TDA2003) I'm using has a very hot resistor (compared to the rest of the circuit) as soon as power is on, and eventually the main IC and the pieces of PCB nearby heat up too. Is this a case-by-case thing that depends on my circuit diagram, or are there general rules of thumb when looking at thermal images of circuits? EDIT: Attaching some images as requested; the first is the visible circuit, the second is immediately after power-on, the third is after 2 minutes on. Circuit schematic from CanaKit. (Original source of instructions above: Imgur.)

Answer 1: "I know that heat is caused by current." Not really, no. Temperature rises because electrical power is converted to heat; the amount of power converted determines the temperature rise, not the current alone. For example, a current of 0.1 A through a 1 MΩ resistor causes 10 kW of heating, whereas a current of 10 A through a 1 Ω resistor leads to only 0.1 kW. But that's only half the story — heat alone doesn't determine the temperature increase. What counts is how much a specific material heats up for a given amount of heat power (a 1 kg piece of wood needs less heat to get 1 °C hotter than 1 kg of water), and how hard it is for the component to get rid of the heat (add a large heatsink and the temperature doesn't rise as much). "A small amplifier chip I'm using has a very hot resistor... are there general rules of thumb?" That resistor probably drops some voltage intentionally, e.g. to supply the amplifier chip. And yes, it's a case-by-case basis, because the same device (e.g. an amplifier) can be built with architectures of different efficiency, i.e. how much of the input power is converted to heat versus delivered as output power. Generally, power electronics deal with lots of power, so even the 1% waste heat of a 99% efficient device means a lot of heat.

Answer 2: Yes, it's case by case. As a rule of thumb, if something catches fire, desolders itself, or even just smokes, it's too hot. Chips generally shouldn't be over about 55 °C on their surface, at least at normal ambient temperatures. The same goes for resistors, unless they're obviously power resistors chosen to run hot on purpose. Some variation above these temperatures is normal for power supplies, amplifiers, and very high performance digital electronics (GPUs, FPGAs, high-powered processors, etc.), but would be suspicious elsewhere.

Answer 3: The highest temperature your device shows is about 37 °C (set your camera to use Celsius; Fahrenheit is pretty much unused in engineering). That's practically "cold" for electronics. You'll be fine.
So, any component that wastes a lot of electrical power AND has a hard time getting rid of the heat is going to get hot.
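The "power times thermal resistance" reasoning above is exactly how steady-state temperatures are estimated. The 3 W dissipation and 15 °C/W path are assumed example values, not TDA2003 specifications:

```python
def junction_temp(ambient_c, power_w, theta_c_per_w):
    """Steady-state temperature estimate: T = T_ambient + P * R_theta.
    R_theta is the total thermal resistance from the hot spot to ambient."""
    return ambient_c + power_w * theta_c_per_w

# An amplifier dissipating ~3 W through a hypothetical 15 C/W heatsink path
print(junction_temp(25.0, 3.0, 15.0))  # 70.0 C
```

The formula also shows the two levers from the answer: lower the wasted power P, or lower the thermal resistance (bigger heatsink), and the temperature rise shrinks.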
Comparator op-amps and an AND gate

I have two op-amps used as comparators, connected to an AND gate. I would like to change the output of the AND gate by changing the voltage across the 5k resistor with the potentiometer. I increased the potentiometer value while the voltage at the 5k resistor was less than -0.5 V. When the voltage at the 5k exceeds -0.5 V, the AND gate output goes high, so the voltage at the 5k turns positive. I then raise the voltage at the 5k above 0.75 V by decreasing the potentiometer value, but the output cannot go negative and it starts oscillating. What could be the reason for that?

Edit: I tried to create a -1 V to +1 V range on RV2 and R1 with feedback from the AND gate output. Since the output of the AND gate is 0-5 V, a first voltage divider was used to reduce the range to 0-2 V. A 1 V power supply (V3) was used for the -1 V to +1 V range; V3 is for the first test and will be replaced with a voltage-subtractor circuit.

Answer 1: A few pointers. Op-amps are not ideal for use as comparators; the ones you have chosen can only swing to V+ − 1.5 V, which might be OK for your application. Your op-amps' outputs can also swing to -5 V, and I suspect that will destroy the AND gate. The whole circuit is very strange, as you have a potential divider attached to V3 followed by another potential divider, RV2 and R1. Please edit your question to explain what you are trying to make and we can address it further. We'd also be interested to know how you're generating V3.

Answer 2: The outputs of the two LM358s are going to switch to either +5 V or -5 V, because that's the supply you gave them. The 74HC08 cannot accept a negative input voltage: it's a logic IC, and it handles voltages between 0 V and VCC. Depending on the accuracy of the simulation, the results are random; a real circuit would blow up.

Answer 3: If you were to work out the bugs in a circuit like the one pictured above, keep in mind that when you introduce feedback around digital circuits, the delays will cause circuits like this to oscillate. Feedback in non-clocked digital circuits causes oscillation.
A proper comparator is better, as it has been designed for the job and won't latch up like some op-amps do.
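The standard remedy for a comparator that chatters around its threshold (not described in the answers above, but closely related) is hysteresis via positive feedback: the trip point moves with the output state, so a small wobble at the input can't toggle it repeatedly. A sketch, with the +input on a divider between a reference (via r_ref) and the output (via r_fb); all component values are illustrative:

```python
def hysteresis_thresholds(v_ref, r_ref, r_fb, v_out_high, v_out_low):
    """Trip points of an inverting comparator with positive feedback.
    Node voltage of a divider between v_ref and the output:
        V+ = v_ref + (v_out - v_ref) * r_ref / (r_ref + r_fb)."""
    upper = v_ref + (v_out_high - v_ref) * r_ref / (r_ref + r_fb)
    lower = v_ref + (v_out_low - v_ref) * r_ref / (r_ref + r_fb)
    return upper, lower

# 2.5 V reference, 1k/100k feedback network, 0-5 V output swing
up, lo = hysteresis_thresholds(2.5, 1e3, 100e3, 5.0, 0.0)
print(round(up - lo, 3))  # prints the ~50 mV hysteresis window
```

A wider r_ref/r_fb ratio gives a wider dead band and more noise immunity, at the cost of threshold accuracy.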
How to accommodate a range of input voltages

How does a designer generally accommodate a range of input voltages in a circuit? For example, my design needs to accept 12-24 V, and it uses 12 V and 9 V in separate areas as shown below, with a total current of no more than 200 mA. The 12 V supply does not need to be very accurate, as it is used for two things: a power supply for a 4-20 mA sensor transmitter (as far as I know, the sensor has its own regulation to maintain a constant current), and a supply for the 9 V regulator. The 9 V supply needs to be precise, say under ±1%, but a regulator will be able to step down a slightly noisy/inaccurate 12 V supply if needed, so I'm not concerned. simulate this circuit – Schematic created using CircuitLab. Is there an IC that outputs a regulated 12 V over a range of input voltages from 12 V up to 24 V? I've found a buck-boost converter that seems to do the job (TPS55165-Q1). Will it do well in a 12 V-in, 12 V-out scenario?

Answer 1: I don't know about the TPS55165 specifically, but buck-boost controllers are specifically designed to transition smoothly from the input being lower than (or about equal to) the output to the input being higher. One would hope they'd do the job with ease — but then, it's not the easiest job. Scrutinize the datasheet, while reminding yourself that it was written to (a) sell lots of chips while (b) keeping TI from getting sued. So if they say something definitive you can trust it (see b), but if they only seem to say something definitive, be suspicious (see a). And note that I'm not picking on TI; all semiconductor datasheets work pretty much the same way. They do have a "step-down to step-up transition" graph in there that looks pretty good to me, but check for yourself. I am, after all, just some guy on the Internet.

Answer 2: "The 12V supply does not need to be very accurate..." If that rail can be, say, 11 V (10% tolerance), I believe the 4-20 mA sensor can still work. Consider the LMR36006 (4.2 V to 60 V, 0.6 A ultra-small synchronous step-down converter), for example. Its dropout voltage is about a couple of hundred millivolts — measured with the output set to 5 V, but I believe the performance at 12 V will be similar. The circuit is simple and there are multiple buck-regulator options available from many vendors. I would argue against insisting on a buck-boost: isn't it OK to get 11.5 V instead of 12 V when the input is at 12 V? Check once again. For the 9 V rail, you can use a linear regulator to step 12 V down to 9 V for the best noise performance (since you mentioned noise).

Answer 3: You haven't given much of the detail necessary to provide a viable answer. What range of 12 V output can be tolerated? Can the 12 V supply go above 12 V, or is that an absolute upper limit? What range of 9 V output is expected? What is the current range for the 12 V output? For the 9 V output? Are you expecting low-loss (switching) supplies for both outputs, or is linear regulation a possibility for either or both? You give only one current figure, "no more than 200 mA". If all you are driving is a single 4-20 mA loop (at around 12 V) plus the 9 V supply, does that mean the 9 V supply is about 180 mA maximum? If you draw a maximum of 200 mA from a 24 V supply, then linear regulators would dissipate about 3 W combined, whereas switching regulators for both the 12 V and 9 V rails would drop this to a little under 1 W combined. You need to decide what you actually need. In terms of solutions, the biggest challenge is providing a 12 V output when the input is 12 V (its lowest). To do this you would need a buck-boost switching regulator, which would be in boost mode when the input and output are both 12 V; typically you would expect boost mode to kick in as the input gets close to 12 V, since there are internal voltage losses to cope with. simulate this circuit – Schematic created using CircuitLab. The 12 V supply could be a simple TLV431 and a TO-92 transistor, costing about 50 cents; this would be fine if you set the regulated output to about 11.3 V (well within 10%). The 9 V regulator could be an LM2596-based regulator costing about 80 cents.
Since it appears that the majority of your current is consumed by the 9V supply line, the best solution may be to use a linear regulator for the 12V supply and a switching regulator for the 9V supply, running both from the raw input (12-24V).
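The linear-versus-switching dissipation comparison above is a one-line formula each way. The 85% efficiency is an assumed round number for a small buck converter, not a quoted spec:

```python
def linear_dissipation(v_in, v_out, i_load):
    """Power burned in a linear regulator: P = (Vin - Vout) * Iload."""
    return (v_in - v_out) * i_load

def switcher_dissipation(v_out, i_load, efficiency):
    """Loss in a switching regulator delivering Pout at a given efficiency."""
    p_out = v_out * i_load
    return p_out / efficiency - p_out

# Worst case: 24 V in, 0.2 A of load on the 9 V rail
print(round(linear_dissipation(24.0, 9.0, 0.2), 2))       # 3.0 W wasted linearly
print(round(switcher_dissipation(9.0, 0.2, 0.85), 2))     # ~0.32 W at an assumed 85%
```

This is why the answer steers the high-current 9 V rail toward a switcher while the light 12 V rail can stay linear.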
Heat output from a 200W electric radiator? Im looking at installing an electric towel radiator. The heating element is 200W. Dose that mean that it would both : use 200W of electricity provide 200W of heat output to the room ? <Q> Yes, electric heaters are basically a resistor. <S> A resistor converts electric energy into heat, it does that with 100% efficiency. <S> That might sound weird but think about it this way: if a resistor was 90% efficient, where would the 10% "lost" power go? <S> Nearly all not 100% efficient devices lose the wasted energy as heat. <S> Generating heat is the sole purpose of a (resistor) heater. <S> So even if the heater was only 90% effcient, that 10% would still be heat, making the efficiency 100%. <S> So indeed the heater consumes 200 W (when it is in operation) <S> and it will then also emit 200 W of heat. <A> Of course it may only produce 95% of the rated output - nothing is perfect of course and, it may indeed produce 105% of the rated labelled power. <A> Dose that mean that it would both: use 200w of electricity <S> Yes. <S> It would consume 200 W of electrical power. <S> provide 200w of heat output to the room ? <S> It would give of 200 W in the form of heat. <S> 'W' for watt, 'V' for volt, 'A' for ampere. <A> Possibly the most universal law in physics, even before the constant speed of light, is conservation of energy. <S> Energy in + energy stored = energy out. <S> So if there's 100W going in, and there's not a significant amount of energy being stored, then energy in = energy out. <S> Always. <S> Every time. <S> And not just heaters -- lights, refrigerators, light bulbs*, motors, the Starship Enterprise <S> **, etc. <S> If there's 100W going into the room, and it's not coming out in the form of light or radio waves or mechanical energy (or, <S> presumably, subspace beacons in the case of the Enterprise), then it's going to heat up the room. <S> Period. <S> End of story. 
<S> (And, sadly, it's why perpetual motion machines don't work). <S> Which is a really long way of saying that, yes, your 100W heater will consume as much power as <S> it gives you in heat. <S> And if it doesn't quite match its ratings and it consumes more power, then it'll deliver more heat, and vice versa if it consumes less power. <S> * Confusing "100W equivalent" ratings on LED and CFL bulbs notwithstanding -- <S> I'm talking about real energy. <S> ** <S> Although the Starship Enterprise won't fit well into a room that is appropriate for a 100W heater, and it usually comes with a lot of stored energy in the form of antimatter. <A> Both. <S> Heat is the easiest thing to make. <S> In fact it's the ultimate destination of virtually all energy conversion, because of how entropy works. <S> Every watt ends up turning into heat, or rarely, light. <S> For instance a 100W incandescent makes about 98 watts of heat, and 2 watts of luminous light, which turns into heat after it hits wall surfaces. <S> A 15 watt (100W equivalent) <S> LED makes 13 watts of heat and 2 watts of light. <S> One of my running jokes is that some people like to build heaters out of resistors; I like to build them out of Bitcoin miners. <S> The electricity bill and useful heat will be the same either way; only one of them also gives you bitcoin. <S> So whether your towel heater computes Bitcoin or not, the answer is, 200W in, 200W out. <S> It's not going anywhere else. <A> To add to the existing answers, your radiator is likely to have a thermostat to control the radiator temperature (not the room temperature), so the average power may be somewhat less than 200W.
Yes (maybe a couple of watts more for electrical losses in the cabling from the energy meter) and yes (unless there are losses through the wall to the outside world).
DIY USB Hub - Is it possible to just connect ports in parallel? I want to make a device that is similar to a KVM switch. I have a mouse + keyboard connected to my table PC and I want to use them for my notebook as well. I decided to make a USB hub with 2 USB inputs (for mouse and keyboard) and 2x 1 USB for output. Those two outputs will be switchable by a switch. My question is: Can I just connect the two input USBs in parallel to make a hub? Or will it not work and the mouse + keyboard will not be recognized? If it will not work, what do I need to make it work? <Q> Your idea of a conventional switch is likely to struggle but may still work for you. <S> If it does, you may be able to switch a normal hub. <A> Short answer, <S> NO ...you cannot connect USB data lines in parallel under any circumstances. <S> Your situation is however much more complex in trying to share devices between multiple host ports. <S> A KVM does much more than simply switch devices from one computer host to another; it is NOT a hub. <S> It maintains the heartbeat so that the keyboard and mouse are not disconnected from the computer as the switch is activated from port to port. <S> You can see this for yourself if you plug and unplug a mouse from your computer system. <S> If you plug it in, it takes several seconds for the computer OS (via the host interface, in a process called enumeration) to detect the device and load the appropriate driver. <S> If you unplug the device then the driver unloads. <S> A KVM provides enough endpoint functionality to ensure the computer does not unload the driver. <A> Basically it should be possible to switch between host devices. <S> Nevertheless the setup you proposed won't work reliably. <S> What you are trying to emulate is just unplugging a cable and replugging it to a different host. <S> But then you have to emulate the complete behaviour. <S> For the USB hub and attached devices to initialise correctly, power also has to be cut and reconnected.
<S> Additionally, when you look at a USB plug you may notice the power contacts being longer than the data contacts. <S> This is to make sure that the power lines are connected before the data lines are joined, and in reversed order when disconnecting. <S> This could be emulated using a switch with more positions. <S> A 4P5T switch should be optimal: power for host 1 connects at positions 1 & 2, data for host 1 at position 1, power for host 2 at positions 4 & 5, and data for host 2 at position 5. <S> This should emulate the behaviour of a plugging operation.
A hub is an active device; wiring data lines in parallel will not work. Switching between hosts could instead be achieved by using a switch with four contacts.
Is there a way to tell what frequency I need a PWM to be? I have a 12V DC motor that turns at 60 RPM, or 1 Hz. Do I need a PWM that matches the frequency in order for it to work? Most of the ones I've seen are around 25 kHz for a 12V 20A motor. <Q> No. <S> You don't match rotations per second to PWM frequency. <S> It needs to be much higher: high enough that the motor runs smoothly (motor inertia smooths out motion and motor inductance helps smooth out current), but not so high that the switching losses in your electronics are excessive. <S> Often at least 8-10kHz, but you might want >20kHz if audible noise is an issue. <S> You have quite a bit of leeway before it becomes outright "the wrong" frequency. <A> You want the motor inductance to result in reasonably low ripple in current during the PWM cycle. <S> Here is one drive maker's rule of thumb: \$f_{PWM} \ge \frac{0.6\,V_{SUPPLY}}{L_{MOTOR}\cdot I_{NOMINAL}}\$ <S> This formula results in quite a bit of ripple, around 40% peak at the limit and D=50%, so a bit higher frequency might be desirable, especially if the motor will be operated with low duty cycle. <S> In any case, the optimum PWM frequency can be dependent on the motor design. <S> For switching efficiency and low cost you don't want the PWM frequency to be higher than necessary. <S> An unnecessarily high PWM frequency can cause excessive losses in the motor as well as in the driver. <S> On the other hand, if the frequency is too low, the RMS driver and motor current will be excessive and will result in a lot of driver losses (and copper losses in the motor and wiring). <S> There might also be acoustic effects like an annoying whine if the frequency is audible or if it excites vibrational modes that are audible, and at very low frequencies the torque ripple might even be objectionable. <S> For example, pancake motors with very low inductance may require an external series inductor in order to be able to use a reasonable PWM frequency.
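As a sanity check, the drive maker's rule of thumb above can be wrapped in a few lines of Python. The 1 mH winding inductance is an assumption chosen for illustration (loosely matching the question's 12 V, 20 A motor), not measured motor data:

```python
def min_pwm_frequency(v_supply, l_motor, i_nominal):
    """Rule of thumb quoted above: f_PWM >= 0.6 * V_SUPPLY / (L_MOTOR * I_NOMINAL).
    Inputs in volts, henries and amperes; result in hertz."""
    return 0.6 * v_supply / (l_motor * i_nominal)

# Assumed example: 12 V supply, 1 mH winding inductance, 20 A nominal current.
f_min = min_pwm_frequency(12.0, 1e-3, 20.0)
print(f"minimum PWM frequency: {f_min:.0f} Hz")  # 360 Hz at this (assumed) inductance
```

Remember this limit still allows roughly 40% peak ripple at D=50%, so in practice you would pick a frequency comfortably above it, e.g. in the 8-25 kHz range discussed above.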
<S> Despite rules of thumb and so on, you'll likely have to test the motor and driver to get a good estimate of the efficiency. <S> 20 kHz-25 kHz is probably a reasonable starting point for a conventional brushed DC motor with a gear head as I think you are describing. <A> The frequency to choose depends on the applied filter (e.g. an RC filter). <S> Your DC motor probably won't need any filters because its inertia will serve this purpose as long as the PWM frequency is high enough (some kHz should be sufficient). <S> The motor may eventually produce some undesirable noise, in which case you might want to add a capacitor for smoothing. <S> Those 25 kHz you mentioned probably refer to the Intel standard for PC fans. <S> But note: the 25 kHz are not proportional to rpm in any way. <S> When dimming with PWM it's the duty cycle that determines the motor's power consumption. <S> If you want to control the actual rpm, you need to implement a control loop. <S> Unless you use one of those mentioned 4-pin PC fans, setting rpm just with PWM is not possible. <A> Shepro's answer shows a way to estimate the lower limit for \$f_{PWM}\$. <S> There are several aspects which come into play, including regulation error (which is usually compensated for), harmonics (which sometimes have to be filtered out) and increased MOSFET power dissipation (during dead time the motor current flows through the MOSFET body diodes, which have a much higher voltage drop than a fully open MOSFET). <S> As a rule of thumb, you want your PWM period to be about 50 times larger than the dead time, so the dead time which occurs twice during a PWM cycle takes only 4% of the time. <S> Then you will need to program the dead time compensation only if you need good precision, and very simple compensation methods (like adding a constant offset to the duty cycle) will be sufficient. <S> Audible range was already mentioned.
<S> Usually you want \$f_{PWM}\$ to be higher than audible frequencies, to avoid noise. <S> Typically, noise above 16kHz is considered to be faint enough so that most people won't distinguish it, especially behind a normal mechanical noise of a running motor.
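The dead-time rule of thumb above (PWM period about 50 times the dead time) gives an upper frequency limit that is easy to compute; a small sketch, with a 1 µs dead time as an assumed example value:

```python
def max_pwm_frequency(dead_time_s, period_ratio=50):
    """Upper bound from the rule of thumb above: keep the PWM period
    >= period_ratio * dead time, so the two dead-time intervals per
    cycle consume only about 2/period_ratio = 4% of the period."""
    return 1.0 / (period_ratio * dead_time_s)

# Assumed example: 1 us of driver dead time
print(f"{max_pwm_frequency(1e-6):.0f} Hz")  # 20000 Hz, i.e. roughly a 20 kHz ceiling
```

Combined with the lower bound from the inductance rule of thumb, this brackets a sensible operating range for the PWM frequency.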
You want it high enough so the motor runs smoothly (motor inertia smooths out motion and motor inductance helps smooth out current). I'd like to add that the upper limit typically depends on your dead time, which introduces Dead Time Distortion (DTD).
Why does `buck` mean `step-down`? I just read about buck converters and boost converters and buck/boost converters. Great stuff. But, why is a step-down converter called a buck converter? I tried to research this myself. According to Google Book search, the phrase buck-boost transformer was in use at least as early as 1891 in a periodical called Architectural Review. <Q> It's the same sense as to "buck" a trend: to oppose or resist (something that seems oppressive or inevitable): "the shares bucked the market trend". <S> Synonyms: resist, oppose, contradict, defy, fight (against), go against, kick against; "it takes guts to buck the system". <S> So you're "bucking" the input voltage to reduce the output voltage. <A> I may be wrong (apparently there is no way to qualify any answer here as correct), but I had always assumed that "buck" referred to an action similar to a "bucking bronco or bull". <S> A buck converter sends a voltage pulse only as often as it needs to in order to provide the rectified and filtered DC output required, just as a "bucking bronco or bull" will "buck" as often as he feels he needs to in order to eject the rider or loosen the strap. <A> The question probably belongs on english.stackexchange.com . <S> It arises from bucking being an action an animal takes to throw riders off or down, so a buck converter "throws" the voltage down by a repetitive "bucking" mechanism. <S> buck (v.1) of a horse, "make a violent back-arched leap in an effort to throw off a rider," 1848, apparently "jump like a buck," from buck (n.1). <S> Related: Bucked; bucking. <S> Buck up "cheer up" is from 1844, probably from the noun in the "man" sense. <S> (from etymonline.com) <A> Step-down converter is really a subclass of DC-DC converters, while a buck converter is one specific topology ("brand") of step-down converter. <S> In essence, a buck converter is a step-down converter, but not every step-down converter is a buck converter. <S> In theory, anyway.
<S> Let's look at an overview of the various non-isolated DC-DC converter topologies. <S> Step-down: the buck converter. <S> Step-up/down: the inverting buck-boost converter, the SEPIC converter, the Ćuk converter, and the Zeta converter. <S> Step-up: the boost converter. <S> As you can see, the subclasses are step-up, step-down, and step-up/down, and there are some topologies in each subclass. <S> You can also see that there's only one topology in the step-down subclass: the buck converter. <S> So: <S> But, why is a step-down converter called a buck converter? <S> The names refer to different things, but in practice it doesn't matter, so they're used interchangeably. <S> The Texas Instruments book Power Topologies Handbook (by Markus Zehendner and Matthias Ulmann) also has a good overview of the various topologies (also including isolating converters). <S> If you want to know where the name "buck converter" comes from: I don't know, but the other answers try to address that.
Because buck converters are step-down converters, and in practice all step-down converters are buck converters.
How to slow a 12 VDC motor for power windows in a car? I installed power window motors in a classic car and they run way too fast. I cannot change the gearing. Is it possible to reduce the power to slow down the motors? <Q> You can, but it will also reduce torque and could stall the motor if it falls too low. <S> What kind of classic car has power windows anyways? <S> I thought classic cars were...classic? <A> The motor may have a 10:1 ratio (or more) of starting current to moving current, so using any form of series resistance is basically futile. <S> Sooner or later your motor will stall. <S> Since what you want to do is control the speed of the motor, you have to sense either the rotation of the motor or the linear travel of the window movement arms. <S> You have several choices: <S> Use some sort of position sensor (resistor or pulse position encoder) in a closed loop servo. <S> This is probably beyond your requirements. <S> A PWM controller run open loop. <S> This might be your best solution with the most simple implementation. <S> As an example of #2 you could use a simple DC motor controller like this which provides a PWM drive with reverse switches <S> (I assume you want both up and down control of the window). <S> This type of open loop PWM controller can still allow your motor to pull startup current and will give you the ability to effectively dial back the speed, though the speed will NOT be regulated. <S> If you set this to 50% duty cycle it would be almost identical to driving the motor with 6V instead of 12V. <S> The PWM control will probably only work between about 50% and 100% duty cycle as the motor does need to provide the initial torque requirements. <S> The second major advantage, I would assume, is you don't have to build and test a solution. <A> This may be a mechanical issue. <S> It sounds like either the motors are a misfit for the job at hand... <S> Or the linkage is a misfit.
<S> I don't know where you got these motors, but I would hope they are power window motors out of a car, of such an age that they are mechanically powered and not electronically controlled. <S> These motors incorporate the gear-downs you are looking for. <S> Further, many of them don't turn a shaft; they swing a lever. <S> This lever needs to be mechanically linked to your window mechanism. <S> These mechanical issues are show-stoppers . <S> There is nothing you can do with electrical wizardry that can correct for problems here. <A> The challenge with using any sort of a one-quadrant PWM approach is that the motor runs in two directions. <S> So instead of being wired at the motor, the PWM chopper has to be wired somewhere between the master switch panel and the supply fuse. <S> This isn't so easy considering how these switch panels are built up. <S> I consider this impractical, given the idea of not messing about too much with a classic car. <S> Adding a series resistor on each motor would be simpler. <S> How to choose a value? <S> Measure the motor current. <S> If you measure 4A, the motor will present about a 3 ohm nominal load while running. <S> Add a resistor about the same as that value to slow the motor by 50%, so 3 ohms or so. <S> The rating should be about 50W to be safe, as the motor can draw a lot more at the end of its travel. <S> You can get adjustable ceramic power resistors to tweak the current. <S> Example: <S> https://www.mouser.com/ProductDetail/Ohmite/D50K3R0E?qs=sGAEpiMZZMtbXrIkmrvidHIgCu07YBurrqV1pEdYAB8%3D <S> Looks a lot like a rheostat, which is exactly what it is. <S> Old-school motor control. <S> One last thing. <S> Make sure the resistor is thermally insulated from the stuff surrounding it as it can get hot.
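The rheostat sizing arithmetic in the answer above can be sketched in a few lines of Python. It uses the same simplified model (a running motor looks like V/I ohms); the 12 V / 4 A figures are the answer's example, and since real stall currents are much higher, the resistor's wattage must be generously oversized:

```python
def series_resistor_sizing(v_supply, i_running):
    """Pick a series resistor roughly equal to the motor's apparent
    running resistance (V/I) to drop speed by about half, per the
    answer above. Returns (resistor_ohms, watts_dissipated_running)."""
    r_motor = v_supply / i_running            # e.g. 12 V / 4 A = 3 ohm nominal load
    r_series = r_motor                        # equal value -> roughly 50% speed
    i_new = v_supply / (r_motor + r_series)   # running current after adding R
    return r_series, i_new ** 2 * r_series    # I^2 * R heat in the resistor

r, p = series_resistor_sizing(12.0, 4.0)
print(f"use ~{r:.0f} ohm; ~{p:.0f} W while running (rate ~50 W for stall margin)")
```

Note this only estimates the steady running case; at end-of-travel or stall the current and dissipation climb sharply, which is why the answer recommends a 50 W rating for a resistor that dissipates far less in normal operation.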
The current required by the motor will vary over a wide range and reducing the voltage (with some form of regulator) will only work to some extent.
what is the difference between servos & servo motors? I am studying the basics of motors and I came across two terms. Is there any difference between the two terms, servos and servo motors, or are they both exactly the same? <Q> In common usage, “servo” usually means a servo motor, particularly when referring to the low-cost lightweight servo motors originally used in radio control models ("RC servos"), which have de-facto-standard control signals and sizes. <S> More formally, “servo” is an abbreviation of “servomechanism”, which is a control system that uses negative feedback to steer the system to the target accurately. <S> A servomechanism involving a motor does not necessarily seek a target position; for example a cruise control or governor controls speed. <S> Servomechanisms can also control non-mechanical states. <S> A servo motor is not a distinct type of motor in the way “induction”, “brushless” and such are; you can turn any motor into a servo motor by adding a sensor to detect its position or speed, and connecting this combination to a controller. <S> The controller may or may not be considered part of the "servo motor" itself; in particular, in industrial controls the controller is likely to be a separate unit. <S> In most cases, if you see the word “servo” and you're not reading about control systems, it will be referring to a servo motor, but it may occasionally refer to a different control system; use context to determine whether it makes sense that there's a motor involved. <A> The terms are often used interchangeably, but servo motors are a subset of servos. <S> Any control system that has an actuator and a feedback sensor designed to implement closed loop control is a servo. <S> A servo could control rotational position, linear position, temperature, chemical pH, pressure, optical brightness or color, etc. <A> In the radio control world and in industry, "servo" means "servomechanism", meaning some gizmo that gets commands and drives the output to a commanded position.
<S> In the radio control world since about the mid 1970's, the command has been a pulse whose width varies between 1ms and 2ms, and that occurs at a rate of 50-60Hz. <S> In the hobby robotics world, "servo motor" means an RC servo. <S> In industry about 20 years ago, "servo motor" meant just a motor which was optimized for use in a servomechanism -- meaning that low friction, low cogging torque, and general good behavior were more important than a super high power-to-weight (or power-to-size) ratio. <S> It is, most emphatically, a different thing from the "servo motor" you'd buy from Bob's Hobby Robot Factory. <S> More recently in industry, there are "smart servo motors", or more properly "servo motors with integrated drive", which get power and a speed (not position) command, and that command is not in the form of a 1 to 2 millisecond pulse that repeats at 50Hz.
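The RC command signal described above (a 1 ms to 2 ms pulse repeated at 50-60 Hz) is easy to express in code. A minimal sketch; the 180° travel is an assumption for illustration, since actual servo travel varies by model:

```python
def servo_pulse_ms(angle_deg, min_ms=1.0, max_ms=2.0, travel_deg=180.0):
    """Map a commanded angle to the de-facto RC servo pulse width:
    1 ms at one end of travel, 2 ms at the other, repeated at ~50 Hz."""
    if not 0.0 <= angle_deg <= travel_deg:
        raise ValueError("angle outside servo travel")
    return min_ms + (max_ms - min_ms) * angle_deg / travel_deg

print(servo_pulse_ms(0))    # 1.0 ms  (one end stop)
print(servo_pulse_ms(90))   # 1.5 ms  (center)
print(servo_pulse_ms(180))  # 2.0 ms  (other end stop)
```

This linear mapping is all the "protocol" an RC servo needs; the closed-loop position control happens inside the servo itself.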
In common usage, the term “servo” is an abbreviation of “servo motor”.
Why do motor drives have multiple bus capacitors of small value capacitance instead of a single bus capacitor of large value? All professional DC, BLDC or PMSM motor controllers that I have seen ( Sevcon , etc.) have large numbers of DC bus capacitors connected in parallel. Their capacitances range around 100 µF - 220 µF. Wouldn't a single capacitor of a large value, like 4700 µF or 10000 µF, be more convenient? Is it because of the large surge current whenever these controllers are connected to batteries or other high current power sources? <Q> Sure having enough capacitance is one parameter. <S> Capacitors also have series inductance which limits how fast you can get the peak current out. <S> Having multiple smaller capacitors in parallel reduces both series resistance and inductance. <A> Higher ripple current capability, lower ESR and sometimes better form factor (eg. shorter) to fit in a convenient spot in the enclosure are likely reasons. <S> More surface area of the capacitor means more power dissipation capability, all other things being equal. <A> Other answers have already mentioned the main factors which determine that choice: lower total ESR, lower total inductance, better heat handling capability, etc. <S> I'll add one more aspect that has been neglected: reliability . <S> If you have just one big capacitor, once it fails, you are left with a nonworking system. <S> Moreover, a bigger cap can do more damage to nearby components if it fails spectacularly. <S> Having multiple caps in parallel helps mitigate the effects you have when a cap fails open, because the others will still be there. <S> You could even design the system with redundancy in mind, i.e. adding more caps than the minimum you would need given the other constraints. <S> There are also issues with endurance against vibrations (this is particularly relevant when dealing with big motors). <S> A single, big capacitor can be stressed mechanically more heavily when subjected to vibrations. 
<S> The big mass of the cap can resonate mechanically and exert a bigger stress on its terminals or its mounting points, leading to mechanical failure of the cap itself or the PCB it is attached to. <S> Smaller capacitors, since they have less mass, have less inertia, so they experience and cause less mechanical stress due to vibrations or shocks. <S> Therefore it's also easier (and cheaper) to design appropriate strain reliefs to avoid mechanical stress and shocks causing problems. <A> The capacitors help in filtering and decoupling noise. <S> But each single value of capacitor is only good at one particular frequency. <S> It has its lowest ESR (highest ability to mitigate noise) at that frequency. <S> Using a range of values provides that good filtering ability over a wide frequency range. <S> Reduced heating due to ESR: as the ripple currents flow through the capacitors to and fro, the ESR opposes the current flow (similar to a resistor). <S> Higher ESR means higher power dissipation (as heat). <S> This effectively raises the temperature of the capacitors. <S> The higher the temperature, the lower the capacitance they can provide. <S> Hence, low ESR over multiple frequency bands is one desired parameter, which can be achieved more effectively by combining multiple capacitors than with one single big capacitor. <A> This could also be a production optimization thing. <S> If a product already uses 220uF capacitors, using them instead of an additional 4700uF may make sense (though replacing one cap with 20 seems a bit extreme). <S> A 4700uF cap is likely to be through-hole, and if it's the only through-hole component in a product, you save a whole manufacturing step if you can avoid it. <S> Even if it's not, your stock becomes easier to manage because there are fewer part types to order, and you reduce the risk of having to redesign a product because that capacitor model goes out of production.
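The electrical core of these answers (capacitance adds in parallel while ESR and ESL divide) can be shown with a small sketch. The 220 µF / 30 mΩ / 15 nH part values are assumptions for illustration, not from any particular datasheet:

```python
def parallel_bank(n, c_each_f, esr_each_ohm, esl_each_h):
    """N identical capacitors in parallel: total capacitance is n*C,
    while the parasitic ESR and ESL (equivalent series resistance and
    inductance) each divide by n."""
    return n * c_each_f, esr_each_ohm / n, esl_each_h / n

# Assumed example part: 220 uF, 30 mOhm ESR, 15 nH ESL, twenty in parallel
c, esr, esl = parallel_bank(20, 220e-6, 0.030, 15e-9)
print(f"C = {c*1e3:.1f} mF, ESR = {esr*1e3:.2f} mOhm, ESL = {esl*1e9:.2f} nH")
```

Lower ESR means less I²R heating from ripple current, and lower ESL means the bank can supply fast current steps, which is why twenty small caps can beat one large one even at the same total capacitance.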
<A> A single, custom capacitor optimized for the needs of that drive would probably have some advantages, if that was the only product you were building. <S> But if you build dozens of different drives, as all drive manufacturers do, you want to optimize the supply chain across the entire product line. <S> That means standardizing on as few building blocks as possible, and using them in various combinations to get the voltage and capacitance ratings you need. <S> This model needs two caps in parallel, another needs two in series, another needs four, another needs twenty, but you still only have to stock one part. <S> You get economies of scale in purchasing, lower likelihood of running out of a part you need, and lower stocking costs overall. <S> Bonus points if it's the same part a dozen other drive manufacturers are using, since they're probably building exactly the same drive frame sizes you are. <S> Now, if we could just get the power magnetics industry to work this way... <A> I think it is the best option for the manufacturer. <S> After all, whatever costs less will, I think, be the preferred option.
But capacitors have series resistance which limits how much peak current can be drawn from a capacitor.
Can copper pour be used as an alternative to large traces? This is a general question, but if I am designing a PCB that needs a trace width of 110 mil, would I be able to use a copper pour instead of drawing out the large trace? The reason I am asking is for PCBs that have a lot of components that don't allow for large trace widths. Thanks. <Q> The reason I am asking is for PCBs that have a lot of components that don't allow for large trace widths. <S> Yes, you can do that. <S> However, I'd still recommend using a high-width trace where possible, and then just connecting the small components using short, thinner traces. <S> That way, you can guarantee the wide trace for most of the distance, your design rule check has an easier time, and actually laying out the high-current traces means that you can do that before you place all the other traces – which is desirable, because especially in fast-changing high-current signals, you want to avoid large current "detours" and loops. <A> The only difference between a trace and a copper pour is how they're created in your EDA -- a trace is defined explicitly, a copper pour is created implicitly from everything that's left over. <S> Once the board is manufactured, there's no difference. <A> Yes, using filled zones instead of a trace is common practice. <A> Having a copper pour completely cover a trace is a common practice. <S> Using one instead of a trace, however, is probably not a good idea. <S> Ground traces should generally be given routing priority over most other traces, so good ground routing is established long before anything would know what parts of the board will be covered with a copper pour. <S> If one places traces between a bypass cap and the power/ground connections of the appropriate chip, one can ensure that the cap and chip will be well connected.
<S> If one relies upon a ground pour to connect them, one may end up with the cap and chip connected via a long thick "C-shaped" copper pour that requires current from the chip to flow halfway around the board in order to reach the nearby cap. <S> Such a design might pass an automated design rule check, but fail to actually work reliably if manufactured.
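Whether the copper is drawn as a trace or as a pour, what matters for current capacity is the cross-section of the narrowest neck the current must pass through. A rough sketch of the widely used IPC-2221 estimate; the 5 A / 10 °C rise / 1 oz copper figures are assumptions for illustration (they happen to land close to the 110 mil width mentioned in the question):

```python
def ipc2221_width_mil(current_a, temp_rise_c=10.0, copper_oz=1.0, external=True):
    """IPC-2221 rule of thumb: I = k * dT^0.44 * A^0.725, where A is the
    copper cross-section in square mils, k = 0.048 for external layers
    and 0.024 for internal ones; 1 oz copper is about 1.378 mil thick."""
    k = 0.048 if external else 0.024
    area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1.0 / 0.725)
    return area_mil2 / (copper_oz * 1.378)

print(f"{ipc2221_width_mil(5.0):.0f} mil")  # about 109 mil for 5 A, 10 C rise, 1 oz
```

The same formula applies to a pour: as long as every neck of the pour along the current path meets this width, the pour is at least as good as the equivalent trace.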
So long as the copper pour meets the required trace width, it's fine.
Which has less energy loss - AC step up/step down transformers or DC to DC step up/down converters? In the EE field, AC transformers are constantly talked about. However, I don't hear much of DC to DC converter modules, even though DC is widely used today, mostly in smaller electronics. Both of these perform similar operations by converting to a higher or lower voltage, but I am wondering which is better in terms of least amount of energy lost? I would assume that if transformers are not as efficient as these DC converters then AC would be rectified and then converted to higher voltage. <Q> I don't hear much of DC to DC converter modules, even though DC is widely used today, mostly in smaller electronics. <S> You might not have heard much about DC/DC converters because they are often built into the device. <S> The vast majority of mains powered 'smaller electronics' today have switching AC/DC converters to produce low voltage DC from the AC mains. <S> A switching AC/DC converter (also known as a switched-mode power supply or SMPS) consists of a mains voltage rectifier and filter, followed by a DC/DC converter. <S> Many devices also have on-board DC/DC converters to further reduce voltage. <S> Mains frequency transformers are used less to power 'smaller electronics' today because they are more bulky and less efficient, especially once the output has been rectified, smoothed and regulated. <S> Although the power loss in an individual device might be so low that the consumer doesn't consider the cost, there are so many of them that the total loss could be significant. <S> Transformers are used mostly to increase or decrease AC voltages for mains transmission and supply. <S> Large transformers are more efficient than small ones, and cheaper and less complex than using a DC/DC intermediary when the input and output are both AC.
<S> However at the other end of the scale, very long transmission lines and undersea cables have less loss using DC, even after DC/AC conversion at each end. <S> So in deciding which is more efficient you have to consider not just the component that changes the voltage, but what associated losses may occur in other parts of the system. <S> In the case of 'smaller electronics' seemingly insignificant transformer losses could become a big problem when DC conversion losses and power factor are taken into account. <S> Efficiency Standards for External Power Supplies <S> In the early 90’s, it was estimated that there were more than one billion external power supplies active in the United States alone. <S> The efficiency of these power supplies, mainly utilizing linear technology, could be as low as 50% and still draw power when the application was turned off or not even connected to the power supply (referred to as “no-load” condition). <S> Experts calculated that without efforts to increase efficiencies and reduce “no-load” power consumption, external power supplies would account for around 30% of total energy consumption in less than 20 years . <A> It’s important to compare apples to apples. <S> If you’re talking about long-lines distribution, HVDC as a system is more efficient due to the absence of skin effect in the conductor: current flows in the entire cable making the I2R losses much lower. <S> Also, dielectric losses are nullified since DC avoids the charging/discharging of AC, and the cable the can be sized without needing to allow for reactive power and RMS peak. <S> Taken together, these advantages make HVDC long lines about 30-40% more efficient than AC. <S> This increased capacity and efficiency justifies the cost and losses of the HVDC inverters at each end. 
<S> More here: https://sinews.siam.org/Details-Page/direct-current-transmission-and-the-future-of-electricity For medium and low voltage distribution, the simplicity and low cost of AC with transformers wins out. <S> AC is however more convenient, especially for motors and some lighting systems. <S> Even in this realm however, the push for 48V DC data center power would suggest that different approaches using DC-DC are more efficient. <S> More here: <S> https://www.edn.com/design/power-management/4442710/2/48V-direct-conversion-dramatically-improves-data-center-energy-efficiency- <S> That said, even at smaller scale, for managing renewable power generation there is work going on with voltage-source conversion using local HVDC at tens to hundreds of kilovolts. <S> This development, called ‘HVDC Light’, arose from using HVDC for offshore windmill plants where using undersea AC cabling was impractical. <S> More here: https://library.e.abb.com/public/b35718ff8f3fa4c0c1256fda004c8ca2/VSC%20TRANSMISSION%20TECHNOLOGIES.pdf <A> I suppose it mostly matters what you're coming from and what you're going to, i.e. AC/AC, AC/DC, DC/DC, or DC/AC. <S> That said, in a direct comparison between transformers vs. DC/DC converters, I'd wager transformers will have better efficiency. <S> Obviously there are lots of use case considerations such as input and output voltage differential, but generally, it's difficult to get much better than 90% efficiency out of a DC/DC converter. <S> Whereas according to this ... <S> most transformers have full load efficiency between 95% and 98.5%
It’s hard to say if AC is actually more efficient than DC at this scale, given advances in conversion technologies since Edison and Tesla’s era.
8/16/32 Bits of microcontrollers I am new to the field of embedded systems. Recently I was learning about the differences between different bit size (8, 16, 32) microcontrollers. What I found was that the bit size indicates the memory addressing capacity, data bus and address bus size, etc. Every other website has almost the same explanation. But still I could not find answers to some of the questions. To list them: Is it really necessary for the address and data bus to be the same size as the bit size of the micro? What is it that will become fixed for sure for a given bit-sized microcontroller? Does a microcontroller have the same address bus for all the memories (RAM, flash, EEPROM) and if so, is it the choice of the manufacturer to allot any size to any type of memory out of the available addresses? Say there's an 8-bit microcontroller. So it can address 2^8 memory locations (that's what I figured out, I am not sure). If each register is 8 bits it means a total of (2^8)*8 = 2048 bits of memory. That's not even close to the 32kB of flash inside most of them. What blunder am I making? <Q> For example, the MC68000 is generally considered a 32-bit architecture as its registers are all 32 bits wide -- even though it has a 16-bit data bus and a 24-bit address bus. <S> (This means that it must make two memory accesses to write a single register to memory. <S> The top 8 bits of an address register are simply ignored, which is a bit of an oddity.) <S> The sizes of the address and/or data buses connecting the processor to its various memories are often different, either from the size of registers or even from each other. <S> In more complex architectures, it is not uncommon for the same memory to even be accessible over multiple data buses of different widths. <A> For example, in the good old 8051, the data bus is 8 bits and the address bus is 16 bits. <S> '8-bit' in the naming convention of a micro-controller is quite abstract.
<S> Mostly it refers to the size of the registers inside it. <S> All general-purpose registers inside the 8051 are 8-bit registers except PC and DPTR. <S> You have to go through the data sheet. <S> So you are wrong in your assumption in the 4th question, because '8-bit microcontroller' doesn't always mean that it has an 8-bit address/data bus. <S> Need more info to clarify it. <S> Questions 2 and 3 are purely architecture-based. <S> Yes, you can have different buses inside. <S> Big topic. <A> Well, that's a lot to learn; there are whole books on it. <S> You should get one from the library. <S> No, usually the address bus is wider than the data bus. <S> This is very basic knowledge about processors. <S> It depends on the architecture of the processor. <S> Newer ARM RISC processors have many address buses. <S> Learn: von Neumann vs. Harvard architecture. <S> For example, the MCU can have separate internal RAM and EEPROM buses and yet a 3rd external address and data bus. <S> As stated in 2, it can have more internal buses, depending on the architecture. <S> You're wrong. <S> For a 32 kB space you would need an address bus at least (or exactly) 15 bits wide. <A> (3) Does a microcontroller have the same address bus for all the memories...? <S> Not necessarily. <S> Unlike a larger computer system where an operating system can load and execute different programs on-the-fly, the application code in a microcontroller typically is burned into the flash memory at the factory, and then is seldom or never changed after that. <S> There is no need, therefore, for one microcontroller program to be able to treat the instructions of another program like data. <S> A system which, like some of the world's earliest stored-program computers, has separate data paths for fetching instructions and for accessing data can be simpler (cost less/use less power) than a so-called von Neumann machine that stores instructions and data in the same memory space.
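The arithmetic in the question, and the 15-bit figure quoted above, can be checked with a few lines (a minimal sketch; the `address_bits` helper is my own, not from any answer):

```python
import math

def address_bits(n_locations):
    """Minimum address-bus width needed to address n_locations distinct locations."""
    return math.ceil(math.log2(n_locations))

# An 8-bit bus reaches only 2^8 = 256 locations (256 bytes of byte-wide memory),
# which is why an 8-bit *data* bus does not imply an 8-bit *address* bus.
print(address_bits(256))        # 8
# 32 kB of flash needs 15 address bits, as the answer states:
print(address_bits(32 * 1024))  # 15
# The 8051's 16-bit address bus reaches 64 kB:
print(2 ** 16)                  # 65536
```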
The "bit size" of a CPU typically refers to the size of its general-purpose registers (or its primary register, in parts with nonuniform registers). It is not necessary that data bus and address bus widths are same in a micro-controller.
Will a multimeter show a voltage in an electric field? I think that it won’t, because there is no current entering the meter with which it can measure. Correct? Also, is it for the same reason that the meter won’t show a reading even though one probe is in air and one is at a battery terminal? <Q> The meter's input capacitance is also why you can measure frequencies on some meters with no ground connected. <S> However, with DC voltage, the other probe quickly charges up to the DC voltage via the 10 MΩ resistance and the meter shows 0 after the initial spike. <A> If you set up a static electric field (say between two charged plates) and then place multimeter probes into it, the first thing that would happen is that the meter would read a voltage. <S> The second thing that would happen is that the meter would discharge. <S> At this point, there are two interesting things to note: <S> The meter reads zero (OK, this is only interesting in context). <S> The electric field has assumed a different shape, because there is a structure within it (the probe wires and tips) that is all at one electric potential. <A> Working with a buddy on piezo sensors, and experimenting with reducing the 60 Hz trash floor, we found the 3' wire of a 10-megaohm DVM would report 7 volts when hanging near the chassis of a floating, unpowered computer tower. <S> A nearby low-quality switching supply was coupling onto the computer tower (which the DVM measured at over 50 volts).
A multimeter can show a reading when moved through an electric field; however, when it's stationary, it rapidly dissipates the charge through its (generally 10 megaohm) input resistance. Equally, any charge that develops on the probe is largely attenuated by the meter's input capacitance (tens to hundreds of pF).
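The "rapidly dissipates" point can be put in numbers using the meter's RC time constant (a sketch with assumed typical values, not figures for any particular meter):

```python
import math

R = 10e6        # typical DVM input resistance, 10 MΩ
C = 100e-12     # assumed input capacitance, 100 pF (tens to hundreds of pF)

tau = R * C     # RC time constant of the meter input
# Time for a static charge on the probe to decay to 1% of its initial value:
t_1pct = tau * math.log(100)

print(f"tau = {tau*1e3:.1f} ms")              # 1.0 ms
print(f"1% decay after {t_1pct*1e3:.1f} ms")  # ~4.6 ms
```

So any static charge the probe picks up is gone within a few milliseconds, which is why a stationary meter settles to zero.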
Why not nickel-plated edge connectors? The tracks used as an edge connector are gold coated to improve contact with the female connector: bare Cu oxidizes, and also wears out if they are inserted and removed repeatedly. Au cannot be applied directly to Cu, so electroless Ni + immersion Au (ENIG) is used. Well, the question is: since Ni has good resistance to abrasion and oxidation, why are edge tracks never plated with Ni only? I don't think it's because of the lower resistivity of Au: with or without Au there are always the microns of Ni in the middle, practically no resistance. And female connector contacts are also Au coated. Does anybody know? Thank you! I am NOT answering my question. I am writing here just because I needed a place that is common to all contributions. My real need was not exactly about edge connectors, but making printed contacts for a sliding switch, to be used to select tuned circuits for a 'ham' receiver. I aim to make the construction easier for the DIY builder; that's why I wanted to avoid the gold process, not because of cost but because of the added complexity. I am delighted to have been quickly and correctly informed on my very first tour of StackExchange. Thanks to everyone! <Q> And female connector contacts are also Au coated. <S> Not your question, but you want to mate like metals to like metals -- gold to gold, tin to tin, etc. <S> If you don't, then in environments with any moisture you get (possibly slow) galvanic corrosion, and in high-vibration environments you get something called "fretting corrosion". <A> I've wondered the same thing myself on occasion. <S> I'm pretty sure the answer has to do with contact resistance (as opposed to the bulk resistance of the materials). <S> A gold-to-gold contact has lower resistance and better long-term reliability than a nickel-to-nickel contact. <A> I don't think nickel is very good for low-level contacts. <S> Gold is good for both high-level and low-level contacts.
<S> You can perfectly well apply gold to copper; however, the ultra-thin layer of gold will diffuse into the copper and effectively disappear. <A> Contact resistance. <S> Edge connectors aren't typically required to sustain high insertion/withdrawal cycles. <S> Many pro audio grade XLR connectors have heavy silver plating on both M&F. <S> It doesn't last forever in stage and studio applications, and the entire cables are frequently thrown out.
Even a microns-thick layer of gold is significantly softer than the underlying nickel. That's why the nickel layer is sometimes referred to as a "barrier". Yes, it's possible to use nickel as an edge connector contact, we've done it for high voltage/current connector contacts (24~240V).
STM32 product line in a spreadsheet? There is a staggering number of variants of the STM32 family of MCUs. I have been led to believe that a list of the available STM32 microcontroller parts and their various features is available somewhere in spreadsheet form. Is this true, and if so, where can it be found? <Q> It requires a bit of work to produce a usable spreadsheet, because you have to right-click on the table header, scroll down, and check every column you're interested in. <S> Then click on the Excel icon to export it. <S> But if you have this tool, perhaps you don't need the spreadsheet at all. <S> The feature list is available as a .json file as well; it's at ~/.stmcufinder/plugins/mcufinder/mcu/mcusFeaturesAndDescription.json on Linux systems, or in \Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeMX\db\plugins\mcufinder\mcu\mcusFeaturesAndDescription.json on Windows. <S> Recent versions of Excel might be able to import it. <A> The ST website used to contain a product selector which enabled export to a spreadsheet. <S> At present they provide a downloadable MCU finder tool, capable of spreadsheet export. <A> The best source for finding ST's microcontrollers is ST's own website. <S> But STM32CubeMX can also be a nice option to find the right microcontroller. <S> STM32CubeMX provides a list at the beginning of any project. <S> You can select needed peripherals, middleware, etc., <S> so that it filters the MCUs that have what you need.
The STM32CubeMX tool has an MCU selector module, that can export the product list and features in Excel format.
How to decide CPOL and CPHA values in SPI configuration I am working on a slave device for which I have to write a master configuration. In the datasheet of the slave, it is mentioned: "The SDO data changes on the falling edge of the SCLK signals. The devices sample the SDI data on the rising edge of SCLK." Can you please tell me the values of CPOL and CPHA for this slave? Thank you <Q> You must look at the master datasheet and slave datasheet to find matching settings. <S> Different microcontrollers could have different interpretations of these bits, so it is impossible to say how they should be set, as we don't even know what the master is. <A> Follow the image below carefully. <S> Study it several times, until you get the gist of it. <S> (Image source: Wikipedia - Serial Peripheral Interface ) <S> The rule of thumb is as follows: CPOL significance: <S> This defines whether the clock signal will be high (CPOL = 1) or low (CPOL = 0) before the chip select goes low (before beginning the transaction). <S> CPHA significance: It tells whether the data is sampled (by both master and slave) on the first edge of the clock signal or the second edge of the clock signal (soon after the transaction has started). <S> Coming to your question: "The SDO data changes on the falling edge of the SCLK signals. <S> The devices sample the SDI data on the rising edge of SCLK" -- this one in particular: devices sample the SDI data on the rising edge of SCLK <S> indicates that (referring to the nice waveform) <S> if CPHA = \$0\$, CPOL has to be \$0\$ so that the sampling can happen during the rising edge of the signal (RED vertical line), <S> and if CPHA = \$1\$, CPOL has to be \$1\$ so that the sampling can happen during the rising edge of the signal (blue vertical line). <S> So, depending on the options you have in the MCU, you have to choose between Mode 0 and Mode 3. <S> Please share the slave details and we can find it out by looking at the waveforms (timing diagram).
<S> One example: <S> This is an SPI flash from Microchip: <S> The timing diagram has been given for Mode 3. <S> (Notice that the clock is high before the chip select goes low. <S> Also, notice that the sampling is done on the second active edge (rising edge).) <A> This is the first (oldest) answer to this question: <S> Based only on the information you provided (for a slave device), your statements are equivalent to: <S> The setup of SDO (Serial Data Out) or MISO (Master In, Slave Out) occurs on the falling edge of SCK. <S> The sampling of SDI (Serial Data In) or MOSI (Master Out, Slave In) occurs on the rising edge of SCK. <S> This can be either Mode 0 or Mode 3 of SPI. <S> The exact mode depends on the value of SCK in the idle state. <S> Could be: Mode 0: CPOL=0 (SCK=0 in idle), CPHA=0 <S> Mode 3: CPOL=1 (SCK=1 in idle), CPHA=1 <S> Again: based only on the information you provided. <S> The doubt would be removed by knowing the logic state of the SCK pin in the idle state (no transmissions).
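The edge reasoning in both answers can be encoded as a small lookup (a sketch; `candidate_modes` is my own helper, using the usual convention that CPHA=0 samples on the first clock edge and CPHA=1 on the second):

```python
# (CPOL, CPHA) -> conventional SPI mode number
SPI_MODES = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def candidate_modes(sample_edge="rising"):
    """Return the SPI modes whose sampling edge matches the datasheet wording."""
    # With CPOL=0 the first clock edge is rising; with CPOL=1 it is falling.
    modes = []
    for (cpol, cpha), mode in SPI_MODES.items():
        first_edge = "rising" if cpol == 0 else "falling"
        second_edge = "falling" if cpol == 0 else "rising"
        edge = first_edge if cpha == 0 else second_edge
        if edge == sample_edge:
            modes.append(mode)
    return sorted(modes)

# "Samples SDI on the rising edge of SCLK" leaves Mode 0 or Mode 3,
# exactly as both answers conclude:
print(candidate_modes("rising"))   # [0, 3]
```

As the answers note, picking between the two still requires knowing the SCK idle level.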
While many microcontrollers have bits named CPOL and CPHA to change settings, there is no single standard for how to set these bits to get the settings you need.
Why does 50 Ω termination result in less noise than 1 MΩ termination on the scope reading? Regarding the following section from the Keysight document "Making Your Best Power Integrity Measurements" : It says that using 50 Ω termination we will see less noise on the scope compared to 1 MΩ. Can this be explained by modeling what is meant here as an electrical circuit? I'm trying to understand why lower resistance causes less noise on the scope screen. <Q> The first thing to consider is the Johnson noise of the resistor which cannot be eliminated. <S> The higher the resistance the greater the noise. <S> Reducing bandwidth will also reduce Johnson noise. <S> So if your scope has bandwidth settings, and if you don't need high bandwidth for your signal, you can get cleaner results using the reduced bandwidth modes. <S> The second thing to consider is noise which couples in to the oscilloscope, particularly if it couples through the probe wiring arrangement by way of a magnetic field. <S> The time-varying magnetic field will induce a current in the probe. <S> The termination resistance inside the oscilloscope will convert that current to a voltage. <S> If the termination resistor is 50 Ohms, that will lead to a much smaller voltage than if it is 1M Ohm. <S> This is a very important concept when you encounter situations where noise immunity is required. <S> Usually any noise coupling path will have some series resistance or fundamental power limiting just by its nature. <S> So the lower your termination resistance, the lower the voltage due to noise coupling. <S> Sometimes a 20 pF capacitor on a digital input can make the difference between a flaky and totally unreliable piece of junk and a rock solid product. <S> Often if I need to put the oscilloscope on a shunt resistor, I will use the 50 Ohm termination feature of the oscilloscope. 
<S> This greatly reduces noise, and since the shunt resistance is much less than 50 Ohms (for the shunts I deal with) there is no worry of excessive current flowing into the oscilloscope, even if the shunt current may be high. <S> (Image: Johnson noise equivalent circuits, public domain; retrieved from https://upload.wikimedia.org/wikipedia/commons/f/f6/JohnsonNoiseEquivalentCircuits.svg ) <A> Because it takes a lot more induced noise current to produce the same noise voltage across 50 Ohms compared to 1 megaohm. <S> The tradeoff is that it is more difficult to drive. <S> A brick and a piece of paper: which one is more resistant to disturbances and undesired movement in a breeze? <S> Which is easier to move when you actually want to move it? <S> You can't have it both ways. <S> Same idea. <A> Because of Johnson-Nyquist noise. <S> Resistors generate thermal noise, with a noise power that's proportional to the absolute temperature. <S> A higher-valued resistor will then generate more noise voltage, with the voltage proportional to \$\sqrt{R}\$ . <S> Setting your O-scope up as a \$1\mathrm{M}\Omega\$ instrument gets you the noise from that \$1\mathrm{M}\Omega\$ resistor; if you're measuring some super low-impedance node like you'd find in a power supply, you gain absolutely no accuracy from the high impedance. <S> So you measure at \$50\Omega\$ , and get less noise.
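The \$\sqrt{R}\$ scaling is easy to quantify with the Johnson-Nyquist formula \$v_n=\sqrt{4k_BTRB}\$ (a sketch; the 500 MHz bandwidth is an assumed scope bandwidth, not a figure from the Keysight document):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
B = 500e6            # assumed 500 MHz measurement bandwidth

def johnson_noise_vrms(R, T=T, B=B):
    """RMS thermal (Johnson-Nyquist) noise voltage of a resistor R over bandwidth B."""
    return math.sqrt(4 * k_B * T * R * B)

v50 = johnson_noise_vrms(50)
v1M = johnson_noise_vrms(1e6)
print(f"50 ohm : {v50*1e6:.1f} uV rms")
print(f"1 Mohm : {v1M*1e6:.1f} uV rms")
print(f"ratio  : {v1M/v50:.0f}x")   # sqrt(1e6/50) ≈ 141x
```

So a 1 MΩ termination contributes roughly 141 times the thermal noise voltage of a 50 Ω one, before any coupled noise is even considered.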
In general, lower impedance termination is more resistant to noise.
Longer digital trace further from analog vs shorter digital trace closer to analog In the 4-layer PCB below (top layer), what is better in terms of interference with the analog circuitry on the lower left (current sense amp and buffer op-amp): trace 1 or 2? The FPGA does have bypass capacitors on the bottom side. The stackup that I'm currently planning to use is 0.8 mm/4 layer (0.035-0.2-0.0175-0.265-0.0175-0.2-0.035). The traces are from an ADC to an FPGA; the clock frequency is 50 MHz. My understanding is that having ground and power planes underneath the traces would make the high-frequency component of the signals return under the traces. So placing the traces further away from the analog seems like it might reduce the interference; on the other hand, a longer trace means a bigger loop area. <Q> You have not shown the power and ground plane. <S> I assume it to be solid underneath the traces, as I don't see any other components or vias in the region. <S> If there are no more traces to be drawn, go for the one with the least trace length. <S> If the matching is well done, the interference will be quite small. <S> There will be no big loop area even if you go for the lengthier trace. <S> Almost all the high-speed currents will be just below the trace. <S> The only disadvantage is that the signal is close to the edge, which might be susceptible to external noise. <A> Coupling field strength generally falls off as distance squared. <S> If you can make the parasitic loop length increase by less than distance squared (e.g. not a big circle, but linear), then, to first order, moving the undesired coupling loop farther away is likely to be a win. <A> If you want the cleanest, then laminate the analog trace between 2 ground sections, those sections tied together with dozens of vias.
<S> If you cannot do that, then take the hint from a Howard Johnson book, where he states the E-field coupling over a plane to be proportional to 1/Distance^3. <S> As for the magnetic coupling over a plane, I don't know. <A> The return current does spread out on the ground plane beneath the trace, so keeping the traces apart reduces how much of the return currents from both traces overlap each other on the ground plane.
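The trade-off in the answers above (coupling falling off with distance versus loop length growing only linearly) can be illustrated to first order; all dimensions here are purely hypothetical, not taken from the board in the question:

```python
# First-order figure of merit: coupled interference is roughly proportional to
# the exposed loop length and falls off with distance (1/d^2 is the figure one
# answer quotes; 1/d^3 applies to E-field coupling over a plane).

def relative_coupling(loop_length_mm, distance_mm, falloff=2):
    """Dimensionless relative coupling estimate: length / distance^falloff."""
    return loop_length_mm / distance_mm ** falloff

short_close = relative_coupling(loop_length_mm=30, distance_mm=5)   # hypothetical trace 1
long_far    = relative_coupling(loop_length_mm=60, distance_mm=15)  # hypothetical trace 2

print(short_close, long_far)
print("longer-but-farther wins" if long_far < short_close else "shorter wins")
```

With these numbers, doubling the length while tripling the distance still reduces the estimate, matching the "length grows linearly, coupling falls off faster" argument.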
placing the traces further away from the analog seems like it might reduce the interference, on the other hand a longer trace is a bigger loop area
lcd 16x2 replacement and or interchangeability I need to replace a 16x2 LCD on a salt chlorinator display. I searched the part number (GX1602KP) but can't find it. I see a lot of 16x2 LCD displays which look similar - are these LCDs interchangeable? <Q> This Chinese site, found by searching for GZ1602, makes it look similar - according to Google Translate. <S> Brand: GX; Model: GX1602; Type: LCD screen; Screen size: 2 inch <S> The character-type liquid crystal display module is a dot matrix liquid crystal display module specifically for displaying letters, numbers, symbols, and the like. <S> It supports 4-bit and 8-bit data transmission methods. <S> Provides a 5×7 dot matrix + cursor display mode. <S> A display data buffer generator CGRAM is provided, and CGRAM can be used to store font data for up to eight user-defined 5×8 dot matrix graphic characters. <S> Provides a wealth of command settings: clear display; cursor back to origin; display on/off; cursor on/off; display character flicker; cursor shift; display shift. <S> An internal power-on automatic reset circuit is provided. <S> When the external power supply voltage exceeds +4.5V, the module is automatically initialized and set to the default display working state. <S> The display content is 2 lines, each line displays 16 characters, and each character size is a 5×8 dot matrix. <S> The character generator RAM can be customized according to customer needs, with Japanese, Russian and the character sets of 12 different countries. <S> The liquid crystal display module (LCM) has LCD colors of yellow-green, blue, and gray for customers to choose from. <S> The backlight color of the LCD module is yellow-green, orange, white, red, emerald, or blue, which can be selected by customers.
<S> The operating and storage temperature options are: normal temperature (operating 0 to +50 °C, storage -10 to +60 °C); wide temperature (operating -20 to +70 °C, storage -30 to +80 °C); extended temperature (operating -30 to +80 °C, storage -40 to +80 °C), for customers to choose. <A> There are a few types of LCD controller chips, and some have compatible commands and some don't. <S> The command set should be easy to verify with a logic analyzer. <S> Or just take the most common type of controller (HD44780 or compatible, perhaps) and try it. <A> 99% of 16x2 displays have identical interfaces. <S> The only 'variable' is the contrast setting. <S> These are cheap enough to buy and try. <S> Just make sure the physical dimensions match where you have to install it.
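If the replacement does turn out to be HD44780-compatible, bringing it up takes only a handful of command bytes. This is a sketch of the standard HD44780 command values; the bus-level write routine and the required post-command delays are board-specific and omitted here:

```python
# Typical HD44780 initialization for a 16x2 display in 8-bit mode.
INIT_SEQUENCE = [
    0x38,  # Function set: 8-bit interface, 2 display lines, 5x8 font
    0x0C,  # Display control: display on, cursor off, blink off
    0x01,  # Clear display (needs the longest post-command delay, ~1.6 ms)
    0x06,  # Entry mode: increment cursor, no display shift
]

def describe(cmd):
    """Crude decoder keyed on the highest-order set command bit (for sanity checks)."""
    if cmd & 0x20:
        return "function set"
    if cmd & 0x08:
        return "display control"
    if cmd & 0x04:
        return "entry mode"
    if cmd == 0x01:
        return "clear display"
    return "other"

for c in INIT_SEQUENCE:
    print(f"0x{c:02X}: {describe(c)}")
```

Sniffing the original board with a logic analyzer, as one answer suggests, and comparing against bytes like these is a quick way to confirm HD44780 compatibility.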
If the controller or command set of the original display is known, it can be replaced with a display that has a compatible controller with an identical command set.
Single Supply Class-A-Amplifier circuit to amplify voltage signal alternating around ground I have a sensor which outputs a voltage signal alternating a few millivolts around zero volts and want to design a circuit which prepares this signal for processing by a microcontroller, like in the picture below: Now the problem is that I only have a supply voltage range between 0 V and 3.3 V and therefore need to add some DC offset to my signal and then amplify the alternating part. I think that op amps are not suited for my application because they amplify the DC signal as well as the AC signal, and so I thought of a class-A-amplifier circuit like this: simulate this circuit – Schematic created using CircuitLab Now the question: Is it possible with the class-A-amplifier to process my millivolt input signal into the larger one in the picture at the top, or are there any better methods? <Q> Use an opamp circuit, non-inverting config, with a gain of 30, <S> i.e. R2/R1 = 29. <S> Connect R1 to gnd through a large cap. <S> Make a resistive divider across your Vsupply to get the midpoint of 1.65 V. <S> Connect your signal through a sufficiently large cap to the non-inverting pin. <S> Connect the resistive divider centre point also to the inverting terminal through a large resistor (say 1 MΩ). <S> That's your circuit. <S> The choice of opamp and of caps depends on your pulse width and spacing, etc. <S> Also, the opamp power supply should be properly selected for the opamp chosen. <A> Your circuit is OK from the point of view of the input signal polarity, since it adds an offset to the input in order to avoid saturation when \$V_\mathrm{in}\$ goes below zero: note however, from the picture you show, it seems that you need a voltage gain of at least $$A_v=\frac{V_\mathrm{out}}{V_\mathrm{in}}\approx\frac{1.5\mathrm{V}}{50\cdot 10^{-3}\mathrm{V}}$$ <S> which is readily obtainable from the circuit you propose, provided you put some care into its design.
<S> Also you must consider carefully the time duration of the input pulse, in order to design your amplifier with a sufficiently large bandwidth to avoid excessive "linear distortion" in the output signal. <S> I propose a design with an OpAmp where all the parameters above can be changed without too many calculations simulate this circuit – <S> Schematic created using CircuitLab <S> When \$V_{in}=0\$ , this amplifier is characterized by the following offset voltages $$V_+=V_-=\frac{V_{CC}}{2}=1.65\mathrm{V}$$ <S> thus the given negative \$V_\mathrm{in}\$ cannot saturate its input. <S> Every parameter, from the gain \$A_v\$ to the cutoff frequencies \$f_H\$ and \$f_L\$ , can be easily changed by changing the circuit parameters according to the known "calculation rules" for the OpAmp. <S> There is only one particularity: the bias network I've used to bias its \$V_+\$ input is called noiseless because it avoids the injection of (too much of) the shot noise due to the current flowing in the \$R_b\$ resistors. <S> It is the only part I advise you to use, even in your BJT amplifier, if you decide to go for it: \$50\mathrm{mV_{pk}}\$ may not be the lowest voltage, but it is a sufficiently low value to start worrying about the input signal-to-noise ratio. <A> You have some kind of misconception about op-amps. <S> You just have to find the correct op-amp configuration to do it. <S> They can add an output DC offset and be made not to amplify the input DC component. <A> Here are two solutions; the first is a discrete bipolar transistor circuit, the second an op-amp circuit. <S> Calibration and offset drift may be a bother. <S> And the output will not give a solid 0.0 volts. <S> But the op-amp circuits cannot give a solid 0.0 volts output either. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The RC filter isolates the op-amp from the several-volt spikes of the ADC as it takes samples.
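The design numbers used in these answers (non-inverting gain of 30 and the low-frequency cutoff set by the input coupling network) can be checked in a few lines; the component values here are illustrative, not the ones in any answer's schematic:

```python
import math

# Illustrative values: gain = 1 + R2/R1 = 30, as in the first answer.
R1, R2 = 1e3, 29e3
C_in, R_bias = 1e-6, 1e6   # input coupling cap into an assumed 1 MΩ bias resistor

gain = 1 + R2 / R1
v_out_swing = 50e-3 * gain                  # 50 mV peak input -> 1.5 V peak output
f_low = 1 / (2 * math.pi * R_bias * C_in)   # low-frequency cutoff of the input high-pass

print(f"gain = {gain:.0f}")                   # 30
print(f"output swing = {v_out_swing:.2f} V")  # 1.50 V
print(f"f_low = {f_low:.2f} Hz")              # ~0.16 Hz
```

So the proposed gain maps the 50 mV peak sensor signal onto the 1.5 V swing in the question's picture, and a 1 µF/1 MΩ input network passes everything above a fraction of a hertz.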
You could use a PNP emitter follower into an NPN common-emitter stage.
Why would an Intel 8080 chip be destroyed if +12 V is connected before -5 V? The Intel 8080 is a classic microprocessor released in 1974, fabricated using an enhancement-mode NMOS process, and shows various unique characteristics related to this process, such as the requirement of a two-phase clock, and three power rails: -5 V, +5 V, and +12 V. In the description of the power pins from Wikipedia, it says Pin 2: GND (V SS ) - Ground Pin 11: −5 V (V BB ) - The −5 V power supply. This must be the first power source connected and the last disconnected, otherwise the processor will be damaged. Pin 20: +5 V (V CC ) - The +5 V power supply. Pin 28: +12 V (V DD ) - The +12 V power supply. This must be the last connected and first disconnected power source. I cross-referenced the original datasheet, but the information is a bit contradictory. Absolute Maximum: V CC (+5 V), V DD (+12 V) and V SS (GND) with respect to V BB (-5 V): -0.3 V to +20 V. Even if V BB is 0 V when it's unconnected, V DD would be +17 V, and it shouldn't exceed the absolute maximum. Is the original claim on Wikipedia, that an Intel 8080 chip will be destroyed if +12 V is connected before -5 V, correct? If it is correct, what is the exact failure mechanism? Why would the chip be destroyed if +12 V is applied first without -5 V? I suspect it must have something to do with the enhancement-mode NMOS process, but I don't know how semiconductors work. Could you explain how the power supply is implemented internally inside the Intel 8080? Did the problem exist among other chips of the same era built using a similar process? Also, if I need to design a power supply for the Intel 8080, let's say using three voltage regulators, how do I prevent damage to the chip if the +12 V rail ramps up before -5 V? <Q> In NMOS, the substrate must be the most negative point in the entire circuit, in order to make sure that the isolating junctions of other circuit elements are properly reverse-biased.
<S> So, I suspect that the -5V supply, among other things, is tied directly to the substrate, and if the other voltages are supplied without this bias present, there are all kinds of unintended conduction paths through the chip, many of which could lead to latch-up and self-destruction. <S> To answer your last question, if your power supply doesn't have the correct sequencing by design, then you need a separate sequencer — a circuit that itself requires the -5V supply to be present before it allows the other voltages to reach the chip. <S> To echo some of the comments on your question, I don't recall any special care being taken in the actual 8080-based systems of the day. <S> However, such systems were usually built with four power supplies — or more precisely, two pairs of power supplies: ±5V and ±12V <S> (-12V would have been used in any serial interfaces), each driven from a transformer winding and a bridge rectifier. <S> It would have been natural for the 5V supplies to come up before the 12V supplies — and of those two, -5V would be quicker than +5V, being far less heavily loaded. <S> So (again I'm guessing), the power supplies either "just worked" in terms of sequencing, or the danger was not really as severe as the datasheet writers would have you believe. <A> If I need to design a power supply for the Intel 8080, let's say using three voltage regulators, how do I prevent damage to the chip if the +12 V rail ramps up before -5 V? <S> The CPU draws very little current at -5V, so with an oversized filter capacitor it will naturally come up fast and go down slowly. <S> +12V can be made to rise slower by having a lower unregulated voltage which provides less 'headroom', and lower capacitance relative to current draw to make it drop faster. <S> A bleeder resistor will ensure that the voltage drops fast enough even with light loading. <S> I simulated the power supply in the Altair 8800. <S> All supply voltages rose pretty much together within 4 ms of switch-on.
<S> At switch-off the +12V supply dropped first, followed by the +5V supply and then the -5V supply. <S> Here's the first mains cycle at switch-on:- <S> And here's the switch-off after 60 mains cycles:- <S> The Altair's -5V circuit looks like this:- simulate this circuit – <S> Schematic created using CircuitLab <S> The combination of high unregulated DC voltage (relative to 5V), large filter capacitance and light loading gives a fast rise time and a slow fall time. <S> The Altair's +12V supply has a similar circuit, but 12V is not much less than 16V, so the voltage drops below 12V faster (also helped by the higher current draw from the +12V supply). <A> The latter voltage ensured that all of the active devices on the IC remained isolated, by maintaining a reverse bias on the PN junctions that separated them from the common silicon substrate. <S> If any I/O signal went "below" substrate voltage, it could potentially drive the isolating junction into an SCR-like latch-up condition, with the resulting continuous high current potentially destroying the device. <S> The required sequence of turning on and turning off the three power supply voltages was intended to minimize this risk. <S> As a previous answer correctly pointed out, in practice system designers ran fast and loose with this requirement. <S> Basically, the most important thing was to power the rest of the system logic with the same +5 supply that drove the CPU, so that at minimum the voltages applied to CPU input pins would never be greater than the CPU "+5" supply, or lower than the CPU "-5" supply, and to ensure that the "+12" supply was equal to or greater than the "+5" supply at all times. <S> A Schottky power diode sometimes was bridged between those voltages, to maintain that relationship e.g. during power-down. <S> Typically, the electrolytic filter cap values for the three supplies were chosen such that -5 and +12 ramped up fairly quickly, and +5 lagged a bit behind.
<S> MOS process refinements allowed later IC designs to be powered solely by +5, and if a negative substrate voltage was needed it was generated on-chip by a small charge pump circuit. <S> (e.g. 2516 EPROM vs. 2508, 8085 cpu vs. 8080.)
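The cap-sizing trick described in these answers (a lightly loaded -5 V rail with a big filter cap decays slowly; a heavily loaded +12 V rail with a bleeder loses its headroom quickly) can be sketched with the RC discharge formula t = RC·ln(V_start/V_end); all component values below are illustrative, not taken from the Altair schematic:

```python
import math

def decay_time(C, R_load, v_start, v_end):
    """Time for a filter cap C, discharging through R_load, to fall from v_start to v_end."""
    return R_load * C * math.log(v_start / v_end)

# Lightly loaded -5 V rail: large cap, high effective load resistance -> slow decay.
# Here: 4700 uF into an assumed 1 kOhm effective load, 8 V unregulated headroom.
t_neg5 = decay_time(C=4700e-6, R_load=1e3, v_start=8.0, v_end=5.0)

# Heavily loaded +12 V rail with a bleeder: drops out of regulation quickly.
# Here: 1000 uF into an assumed 100 Ohm effective load, only 16 V of headroom.
t_pos12 = decay_time(C=1000e-6, R_load=100, v_start=16.0, v_end=12.0)

print(f"-5 V headroom lasts  ~{t_neg5:.2f} s")
print(f"+12 V headroom lasts ~{t_pos12:.2f} s")
```

With these numbers +12 V collapses two orders of magnitude sooner than -5 V, which is exactly the shutdown ordering the datasheet asks for.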
In the process used for the 8080, +12 provided the primary voltage for the logic, +5 supplied voltage for the I/O pin logic (which was intended to be TTL compatible, thus limited to 0 -> 5 volt signals) and -5 was connected to the substrate. I don't have a complete answer for you, but the 8080 was one of Intel's first chips to use an NMOS process rather than the PMOS process of the 4004, 4040, and 8008 chips. With a little care you should be able to avoid that situation.
Power Supply Earth for Bench Testing I work in a typical electronics lab, with resistive earth mats, ESD straps and isolated DC power supplies to test prototype PCBs. I prefer not to use the power supply earth-ground, and to leave the test setup "floating". Mobile phones and laptops do it just fine, so why can't my PCB, and all its connected equipment? However, my colleague is uncomfortable having things floating and says they need a reference (connect Earth to the 0V rail). 1) What are the pros and cons of connecting the power supply to Earth? 2) The oscilloscope ground pin has continuity to Earth. Why? 3) Any relevant reference materials available? Many thanks! <Q> Ungrounded devices float at some potential. <S> This potential may be set by AC line filtering capacitors, or just by stray capacitance in the mains transformer or a safety capacitor from the primary to the secondary side. <S> Connecting two floating devices together makes their potential difference equal, but there will be a surge of current at the moment of contact to charge/discharge the capacitances. <S> This is fine if connectors have grounds contacting first and they stay connected while data connections mate. <S> But if there is a break in the ground connection due to a bad connection, all that potential difference between data pins may damage chips. <S> This can be witnessed even at home; try taking, for example, a TV and some source device that are both ungrounded. <S> Touching their metal cases can cause a tingling or stinging sensation (leakage current), and sometimes a small spark can be seen when connecting the equipment cases with a wire. <S> This is the reason the manuals say that equipment must be connected to other equipment while the devices are unpowered. <S> Laptops have grounded power supplies, so they are grounded.
<S> 1) <S> Having a grounded/earthed supply means your prototype is always at ground/earth potential, so it is safe to connect to other grounded/earthed equipment like a PC JTAG adapter or an oscilloscope. <S> Imagine that the power supply is driving 2A into a load and the ground wire between power supply and load disconnects. <S> If the load is connected to other equipment like a grounded PC or grounded oscilloscope, the 2A would keep flowing via the earth wiring, the oscilloscope ground lead or the PC USB cable. <S> This is why I keep my lab power supply ground potential floating when I don't need an earth-ground-referenced supply. <S> 2) <S> Oscilloscopes are grounded for many reasons, one of them being safety. <S> Imagine accidentally connecting one scope ground lead to mains live voltage. <S> All metal parts, the other scope channels' grounds and the equipment they are connected to would then become live. <A> The main reasons for connecting electronic devices to earth are safety and fault detection; mobile devices have an internal earth connection too. <S> Let's imagine one of your pieces of electronic equipment gets damaged and the live wire touches the metal case. <S> If your device is connected to earth, which is connected to the case, the fault is a short circuit, so current will flow from the case into earth; it will either trip the circuit breakers or differential relays and you will know a fault happened. <S> But if your device is not connected to earth, the circuit won't be closed and your device's case will be at the same potential as the live wire, <S> but you won't notice it; then one day your hand will touch the case, and if you are close to the neutral wire the circuit will be closed through you, so you will be electrocuted. <S> Also, mobile devices have an internal earth connection: <S> the phone's case is connected to the ground of the battery, so as I said, if any fault happens, current flows from the case into the ground of the battery, and mobile phones have a protection circuit so the device cuts off the power source to prevent any explosion. 
<S> I must add, one of the reasons for connecting to earth is that the system is built like that. <S> Even if you don't connect your devices to earth, the mains transformer is connected to earth, <S> so current can still flow into earth if it finds a way; <S> but if you install an isolation transformer in the lab you don't need earth: as long as live and neutral are not connected together, nothing happens in this system, and you can safely hold the live wire without any protective equipment; <S> but otherwise you need to earth. <A> In addition to very important safety considerations, there is still the issue of static charge. <S> Wind, humidity, clouds, carpets, shoes, clothes, friction, etc. can cause objects, including you, to generate a large voltage. <S> Ground yourself to one floating lab bench and you will transfer your charge to that bench. <S> Carry a sensitive device to another floating bench that has picked up a different charge, and you might violate some voltage spec when you put down the component. <S> Similar to the doorknob spark <S> you can get from shuffling shoes on certain flooring. <S> Ground <S> both benches (and kitchen, garage, etc.) with some bleed resistance to the same earth, and now you can safely move parts across the building with less chance of zapping something.
But there is also a downside to having a ground-referenced power supply, as it can create a ground loop. A mobile phone is not connected to any equipment other than the charger, so having a floating device is not a problem or a hazard by itself.
RMS value and average value for a periodic wave Why are we using the RMS value instead of the average value for a periodic wave? <Q> A resistive heating element gets hot when connected across 220V AC mains, even though the mean (average) of the voltage, which swings between peaks of +311V and -311V, is 0 volts. <S> Many loads don't care that the voltage reverses polarity with each half cycle. <S> The 220VRMS value gives a more practical way to estimate the real power delivered to the load. <S> For non-ohmic (inductive or capacitive) loads, there is a power factor correction, but as long as voltage and current are in phase, VRMS = IRMS * Resistance. <A> Assume you are powering a bulb with an AC RMS voltage of 10 V. <S> It will create the same brightness as 10 V DC. <S> A periodic sine wave, for example, with a peak-to-peak of 28 V will also create the same brightness, but the average value of the signal is zero (no information on the strength is conveyed by the average value). <A> RMS should be used to represent a signal having zero average (due to the symmetry of the signal between the positive and the negative side). <S> Periodic waves like sine waves usually have zero average. <S> The average value gives the DC content of the signal.
RMS value is used to represent the strength of the AC signal or equivalent of DC signal.
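The answers above can be checked numerically. This short sketch (the function names are mine, not from the answers) samples one full cycle of a 220 V RMS mains sine wave and shows that the mean is zero while the RMS recovers the DC-equivalent voltage:

```python
import math

def mean(samples):
    return sum(samples) / len(samples)

def rms(samples):
    """Root mean square: square root of the mean of the squares."""
    return math.sqrt(mean([x * x for x in samples]))

# One full cycle of 220 V RMS mains: peak = 220 * sqrt(2) ≈ 311 V
peak = 220 * math.sqrt(2)
n = 10000
wave = [peak * math.sin(2 * math.pi * k / n) for k in range(n)]

print(abs(round(mean(wave), 6)))  # 0.0 — the average says nothing about power
print(round(rms(wave)))           # 220 — matches the equivalent DC voltage
```

The same symmetry argument holds for any zero-average periodic wave: squaring before averaging keeps the negative half-cycles from cancelling the positive ones.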
Design of 50 ohms RF trace for 2.4GHz...Double layer FR-4 PCB I will use a 2.4GHz transceiver on my new project. The PCB material will be FR-4 with 1.6mm thickness and the connector is an SMA. My doubt is about the RF trace that should have 50 ohms impedance. Using AppCAD 4.0, inputting the parameters shown below I got a 50 ohm result for Width = 45mils and Gap = 8 mils from RF trace to GND. Also I got almost the same result on the online calculator. Does this combination (45/8 mils) look correct to you? What more can I do to improve my layout? Regards. transparent view: edit: this is my final layout... edit: newer... <Q> Your calculations check out for the given values, but keep in mind that the dielectric constant of FR-4 is not tightly controlled, and may vary between 4.35 and 4.7 between manufacturers [1]. <S> Since your trace length is very short, this variation will not have a big effect (you can try the values in the calculator). <S> For more demanding applications, special high-frequency PCB materials (for example: Rogers RO4000 [2]) are available, however they are much more expensive to produce. <S> By having a solid ground connection, you reduce the parasitic inductance in the return current path, which will improve your signal integrity. <S> If you use a coplanar waveguide, the copper pours below and on the sides of the conductor must be strongly referenced to each other. <S> This means putting vias to 'stitch' the top and bottom planes together, along both sides of the conductor, to surround it with the ground connection. <S> This is discussed in [3]. <S> The recommended stitching distance between vias should be at most λ/4, with λ/10 as an optimum. <S> For 2.4GHz this results in a via distance of maximum 3.12cm, with 1.25cm recommended. <S> So, for longer trace lengths and higher frequencies stitching becomes more important than in this case with a very short trace length. 
<S> [1] https://en.wikipedia.org/wiki/FR-4 <S> see: <S> dielectric constant (permittivity) <S> [2] <S> https://www.rogerscorp.com/documents/726/acs/RO4000-LaminatesData-sheet.pdf <S> [3] Choose the size of via for shielding and stitching <A> For this short a distance (under 1/8th of a wavelength) <S> the impedance requirements get a lot looser, so on that premise it's more than suitable, and it lines up with my own calculator. <S> As to the layout, I cannot particularly fault it: you're keeping good separation between it and other nearby signals, you have vias right next to the signal ground so the return current on the plane on the opposite side does not have a large detour, <S> and you have well and truly shotgun-blasted your board with ground-plane vias. <S> The only thing I struggle with is spotting where the decoupling capacitor is; for this, the decoupling cap should be as close to the pins as you can manage, ideally on the same side as the chip, with its traces on the same side of the board. <S> If it's the pair on the center left, I would at minimum spin around the bottom one, and possibly shift those a bit to make their connections as short as possible to the chip. <A> To what others have said, I'll add: <S> You probably don't want to let the ground fill in between the pads of your DC-blocking capacitor. <S> This will probably lead to excess capacitance to ground, and degrade the return loss of your RF input. <S> You may want to move the RF connector a bit further away, so that the blocking capacitor doesn't have to be directly underneath it. <S> You need quite a bit of space around the ground legs of the connector to allow for selective wave solder, or for a big fat iron to reach in there (more so now that you've removed the thermal relief).
It can be beneficial to disable the thermals around the GND-pin holes of the RF connector.
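The λ/4 maximum and λ/10 recommended stitching distances quoted in the answer follow directly from the free-space wavelength at 2.4 GHz. A quick sketch (the function name is mine; `eps_eff` lets you check the more conservative in-dielectric figure as well):

```python
C = 3.0e8  # speed of light, m/s

def stitch_pitch_cm(freq_hz, fraction, eps_eff=1.0):
    """Via stitching pitch, in cm, as a fraction of the wavelength.

    eps_eff = 1.0 reproduces the free-space figures quoted above;
    plugging in the board's effective permittivity instead gives a
    shorter, more conservative in-dielectric wavelength.
    """
    wavelength_m = C / (freq_hz * eps_eff ** 0.5)
    return wavelength_m * fraction * 100

print(round(stitch_pitch_cm(2.4e9, 1 / 4), 2))   # 3.12 — maximum (λ/4)
print(round(stitch_pitch_cm(2.4e9, 1 / 10), 2))  # 1.25 — recommended (λ/10)
```

With FR-4's effective permittivity the in-dielectric wavelength shrinks by roughly 1/√εeff, so the stitched pitch only gets safer if you use the free-space numbers.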
Should I get rid of my bias towards active high? I realise that in all my circuits I consider active-high the "natural" default and active-low a goofy situation that needs to be inverted. I pull down all data and control lines (resets, output enable, input enable, inputs for an ALU or adder...), put inverters all over the place, and generally experience a lot of annoyance over every active-low pin. Can you enlighten me as to what the advantages of an active-low architecture might be, or why I shouldn't care? It seems that wildly mixing active-high and active-low, based on the ICs' demands, can be a nightmare to interpret, document and debug. <Q> Active low does have one significant advantage: the active level is always the same, even in systems where components run on different supply voltages or can be turned off completely (power gating). <S> So for something like a reset line, it can make a lot of sense as you can hold components in reset while the power supplies settle and <S> whatnot by simply pulling it to ground, which is usually the same voltage level for all components in a circuit. <S> Presence-detect pins also make sense to be active low - the device or connector only needs to short the pin to a ground pin, there are no dependencies on power supplies, <S> so there are no issues with sequencing or gating. <S> Active low can also give an indication of a missing transmitter or bad connection, as in the case of most UARTs. <S> Another consideration is the reset state of a pin. <S> For example, it's rather common for FPGAs to have internal weak pull-ups active on all IO pins before they are configured. <S> Therefore, it may make sense to make some external signals active low so they are not considered asserted before the FPGA is configured. <S> Or maybe you want the signal asserted, in which case make it active high. <S> On the other hand, inside of a CMOS circuit, the logic level in most cases doesn't really make much difference. 
<S> Long wires can be split by periodically inserting inverters, and you just need to make sure <S> an even number are added or the inversion is taken into account by changing the destination logic. <S> Additionally, it's common to build "inverted" versions of basic blocks such as full adders so you can chain adder with inverted outputs to adder with inverted inputs and reduce the number of transistors you need (this is because the core of a full adder is inverting, so a "normal" full adder includes at least one inverter). <S> In many cases, the tools can do this automatically as an optimization step. <A> Active low signals were very often used in TTL circuits for noise immunity (apart from the drive capability as mentioned by Neil_UK). <S> The input of a TTL gate is guaranteed to recognise any level of 0.8V or lower as a low, and any input of 2V or higher as a high. <S> If the signal is taken true for only a short time relative to the false state (common in many memory systems) then the idle state ( false ) will have Vcc - 2V of noise immunity if the signal is active low (which was 3V in 5V circuits, 1.3V in 3.3V circuits) as opposed to 0.8V of immunity in the low state. <S> This assumes that the idle state is at the rails (a reasonable assumption for a single gate load). <S> So idling in the high state gave better noise immunity and this remains true for devices that have 'TTL compatible' inputs. <S> For the situation where the logic is difficult to understand, I suggest using assertion level logic notation which then makes the intent of the circuit quite clear. <A> In the bad old days of TTL, with its strong sink capability and weak source capability, and open collector being commonly used to OR signals, active low was a natural. <S> These days, with CMOS having more or less symmetrical drive capacity, it makes sense to use whichever you find easier to handle. <S> I personally can do logic sums in my head more easily in active high than active low. 
<S> Some ICs will have active low inputs or outputs. <S> Then it makes sense to 'ride the horse in the direction it's going'. <A> Others have mentioned the skewed output drive strength of old TTL bipolar logic gates, where pulling current down to ground is stronger than supplying current pulling up to Vcc. <S> For high-speed CMOS, drive strength is better-balanced: pull-up is nearly as strong as pull-down. <S> For the 74HC04 basic inverter, pull-down is marginally stronger than pull-up. <S> This trend often persists for other complementary-MOS gates and processors. <S> No big difference but every little bit can help.... <S> take for example driving a blue LED with a microcontroller <S> I/O pin (running from a +3.6V supply). <S> This is a marginal design where LED series resistance is small. <S> I'd choose pull-down for that little bit of extra headroom. <S> Another point favours pull-down. <S> Good designers have a nag in their head whispering "ground integrity". <S> So ground is almost always more solidly established than the DC supply in a complex system. <S> Bus distribution is most often ground-referenced, for example.
Where logic levels must have good noise immunity, pull-down is preferred.
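The TTL noise-immunity arithmetic from the answers can be made explicit. The thresholds below (VIH = 2 V, VIL = 0.8 V) are the standard TTL-compatible values cited there; the function names are mine:

```python
def idle_high_margin(vcc, vih=2.0):
    """Noise margin when an active-low line idles at Vcc (TTL-compatible VIH)."""
    return vcc - vih

def idle_low_margin(vil=0.8):
    """Noise margin when an active-high line idles at ground (TTL-compatible VIL)."""
    return vil

print(round(idle_high_margin(5.0), 2))  # 3.0 V of margin in a 5 V system
print(round(idle_high_margin(3.3), 2))  # 1.3 V in a 3.3 V system
print(round(idle_low_margin(), 2))      # only 0.8 V when idling low
```

This is why an active-low signal that idles high near the rail enjoys several times the noise margin of one idling near ground, as the answer argues.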
Should I connect the computer VBUS line to the 5V line of the board? I am doing a project that needs USB communication with my microcontroller. On the board there is already a 5V power supply. I would like to know if it is correct to connect the VBUS line from the computer with the 5V line that already exists on my board (put L4). Or should I leave VBUS floating? (remove L4). I have no experience with USB development; I know the need for U10, but how I should feed U10 pin 5 is still unclear to me. <Q> It will cause an unpredictable conflict between your on-board power source and the USB host-supplied VBUS. <S> However, your device must have circuitry (typically a GPIO input with proper level translation and additional ESD protection) to sense the VBUS presence. <S> This function is defined in Section 7.1.5.1 of the USB 2.0 Specifications, <S> The voltage source on the pull-up resistor must be derived from or controlled by the power supplied on the USB cable such that when VBUS is removed, the pull-up resistor does not supply current on the data line to which it is attached. <S> and further explained in Section 7.2.1, p.171: <S> They [devices] may not provide power to the pull-up resistor on D+/D- unless VBUS is present (see Section 7.1.5). <S> When VBUS is removed, the device must remove power from the D+/D- pull-up resistor within 10 seconds. <S> If you have a concern about pin 5 on ESD protector U10, then it would be best to keep this pin at 3.3V from some internal rail; it will provide a somewhat better level of protection. <S> There are more answers on this topic, like here. <A> Don't connect USB Vbus to your 5 V net. <S> Use VBUS for pin 5 of the ESD protection, and you can use it to detect whether a USB connection has been made. <A> Don't short circuit 5V from the PC to 5V from the regulator. <S> You might need the PC 5V to detect when it is connected though.
If your board has its own power, the VBUS from the upstream connector should NOT be used as a power source and should NOT be connected to the +5V rail.
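One common way to implement the VBUS-presence sensing mentioned in the answers is a simple resistive divider into a GPIO. The resistor values below are purely illustrative (they are not from the original schematic); any ratio that keeps the sensed voltage below the GPIO rail while still reading as a logic high works:

```python
def divider_out(vin, r_top, r_bottom):
    """Output of a resistive divider (here: sensing 5 V VBUS on a 3.3 V GPIO)."""
    return vin * r_bottom / (r_top + r_bottom)

# Illustrative values, not from the original post: divide 5 V VBUS
# down below the 3.3 V GPIO rail while still reading as a logic high.
v_sense = divider_out(5.0, r_top=22e3, r_bottom=36e3)
print(round(v_sense, 2))  # 3.1 — safe for a 3.3 V input, detected as high
```

High-valued resistors keep the load on VBUS negligible, and the divider draws nothing when the cable is unplugged.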
Importance of electrolytic capacitor size A large aluminum filter capacitor about 1" X 3" can, from a 1972 power supply, needs to be replaced. It is 4000uF/50V/85 degrees. Any reason why it can't be replaced with a much smaller 4700uF/50V/85 degrees electrolytic cap from Amazon? I don't understand the difference, except for physical size and price. <Q> It probably could, but the devil is in the details. <S> Ripple current, loss etc.... <S> What is key to appreciate is that the present cap is from the 1970's. <S> Technology has moved on from there, not only in dielectric but also in manufacturing techniques. <S> Comparing the two data sheets is the best advice. <A> Adding to the answers: The cost of an electrolytic capacitor is nothing compared to any impact of a failed power supply, especially in the fields of medical, manufacturing, military or other important usage. <S> But even when used only for private purposes, the time and effort to change it again after it fails a 2nd time is not worth choosing cheap caps. <S> There are many low-quality caps sold via the internet that do not even match the ratings printed on the casing. <S> The capacitance could be less, the maximum voltage and temperature could be lower. <S> As stated already in other answers, it is worth purchasing a high-quality cap with higher ratings both for voltage and temperature (in this case e.g. 80V and 105°C), and the ESR and other data should be compared via data sheets. <S> Since old caps anyway had a high tolerance of e.g. -20%/+50%, a higher capacitance value should be no problem provided the ESR does not become too low. <S> So 4700uF or 5600uF as a replacement for a 4000uF should be fine. <S> Normally there is much heat produced in old power supplies since they work via series regulation. <S> The caps should not be close to or touching heat sinks. <S> If the supply is open anyway, these points could be checked as well: <S> Are all diodes/bridge rectifiers feeding that cap ok? 
<S> Sometimes a broken cap is the result of bad diodes. <S> All the screws pressing the power elements (e.g. 2N3055 transistors) to the heat sink(s) should be re-fastened. <S> In many cases those screws are not tight enough anymore after half a century. <S> Do any resistors (or other elements) show burnt casings? <S> Any burnt/discolored areas on the PCBs? <S> If the power supply was broken and not used for a long time, the first test should be via feeding from a variac after repair. <A> Adding to JonRB's answer: <S> Capacitors in the 70's were huge in comparison to current caps. <S> Advances in manufacturing led to a massive size shrink along with other benefits. <S> Some effects of these improvements are contrary to each other. <S> I'll try to give some thoughts on it. <S> Smaller caps have lower ESR. <S> This is something which could be problematic within an intricate design, but the probability that a 70's supply critically depends on a high ESR is rather low. <S> You can figure out from the schematic if a low ESR is acceptable. <S> The inrush current might grow too high with a low ESR, <S> but on the other hand smaller caps might improve convection inside the housing. <S> You could choose a capacitor with slightly higher voltage and temperature ratings to improve lifetime, because the ratings of parts from the 70's were a lot more conservative due to the greater variations in manufacturing processes.
The voltage should be slowly raised in order to enable all electrolytic caps to rebuild their oxide layers without high surge currents - also the new cap could have been stored for a long time. Smaller caps have a smaller surface area, making it more difficult to dissipate thermal power
Measuring the external memory power consumption in FPGAs? I am trying to get a power/energy breakdown of DDR3 and core logic. I used the Quartus power analyzer tool to get the power estimates, but I am not sure whether it includes the power consumption of external memory like DDR3, HBM. In general, how do we measure/model the power consumption of the external memory access in Intel FPGAs? <Q> The power analyser tool estimates the power consumption of the FPGA alone, including what it spends driving all the connected peripherals. <S> To calculate the power consumption of the off-chip peripherals, datasheets are really your friends. <S> Consider all static power consumptions. <S> Calculate dynamic power consumption due to switching loads ( \$n \times V^2 \times f \times C\$ ). <S> How exactly to calculate depends on the application, components and usage. <S> For example, if there is an external flash, you can calculate the current needed for a simple read, fast read, idle condition and write at the particular planned clock frequency, <S> and then estimate what percentage of the time each task will happen over a period of operation. <S> The current consumption can then be scaled accordingly. <S> Also, consider quiescent current for devices which are not switched during operation, and other loads such as toggling LEDs, <S> RF transmission, etc. <S> All theory can be well verified with practical current measurement. <S> Consider datasheet maximums for the worst case. <A> You measure the power using appropriate measuring instruments to monitor the actual current consumed and the actual voltage. <S> You model the power consumption by adding the estimated power consumption of all of the components of the system. <S> It's as simple...and as difficult...as that. <A> I haven't seen a power estimation tool for memories. <S> This is the method I use: <S> The memory datasheet will provide the current consumption for each of the different types of cycle at a specific rate. <S> (e.g. write cycle, read cycle, refresh cycle, burst read, idle, etc). 
<S> You have to define what your usage pattern for the memory will be in terms of the rates for each of those cycles. <S> Then it is a relatively straightforward exercise with a spreadsheet to sum up all those power consumptions.
For each component you use the datasheet to estimate the power consumed for your specific usage scenario.
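The spreadsheet method described above can be sketched in a few lines: weight each cycle type's datasheet current by the fraction of time it occurs, then multiply the average current by the rail voltage. The currents and fractions below are placeholders, not values from any specific DDR3 datasheet:

```python
def average_power(v_supply, cycle_profile):
    """Average power from a usage profile: {cycle_type: (current_A, time_fraction)}."""
    total_fraction = sum(frac for _, frac in cycle_profile.values())
    assert abs(total_fraction - 1.0) < 1e-9, "time fractions must sum to 1"
    i_avg = sum(amps * frac for amps, frac in cycle_profile.values())
    return v_supply * i_avg

# Placeholder numbers for illustration only — take real ones from the
# memory datasheet's IDD specifications for your planned clock rate.
profile = {
    "burst_read":  (0.250, 0.30),
    "burst_write": (0.230, 0.20),
    "refresh":     (0.180, 0.05),
    "idle":        (0.060, 0.45),
}
print(round(average_power(1.5, profile), 3))  # average watts on a 1.5 V DDR3 rail
```

Splitting the profile per rail (VDD, VDDQ, termination) and adding the quiescent terms the answer mentions gives the full external-memory estimate.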
Should I stop charging my laptop battery at 50%? I know that there are lots of questions about battery health and charging, but mine is very specific. For a lithium-ion battery, leaving it at full charge (or low charge) damages the battery capacity. I am able on my laptop to stop the charging of the battery at a certain percentage between 50% and 100% in software. This is accomplished with Dell's own software, Power Manager. The question is therefore: would it be beneficial for my laptop's battery capacity health to limit the charging to say 50%, when my laptop is plugged in for extended periods of time? <Q> would it be beneficial for my laptops battery capacity health to limit the charging to say 50%, when my laptop is plugged in for extended periods of time? <S> Yes it would; most manufacturers of Li-Ion based batteries charge them to a charge level of around 40% to have the lowest stress level in the battery and extend their storage lifetime. <S> (Actually the batteries are not charged at the factory but assembled such that they have a 40% charge level; the balance of the chemicals is made such that this happens) <S> So 40% (50% is close enough) could indeed result in a longer lifetime of the battery when you're not using it. <S> But as soon as you start using the battery, the charge level drops, so the battery could end up having a much lower charge level for a while, which causes more stress on the battery, making it wear out more quickly. <S> In the end it will be a compromise: do you want to have the maximum battery charge available to you and accept that the battery will wear out sooner, or will you accept a smaller battery charge but have a battery that will wear out less quickly? <A> For instance, if you normally use 40% of battery charge during the day, and you want the day's average battery level to be 50%, then you might want to charge the battery up to 70%. 
<S> Your daily use will then typically drain it down to 30%, which isn't so low that it affects battery longevity. <S> Or if you store the device unused for long enough that the battery discharges 10%, then you might want to charge up to 55% (and let it discharge to 45%, averaging 50% over the storage time.) <A> Adding to the other answers: <S> If you don't want to use the battery, 40% should be optimal. <S> But if you want to use it, you have to develop a different strategy. <S> Suppose you find out that your battery is usually discharged by 40% when you are on the road or wherever you aren't connected to the grid. <S> Then you could find a trade-off between losing capacity by letting the battery run down too far and ageing it by charging it higher. <S> And if you keep your laptop e.g. 90% of the time connected to the grid, the latter should be the greater problem. <S> Then you should choose a lower maximum charge. <S> If you are on the road all the time, it is more favourable to prevent the battery charge from falling below a certain level most of the time.
You also need to take into account how much the battery will discharge when idle and/or in use.
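The rule of thumb in the answers — start high enough that the average over the usage period lands at the target — is simple arithmetic, assuming a roughly linear discharge (the function name is mine):

```python
def charge_target(desired_average, expected_drain):
    """Charge level to start at so the mean over the usage period equals
    desired_average, assuming a roughly linear discharge."""
    return desired_average + expected_drain / 2

# Reproduces the answers' examples: 40% daily drain -> charge to 70%,
# 10% self-discharge in storage -> charge to 55%.
print(round(charge_target(0.50, 0.40), 2))  # 0.7  -> charge to 70%
print(round(charge_target(0.50, 0.10), 2))  # 0.55 -> charge to 55%
```

Plug in your own typical drain to pick a sensible software charge limit.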
Rapidly driving a keyboard signal My goal is to build something that simulates rapidly (50 Hz to 1 kHz) pressing a key on a USB keyboard. My first inclination was to drive a transistor from a 555 timer with the two keyboard matrix pins on the collector and emitter; unfortunately the potential between them is only 1.5 VDC and the 2N3906 I have in my kit doesn't seem to work. Is this really a transistor problem or something else? Also, I'm not looking for a keyboard emulator, I want something to test off-the-shelf keyboards. Any other ideas are welcome. UPDATE/EDIT: So I went out and got an NTE133; I was told it was the equivalent of the BS138. The drain-to-source resistance was about 7 Mohm and even when the 555 was off, the signal from the keyboard went through. Did I get a dud? Schematic below (2 Hz), picture below. I'm doing destructive testing on keyboards to measure timing; that's why I don't want to use Arduinos to simulate one. simulate this circuit – Schematic created using CircuitLab <Q> Realistically the only way to achieve the maximum rate is to simulate it in the keyboard implementation directly, alternating the state on each poll by the USB host. <S> Likely the limit is going to be 500 Hz, i.e., changing state on each millisecond poll, but perhaps there is a way to send multiple key events per poll <S> ; you will have to study the HID keyboard spec. <S> Thus to really achieve your goal, you want to find an MCU eval board with native USB and a USB keyboard example project, and "hack" it to do this. <S> An Arduino Leonardo, various "Teensy" and some MBED boards may have this capability. <S> Electrically "pressing" keys on an existing keyboard implementation would have to be a lot slower (or else queued by exfiltrating a timing trigger somehow) to avoid problems like the reported state changing far more slowly than intended due to mistimed matchups between your drive interval and the keyboard's scan interval. <S> You may be able to do faster still by faking things in software from the hosting OS driver side... 
<A> That said, the keyboard scanning probably has an upper limit to how quickly it will accept key presses. <S> Try 10Hz or less. <A> I got it working by outputting the 555 timer through the 1k resistor to the higher voltage side of the matrix and putting the lower voltage side to ground, now the 555 acts as a sink. <S> No transistor needed. <S> Thanks for all the input!
A logic-level FET like a BSS138 would work for this.
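If the 555 route is kept, the press rate follows from the standard astable formula f = 1.44 / ((R1 + 2·R2)·C). The component values below are hypothetical, chosen only to show a sweep spanning the 50 Hz to 1 kHz goal; they are not the values from the original schematic:

```python
def astable_freq(r1_ohms, r2_ohms, c_farads):
    """Approximate output frequency of a 555 timer in astable mode."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

C_TIMING = 100e-9  # hypothetical 100 nF timing capacitor
# Sweeping R2 from 6.8k to 140k covers roughly 1 kHz down to ~50 Hz
for r2 in (6.8e3, 68e3, 140e3):
    print(round(astable_freq(1e3, r2, C_TIMING)), "Hz")
```

A potentiometer in the R2 position would let the press rate be dialed across the whole test range, though as the answer notes, the keyboard's own scan rate will cap what actually registers.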
Confusion in understanding control systems? I am learning the basics of control systems from the following link http://instrumentationandcontrollers.blogspot.com/2010/11/types-of-control-systems.html It says that an automatic washing machine & traffic signal system are examples of open-loop systems. It does not give details why, despite the fact that "automatic" appears before washing machine. Also, please find attached a photo of a common door closer. What type of control system is it using: open loop or closed loop? <Q> An open-loop system usually has a timer which instructs the system to switch on the furnace for some time and then switch it off. <S> The washing machine uses a timer to turn washing and drying on and off without measuring how washed or dried the clothes are. <S> The traffic light uses timers to switch lights without measuring how many cars are actually on the road. <A> Because it has no feedback. <S> The washing machine just performs a programmed sequence of operations; this is what is called automatic. <S> Manual operation would be, for example, that you determine each step of the sequence and its duration. <S> Also, the traffic light control is just an automatic sequence. <S> Closed loop is, for example, a temperature control, where you measure the temperature and take action based on that. <A> It's worth first explaining what an open-loop control system is. <S> This is a control system where you essentially blindly set a value. <S> This may work well, based on experience. <S> You may for instance know that a sausage is warm after 10 minutes in a warm water bath, without actually checking the internal temperature of the sausage. <S> For some processes, this is OK, because it's either cheap to overrun a little bit, or cheap to underrun. <S> An example of this would be my home ventilation; I don't check humidity, temperature or any other factors; it gets turned on at 09:00, and off at 22:00, no matter what. 
<S> The cost of running it is tiny, and the cost of sensors and programming high. <S> This is also used in most washing machines; they run based on a time, not any measurement of the process value, which is how dirty the clothes are. <S> In short: an open-loop system does not look at the system to determine where the process is. <S> It blindly sets the set point. <S> Imagine a sled, on a rail, which accepts commands to go left or right for some distance, e.g. go 10 cm left . <S> If you are at the leftmost position, and send this command, it will lead to a crash. <S> In such a situation, an open-loop system will not suffice. <S> A closed loop, on the other hand, has feedback. <S> The Process Value (PV) is fed back into the controller, modifying the set point. <S> Different types of controllers exist; for analogue values, a PID controller is commonly used. <S> But let's continue with the previous example of the sled on rails. <S> Now the controller knows where the sled is, so the set point to the controller can be changed from go left by 10 cm to go to five centimeters right of the left end stop , and the controller will know which direction it should drive the sled, and for how far. <S> Another example is the simple thermostat on an oven. <S> The oven (actuator) heats up the room, and the process value is sensed by the thermostat, which turns off the actuator when the setpoint matches the process value. <S> Control engineering is a big topic, and this is a very brief introduction.
Both the automatic washing machine and the traffic signal system are open-loop for the same reason the website mentions for the home heating system.
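The contrast in the answers can be sketched as two tiny controllers: the open-loop washer acts on a timer and never consults a measurement, while the closed-loop thermostat decides from the process value (the function names and numbers are mine, purely illustrative):

```python
def open_loop_washer(minutes_elapsed, programmed_minutes=30):
    """Open loop: runs a fixed program regardless of how clean the clothes are."""
    return "run" if minutes_elapsed < programmed_minutes else "stop"

def closed_loop_thermostat(temperature, setpoint=180.0, hysteresis=5.0):
    """Closed loop (bang-bang): feedback from the measured PV decides the action."""
    if temperature < setpoint - hysteresis:
        return "heat on"
    if temperature > setpoint + hysteresis:
        return "heat off"
    return "hold"

print(open_loop_washer(10))           # "run"     — no measurement consulted
print(closed_loop_thermostat(170.0))  # "heat on" — reacts to the measured PV
print(closed_loop_thermostat(190.0))  # "heat off"
```

The door closer in the question can be analysed the same way: ask whether anything in it measures the door's position and feeds that back into the closing force.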
MOSFET broke after attaching capacitor bank I have a solenoid that has a coil resistance of \$0.3\Omega\$ and accelerates a steel projectile here. I've posted the schematics below. Normal version that acts as a control: The GPIO8 goes to 5V to switch on the MOSFET and turns it off when the projectile is detected with the optical sensor. And it works just fine. Next, I tried it with 10 supercapacitors that are connected in series. I charged it up to 27 volts. Version #1: When I powered up the circuit, there was a spark when I connected the capacitor ground to the MOSFET's ground. The gate-source circuit should have been open because when I first connected it, GPIO8 was at 0 V. After some troubleshooting, I found that I had killed the MOSFET. I believe there are 2 possibilities at play. First, it is possible that the parasitic capacitance on the MOSFET may have caused an oscillation and thus a voltage spike. I added R2 to increase the fall time slightly and thus reduce the charge. See the video here (skip to 4:00). Not only is the parasitic capacitance causing an oscillation, but another factor is that I actually have an RLC circuit here. My load is a solenoid and my power source is my supercapacitors. Thus I added D2 so that it doesn't start cycling back and forth. I also replaced the MOSFET with a new one. Version #2: And yet the same thing happened; GPIO8 was at 0 V before I connected the capacitor but the MOSFET completed the circuit anyway and broke, this time caught on camera. So that's where I'm at now. My capacitor is charged to 27 V and since I've added the components to get rid of oscillations, I can't think of anything else. According to the datasheet, the breakdown voltage of the IRF3205 is 55 V and I'm well below that. Any bright ideas? <Q> Your gate drive voltage is too low. <S> That MOSFET needs 10V to turn on completely. <S> 5V just barely clears the 4V threshold where the MOSFET just barely starts to conduct. 
<S> DO NOT use the Vgsth if you intend to use your MOSFET as a switch. <S> That is the voltage it just barely starts to conduct at. <S> Use a Vgs at least as high as the one used to obtain the given RDSon. <S> The Vgsth is for using the MOSFET as a linear/analog device. <S> According to Figure 1 in the datasheet, with 5V across the gate-source and 27V across the drain-source (I'm ignoring the solenoid resistance since it drops relatively little voltage), the MOSFET saturates at 10A. <S> That's 270W being dissipated in your MOSFET. <S> And Figure 1 is at 25C. <S> Your MOSFET is heating up while it does all this, which makes it operate more like in Figure 2, where even more current is conducted. <S> In this case it is saturating at 30A with a 27V drop, which is ~800W of heat being dissipated. <S> With a listed junction-to-ambient thermal resistance of 62 C/W, that's a temperature rise of 17,000 and 50,000 Celsius, respectively. <S> Also, look up gate drivers and consider whether you need one for your MOSFET, or if directly driving the gate capacitance from a piddly low-current I/O pin is sufficient for your application. <A> I'd wager that the problem isn't an oscillation; it's just the initial inrush of current into the MOSFET that's killing it. <S> When you connect your super-caps to the circuit, it will charge up the parasitic MOSFET capacitors \$\mathrm{C_{oss}}\$ and \$\mathrm{C_{rss}}\$ . <S> According to the datasheet , \$\mathrm{C_{oss}}\$ is only about 781pF and \$\mathrm{C_{rss}}\$ is only about 211pF when \$\mathrm{V_{ds}}\$ is 25V, but per Figure 5 of the datasheet those values are much higher when \$\mathrm{V_{ds}}\$ is at a lower voltage. <S> So, I believe the failure sequence is as follows: <S> Initially there is no voltage across the MOSFET, so the parasitic capacitance values are a few nanofarads. <S> You apply 27V, with a series resistance of a mere 0.3Ω (plus whatever inductance that solenoid has; we don't know that number).
<S> Quite a few amperes flow into that MOSFET to charge up those parasitic capacitors. <S> It's for a very short time, but it's a very high peak current value! <S> ... <S> MOSFET blows up due to high surge current. <S> Remedies: <S> EDIT <S> Another failure mode just occurred to me: similar to before, but let's just worry about the gate-to-drain capacitance (still a few nanofarads). <S> You apply 27V instantaneously, so a bunch of charge easily flows through that parasitic gate-to-drain capacitor \$\mathrm{C_{gd}}\$ . <S> The current through that gate-to-drain capacitor is easily enough to introduce a large voltage across that 20k resistor that was holding the voltage low. <S> MOSFET turns on, blows up due to high surge current. <S> This second hypothesis is probably the more likely one. <S> As DKNguyen points out, your circuit as constructed will likely blow up the MOSFET even in normal operation. <S> As before, the best solution is to find a way to limit the peak current. <A> The GPIO is probably too high impedance. <S> You want to include a proper gate drive chip running off 12-15V. <S> You can just use a linear regulator off your 27V bus. <S> R2 is only hurting you by making your gate drive impedance higher in this case. <S> I suggest dropping the value to 10 ohms. <S> If possible, start your tests at 1V and work your way up, making sure everything is okay. <S> You will save a lot of silicon this way. <S> And please put balancing resistors across your supercaps. <S> I don't know what the leakage of your caps is, but I would guess that 1k in parallel with each cap would be on the safer side if you want to charge them to max voltage. <A> At the risk of sounding flippant at your expense, there is an old joke about a patient seeing a doctor: <S> Patient: "Doctor, it hurts when I do this." <S> Doctor: "Well, then don't do it." <S> In this case, substitute "connect the ground last" for "do this". <S> Don't do it.
<S> Always keep the grounds tied together. <S> If you must connect two systems while they are operational, always connect the ground first, then power, then the control lines - and make sure <S> the control lines are protected so that applying power when they are floating will not give you problems. <A> If you are interested in an inrush limiter circuit, Texas Instruments makes one that has an evaluation module on Mouser here . <S> The datasheet for the TPS2491 takes into account (funny enough) power limiting <S> the series pass MOSFET (to ensure just this thing doesn't happen). <S> I am not sure if this will be practical for your design or not, but it's easy enough to try and to at least get an a-ha moment to understand what is happening to the MOSFET in your circuit. <S> Good luck!
You probably aren't driving the gate hard enough. Slowly bring \$\mathrm{V_{ds}}\$ up to 27V before applying your super-cap, and/or, Add some series resistance to limit the maximum possible current out of your super-cap. As to your specific failure mode, Mr Snrub is probably correct, although the inductance of the coil really ought to act as an inrush limiter.
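The dissipation arithmetic in the first answer is easy to reproduce. A quick sketch (the saturation currents and the 62 C/W junction-to-ambient figure are the values quoted in the answer above, not independent measurements):

```python
# Power burned in the MOSFET when it saturates with only 5 V on the gate,
# and the (absurd) steady-state temperature rise that would imply.
R_THETA_JA = 62.0  # junction-to-ambient thermal resistance, C/W (IRF3205 datasheet)

def dissipation(v_ds, i_sat):
    """Return (power in W, temperature rise in C) for a saturated MOSFET."""
    p = v_ds * i_sat
    return p, p * R_THETA_JA

p_cold, rise_cold = dissipation(27, 10)  # Figure 1, 25 C curve
p_hot, rise_hot = dissipation(27, 30)    # Figure 2, heated junction

print(p_cold, rise_cold)  # 270 W -> 16740 C rise (the answer's "~17,000")
print(p_hot, rise_hot)    # 810 W -> 50220 C rise (the answer's "~50,000")
```

The point of the arithmetic: any temperature rise remotely near these numbers means the die is destroyed almost instantly, which is why the gate must be driven hard enough to keep the MOSFET out of this saturated region.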
Eagle UI: separating hundreds of components that are stacked on top of each other I finished a complicated schematic in the free version of Eagle, switched over to the board editor and was greeted by this: In case the image doesn't say it all, there are about 100 components all stacked on top of each other with the same origin. Separating them manually will take ages; is there a way I can fix this automatically so I can place my components? <Q> Select the move tool and type the component designator you wish to place. <S> E.g.: click on move, then type D5 , and you're holding D5 ready to place somewhere. <S> Autoplacers are a feature of more expensive tools. <A> In case of repeating schematic blocks: backup your current schematic. <S> Delete all except one instance, update the board editor, place components in the desired layout. Copy/paste the first instance in the schematic editor, update the board editor, place the newly added components in the desired layout (matching the layout of the first instance if desired). <S> Repeat step 3. <S> If possible (the editor I use has it; I think all editors have it, but I haven't used Eagle for years), you can group components, so, in the board editor, group the components after step 2. <S> Next, after updating the board editor in step 3, put the newly added components on top of their equivalent components of the first instance. <S> Move away the original group. <S> Maybe you can also copy/paste the first instance in the board editor as well <S> (after copy/paste in the schematic editor) and then update and back-annotate. <S> However, my experience is that that is a terrible way. <S> In case of functional, non-repeating blocks (and if Eagle allows it; my editor does): backup your current schematic.
<S> open a new schematic, copy/paste a functional schematic block from the original schematic to the new schematic, update the board editor, place the newly added components in the desired layout, repeat steps 3 and 4 <A> There are some ULP scripts that might be useful to you. <S> Take a look at 5) autoplace_v3.ulp by David Moodie: crude autoplace ULP, adaptation of Cadsoft original IIRC. <S> v4 compatible, creates grouping based on SCH and handles multiple sheets. <S> Uploaded by David Moodie from OptoSci Ltd. <S> and 7) place50.zip by Matthias Weingart: most useful for analog designs. <S> Uploaded by Matthias Weingart from IngBuero fuer wiss. Geraeteentw. - Solutions for embedded electronics. <S> for example.
Simple Autoplacer, run this ulp in the schematic; and exec the resulting script in the new created PCB - it will place all parts of the board to the position in the schematic.
Which one of these DS18B20 temperature sensors is fake? A while back I bought two DS18B20 for half price, and today I bought another one for double the price. The part number on the cheaper one is printed while on the expensive one it's laser etched: On the left is the expensive one. Even though the price was double, the readings that I'm getting using an Arduino are the same: Sensor1 is the expensive one. While the temperature on the mercury thermometer was 23.9-ish degrees. So what's the difference? Why is the build quality and price different but the temperature readings are the same? <Q> After purchasing many DS18B20 sensors and probes on eBay I can assure you that price is a useless indicator for quality. <S> Let me address the question in the title of this thread: Which one of these DS18B20 temperature sensors is fake? <S> Fortunately, you can find out for yourself: <S> After looking at some hundreds of DS18B20 and comparing them to chips that are known to be produced by Maxim Integrated, I concluded that the manufacturers of counterfeit DS18B20 do not bother to conceal the fact that they're counterfeit. <S> They just try to make them look and act authentic enough to be able to sell them. <S> Based on that conjecture, try this: <S> Does the part have a ROM code that matches the pattern 28-xx-xx-xx-xx-00-00-xx? <S> If not, then it is not Maxim-produced. <S> (Manufacturers of counterfeit parts seem to go to some length to make sure the ROM codes of their chips do not collide with the ROM codes of Maxim's or other producers' chips.) <S> In the scratchpad register, look at byte 6 (reserved; value not specified in current datasheets. <S> It reads 0x0c right after power-up and before the first temperature conversion -- if it doesn't even read 0x0c right after power-up then your part is definitely a fake).
<S> If this byte does not change following 44h temperature conversion commands even though the temperature reading changes then your DS18B20 is not produced by Maxim. <S> (Background: Byte 6 is used in the DS18S20 to get 12-bit temperature resolution, and the DS18B20 and DS18S20 share the same circuit to the maximum extent possible, see detailed description in Maxim Application Note 4377 .) <S> There are many other ways to tell, e.g. you can look for implementation bugs, implementation detail <S> (e.g. the amount of time it takes to actually perform a temperature conversion), or response to undocumented function codes. <S> I've summarized a bunch of differences I came across here: https://github.com/cpetrich/counterfeit_DS18B20 <S> Alternatively, Ask Maxim tech support if the combination of date code and batch code printed on the chip exists in their database. <A> There isn't any good way to know. <S> You'd have to check product change notices to confirm. <S> The price difference could simply be because "each vendor charged the highest amount they could get away with". <S> (The fact that you bought the parts indicates that this strategy worked.) <S> Again from Elliot Alderson, the only way you can really trust a part is if you trust the supply chain . <S> Otherwise the part could be counterfeit, or even if it's not counterfeit it could have been subjected to poor handling practices or similar. <S> It sounds like the supply chain wasn't trusted in either of these cases, so each part should only be trusted to the degree that you have tested it. <S> Here is a nice presentation from Xtreme Semiconductor, a company that specializes in detecting counterfeit parts . <S> Slides 20-31 show some examples of the tricks people pull when making counterfeit parts. <S> Some are easy to detect with the unaided eye, others require advanced tricks like X-ray or opening up the part, as shown in the same slide set. <S> Probably not something you're looking to do here. 
<S> tl;dr Just because the parts look different and were priced differently doesn't necessarily mean that one is good and/or one is fake. <S> Put the part through rigorous testing, or just buy a new part from a trusted source. <A> Thanks to Chris for his testing and counterfeit detection sketches! <S> Maybe the attached graph of the readings from four sensors I purchased from a Chinese supplier on eBay, plus one genuine Maxim DS18B20 (bought from a reputable distributor), will answer the question. <S> All five sensors are at the same temperature. <S> It's pretty easy to see which is the genuine part. <S> You can make your own minds up whether you can rely on the readings from the counterfeits. <A> As of June 2020, you can know with good accuracy. <S> The accepted answer is not completely correct anymore. <S> All the information is provided in https://github.com/cpetrich/counterfeit_DS18B20 which also links to Arduino sketches which can tell you, based on the expected data returned by the chip, whether your sensor is original or not. <S> The clones do not fully respect the datasheet. <S> Basically, for the easiest check, if the ROM does not follow the pattern 28-xx-xx-xx-xx-00-00-xx then the DS18B20 sensor is a clone. <S> Using the ROM code you can also find information about the issues with that specific clone. <S> Most if not all DS18B20s not bought from Digikey or equivalent suppliers are clones.
You cannot prove that a part is authentic, but if it behaves differently from an authentic, Maxim-produced DS18B20 then it is certainly a fake. As Elliot Alderson rightly points out, both could be legitimate, or both could be fake. If the packaging and marking are different between the two parts, that could simply be because the manufacturer changed their packaging process.
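The ROM-code pattern test described above is trivial to automate. A minimal sketch (in Python rather than an Arduino sketch, for illustration; byte 0 is assumed to be the 0x28 family code, as in the standard 1-Wire ROM layout):

```python
def looks_maxim_produced(rom):
    """Necessary (but not sufficient) check: genuine Maxim DS18B20 ROM codes
    match the pattern 28-xx-xx-xx-xx-00-00-xx described above."""
    return len(rom) == 8 and rom[0] == 0x28 and rom[5] == 0x00 and rom[6] == 0x00

# A ROM code matching the Maxim pattern (example bytes, not a real device):
print(looks_maxim_produced([0x28, 0xAA, 0x01, 0x02, 0x03, 0x00, 0x00, 0x5C]))  # True
# A ROM code with non-zero bytes 5 and 6, as seen on clones:
print(looks_maxim_produced([0x28, 0xFF, 0x64, 0x1D, 0x8F, 0xA0, 0x16, 0x04]))  # False
```

Remember the asymmetry: failing this test means the part is certainly not Maxim-produced, while passing it only means the clone manufacturer copied the pattern.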
Running an electric motor beyond its ratings I ordered a 36V 500W brushed DC motor but unfortunately a 24V one came. I'm faced with the dilemma of returning it and waiting for a month before getting a refund, then ordering a new one and waiting about a month again for it to arrive, totaling 3 months of waiting (with the current one), OR running it beyond its ratings. I understand that increasing the voltage will increase the rotation speed proportionally. It is currently rated at 2700 rpm @ 24V and ~27A drawn. If I increase the voltage to 36V then the rpm will rise to about 4000 and to maintain the 500W power output I will need to supply ~14A. Questions that arise are: Are my assumptions correct in the first place? Is it possible for construction damage to occur when running a motor beyond its rated rotation speed? Is efficiency compromised in such a setup? <Q> It is probably no issue to run a motor beyond its voltage rating. <S> Electrically speaking, you might exceed the breakdown voltage between two adjacent copper wires in the armature, but it is quite unlikely at this low voltage. <S> Mechanically speaking, as voltage is proportional to rotational speed, a higher speed burdens the bearings more. <S> Running a motor beyond its current rating will heat up the motor. <S> You can temporarily run it beyond its current rating provided you don't exceed the thermal limits. <S> If the motor becomes too hot, the insulation of the copper wires of the armature degrades or even melts, causing shorts between the windings, which in turn increases the motor current, causing a thermal runaway. <S> The point when this happens depends on the cooling of the motor housing, ambient temperature, etc., so it is not easy to predict. <S> You cannot do simple power calculations with motors: when the motor is running with no load, the current will be low, so the input / output power will be < 500W.
<S> When stalling the motor, the motor voltage will be close to zero and the current high, but still the power will be way lower than 500W. <S> Read this answer to see how the power and efficiency are related to the torque and only have a local maximum. <S> To make useful assumptions for a motor, you need its torque vs speed / current / power / efficiency graphs (as in the linked answer above) AND/OR its physical constants (speed constant, torque constant, etc.) <A> I think you have 2 options: lower your supply voltage to 24V, or return the motor and get the one you want. <S> A motor has voltage and speed ratings for a purpose. <S> One limits current to prevent the wire from overheating and melting the insulation, and the speed limit keeps you within the limitations of the bearings. <A> If you apply 36V instead of 24V it will draw more current and you will probably destroy the motor.
Continuously running a motor at higher speed will not directly damage the bearings, but the bearings will wear out faster.
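As a sanity check of the numbers in the question, the ideal-motor scaling (speed proportional to voltage) works out as follows. This is only a sketch of the asker's own arithmetic, not a real motor model, which the answers above rightly warn against:

```python
# Ideal brushed DC motor assumption: speed scales linearly with voltage.
RATED_V, RATED_RPM, TARGET_POWER = 24, 2700, 500  # ratings from the question

new_v = 36
new_rpm = RATED_RPM * new_v / RATED_V   # speed at the higher voltage
i_for_target = TARGET_POWER / new_v     # current needed for 500 W at 36 V

print(new_rpm)        # 4050.0 rpm, the question's "about 4000"
print(i_for_target)   # ~13.9 A, the question's "~14A"
```

Note that this only says what current would correspond to 500 W at 36 V; as the first answer explains, the actual current is set by the load torque, not by a power target.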
Confusion about voltage at negative terminal of capacitor I have some confusion about the negative terminal of a capacitor. Consider the following: Now, my understanding was that positive charges accumulate at the positive ("top") plate of the capacitor, setting up an electric field within the capacitor which causes negative charges to accumulate at the bottom plate. Does that mean that the bottom plate of the capacitor will be at -5V eventually (at steady-state)? Or does it mean it will still be 0V? The circuit simulation tool I ran this with says it is 5V. Can someone explain what the negative plate voltage is? And also what happens if there was no ground but another circuit component there? <Q> The negative plate of the capacitor is connected to ground. <S> Therefore, if you ask for the voltage at that single point (rather than explicitly with respect to some other point) then the answer must be 0V. <S> This point is always at 0V, by definition, because it is connected to ground. <S> The amount of charge exiting from the negative plate is exactly equal to the amount of charge that enters the positive plate, so the entire capacitor structure remains charge neutral. <S> As the voltage increases across the capacitor, the voltage across the resistor decreases, which means that the current must also decrease. <S> Given enough time, the voltage on the capacitor rises to be the same as the supply voltage. <S> At that point the voltage across the resistor falls to zero and current flow also falls to zero. <A> If the bottom plate is not grounded, then you must clarify what the "reference" point is. <S> For example, the (+) plate may have, say, +3V with respect to the (-) plate. <S> That would be the same as saying the (-) plate has -3V with respect to the (+) one. <S> So, what would be the voltage at the (-) plate with respect to ground? <S> To answer that, it would be necessary to know the circuit connected to that plate.
<A> Let's go back and try to understand what voltage is. <S> Let's say you have a 1.5 volt battery. <S> What is the voltage difference between the positive and negative terminals - 1.5V of course. <S> Now what is the voltage of the negative terminal with the battery floating in space, or the positive terminal? <S> We don't know. <S> Voltage is a potential difference between 2 points. <S> Ground is a reference point. <S> You could tie either battery terminal to ground and it is still a 1.5V battery. <S> In your circuit you could tie the positive side of the capacitor to ground and leave the negative side open. <S> You still have 5V across the capacitor but the positive side would read 0V and the negative side -5V. <S> So remember that a "ground" point is a measurement reference. <S> You could tie the reference to earth or leave it open <S> but that's another topic.
You are correct that the electric field on the capacitor causes charge to flow from the negative plate to ground.
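The charging transient the accepted answer describes can be sketched numerically. The R and C values below are assumed purely for illustration, since the question's schematic is not reproduced here:

```python
import math

# RC charging transient: the capacitor voltage rises toward the supply
# while the charging current through the resistor decays to zero.
V_SUPPLY = 5.0  # V, the supply in the question
R = 1e3         # ohms (assumed value)
C = 1e-6        # farads (assumed value)
TAU = R * C     # time constant, seconds

def v_cap(t):
    """Capacitor voltage at time t after the supply is applied."""
    return V_SUPPLY * (1 - math.exp(-t / TAU))

def i_charge(t):
    """Charging current: the voltage left across the resistor, over R."""
    return (V_SUPPLY - v_cap(t)) / R

# After ~5 time constants the capacitor is essentially at the supply
# voltage and the charging current has essentially stopped:
print(round(v_cap(5 * TAU), 3), i_charge(5 * TAU))
```

This matches the answer's steady-state argument: as v_cap approaches the supply voltage, the voltage across the resistor, and therefore the current, falls to zero.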
How to create large inductors (1H) for audio use? I am building a tube amplifier, and have decided to add a 5ch EQ to it. Historically, the ones used in guitar amplifiers were all passive RLC filter style systems, with large inductances of 0.5 - 2H for the 80Hz channel. I know nowadays it would be smaller to use an active op-amp style, but I am doing this as a hobby and want to try making a passive one. I was slightly surprised to find Digikey does not carry any small inductors over ~100mH. My guess is this is because no one uses them for small current applications anymore with the advent of DSP or active capacitor based filters. Any advice for creating large 1H inductors with <1mA of current that does not involve winding 300 turns through a 1" toroid? Or does anyone know how they were historically made? Really fine wire, I would guess, with many turns? <Q> The primary of a smallish power transformer is on the order of 1 H. Magnetizing current for such a transformer is on the order of 0.35 A, which means that its inductive impedance is $$Z = \frac{V}{I} = \frac{120\ \mathrm{V}}{0.35\ \mathrm{A}} = 343\ \mathrm{\Omega}$$ <S> This means that the inductance must be $$L = \frac{Z}{2\pi f} = \frac{343\ \mathrm{\Omega}}{6.2832 \cdot 60\ \mathrm{Hz}} = 0.909\ \mathrm{H}$$ <S> Inductors in that range, for low frequencies, use the same construction techniques as transformers, but with just the one winding. <A> They were wound like transformers, as Dave Tweed mentioned. <S> But they were of specific construction that is different from typical power transformers: the laminations were thinner, so they would be low loss at audio frequencies, and they may well have been gapped for linearity.
<S> It may also be worth it to investigate using the biggest E cores that you can find, and possibly even gapping them with Kapton tape or similar (to do it really right you get the inner leg precision ground, but I'm assuming this is a hobby project, not for production). <A> Pot cores. <S> The winding was on a simple bobbin - much easier than a toroid - and it is likely to be nearer 3000 than 300 turns (of very fine wire). <S> The bobbin is then fitted between two ferrite cores which are tightly clamped to virtually eliminate the air gap. <S> Some variants had a moveable slug for fine tuning over a few percent, rather like a radio IF transformer. <S> Signal levels had to be strictly controlled to limit harmonic distortion as the core started to saturate. <S> (As with today's switching supply cores, there were different ferrites with different characteristics, allowing e.g. lower distortion if you didn't need the highest values of specific inductance). <S> Useful search term: Vinkor was one of the common makes. <S> I may dig out some datasheets later on... <A> There are a lot of materials available for making custom transformers/inductors. <S> When I discovered this I felt quite liberated because I could finally just make what I needed. <S> At university we had an old machine to wind these transformers. <S> You mainly just need something that rotates and counts. <S> I understand why you do not want to custom wind toroids. <S> It is a pain. <S> Specifically, you have to look under the categories: <S> Coil formers (an easy-to-wind, though still hard-to-count, plastic assembly). <S> Ferrite cores (there are a lot of different ones, but for low frequency you are not too picky, I believe; these will be secured to the coil former with clips). <S> Magnet wire (chosen based on average current). <A> "Historically" perhaps isn't the example you want.
<S> Historically, amplifiers had levels of distortion, noise and mains hum which we wouldn't accept today from the cheapest kid's radio. <S> Not only that, all electrical/electronic devices broadcast significant electrical noise which would be picked up by other equipment. <S> Winding toroids is never fun. <S> However only toroids can ensure your high inductance isn't receiving or transmitting noise. <S> This really isn't somewhere <S> you want to take shortcuts.
It may be worth it to look at the audio transformers available for tube amps (such as the ones that Hammond makes, available from www.tubesandmore.com), and either just use them as-is, or rewind them.
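The accepted answer's estimate of a small transformer primary's inductance is a one-liner to verify. A quick sketch using the answer's own numbers:

```python
import math

# Inductive impedance and inductance of a small transformer primary,
# estimated from mains voltage and magnetizing current (numbers from
# the answer above).
V, I, F = 120.0, 0.35, 60.0  # volts, amps, hertz

z = V / I                  # inductive impedance, ohms
l = z / (2 * math.pi * F)  # inductance, henries

print(round(z, 1), round(l, 3))  # ~342.9 ohm, ~0.909 H
```

So a cheap mains transformer primary really does land in the 1 H ballpark the question asks for, which is why the transformer-style construction techniques apply directly.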
When can the output of any flip flop (e.g., JK FF) be indeterminate? I came across the following problem: In an SR latch made by cross-coupling two NAND gates, if both S and R inputs are set to 0, then it will result in A. Q = 0, Q' = 1 B. Q = 1, Q' = 0 C. Q = 1, Q' = 1 D. Indeterminate states I (wrongly) felt that the answer would be option D, indeterminate state. But it was option C. The explanation given was: Here we know that the output will be definitely 11. So its not indeterminate. However its invalid. I understand that this is true for an SR latch. But now I am thinking when the output will be indeterminate. Can we call the output of a level-triggered JK flip flop (with clock duration more than flip flop delay) indeterminate when J=K=1? I know this corresponds to the toggle state, but due to the race-around condition, can we call it indeterminate? <Q> Considering any kind of flip flop, yes, there are possibly indeterminate states at the output. <S> The named example, a level-triggered JK flip flop, might start to oscillate if the "forbidden" input combination is set. <S> It depends on the technology the circuit is based on, the propagation delay of its gates, the exact timing of all input signals, and so on. <S> Another possible effect is metastability . <S> This can persist for an indeterminate duration, and even worse, produces illegal values "between" 1 and 0. <S> After this unknown time it can finally settle on a legal value which is indeterminate; it can be 1 or 0. <A> A JK latch will toggle its outputs with J=K=1, and you cannot exactly predict how much time each toggling takes, so the final state will indeed be unknown. <S> Furthermore, it is possible that its internal control signals are faster than the output transistors can switch, so in that case, you might not even get a square wave, but an output signal that does not even reach valid logic levels.
<S> Other ways to get an indeterminate state are to violate the setup/hold time requirements, so that you do not know whether some J/K input happens to be read as low or high when the clock is processed; or to observe the initial state after power-up, before the flip-flop has been initialized (with asynchronous set/reset) or before some value has been clocked in. <S> You can get the same effect with a simpler circuit, like the buffer below: the output state is either high or low, but you don't know which one: simulate this circuit – Schematic created using CircuitLab <A> Can we call the output of a level-triggered JK flip flop (with clock duration more than flip flop delay) indeterminate when J=K=1? <S> I know this corresponds to the toggle state, but due to the race-around condition, can we call it indeterminate? <S> Answer: No. <S> When J=K=Clk=1, the S!,R! intermediate states will toggle to complement the output states of Q,Q!. <S> The latched SR state of S!=R!=0 is not possible. <S> The state of a latch with S!=R!=0 is not indeterminate, as both Q,Q!=1. <S> The unknown in a simple latch is which SR input changes first; from above, that determines which output of Q,Q! changes from 11 to either 10 or 01. <S> But this does not occur in a JK FF. <S> ALSO: Flip-flops are Clk edge-triggered, but MAY have latch functions for S,R that are level-dependent.
"Indeterminate" means that you do not know what the state is.
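The textbook claim in the question (S = R = 0 on a NAND latch forces Q = Q' = 1, a definite though "invalid" state) can be checked with a toy gate-level model. This is only a sketch: it iterates the two cross-coupled gates to a fixed point and ignores real propagation delays:

```python
# Toy model of the cross-coupled NAND SR latch from the question.
# With S = R = 0, each NAND gate outputs 1 regardless of its feedback
# input, so Q = Q' = 1: a definite state, not an indeterminate one.
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch_nand(s, r, q=0, q_bar=1, steps=4):
    """Iterate the two cross-coupled gates a few times to let them settle."""
    for _ in range(steps):
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

print(sr_latch_nand(0, 0, q=0, q_bar=1))  # (1, 1)
print(sr_latch_nand(0, 0, q=1, q_bar=0))  # (1, 1) -- same result from any start
```

The indeterminacy the answers discuss appears only after this state, when both inputs are released at once and gate delays decide which output wins; a fixed-point model like this one cannot capture that race.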
Parallel RC as a filter? I was looking at an amplifier schematic and saw a curious portion that I realized I wasn't sure what its purpose was. simulate this circuit – Schematic created using CircuitLab At first glance I thought I saw a low pass filter, but then I said wait, no, those are in parallel. What's that capacitor doing there? Is it smoothing the audio source, acting as some kind of noise filter? Is there a name for this kind of device? Would any assumptions need to be made (like whether the audio source has a constant current, or constant voltage, or neither) to determine the function of this device? I was operating under the assumption of direct current and that audio was encoded as time-varying voltages, but if that's not the case, would the circuit do anything different? I saw some things thrown around online like "smoothing capacitor", so on an audio signal would it just smooth out sharp peaks? Would this make it sound nicer, like how a sine wave sounds different than a square or triangle wave? What would the difference be if I skipped that and connected the audio source directly to my NODE1 there? <Q> It looks like a first-order passive high-pass shelving filter . <S> Mainly used to reduce (de-emphasize) excessive bass as a form of coarse equalization via bass-treble (tone) controls. <S> Some features: <S> Attenuation of low-frequency content is: \$\alpha = 20 \log \big(1+\frac{R_1}{R_2}\big)\$ . <S> In this case approx 9.5 dB attenuation. <S> Attenuation of high-frequency content is 0 dB. <S> There is a transition band in-between that goes from \$f_z = \frac{1}{2\pi R_1 C_1}\$ to \$f_p = \frac{1}{2\pi (R_1 || R_2) C_1}\$ , which are the frequencies where the zero and the pole of the transfer function of the filter are located. <S> In this case, the transition goes from 1,205 Hz to 3,617 Hz.
<S> There are active and higher-order variants of the high-pass shelving filter that give you steeper transition bands if you need them, and also help avoid loading effects. <A> Simulate it with a swept sine. <S> Assuming the source is low impedance and the following circuitry is high impedance, it's a 1:3 voltage divider at DC, and passes high frequencies unattenuated. <S> There's a zero at \$\frac{1}{2 \pi \mathrm{(3.3nF)(40k\Omega)}} = \mathrm{1200Hz}\$ , and thus a pole at \$\mathrm{3600Hz}\$ . <S> So, it's an equalizer (or emphasis) network of some sort. <A> It is a high-pass filter and I believe it is part of a tone control. <S> You can adjust the high-frequency response of the amplifier. <S> It might help to see more of the circuit.
Its role is to attenuate low-frequency content of the audio signal without cutting it off.
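The shelf attenuation and corner frequencies quoted in the accepted answer can be recomputed directly. The component values (R1 = 40 kΩ, R2 = 20 kΩ, C1 = 3.3 nF) are inferred from the quoted results, since the schematic itself is not reproduced here:

```python
import math

# First-order high-pass shelving filter: R1 with C1 in parallel, feeding
# R2 to ground (topology per the accepted answer; values inferred).
R1, R2, C1 = 40e3, 20e3, 3.3e-9

shelf_db = 20 * math.log10(1 + R1 / R2)                   # low-frequency attenuation
f_zero = 1 / (2 * math.pi * R1 * C1)                      # start of transition band
f_pole = 1 / (2 * math.pi * (R1 * R2 / (R1 + R2)) * C1)   # end of transition band

print(round(shelf_db, 1))            # 9.5 dB (a 1:3 divider at DC)
print(round(f_zero), round(f_pole))  # ~1206 Hz and ~3617 Hz
```

Note the built-in consistency check: the pole-to-zero frequency ratio equals 1 + R1/R2, so a 3:1 transition band always corresponds to 20·log10(3) ≈ 9.5 dB of shelf attenuation.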
How to calculate and choose the right BJT to drive a relay I was designing a simple circuit in which a digital output from an Arduino Zero (3.3 V) switches on a BJT that is connected to a relay. The base of the BJT must have a current of about 5mA or less. Is this configuration enough to run a relay? I have simulated this circuit and Ic is about 49mA; if I change the relay, does Ic increase or is it fixed? I'm not really good at this thing simulate this circuit – Schematic created using CircuitLab <Q> If I change the relay, does Ic increase or is it fixed? <S> Check the datasheet for your relay. <S> 50 mA is a reasonable number for a low power 5 V relay, but pretty low if your relay is rated for more than ~2 A switched current. <S> To maintain your 5 mA base current limit you might need to either use a Darlington configuration, or change from a BJT to a logic-level MOSFET. <A> In your case (5mA/50mA) this gain is relatively low <A> A BJT can be considered a "current amplifier", therefore its gain is of interest. <S> In your case (5mA/50mA) this gain is relatively low. <S> For a given amplification, figure out the required base current. <S> The base circuitry is then quite similar to a resistor-LED circuit. <S> Note that Ic is specific to the used relay (datasheet).
Different relays will draw different coil currents at their designed operating voltage. A BJT can be considered a "current amplifier", therefore its gain is of interest.
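The sizing logic in these answers reduces to a couple of lines of arithmetic. A sketch using the question's numbers (3.3 V GPIO, ~5 mA base current budget, ~50 mA coil current); the 0.7 V base-emitter drop is an assumed typical value, not from the question:

```python
# Base resistor and required current gain for a BJT relay driver.
V_GPIO = 3.3    # Arduino Zero logic level, V
V_BE = 0.7      # assumed typical base-emitter drop, V
I_BASE = 5e-3   # base current budget from the question, A
I_COIL = 50e-3  # relay coil current (check the relay datasheet!), A

r_base = (V_GPIO - V_BE) / I_BASE  # resistor from GPIO pin to base
forced_beta = I_COIL / I_BASE      # gain the transistor must deliver

print(round(r_base))       # 520 ohms -> pick a nearby standard value
print(round(forced_beta))  # 10: hFE must be comfortably above this to saturate
```

Swapping in a relay with a bigger coil current raises forced_beta, which is exactly when the answers suggest moving to a Darlington pair or a logic-level MOSFET.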
Laptop failure due to constant fluctuation of AC frequency and voltage I spend most of my time on a boat (not on holiday :/ ) where a huge generator provides electricity. The potential problem is that the output is very dependent on the "mood" of the engine. Sometimes the frequency of the AC that goes to the sockets of the cabins is 48 Hz, sometimes 52 Hz. Anyway, generally between 48-52. (EU norm 50 Hz) My question is regarding the safety of my equipment such as my notebook/laptop. My theory is that this fluctuation of frequency only harms the PSU/charger that converts AC to DC for my laptop, therefore that will die first, before everything else. What do you guys think? Can the internal components be harmed as well, or is my theory correct? Can this harm other appliances? Also, I am curious what kind of UPS can regulate such fluctuations into 'flat' 50 Hz (EU) and stable voltage and current? <Q> Don't worry. <S> Most modern PSUs first change the AC to DC via a rectifier. <S> The internal electronics then create their own AC, used to transform the voltage to the desired one. <S> The internal AC has a much higher frequency to be more efficient. <S> Most PSUs work with frequencies from 0 to over 60Hz. <S> Even DC down to 80V is possible. <S> So anything up to 60 Hz should be fine, if you have between 100V and 240V. <A> My theory is that this fluctuation of frequency only harms the PSU/charger that converts AC to DC for my laptop, therefore that will die first, before everything else. <S> There's a bridge rectifier and reservoir capacitor(s) inside the PSU for this purpose, and the bridge rectifier works well with even 100Hz. <S> Can the inside components be harmed as well or is my theory correct? <S> Can this harm other appliances? <S> Instead of frequency fluctuations, spikes on the AC line can be harmful, but I'm not sure if it's likely to see those spikes on the AC line due to the motor / alternator. <S> Well-designed equipment "should" have protection devices (i.e.
filters, suppressors, etc.) for unwanted components <S> (e.g. spikes, high-frequency radiation via conduction, etc.) <S> on the AC line. <S> Laptop adapters/chargers do have these protections. <S> Some AC motors' speed depends on the frequency of the supply voltage. <S> Thus, if you have any equipment containing that kind of AC motor, it may be affected by the frequency fluctuations. <A> Supplies in the 130 W range sold in Europe (I believe anything above 75 W) are required to have power factor correction, so the input section is rather more complex than some other answers assume. <S> From this source is a block diagram: <S> I still think it's more likely that it is voltage surges causing your problems, assuming you are experiencing failures. <S> Active PFC circuits (any modern supply with PFC will be active rather than passive) expose some input elements to surges more than simple rectifier input circuits do. <S> There are power conditioners sold for use in developing countries that might help protect your supply. <A> As you just said in comments, the laptop has a wide acceptable input range. <S> That is not surprising, since it is a switching power supply. <S> Others have discussed how laptop supplies (like a great many loads) simply do not care about frequency inside conceivable ranges. <S> They state 50-60 Hz because 50 Hz or 60 Hz is the frequency of essentially all terrestrial power. <S> As far as voltage, they are clearly stating a working range of 100-240 V, and again they're just regurgitating the range of essentially all terrestrial power: <S> 100 V in Japan or 240 V in the UK. <S> Downward spikes are just a momentary low voltage, and the switcher will try to ride through them. <S> The only risk is that it shouldn't spend minutes at too-low voltage, because lower voltage means the switcher will draw more current, and at a point, that will overheat and burn up current pathways in the device.
<S> I would advise getting a physical, copper-and-iron, wound step-down transformer from 240 V to 120 V. <S> That will passively dampen spikes, and smooth out some power issues. <S> It will also divide the voltage by 2, meaning the power would have to jump to 480 V before it would exceed the voltage spec. <S> A transformer built for 50 Hz is slightly better. <S> The size of the iron core decides a transformer's ideal frequency, and 16% won't matter on a transformer this small. <S> (I'm assuming 100-500 W). <S> -- <S> Other equipment Within a sane range (say, up to aircraft 400 Hz)... <S> Obviously, anything that rectifies doesn't care about frequency. <S> Also, resistive heaters don't care about frequency. <S> Things using a variable-frequency drive don't care about input frequency, because they are slicing and dicing to make their own frequency. <S> Rotary machines (and transformers) do care, however. <S> As alluded to, transformers are tuned for a frequency by the size of their iron core; motors as well. <S> A motor or transformer will be very, very unhappy on railroad 16.7 Hz or aircraft 400 Hz. <S> Clocks depend on 50/60 Hz being right on the button. <S> There was a newspaper-worthy scandal in the EU as the grid operators were not able to sustain 50.000 Hz, and had to "speed up the grid" in the evening to catch everyone's clocks up by the few seconds they had drifted. <S> The grid is managed that precisely. <A> One aspect that has not been clearly mentioned is that in a huge ship (I suppose it is huge, since you said the electrical power is produced by a huge generator) <S> you have many powerful electrical motors connected to the electrical line. <S> So the electrical system of a ship (especially if it is a tanker or a freighter, as opposed to a cruise ship <S> ) behaves more like an electrical system in an industrial environment than a household electrical system.
<S> Normal laptop power supplies do have overvoltage protection devices, but they are designed to withstand the AC line spikes that usually happen in a household/office situation. <S> In an industrial environment, where many inductive loads are electrically near the power supply, spikes with higher energy content may be generated (bigger amplitude, longer duration). <S> These could stress the power supply protections so much that they could eventually fail, leading to potential damage to the power supply's main circuitry itself.
The practical limits will be: too high a voltage causing insulation or component breakdown, but insulation is cheap. Your PSU doesn't really work with AC. As stated in @rundekugel 's answer, the frequency of the AC line does not matter unless it's much higher than 60 Hz (e.g. 1kHz) since the actual converter inside the PSU requires DC input.
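To put rough numbers on the rectifier argument above, here is a minimal sketch of how little the 48-52 Hz swing matters on the DC side of a full-wave rectified input. The reservoir capacitance (220 µF) and load current (0.5 A) are hypothetical placeholder values, not taken from any real PSU; the sawtooth ripple approximation is the standard back-of-envelope formula.

```python
import math

def rectified_dc(v_rms, f_line, c_farads, i_load):
    """Rough full-wave rectifier estimate: peak DC and ripple voltage.

    Sawtooth approximation: V_ripple ~= I_load / (2 * f_line * C),
    since the reservoir cap is topped up twice per line cycle.
    """
    v_peak = v_rms * math.sqrt(2)              # cap charges to the AC peak
    v_ripple = i_load / (2 * f_line * c_farads)
    return v_peak, v_ripple

# 230 V mains at the generator's 48 Hz and 52 Hz extremes
for f in (48, 50, 52):
    v_peak, v_ripple = rectified_dc(230, f, 220e-6, 0.5)
    print(f"{f} Hz: peak ~{v_peak:.0f} V, ripple ~{v_ripple:.1f} V")
```

The peak DC voltage does not depend on line frequency at all, and the ripple changes by only a few percent between 48 and 52 Hz, which is why the downstream switcher never notices.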
Does the word voltage exist in academic engineering? In Portuguese, the word voltage does not exist, neither academically nor technically. In engineering, Portuguese speakers refer to it as electric tension or potential difference. The word voltage was popularized in the Portuguese language because some places use 220 V and others use 110 V, and people always had to ask whether the "voltage" for the equipment is 110 or 220. So, it's kind of a nickname/shortcut for non-technical people to refer to electric tension. What about in English academic engineering? Does the word voltage exist, or is it just a shortcut/nickname for electric tension or potential difference? <Q> Yes, voltage is a technical word in English. <S> From Wordnik : <S> noun A measure of the difference in electric potential between two points in space, a material, or an electric circuit, expressed in volts. <S> In fact, Wikipedia even lists "electric tension" as a synonym, though I hadn't heard that before. <S> Some other answers have noted that "electric tension" was used to describe a potential difference until the mid-20th century in England, but it went out of popularity. <S> Google’s Ngram shows that voltage is far more popular than electric tension ever was, though. <A> The water analogy of electricity was historically influential; both terms, "tension" and "current", were the result of this analogy. <S> In the early 1900s, "tension" was the standard technical term in English for electric potential. <S> The B+ of a vacuum tube was called High Tension (HT), and a Cathode Ray Tube required "Extra-High Tension" (EHT) to operate. <S> For some reason, the word "tension" in English became obsolete in the middle of the 20th century (I cannot find a reference), and the term "voltage" became the standard technical term instead. <S> Similarly, the old technical term for a "capacitor" was "condenser". <S> A microphone that works by the change of capacitance was (and still is) called a "condenser microphone".
<S> In 1926, the term "condenser" was abandoned in English, but it took a generation or two to pick up the new term, which fully replaced the old one around the mid-20th century. <S> However, the translation of basic terms in electrical engineering to other languages was done long before this transition, so in many other languages the technical term is still "tension" or "pressure", and a "capacitor" is still a "condenser". <S> The main reason seemed to be an effort to reduce the confusion between electrical engineering and mechanical engineering terms. <S> The early 1900s were still the heyday of steam engines, and the confusion could be very real, so I fully understand the choice of "capacitor" over "condenser". <S> But I think the choice of "voltage" is, from a physical sense, very unfortunate. <S> Most physical quantities, as physical phenomena, have their own names, independent from their units of measurement. <S> When we talk about force as a phenomenon, we don't refer to it as "newtonage", nor do we use "wattage" for power. <S> $$\require{cancel}$$ \begin{array}{|l|l|l|l|}\hline \text{Phenomenon} & \text{Name} & \text{Unit} & \text{Numerical Name}\\\hline \text{A push} & \text{force} & \text{newton} & \text{-}\\ \text{Flow of charge} & \text{current} & \text{ampere} & \text{amperage}\\ \text{Rate of work} & \text{power} & \text{watt} & \text{wattage}\\ \text{Electric potential} & \cancel{\text{tension}}\ \text{voltage (!!)} & \text{volt} & \text{voltage (!!)}\\\hline \end{array} <S> The introduction of "voltage" made electric potential lose its own name, making it the only physical quantity in English named after its unit of measurement. <S> However, "voltage" is the standard English term, so we have to follow it all along...
<A> In the International System of Units (SI) and the corresponding International System of Quantities, as described in the international standards series ISO/IEC* 80000 Quantities and units, quantities are always independent of the unit in which they are expressed; therefore, a quantity name shall not reflect the name of any corresponding unit. <S> However, ISO 80000 Part 1 General as well as IEC 80000 Part 6 <S> Electromagnetism note that the name “voltage” is commonly used in the English language and that this use is an exception from the principle that a quantity name should not refer to any name of a unit. <S> It is recommended to use the name “electric tension” wherever possible. <S> The same information can be found in the series IEC 60050 <S> International Electrotechnical Vocabulary (IEV), especially IEC 60050-121 . <S> * The International Organization for Standardization (ISO) collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization. <A> Yes. <S> They do exist. <S> In fact, voltage is actually a potential difference. <S> When you say the voltage at a point is 5 V, we mean the potential difference of 5 V with respect to ground. <S> In another case, when there are two points at nonzero potential and we have to measure the voltage between those two points, we say "the voltage is … V with respect to the other point". <S> If one point (point A) is at 20 V and another point (point B) is at 25 V, we say the voltage at point B is 5 V with respect to point A. <S> And this is of course the potential difference between those two points. <A> In my physics experience, I've seen both the words voltage and potential difference used. <S> I've never heard of the term electric tension in any context.
<S> Potential difference was more specific to situations where the relative voltage, $$\Delta V = V_1 - V_2,$$ was the important quantity desired, while voltage referred to a single reference measurement, or the above difference, based on context, which could often be inferred from the nature of the problem.
Mostly it's referred to as voltage or potential difference.
Test grounding of wall sockets I just had (and solved, thanks SE) a problem with a wall socket where the ground contacts were malfunctioning. Much of electrical safety relies on proper grounding. As far as I can read, the details of state-of-the-art measurement of grounding are a science in themselves, but the described methods seem far from impossible to do. I want to test the grounding of each socket (and possibly measure resistance for monitoring) as far as possible and practical, for safety reasons (personal protection, possibly reduced noise (especially bad with the telephone at my parents' house), and as an obvious prerequisite for surge protection). Question: What are practical, safe and economic ways to do this for a technically skilled person who is not trained as an electrician? E.g. 3- or 4-wire measurements with earthing, "measuring probe" and "helping-earth probe"? In particular, I am not specifically interested in one earthing metal pole but in the earthing as such that "arrives" at the sockets. I live in an area with houses and small gardens, but with low distance to other houses/streets (5-50 m distance in all directions). So far, I could not figure out in what way my apartment house is (or: should be) grounded (no answer from the landlord; I think they do not know themselves). Is it then necessary to disconnect the earthing pole from the rest of the house, as I read somewhere? (It seems that this was primarily meant only to test one specific earth pole independently of possibly other earthing poles.) I am willing to spend some money, but not necessarily hundreds of dollars/euros per household. I see devices (from cheap (24 €) to expensive (174 €), all still affordable) such as this , this , this and this . The price would be OK as I can use them for my family and friends as well, which is why I'd prefer DIY (if considered safe). Do such devices offer a valid approach to my question?
Disclaimer: All electrical engineering of the house was done by certified electricians (at least I hope so), and I do not want to change anything - unless I discover a problem, in which case I will obviously ask a certified electrician to solve it if it requires work on the mains. <Q> These simple testers are easy to use and test more things than just the ground. <S> Here's one on Amazon . <S> Here's the Amazon search <S> I used. <S> They cost about $10 USD, often less. <S> You just plug it into each outlet and read the 3 lights. <S> Convert USD to EUR here . <S> So far, I could not figure out in what way <S> my apartment house is (or: should be) grounded (no answer from landlord <S> , I think they do not know themselves). <S> Older houses may not have a ground at all. <S> My house was built in the 1950s. <S> It has a ground from the main electrical box to the outside, but most of the wiring in the actual house does not have a ground. <S> I specifically had to run a separate circuit for my computer which had a ground, because a surge protector (part of my UPS) requires a ground to work. <S> Is it then necessary to disconnect the earthing pole from the rest of the house as I read somewhere? <S> Why would you do that? <S> What is the purpose of this particular question? <S> Did you hear you had to disconnect the ground pole from the main electrical box to test ground in the outlets? <A> The safest way would be with one of the 3-neon-bulb testers. It's pretty easy to read them to tell if the outlet is mis-wired or you have a floating ground or neutral, etc. E.g., if you have a floating ground, the active-to-neutral bulb would be at full brightness and the other 2 at quarter brightness. <S> DO NOT DISCONNECT YOUR EARTHING POLE!!!! <S> Your ground may be fine, but your neighbour's neutral may not be; a few years back I measured about 7 A flowing into my ground rod from a neighbour's house.
<S> It's also correct not to blindly trust the sparky <S> ; there was an outlet with active and ground flipped at the last place I stayed. <A> Usually you would need a licence and an approved, calibrated instrument. <S> If you want to DIY, you can buy a second-hand instrument that is functional but has missed its calibration schedule, though it still costs hundreds of euros. <S> If you're from the UK you can search on eBay for: Metrel Alphatek Eurotest Instaltest Easi test
These instruments are very expensive.
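The 3-lamp tester reading described above is just a lookup from lamp pattern to fault. A minimal sketch of that decode logic follows; the pattern table is purely illustrative (layouts and legends differ between tester brands, so always follow the legend printed on your own tester):

```python
# Illustrative mapping of (lamp_a, lamp_b, lamp_c) lit-states to faults.
# A real tester prints its own legend on the case -- consult that, not this.
PATTERNS = {
    (True,  True,  False): "correct wiring",
    (False, True,  False): "open ground",
    (True,  False, False): "open neutral",
    (False, False, False): "open live / no power",
    (False, True,  True):  "live and ground reversed",
    (True,  False, True):  "live and neutral reversed",
}

def diagnose(lamp_a, lamp_b, lamp_c):
    """Translate a lamp pattern into a human-readable wiring fault."""
    return PATTERNS.get((lamp_a, lamp_b, lamp_c),
                        "unknown pattern -- consult the tester legend")

print(diagnose(True, True, False))
```

Note that such testers cannot detect every fault (e.g. a "bootleg" neutral-to-ground jumper looks like correct wiring), which is why the answers above still recommend a proper instrument for measuring earth resistance.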
Why would an AC motor heavily shake when driven at certain frequencies? I have an AC motor from an old TCL 160 Boxford NC lathe; it has three wires named U, V and W. I thought that incorporating the original driver into a modern control interface card would be difficult or unreliable, so I took the motor out and wired it to a Lovato VE1 04 A240 3-phase AC motor driver for tests. The motor is free on a table and the shaft is not connected to anything. When I run the motor using that driver, between the frequencies of 20 and 40 Hz the motor starts to shake heavily, as if there were an eccentric disk on the motor shaft. But at the other frequencies, 0-20 Hz and 40-100 Hz, there isn't any shake; it runs considerably smoothly, so I think it's not related to a mechanical problem like a bearing failure. The original driver was a 0.43 kW one and the one I'm testing is 0.4 kW, and I set the maximum current to 1.5 A just to be safe. I don't know if the motor is AC synchronous or AC asynchronous; the only guess I can make is that the motor is AC synchronous, because it feels as if the rotor weren't able to catch up with the frequency fed into it and is shaking like a stepper motor. Also it runs at 1500 RPM with 50 Hz input, and 3000 RPM with 100 Hz input, as far as I can measure with my smartphone stroboscope. What do you think the reason would be? What would I do to make it run correctly? <Q> Motor is free on a table and shaft is not connected to anything. <S> Try clamping it down to a nice solidly built table or bench, and repeat your test. <S> Chances are that the problem will go away, or at least be minimized and shifted in frequency. <A> The problem is almost certainly due to mechanical resonance. <S> The rotor may be inadequately balanced. <S> There could also be some damage to the motor, such as a broken blade on an external or internal fan. <S> There may be a certain amount of imbalance due to the motor being operated without a key in the shaft.
<S> It is also possible that a broken rotor bar could cause a problem like that. <A> Note that even a well-mounted AC motor without load can oscillate around its in-phase position. <S> Typically the rotor and the electronics are designed to dampen such oscillations, but with a large discrepancy between available power and the mechanical power drawn from the system, you can get problems.
It's most likely an imbalance in the motor plus a mechanical resonance in the "mounting".
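The asker's own speed readings already tell you the machine's pole count via the standard synchronous-speed relation N = 120·f/p; a small sketch checking that both measurements (1500 RPM at 50 Hz, 3000 RPM at 100 Hz) are consistent with a 4-pole machine:

```python
def synchronous_rpm(f_hz, poles):
    """Synchronous speed of an AC machine: N = 120 * f / p."""
    return 120.0 * f_hz / poles

def poles_from_rpm(f_hz, rpm):
    """Infer the pole count from a measured (near-)synchronous speed."""
    return round(120.0 * f_hz / rpm)

# Both stroboscope measurements point to the same 4-pole machine
print(poles_from_rpm(50, 1500), poles_from_rpm(100, 3000))
```

Since the measured speed tracks the drive frequency with no visible slip, the motor is behaving synchronously, consistent with the stepper-like oscillation the asker describes at resonance.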
Capacitors with same voltage, same capacitance, same temp, different diameter? I recently had an old LCD monitor power supply go bad and figured I would try replacing the caps; all four of these were bulged and/or leaking. They are all rated to 105 degrees C, all 1000 µF and 25 V, yet two of them have a much larger diameter. In my experience with a physically larger capacitor, you either get more voltage (more plate separation?) or more capacitance (more plates?), so why are these caps different diameters? Side question: the board had more than enough space for the larger caps everywhere, so why not use the large (or small) caps everywhere for higher quantity discounts? <Q> The answer lies in the datasheet and the designer's requirements for cost, space, reliability, cost and temperature rating. <S> There are many choices. <S> (Did I say cost? ;) http://www.capxongroup.com/prodsearch.aspx?lc=1&siteid=&ver=&usid=&mnuid=2082&modid=16&mode= <S> The part number defines the details, e.g. KF102M025I200A: KF = family (construction of foil, film and dielectric; there are many others); 102 = capacitance code for C in µF with exponent, i.e. 10 × 10² = 1000 µF; M = 20 % tolerance on C; 025 = voltage rating; I = letter code for case diameter and radial lead spacing; the remaining digits encode the height in xx.x mm. <S> The height and voltage reduce ESR, while the diameter affects everything. <S> The parameters for selecting these low-ESR caps are: C, Vdc, size, max temperature range, rated hours of endurance (accelerated MTBF at extreme temperature) and RMS ripple current. <S> The electrical variables behind these choices are % DF at 120 Hz and ripple current at 100 kHz and at 10 kHz. <S> I don't know the exact formula, but the diameter is determined by foil area, thickness, turns, ESR, and the temperature rise due to the ripple-current rating at max temperature for the rated hours, via thermal conductance and the temperature rise above the max ambient rating, plus Arrhenius effects on endurance.
<A> Either they had different ESR ratings, as @hacktastical suggests, or the bigger ones are just an older design and/or the board manufacturer buys whatever is cheaper at the moment, then throws them into one bin. <A> The relatively larger caps were likely to be a low ESR type, perhaps also with a higher thermal rating. <S> That has some influence on the size/density. <A> More advanced etching processes can increase surface area of foils, allowing higher capacitance in smaller packages.
Cap manufacturers are getting better at making smaller caps, they're getting better at reducing manufacturing variations so that they can consistently hit ratings with not-quite-as-good caps, and some of them just plain lie.
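Why ESR matters so much here: the ripple current flowing through the capacitor dissipates P = I²·ESR inside the can, and that self-heating is what dries electrolytics out. A quick sketch with hypothetical ESR values (not from any specific datasheet) comparing a low-ESR part against a general-purpose one:

```python
def esr_heating(i_ripple_rms, esr_ohms):
    """Power dissipated inside an electrolytic cap by ripple current: P = I^2 * R."""
    return i_ripple_rms ** 2 * esr_ohms

# Hypothetical numbers: 1 A RMS ripple through a 0.05-ohm low-ESR part
# versus a 0.30-ohm general-purpose part
low_esr = esr_heating(1.0, 0.05)
general = esr_heating(1.0, 0.30)
print(f"low-ESR: {low_esr:.2f} W, general-purpose: {general:.2f} W")
```

A 6x difference in internal heating for the same nominal C and V is exactly the kind of thing that lets one part be physically smaller, or last longer, than another with identical printed ratings.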
What happens to a computer as it is cooled towards absolute zero? What happens to the behavior of a conventional circuit using conventional components as it is cooled towards absolute zero? To make this question more relatable: what would happen as we cool a modern CPU and its power circuits? In general resistance decreases with temperature, so I assume that initially we would see the device's power consumption drop for the same clock frequency. (Or would the clock mechanism change its frequency as it is cooled also? If so, let us assume for the rest of the question that the clock is external and not altered.) Do transistors and other semiconductors begin to fail below certain temperatures? If so, by what mechanism? At some very low temperature I assume that various conductors would begin to superconduct. How does this alter the behavior of the power supply and computational circuits? <Q> Metallic conductors might become superconductors, but that doesn't mean anything if the first thing you hit is a chunk of inert silicon. <S> There are types of logic that are designed to work at cold temperatures and using superconducting materials, for example, Josephson junctions . <A> First it would stop working, because everything is designed to work in the commercial temperature range (0 to 70 degrees C). <S> One would hope it works down to 0 degrees, although if the designers were not diligent it may not make it down that far. <S> It would stop working because the various bits would fall out of specification, and each would do so differently. <S> It's hard to say what would fail first. <S> It would be one of: <S> There would be so much timing skew between the signals that the various digital bits were receiving and what they required at that temperature <S> that interfaces (e.g., between memory and processor) would stop working, or logic would get scrambled. <S> Oscillators would stop working because the gain necessary to maintain oscillation wouldn't be there.
<S> Voltage references would generate the wrong voltage (and not predictably, at least not up front). <S> Transistors in the power supply circuits would fail to conduct properly. <S> There's a vanishingly slim chance that it'd still work below 0 <S> degrees C. <S> I'm starting to guess here, but there's yet more that can go wrong: <S> Anything spinning, such as disks and fans, will seize up because of differential thermal contraction. <S> They may be OK when they warm up again, but depending on how the disks stop working they may damage their surfaces. <S> If the temperature goes down far enough, differential thermal contraction between various bits (i.e., the traces and the PCB, the leadframes in the ICs and their epoxy packaging, the silicon chips themselves and either the leadframes or the packaging) will cause things to crack. <S> This damage would be permanent. <S> If this is done in atmosphere, water vapor (followed by CO2, and then the various other constituents of air) would condense on the PC. <S> Oxygen would condense out before nitrogen, which may cause interesting chemical reactions. <S> Basically, long before anything superconducts, the 'puter will have stopped working, and will very probably be permanently damaged. <S> Note that you can design electronics to work at cryogenic temperatures <S> -- I used to work in IR imaging, where the imaging array was running at 77 K, with CMOS circuitry. <S> But that was circuitry that was specifically designed for the task, and which did not work well (or at all, sometimes) at higher temperatures. <A> Convex Computer Corp exploited the speedup of CMOS logic for some of their machines. <S> The transconductance (delta amps out per delta volts in) got bigger, and the parasitic gate, flip-flop and bus capacitances were charged and discharged <S> faster as temperature dropped. <S> I think they used liquid nitrogen.
Anything based on semiconductors would simply stop working, because they depend on electrons kicked into the conduction band by their own thermal energy.
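The thermal-energy point can be made quantitative with the textbook temperature dependence of the intrinsic carrier concentration, n_i ∝ T^(3/2)·exp(−Eg/2kT). A sketch for silicon (real devices are doped, and dopant freeze-out is the mechanism that actually kills them first, but the intrinsic curve shows how brutally exponential the temperature dependence is):

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
E_G_SI = 1.12    # silicon band gap, eV (it widens slightly when cold; ignored here)

def ni_ratio(t_kelvin, t_ref=300.0):
    """Intrinsic carrier concentration relative to room temperature:
    n_i(T) ~ T^1.5 * exp(-Eg / (2 k T))."""
    def ni(t):
        return t ** 1.5 * math.exp(-E_G_SI / (2 * K_B * t))
    return ni(t_kelvin) / ni(t_ref)

for t in (300, 200, 77):
    print(f"{t} K: n_i is {ni_ratio(t):.3e} of its room-temperature value")
```

By liquid-nitrogen temperature the intrinsic carrier population has collapsed by tens of orders of magnitude, which is why circuitry for 77 K operation (like the IR imaging arrays mentioned above) has to be designed for it specifically.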
What is negative current? I understand that voltage is relative to ground, so it can be negative. However, I'm currently looking at current sensors ( ACS712 current sensor ) and in the performance characteristics table, it specifies the Optimized Accuracy Range . In the case of this sensor, it's specified in amps, ranging from -5 A to +5 A. I can't find anything explaining how you could have a negative amperage. As far as I know, electric current is the rate of flow of electric charge within a (part of a) circuit. How could the flow of charge sensed by the sensor be negative? <Q> "I understand that voltage is relative to ground" - I prefer to disagree. <S> A voltage is against a reference point. <S> Often that reference point is ground, but not always. <S> Taking the above into account, your current is defined the same way. <S> Take a pin/port of a component or circuit. <S> You can now define the current going into that port/pin as positive, from which it follows that if current comes out of that port/pin the current is negative. <A> It means current can flow in either direction through the device. <S> Just like AC mains voltage alternates polarity over a load, the current flows in either a clockwise or counter-clockwise direction in the loop via the load. <A> Electric current, in a physical sense, is indeed the rate of flow of electric charge. <S> But charge can flow in one direction or in the opposite direction. <S> That's the reason for positive or negative current: it's a matter of how you set your reference. <A> It's current going in the opposite direction to the direction defined as positive, nothing more or less than that. <A> A sensor that can read negative and positive current could be used to measure the rate of charging or discharging a battery, <S> with one being a positive current and the other negative. <A> Negative current is the flow of charges produced by a negative voltage.
<S> You seem to think that current is the magnitude of the charge flow, like speed is w.r.t change of position. <S> In fact, the current is a vector and it has a direction, like velocity . <S> It's just that in a wire there are only two possible directions for the charges to flow, so the current becomes a scalar with a sign. <A> Quite simply this IC supplies a positive slope voltage as the output that can be directly translated to a current. <S> If the voltage across pins 1+2 and 3+4 is negative or positive the device will still represent the current as a positive voltage. <S> The polarity of the voltage is what is significant here. <S> The data sheet implies that if a negative or positive voltage is applied to the aforementioned pins then the IC can handle 5 amps regardless of the polarity.
Negative current is current flowing in the opposite direction to positive current, just like the axes on a graph have negative and positive in opposite directions.
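The battery example from the answers can be sketched as a sign convention: define current into the battery's positive terminal as positive (charging), and the single signed reading then carries both magnitude and direction. The function name and threshold are illustrative, not from the ACS712 datasheet.

```python
# Sign convention sketch: current INTO the battery's + terminal is positive.
def describe(current_amps):
    """Interpret a signed current reading under the charging-positive convention."""
    if current_amps > 0:
        return f"charging at {current_amps} A"
    if current_amps < 0:
        return f"discharging at {-current_amps} A"
    return "idle"

print(describe(2.5))
print(describe(-1.2))
```

Flip the sensor (or the chosen convention) and the same physical situation simply reads with the opposite sign, which is all "negative current" means.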
Why does F + F' = 1? I have the function: \$f(x,y,z,w) = wx + yz\$ I found its complement function to be: \$f'(x,y,z,w) = w'y' + w'z' + x'y' + x'z'\$ I have to show that: \$f + f' = 1\$ but I can't see how to do it. It seems as if there just isn't anything that cancels out. Edit As suggested, I have now used DeMorgan's theorem and found this: \$f + f' = wx+yz+(w+y)'+(w+z)'+(x+y)'+(x+z)'\$ But it still seems to me that there is nothing that brings me closer to the realization that \$f+f' = 1\$ <Q> The point is, it really doesn't matter what the function \$f()\$ actually is. <S> The key fact is that its output is a single binary value. <S> This is known as the law of the excluded middle . <S> So ORing a value with its complement is always true, and ANDing a value with its complement is always false. <S> It's nice that you were able to derive the specific function \$f'()\$ , but that's actually irrelevant to the actual question! <A> All previous answers are correct, and very much in depth. <S> But a simpler way to approach this might be to remember that in Boolean algebra, all values must be either 0 or 1. <S> So... either F is 1 and then F' is 0, or the other way around: F is 0 and F' is 1. <S> If you then apply the Boolean OR function, F + F', one of the two terms will always be 1, so the result will always be 1. <A> My answer is similar to the one of Dave Tweed, meaning that I put it on a more formal level. <S> I obviously answered later, but I decided to nevertheless post it since someone may find this approach interesting. <S> The relation you are trying to prove is independent from the structure of the function \$f\$ since it is, as a matter of fact, a tautology . <S> To explain what I mean, I propose a demonstration for a general, correctly formed Boolean expression \$P\$ in an arbitrary number of Boolean variables, say \$n\in\Bbb N\$ , \$y_1,\ldots,y_n\$ , where \$y_i\in\{0,1\}\$ for all \$i=1,\ldots,n\$ .
<S> We have that \$P(y_1,\ldots,y_n)\in\{0,1\}\$ and consider the following two sets of Boolean values for the \$n\$ -dimensional Boolean vector \$(y_1,\ldots,y_n)\$ $$\begin{align}Y&=\{(y_1,\ldots,y_n)\in\{0,1\}^n\mid P(y_1,\ldots,y_n)=1\}\\\bar{Y}&=\{(y_1,\ldots,y_n)\in\{0,1\}^n\mid P(y_1,\ldots,y_n)=0\}\end{align}$$ <S> These sets form a partition of the full set of values the input Boolean vector can assume, i.e. \$Y\cup\bar{Y}=\{0,1\}^n\$ and \$Y\cap\bar{Y}=\emptyset\$ (the empty set), thus $$\begin{align}P(y_1,\ldots,y_n)&=\begin{cases}0&\text{if }(y_1,\ldots,y_n)\in \bar{Y}\\1&\text{if }(y_1,\ldots,y_n)\in Y\\\end{cases}\\&\Updownarrow\\P'(y_1,\ldots,y_n)&=\begin{cases}1&\text{if }(y_1,\ldots,y_n)\in \bar{Y}\\0&\text{if }(y_1,\ldots,y_n)\in Y\\\end{cases}\end{align}$$ <S> therefore we always have $$P+P'=1\quad\forall(y_1,\ldots,y_n)\in\{0,1\}^n$$ <A> All good answers that provide the necessary justification in one way or the other. <S> Since it is a tautology, it's hard to create a proof that doesn't just result in "it is what it is!".
<S> Perhaps this method helps tackle it from yet another, broader angle: expand both expressions to include their redundant cases, and then remove the repeated cases: $$\begin{align}f&=wx+yz\\&=wx\cdot(y'z'+y'z+yz'+yz)+yz\cdot(x'w'+x'w+xw'+xw)\\&=wxy'z'+wxy'z+wxyz'+wxyz+yzx'w'+yzx'w+yzxw'+yzxw\\&=wxy'z'+wxy'z+wxyz'+wxyz+yzx'w'+yzx'w+yzxw'\end{align}$$ and $$\begin{align}f'&=w'y'+w'z'+x'y'+x'z'\\&=w'y'\cdot(x'z'+x'z+xz'+xz)+w'z'\cdot(x'y'+x'y+xy'+xy)\\&\quad+x'y'\cdot(w'z'+w'z+wz'+wz)+x'z'\cdot(w'y'+w'y+wy'+wy)\\&=w'y'x'z'+w'y'x'z+w'y'xz'+w'y'xz+w'z'x'y'+w'z'x'y+w'z'xy'+w'z'xy\\&\quad+x'y'w'z'+x'y'w'z+x'y'wz'+x'y'wz+x'z'w'y'+x'z'w'y+x'z'wy'+x'z'wy\\&=w'y'x'z'+w'y'x'z+w'y'xz'+w'y'xz+w'z'x'y+w'z'xy\\&\quad+x'y'wz'+x'y'wz+x'z'wy\end{align}$$ <S> I've kept the terms in consistent order to make the derivation more obvious, but they could be written alphabetically to be clearer. <S> In any case, the point is that \$f\$ ORs seven 4-bit cases, and \$f'\$ ORs nine distinct 4-bit cases. <S> Together they OR all sixteen 4-bit cases, so the sum reduces to \$1\$ . <A> F + F' = 1 means that you have to show that no matter the state of the 4 inputs, OR'ing the results of those two always gives 1. <S> A few minutes in Excel shows it is indeed the case. <S> You can use "NOT()" to invert between 0 and 1 in Excel. <S> F = W * X + Y * Z and F' = W' * Y' + W' * Z' + X' * Y' + X' * Z'. <S> As to why this is the case: if you want F to be false, e.g. by setting W and Y low, you just made F' true. <S> If you make X and Z low, you also make F' true; the same goes for the other pairs. <A> Since Carl asked nicely.
<S> Starting point: $$f(x,y,z,w)=wx+yz$$ and $$f'(x,y,z,w)=w'y'+w'z'+x'y'+x'z'$$ Take the following steps with \$f'\$ : $$f'(x,y,z,w)=w'(y'+z')+x'(y'+z')$$ $$f'(x,y,z,w)=(w'+x')(y'+z')$$ DeMorgan: $$f'(x,y,z,w)=(wx)'(yz)'$$ DeMorgan, again: $$f'(x,y,z,w)=(wx+yz)'$$ <S> So now the right-hand side of \$f'\$ is just the simple negation of the right-hand side of \$f\$ . <S> Which is a little anti-climactic, since now we just rely upon the fact that any expression \$x + x' = 1\$ , which is what people have been saying all along about \$f+f'=1\$ , but at least it provides a little Boolean-algebra explanation for why that is true. <A> By the simple definition of \$+\$ (OR) and \$'\$ (NOT):

A | B | A + B
-------------
0 | 0 |   0
1 | 0 |   1
0 | 1 |   1
1 | 1 |   1

A | A' | A + A'
---------------
0 | 1  |   1
1 | 0  |   1

\$\therefore \forall f.\ f + f' = 1\$
It is a fundamental fact in Boolean algebra that the complement of a binary value is true whenever the value itself is false.
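The exhaustive check described above (seven minterms from \$f\$, nine from \$f'\$, sixteen in total, so \$f+f'=1\$) is easy to reproduce in software; a minimal Python sketch:

```python
from itertools import product

def f(w, x, y, z):
    # f = wx + yz
    return (w and x) or (y and z)

def f_prime(w, x, y, z):
    # f' = w'y' + w'z' + x'y' + x'z'
    return ((not w and not y) or (not w and not z) or
            (not x and not y) or (not x and not z))

# f OR f' is true for every one of the 16 input combinations
assert all(f(*bits) or f_prime(*bits) for bits in product([False, True], repeat=4))

# f covers 7 of the 16 minterms and f' covers the other 9
count_f = sum(bool(f(*bits)) for bits in product([False, True], repeat=4))
count_fp = sum(bool(f_prime(*bits)) for bits in product([False, True], repeat=4))
print(count_f, count_fp)  # 7 9
```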
Simplest way to count quadrature encoder counts I need to count the pulses from a high-res quadrature encoder but the prohibitively high frequency means I can't use my main processor. I looked into the HCTL-2022 IC which does exactly what I need, although the part now appears to be obsolete. Are there any other chips which would serve as a suitable substitution? Alternatively, how would I implement a circuit to count the pulses? <Q> I'm not familiar with that chip but a quick scan of the datasheet shows that it has four external interrupts. <S> All that should be required is to monitor, say, one of the encoder outputs with an interrupt pin and when triggered count up or down depending on the status of the other encoder output. <S> Figure 1. Encoder signals and resultant count. <S> Pseudo code:
// Triggered by encoder output A
interrupt {
    if (B) {
        encoder--;   // Encoder is running anti-clockwise.
    } else {
        encoder++;   // Encoder is running clockwise.
    }
}
<S> You make the encoder variable as many bits long as required for the accuracy you need. <A> OK, as far as I understood from your comments you have an LPC1768 with QEI, but the real problem is that you want signed math. <S> Well, it's not a problem at all: you just have to cast the QEI output (say a 32-bit unsigned integer, uint) to a signed integer, int. <S> You may then do all the signed or unsigned math you want. <S> If you subtract the actual value in uint format from another uint, you get a uint result, which will always give you the same direction. <S> For the sake of simplification, say we have a compass reading 0(360)-359 degrees. <S> If you subtract 350-20 you get 330, but if you subtract 20-350 you get 30. <S> The angle difference is always 30 degrees, but you get two different results. <S> Now suppose you do signed math with the same angles expressed as 20 and -10: 20 - (-10) = 30, and -10 - 20 = -30. <S> You see, when using signed math, you always get the shortest distance and direction.
<S> Same rules apply when using binary numbers: signed will always give you the shortest path, so you don't have to care about roll-over when working out the sign (i.e. in which direction you should turn to reach the target with the shortest move). <S> My advice: there is no roll-over problem at all; try it with a calculator and you will see. <A> Microchip PIC has the QEI <S> some NXP processors have a QEI <S> STM32 ARM processors have an encoder mode that shares two timers <S> AVR chips with two timers and a configurable logic unit can be set up to efficiently read quadrature encoders <S> Multiple NXP ARM processor families have a QEI: take a look at the LPC17xx, LPC18xx, and LPC43xx series
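The unsigned-to-signed trick described above can be sketched as follows; the 32-bit counter width is just an example:

```python
def signed_delta(new, old, bits=32):
    """Difference of two free-running unsigned counter readings,
    reinterpreted as a signed (two's-complement) value, so roll-over
    automatically yields the shortest distance and direction."""
    mask = (1 << bits) - 1
    diff = (new - old) & mask
    if diff >= 1 << (bits - 1):   # top half of the range -> negative
        diff -= 1 << bits
    return diff

# counter rolled over from near the top of its range back past zero
print(signed_delta(20, 0xFFFFFFF0))   # 36  (moved forward through roll-over)
print(signed_delta(0xFFFFFFF0, 20))   # -36 (moved backwards through roll-over)
```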
Many microcontrollers have hardware quadrature encoder support:
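The interrupt pseudo code in the first answer above can be checked in software by simulating an ISR that fires on each rising edge of channel A; the sample sequences below are idealised quadrature waveforms:

```python
def count_on_a_rising(samples):
    """Count encoder position from (A, B) samples, emulating an ISR
    that fires on each rising edge of channel A."""
    position = 0
    prev_a = samples[0][0]
    for a, b in samples[1:]:
        if a and not prev_a:            # rising edge on A -> "interrupt"
            position += -1 if b else 1  # B high: anti-clockwise, else clockwise
        prev_a = a
    return position

# clockwise: A leads B; anti-clockwise: B leads A (two A rising edges each)
cw  = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]
ccw = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1)]
print(count_on_a_rising(cw), count_on_a_rising(ccw))  # 2 -2
```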
Can I disable a battery powered device by reversing half of its batteries? When I have a device that is powered by two batteries, can I disable it by turning one battery around? For illustration, the purpose is to leave batteries in the device for storage, but it does not have an off switch. There are multiple solutions for this, but I would like to understand specifically the solution using reversed batteries. My idea is as follows: By turning around one of two 1.5 V cells, I let the + poles touch in the middle. The voltage between the - poles should now be 1.5 V - 1.5 V = 0 V. Therefore, there is no current through the device. The problem is: Two batteries may not be perfectly balanced. One of them may be more discharged than the other, so they have different voltages. It could work if the battery with more charge would be automatically discharged until it reaches the same level. Then, the cells would get into perfect balance, and stay that way even with fluctuations. Does that make sense? What properties of the device are required? If it works with two batteries, does it work too with other even numbers of battery cells? <Q> If the batteries are not perfectly balanced they would still have some net voltage. <S> And that is assuming that the batteries are connected in series. <S> If the batteries are connected in parallel, flipping one of them the other way around will basically create a dead short between them. <S> At best, that is going to drain them. <S> In either case, storing batteries in a device is not a great idea. <S> If the device ends up being stored for a longer period than you anticipated, and the batteries start leaking, the internals of the device is hardly an ideal place for them to spill their fluids. <S> Retrofitting an off switch would be better, although that still doesn't solve the issue of long time storage and leaking. <A> You can , but as per @Dampmaskin's answer, it's not really the best plan. 
<S> Why not do as is often seen in new electronic devices, and insert a plastic slip between a battery and one of the terminals (or between two batteries)? <S> If the slip is long enough to be seen outside the closed battery box, and a clearly visible sign says something like "remove before use", you can just pull the slip to "reactivate" the device. <A> Very often, multiple batteries are series-connected. <S> And the total voltage is used to power "stuff". <S> In that case, flipping a battery to turn off current works just fine. <S> However, the less-usual case of parallel-connected batteries won't allow you to flip one: the resulting failure could be spectacular. <S> It would be unusual, but possible, that the series-connected mid-point between the two batteries is used for some purpose. <S> This could be spotted by a wire emerging from the bridge that joins one battery "+" to the other battery "-". <S> If the bridge is free of wires, you're good to go. <S> Keep in mind that battery-flipping as an on-off mechanism involves a lot of battery-handling. <S> Keep your fingers off those battery ends where electrical contact is made, and avoid touching the bridge or other electrical contacts. <S> Finger grease/acids can increase contact resistance. <S> The other obvious caution is to check which battery is flipped! <S> You can easily apply reverse voltage to your electronics if you guess wrongly. <S> After a few microseconds, you have likely trashed your device. <A> You can. <S> It is a bad idea for a few different reasons, besides those already listed. <S> You can yourself forget what you did. <S> Depending on how important the device is, you may swear a lot. <S> Or worse. <S> Batteries are geometrically asymmetric. <S> You may have a hard time inserting them in the battery compartment and/or deform the contacts, the compartment, the cap or something else.
<S> Residual voltage - some (electronic) devices are especially vulnerable to very low voltages and may break in an unexpected manner. <A> As others have said, reversing one half of the batteries is likely to work... with some possibilities for trouble (leakage when the batteries expire and so on). <S> However, that only works if your batteries are AAA, AA, C, D or similar cylindrical batteries with end connections... and only under circumstances where the container has open contacts. <S> I've seen battery holders where polarity is 'enforced'... <S> where the +ve end fits against an insulating plate with a hole or slot in it <S> so when you turn the battery round no contact is made at the +ve connector (obviously the -ve end of the battery that is the wrong way round is sitting against the +ve end of the holder). <S> The big -ve end of the battery won't fit into the hole or slot so the battery sits against the plastic insulator. <S> That's obviously better than reversing because it's now an open circuit so it doesn't matter if the batteries don't have the same voltage... <S> and you can break the circuit by turning only one battery round in a device with more than two batteries. <S> If your device uses button batteries, these are often connected to one end and to the side (or sides)... <S> so turning round such a battery means you've got continuous metal between the +ve and -ve connectors of the holder... <S> so the 'other' battery won't see the opposing voltage and the device will receive 1/2 of its expected voltage. <S> And one battery will run flat. <A> As others have said, reversing half of the batteries is likely to work.... <S> for a while. <S> But it depends on what kind of batteries you have. <S> For example, Energizer batteries are guaranteed NOT to leak, whereas Duracell batteries are guaranteed to leak corrosive fluid into your device. <S> If you use Duracells, remove them when your device is in storage.
Reversing one or more of them will change the length of the battery pack and/or present a battery terminal to the wrong contact. You can also use posterboard or any other relatively rigid but thin non-conductive material to accomplish the same effect.
Fundamentals of Output Characteristics of a BJT, what is the difference between load line and output characteristics? In any transistor, we plot the output characteristics, i.e. the effect of a change in Vce on Ic. Next, we write the KVL equation of the transistor, i.e. Vcc = IcRc + Vce, and plot the load line from this equation. So, now we have 2 relations between Vce and Ic - output characteristics and load line. Why do we have 2 plots for the same 2 variables, when will each be used and what do they signify? <Q> The transistor characteristics say what the transistor does. <S> The intersection of the transistor characteristic lines with the resistor load line says what the amplifier does. <A> The load line plot does a poor job of demonstrating the non-linear voltage error of a large swing output. <S> But it does tell you the base current needed vs output voltage. <S> A better plot would be error voltage vs output for a given load current V/R and current or voltage gain. <S> But for high current and full swing, you can see the spacing between base-current steps changes quickly below Vce=2V. <S> This tells you the difference in peaks gets worse, and this means harmonic distortion. <S> So avoid Vce<2V. hFE begins to drop here, to the left of the M point defined by the saturation zone, but depends greatly on collector current. <S> Low collector current also has poor hFE, so operation below 10% of max current Ic should also be avoided. <S> This is defined by the N point and must also be avoided, so Q is the middle of the M-N range. <S> This means for the poor simple H-bias design the useful output swing is reduced in this case to 2/3 Vcc. <S> This can be easily fixed by adding Rcb negative feedback and a series input R with other changes needed, but then output linearity can be improved by the amount of gain reduction. <A> The important piece of missing information is that in the circuit, current through the resistor flows into the collector -- both currents are Ic, and they are the same.
<S> So, for a given base current, the blue line shows you how Ic changes with Vce, as determined by the transistor. <S> The red line shows you how Ic changes with Vce, as determined by the resistor. <S> Since these currents are the same , the actual Vce and Ic in the circuit will be the point where the blue and red lines meet.
The load line says what the resistor does.
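The intersection idea above can be illustrated numerically: sweep Vce and find where the transistor's curve and the resistor's load line carry the same Ic. The supply, resistor and the crude saturation model below are made-up illustrative values, not from any datasheet:

```python
# Find the operating point: intersection of a (simplified) transistor
# output characteristic with the resistor load line.
Vcc, Rc = 12.0, 1000.0    # 12 V supply, 1 kOhm collector resistor (assumed)
Ic_active = 5e-3          # flat part of the curve: Ic = 5 mA for this base current

def ic_transistor(vce, vsat=0.2):
    # crude model: Ic ramps up in saturation, then is flat in the active region
    return Ic_active * min(vce / vsat, 1.0)

def ic_load_line(vce):
    # KVL: Vcc = Ic*Rc + Vce  ->  Ic = (Vcc - Vce) / Rc
    return (Vcc - vce) / Rc

# sweep Vce in 1 mV steps and pick the point where the two currents match
vce_q = min((v / 1000 for v in range(0, 12001)),
            key=lambda v: abs(ic_transistor(v) - ic_load_line(v)))
print(round(vce_q, 2), round(ic_load_line(vce_q) * 1000, 2))  # 7.0 5.0 (V, mA)
```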
Safety: Can I harm myself using an AC, 9V 500mA power supply for experiments? I'm about to embark on my first electronics project and would like to check off a safety question: I want to build a bicycle hub dynamo -> USB charger based on these instructions . It is basically a rectifier circuit to convert 6V, 3W AC produced by the hub dynamo to 5V DC needed for USB. In order to conveniently test and experiment with the circuit, I'm thinking about buying a wall power adapter ( this one) that converts 230V AC (Europe) to AC, 9V 500mA, since this is very close to the 6V AC, 500mA that the hub dynamo provides. I did some research and it looks like AC is up to 4 times more dangerous than DC. My question: would it be dangerous if I accidentally came into contact with the 9V, 500mA AC provided by the wall power supply? Thanks a lot! EDIT: To clarify, by "safe" I mean, "Can I harm myself?". <Q> Yes, totally safe. <S> AC or DC below, say, 30 volts will never shock you except maybe in extreme cases like putting both leads on your tongue. <S> Once you get to around 120VAC it will hurt you, but even then it's unlikely to kill you unless you get stuck on it or have a heart condition or something. <A> Below 50V AC / ~65V DC is considered non-lethal for most of the population; below about 15V AC / 20V DC it's rare that you would even feel it from touching the contacts. <S> So yes, perfectly safe. <A> For example, if you stuck two needle electrodes through your skin into your bloodstream, one on each arm. <S> Even then you would have to be pretty unlucky to die due to heart fibrillation. <A> As the other answers say, voltage as low as 9v, even AC, is not going to harm you by electric shock, unless surgery is also involved. <S> However, 9v at 500mA is 4.5 watts, which is enough to set something on fire.
<S> If that 500mA is a rated output, then it might be able to push out several times that current into a low resistance load before the supply's thermal trip goes, or perhaps even indefinitely.
The only way such a voltage could be dangerous is if the skin barrier is broken.
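The shock-risk reasoning above is just Ohm's law; a quick sketch, using commonly quoted (and here assumed) body-resistance figures:

```python
# Rough shock-current estimate: I = V / R.  The ~100 kOhm dry-skin and
# ~1 kOhm broken-skin resistances are typical textbook assumptions.
def shock_current_ma(volts, body_ohms):
    return volts / body_ohms * 1000

print(round(shock_current_ma(9, 100_000), 3))  # 0.09 (mA) dry skin: imperceptible
print(round(shock_current_ma(9, 1_000), 1))    # 9.0 (mA) broken skin: noticeable
```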
Why optocouplers for MIDI Out/Thru? I found this article: midi-in-through-out Credits by Zynthian I like that it has notification LEDs for MIDI in/out/thru. However, I also notice there are optocouplers for MIDI Out and Thru, while the MIDI electrical spec, page 2/3, only defines an optocoupler for MIDI In. Is this overprotection or are the additional optocouplers useful? <Q> Optocouplers are used for galvanic isolation. <S> In most cases, this protects against dangerous voltage differences, but in MIDI inputs, it just prevents ground loops. <S> However, the optocouplers used for the MIDI outputs in the linked schematic do not provide any isolation whatsoever, because the grounds and +5V power supplies are connected together (they are the same). <S> Those optocouplers could be replaced with a simple transistor, or (because no amplification is needed) with a piece of wire. <S> That schematic is not the work of a competent designer. <S> Ignore it. <A> If you trust the other guy to put one on his MIDI in, it's over-protection. <S> But - in the real world, it's 10 cents well spent. <S> Ground loops are a big problem in audio systems. <S> MIDI is not particularly high speed, so there is no reason not to do it. <S> EDIT: as others have pointed out, the opto in this schema has nothing to do with isolation. <S> In fact it is not relevant to the MIDI interface at all as it is isolating between the driver and the internal circuit (this is the OUT side). <S> Take a look at this description of the MIDI interface hardware. <S> This version is done a bit differently - pin 5 is grounded, pin 4 driven high. <S> It comes to the same thing. <S> But the opto here is for interfacing with the micro, nothing to do with the MIDI wiring. <S> (I didn't look hard enough the first time.) <A> The extra two optocouplers seem to be there not for ground isolation purposes - both of their sides are ground-referenced so they don't block ground loops.
<S> In a rather unconventional way, and not within the required resistance tolerance either. <S> And the outputs are missing the ground pin for the connector shield as well. <S> The input circuitry leaves quite little current for the optocoupler input - but the higher the visible LED forward voltage is the more current the optocoupler gets. <S> In this case it is designed with a green LED so the optocoupler current is still above the turn-on threshold.
The extra optocouplers just seem to be doing logic level translation, buffering and driving the thru and out connectors.
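For reference, the current available to the input optocoupler's LED follows from the MIDI current loop; a quick sketch (resistor values per the MIDI 1.0 electrical spec, LED forward drop is an assumed typical figure):

```python
# MIDI is a ~5 mA current loop: the 5 V transmitter has a 220 ohm resistor
# on each of its two lines, and the receiver adds a 220 ohm resistor in
# series with the optocoupler's input LED.
V_supply = 5.0
R_total = 220.0 + 220.0 + 220.0   # two at the transmitter, one at the receiver
V_led = 1.2                        # typical opto LED forward drop (assumption)

loop_current_ma = (V_supply - V_led) / R_total * 1000
print(round(loop_current_ma, 1))   # ~5.8 mA through the opto LED
```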
Connecting 2 external PCBs with a 1m cable I would like to connect 2 PCBs. On each PCB there is an MCU and each MCU is able to send and receive commands from the other one. Also, PCB1 should supply power to PCB2. The 2 PCBs are separated by ~100cm and I'm wondering what kind of hardware connection/protocol I should use, knowing that I would like to limit the number of wires between the 2 PCBs. My first guess was to use UART / RS232, so it can be full duplex, and probably fast enough: 1 TX 1 RX 1 Vcc 1 GND I tested it, and it works (115200 baud rate), there's just a small ~20ms latency getting a reply when sending a request from PCB1 to PCB2. I limited the baudrate to 115200 because I don't want the connection to radiate. So I was told I could use 2 additional wires and communicate using SPI (which sounded a bit unusual to me when connecting two external PCBs together), so I'm wondering if it's a good idea, what would be the drawbacks? <Q> I tested it, and it works (115200 baud rate), there's just a small ~20ms latency getting a reply when sending a request from PCB1 to PCB2. <S> That latency is almost certainly nothing to do with the data rate of 115 kbaud. <S> In 1 ms there can be 115 bits transmitted so, <S> unless your message protocol requires thousands of bits, the data rate is not what is causing the 20 ms and you are going to be stuck with that latency. <S> So I was told I could use 2 additional wires and communicate using SPI and <S> On each PCB there is an MCU and each MCU is able to send and receive commands from the other one. <S> Not all that easily achievable if you are wishing to swap who is master and who is slave. <S> I'd stick with UARTs and dump the RS232 bit - just use logic levels. <S> SPI and UARTs will generate EMI and there's no getting away from the fact that for equal baud rates, the EMI will be very similar. <A> I wouldn't use SPI at one meter due to the fact that the MISO pulse doesn't start until the slave receives the clock pulse, yet must reach the master before the next clock pulse.
<S> Just sayin' <S> It's also a pain since the link is always initiated and controlled from one end and dummy bits are required for the master to receive when it's not sending. <S> It's even trickier for the master to receive when sending (duplex communications). <S> You have to take into account the previous command in your current transmission to interpret the data being received at the same time <S> so it aligns properly since messages may be different lengths. <S> Also think about what this entails for communications where the slave might want to initiate a transmission while the master is mid-transmission, or even when the master is just idle. <S> The mid-transmission problem is the previous paragraph on steroids. <S> The beauty of UART is that it is really easy to synchronize (i.e. no need to synchronize data travelling both ways; either side can just start talking whenever it wants). <S> If you are going to add 2 more wires anyway (4 signal wires total) just go with full duplex RS422/RS485 and crank up the baud and distance. <S> I easily got several Mbps without trying and without termination and was mainly limited by my USB to UART converter which was required to snoop communications with my PC for debugging. <S> Differential signalling reduces the radiation at higher baud too. <S> You should be able to get at least a few Mbps at a meter or two, or three easily. <A> As others have noted, I would avoid SPI at that distance. <S> However, if you want the link to be faster, I would recommend passing your UART signals through a full-duplex LVDS transceiver on each PCB. <S> The differential signals will minimize radiated emissions, allowing your bit rate to increase significantly. <S> Something like the DS90LV019 should meet your requirements. <S> Just be careful that you don't try to increase your bit rate too much, or you will risk having to change your protocol to something that is DC-balanced (such as 8b/10b encoding).
<S> You will also need to use twisted pair cable (such as Cat5e) and provide adequate termination at each end.
I firmly stay away from SPI for this and stick to duplex UART (either RS232 or full duplex RS422/RS485).
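The 115 kbaud arithmetic from the first answer, worked out; a quick sketch assuming standard 8N1 framing:

```python
# At 115200 baud a UART frame (1 start + 8 data + 1 stop = 10 bits)
# takes ~87 us, so the observed ~20 ms latency cannot be the wire speed.
baud = 115200
bits_per_ms = baud / 1000
frame_time_us = 10 / baud * 1e6

print(int(bits_per_ms))           # 115 bits per millisecond
print(round(frame_time_us, 1))    # 86.8 us per byte
print(round(20e-3 * baud / 10))   # 230 bytes could be sent in 20 ms
```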
Can I increase the power rating of a film resistor if I increase the cooling? I am trying to apply 1 kW of heat to an area of 10x10 mm and I am considering using a panel mounted film resistor. One that meets my power requirements doesn't exist so I am wondering if I can go above the normal power rating if I sufficiently cool the resistor. Could I use a lower powered one as long as the heat was adequately removed? (I should say the point of this is I am developing a heat sink for high powered electronics and the temperature of the resistor wouldn't go above the normal recommended temperature) <Q> For a 100 °C rise per 1 kW input you would need a CPU-style heat sink with 0.1 °C/W thermal resistance, with the size and forced air of a super-cooler heatsink. <S> These are typically much larger (10x) than your allocated area and have >10 m/s air velocity over a large number of fins. <S> Water cooling would be necessary to reduce the area to 1 cm^2, with a very efficient heat exchanger. <A> In general, yes you can push a resistor beyond its ratings just as you would a transistor. <S> We have cooled inductors and capacitors as well. <S> However, the devil is in the details. <A> In general, no you cannot operate a device beyond its specified maximum operating conditions without risking failure. <S> There are also limits on the maximum current and the maximum voltage. <S> If you are attempting to dissipate 1kW through a resistor then either your voltage or current are probably very high.
Remember that the power rating of a resistor is not the only limitation.
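The heat-sink arithmetic from the first answer is just ΔT = P × Rθ; a quick sketch (the 0.5 °C/W case is an illustrative comparison, not a quoted part):

```python
# Temperature rise across a thermal resistance: Delta_T = P * R_theta.
def temp_rise(power_w, r_theta_c_per_w):
    return power_w * r_theta_c_per_w

print(temp_rise(1000, 0.1))   # 100.0 (C rise): needs an exceptional heat sink
print(temp_rise(1000, 0.5))   # 500.0 (C rise): far beyond any film resistor
```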
Why is the Digital 0 not 0V in computer systems? I'm taking a computer system design course and my professor told us that in digital systems, the conventional voltages used to denote a digital 0 and a digital 1 have changed over the years. Apparently, back in the 80s, 5 V was used as a 'high' and 1 V was used to denote a 'low'. Nowadays, a 'high' is 0.75 V and a 'low' is around 0.23 V. He added that in the near future, we may shift to a system where 0.4 V denotes a high, and 0.05 V, a low. He argued that these values are getting smaller so that we can reduce our power consumption. If that's the case, why do we take the trouble to set the 'low' to any positive voltage at all? Why don't we just set it to the true 0 V (neutral from the power lines, I guess) voltage? <Q> You are getting confused. <S> Look at TTL, for example: a low input level is between 0 volts and some small value above 0 volts (0.8 volts in the case of TTL). <S> why do we take the trouble to set the 'low' to any positive voltage at all? <S> We take the trouble to ensure it is below a certain small value. <S> Picture from here .
<S> Spending money trying to make it infinitely perfect would not be a good investment of design funds either. <S> Digital circuitry has proliferated and advanced so fast because it uses huge numbers of copies of the very simple and tolerant circuits that are logic gates. <S> The binary states 1 and 0 are represented in digital logic circuits by logic high and logic low voltages respectively. <S> The voltages representing logic high and logic low fall into pre-defined and pre-agreed ranges for the logic family in use. <S> The ability to work with voltages within these ranges is one of the primary advantages of digital logic circuitry - it's not a failing. <S> Logic gate inputs can easily distinguish between logic high and logic low voltages. <S> Logic gate outputs will produce valid logic high and low voltages. <S> Small signal noise is removed as logic signals pass through gates. <S> Each output is restoring the input signal to a good logic voltage. <S> With analogue circuits, it is somewhere between difficult and practically impossible to distinguish noise from the signal of interest and to reject the noise entirely. <A> In addition to the points made by the other answers, there is the issue of parasitic capacitances at high switching speeds (the usually ignored capacitance of wires and other components). <S> Wires usually also have a slight resistance. <S> (A very simplified model - schematic created using CircuitLab.) <S> Being an RC network, this results in an exponential falloff curve ( V ~ e^-kt ). <S> If the receiver sets its threshold very low (near 0V) then it would have to wait a significant time for the output voltage to drop enough to trigger the threshold. <S> This time might seem insignificant, but for a device supposed to switch a million (billion even) times a second, this is a problem. <S> A solution is to increase the "OFF" voltage, to avoid the long tail of the exponential function.
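The exponential-falloff argument above can be made concrete: the time to discharge to a threshold grows with the logarithm of V0/Vth. A minimal sketch with illustrative parasitic values (not from any real part):

```python
import math

# Time for a discharging RC node to fall from V0 to a threshold Vth:
# V(t) = V0 * exp(-t / RC)  ->  t = RC * ln(V0 / Vth)
R, C = 100.0, 10e-12   # 100 ohm driver, 10 pF of trace/input capacitance (assumed)
V0 = 1.0

def time_to_threshold(vth):
    return R * C * math.log(V0 / vth)

# lowering the "0" threshold toward 0 V costs ever more switching time
print(round(time_to_threshold(0.4) * 1e12))    # 916 ps to reach 0.4 V
print(round(time_to_threshold(0.001) * 1e12))  # 6908 ps to reach 1 mV
```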
<A> Because nothing is perfect and you need to provide for this with a margin of error. <S> Those numbers are thresholds. <S> If the lowest possible voltage in your system is 0V and your threshold is 0V, where does that leave you, given that ALL your components and wiring aren't perfect conductors (i.e. always have some voltage drop) and aren't noiseless in a noiseless environment? <S> It leaves you with a system that can never output 0V reliably, if it can even do it at all. <A> In a 2 rail system (usually chips powered with just a single positive voltage plus ground), whatever switch or device is pulling the output capacitance down to a low signal level has finite resistance, and thus can't switch a signal wire to zero Volts in finite time. <S> (Ignoring superconductors). <S> So some realistic lesser voltage swing is chosen which meets performance requirements (switching speed vs. power requirements and noise generation, etc.) <S> This is in addition to margins needed to cover ground noise (different ground or “zero” voltage levels between the source and destination circuits), other noise sources, tolerances, and so on.
<S> Of course, in the old days, we didn't have all the smart instrumentation that communicates much more diagnostic info. :-) <S> Update: <S> @Transistor has provided some additional insight, which is much appreciated. <S> For what it's worth, I did realize that there was a digital vs analog conflict in my response (mainly due to the highly technical comments/answers). <S> What I was trying to do was make an 'analogy argument', similar to water pressure vs voltage, to impart a possible reason to not 'just use 0V' as a basis. <S> However, I still may have missed the point. <S> @Transistor, I don't have enough Reputation to Comment, so my question back to you: <S> Should I delete my response? <S> I certainly don't want to mislead. <S> Thanks.
In usual logic, in ideal conditions, the logical zero would be precisely 0V.
Does an oscilloscope subtract voltages as phasors? I want to measure two voltages on an oscilloscope. They are not in phase. I want to know the magnitude of the voltage difference. The oscilloscope has an in-build math function to subtract the voltages from each other. But does it do that phasor wise ? Let us say V1=10V and V2= 15V and the phase angle is 20 degrees. We want the magnitude of V1 - V2. Does the oscilloscope give 5V, or does it take the phase into considerations when subtracting ? I am using the tektronix TDS 2024C. <Q> The math subtraction function in an oscilloscope subtracts the instantaneous values of waveform 1 from the instantaneous values of waveform 2 to create a new waveform comprising all those instantaneous subtractions. <S> Here is a pictorial example of a noisy sinewave subtracted by a clean sinewave with the output (in pink) being just the noise: - Picture from here . <A> The algebra of phasors is valid only for time invariant sinusoids of same frequency. <S> You can't apply algebraic functions on phasors whose sinusoids do not have the same frequency. <S> Thus your oscilloscope should niether assume the time invariance nor the identical frequency of your input signals. <S> Instead it will do a "point-by-point" time domain substraction of the input signals. <S> The quality of the result will depend then on the number of samples your oscilloscope has to work with. <A> I think you might be misunderstanding what phasors are . <S> They're not some magic Thing You Have To Do When It's Out Of Phase; they're a way of expressing things in terms of a sort of average, making a lot of assumptions about what the shape of the waveform is, that makes the math work out in the specific case of polyphase AC. <S> All the complicated bits of phasor math are just ways of making the result of the phasor calculation agree with the result of doing it the long way around, calculating out everything at each individual point in time. 
<S> Working with phasors is, in short, a way to make the calculations convenient for humans. <S> Oscilloscopes are not humans (at least not last I checked), and don't know or care about these averages. <S> They just subtract the signals at any given point in time. <S> Thus, they skip needing all the things that phasor math is good for and get straight to the result in a more direct method, which is both simpler to calculate and more generally applicable. <A> The easiest way to measure the relative magnitudes is simply to have the oscilloscope report the RMS of each signal. <S> Subtract one from the other. <S> Job done. <S> If this isn't an option... <S> You say the signals are not in phase, but you do not mention whether their frequencies are the same. <S> Assuming they are, most digital oscilloscopes have the ability to delay signals. <S> If the two signals are the same frequency, you can delay one relative to the other until the phases line up. <S> At that point subtracting one from the other can be done as normal. <S> This still works if the phase varies - you just have to do a single-shot capture and then align the phases for that capture, and repeat each time. <A> It's going to know that at 1 us past the trigger point, channel 1's signal is 5V higher than channel 2's signal and spit out "5V" as your answer. <S> Source: work for a scope vendor (Keysight). <S> You can often measure phase shift, max, and min of signals and do some quick math that way.
Oscilloscopes do a point-to-point subtraction, so it's not going to necessarily know that you have two periodic, phase-shifted signals. A basic (or expensive) oscilloscope does not understand phasors.
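Using the question's numbers (10 V, 15 V, 20°), one can check that the scope's point-by-point subtraction of two same-frequency sinusoids yields a sinusoid whose amplitude equals the phasor-difference magnitude; a quick sketch:

```python
import cmath, math

# Point-by-point subtraction of two same-frequency sinusoids, as the
# scope's math channel does, compared with the phasor difference.
V1, V2, phase = 10.0, 15.0, math.radians(20)

# phasor result: |V1*e^{j0} - V2*e^{j*20deg}|
phasor_mag = abs(V1 - V2 * cmath.exp(1j * phase))

# time-domain result: peak of v1(t) - v2(t) over one cycle
peak = max(abs(V1 * math.sin(t) - V2 * math.sin(t + phase))
           for t in (k * 2 * math.pi / 100000 for k in range(100000)))

print(round(phasor_mag, 2), round(peak, 2))  # both ~6.56 V, not 5 V
```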
How to control the output voltage of a solid state relay Considering this solid state relay: The output voltage can be from 24 to 380V. How can I control this voltage? Is it proportional to the input voltage? What I need is a voltage regulator that can be controlled from a Raspberry Pi in order to control the speed of a fan. Currently, I use something like the circuit below, which works as expected. I am completely new to electrical engineering, so please be kind to me... <Q> I think you've misunderstood how a solid-state relay works. <S> The coil, or input, will work on 3 to 32 VDC. <S> The contacts, or output, are rated for 24 to 380 VAC. <S> When the "coil" receives its required voltage (SW1 closes below) <S> it closes the internal "contact". <S> In a normal relay, this is an electromechanical process. <S> In an SSR it is an electronic process. <S> Either way, this keeps both sides isolated from each other. <S> General Relay Diagrams: simulate this circuit – Schematic created using CircuitLab <S> The voltage that the coil operates on and the voltage at the output depend on what voltage you feed into either side. <S> There is no conversion going on in a relay. <S> It is just a switch. <S> I wouldn't suggest using what you've displayed as the main component of a voltage regulator. <A> A Solid-State Relay (SSR) is just that: a solid-state (semiconductor) version of a relay. <S> A relay is an electro-mechanical switch that either connects or disconnects contacts together under control of an electro-magnet within it. <S> So your SSR is an on-off switch. <S> Unlike an electro-mechanical relay, it has no moving parts or electro-magnet. <S> Its functions are carried out by solid-state components. <S> It has a lot of benefits over electro-mechanical relays. <S> One that is useful in many applications is that it cannot spark as it opens or closes, because there are no mechanical contacts or moving parts. <S> And yours is only for use on AC voltages between 24 Vac and 380 Vac.
<S> If you pass DC through it, it can switch its contacts on but can't switch them off. <S> The DC supply would have to be removed elsewhere. <A> You may misunderstand how fans work. <S> In fact, in residential wiring a very common mistake is to try to control fan speed with a lamp dimmer , which is itself not rheostatic and is a triac leading-edge or trailing-edge device. <S> You should be using a fan speed control which is compatible with the type of fan. <S> For instance some fans have several wires going into a multi-position switch (e.g. the "pull the chain for the next speed" types). <S> Others want a fan speed control . <S> Yet others want a variable frequency drive. <S> "How does a consumer/residential fan speed control work" might be a good question.
You need to check the spec sheet on the fan in question, but a great many fans do not regulate speed rheostatically.
Is there any memory cell that can store more than one bit? SRAM, DRAM, Flash, EPROM - all of these memory cells contain one bit of data each. Is there any memory cell that can store more than one bit, e.g. 2 bits/4 bits? <Q> Yes. <S> When you get down to the level of a memory cell, the circuitry is pretty analog. <S> So you can store multiple voltage levels in the cell, and interpret those multiple levels as encoding more than one bit. <S> There are parts in production that use this. <S> Google "multi-level memory". <A> Actually you’ll have a really hard time finding a NAND flash device that does <S> not store more than one bit per cell nowadays. <S> This is one of the reasons the price of memory cards and SSDs (in terms of $/GB) has dropped so dramatically over the years. <A> Of the semiconductor memories on the market, only some Flash EPROM technologies store more than 1 data bit per memory cell. <S> You'll find details of those parts on the internet.
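The multi-level idea can be sketched numerically: four voltage windows in one cell encode two bits. The read thresholds and code mapping below are purely illustrative, not taken from any particular datasheet:

```python
# Hypothetical 2-bit multi-level cell: four voltage windows map to
# four 2-bit codes.  The thresholds are illustrative only -- real
# flash parts use internally calibrated read levels.
THRESHOLDS = [1.0, 2.0, 3.0]          # volts, made-up read levels
CODES = [0b11, 0b10, 0b00, 0b01]      # a common MLC mapping (erased = 11)

def read_mlc(cell_voltage):
    """Quantize a cell's threshold voltage into a 2-bit value."""
    level = sum(cell_voltage >= t for t in THRESHOLDS)
    return CODES[level]

# Four distinct 2-bit codes recovered from one analog cell
print(read_mlc(0.5), read_mlc(1.5), read_mlc(2.5), read_mlc(3.5))
```

The price of packing more levels into one cell is that the windows between thresholds shrink, which is why MLC/TLC parts trade endurance and read margin for density.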
As an example, the Cypress Mirrorbit technology stores two data bit in a single Flash memory element, utilising four stored voltages to do so.
PIC12F508 "Calibration value is invalid" We use PIC12F508 chips by the tens of thousands. We normally purchase from people like Future Electronics or Arrow. However, I see that our purchasing people are now buying 12F508 chips from someone with whom I am not familiar: Technoshack in Ontario, Canada. Most of the places where we use this PIC controller do not have stringent timing requirements. The native accuracy of the internal RC oscillator is entirely adequate. Quite frankly, even an uncalibrated RC frequency is adequate. However, a few projects / products do need the native accuracy of 5% or better. This is never a problem when we load the calibration value that is programmed in the factory. My problem arises with the last two batches of chips supplied by Technoshack. These arrived in sealed bags with Microchip tamper-evident seals. However, MPLAB reports that the chip calibration value is invalid. I've checked with two different programmers: PICstart Plus and PICkit 3. Both report the same error. I spot-checked one chip from each of 30 or so different rails. All have the same problem. I'm not sure how to go about dealing with this. Obviously, I need to get Microchip involved but I'm not sure who to talk with. Guidance appreciated. <Q> They appear to be operating out of a house in Brampton (a heavily ethnically Indian and South Asian area of the GTA). <S> Their US address in Charlotte NC is also a house. <S> Maybe they are counterfeits or possibly genuine Microchip devices sold at a discount, perhaps into Asian markets, that do not have osccal programmed - check the exact part number on the packaging and chips. <S> I believe Microchip has some reduced-cost variants that are sold only in Asia. <S> Technoshack does not appear to be a franchised distributor. <S> I suggest you contact the vendor (Technoshack) and ask some questions. <S> They may have come by them through some other supplier.
<S> Then, if you don’t get resolution, go to Microchip and determine if the parts are genuine (and are full-spec), and if so whether they meet your requirements or if there is a work-around. <S> My recollection is that some parts were being sold as untested or partially tested into Asian markets for use in toys etc, at a fraction of the usual price. <S> Of course the part numbers would be a bit different if Microchip supplied them. <A> Technoshack in Ontario, Canada appears to be an electronics parts broker. <S> I cannot find them listed as an authorized Microchip distributor. <S> Under the best of circumstances a broker finds excess inventory and acts as a middleman to facilitate its purchase for buyers. <S> Too often they enable entry of gray-market parts into the supply chain. <S> Occasionally the parts are outright fakes, or counterfeits. <S> I know from articles on the internet that Microchip sells controllers at very low cost, <S> like the PIC12F508, that do not meet specifications. <S> These parts are intended for use in non-critical applications. <S> It would be a simple matter for a die packager to put on an extra shift and run off a few hundred reels of these parts with valid-looking markings for sale at fully tested prices. <S> The brokers I've dealt with do not usually offer value-added services like validating that the parts they handle are genuine. <S> This means that brokers can be fooled by this kind of operation. <S> Microchip will be able to tell whether the parts you have are genuine. <S> I doubt they have any leverage with Technoshack. <S> What you do know is that the parts you have are not factory calibrated. <S> As you have tested the parts before your product is assembled and programmed, you know that your process is not erasing the factory calibration. <S> The quick (dumb) fix is: <S> Use the PICkit 3 to regenerate the oscillator calibration. <S> The band gap calibration is lost.
<S> The proper solution is: return the parts to Technoshack for a refund. <S> The takeaway here is: unless you really do not need to care, do not buy erasable parts from a broker. <S> EVER! <A> The job title you are looking for is Field Applications Engineer, and you should find one local to your area. <S> They should be able to support you irrespective <S> of which supplier you bought the chips through as long as the chips are genuine.
Purchase parts only from an authorized Microchip distributor. These parts could be manufacturer overruns intended for scrap, parametric test failures, pulls from obsolete products, or rejects for other reasons.
What does the "capacitor into resistance" symbol mean? I can't understand the circled symbol and didn't manage to google it. It looks like a variable resistance and a capacitor. What does it mean? <Q> It's not one symbol. <A> Just to explain the potentiometer, since you have your answer now... if you look at one physically, you'll see a carbon track, from the left terminal to the right one. <S> That's a resistance. <S> The middle terminal connects to the wiper, the copper slider that contacts the track. <S> Turning the shaft moves the wiper, varying the resistance. <S> You seemed to understand that already. <S> In some cases you'd use just 2 terminals as a variable resistor. <S> But it's often more useful to use all three. <S> If you connect, say, 5V to one side, gnd to the other, then the wiper will give a voltage between 5V and gnd, variable. <S> If you connect one end to gnd, and the other to a signal, the wiper will give a voltage partway between the signal and gnd. <S> As in a volume control. <S> Often electrical circuits are controlled by a particular voltage somewhere that you want to control. <S> You need the 3 terminals of a potentiometer, aka "pot", to do that. <S> A simple variable resistance would, by itself, just limit the current coming through. <S> That's not always useful. <S> In most circuits, a pot is used with all 3 terminals. <A> As others say - it's a capacitor on a pot wiper. <S> Q1 is an inverter with a gain (without the mystery pot) of about 38 x (V+ - 1) [for reasons related to transistor physics] ~= 300 <S> if V+ = 9V, but with Vc_Q1 almost at ground. <S> Q2's emitter provides a buffered inverted input signal which is fed back via the left-hand 100k pot to stabilise gain. <S> The position of the pot wiper alters the frequency response of the RC feedback network in a probably undesigned and 'interesting' manner. <S> Wiper far left = <S> a sub-1 Hz low-pass filter for feedback PLUS a large cap on the input, <S> so the signal is probably low.
<S> Pot wiper <S> far right - the Q2 emitter follower drives the cap, but it again probably clamps the voltage enough to prevent feedback, <S> so you get high gain overall plus high gain from Q2, so massive output clipping. <S> As you slide the pot right to left you probably get increasing modification of the signal, a reduction in overall gain, but a change in frequency response. <S> The circuit appears to require enough signal to drive Q1 into conduction, so on very low signals it probably produces no output. <S> I started to say what that change would be but decided "it's complex" :-). <S> It would sound very bad (or very good with the right ears on). <S> As a 'bonus' the circuit acts overall as a frequency-response-modified Schmitt trigger. <S> I won't even start to try to suggest what happens with pot variation - but simulation would be interesting.
It's just a capacitor connected to the wiper terminal of a potentiometer.
how to determine architecture core detail of ARM11 processor I'm cross-compiling for an embedded Linux board, based on the BCM5892 ARM11 processor. I need to know the architecture detail of this processor (‘armv6’, ‘armv6j’, ‘armv6k’, ‘armv6kz’, ‘armv6t2’, ‘armv6z’, ‘armv6zk’) to feed the -mcpu flag when compiling my application, but there is no such information on the Broadcom website and the manufacturer does not provide any information either. Is there any way to determine this information from the processor or OS? <Q> From this website , the answer is ARMv6TEJ: ~ $ cat /proc/cpuinfo <S> Processor : ARMv6-compatible processor rev 5 (v6l) | BogoMIPS : 398.13 | Features : swp half thumb fastmult edsp java | CPU implementer : 0x41 | CPU architecture: 6TEJ | CPU variant : 0x1 | CPU part : 0xb36 | CPU revision : 5 | Hardware : Broadcom BCM5892 Chip | Revision : 0000 | Serial : 0000000000000000 <S> Also confirmed from the Linux boot messages here : [ 0.000000] Linux version 2.6.32.9 (root@localhost.localdomain) (gcc version 4.2.3) #72 PREEMPT Mon Mar 19 01:37:54 EDT 2012 | [ 0.000000] CPU: ARMv6-compatible processor [4117b365] revision 5 (ARMv6TEJ), cr=00c5387d | [ 0.000000] CPU: VIPT aliasing data cache, VIPT aliasing instruction cache | [ 0.000000] Machine: Broadcom BCM5892 Chip <A> Note that your examples ‘armv6’, ‘armv6j’, ‘armv6k’, ‘armv6kz’, ‘armv6t2’, ‘armv6z’, ‘armv6zk’ are architecture variants, not CPUs. <S> They will go under the -march argument. <S> Due to the processor model you mentioned I think your embedded system is a POS (Point of Sale). <S> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dht0001a/CHDCDJCJ.html <S> The corresponding arch for that processor is armv6z. <S> To avoid "locking down" the code to working well only on a single CPU model, you can use the -mtune flag (which takes the same arguments as -mcpu) to produce the best code for a specified CPU, while keeping compatibility across all CPUs of the selected arch.
<S> Your arguments can be like the following: -march=armv6 -mtune=arm1176jz-s -mcpu=arm1176jz-s <A> If you are running on a Linux system, you can use 'cat /proc/cpuinfo' to get some more details about the CPU. <S> Your feature detail set may be listed there.
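To pull those fields out programmatically, one could parse the cpuinfo text. The sketch below works on the dump quoted in the answer rather than reading /proc/cpuinfo directly, so it runs on any machine; on the target board you would read the file instead:

```python
# Parse the fields of interest from a captured /proc/cpuinfo dump
# (the text below is the output quoted in the answer).
CPUINFO = """\
Processor       : ARMv6-compatible processor rev 5 (v6l)
BogoMIPS        : 398.13
Features        : swp half thumb fastmult edsp java
CPU implementer : 0x41
CPU architecture: 6TEJ
CPU variant     : 0x1
CPU part        : 0xb36
CPU revision    : 5
Hardware        : Broadcom BCM5892 Chip
"""

def parse_cpuinfo(text):
    """Return a dict of 'key : value' fields from cpuinfo-style text."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

info = parse_cpuinfo(CPUINFO)
# Implementer 0x41 is ARM Ltd; architecture 6TEJ means ARMv6 with
# Thumb, Enhanced DSP, and Jazelle extensions.
print(info["CPU implementer"], info["CPU architecture"], info["CPU part"])
```

On the board itself, `parse_cpuinfo(open("/proc/cpuinfo").read())` would do the same job.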
The CPU type is arm1176jz-s (missing the 'f', as I have not seen any POS with an ARM11 processor that supports hard-float).
What does D2 do in this schematic? Can somebody clarify what the purpose of D2 is in this schematic: I understand that we are taking a clock input from the Sync jack to the transistor. The reset pin (4) is pulled high to Vcc, until the NPN transistor is activated (pulled high) and it then drops pin 4 to GND through the transistor. (Can somebody please clarify if my wording is correct with this description? How could I be more clear when speaking about transistors?) However, I do not understand what the D2 diode is supposed to do. Is it some sort of protection in the case of an incorrect input at the Sync jack? Isn't that what the de-coupling cap is for? <Q> D2 protects Q1 from reverse bias on its B-E junction. <S> A typical small-signal transistor can only withstand a few volts of reverse bias before it breaks down, and an antiparallel diode limits it to about 0.7 V. R31 limits the current into the transistor when the input is positive, and it also limits the current through the diode when the input is negative. <A> From the look of it D2 is there to protect Q1's base-emitter junction from excessive reverse voltages. <S> The presence of C2 means the Sync input signal is AC coupled and it eliminates any DC bias, but now the Sync signal will attempt to drive the base of Q1 equally positive and negative. <S> In the positive direction the b-e junction will conduct in normal transistor operation, but on the negative excursion there would be nothing to limit the voltage if D2 were not in place. <S> D2 also serves to clip any negative noise pulses if they are present in the environment. <A> Depending on the duty cycle of the incoming "hard sync", you may need the diode to provide DC restoration. <A> Without the diode the capacitor will charge up as base current flows into the transistor when the signal goes high, but will have no discharge path (apart from the Vbe breakdown at ~-7.5V) when it goes low.
<S> This will cause the capacitor to accumulate a negative charge, 'cutting off' the transistor with negative bias. <S> I simulated the sync circuit in LTspice with a 10V pulse waveform. <S> Without the diode the Base and Collector waveforms looked like this:- <S> The capacitor quickly charged to -7.5V and stayed there, with positive pulses just reaching the Base turn on voltage of 0.6V. <S> The transistor barely managed to get 2 pulses out before it became cut off. <S> With the diode in place the waveforms changed to this:- <S> Now when the sync signal goes low the diode clamps the Base to -0.6V and discharges the capacitor, so when the signal goes high it produces a strong Base drive and the transistor maintains full output.
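The charge-balance behaviour described here can be approximated with a very idealized model: a 0.7 V B-E drop, a -7.5 V reverse breakdown, a 0.6 V clamp diode, and full settling of the coupling cap each half-cycle. These are all simplifying assumptions, so the numbers are cruder than the LTspice plots (the real circuit settles only partially each cycle), but the qualitative conclusion is the same:

```python
# Idealized charge-balance model of the AC-coupled sync input with a
# 0-10 V pulse train.  All diode drops and the -7.5 V B-E breakdown
# are illustrative assumptions, and each half cycle is assumed long
# enough for the coupling cap to settle fully.
def settle(v_in, v_cap, with_diode):
    """Return the updated cap voltage after one half cycle at v_in."""
    v_base = v_in - v_cap
    if v_base > 0.7:                        # B-E junction conducts
        v_cap = v_in - 0.7
    elif with_diode and v_base < -0.6:      # clamp diode discharges the cap
        v_cap = v_in + 0.6
    elif not with_diode and v_base < -7.5:  # B-E reverse breakdown
        v_cap = v_in + 7.5
    return v_cap

def base_drive(with_diode, cycles=20):
    """Base voltage at the start of the high phase, in steady state."""
    v_cap = 0.0
    for _ in range(cycles):
        v_cap = settle(10.0, v_cap, with_diode)  # signal high
        v_cap = settle(0.0, v_cap, with_diode)   # signal low
    return 10.0 - v_cap

print(round(base_drive(False), 2), round(base_drive(True), 2))  # → 2.5 9.4
```

Without the clamp diode the cap accumulates a large offset and the base drive collapses; with it, the cap is reset to about -0.6 V every low phase and the base sees a strong positive pulse each cycle, matching the simulation's conclusion.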
Apart from protecting the Base from excessive negative voltage, the diode is necessary to provide equal charge and discharge paths for the coupling capacitor.
Why do we need to use transistors when building an OR gate? Why do we need to use transistors when building an OR gate? Wouldn't we be able to achieve the same result without transistors at all, just by joining the two inputs and reading the output? <Q> What you describe is called a wired OR connection. <S> It is possible in some logic families, particularly ECL (emitter coupled logic), but not in the most common ones (TTL and CMOS). <S> In CMOS it isn't possible because when a CMOS output is low, it creates a very near short from the output pin through the chip to ground. <S> And when it is high, it creates a very near short from VDD through the chip to the output pin. <S> So if you tied two CMOS outputs together and one outputs high while the other outputs low, you'd have a very near short from VDD to ground, which would draw a large current and likely overheat one or the other of the two chips involved. <S> For TTL, there's a similar issue, but the "shorts" from the output pin to VDD or ground aren't quite as near short as they are in CMOS. <S> There's a variant output style, called open drain for CMOS or open collector for TTL <S> , that allows wired AND <S> connections rather than wired OR. <S> These outputs are designed to only be able to sink current to ground, not to be able to produce any output current when they're nominally in the high state. <S> These are normally used with an external pull-up resistor so that the output voltage will actually reach the "high" voltage level when required. <S> Note: <S> Open collector or open drain can be used for wired OR <S> if you use active-low logic (low voltage represents logic 1, high voltage represents logic 0). <A> this lets you "join the outputs" <S> simulate this circuit – <S> Schematic created using CircuitLab <A> If you just connect the wires, you'd have the (fairly likely) possibility of a 0 and a 1 together.
<S> Since a 0 is gnd, and a 1 is 5V (depending on the chips, but it's a standard), you'd have 5V and gnd connected together by wires. <S> The term for that is a short circuit! <S> You could use diodes for a simple OR gate. <S> Or even resistors. <S> The problems occur when you connect this gate to other gates, other circuitry. <S> You can build an AND gate from 2 diodes the other way round. <S> But if you try to connect a lot of them together you end up with one giant circuit that doesn't function as small separate parts, but as one big one. <S> Connections that aren't in your simple gate plan might crop up in real life, messing up what you want to happen. <S> A transistor lets you separate the input from the output. <S> The output of a transistor can't feed backward and affect its input. <S> A relay would be another alternative, though slower, since the switch can't affect the electromagnet. <S> Early logic was RTL or DTL: resistor-transistor logic, or diode-transistor logic. <S> Resistors, at first, then later diodes, were used to form the gate, then a transistor acted to buffer the result so the next gate you used didn't feed back through this one to <S> its inputs. <S> Now, since transistors on chips are virtually free of charge, financially that is, we have the luxury of everything being properly buffered and separate. <S> Usually that's what we want. <S> TTL logic! <A> Consider what happens if one input is high and one is low, and you connect the two inputs. <S> It depends on how you build your logic gates. <S> If your logic gates are designed so that a high is really pulled high and a low is really pulled low (CMOS) <S> then this is a short circuit and something will blow up. <S> If your logic gates are designed so that a high is "weak" or high resistance (e.g.
NMOS) <S> then the output will be low, but also the other input (the one that is supposed to be high) will be forced low, and this will have a knock-on effect on other logic gates which use the same input. <A> There is an analog approach: combine any number of inputs (suppose either 0 or 5 volts) with resistors. <S> If the resulting voltage is 0, all are off. <S> If the resulting voltage is 5, then all are on. <S> In-between voltages indicate that some are on and some are off. <S> Example: If there are 4 inputs, 2.5 volts means 2 are on and 2 are off. <S> result == 0: NOR gate; result == 5: AND gate; result != 0: OR gate; result != 5: NAND gate. <S> You don't need transistors for the inputs, just for the output to check the voltage and restore a 0 or 5 volt logical result. <S> This might be used for an analog neural network node with a non-linear output function that has a "soft" result that might not be entirely true or false. <S> Afterthought: Resistors used this way can slow down logic speed since capacitance following the resistors must be charged or discharged when inputs change. <S> Also, the use of transistors can greatly reduce power consumption. <S> Resistors used this way can always consume power with a mix of input states. <A> With some logic elements (all car door switches lighting up the same lamp) this is possible, but not for example with CMOS gates, as they are built with P- and N-channel FET transistors <S> so they need a defined high or low voltage input to provide the output; the input cannot be left to float. <S> Connecting CMOS outputs together would not work.
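The resistor-summing idea in the analog answer is easy to sketch: N equal resistors average N inputs of 0 V or 5 V, and simple thresholds on the averaged voltage recover the classic gate functions. Values are illustrative:

```python
# Sketch of the resistor-summing gate: equal resistors average the
# 0 V / 5 V inputs, and thresholds on the average recover the gates.
def summed_voltage(inputs):
    """Averaged node voltage for a list of boolean inputs."""
    return 5.0 * sum(inputs) / len(inputs)

def gates(inputs):
    v = summed_voltage(inputs)
    return {
        "NOR":  v == 0.0,   # no input high
        "AND":  v == 5.0,   # all inputs high
        "OR":   v > 0.0,    # at least one input high
        "NAND": v < 5.0,    # at least one input low
    }

# 2 of 4 inputs high -> 2.5 V: OR and NAND true, AND and NOR false
print(gates([True, False, True, False]))
```

As the answer notes, a real circuit still needs a transistor (or comparator) on the output to snap the in-between voltage back to a clean 0 V or 5 V logic level.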
With transistors, power consumption can be roughly divided by the gain of the transistors.
How to make affordable DIY low-res absolute rotary optical encoders with fibers? I have zero knowledge of optics and want to make a very low resolution (~10°) absolute rotary optical encoder using rapid manufacturing (I have 3D printers, a laser cutter, and a desktop CNC). Digging through the internet I realized that one can use some sort of code disk with some slots in an angular/concentric pattern [Fig.1], and put a light source and sensor on both sides of the disk to count the number of steps. This is called an incremental encoder. Fig.1 - Simple relative encoder measuring only the steps with no sense of direction. (image courtesy of [Hydraulics&Pneumatics][1]) Now if you want to have a sense of direction you may have two tracks of slot patterns (e.g. at different radii) or have a mask in front of the disk, plus two sets of light source and sensor [Fig.2]. Fig.2 - A relative encoder with a sense of direction. (image courtesy of J.P. Trevelyan [2]) For example, as shown in [Table.1] if the current state is [00] and the following state is [10] we are in the CW direction (or vice-versa). | | 00 | 10 | 11 | 01 | |:--:|:---:|:---:|:---:|:---:| | 00 | -- | CCW | -- | CW | | 10 | CW | -- | CCW | -- | | 11 | -- | CW | -- | CCW | | 01 | CCW | -- | CW | -- | Table. 1 - Columns are the current state and the rows are the subsequent state. This is called an incremental quadrature encoder. Now I have some issues: I don't want to have the light source and sensor in the encoder but to use optical fibers to transfer light to and from the encoder to my electronics. The reason is that I cannot have any electronics in the environment where I want to use the encoder. However, I'm being told that if there is an air gap between two ends of optical fibers, a lot of light will be lost. I want to know what is the limit? Does it depend on the sensor or source or the quality of the fibers? I do not understand how absolute encoders work.
Do they also follow the same rationale of a disk-mask plus light sources-sensors? Is their design simple enough to be built DIY? What light source and sensor should I use to emit light into the fibers and read it back? Is there some sort of off-the-shelf sensor and source connected to fibers already available which I can plug into an Arduino, for example? Sorry for my novice questions, but I would appreciate it if you could help me through this. Thanks in advance. References: [1]: https://www.hydraulicspneumatics.com/200/FPE/Sensors/Article/False/6440/FPE-Sensors [2]: https://www.researchgate.net/publication/228362381_Mechatronics_Control_Devices <Q> I don't want to have the light source and sensor in the encoder but to use optical fibers to transfer light to and from the encoder to my electronics. <S> However, I'm being told that if there is an air gap between two ends of optical fibers, a lot of light will be lost. <S> I want to know what is the limit? <S> Does it depend on the sensor or source or the quality of the fibers? <S> The limit is that you lose practically all the light. <S> You need to experiment on this one. <S> I suspect that you'll do much better if you focus the light from and to the fibers, so that it's roughly collimated as it passes through the disk. <S> You might want to search on "light pipe". <S> I do not understand how absolute encoders work. <S> Do they also follow the same rationale of a disk-mask plus light sources-sensors? <S> Is their design simple enough to be built DIY? <S> Absolute encoders have one track per bit. <S> If you want "about 10 degrees" that means 5 bits (32 divisions, 11.25 degrees/LSB) or 6 bits (5.625 degrees/LSB).
<S> In general you lay out the tracks in something called a "Gray code", which is arranged so that no two bits change state at the same time -- this prevents the erroneous readings that would happen in a normal code (at, for instance, the transition from 'b111111 to 'b000000, where there would be six opportunities for errors, some of them large). <S> What light source and sensor should I use to emit light into the fibers and read it back? <S> Is there some sort of off-the-shelf sensor and source connected to fibers already available which I can plug into an Arduino for example? <S> I can't answer this one. <S> I'd look for LEDs/lasers and sensors that are pre-mounted to fibers. <A> fibres become point sources at gaps > diameter <S> so losses become 1/r^2, but with a lens they can be focused back into a line source. <S> Absolute encoders are parallel bits, while incremental encoders are quadrature and Gray code. <S> 3 mm, <=15 deg IR LEDs should be good, with a matching narrow-angle detector, using AWG 30 magnet wire. <S> You can extract this from any old laser mouse wheel encoder. <S> Stray light must be blocked and the wheel ought to be dust free. <A> Depending on your resources, you could just go for an MRI-safe encoder <S> that's commercially available -- though small electronics thoroughly screened for potential performance and safety issues are often allowable, certainly outside the magnet and sometimes inside. <S> Also, for the resolution you need, there may be more straightforward ways to get the same answer than an encoder. <S> For example, a ceramic or conductive plastic potentiometer might do the job just fine. <S> So many things have been done with patient response systems in MRI that I suggest you go to the literature to find similar use scenarios, and read about how others have solved similar problems. <S> https://micronor.com/products/rotary-encoders-mri/ <A> encoders from Avago.
<S> I printed out an encoder on a clear sheet using a laser printer with a high DPI (needs to be over 1200, but this depends on your linewidth). <S> You need to draw the wheel or strip in a vector graphics program and print it out to scale. <S> We also used Mylar on the backside; for reflective sensors this is needed, while for pass-through sensors only a clear sheet would be needed. <S> We used the wheels in a prototype and then made some real wheels out of phosphor bronze (they had to be metal because we launched them into space).
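The Gray-code track layout described in the earlier answer is easy to generate before drawing the disk; a short sketch for a 5-bit (32-sector, 11.25°) disk:

```python
# Generate the sector codes for a 5-track absolute-encoder disk laid
# out in binary-reflected Gray code, as the answer describes.
BITS = 5

def gray(n):
    """Standard binary-reflected Gray code for sector index n."""
    return n ^ (n >> 1)

sectors = [gray(n) for n in range(2 ** BITS)]

# Key property: neighbouring sectors (wrapping around the disk)
# differ in exactly one bit, so a boundary misread is off by at
# most one sector, never a large jump.
def hamming(a, b):
    return bin(a ^ b).count("1")

assert all(hamming(sectors[i], sectors[(i + 1) % len(sectors)]) == 1
           for i in range(len(sectors)))
print(len(sectors), 360 / len(sectors))   # 32 sectors, 11.25 deg each
```

Each bit position of `sectors[n]` tells you whether track k is opaque or transparent in sector n, which maps directly onto the slot pattern you would laser-cut or print.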
It depends on the sensor, the source, the quality of the fibers, how you mount them, and probably a hundred things that I don't know about. It's actually not hard to do, with the AEDR encoders from Avago.
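The quadrature state table (Table 1) in the encoder question above translates directly into a decoder; a minimal sketch:

```python
# Decoder for Table 1: the CW sequence of the two sensor bits is
# 00 -> 10 -> 11 -> 01 -> 00, and CCW is the reverse.
CW_ORDER = ["00", "10", "11", "01"]

def direction(current, following):
    """Return 'CW', 'CCW', or None for a same-state/illegal change."""
    i = CW_ORDER.index(current)
    if CW_ORDER[(i + 1) % 4] == following:
        return "CW"
    if CW_ORDER[(i - 1) % 4] == following:
        return "CCW"
    return None   # no change, or an illegal two-bit jump

print(direction("00", "10"), direction("00", "01"), direction("00", "11"))
```

The `None` case (both bits changing at once) is exactly the "--" entries in Table 1; in a real firmware loop it usually means a missed sample.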
What do Construction and Standard/Alternate mean within the PCB Fab Industry? In the Constructions column, it indicates AxB. By the looks of it, B is referring to the woven glass material. See pg 7 . What is the A in Constructions (AxB) and why would anybody care? What does the Standard/Alternate column mean? Source of chart : Isola Group <Q> What is the A in Constructions (AxB) and why would anybody care? <S> A tells you how many sheets of woven glass are used to build that laminate layer. <S> What does the Standard/Alternate column mean? <S> Notice in your chart <S> that lines 1 and 2 are both for a 0.002 inch thick layer. <S> The Standard/Alternate column tells you that the 1x106 construction is the standard way to achieve this thickness, and the 1x1067 construction is an available alternate way to do it. <S> As the other answers have said, you might choose to use the alternate construction if the slightly different dielectric constant and dissipation factor suit your design better. <A> Looks like it is the number of layers of the fiberglass mat used in the PCB layer: <S> That's from the PDF you linked to. <S> Some PCB layers are made with a single layer of fiberglass matting, others with 2: <A> It is important that we understand the effect of the glass used in the construction of the core material we give to an OEM. <S> A 2-ply construction vs. 1-ply will give you a different Dk and Df based on the retained resin % of the core. Source: <S> [Isola Group][3] <S> Dk is the dielectric constant and Df is the loss tangent; they determine how fast signals propagate through the PCB. <S> The problem is if you manufacture a high-speed PCB, then switch manufacturers, you may not get the same results. <S> This is why high-speed PCB designers are interested in exactly how the PCBs are constructed, because they want their design to function the same way from a different manufacturer.
B tells you the specific weave of glass being used.
How to decode IR codes for a device I have the Mitsubishi DA-R45P receiver, but I do not have a remote for this device, and without a remote this device cannot work. I see that there are universal remotes with a learn function, but for that you need to have the original remote... which I don't have :( Is it possible to learn the codes from the device, or something similar? I have the schematics of the receiver; can this help? What are my options? <Q> As @ChrisStratton says: <S> questions about usage of consumer products are off-topic here. <S> However I feel for you, having had the same problem. <S> I have used universal remotes in the past. <S> They often come with a huge list of TVs which can be selected using a preset code. <S> You might take a weekend off because there were about 900+ codes in it. <S> It might be simpler and cheaper to order a new remote. <A> Maybe I can help, as I had this device in the past, <S> but I don't understand schematics. <S> Here are some screenshots from the schematics: <S> device sensor: <S> remote: <A> You must somehow figure out the protocol: how data is sent, and what the button data is. <S> Usually the schematics of the device do not help in figuring out the protocol or button codes. <S> (In this case, it is a standard chip; all you need is the chip datasheet, and to use 33.3 kHz for the carrier as it uses a 400 kHz resonator - <S> that is how much I understood from a single Chinese datasheet page of the Mitsubishi M50142P).
Sometimes IR remotes have a standard chip inside them, so finding the remote controller schematics would help most, or at least finding an identical remote to analyze the protocol and data sent by the buttons. Even if your TV is not on the list, you may get lucky: I had one (can't remember the brand) where you could work your way through each and every one of the preset codes.
Where Does the VDD+0.3V Input Limit Come From on IC chips? There are a variety of integrated circuits that specify that their input voltage can span a fairly wide (absolute maximum) range, e.g. -0.3V to 6.0V ( ref , pdf page 4), and then have an "Input Voltage at any pin" constraint that depends on the supply voltage, e.g. -0.3V to VDD + 0.3V. That, in effect, makes the chip not I/O tolerant to voltages that exceed the supply voltage by more than 0.3V but are within the absolute maximum specs, and forces me to apply some kind of external level shifting circuit to those inputs. So what is the practical reason for this kind of limitation in the specifications for integrated circuit I/O pins? <Q> Most likely there is an ESD protection diode connected between the input pin and the VDD net on the chip, in such a way that it is normally reverse biased (a schematic showing the configuration is given in Peter Smith's answer). <S> The idea is that when there is a positive ESD event, current will flow into the lower-impedance VDD net where it will do less damage than if it's all dumped on the one poor CMOS gate that's attached to the input pin. <S> Because the limit is VDD + 0.3 V, <S> it's likely that in your device <S> the diode is a Schottky type instead of a PN junction. <S> With a PN junction, you'll usually see a limit of VDD + 0.6 V or so. <S> If you were to apply an input voltage above VDD (by more than 0.3 or 0.4 V) to this device, you'd forward bias this diode, and draw a high current from your source. <S> This might damage your source or, if the source can supply enough current, heat up the chip to the point of damage. <S> If you use a resistor to limit the current into the input pin under these conditions, you might find the circuit works fine.
<S> Or, particularly if the chip is a very low power one, you might find the whole chip (and maybe other things connected to the same VDD) is powered up through the input pin, which often leads to unintended behavior. <A> This is due to the input protection diodes. <S> A typical input looks like this (CMOS inverter shown): simulate this circuit – Schematic created using CircuitLab <S> The diodes in newer parts are Schottky devices. <S> These diodes are for short, low energy transient events and cannot handle much current (a few mA generally). <A> These diodes typically connect between each pin and the two power rails. <S> If they are forward biased by more than 0.3V, arbitrarily large currents can flow. <S> The diodes are designed to absorb transient currents produced by ESD, which represent limited amounts of energy that they can handle, protecting the sensitive MOSFET gates from overvoltage. <S> But if you drive them with a low-impedance source, you'll quickly dump more energy into them than they can handle. <A> Actually, the Schottky clamping diodes and the VDD + 0.3V limit are both present for the same root cause, and that is SCR latch-up. <S> The design of all CMOS ICs inherently creates a pair of BJT transistors. <S> It simply results from how the p-type and n-type silicon regions are laid out. <S> This picture from VLSI Universe shows it well: https://1.bp.blogspot.com/-yUiobLvxMrg/UTvnjjzaXZI/AAAAAAAAABc/lRFG5-yqD3E/s1600/latchup.JPG <S> You get two intrinsic BJT transistors: Q2, an NPN, and Q1, a PNP. <S> Note, they share the one N-well and one P-well, and this particular arrangement forms something called a Silicon Controlled Rectifier (SCR). <S> This is not desired in any way, but is an unfortunate side-effect of this arrangement. <S> It is not a problem if certain rules are followed. <S> A typical SCR has three terminals: Anode, Cathode, and Gate.
<S> In general, it is forward-biased for some device that must be controlled with a positive voltage at the Anode with respect to the Cathode; however, the SCR will block any current unless the Gate is activated. <S> To activate the Gate, it must rise across a threshold which, in this design, will be the Anode voltage. <S> Once the latch is activated, it will stay on even if the Gate drops. <S> It will stay on until the Anode current drops to near zero. <S> For the CMOS IC, the Cathode is akin to the chip's GND, the Anode is the VDD rail, and the Gates are the I/O pins. <S> This is the crux: if any I/O pin rises much above VDD, it will enable the latch and create a short between VDD and GND, causing a very large amount of current, and that current will keep the latch going, burning up the IC. <S> To help protect against this for small transient spikes, Schottky diodes are added to the I/O lines to clamp the input between GND - 0.3V and VDD + 0.3V, inside the safe zone. <S> These diodes can only take a small amount of current, and external clamping can still be required for more rugged designs. <S> For more info, EEVblog did a nice tutorial on this: EEVblog <S> #16 - CMOS SCR Latchup Tutorial
The 0.3V drop comes from the Schottky clamping diodes used to protect the pins of the chip.
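To see why the answers suggest a series resistor can make over-VDD inputs survivable, here is a back-of-envelope sketch of the current pushed through the clamp diode. The 0.3 V Schottky drop follows the datasheet limit discussed above; the resistor and voltage values are illustrative choices of mine:

```python
# Back-of-envelope: current pushed through the VDD clamp diode when the input
# exceeds VDD.  The 0.3 V Schottky drop matches the datasheet limit; the
# resistor value is an illustrative choice.
def clamp_current_ma(v_in: float, v_dd: float, r_series_ohm: float,
                     v_f: float = 0.3) -> float:
    """Diode current in mA; zero when the diode is not forward biased."""
    excess = v_in - (v_dd + v_f)
    return max(0.0, excess) / r_series_ohm * 1000.0

# Driving a 3.3 V part from a 5 V signal through a 10 kohm series resistor:
print(f"{clamp_current_ma(5.0, 3.3, 10_000):.2f} mA")  # well under a few mA
```

With 10 kΩ the diode sees well under a milliamp, which is why the resistor trick often works; with a low-impedance source (r_series near zero) the same formula predicts currents far beyond what the ESD structure can survive.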
What is the meaning of active low input in combinational logic circuits? I am currently doing self-study on combinational logic circuits. I encountered a few terms like active low output and active low input. I understood what active low output means (putting NOT gates at the output side). I guess active low input means putting a NOT gate at the input side. It will be very helpful if someone can explain this using an example (note that I have knowledge of encoders, decoders, and multiplexers, so you can use these in your example). <Q> There are two things: the signal level, <S> and what the signal means, <S> i.e. assertion. <S> The signal level is either digital Low or High. <S> The signal meaning is attached to either Low or High, so we say the signal is asserted <S> low or the signal is asserted high. <S> Usually a bar or a slash indicates a low signal assertion level. <S> In the case above the reset is asserted low, so "resetting" happens when the signal is brought low. <S> Since we could also reset while the signal is brought high, it is important to track the assertion. <S> It is especially important in HDLs to track the signal assertion level. <S> Which is why you should label all of your signals. <S> I've typically seen adding a _L or _H suffix to signal names to indicate the assertion level. <S> In the case above it would be <S> RESET_L. Even adding assertion suffixes in schematics can be helpful. <A> It means the signal is inverted (like a NOT gate). <S> Let's take this 555 timer below as an example. <S> Picture can be found here... <S> Not my picture (and excuse the massive compression for this picture, hence the ugly pixels). <S> Say that a signal that goes to this pin is a 1 or HIGH. <S> Since Pin 4 is active low, it will end up being a 0 or LOW for this pin. <S> The opposite is true: if the signal leading up to the pin is 0 or LOW, then Pin 4 will be 1 or HIGH. <S> CPLDs are a good example of external logic that would shut off a device by sending a signal to an active low pin.
<S> You might be thinking, "Why don't we just simply make it active high instead?" <S> That's a valid question <S> and I'm not really sure, to be honest, <S> but if I had to guess, it could be simply to save power. <A> Active LOW means that a 0 V level is considered to be a logic <S> 1. <S> For instance, consider a logic input tied high using a pullup resistor and pulled to ground through a pushbutton switch. <S> Whenever the switch is not pressed, the input is at the pullup voltage, <S> 5 V for example. <S> When the switch is pressed, the input is pulled to ground. <S> That input can be considered active low, because the low level means that the button has been pressed (logic 1)
The purpose for a signal to be active low is to have some type of external logic device to turn off the signal.
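The convention in the answers above can be captured in a couple of lines: "asserted" and "high" are independent ideas. This toy model is my own illustration, not from any standard library:

```python
# Toy model of assertion levels: "asserted" and "high" are independent ideas.
# The function name and signature are mine, purely for illustration.
def is_asserted(level: int, active_low: bool) -> bool:
    """Return True when the signal is asserted; level is 0 (LOW) or 1 (HIGH)."""
    return level == 0 if active_low else level == 1

# A pushbutton pulling RESET_L to ground through a pull-up resistor:
print(is_asserted(0, active_low=True))   # button pressed -> asserted (True)
print(is_asserted(1, active_low=True))   # idle high      -> not asserted (False)
```

This is exactly the bookkeeping the _L/_H naming suffix does for you in a schematic or HDL: the level on the wire is meaningless until you know the assertion convention.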
How do I reduce cost for a circular PCB shape? I'm designing an electronic product that has a circular housing. There needs to be a PCB inside this form. In order to reduce costs as much as possible for high volume production (5K-10K), what shape of PCB would cost the least? A circular PCB works, but so does an octagon or hexagon. I'd like to know if any of these shapes leads to a lower cost due to how production is done. <Q> Panelization and depanelization are not trivial. When a board is panelized for production, it looks something like this (picture from surfacemountprocess.com). <S> The intent is to maximize panel usage and minimize the number of cuts (tool touches) needed to separate the board. <S> Ultimately what determines the cost of the board will be how many can fit on a panel; if you require extra tongues and spacing for clean depanelizing, you may pay more. <S> Also, you fit fewer round PCBs on a rectangular panel than square PCBs. Ultimately a circular PCB panel <S> will be something like this, with the red parts being the mouse-bite or routing tongues. <S> There are roughly 3 or 4 approaches that can be used for de-panelization. <S> 1. <S> V-groove approach, only good for straight lines and rectangular PCBs. <S> A narrow groove is cut into both sides of the board along your outline. <S> Images are taken from the Murata MLCC datasheet https://search.murata.co.jp/Ceramy/image/img/A01X/G101/ENG/GRM1882C2A102JA01-01A.pdf <S> This V-groove can be broken in a few ways: 1.a. <S> Hand breaking, which is also known as "snapping". 1.b. <S> Cutting wheel (paper cutter per @ScottSeidman). 1.c. <S> A router can also be used (alternate to b). 2. <S> "Mouse-bite" approach. <S> Perforation is placed along the board outline, a router is used to remove all the extra board material, then the board is snapped along the perforation, usually by hand.
<S> The edge can be cleaned up with a router or by sanding. <S> Both of these approaches flex the board, and still require a small connecting piece to the board. <S> 3. <S> 100% routing, which requires special jigs to hold the PCB, but the entire circle is routed out of the design. <S> Generally a fabricator will charge extra for router use, but this is such a normal process nowadays that it may already be in the cost. <S> However, total routing (without mouse bites or a small V-groove edge) is a large premium. <S> The tradeoff is no board flex and the cleanest edge. <S> I would echo @ScottSeidman's suggestion to approach your fabricator for input on pricing and suggestions. <A> Most of the panelized circular (or similar) <S> PCBs I've seen at Asian factories use a straightforward X-Y array with relatively generous spacing between the circles. <S> Usually you want to maintain the panelization intact through pick and place assembly and perhaps beyond that to testing and so on, so the board has to be connected well enough that it doesn't come apart, and should have panel fiducials and tooling strips compatible with the requirements of your assembly line or assembly house. <S> You could probably use an X-Y array with a combination of routed and V-groove to get the best of both worlds and avoid those mouse bites entirely. <S> Anyway, without doing much work, my thought is that an elongated hex with flattened sides (so really an octagon) in an X-Y array (not packed) with V-groove and routed roughly triangular "holes" <S> is probably optimal by a small margin. <S> To a first approximation, cost is based on the rectangular area that will totally enclose your PCB if they are panelizing. <S> V-grooves add a bit of cost but can reduce area by the 'kerf' of the routing tool. <S> Vendors will want to use a relatively large routing tool. <S> Also, V-grooves usually are required to go completely across the panel, vertically or horizontally, so they're not suitable for hexagonal packing.
<S> It might slightly complicate setup for pick and place if they're not on an X-Y array too, but probably not very significant. <A> Depanelizing tools are like specialized pizza cutters that open the V-grooves. <S> They work in straight lines. <S> I suspect, but don't know for a fact, that the circle would add expense. <S> I also suspect that the more operations, the higher the cost. <S> You want to give the depanelizer a flat edge to work with at all times, so the operator doesn't have to hunt around for the right position. <S> Other than that, I recommend working with your fabricator. <S> First, start by getting a quote for what you like. <S> It might not be as expensive as you think. <S> If you don't like it, work with them to minimize cost.
You might also think of whether there's a clever way to panelize your board to make all the cuts horizontal or vertical.
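Since the answers agree that cost tracks how many boards fit on a panel, a quick sketch of a plain X-Y array count is a useful sanity check before talking to the fabricator. The panel size, gap and rail widths below are illustrative numbers, not any vendor's rules:

```python
# Sanity check: how many circular boards fit in a plain X-Y array on a panel.
# Panel size, gap and rail widths are illustrative, not any vendor's rules.
def boards_per_panel(panel_w: float, panel_h: float, board_dia: float,
                     spacing: float, margin: float) -> int:
    """All dimensions in mm; spacing is the routing gap between boards,
    margin is the tooling strip left on each panel edge."""
    usable_w = panel_w - 2 * margin
    usable_h = panel_h - 2 * margin
    pitch = board_dia + spacing
    cols = int((usable_w + spacing) // pitch)
    rows = int((usable_h + spacing) // pitch)
    return max(0, cols) * max(0, rows)

# 457 x 610 mm (18 x 24 in) panel, 50 mm boards, 3 mm gaps, 15 mm rails:
print(boards_per_panel(457, 610, 50, 3, 15))  # -> 88
```

Rerunning this with slightly different spacing requirements (mouse bites vs. V-groove vs. full routing) shows quickly how much panel yield each depanelization choice costs you.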
How critical is the magnetics circuit diagram in a magjack RJ45? I'm looking at an example circuit diagram from ST, and it uses a 48F-01GYDXNL part that I can't find anywhere. It's a gigabit Ethernet jack with magnetics, and the datasheet shows a circuit looking like: Link to the datasheet. I'd like to use a different part, but what I want to know is: do I have to have the exact same circuit inside the magnetics? Is any old RJ45 magjack going to work? Or is it somewhere between the two? The one I'd like to use is (datasheet here ) which appears sufficiently different to cause me to worry they're not functionally the same. <Q> You can use the 08621JX143-F; just make sure the pins with TRD go toward the PHY. <S> The 08621JX143-F has one less center tap pin and some LEDs. <S> The frequency response might be a little different, but the overall functionality should comply with IEEE Ethernet standards. <A> Magjack designs are often paired with various Ethernet PHY chips or NIC chips that have a built-in PHY. <S> There can be differences in: jack to PHY turns ratio. <S> Common mode inductance. <S> LED support. <S> Mechanical characteristics. <S> So you have to select carefully. <S> Update: <S> Now that you included links to the datasheets for both parts I took a look and compared. <S> I am inclined to believe that the Ethernet jack part of the 2nd link will be as electrically functional as your first link. <S> Do be aware that the jack at the second link also includes two USB Type A socket connectors on the same footprint. <A> Refer to the links below for additional information: <S> Why Are Ethernet/RJ45 Sockets <S> Magnetically Coupled?
<S> https://www.avnet.com/wps/portal/abacus/resources/engineers-insight/article/ethernet-magnetics-discrete-or-integrated/ <S> http://ww1.microchip.com/downloads/en/AppNotes/VPPD-01740.pdf <S> https://www.kinet-ic.com/uploads/AN063_Rev2.1.pdf <S> How important is the number of cores in RJ45 jacks with magnetics https://www.belfuse.com/resource-center/icms/br-app-note-MAG-EMI-bel-magnetic-ICM-application-note.pdf
You can refer to any Intel Ethernet controller datasheet, like the i219. Basic specifications will be mentioned there, like isolation, OCL, insertion & return losses, crosstalk, EMI features, etc.
Why is -12v connected to GND in this schematic? A couple of questions: 1) It looks to me that the three-prong ground connection in the bottom right of this schematic is the only ground connection. I used the GNDPWR symbol in my KiCad schematic to represent this ground connection. Can (or should?) this GNDPWR be connected to the "normal" GND which I am using on the jack sockets which connect to the inputs/outputs? 2) Why is the -12v connected directly to this GND? Is this just saying that the GND reference is -12v? But because I am using op-amps and a dual-rail power supply, don't I also need a 0V GND? Won't it cause a short-circuit if I connect the -12v GND to the 0v GND? I do not understand. 3) Can I use any op-amp for U1-U4? 4) For the op-amps should I use +/-15V power rails? Or should I use +/-12V? 5) What does the test point 4 mean (near the +12v)? Should I just ignore that? <Q> I suspect that this circuit is intended to use a single 12 volt power supply, with the negative terminal of the supply connected to the circuit ground. <S> The power input labelling is misleading - it appears to imply a total 24 volt supply (+12 and -12) rather than a single 12 volt supply. <S> Does the place you found this circuit recommend any particular op-amp? <S> If not, I expect any op-amp that will work from 12 volts will do. <S> When you want to ask questions about a circuit you found somewhere, you should include links to the source (and you should look around the source - it might answer your questions...) <A> 1 and 2) I read that as a chassis ground symbol, so the negative supply pin is just connected to a metal chassis. <S> This, and the fact that the op-amps do not use ground as a reference <S> but are biased to <S> half of the supply voltage, would suggest that this is a single supply device, so it needs a single 12V supply and 0V ground, not dual +/- <S> 12V supplies. <S> 3) <S> In general, no.
<S> You would have to know what this filter is for and what parameters of the op-amp are important for the circuit operation. <S> If this is from the ARRL handbook, then this is for audio, <S> I suppose, <S> and it uses the LM324, which is pretty generic. <S> 4) <S> It says 12V on the circuit, not 15V. And based on 1&2, this circuit needs +12V only, not -12V. <S> 5) <S> They are not test points; it just shows the supply input connections to op-amp supply pins 4 and 11, but here the label 11 is missing. <A> The most important assumption not shown is what voltages are used by the OA V+, V-. <S> The certainty is that the midpoint between the external DC V+, V- inputs shown will become the DC output of all op amps with a null input. <S> The input and the notch output are AC coupled, the latter of which is dubious for the reason of blocking DC only on this output. <S> The missing assumptions for 0V and OA supply, and the lack of specs, make this schematic incomplete. <A> I breadboarded this up and after a couple of false starts, I finally have some answers: 1) <S> Connecting -12v and +12v causes it to start smoking. <S> Don't do this. <S> 2) Connecting it up with +12v and 0v (GND) works perfectly! <S> They should remove the -12v label as it is simply incorrect; it is actually just 0V and GND, NOT -12v. <S> This schematic is also missing a label #11 on the lower GND input, which is the negative supply for the op-amp. <S> The op-amps are supplied by pins #4 and #11. <S> I am assuming they incorrectly labeled it -12v because usually the op-amps are dual-rail, but in this case they are powered by just +12v/0v (single supply). <S> The op-amp is the LM324 as indicated on the original schematic and text, which can be found at https://www.americanradiohistory.com/Archive-DX/Ham%20Radio/70s/Ham-Radio-197802.pdf on page 70 <S> (page 72 in the PDF). <S> Thanks to @SamGibson for finding the original schematic. <S> So for anyone else looking at this schematic, the -12v is wrong and should be marked 0v.
<S> This is a great little filter.
So the schematic is wrong in labeling it as -12v. Test point 4 and the test point connected to ground are just handy spots to connect your meter to measure the supply voltage - you can ignore them. And pin numbering must match, but in general quad op-amps have matching pinouts these days.
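The "biased to half of the supply voltage" point made in the answers is just a resistive divider from the single rail. A one-function sketch; the resistor values are my own illustrative picks, not from the original schematic:

```python
# The mid-supply bias point is just a resistive divider from the single rail.
# Resistor values below are illustrative, not from the original schematic.
def bias_voltage(vdd: float, r_top: float, r_bottom: float) -> float:
    """Divider output: Vbias = VDD * Rbottom / (Rtop + Rbottom)."""
    return vdd * r_bottom / (r_top + r_bottom)

# Equal resistors put the op-amp's quiescent output at half the 12 V rail:
print(bias_voltage(12.0, 100_000, 100_000))  # -> 6.0
```

With the op-amp outputs sitting at 6 V rather than 0 V, the AC coupling on the input and output makes sense: the capacitors block that DC offset from reaching the jacks.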
Switching and non-switching period in buck converter I am using the TPS54260 buck converter to step down 10V to 3.3V. Load = 1A. While measuring the output voltage and switching frequency using an oscilloscope, I am observing this: During the no-load condition at the output, there is no switching at the switching node (before the inductor), but the output voltage = 3.3V. During the loaded condition at the output, the switching frequency and duty cycle are observed at the switching node and the output voltage is 3.3V. How come, during the 1st case, I am not observing any switching waveforms at the switching node, but I am receiving 3.3V at the output? Please clarify. <Q> Looking at the data sheet, the only reference I can find to low power mode is pulse skipping. <S> The switching frequency range is quite wide and all quite high, so if you have very little load at the output it could be skipping a lot of switching cycles when in <S> its 'Eco' mode. <S> If you think about the logic of the situation, if the datasheet says it doesn't have any other way of providing the output voltage (say an internal LDO, which is more efficient than switching at very light loads), <S> then the voltage at the output can only be maintained by rail capacitance and/or the regulator switching intermittently. <S> What switching frequency have you set it to and what value inductor are you using? <A> How come, during the 1st case, I am not observing any switching waveforms at the switching node, but I am receiving 3.3V at the output? <S> Well, you wanted 3.3 volts at the output and you are getting it. <S> Given there is no load current and probably minimal <S> leakage current taken, the chip does not need to switch to maintain the output at what is the correct voltage level. <S> If it did switch, it would be injecting a little bit of energy into the output capacitors each cycle and guess what... the output would rise above 3.3 volts <S> and it wouldn't be a very effective regulator, <S> would it?
<A> This is a classical skip mode operation where the feedback voltage (the voltage at the comp pin) is internally monitored by an extra comparator featuring hysteresis. <S> When the feedback is above a certain level (I believe they say 500 mV in the datasheet), the circuit switches normally with a full-length switching pattern. <S> At some point, the load is such that the feedback voltage passes below the 500-mV threshold. <S> Because of the extra comparator, all switching cycles are interrupted and the power switch is turned off. <S> See the below simplified sketch for illustration: <S> When the switch turns off, the output is left with the capacitor charged at your regulation voltage (3.3 V it seems) and the current absorbed by the load and the divider network. <S> As such, the rate at which \$V_{out}\$ falls depends on the time constant \$C_{out}R_{load}\$ : if the output current is truly zero amps (no-load) and if the resistive divider is of large ohmic value, then it can take a large amount of time to reach the hysteresis band and reactivate the switching cycles again (as \$V_{out}\$ falls too far from the 3.3-V target, \$v_{FB}(t)\$ rises up again). <S> With your scope, set the trigger to normal mode, not auto, so that you can see the bunches at a low repetition rate. <S> Another option is to slowly reduce the current from full load until it starts skipping pulses. <S> As the current goes lower and lower, the distance between the bunches expands: the converter is now operating in a hysteretic way. <S> If you want to dig further into the world of switching converters, you can check this book.
<S> More about pulse-skipping: you can read this AN from TI ( Link ). <S> You may need to change the oscilloscope timebase to see something like this: <S> Update: <S> I'm using TI TINA to simulate your circuit. <S> I downloaded the model from here and changed Rload to a very large value. <S> This is the result (see the PH curve): <S> Note that the simulation result is a little bit different from TI's experiment because of the effect of the output cap.
When the output current reduces, the feedback voltage also does and the duty ratio goes down bringing the peak current down.
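The burst spacing in skip mode follows directly from the time-constant argument above: between bursts, the output capacitor alone feeds the load, so dividing the ripple band by the droop rate dV/dt = I/C gives the interval. A rough sketch; the component values are illustrative choices, not TPS54260 data:

```python
# Rough time between switching bursts in skip mode: the output cap alone
# feeds the load, so dV/dt = I/C until the ripple band is crossed.
# All values below are illustrative, not TPS54260 specifics.
def skip_interval_ms(c_out_uf: float, i_load_ua: float, hysteresis_mv: float) -> float:
    dv = hysteresis_mv / 1000.0   # ripple band in V
    c = c_out_uf * 1e-6           # output capacitance in F
    i = i_load_ua * 1e-6          # light-load current in A
    return (c * dv / i) * 1000.0  # interval in ms

# 47 uF output cap, 10 uA of divider/leakage load, 30 mV ripple band:
print(f"{skip_interval_ms(47, 10, 30):.0f} ms between bursts")
```

With a truly unloaded output, that interval can stretch to seconds, which is why the scope in auto-trigger mode shows an apparently flat, non-switching node.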
Why does reverse supply polarity damage ICs? Assuming you need, for example, an OR gate for your circuit project, you can use a 4071 IC or simply use two transistors as shown below: But I wonder why the ICs are very sensitive to reverse supply polarity (as any other IC) but the transistor version of the circuit is not sensitive to a reversed supply connection (no damage.) Note: I found another question which asked about what causes damage when the car battery is connected in reverse (but that question is about the car fuses, not logic ICs): Why reverse polarity causes damage <Q> The 4000 series ICs (and most modern ICs) use some form of CMOS topology. <S> These use complementary MOSFETs to drive high and low, rather than the resistor-BJT arrangement shown in your diagram. <S> The problem with MOSFETs when it comes to reverse polarity is the body diodes within their structure. <S> If we look at the structure of, say, a simple inverter, we end up with something like this: <S> The diodes aren't added intentionally, but rather come about due to the structure of a MOSFET (*). <S> These diodes aren't a disastrous issue ( <S> **) <S> in a CMOS gate if the source of the PMOS (top transistor) is at a higher voltage than the source of the NMOS (bottom transistor). <S> The diodes are always reverse biased, so pose no threat. <S> If however we were to reverse the polarity and connect the PMOS source to a negative voltage with respect to the NMOS source, then clearly there is now a direct current path through both body diodes. <S> This causes a direct short-circuit across the power supply. <S> Regardless of whether the transistors are switched on or off, a large current will flow, causing excess heating and damage to the circuit. <S> (*) You can search further for MOSFET body/parasitic diodes. <S> ( <S> **) <S> Ignoring possible latch-up issues.
<A> In the oft-used N-well process (PFETs are in N-wells) with NFETs implanted directly on the P substrate, the N-wells must be tied to the most positive potential, or the various diode junctions become forward biased. <S> When the VDD and GND pins are reversed, given the large # of N-well contacts and the large # of substrate contacts, there are hundreds if not thousands of diodes that become forward biased. <S> I recall a particular IC design handbook I used that stated " <S> a single contact has 4,000 ohms of resistance". <S> Thus 4,000 contacts likely have ONE OHM of resistance, because of the plenteous intermixing of well contacts and substrate contacts. <A> These are designed for only a few mA, but a reversed power supply is a low-impedance source that can deliver much more than that. <S> The chip burns, whereas a simple transistor circuit contains no such diodes and, because of transistor symmetry, is not sensitive to reversed polarity.
Generally, ICs contain protection diodes on pins which become forward biased in the event of a reversed supply.
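The danger is easy to put numbers on with a crude model: two forward-biased body diodes in series put nearly the whole reversed supply across a very low-resistance path. The ~2 ohm fault path and the 0.6 V PN drops below are made-up illustrative figures, chosen only to show the order of magnitude:

```python
# Crude order-of-magnitude model: a reversed supply forward-biases two body
# diodes in series across the rails.  The ~2 ohm fault path and 0.6 V PN
# drops below are made-up illustrative numbers.
def reverse_fault_power_w(v_supply: float, path_r_ohm: float,
                          diode_drop: float = 0.6) -> float:
    """Power dumped into the chip with the supply reversed."""
    v_across_r = max(0.0, v_supply - 2 * diode_drop)
    i = v_across_r / path_r_ohm
    return v_supply * i

print(f"{reverse_fault_power_w(5.0, 2.0):.1f} W")  # far beyond a small IC's rating
```

Several watts concentrated in junctions sized for a few milliamps is why the failure is usually immediate and permanent.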
What would happen if I connect a battery which has a lower voltage than the output of the battery charger? At this moment I'm building a charger for a Li-ion battery based on the constant current, constant voltage method. I have used the IC LT3741 to build the charger. The specs of my charger: 8.4 V, 2 A. But I don't understand one thing: when charging the battery, the voltage of the battery will be lower than the output voltage of the charger (but of course greater than 6V.) What would happen then with the output of the charger? Because we now have the situation of two "DC sources" which are connected in parallel and have different voltages. Or am I wrong? Two DC sources, different voltages, connected together will not be a good idea, will it? <Q> Your charger circuit should limit the current to not exceed the maximum charging current of the Li-Ion battery. <S> Note that this maximum charging current depends on the charge state of the battery. <S> Exceeding the recommended values stresses the battery and shortens its lifetime. <S> If you grossly exceed what the battery can handle it might overheat and start smoking or catch fire! <S> You're using a DC-DC converter chip as a charger; <S> that's OK, but do realize that charging a Li-Ion cell properly isn't something a DC-DC converter can do on <S> its own. <S> Especially for fast charging, the maximum charge current needs to be controlled depending on the battery's charge level and temperature. <A> The answer to your question is: current will only flow from a high potential/voltage area (in your case the battery charger) to a low potential/voltage area (in your case the discharged battery, which has a potential/voltage less than the charger voltage). <S> If the charger voltage is equal to the battery voltage, current will never flow in either direction.
<S> Simply said, if you want to charge a battery, the charging voltage must be a little bit higher than the battery voltage (within the safe limits of the battery specifications, like the charging voltage and charging current from its manufacturer's datasheet; otherwise the battery may explode). <S> Nothing will happen as you fear. <S> I hope you got it. <A> What would happen then with the output of the charger? <S> Because we now have the situation of two "DC sources" which are connected in parallel and have different voltages. <S> Or am I wrong? <S> You are wrong. <S> By definition, the voltage at a single point or 'node' is the same for all components connected to it, so the charger and battery will always have the same voltage when connected together. <S> This is why you are using a 'Constant Current, Constant Voltage' regulator rather than a straight voltage regulator. <S> When disconnected, the charger will (when properly adjusted) put out 8.4V. <S> At the same time the battery voltage will rise due to the charging current. <S> When the battery voltage reaches 8.4V the charger will progressively lower the charging current to prevent the voltage from going higher than 8.4V. <S> The LT3741 itself is not a battery charger. <S> You can safely use it to charge a Lithium-ion battery provided that you have mechanisms in place to handle fault conditions such as an over-discharged battery (must be charged at a lower current until reaching 3.0V/cell), charger malfunction (not limiting current or voltage), cell voltage imbalance, and excessively high battery temperature. <S> A BMS (Battery Management System) or PCM (Protection Circuit Module) does some of that. <S> For the rest you need extra circuitry, or manual monitoring and intervention (not recommended).
When the battery is connected and tries to draw more than the set current, the charger will drop its voltage to limit current.
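The CC/CV behaviour described above can be sketched as a toy model: a voltage source that folds back whenever its current limit is hit. The 0.1 ohm output resistance below is my illustrative stand-in for the control loop, not an LT3741 parameter:

```python
# Toy CC/CV model: a voltage source v_set behind a small output resistance,
# clamped to never source more than i_limit.  r_out is an illustrative
# stand-in for the control loop, not an LT3741 parameter.
def charger_output(v_batt: float, v_set: float = 8.4, i_limit: float = 2.0,
                   r_out: float = 0.1):
    """Return (current, terminal voltage) for a given battery voltage."""
    i = (v_set - v_batt) / r_out           # what a plain source would push
    if i > i_limit:                        # constant-current (CC) region
        return i_limit, v_batt + i_limit * r_out
    return max(0.0, i), v_set if i > 0 else v_batt

# Deeply discharged pack: the charger folds back to 2 A and its terminal
# voltage sits just above the battery voltage, not at 8.4 V.
i, v = charger_output(6.0)
print(f"{i:.1f} A at {v:.2f} V")
```

This shows the resolution of the "two DC sources" worry: in the CC region the charger simply stops behaving like an 8.4 V source, and its terminal voltage is dictated by the battery.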
How will I know the power [dissipation] of a resistor when buying resistors? It is a very basic question for many people. It is very easy to measure the resistance of a resistor with a multimeter, but sometimes one comes across resistors which are slightly larger in size, such as diameter 5mm, length 15mm, or diameter 7.5mm, length 24mm. I get confused when buying resistors regarding power because there are some categories: 2W or 3W or 5W. How can I measure it, or is there a formula or table to learn the power [dissipation rating] of a resistor? <Q> You can only predict the resistance of a resistor when you know its physical properties (dimensions, material it is made of, etc). <S> Then you could apply equations for electrical resistivity to determine the resistance. <S> But then, if you know the physical properties, you can also predict its power rating using equations for heat capacity or thermal capacity. <S> You can easily measure <S> the resistance of a resistor using a multimeter. <S> To determine the power rating, manufacturers test at which power dissipation the resistor becomes too hot. <S> They measure the temperature and power dissipation using a "multimeter" (probably a more sophisticated one) $$ P_{dissipated} = \frac{ V_{\text{across resistor}}^2 }{ R } $$ or $$ P_{dissipated} = I_{\text{through resistor}}^2 \cdot R $$ and note the power dissipation. <S> Next, they likely play it safe and rate the resistor at a lower power (e.g. 95% of the measured power dissipation), but you don't know the chosen margin. <S> So, there is no way to measure <S> the power rating. <A> The power rating of a resistor is given by the manufacturer. <S> Generally a larger resistor will be able to dissipate more power, but there is no such rule that a certain length equals a certain power. <S> That is why the power rating is clearly shown in the datasheets or when you order from a distributor. <A> The power dissipation is not a property of the resistor, but of the circuit it's in.
<S> Power is \$P=I^2R\$; it's a function of the current (or equivalently the voltage) <S> the resistor sees in the circuit.
You can know what the power dissipation will be by solving the circuit to find the current through (or voltage across) the resistor.
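As a quick numeric sketch of the calculation described above; the component and supply values here are made-up illustrative figures, not from the question:

```python
# Power dissipated by a resistor depends on the circuit, not on the resistor alone.
# Example: a hypothetical 220-ohm resistor with 5 V across it.
V = 5.0      # volts across the resistor (found by solving the circuit)
R = 220.0    # ohms

I = V / R                          # current through the resistor (Ohm's law)
P = V**2 / R                       # power dissipation, P = V^2 / R
assert abs(P - I**2 * R) < 1e-12   # equivalent form: P = I^2 * R

print(f"I = {I*1000:.1f} mA, P = {P*1000:.1f} mW")
# ~114 mW here, so a common 0.25 W part would have comfortable margin.
```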
How does current flow to the ground in Delta connection (Ungrounded) in an Earth Fault? I'm an electrical engineering student and I'm currently in in-plant training at the electricity company of my country. Recently I visited an electrical grid. I got to know about the earthing transformer. I asked an engineer how the current flows to the ground in a delta connection, as it's not grounded and for current to flow there must be a return path to the source. But she didn't give me a satisfying answer; she said there will be current flowing and that current flows from high potential to low potential (zero potential earth). However I have had doubts since that time because, unlike in a star connection (grounded), there is no path for return current in a delta system (to ground, I mean). I know that an earthing transformer creates a virtual ground so current can flow in a delta system to that virtual ground, and earth faults can then be measured for tripping. But it is not clear to me how current flows to the ground if there's no earthing transformer. I saw some posts saying that this is due to inductive coupling. I'm talking about a 33kV delta system. Since current flows only in a closed circuit, why is this possible? I'm talking about an ungrounded delta system: no neutral, no leg connected to ground and no earthing transformer. A fully isolated delta connection. So I want to know how and why current flows to the ground in an earth fault / 1-phase touch to earth in a delta system. In theory it must be zero as there's no return path to the source? <Q> In normal configurations a single fault on a delta system will not cause any significant earth fault current. <S> This is useful because the distribution system can tolerate a single fault without interruption to the consumers. <S> This benefit is of little use if the system does not detect the first fault and fix it. <S> As a result, earth fault detection relays will monitor the phase-earth voltage on each line. 
<S> (This will create a very weak star/wye point on the system.) <S> When an earth fault is detected on one phase then that phase is deliberately shorted to earth. <S> This can help make fallen wires less likely to electrocute someone. <S> A contact of the relay will signal the fault which can then be investigated, cleared and reset. <S> If the first fault is not cleared then a second fault will result in high earth current and trip out the system. <A> There is no such thing as an ungrounded system. <S> The delta system is grounded through parasitic capacitance from each phase to ground. <S> This parasitic capacitance appears in the zero sequence network as an impedance (XC) connected to the neutral bus. <S> When you have a phase-earth fault the zero sequence current has to flow through this XC. <S> As such, it is typically very very small. <S> In the picture I show below I have a one-line drawing showing a source at left (1.0 per unit = nominal voltage) connected to Bus H. <S> The transformer between bus H & L is a wye-delta. <S> So, bus L is an ungrounded delta bus like you are talking about. <S> If we account for the parasitic phase-earth capacitance then we would connect the large XC from the bus to reference bus in each of the three sequence networks (positive, negative, and zero). <S> In my figure I only show it in the zero sequence (bottom) because it is negligible in the positive (top) and negative (middle). <S> Note that ZF = 0 for solid ground fault. <S> The lady's answer, "current flows from high potential to low potential (zero potential earth)", is just ignorant. <S> Without an earthing source (zig-zag grounding bank etc.) there will be no significant fault current, as I describe above. <S> When you get to the point in your studies that you learn/practice symmetrical component analysis you will see this clearly. <S> I'd recommend Blackburn's book. 
<S> russ <A> Please understand that even though a star system usually has a neutral line (3Ø 4-wire), when the load is properly balanced there is no current flowing through this line <S> and it is not needed. <S> United States used to have an ungrounded delta transmission line when there were few customers and shorter lines. <S> This had the advantage of reducing the number of customers lost with a single ground fault and reducing earth-ground current during a fault. <S> In such systems earth ground is reflected through the primary of the source transformer and the secondary of the load transformer. <S> Distributed power at the customer end in the United States today normally does ground the delta configuration by tapping one phase of each transformer (see fig. 1). <S> This is sometimes used for facilities which need both 3Ø and 1Ø service. <S> Distribution is done by a WYE (Star) configuration. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> This is done primarily to prevent transient voltages, improve the safety of the transmission line, and quickly identify faults. <S> It is important to know that the system will work fine for delivering power without an earth ground.
Current does not need to flow to ground within a delta transmission system, it is a floating system with a very small capacitive coupling to ground.
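To get a feel for how small this capacitive earth-fault current is, here is a rough numeric sketch; the per-phase capacitance is an assumed, illustrative figure (it depends entirely on line length and construction), not a value from the question:

```python
import math

# Rough estimate of earth-fault current in an ungrounded 33 kV delta system,
# where the only return path is the parasitic phase-earth capacitance.
V_ll = 33e3                     # line-to-line voltage, volts
V_ln = V_ll / math.sqrt(3)      # line-to-neutral (phase-earth) voltage
f = 50.0                        # system frequency, Hz
C_phase = 0.2e-6                # ASSUMED parasitic capacitance per phase, farads

# For a solid single-phase-earth fault, the healthy phases feed the fault
# through their capacitances; a common approximation is If = 3*w*C*V_ln.
w = 2 * math.pi * f
I_fault = 3 * w * C_phase * V_ln
print(f"Capacitive fault current ~ {I_fault:.1f} A")
# A few amps -- negligible next to the kiloamps of a fault on a grounded system.
```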
op amp non-inverting input isn't working? simulate this circuit – Schematic created using CircuitLab Why, in the above schematic, if I leave the non-inverting input floating or connected to 1V or 3.3V, does the LED stay ON and the current not change when increasing/decreasing the Vref voltage? Also LED current increases with increasing voltage and it's not constant... Table 1 (when 3mm LED and transistor is connected):
Vref  | Op-amp out | Emitter | VBE
------+------------+---------+-----
0.0 V |    2.96    |  2.75   | 0.81
0.5 V |    1.16    |  4.48   | 0.66
1.0 V |    1.68    |  3.96   | 0.68
2.0 V |    2.77    |  2.93   | 0.75
3.0 V |    2.98    |  2.75   | 0.80
<Q> There were lots of problems with the original circuit. <S> You've made some changes, but it's still a bit wonky. <S> Assuming your LED is that 3mm green LED that appears to be on the breadboard, your current should not exceed about 15-20mA tops. <S> The forward voltage will be around 2.5V. <S> The LM358 op-amp has an output that can swing down to 0V on a 5V supply but cannot go higher than a few volts. <S> Since Vbe is 0.7V or so, we should limit the voltage across the sense resistor R1 to something reasonable, say 0.5V. <S> So R1 = 0.5V/0.02A = 25 ohms. <S> That is chosen so that there's enough voltage for the LED but the voltage is much higher than the few mV offset of the LM358. <S> Now divide your 3.3V maximum Vref down to 0.5V with something like 10K/1.8K and apply that to the non-inverting input. <S> The 1.8K to GND will also deal with the input bias current if you disconnect the input. <S> The compliance of the resulting current sink at 20mA out is about 5V - 0.1V - 0.5V = 4.4V so the 2.5V LED has plenty of margin. <S> Maximum dissipation of the transistor into a short is 4.5V * 0.02A = 90mW, which is fine. <S> Leaving the LED open will cause the op-amp to attempt to drive 20mA into the transistor base, which it can do without damage at room temperature. 
<S> This particular circuit will likely work without the cap/R2 but the pair improves stability. <S> Unless you connected power to the LM358 backwards and it got hot, it's unlikely you have damaged it; they're pretty rugged devices. <S> However, your measurements do not look good. <S> Check the wiring first, and try another op-amp. <A> When Vin- matches the applied input Vin+, the current in Rs matches the LED. <S> Thus 100mV/mA = 100 Ohms for a max of 5V - Vce - Vf = 2~3V, thus 20~30mA. <S> When Vin+ is floating I assume the constant LED-on is due to reactive leakage into the wire from stray noise such as 50 Hz line noise, where 0.1 uA of leakage into, say, the 10 MOhm op-amp input impedance is 1V, and possibly more, before being clipped by the supply voltage diodes. <S> With AC stray noise the duty cycle would be 50% on a floating control input. <S> Thus Vin+ must be voltage controlled, yet at a low impedance relative to Zin = 10M. <S> So even 100k would be adequate. <S> A shunt cap of 100pF may be necessary in that case if there were large RF EMI signals. <S> So a lower impedance control is preferred. <A> It's about equivalent to connecting it to the supply. <S> Otherwise, the voltage on the - input should track the voltage on the + input. <S> If you're looking for changes in the current by watching to see if the LED brightness changes, you might not even be able to detect it... <S> your perception of brightness is logarithmic, and you're looking at about 10dB from 1V to 3.3V. Measure the voltage at the junction of R1 and R2, and verify that it's tracking Vin.
If you don't connect the input, the bias current will drive it to one of the rails, in this case probably the positive one.
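The resistor choices in the first answer above can be sanity-checked numerically; the values used here are the ones that answer proposes (20 mA target, 0.5 V sense voltage, 10K/1.8K divider):

```python
# Sanity-check of the current-sink design from the answer above.
I_led = 0.020          # target LED current, amps
V_sense = 0.5          # chosen voltage across sense resistor R1, volts

R1 = V_sense / I_led   # sense resistor value
print(f"R1 = {R1:.0f} ohms")                 # 25 ohms, as in the answer

# Divider 10K/1.8K from the 3.3 V maximum Vref gives the ~0.5 V reference:
V_ref = 3.3 * 1800 / (10000 + 1800)
print(f"Vref = {V_ref:.3f} V")

# Compliance check: 5 V supply minus ~0.1 V transistor headroom and sense drop.
compliance = 5.0 - 0.1 - V_sense
print(f"Headroom for LED = {compliance:.1f} V")  # 4.4 V > the ~2.5 V LED drop
```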
LED Brightness or Luminance is additive or not? Let us say I have an LED whose brightness or luminance value is 50 Cd/m2. If I light 10 LEDs of the same type under the same conditions of current and voltage, will the brightness of these 10 LEDs add up to 500 Cd/m2, or will it give some other value? <Q> You're comparing apples and oranges when you refer to 'brightness' and 'luminance'. <S> 'Brightness' is 'how bright is the light when I look at it', while 'luminance' is 'how well lit is the surface the light is shining on'. <S> Brightness is not additive - putting a light next to another light doesn't make either of them brighter. <A> It depends if the radiated light is diffused over the same area. <S> 50 Cd/m^2 = 50 nits would be a dim desktop monitor or a bright mobile. <S> Adding 10 in the same area with diffusion makes that 10x brighter. <S> But if not diffused, each source would not appear brighter because of neighbouring LEDs, but the flux of radiated light would brighten the reflected light by 10x. <S> The same is true with Xmas lights. <A> Yes, luminance is additive, 10 LEDs illuminating a given surface gives you 10 times more cd/m² than a single LED.
Luminance is additive - if you shine more lights on the same surface, the surface gets more light.
Can I use individual strands in cat5 or cat6 solid core wire for pcb mods / jumper / repairs? I have a small PCB repair I need to make. I am reading that wire-wrap is a suitable option. I don't have any wire wrap, but I do have tons of cat5 and cat6 solid core wire. I was thinking about gutting one of those and using an individual strand from it as a jumper. Is this a bad idea? Or is this a case of if it works then it is a good idea? <Q> You can, it's really great for breadboard wire. <S> I used to use Ethernet or phone as my primary way to prototype (now I only solder). <S> It's also easier to attach to most SMT pins. <S> There are some caveats with solid core ethernet wire though. The main problem is strain relief and breakage. <S> As with any wire, stripping can nick the wire, and the wire breaks after moving. <S> This is especially a problem if the wire is causing intermittent connections, it can be hard to track down. <S> So be careful not to nick the wire, and if you can, provide strain relief with tape or soldering. <S> Don't move the wire after it is placed. <S> (you should also not use solid core ethernet from the wall to the computer, if it's stepped on it breaks, and if it's moved too much it breaks, use stranded instead) <A> If the wire will fit (not too thick) <S> and you can cut and bend it to fit and not short anything out <S> then there's no problem. <S> You'll need to solder it in place, though. <A> it works alright, the biggest hassle is that the insulation is ordinary PVC, and that melts at a fairly low temperature, so it can be tricky to retain the insulation near the ends when soldering. <S> Wire with Teflon or irradiated PVC insulation is easier to use.
A better way is to use the standard "blue wire" 30AWG for jumper connections; it's smaller. If you don't have a soldering iron and solder then it won't work.
Is there any relation/leak between two sections of LM358 op-amp? I want to use section A of an LM358 op-amp to limit the current and section B to convert Arduino PWM to analog, and then connect output B to the non-inverting input of section A as reference voltage. The LM358 VCC is not going to be lower than 12V or higher than 16V. Is there something like "leakage" of current/voltage or anything else between the two sections that can damage the Arduino, or are the two sections isolated from each other? <Q> There is a very slight interaction between the two amplifiers at low frequencies that is likely equivalent to a few uV of Vos shift. <S> That is in part due to the shared bias network. <S> Changing dissipation from the output section of one amplifier can cause temperature gradients across the die as well as heating, as @analogsystemsrf mentions in a comment, which will cause Vos changes in the other amplifier. <A> You should be ok. <S> The two op amps are mostly isolated from each other. <S> However, because they both share the same die, if you were to exceed the maximum limits on one of the op amps, you may screw up the operation of the other. <S> (Example: driving one of the inputs of one op amp at a voltage greater than Vcc may very well affect the operation of the other op amp.) <A> Considering the resolution of PWM and possible error to desired target and the jitter error of a PWM to LPF average voltage, I think the crosstalk error of -120dB is irrelevant.
If both amplifiers are working with signals measured in volts it won’t likely be noticeable, let alone problematic.
Is a PWM required for regenerative braking on a DC Motor? The DC Motor in question is here: DC Motor I was wondering, if you use a diode and a switch in the manner shown in the below picture, would the motor exhibit regenerative braking when SW1 is open? (Assuming that the voltage from the motor when braking exceeds the motor power) Is the use of PWM required? simulate this circuit – Schematic created using CircuitLab <Q> ( <S> Assuming that the voltage from the motor when braking exceeds the motor power) <S> That's the problem — it doesn't. <S> You need a way to boost the voltage coming from the motor to a level that will actually charge the battery. <S> You can use a separate boost converter, or you can create a more tightly integrated solution that uses the inductance of the motor itself as an element in a boost converter. <S> Either way, it does involve some sort of PWM control in order to regulate the power flow. <A> The voltage on the terminals of your motor will be a function of its speed and the load. <S> It will not be higher than your power supply, unless some external force is trying to accelerate the motor. <S> Regenerative braking supposes that you get back some energy from the motor while braking. <S> Therefore the braking must be performed by applying a load across the motor. <S> For example, you could "short-circuit" it using a low value/high power resistor. <S> The resistor would force the motor to supply power and transform that power into heat. <S> At the same time the motor slows down. <S> When we use a resistor to slow the motor down, the energy is lost into heat. <S> We could use that heat and transform it into "electricity", but that is not the most effective way. <S> It is better to change the resistor with a more complex system that would transform the power. <S> It can for instance be an inductor. <S> In a way similar to switched power supplies we can "charge" the inductor and "discharge" it into your power source. 
<S> It would be applied to your power source in a different path than the one controlling your motor through PWM. <S> So the diode across the controlling switch would not do anything while slowing down your motor. <A> Your circuit doesn't provide any braking because the motor is running free when the switch is open, and won't produce higher voltage than the battery unless it is 'over-driven' to higher speed by an external force. <S> To brake the motor you must put a switch across it, like this:- simulate this circuit – <S> Schematic created using CircuitLab <S> When SW2 is closed it 'shorts out' the motor. <S> While the motor is spinning it acts as a generator, producing voltage which pushes current through SW2. <S> The current produces torque which brakes the motor. <S> This is dynamic braking, but not regenerative. <S> However if PWM is applied to SW2 then each time it opens the collapsing magnetic field in the motor's winding inductance creates a 'back-emf' voltage which tries to keep the current going. <S> The current then takes the only path available to it, through D1 into the battery. <S> As well as charging the battery the back-emf current also produces braking torque in the motor. <S> The only change required is to keep the 'motor' switch open while applying PWM to the 'brake' switch. <S> Most controllers use MOSFETs which have built-in body diodes, so an external diode is not needed either. <A> Railway locomotives traditionally used separately-excited DC motors, in which the magnetic field is provided by a field winding rather than a permanent magnet. <S> These were in practical use for both traction and braking long before high-power switched-mode motor control was feasible; at least one such locomotive was built in the UK by 1940 (though the railway it was intended for could not be electrified until the 1950s). <S> The EMF of the motor armature is proportional to the product of the magnetic field strength and its rotation speed. 
<S> To brake regeneratively into the fixed-voltage overhead line, the field current had to be raised to make the armature EMF exceed the line voltage. <S> A separate control for this was installed in the cab. <S> For normal running in traction, the field was simply connected in series with the armature. <S> In diesel-electric locomotives, there is no overhead line to regenerate into. <S> Instead, power from the main generator is used to excite the motor fields, and the armatures are connected to a high-power resistor in which the generated power is dissipated.
If the controller uses PWM to control motor speed then this circuit can be 'free', because it uses the same switches that are used in a half-bridge configuration.
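A back-of-envelope sketch of the boost relationship the answers describe; the back-EMF and battery figures here are illustrative assumptions, not values from the question:

```python
# For regenerative braking, the motor back-EMF must be boosted above the
# battery voltage. An ideal boost converter gives Vout = Vin / (1 - D),
# where D is the PWM duty cycle.
V_emf = 4.0       # ASSUMED motor back-EMF at some braking speed, volts
V_batt = 12.0     # ASSUMED battery voltage, volts

# Duty cycle needed so the boosted EMF just reaches the battery voltage:
D = 1 - V_emf / V_batt
print(f"Required duty cycle ~ {D:.2f}")

# As the motor slows, V_emf falls and D must rise toward 1 --
# which is why active PWM control (not a bare diode) is required.
```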
Reduce voltage/amperage from adapter to fit another device Complete newbie here. So I got this old gaming console with the original adapter, which outputs 8.2V / 850mA. The cable connection with the body of the adapter is somewhat loose, though, so if you twist it in this particular way, no less, no more, and manage not to move it, the console stays powered up and it's usable. I'd like to build my own adapter for the console using another, newer adapter which outputs 9V and 1A. The cable has two inner cables which could (presumably?) be connected to a circuit that corrects these values so that they match the console requirements. Is this even possible? What would such a circuit look like? I'm in Europe, in case this matters at all. <Q> <A> The circuit you are looking for is called a "linear voltage regulator". <S> Internally a regulator is like an automatically adjustable resistor: it adjusts its resistance so that the voltage between the output and reference (often ground) point gets a pre-determined value. <S> With a quick search, I couldn't find one with an 8.2 V output voltage, but you can take for example a 5 V regulator and create your own value with a voltage divider (two resistors) between output, reference leg, and ground. <S> For an example datasheet, see http://www.farnell.com/datasheets/2096735.pdf <S> You don't need to worry about the current: for your console, the rated current is just the maximum current it can take. <S> For the power supply, the rated current is the maximum current it can give. <S> The console won't draw more current than it needs. <S> You need to pick a regulator where the maximum output current AND maximum power dissipation is sufficiently high. <S> As @Hearth commented below, you will need a regulator with a sufficiently low voltage drop. <S> These are often denoted with the acronym "LDO", but you must check the dropout voltage from the datasheet. <S> It's very probable that the console works just fine with no voltage regulator in between. 
<A> If the game is more than about 15 years old, or if the adapter is significantly bigger and bulkier than a cell phone charger <S> (i.e. if it's two inches on a side or more) then the adapter is probably just a transformer and an unregulated DC supply. <S> This means that the game will have its own voltage regulation, and as @user24368 suggests, you can just use a 9V regulated adapter <S> and it'll be close enough. <S> The regulator inside your game will run a bit hot, but hopefully not too hot. <S> You could also run a 9V adapter and put a single power diode or a resistor in series with one of the leads (the '+' lead is traditional, but not necessary). <S> A 1-ohm resistor would drop your (presumably regulated) 9V down to 8.2V or so at 820mA. <S> Given that you'd probably want to make a "wart" on the cable, I'd use a 5W, 1 \$\Omega\$ resistor in line with one of the leads, and I'd cover it with heat shrink tubing. <S> Then I'd check to make sure that it wasn't getting too hot -- <S> and if you ever had a fire start behind the game, your homeowners or renters insurance company would use the modified cord as an excuse to call it your fault.
But in your case, I would say the 9V-1A adapter would work fine. (Reason being the circuits are generally designed to regulate any offset voltages.) If you are willing to open the console and get your hands dirty, you are looking for something called a voltage regulator near the DC socket on the board. However, any series element (even a linear regulator) you use would get warm, because it has to burn up power to drop the voltage.
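The series-resistor trick from the second answer can be checked quickly, using the adapter and console figures given in the question:

```python
# Dropping a regulated 9 V adapter down toward 8.2 V with a series resistor.
V_supply = 9.0     # volts, the new adapter
I_load = 0.85      # amps, the console's rated draw (850 mA)
R_series = 1.0     # ohms, as suggested in the answer

V_drop = I_load * R_series         # Ohm's law: V = I * R
V_console = V_supply - V_drop
P_resistor = I_load**2 * R_series  # heat the resistor must shed, P = I^2 * R

print(f"Console sees ~{V_console:.2f} V, resistor dissipates {P_resistor:.2f} W")
# ~8.15 V at ~0.72 W -- hence the answer's suggestion of a 5 W part for margin.
```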
Looking for circuit board material that can be dissolved We are working on a product where the entire device needs to be dissolved in liquid after the device has operated and the device is no longer usable or desired. This is a down-hole application. The device body is either aluminum or magnesium. There is a small lithium-ion battery plus a circuit board with some electronics. There currently exists technology that can dissolve the aluminum body - a brine solution of about 5% Potassium Chloride (KCl) is circulated until the device is dissolved. Our client would like to have the circuit board break down / dissolve as well. The board is currently FR4 glass epoxy with traces on both top and bottom layers. We will have a look to see if there is any chance that we can constrain the traces to the top-side layer only - this might allow us to use an aluminum circuit board. However, I'm not hopeful this will be possible. I'm looking for suggestions for either suitable PCB material OR techniques that might allow the board to be dissolved. For example, we are considering using a much more fragile PCB material (paper-epoxy) and using a small explosive charge to shatter the board into much smaller pieces. However, I'd like to learn about other techniques that might achieve our goal. Note that this is NOT a shopping question. If someone can suggest a PCB material that would directly be suitable - that's awesome. But I'm after other techniques that might achieve a similar outcome. I'm aware that the individual components won't be dissolved by the brine solution. However, the goal is to make the pieces small enough that they can be pumped without clogging the system - the pieces can be filtered out and discarded. [Edit] From the comments below: 1) Not military 2) PCB is currently about 1.5" x 1.0". Was larger but we've been shrinking it. 3) Operating time from deployment to end of life is measured in hours. 
I'm not the lead engineer on the project but I think there is sufficient battery capacity for about 24 hours of operation. 4) PCB is sealed inside a heavy-wall aluminum canister. Circuit board is not exposed to any liquid during operational life. 5) Max temperature that we have been testing to is 100C. Surprisingly, the particular Lipo battery that we are using is quite happy at that temperature. 6) The unit dissolving or breaking into smaller pieces is simply so that it doesn't cause obstruction when it has finished its job. Nothing nefarious - just sort of "cleaning up after itself". <Q> The goal of the ReUSE project was to increase the recyclability of electronic assemblies in order to reduce the ever-increasing amount of electronic waste. <S> Source: <S> http://environmentaltestanddesign.com/dissolvable-printed-circuit-board-recycled-with-hot-water/ <S> If that doesn't work, nitric acid will work on just about everything. <S> Oh, if you wanted to 'roll your own' manufacturing process, you could find a dissolvable material (maybe some kind of cellulose?) and print on it with one of these PCB conductive ink printers: https://www.voltera.io/ <S> As per Edgar Brown's suggestion, also this idea for dissolving polyimide for flat flex: <S> Try a mixture of Methanol:THF = 1:1, but it will take 1-2 days; The easiest way to dissolve Kapton is to use 0.1-0.3M NaOH in water. <S> By using alkaline solutions you can completely decompose the Kapton down to its initial monomers. <S> https://www.researchgate.net/post/can_polyimide_filmskapton_dissolved <S> NaOH is lye; I don't know what concentration you would need to get Kapton to dissolve, but that seems like it would be easy to experiment with. <A> You should reconsider metal core PCBs ( Example ). <S> I've used them for high power LEDs, and we etched in house using basically standard processes. <S> This is the one we bought. 
<S> Of course they do place limits on your design (and they're annoying to hand-solder), but they can be double sided (example from the same supplier as above, not someone I've ever used). <S> They'll give you a solution that will dissolve in anything your Al case will dissolve in. <S> The insulating layer is typically 100 µm thick, and it appears to be epoxy-based prepreg. <S> I'd assume that if surface-mount components can be dealt with, so can small pieces of polymer insulation, which are likely to break up. <S> It could be scored by routing, slotting the board, or even by hand with a scriber so that it breaks into smaller pieces (I don't know whether this is a research 1-off or a production run, so I don't know what processes are plausible). <A> Like aluminium, alumina is soluble in potassium hydroxide, and is available as a substrate from many manufacturers; also, some manufacturers will do double sided aluminium. <S> Probably the most soluble solution would be aluminium metalisation on an alumina substrate; special solders and fluxes will probably be needed to attach the parts, but all the interconnect should dissolve in your alkaline salt solution. <S> I'm not aware of any place that can provide that as a standard option. <S> Wood pulp bonded with a soluble salt would be another interesting experiment, but would require the use of only water-free processes during manufacture. <A> For FR4 you only need to dissolve or decompose the epoxy in between the fibers. <S> The usual process is to pyrolyse it. <S> Besides FR4 there are other materials to make a PCB from. <S> Polyimide film is often used in flexible boards, and this can be dissolved. <S> https://electronics.stackexchange.com/a/221926/148363 <S> Not knowing your application, you might need to glue this flexible PCB to another more easily dissolved substrate for rigidity or thermal purposes. <S> Some defective products already have damaged flexible PCB due to water from drinks. 
<S> Tight collaboration with your PCB house is required, since this is a rather unusual product requirement. <A> Consider using a flexible PCB and a "can-crusher" design to compress it in one axis, then compress it again in a second axis. <S> You will be left with a pellet which can be released from the enclosure easily. <A> The easiest way to dispose of the PCB is during the design phase of your project, not after it's deployed. <S> That is, don't use a PCB at all. <S> Your circuit can use a stiff piece of non-coated cardboard as a substrate. <S> Long-leaded components (resistors, diodes, etc) can stick right through the cardboard. <S> Shorter-leaded components (ICs) might need a socket. <S> Instead of traces, make your connections using good old-fashioned wire wrap techniques. <S> I've used this as a poor-man's prototyping technique for ages. <S> When it comes time to dispose of the device, cardboard is fairly easy to destroy (source: the packages on my porch any time it even slightly rains). <S> What you'll be left with is the components themselves and a rat's nest of Kynar-coated wire. <S> Kynar is resistant to acids, but there are solvents that will destroy it (some of your electronic components likely have Kynar in/on them, so you'll need this chemical anyway). <S> If possible, choose a solder that breaks down in the same acid that you're using to dissolve the casing. <S> The main downsides to this approach are that the devices are harder to manufacture (more labor, less automation) and that they're much less sturdy (less of an issue since yours will be in an enclosure). <S> If your circuit is extremely complex, you might have to go with a larger board size, or make a multi-layered circuit by stacking several boards on top of each other.
Researchers from the National Physical Laboratory (NPL), in London, in cooperation with partners In2Teck Ltd and Gwent Electronic Materials Ltd, have developed a 3D printable circuit board that separates into individual components when immersed in hot water. Flexible PCB will also be easier to burn away.
Why not use actual switches instead of Relays Relays are expensive and have limited lifetime. Choosing a proper relay for a particular application is also important. In other words relays require a bit more of everything when compared to a simple electric switch. So why not, instead of using relays, have manufacturers somehow couple a small servo to a simple electric switch, then use it for switching by controlling the servo? We too can do it (though not perfectly). What's the reason that relays are so popular and so common for power switching? What's the flaw in using a switch as suggested? Edit: Using a switch can be a one-time investment instead of using relays, which require replacement. Also, the cheapest servo motor (premade) can do the work for us. The main idea is lifetime, which, I think, may be reason enough! Also, if this setup is fabricated there may be fewer chances of individual part failures. <Q> That's what a relay is. <S> In both cases you have a set of contacts carrying current. <S> In both cases there is a mechanism for moving the switch: in a relay, this is done by a magnetic coil. <S> All techniques available for switch construction can also be applied to relay contacts. <S> The lifetime issue is only visible on relays because they generally get switched a lot more often than humans can manage on a switch. <S> If you look at lifetime specs in terms of number of operations they're usually similar. <A> Everything you say about relays is also true about switches. <S> Switches have a limited lifetime, they have to be chosen for the task, and the "somehow" in "manufacturers somehow couple" is going to require individual engineering work and validation each time -- meaning, great expense. <S> Moreover, cheap hobby servos aren't nearly reliable enough, or, in a lot of cases, fast enough to replace solenoids as drivers for contactors. 
<S> The flaws in your thinking are: <S> You want to replace a relay with a switch plus an actuator, without realizing that a relay is a switch plus an actuator, and that we've had over a hundred years to optimize the relay. <S> And cheap switches wear out faster than good (generally expensive) ones do -- just as cheap relays wear out faster than good ones. <S> You do not realize, or are glossing over the fact, that the key word in the phrase "hobby servo" is "hobby". <S> Hobby servos are not designed for high reliability. <S> They are not designed to work over a wide temperature range, or for tens of thousands of cycles. <S> Design a servo as reliable as a solenoid coil, and your "relay not a relay" idea gets more expensive than a traditional relay. <S> A servo + switch is going to be slow. <A> There are a few solid reasons to use relays. <S> Your system could do the same jobs, but never without extra cost and component count (which adds even more failure modes). <S> Very few applications these days actually wear out relays on a regular basis. <S> Rapid switching can often be done using solid-state components ( SSRs or simply FETs depending on the application) (but not with a typical servo) and relays can be over-specified. <S> These two approaches take care of the major failure modes. <S> Relays are inherently insensitive to dirty power circuits. <S> They're reliable even during cranking in automotive applications, when the rail voltage drops by up to around 20%. <S> Try running a PWM control circuit (as used to drive cheap servos) and you'll realise the need for voltage regulation (these aren't the cheapest components to add). <S> That becomes more complicated when you want to do relay logic. <S> This is pretty simple these days, as anything more complicated would be done using microcontrollers (etc.), but in something like a car you can put a relay at the far end of one circuit and use it to provide input to another. 
<S> In the unlikely event of a failure this relay is user-replaceable. <S> In a home heating system mains power is used to actuate relay coils, meaning you don't need to run a separate low voltage line*. <S> In both of these cases you either need local power supplies/voltage regulators or extra cables. <S> *A 3-port valve as used in wet central heating systems may be close to what you describe but simpler: a low-speed mains motor moves the valve and a lever; the lever operates a switch that cuts off the motor. <S> Others are spring-return. <A> When using SPST switches, you need to manually switch them (as you most likely know), whereas if you use a servo to switch on and off at high speed, you may realize that the servo can only go so fast. <S> When a human switches the switch multiple times, the delay between switches is almost never consistent; this is where a relay could come into use for making equally-delayed switches. <S> Relays would be useful in this case, because in your application the circuit running it could tell it to switch for a given time, and be far more accurate with its delays. <S> Hope this helps! <A> You can use a switch to operate a relay with a "safe" voltage level while the relay controls a voltage of several thousand volts, which is much safer than a switch directly controlling the several thousand volts...
You do not realize, or are glossing over the fact, that switches wear out just as relays do. Even relays are too slow for some applications; your servo+switch idea is going to be tens or hundreds of times slower than a relay with similar power handling capability.
Diminished data rate with logic output optoisolator I am using a logic output type optoisolator (H11L1S) that has a nominal data rate of 1 MHz, yet in practice I can't even achieve 100 kHz. Where am I going wrong? Is this maximum data rate unattainable? Here is the relevant circuitry: I am driving the LED at 2.8 mA, which is well above the minimum turn-on current of 1.6 mA (plus 10 % guard band suggested by the datasheet). Q18 is a prebiased NPN with 2K2 base resistance and 47K pull-down resistance. Below is a scope capture of the clock signal (ADC_SCK, yellow) and LED cathode (blue). Once the transistor turns off the cathode voltage takes more than \$5\mu s\$ to reach +3V3 -- i.e. the LED turns off very slowly -- such that the receiver does not register the change in state. This means the hot-side circuitry (ADC_SCLK, blue) sees a very slow clock: <Q> Take another look at the datasheet, specifically at the 'recommended' R L pull-up resistor value. <S> That's 270 Ohms, while you're using 15k. <S> That device sources very little (if any) current when the output goes high, so the rise time you're seeing is directly proportional to that R L pullup resistor you're using (combined with the gate capacitance of your Q40 and any parasitics). <A> The switching time test circuit from the linked datasheet shows that the LED is controlled with a push/pull driver with a rise/fall time of 10 ns: Your open-collector driver will not be able to manage that. <S> Consider using some logic inverter (e.g., (SN)74AHC1G14) instead. <S> Furthermore, the circuit uses a speed-up capacitor. <S> Fairchild's application note High Speed Optocoupler and its Switching Characteristics H11LxM, H11NxM shows that it should be 470 pF. <S> However, it should not be needed for 100 kHz. <S> The output pull-up resistor should be smaller. <S> Q40 just inverts the signal; you can omit it if you use a non-inverting buffer to drive the LED (or if you use a PNP to drive the anode).
<A> Once the transistor turns off the cathode voltage takes more than 5μs to reach +3V3 -- i.e. the LED turns off very slowly <S> The problem here is that the transistor doesn't turn off instantly after being in saturation. <S> You can reduce the effect by either reducing the base resistor value, putting a small speed-up capacitor in parallel with the base resistor, or by using a Baker clamp: <S> The FET contributes to the distortion of the ADC_SCLK signal as well, so I would see if it could be avoided or replaced by a buffer/inverter IC if you need to increase the fan-out. <A> The FET can pull up but it can't pull down. <S> There is only the 15k resistor to pull down. <S> It's the falling edge at R187 that is slow. <A> To speed up the opto's input, two things can be done. <S> Decrease R189 to 150Ω to supply ~14mA to the LED when ADC_SCK is active: (3.3V - 1.15V)/150Ω ≈ 14.3mA. <S> Add a small "speed-up" capacitance across R189. <S> For Z=50Ω and R189=150Ω, Xc should satisfy \$150Ω \parallel X_C = 50Ω\$, i.e. \$\frac{1}{150Ω} + \frac{1}{X_C} = \frac{1}{50Ω}\$, which gives \$X_C = 75Ω\$. <S> Since \$X_C = \frac{1}{2\pi fC}\$, plugging in 1 MHz for \$f\$ and solving for C: \$C = \frac{1}{2\pi \cdot 1\,\mathrm{MHz} \cdot 75Ω} \approx 2.2\$ nF. <S> You can also add a small speed-up cap across the unlabeled resistor on Q18-A's base. <S> However, note in the datasheet that it specifies the maximum \$t_{on}\$ and \$t_{off}\$ of 4µs. <S> \$\frac{1}{4\mu s}\$ = 250kHz, not 1MHz!
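As a quick sanity check, the speed-up capacitor arithmetic above can be reproduced in a few lines (a sketch; the 50 Ω target impedance and the 150 Ω R189 value are the answer's figures):

```python
import math

R_series = 150.0  # proposed R189 value, ohms
Z_target = 50.0   # desired effective source impedance, ohms

# R_series in parallel with X_C must equal Z_target:
# 1/R + 1/X_C = 1/Z  ->  X_C = 1 / (1/Z - 1/R)
X_C = 1.0 / (1.0 / Z_target - 1.0 / R_series)

# X_C = 1/(2*pi*f*C)  ->  C = 1/(2*pi*f*X_C), with f = 1 MHz
f = 1e6
C = 1.0 / (2 * math.pi * f * X_C)

print(round(X_C))         # 75 ohms
print(round(C * 1e9, 2))  # ~2.12 nF, so a standard 2.2 nF part
```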
Using a recommended pull-up resistor on the opto-isolator output is also essential if you expect the frequency to be near the nominal maximum.
Humidity and Moisture in Submerged Die-Cast Aluminum Enclosures I have an electronic enclosure (5" x 5" x 4") made of die-cast aluminum alloy (A360). I plan to use it under water (~5 ft) continuously. I first pressure test it: I monitor the pressure inside, while exposing it to 10 psi air pressure from outside for 2 min. If there is no pressure increase inside (<0.1 psi change), I consider the seals are good. Then I submerge it in 5 ft water. I have a moisture sensor inside the box. If any water droplet bigger than 1 mm forms inside, the moisture sensor will be triggered and send a signal out. After about 15 days underwater, the moisture sensor is triggered. I took it out and didn't see any sign of water inside. I didn't even see any water on the sensor. But verified the sensor works fine. So there must have been some moisture on the sensor surface. So the question is: Where did this water come from? When I sealed it, the humidity inside was about 45% and the water temperature is a constant 75°F. There is no heating element of any kind inside. <Q> Your 2-minute 10 PSI pressure test is laughable in terms of your actual requirements. <S> Let's work the numbers. <S> Your 4" × 5" × 5" box has a volume of about 1640 cc, or 0.00164 m³. <S> Dry air at 25°C has a density of about 1.184 kg/m³. <S> Therefore, you have about 1.94 g of air in your box. <S> The dew point is about 2.0% mass of water vapor at 25°C. <S> Therefore, it would only take about 38.8 mg of water to raise the humidity inside the box to the dew point, starting with perfectly dry air. <S> It took 15 days (21600 minutes) for that to occur, representing an infiltration rate of about 1.8 µg per minute. <S> On the other hand, raising the pressure inside the box by 0.1 PSI, starting from 15 PSI, requires adding 0.667%, or 12.9 mg of air.
<S> Over a test period of 2 minutes, this would require an infiltration rate of no less than 6450 µg per minute, more than 3 orders of magnitude greater than that of the water test. <S> Given the design lifetime of 10 to 20 years, you're going to need to test for an infiltration rate that's a factor of 500× less than what you're getting now. <S> Cut that in half again if you're starting out at 45% RH. <S> Silica gel can absorb about 30% of its mass of water. <S> Adding 100 g of silica gel inside the box would allow your 20-year infiltration rate to be about 1000× higher than it can be without it. <S> In other words, this could mean your current seals are good enough. <S> You also might consider pressurizing the box with 10 PSI of dry nitrogen before you drop it in the water. <S> A high-quality sealing grease on your mating surfaces and seals would probably help a lot, too. <A> You should definitely drop in a Silica Gel packet (like the ones you find in new water bottles when you buy them) as mentioned in the comments above. <S> That should help with minimizing the humidity inside the enclosure after it has been sealed and keeping it that way for a very long time. <S> The probable cause must be that the circuit heats up while the submerged enclosure is generally cold, causing condensation on the circuit parts due to the humidity in the air within the enclosure. <A> Did some testing and found the leakage through the seals is in the molecular flow range (less than 10^-7 std cc/sec). <S> This will keep maximum internal humidity around 70% @75F without any absorbents. <S> Our electronic guys are OK with this condition. <S> I still have 10 boxes underwater for ongoing tests and will let you know if anything changes. <S> I probably will do a temperature cycling test to see if anything will change.
Once again this is only a speculation but adding a silica gel packet will definitely give you better durability results.
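The infiltration-rate comparison above is easy to reproduce; a short sketch using the answer's assumed figures (1.184 kg/m³ air density, 2.0% water mass fraction at the dew point):

```python
# Box volume: 4" x 5" x 5", converted to cubic metres
volume_m3 = (4 * 5 * 5) * 0.0254 ** 3           # ~0.00164 m^3

air_mass_g = 1184 * volume_m3                   # ~1.94 g of dry air (1.184 kg/m^3)
water_to_dewpoint_mg = air_mass_g * 0.02 * 1e3  # ~38.8 mg to reach the dew point

minutes_underwater = 15 * 24 * 60               # 15 days = 21600 minutes
water_rate_ug = water_to_dewpoint_mg * 1e3 / minutes_underwater  # ~1.8 ug/min

# The 2-minute pressure test: +0.1 PSI on ~15 PSI of air in the box
air_added_mg = air_mass_g * (0.1 / 15) * 1e3    # ~12.9 mg of air
test_rate_ug = air_added_mg * 1e3 / 2           # ~6450 ug/min

print(round(water_rate_ug, 1))                  # ~1.8
```

The ratio between the two rates comes out in the thousands, which is the answer's "more than 3 orders of magnitude".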
Arduino Pro Micro Frying Up I made a PCB with two Arduino Pro Micro 5V on it as a control system for a machine. The PCB is powered by a 24VDC PSU and a part of that 24 VDC goes to a R-78C12-1.0 DC-DC converter. That gives 12VDC 1A to the RAW pin of the pro micro. It worked perfectly for a few weeks but one day the Arduinos randomly fried themselves when I powered the machine/PSU on. The voltage regulator was the part that was damaged. I couldn't figure out what caused it. I used a spare PCB and since the second Arduino isn't important/needed for now, I only used the one Arduino. I also placed a polarized capacitor between the 12VDC DC-DC Converter and GND. I did this thinking it will stabilize the 12 VDC going into the RAW pin. Now when I turn it on it works sometimes, other times the HR-SC04 Sensor and LCD screen don't work and I noticed the VCC pin on the Arduino dropped to 2.8V. When it was having this problem, I tried to connect a micro-USB cable in and that caused the Arduino to fry again. It was the voltage regulator again that was burned and blackened. I looked online to see if others had a similar problem and couldn't get an answer. The only option I can see right now is to get a DC-DC Converter that outputs 6.5V instead of 12. Attached a screenshot of the schematics I've drawn on the easyEDA software. Would anyone have any idea what caused this fry up? Schematic of the PCB The PCB where the Arduino Pro Micros burned out Close up of the voltage regulator. It says "4BMD" on it before it was blackened. The PCB mounted on the machine. There are plastic spacers so there isn't electrical contact between the sheet metal and the PCB. <Q> The Torex XC6204 LDO regulator (markings '4BMD') is rated for 150 mA at 10 V max. <S> Recommended usage is 6 V ~ 9 V. <S> Updated (information overload ahead): <S> The 'Pro Micro' is a SparkFun design, as far as I know at least, but the boards you are using are clones. <S> The perils of Open Hardware.
<S> PDF of the SparkFun product, and schematic can be found here: https://www.sparkfun.com/products/12640#documents-tab <S> The GitHub: https://github.com/sparkfun/Pro_Micro <S> The genuine SparkFun boards use a Microchip MIC5219 regulator in the current revision, with an absolute max of 20 V, and 500 mA. <S> Those parts can be run continuously at 12 V. <S> I don't remember where (schematic?) but I found a revision note reference to a 400 mA part that was likely in the original design (there were two previous revisions), that has since been upgraded by SparkFun. <S> Notice that the poly-fuse on the boards is also marked '4' for 400 mA, likely for an older revision board, and way too high for the Torex XC6204! <S> The regulators on the clone boards are not necessarily Torex XC6204 parts, but may be clones themselves of the Torex part, or a similar related part. <S> The last marking character 'D' is supposed to be the production batch, so I would expect it to change, but it doesn't seem to. <S> My Google-fu is strong, and I can be relentless. <S> I recently purchased 10 similar boards for under $1 each, and did my research before applying power. <S> I found a listing on an auction site 'yoycart' (new to me) for reels of the Torex XC6204 parts when I searched for 'regulator 4bmd'. <S> I knew someone would mention the absolute max of the Torex XC6204 part being 12 V, but yes, you don't want to run at that continuously; it's like driving with the accelerator-pedal always on the floor! <S> You must account for power supply ripple, as well as thermal derating margin, especially with an enclosure. <S> Hence the datasheet indicates 10 V max usage, and I recommended 9 V. Note that the 3.3 V boards should have an 8 MHz crystal, and the 5 V boards a 16 MHz crystal. <S> Running at 16 MHz at 3.3 V is out-of-spec for the ATmega32U4. <A> Several problems: As noted in another answer, the regulator XC6204 on these modules is rated to a maximum of 10V. <S> You cannot supply it with 12V.
<S> This can be fixed by swapping out the R-78C12-1.0 for the 9V equivalent, R-78C9.0-1.0. <S> Consider adding an EMC filter as advised by the R-78 datasheet. <S> You should add a flyback diode across the relay coil. <S> Neither the transistor nor the DC/DC will like the reverse spikes. <S> Use a TVS (rated somewhere around 30-33V) or similar fast diode. <S> You could use the same one on the 24V input as well. <S> Also consider adding small series resistors towards buttons and connectors, to avoid ESD damage. <S> Never fun to troubleshoot such problems. <A> The regulators on those are often marginal compared to authentic parts and will sometimes go up in smoke around 12V. You might be able to replace the regulator with a genuine part from the BOM and have no trouble. <S> You may also put a diode in line with the 12V input to bring that voltage down a bit and protect against reverse power. <S> Or just buy genuine assemblies from the OEM and save yourself a bit of time and frustration.
I suspect you were using knock-off Arduino Pro Micro modules. At a bare minimum, you'll at least want some 10uF cap on the 24V, instead of the 1uF currently placed there.
Input Protection Diodes Functioning with Circuit Breaker? I was wondering if I was correct in assuming these diodes were being used for ESD protection on the input of a power circuit. I'm a little bit confused what the point of the diodes is when there's a circuit breaker before them too, if anyone has any idea. The breaker is rated at 5 amps which is less than the maximum rating of the diodes. The only thing I can think of is the diodes are handling up to a certain level of transient and if the transient becomes too large the circuit breaker will kick in and protect the diodes and obviously the rest of the circuit as well. I know usually there is a resistor in this type of network from what I've read online but there doesn't seem to be one in this setup, which I was also confused about. simulate this circuit – Schematic created using CircuitLab <Q> There is a mechanism that kicks the circuit breaker into action, and it relies on current (not voltage) in the traditional circuit breaker. <S> When the current reaches a certain level a coil triggers the spring-loaded switch to turn off. <S> Some breakers also have a bimetallic strip that triggers when it gets too hot in response to current. <S> Source: https://electronics.howstuffworks.com/circuit-breaker2.htm <A> Circuit breakers are mainly intended to prevent fires. <S> They don't react quickly enough to block ESD pulses, and they don't react quickly enough to prevent many failure modes due to over-current in the protected equipment. <S> What they do is prevent excess currents from flowing for minutes, hours, or days, which could overheat the protected wiring or equipment to the point of igniting nearby materials and burning a building down. <S> ESD protection diodes can react much more quickly, absorbing energy from an ESD pulse that might last only nanoseconds or microseconds. <S> Depending on the kind of power source, the diodes in your circuit might be protecting the load from a reversed power source.
<S> The diodes having a higher current rating than the breaker threshold suggests this is more likely to be the function of the diodes, rather than ESD protection. <S> (Yes, this would be using the breaker for a slightly different function than its usual one of fire prevention) <A> With the voltage source connected as shown in the schematic, the zener diodes are reverse biased and will (try to) clamp the input voltage to about the sum of their individual zener voltages. <S> If for example D1 and D2 are 4.7V zeners and D3 a 6.2V zener, the clamp voltage will be about (4.7+4.7+6.2)V = 15.6V. If the input voltage is a standard 15V, the zeners won't be clamping. <S> If the input voltage were to become 30V, the zeners enter breakdown, clamping the input voltage at 15.6V or a bit higher, depending how much current is running through the zeners. <S> Depending on the output impedance of the voltage source, a very high current can run through the zeners. <S> Therefore, a circuit breaker is added to prevent the zeners from burning due to too high a current. <S> If the zeners were to get damaged, they would likely fail open and the circuit would still see the 30V of my example above. <S> That is the reason that the circuit breaker is rated less than the (lowest) maximum rating of the zener diodes, so it trips before the zener diodes get damaged. <S> The zeners also serve a function as reverse polarity protection (as The Photon already pointed out). <S> Then, the zener diodes clamp the (negative) voltage at about the sum of their individual forward voltages. <S> Again, this could cause a very high current to run and the circuit breaker should act before the zener diodes get damaged. <S> Why a circuit breaker instead of a resistor? <S> A resistor could also be used instead of the shown circuit breaker. <S> When the input voltage becomes higher than the clamping voltage, the zener diodes will try to clamp again, causing a current.
<S> This current also flows through the resistor, which causes a voltage drop, such that the input voltage minus this voltage drop becomes about the clamping voltage again (or a bit higher depending on the current). <S> A drawback of using a resistor is that it will always cause a voltage drop (depending on the load current), even when the input voltage is below the clamping voltage. <S> For that reason, a circuit breaker may prevail over a resistor.
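The clamp-voltage arithmetic above is simple enough to sketch (the 4.7 V / 4.7 V / 6.2 V values are the example from the answer; the ~0.7 V forward drop per diode is an assumed typical figure):

```python
zeners = [4.7, 4.7, 6.2]  # example zener voltages from the answer

# Reverse-biased stack: clamps near the sum of the zener voltages
reverse_clamp = sum(zeners)
print(round(reverse_clamp, 1))  # 15.6 V

# Reversed supply: the stack conducts at roughly the sum of the forward
# drops, so a large current flows and the 5 A breaker trips first
forward_clamp = 0.7 * len(zeners)
print(round(forward_clamp, 1))  # 2.1 V
```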
If the power source is reversed, a high current will flow through those diodes, which will trip the breaker.
How best to lower LED garden light intensity We have a house with an outdoor staircase lit by warm white LED strip lights. However, the lights are "too bright" and we want to lower the intensity of the LED lights. On another loop of the garden lights, the same LED strip lights are lower in intensity because there are more LED lights on the same loop. So, I was wondering what resistor (or other device) would be the smartest to use? A dimmer normally works with 110 V but that is not possible to use here. It has to be something on the low voltage loop, so I was thinking of adding a resistor or similar to that specific loop. The loop it is currently on is a 5 V loop with 8 LED lights pulling 6 W each (=48 W). Anyone with experience/suggestions? <Q> Outdoor hermetically sealed supplies don't have this adjustable screw (potentiometer). <S> In this case replacing the supply may be necessary. <S> It can be cheaper or at least not more expensive than buying a PWM dimmer. <S> PWM dimmers consume more electricity but are a better fit for accessible, user-friendly, regular adjustment. <S> For a one-time adjustment, setting a lower voltage is better and more climate friendly (LOL). <S> The change in voltage has to be of the order of 10%. <S> If you replace the supply by a supply with 30% less voltage, it will be too much. <S> You can reduce voltage safely. <S> But you can't increase it. <A> Using a resistor won't be the smartest thing to do here. <S> It will dissipate a lot of heat depending upon the dimming requirements. <S> I suggest using PWM to dim the LEDs. <S> Something like this: You can make one yourself if you are patient enough. <S> Alternatively buy it online. <S> Make sure your PWM dimmer works at 5 V. <S> If you are making one yourself, there are two ways to do it - using a microcontroller like an Arduino, or plain hardware based on a 555 timer.
<S> TIP: If you happen to be laying long cables to run multiple LEDs (or a long LED strip) from a single power supply source, consider using 24 V LEDs. <S> It will give you less voltage drop in the cable and your lighting will look more uniform. <A> If you can adjust the voltage of the supply to the LEDs, that will probably be the simplest fix. <S> If not, adding a resistor is a good idea, though picking the right resistor could be tricky. <S> But if your 48W and 5V figures are accurate, about 4.7 ohms / 2W sounds like a good start.
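To see why the series-resistor approach runs hot, here is a rough sketch using the question's figures (48 W at 5 V). The candidate resistance values are illustrative only, and the real current after adding a resistor depends on the strip's V-I curve:

```python
P_load = 48.0   # W, from the question
V = 5.0         # V supply
I = P_load / V  # 9.6 A at full brightness

# Heat dissipated in a series resistor at (roughly) full-brightness current
for R in (0.05, 0.1, 0.2):  # illustrative values, ohms
    drop_v = I * R
    heat_w = I ** 2 * R
    print(f"{R} ohm: drops {drop_v:.2f} V, dissipates {heat_w:.1f} W")
```

Even a fraction of an ohm turns watts into heat at nearly 10 A, which is why a PWM dimmer or a lower supply voltage is usually the cleaner solution.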
The best and most economical way is to reduce the voltage of the supply with the small orange screw near the terminals.
Why do we like LFSR and PN-sequence in FPGA? Everybody has learnt that we like LFSRs in FPGAs because of their simplicity and FPGA-like structure. A PN-sequence generator is built up from LFSRs. (So we would think that this is an FPGA-like generator...) BUT a PN-sequence generator generates a new bit in each clock cycle, therefore we have to wait n clock cycles to get an n-bit long word. So to get a new PN-sequence word in each clock cycle we need to implement a much more complicated structure. This repo contains such an "lfsr", however the implemented design won't look like an LFSR. My questions: Am I right? (Is this statement correct?) It isn't a trivial task to generate a PN-sequence word in each clock cycle. Why does the industry prefer the PN-sequence over other random generators? <Q> Why does the industry prefer the PN-sequence over other random generators? <S> Because autocorrelation would give only one exact peak. <S> Doing cross-correlation of different PN codes would give a zero result, a kind of selective receiving. <S> You send a sequence to the DUT and you receive the response; used in radar, sonar, model identification,... etc. <S> It isn't a trivial task to generate a PN-sequence word in each clock cycle. <S> Not needed, as the transmitter sends the signal serially. <S> EDIT: <S> Say we have a boat with 4 sonars placed at the sides, bow and aft. <S> If each of them sends a different PRBS, then by doing cross-correlation of the received signal with a known code you'll selectively reconstruct the received response. <S> EDIT2: <S> From your linked document, there is also a nice description of the application for these codes: <S> A PRBS (Pseudo Random Binary Sequence) is a binary PN (Pseudo-Noise) signal. <S> The sequence of binary 1's and 0's exhibits certain randomness and autocorrelation properties. <S> Bit-sequences like PRBS are used for testing transmission lines and transmission equipment because of their randomness properties.
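The single-peak autocorrelation property mentioned above can be demonstrated with a short sketch; the 5-stage maximal LFSR used here is an assumed example, not taken from the question:

```python
def mseq_31():
    """One period (31 bits) of a 5-stage maximal LFSR, mapped to +/-1."""
    x = [1, 0, 0, 0, 0]
    out = []
    for _ in range(31):
        out.append(1 if x[4] else -1)
        x = [x[1] ^ x[4]] + x[:4]  # feedback from stages 2 and 5
    return out

def circ_autocorr(s, lag):
    """Circular (periodic) autocorrelation at the given lag."""
    n = len(s)
    return sum(s[i] * s[(i + lag) % n] for i in range(n))

seq = mseq_31()
print(circ_autocorr(seq, 0))                          # 31: the single exact peak
print({circ_autocorr(seq, k) for k in range(1, 31)})  # {-1} at every other lag
```

This flat -1 floor at all nonzero lags is exactly what makes matched-filter detection (radar, sonar, CDMA-style selective receiving) work.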
<A> We use LFSRs because it is a compact way of storing long sequences. <S> The maximum length of the sequence is \$2^n - 1\$ . <S> Hence you can have a PN code of length a million that can be generated with 20 flip-flops. <S> For shorter codes you don’t need to use LFSRs. <S> You can just store them in memory. <A> PN-sequence generator generates a new bit in each clock cycle, therefore we have to wait n clock-cycle to get an n-bit long word. <S> This is one way to do it. <S> It's also possible to design logic in the FPGA to generate multiple bits of the sequence at a time. <S> For example, if your flip flop outputs are called x[1] , x[2] , ... , x[5] like in your diagram, and your next output bit y[0] is given by x[5] . <S> Let's say you want to generate two bits of the sequence at a time. <S> Well you know that the "future" output y[1] would be given by x[4] . <S> So generating your two output bits at a time is easy. <S> But you also need to update the state by several cycles at a time. <S> In your 1-bit-at-a-time version, you have x[1] = <S> x[2] <S> XOR x[5] . <S> To generate two bits at a time, you work out what each state bit would be after two cycles of your serial version. <S> Instead of getting x[4] , x[5] would be updated with the value from x[3] : x[5] = <S> x[3] . <S> And x[4] would get the value from x[2] : x[4] = <S> x[2] . <S> Similarly for x[3] . <S> The first couple bits are slightly trickier. <S> You'll take the new state of x[1] from the serial machine and shift it along one step to get x[2] = <S> x[2] <S> XOR x[5] . <S> And work out what x[1] <S> will be after two cycles, it's x[1] = <S> x[1] <S> XOR x[4] . <S> Generating more bits at a time gets trickier, because you need to generate new inputs to the head of the shift register from bits that aren't even stored yet, but they can in fact all be calculated from the current state of the machine.
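The two-bits-per-clock derivation above can be sanity-checked in software; a sketch using the answer's 5-stage register with feedback x[1] = x[2] XOR x[5]:

```python
def serial_step(x):
    """One clock: output x[5], shift, feed x[2]^x[5] into x[1]."""
    x1, x2, x3, x4, x5 = x
    return (x2 ^ x5, x1, x2, x3, x4), x5

def double_step(x):
    """Two output bits per clock, using the combinational update from the
    answer: y0=x[5], y1=x[4]; x[5]<=x[3], x[4]<=x[2], x[3]<=x[1],
    x[2]<=x[2]^x[5], x[1]<=x[1]^x[4]."""
    x1, x2, x3, x4, x5 = x
    return (x1 ^ x4, x2 ^ x5, x1, x2, x3), (x5, x4)

# Walk the whole sequence and check the parallel update against two serial clocks
state = (1, 0, 0, 0, 0)
for _ in range(31):
    s1, a = serial_step(state)
    s2, b = serial_step(s1)
    p_state, bits = double_step(state)
    assert p_state == s2 and bits == (a, b)
    state = s1
print("parallel update matches two serial clocks")
```

The same substitution trick extends to wider outputs: each extra bit per clock means composing the serial transition one more time and flattening the result into XOR networks.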
Simple bit-sequences are used to test the DC compatibility of transmission lines and transmission equipment.
PCB routing mismatches the schematic I have designed a board in KiCad and am now trying to route the tracks. However there is something I don't understand. Different pins on a microcontroller are connected to the 3.3V power source through a capacitor or a resistor, like in the schematic below. simulate this circuit – Schematic created using CircuitLab When I then place the components and try to route the tracks, KiCad proposes to connect the pins of the microcontroller to the closest component. For example, if I place C1 in front of P2 and C2 in front of P1, it will tell me to route P1 to C2 and vice versa. It's the same with anything connected to +3.3V on my schematic. I think it's because I use the "+3.3V" symbol of KiCad, but is it acceptable? Or do I have to use a "dedicated" +3.3V label for each pin? Thanks :-) <Q> It is all about the names of the nets (the interconnections between the parts). <S> In the logic of most circuit design programs, each piece of interconnection line has a name, and all lines with the same name are electrically connected together. <S> This allows omitting interconnections if they would make the schematic less readable. <S> For your desired behavior, the interconnections from the two sources need distinct names. <S> Often, pins of sources automatically give names to attached interconnections, and I guess that's what happened here. <S> One often needs two identical voltages, one for analog, and one for digital parts of the circuit. <S> For convenience, the part libraries often have two similar parts for this. <S> (this answer is quite universal, and applies to other design tools like EAGLE, too) <A> This is normal. <S> It will connect components to 'nets'. <S> Most software will show connecting nets. <S> It is therefore up to you to lay out the PCB correctly and place the capacitors in close proximity to the intended pins on the microcontroller.
<S> The software is unable to determine where you want components placed on your PCB, hence all it will do is show the net connections, so yes, this is acceptable and it is normal. <A> P1 and P2 are both power pins connected to 3.3 V and it doesn't matter whether you connect C1 to P1 or P2. <S> In complex schematics, it's common to place all decoupling capacitors at one end as shown: You can see a bunch of decoupling caps in the bottom right corner. <S> During PCB layout, it's the designer's duty to make sure all pins get their decoupling cap. <S> Order doesn't matter. <A> As others have told you, the behaviour you see is how it is supposed to work when you draw the schematic the way you did. <S> There is however the option to use so-called net-ties to clarify to KiCad that the capacitor is to be connected directly to the specified pin (while still allowing it to also be connected to 3V3 in the end). <S> Right now this is sadly only available as a workaround. <S> You do it by placing the net-tie symbol between the cap and +3V3 and then also having a footprint to place on the PCB. <S> (The next version will most likely come with a better option.) <S> This however comes at the downside of needing additional power flags to tell KiCad that the output of the net-tie is powered.
Because P1, P2, C1 and C2 are all on the 3.3V net, they will all be connected to each other, hence if you put a component anywhere near any other component that is connected to the same net, it will produce a connection line showing those components are connected to each other. As you drew it, it is very clear that there are two distinct voltage sources with their positive terminals not being connected together.
Heatsink on top of SMD plastic IC or bottom of PCB? I often design power modules and always wonder if there is some kind of rule for how to properly cool SMD devices. I always look at final products from other companies, and some place the heatsink on the bottom of the PCB, while others place them on top - directly on SMD components with plastic packages. Many power ICs and MOSFETs do have large thermal pads on the bottom, and thermal transfer should be really good to the bottom. But the ~2mm of FR4 PCB in between is also a good thermal insulator... Compared to the ~1 mm or less of epoxy resin on the IC package, I don't think it could be worse... To be more confusing, some power ICs do not even have a proper thermal pad - e.g. the TPS63070 that I am working with right now. What is better? Adding a heatsink on the bottom or on top? Does it depend on how thick the Cu layers are? <Q> The general rule is, there is no general rule. <S> In every case, you need to understand the complete thermal path from die to ambient. <S> Some SMD parts have thermal pads, in which case, you can use lots of copper vias to conduct the heat to the other side of the PCB where you can deal with it more effectively. <S> Other SMDs use only their lead frames to get rid of heat, in which case, you can do much the same, with the additional requirements for electrical isolation. <S> Applying a heatsink directly to an epoxy package is the least desirable — but sometimes the only available — option. <A> Here's a somewhat general rule. <S> When an SMT transistor or IC has a thermal pad at the bottom, it's designed to heatsink to the PCB. <S> (The components without thermal pads heatsink to the PCB through their pins.) <S> The thermal resistance from the die to the thermal pad is much lower than to the top of the IC. <S> In the case of ICs, the thermal pad is at ground potential, and you can connect it to a ground plane which is a good heat sink.
<S> In the case of transistors, the thermal pad often can't be connected to a ground or power plane. <S> So we heat-sink the IC to the PCB. <S> Then how do we dissipate the heat from the PCB? <S> A PCB can dissipate some power by itself. <S> If the PCB is large enough, and the power which it has to dissipate is not too large, and the ambient temperature is not too high, then the PCB itself may be enough. <S> If the design is dense, and the PCB can't dissipate enough power by itself, then additional heat sinking is needed. <S> You can heat sink the PCB to a metal enclosure. <S> You can add SMT heat sinks. <S> You can bolt on heat sinks. <S> There are different ways to do this, which depend on different requirements and situations. <S> This is where it becomes tough to speak about general rules. <S> additional reading: <S> How does power dissipation for surface mount components work? <S> Optimize heat sink design - connect cooling pad on PCB backside by vias <A> You probably don't need a heatsink, given that the junction-to-board thermal resistance is only 13C/W and the part supports an operating temperature of 125C. Assuming the highest output voltage and current, which is 2 A and 9 V, this would amount to 18W of power through the device. <S> With a lower-bound efficiency of 80%, the worst-case power dissipation would be around 3.6W. A 3.6W dissipation would amount to a 50C temperature rise above the PCB temperature. <S> This means if the PCB temperature were 40C, the part would be 90C, which would still be well within its operating temp. <S> This is also assuming no heat lost to air; the air contributes an additional 20% heat loss through the top. <S> There are also some caveats to this figure, one being that some of the inefficiency is due to the inductor and some power will be dissipated there, so this worst case scenario should be less than 50C.
<S> Keep in mind that a heatsink has 4x less of a thermal pathway than the pins, so the best way to get heat out of the part is through the pins, and a good thermal PCB design. <S> If not, then follow the recommended board layout, and use good thermal PCB design: <S> Source: http://www.ti.com/lit/ds/symlink/tps63070.pdf
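The worst-case estimate above can be written out as a short sketch (figures taken from the answer: 18 W through the part, 80% lower-bound efficiency, 13 °C/W junction-to-board):

```python
p_through = 18.0  # W: 2 A at the highest output voltage
eff = 0.80        # lower-bound converter efficiency
p_diss = p_through * (1 - eff)  # ~3.6 W lost in the part

theta_jb = 13.0   # degC/W, junction-to-board thermal resistance
rise = p_diss * theta_jb        # ~47 degC rise above the board

board_temp = 40.0
junction = board_temp + rise    # ~87 degC, inside the 125 degC limit
print(round(p_diss, 1), round(rise), round(junction))  # 3.6 47 87
```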
You should use a heatsink if the board temperature is going to be high, or if the thermal ambient environment is bad (like higher than 50C).