Altium: Designing a switch with 3 pads, 2 positions but only single contact I want to design a switch which commutes a single contact between 2 pins. I have drawn a schematic with 2 pins. In real life, the switch has 3 pads, one of which is unconnected. I can't associate the physical design with the logical one because they don't have the same number of pads/pins. How can I do this? Thanks. <Q> First: You want a Single Pole Single Throw switch (SPST). <S> The switch in the Altium library is likely Single Pole Double Throw (SPDT). <S> Simply delete one of the throw pins from the PCB footprint. <S> You need to edit the switch PCB footprint, select the unwanted pin, delete it, and save. <S> You can copy the whole library containing the switch, or just create an Altium PCB file and place the switch. <S> Then use Design -> Create Library From PCB. <A> Either change the part on the schematic to add another pin, or change the footprint of the device to include an unconnected copper landing pad (and a hole if this is through-hole), remembering to open up the solder resist pattern. <A> Hmm, in fact, it works with more pins in the physical design than in the logical one. <S> I didn't read the error message correctly... <S> Sorry. <S> Altium was unable to compile the library because it was open in two Altium sessions. <S> Nothing to do with unmatched pads....
Then you just edit the switch model and delete the unused pin.
VFD IGBT blew up and I can't find a replacement part So here's my situation: a few weeks ago I received a 2.2 kW variable frequency drive from a Chinese seller and it worked perfectly. I then watched a video on YouTube by AvE where he takes one apart (not the same model). He says that if you want to increase the lifespan of your VFD you should check that there is thermal paste on everything that is in contact with the heatsink. So I take mine apart to apply good thermal paste to it. In between the IGBTs and the heatsink there are thermal pads (the thermally conductive rubber ones), rather than thermal paste. At the moment I think that's stupid, because they do not have as good thermal conductivity as thermal paste. I remove them and apply some thermal paste. Once I have screwed it back together again, I plug it into the wall and press the start button without any load on it. I hear a pop and then see an error message on the display. I take it apart again to see that one of the IGBTs has blown open. My guess is that the IGBTs weren't supposed to have electrical contact with the heatsink… Now I'm trying to find a replacement part, but I can't find one with the exact same specs as the broken one. This is the one that was installed on the VFD and blew up. I'm not an electrician, nor do I have much experience working with IGBTs. This is the closest one I could find on eBay, but the gate-emitter voltage is ±20 V instead of the ±30 V mine is rated for, so I'm not sure if it will work. Does anyone have any suggestions or can provide any help? (If this does not follow the format of a good question, I guess it could be rephrased as "What are the important things to consider when replacing IGBTs?") <Q> I would be concerned that the diode junction-to-case thermal resistance is 2.0 °C/W for the Fairchild device vs. 0.85 °C/W for the original device. 
<S> In a VFD driving an induction motor the diode carries a significant portion of the current because of the reactive component of the motor current. <S> If you decide to try the Fairchild device, you should probably set the VFD switching frequency to the lowest value. <S> For any electronic product, the best way to improve the life is to keep the internal temperature as low as possible. <S> Be careful about mounting it in another enclosure, in a confined space or in direct sunlight. <S> Make sure the space where it is operated is well ventilated. <S> Consider setting the switching frequency to the lowest value, although that may cause the motor to operate at a higher temperature. <A> This is just my opinion, but if this was a new inverter rated 2.2 kW and it is using discrete IGBTs, that is a design that has not been used in mass production for a decade or so. <S> VFD manufacturers in that size range switched to using what are called IPMs (Intelligent Power Modules) that house all 6 diodes and 7 IGBTs along with the firing circuits, all in one potted device about the size of a credit card. <S> You can't see or replace any individual components in the IPM, nor is it worth messing with, because the replacements cost more than an entire new drive. <S> So that means yours is either old stock that was sold as new, which likely explains why you can't find the part, or a very poor knock-off of a discarded technology. <S> And that advice you saw on YouTube? <S> Worthless drivel... <A> It's similar, isn't it? <S> You should be unlikely to fall foul of the max gate voltage. <S> Given the nice tight Vge(th) spec of the original, and that it costs power to overdrive a gate, they're unlikely to be deliberately driving more than (say) 15 V into it, but the 30 V max spec may mean that they are more lax about overshoots, so it would be worth measuring. 
<S> Note the original has an integrated gate resistor, so you would be well advised to add one of those to tame things. <S> The original part has a single-cycle switching loss of 3.5 mJ, compared to 5.06 mJ in the replacement. <S> This spec sweeps up a whole bunch of assumptions, but both figures are taken for the same set of conditions, so it is a fair comparison. <S> This means the replacement part could dissipate almost 50% more power than the original. <S> The thermal resistance junction to case is only 10% less, so the junction is likely to get significantly hotter. <S> Whether the design has enough margin to soak this up without complaint ... ? <S> The gate charge is higher at 200 nC rather than 130 nC, which means it would need to be driven harder. <S> With the same drive it will switch more slowly, which echoes the switching loss comparison. <S> You might get away with it. <S> As you have a broken VFD, and the replacement costs only a few pounds, it might be worth risking it, perhaps with careful monitoring of the conditions when you next apply power. <S> It would be better, however, to continue the search for a replacement with lower, rather than higher, losses. <A> This part is cheap and reasonably common in China. <S> Your best bet, if you're not planning on visiting soon, is to ask the manufacturer or seller to send you some. <S> They can buy them cheaply and will have them in hand in a few days; getting them to you will be harder. <S> However, I don't want to get your hopes up - what you did was quite traumatic to the unit, and other things may be blown as well, starting with the power supply and maybe the driver parts. You may wish to purchase a new VFD and attempt the repair for a spare. <S> Maybe the seller can throw the parts into the VFD package so shipping is free. <S> To the general question: if you pick an IGBT with at least as high a voltage rating, at least as high a current rating, and similar gate charge (and package), you will probably be okay. 
<S> Power rating should also be similar or higher, under the same conditions. <S> Also look for special features such as an integral co-packaged diode, short-circuit rating and so on. <S> Some packages have overmolding so you don't need the silicone pads (but thermal paste is mandatory for good thermal performance). <S> Incidentally, the silicone pads used in some of these low-end products are quite inferior thermally. <S> Usually the gate drive voltage is nominally 15 V or so, but transients may be greater, so the gate rating might or might not be okay. <S> Substitution of stressed parts such as these involves quite a few considerations, and we don't know what the drive circuitry looks like, so an exact replacement is safer.
Trying to improve the life of a product by tinkering with the design is never a good idea.
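The loss and thermal-resistance comparison in the answer above can be sanity-checked with a short script. The switching frequency and the junction-to-case thermal resistances below are illustrative assumptions, not datasheet values for either part; only the per-cycle switching losses (3.5 mJ vs. 5.06 mJ) come from the answer.

```python
# Rough comparison of switching-loss heating: original vs. replacement IGBT.
# Per-cycle losses are the figures quoted in the answer; f_sw and the
# thermal resistances are illustrative assumptions.

def switching_power(loss_per_cycle_J, f_sw_Hz):
    """Average power dissipated by switching alone."""
    return loss_per_cycle_J * f_sw_Hz

def junction_rise(power_W, r_th_jc_C_per_W):
    """Junction-to-case temperature rise in degrees C."""
    return power_W * r_th_jc_C_per_W

f_sw = 8_000  # Hz, a plausible low-end VFD carrier frequency (assumption)

p_orig = switching_power(3.5e-3, f_sw)   # original: 3.5 mJ per cycle
p_repl = switching_power(5.06e-3, f_sw)  # replacement: 5.06 mJ per cycle

# The answer notes the replacement's Rth(j-c) is only ~10% lower,
# which cannot compensate for ~45% higher switching loss.
rise_orig = junction_rise(p_orig, 1.0)   # assumed 1.0 C/W
rise_repl = junction_rise(p_repl, 0.9)   # assumed 10% lower

print(f"original:    {p_orig:.2f} W switching loss, {rise_orig:.1f} C rise")
print(f"replacement: {p_repl:.2f} W switching loss, {rise_repl:.1f} C rise")
```

This also shows why the advice to drop the carrier frequency helps: switching loss, and therefore the junction temperature rise, scales directly with `f_sw`.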
Circuit of Instrumentation Amplifier Please tell me what will happen if the instrumentation amplifier is constructed as this circuit, in which the two resistances R1 are grounded in between. What is the problem with this circuit? <Q> In short, with the split and grounded R1 circuit, A1 and A2 amplify both common-mode and differential-mode signals by the same gain. <S> [Schematic created using CircuitLab] <S> So their outputs will easily saturate when trying to extract a tiny differential-mode signal superposed on a high common mode. <S> On the other hand, the "classical" floating-R1 circuit always amplifies common-mode voltages by unity gain, whatever differential-mode gain you set using R1/R2. <S> [Schematic created using CircuitLab] <S> That's the clever thing about it: it helps extract a "small" differential signal on a "high" common mode, and improves CMRR and dynamic range. <A> With ideal resistors and ideal amps, that's a perfect diff amp. <S> Usually the R1+R1 resistor is not split and grounded in the middle. <S> That allows the adjustment of a single resistor to set the overall gain. <A> As Olin says, if we had ideal op-amps and ideal resistors then it would be a perfect diff amp, but we don't have ideal resistors and we don't have ideal op-amps. <S> With your circuit the two inputs are effectively amplified separately, and this causes several problems. <S> If the first-stage gain is large and the common-mode signal is large, then, as Carloc points out, saturation of the first stage is likely to be a problem. <S> Similarly, if there is any imbalance in the second-stage diff amp, CMRR will suffer badly. <S> Your points 2 & 3 apply to both topologies. <S> They apply slightly to the conventional topology and far more severely to the OP's topology. <S> As a first approximation let's consider the case where the op-amps are ideal but the resistors are not. 
<S> Regarding the first stage: in the conventional topology the common-mode gain of both the top and bottom amplifiers will be 1 regardless of the resistor values. <S> Therefore the first stage will not convert common mode to differential mode. <S> On the other hand, in the OP's topology the top and bottom amplifiers can have different common-mode gains and hence can convert common mode to differential mode. <S> Regarding the second stage: its common-mode rejection will be the same in both cases. <S> However, in the conventional topology it will only see the original common-mode voltage, while in the OP's topology it will see the amplified common-mode voltage. <S> So the impact on overall common-mode rejection will be much greater in the OP's topology than in the conventional topology.
If there is any gain imbalance between the two first-stage amplifier circuits then CMRR will suffer badly.
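The first-stage behavior described above can be made concrete with an ideal-op-amp model. The resistor values and input voltages below are made-up examples chosen to give a differential gain of 100; the point is the contrast between the two topologies, not the specific numbers.

```python
# Ideal first-stage models of a 3-op-amp instrumentation amplifier.
# v1, v2: inputs; R1: gain-setting resistor; R2: feedback resistors.
# Values are illustrative only.

def classic_first_stage(v1, v2, R1, R2):
    """Floating R1: common-mode gain is 1, differential gain 1 + 2*R2/R1."""
    vcm = (v1 + v2) / 2
    vd = v1 - v2
    g_d = 1 + 2 * R2 / R1
    return vcm + g_d * vd / 2, vcm - g_d * vd / 2

def grounded_first_stage(v1, v2, R1, R2):
    """Split, grounded R1: each amp is a plain non-inverting stage, so
    common-mode and differential signals both get gain 1 + R2/(R1/2)."""
    g = 1 + R2 / (R1 / 2)
    return g * v1, g * v2

# A 10 mV differential signal riding on a 5 V common mode:
v1, v2 = 5.005, 4.995
print("classic :", classic_first_stage(v1, v2, R1=1e3, R2=49.5e3))
print("grounded:", grounded_first_stage(v1, v2, R1=1e3, R2=49.5e3))
```

With the classic topology the outputs sit near the 5 V common mode with the difference gained up; with the grounded topology the 5 V common mode is itself amplified 100x, far beyond any realistic supply rail, which is exactly the saturation problem the answers describe.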
How do I eliminate latches in FSM Verilog implementation? I'm trying to create an FSM that left-shifts a register until the MSB is 1 while counting the number of shifts completed. However, I have an issue with latches because I don't re-assign each register on a state change; namely, the shift and ctr registers (see the "if (shift[3])" statements). To get rid of the latches, I know that I need to assign these registers, but I don't want their values to change at the end--I want them to remain constant. How do I achieve this?

`define RST  2'b00
`define LSH1 2'b01
`define LSH2 2'b10
`define DONE 2'b11

module leftshift(clk, rst, x, shift);
  input [3:0] x;
  input clk, rst;
  output [3:0] shift;

  reg [1:0] state, nextstate;
  reg [3:0] shift; // reg holding shifted bits
  reg [2:0] ctr;   // ctr keeps track of how many shifts until completion

  always @ (posedge clk)
    if (rst) state <= `RST;
    else state <= nextstate;

  always @ (state, x, shift[3])
    case (state)
      `RST: begin
        shift <= x;
        ctr <= 0;
        nextstate <= `LSH1;
      end
      `LSH1: begin
        if (shift[3]) begin
          //shift <= //don't change;
          //ctr <= //don't change;
          nextstate <= `DONE;
        end else begin
          shift <= {shift[2:0], 1'b0};
          ctr <= ctr + 1;
          nextstate <= `LSH2;
        end
      end
      `LSH2: begin
        if (shift[3]) begin
          //ctr <= //don't change
          //shift <= //don't change
          nextstate <= `DONE;
        end else begin
          shift <= {shift[2:0], 1'b0};
          ctr <= ctr + 1;
          nextstate <= `LSH1;
        end
      end
      `DONE: begin
        //shift <= //don't change;
        //ctr <= //don't change;
        nextstate <= `DONE;
      end
    endcase
endmodule

<Q> What I see is the coding style where you have a registered and a combinatorial section. <S> It is a good coding style, but it only works if you are 100% consistent in your code: everything you clock (state, counter, shift) must be in the clocked section, and you must only use non-blocking "<=" assignments. <S> All combinatorial code must be fully (!) encoded, and you must only use blocking "=" assignments. 
always @ (posedge clk)
begin
  if (rst) begin
    state <= `RST_STATE;
    shift <= `RST_SHIFT;
    ctr   <= `RST_CTR;
  end else begin
    state <= next_state;
    shift <= next_shift;
    ctr   <= next_ctr;
  end
end

<S> I am not going to re-code your whole combinatorial section (sorry), but there you use:

always @ ( * ) // easiest!
begin
  case (state)
    `RST_STATE: begin
      next_shift = x;
      next_ctr   = 0;
      next_state = `LSH1;
    end
    `LSH1: begin
      if (shift[3]) begin
        next_shift = shift; // no change
        next_ctr   = ctr;   // no change
        next_state = `DONE;
      end else begin
        next_shift = {shift[2:0], 1'b0};
        next_ctr   = ctr + 1;
        next_state = `LSH2;
      end
    end
    etc...
end

As I said, it is a good coding style because it can handle some nasty cases better, but the other coding style (I just saw another answer appear) is much easier. <S> Disclaimer: code not compiled, there may be typos in there. <A> Splitting sequential and combinational parts is not efficient for your design. <S> In addition, the combinational always block has non-blocking assignments ( <= ), which may cause unexpected synthesis results. <S> A single sequential always block should solve the problem.

always @ (posedge clk)
begin
  if (rst) begin
    state <= `RST;
    shift <= 0;
    ctr   <= 0;
  end else begin
    case (state)
      `RST: begin
        shift <= x;
        ctr   <= 0;
        state <= `LSH1;
      end
      `LSH1: begin
        if (shift[3]) begin
          //shift <= //don't change;
          //ctr <= //don't change;
          state <= `DONE;
        end else begin
          shift <= {shift[2:0], 1'b0};
          ctr   <= ctr + 1;
          state <= `LSH2;
        end
      end
      `LSH2: begin
        if (shift[3]) begin
          //ctr <= //don't change
          //shift <= //don't change
          state <= `DONE;
        end else begin
          shift <= {shift[2:0], 1'b0};
          ctr   <= ctr + 1;
          state <= `LSH1;
        end
      end
      `DONE: begin
        //shift <= //don't change;
        //ctr <= //don't change;
        state <= `DONE;
      end
    endcase
  end
end

<S> Both ctr and shift are flip-flops here. 
<S> If there is no assignment in a case, they will preserve their values. <S> The assignment ctr <= ctr also does the same. <A> Since code-specific answers have already been given correctly by others, I just have a general answer to this question. <S> To avoid latches in an HDL design, there are two major points you have to keep in mind. <S> 1) Make sure that you cover all possible conditions in case and if constructs, even if you feel that a condition is not going to appear in your design or think it is irrelevant. <S> The synthesizer will always "think" that such a condition can appear in your design and infer a latch for it. <S> 2) Once you have done the above correctly, the next step is to make sure that all the output/internal signals of your design get SOME value in every execution cycle, whatever the conditions are. <S> Otherwise the synthesizer will again infer latches to hold the previous values of the uncovered signals.
Start by adding a next_shift and a next_ctr, and in the clocked section use always @ (posedge clk).
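For checking expected values against a simulation, the algorithm the FSM implements can be modeled behaviorally. This is a Python sketch (not Verilog) of the same shift-and-count operation; note it adds a guard for an all-zero input, a corner case in which the original FSM would bounce between LSH1 and LSH2 forever.

```python
# Behavioral model of the leftshift FSM: left-shift a 4-bit value until
# its MSB is 1, counting the shifts. Mirrors the module's shift/ctr.

def normalize(x, width=4):
    """Return (shifted value, shift count)."""
    shift, ctr = x & (2**width - 1), 0
    if shift == 0:
        return shift, ctr  # guard: the Verilog FSM would loop forever here
    while not (shift >> (width - 1)) & 1:  # while MSB is 0
        shift = (shift << 1) & (2**width - 1)
        ctr += 1
    return shift, ctr

print(normalize(0b0001))  # -> (0b1000, 3): three shifts to reach the MSB
print(normalize(0b1010))  # -> (0b1010, 0): MSB already set, no shifts
```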
Relation between a VCO's (Voltage Controlled Oscillator's) Frequency and Voltage I have a question: what is the relationship between the VCO's generated frequency and the applied voltage? Is there any formula we can use, given the applied voltage, to get the frequency generated against it? Thanks in advance. <Q> Some low-frequency VCOs are intended to have a very linear relationship, perhaps to be used as a voltage-to-frequency converter. <S> They often turn the input voltage into a current, to swing the voltage on a capacitor over a fixed range. <S> This sort of architecture departs from linearity at low frequency, when fixed leakage currents become a significant part of the controlled current, and at high frequency, when switching propagation times become a significant part of the output period. <S> Without the intention to have a linear relationship, VCOs may well depart significantly from nominal linearity. <S> There is a class of VCO that sacrifices frequency linearity for period linearity, using a constant current to swing a capacitor between voltages dependent on the input. <S> Here the propagation delay time is an additive constant on the period, leakage currents are with respect to a fixed current, and a t = mV + c straight line can be accurately defined between input voltage and output period. <S> High-frequency VCOs for RF work usually use varactors in LC oscillators. <S> These designs are not usually driven by thoughts of linearity, but by noise, spectral purity, and 'oh, if it's not too nonlinear, that would be handy!' <S> Although the capacitance is a very nonlinear function of the applied voltage, the fact that frequency varies as the inverse square root of capacitance means that even in wide-range VCOs, the change in FM sensitivity across the range can end up being quite small, often less than a factor of 2. 
<S> The best measurement-grade voltage-to-frequency converter I have come across used a 4046 PLL fed back with a frequency discriminator based on an HC123 monostable. <A> If you are using an integrated-circuit VCO then you need to look at the part's data sheet. <S> It may provide a formula for the control-voltage-to-frequency relationship. <S> More likely there will be a graph that shows the relationship in a normalized manner. <S> (Normalized because many IC VCOs can support a range of nominal frequencies dependent upon externally attached components.) <S> Note also that the control-voltage-to-frequency transfer function is likely not to be a linear relationship. <S> If you are designing your own VCO from discrete components then the control-voltage relationship is going to be fully dependent upon the circuit design. <S> In this case you can either try to apply basic electronic circuit analysis or, as most folks do these days, characterize the circuit behavior with a circuit simulator. <A> Wideband VCOs are very nonlinear, but over a limited range their sensitivity may be specified in MHz/V. <S> Narrowband VCOs can be very linear with careful design. <S> Very narrowband VCXOs are stable, crystal controlled. <S> VC-TCXOs are also temperature compensated. <S> VCOs are used in PLL and synthesizer ICs, for example. <S> Here are a few.
VCOs are made with various technologies for various purposes, so the answer is 'it depends'.
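The two tuning laws discussed above can be sketched side by side. Both models and all component values below are illustrative assumptions for comparison only; a real part's datasheet curve always takes precedence.

```python
import math

# Two idealized VCO tuning laws, for illustration only.

def linear_vco(vc, f0=1e6, kv=100e3):
    """Linear law f = f0 + Kv*Vc, typical of V-to-F style VCOs.
    f0 and Kv are made-up example values (1 MHz, 100 kHz/V)."""
    return f0 + kv * vc

def lc_varactor_vco(vc, L=10e-6, c0=100e-12, phi=0.7, gamma=0.5):
    """LC oscillator tuned by an abrupt-junction varactor:
    C(V) = C0 / (1 + V/phi)^gamma,  f = 1 / (2*pi*sqrt(L*C)).
    The inverse-square-root law compresses the capacitance nonlinearity,
    which is why FM sensitivity varies less than C(V) itself does."""
    c = c0 / (1 + vc / phi) ** gamma
    return 1 / (2 * math.pi * math.sqrt(L * c))

for v in (0.0, 2.0, 4.0):
    print(f"Vc={v:>3} V  linear: {linear_vco(v)/1e6:.3f} MHz  "
          f"LC: {lc_varactor_vco(v)/1e6:.3f} MHz")
```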
Relation between torque and speed in a DC motor (say a DC shunt motor) I have a doubt about the relation between torque and speed in a DC motor. Please correct me wherever I am wrong. As we know, if we increase the load the armature speed will decrease, so the back-emf also decreases, or say the armature current will increase, and we know that torque depends on armature current Ia (in a DC shunt motor, so the flux can be taken as constant): torque is proportional to flux × Ia. Since flux = constant, torque is proportional to Ia. So when the armature current increases, torque will increase, and thus speed should also increase. So my question is: if the rotor slows down because the armature is not producing enough torque for the increased load, then why does it not regain the same speed (at which it was rotating) after the armature produces sufficient torque (due to the increase in current)? <Q> At constant supply voltage, if you increase the load, that slows the motor down, mechanically. <S> The reduction in speed reduces the back emf, which leaves a higher voltage left over to push more current through the windings. <S> The higher current generates more torque, which allows the motor to come to equilibrium with a higher torque into the higher load. <S> The load will generally demand more torque at any given speed than before, so the motor comes to its new equilibrium at a lower speed, with a higher current and torque. <A> You have one too many unknowns, or not enough information, to answer this question. <S> You are experiencing some kind of rotational speed hysteresis with assumed same dynamic power resuming after dropping out. <S> E.g. a solar-powered motor. <S> DC motors have torque proportional to current (like PVs have current proportional to solar power input) and no-load RPM proportional to voltage. <S> Maximum power may occur at around 80% of the no-load speed, but maximum torque is available from 0 RPM and the available torque declines to zero at the no-load speed for a given voltage. 
<S> You have not defined the motor load (torque vs. RPM profile). <S> Hypothetically, if you have a load profile that is inversely proportional to speed, or has positive feedback, or friction is high initially and builds up momentum, then you get a hysteresis effect. <A> The torque is proportional to the current, and with no load torque the speed is proportional to the voltage. <S> In the simple model, there is a resistor, the armature resistance, and a voltage source, the back-emf, which is proportional to speed. <S> [Schematic created using CircuitLab] <S> In the ideal case when there is no load torque, the speed is proportional to the input voltage and the current is zero. <S> If a load torque is added, the current increases, dropping voltage over the armature resistance, so less voltage is across the back-emf, so the speed has to be lower.
If you increase the current flowing through the motor by increasing the supply voltage, the voltage across the winding resistance increases; that increases the torque, and the motor will speed up.
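The simple armature model from the answer (supply voltage = I*Ra + back-emf, torque = k*I) can be solved for steady state to show why the motor settles at a lower speed under load rather than regaining its old speed. The armature resistance and motor constant below are made-up example values.

```python
# Constant-flux DC motor model: V = I*Ra + k*w,  T = k*I.
# Ra and k are illustrative values, not any specific motor's.

def steady_state(v_supply, t_load, ra=1.0, k=0.1):
    """Return (current in A, speed in rad/s) at torque equilibrium."""
    i = t_load / k               # current needed to produce the load torque
    w = (v_supply - i * ra) / k  # what's left after I*Ra sets the back-emf
    return i, w

i0, w0 = steady_state(12.0, t_load=0.0)  # no load
i1, w1 = steady_state(12.0, t_load=0.2)  # load increased, same voltage
print(f"no load: I={i0:.1f} A, w={w0:.1f} rad/s")
print(f"loaded : I={i1:.1f} A, w={w1:.1f} rad/s (slower, more torque)")
```

The extra current needed to hold the load torque drops voltage across Ra permanently, so less voltage is available for back-emf: the lower speed *is* the new equilibrium.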
Do I need to connect "active low"? Probably a simple question: for this sensor, should I connect interrupt pin 5 to ground? The function of the pin is unclear to me. To add to the confusion, page 3 says the pin is active low. Does keeping it low enable the chip to work? Or does the pin have another function, and will keeping it low prevent the chip from working (like keeping a reset pin low)? <Q> The interrupt is an output, hence the "O" on the datasheet. <S> It is open-drain, active low, so connecting it to ground will not cause harm. <S> The purpose and function of the interrupt output is fully explained on page 9, and I don't see any purpose in repeating it here. <S> It's intended to connect to an interrupt input on your microcontroller, with a pullup resistor to Vdd (since it's the drain of an n-channel MOSFET). <A> However, if not, then you can tie it to 0 V. <S> The reason for the pin is that when you are doing a conversion, you have to wait. <S> Rather than waiting for the conversion to finish by polling the AVALID bit in the STATUS register, you can do something else or go to sleep. <S> The device will let you know when the conversion is done and you can access it. <S> Interrupts make the electronics more efficient because you can do more things than waiting around and polling. <A> The table "Terminal functions" on page 3 describes pin 5 as: INT | O | Interrupt — open drain (active low). <S> When this output is activated, the "switch" closes and connects this pin to GND <S> (i.e. when active, there's 0 V on it). <S> If you don't need it, leave it unconnected. <S> If you connect Vcc to it, you form a short circuit as soon as this output becomes active. <S> Connecting the pin to GND is no problem.
If you are not using the interrupt, then you can pull it high with a pullup resistor - this allows you to use the pin if you choose to in the future. It is an OUTPUT; open drain means it is the drain of a FET, so you can imagine it as one terminal of a switch, where the other terminal is connected to GND.
Big heatsink vs. small one with a fan There's quite a bit of math to be done when you need to calculate a size for your heatsink. As there are many options to choose from if you don't know the size yet, is it just better to pick the largest one (assuming you have unlimited space for it)? The other option would be to use a smaller one but with a fan for forced airflow, like this one. So is it more efficient if I put a small heatsink with a fan, or just use a large but not 100x bigger heatsink? <Q> It all comes down to thermal resistance. <S> A larger heat sink will have a lower thermal resistance than a small heat sink. <S> Air flow will lower the thermal resistance of a given heat sink. <S> What you need to do is calculate what thermal resistance you need by finding out the power dissipation and maximum junction temperature of the part you are trying to cool. <S> Generally you want some margin below the maximum junction temperature. <S> Once you know the thermal resistance you need, you can select a heat sink and air flow that will meet it. <A> Both approaches are valid and are still being used. <S> The fan is becoming more common. <S> The fan means less reliability and more power consumption. <S> No fan means larger size, and often higher cost, even though DC brushless fans are so cheap. <A> If you have the space, and can find the right geometry of heat-sink that dissipates the amount of heat you need rid of at an acceptable cost, a plain heat-sink with no fan is always preferable. <S> A fan consumes power, so it reduces the efficiency of your system. <S> Fans are also noisy. <S> As such, in general, fans should only be used if surface space is limited, the heat-sink would need to be extremely large and expensive, or the heat-sink cannot be located somewhere that has free air-flow. <S> If that is the case, it is prudent to arrange your heat-sink to take advantage of any existing forced air-flow.
Moreover, fans can and will eventually fail, or be blocked, at which point over-heating can kill whatever you are trying to cool if you do not have temperature sensing and some form of thermal shut-down circuitry. However, on occasion, a fan may be required anyway for other reasons.
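The selection procedure from the first answer - work out the thermal resistance you need, then pick a sink (and airflow) that meets it - can be written out directly. All numbers below (dissipation, temperatures, junction-to-case and case-to-sink resistances, margin) are illustrative assumptions.

```python
# Heatsink selection sketch: the chain Tj = Ta + P*(Rjc + Rcs + Rsa)
# solved for the largest acceptable sink-to-ambient resistance.
# All values are illustrative assumptions.

def max_sink_resistance(p_W, tj_max, t_ambient, r_jc, r_cs, margin=20.0):
    """Largest Rth(sink-ambient), in C/W, that keeps the junction
    `margin` degrees below Tj(max) at dissipation p_W."""
    return (tj_max - margin - t_ambient) / p_W - r_jc - r_cs

r_sa = max_sink_resistance(p_W=20, tj_max=150, t_ambient=40,
                           r_jc=1.5, r_cs=0.5)
print(f"need Rth(sink-ambient) <= {r_sa:.2f} C/W")
# A large passive sink or a small sink with forced air can both meet the
# target: airflow simply lowers a given sink's effective Rth(sink-ambient).
```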
How can a used alkaline cell rate at 1.9 volts? Today I replaced the battery of four alkaline AAA cells in an LED bicycle lamp, which was connected in two parallel series of two cells**. The four old cells were installed new from the same pack and (unsurprisingly) look identical. The brand is Kodak Xtralife, and they are labelled as alkaline AAA LR03 1.5V. Their expiry date is marked as 09-2020. I checked the voltage of the old cells: three measured at close to 1.32 V; but the fourth one was over 1.9 (sic) V. I tested the cells repeatedly, using three different makes of electronic multimeter, along with a new alkaline cell for comparison, which measured at 1.560, 1.570, and 1.576 volts on the three meters—absolutely normal figures as I would have expected from numerous such measurements in the past. The three meters were self-consistent in repeated measurements, with the low-reading meter reporting about 11 mV below the middle-reading meter, which in turn read below the high-reading meter, although in the latter case the differences between the two meters changed according to whether the measured voltage was 1.3 or 1.6 volts: respectively 2 and 6 millivolts. While the three meters all reported initially above 1.9 volts for the fourth cell, I am finding that, with repeated measurements, the voltage is dropping on this cell, and it is now 1.879, 1.893, and 1.898 volts according to them. New alkaline cells always test at about 1.57 V, the voltage declining with use. So how can a used cell test at 1.9 volts? **This turned out to be wrong: the cells were actually in a single series of four. <Q> Battery tests are generally done under a light load. <S> If your meters are 10MOhm and the measurement affects the battery, it will likely drop much more notably with a 1kOhm resistor attached (which is only 1.9mA assuming 1.9V), let alone 100 Ohm. <S> However, it is peculiar, regardless for an alkaline 1.5V-rated battery to become 1.9V. 
<S> It is possible to get them to that voltage, so long as they are not 100% empty, for a short while (range of seconds to minutes, maybe a bit longer) without physical damage. <S> But it takes care and attention, which is unlikely to have been applied in this case in an inadvertent manner. <S> There may have been something strange going on in your device, which may have done something silly to the weakest battery in the whole chain. <S> But if it stays above 1.6V even with 100 Ohm loading, I'm betting it is in fact not an actual Alkaline cell, as Plasma hinted at in the comments. <A> Alkaline batteries are prone to leaking potassium hydroxide. <S> It can be assumed that as a result of leakage, the contact of the alkaline cell is contaminated. <S> During measurement, a galvanic pair is created between the contact of the alkaline cell and the multimeter probe. <S> Potassium hydroxide acts as an electrolyte. <S> The measured value consists of the voltage of the cell and the voltage between the contacts. <S> If there are noticeable leaks, be careful. <S> A caustic agent can cause respiratory, eye and skin irritation. <A> This is not an original answer, but rather an assembly of the answer by @Asmyldof, the comments by PlasmaHH and Chris Stratton, and a correction of a wrong assumption in my question. <S> The cells were actually in a series of four. <S> Probably what happened was that one cell was installed backwards. <S> There was enough voltage in the remaining cells to overcome the opposite voltage in the wrongly installed cell and still operate the lamp. <S> By chance, the operating conditions boosted the odd cell to 1.9 V, a temporary over-voltage that soon declined even on a light load. <S> Thank you to everyone for solving the mystery.
If you want to use voltage as an indicator for battery condition you need to make sure you have a small current loading them, or the measurement has much less meaning than you think.
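The loading effect described in the first answer is just a voltage divider between the cell's internal resistance and whatever it drives. The open-circuit voltage and internal resistance below are made-up values for a tired cell, chosen only to show how little a 10 MOhm meter loads it compared with a deliberate test resistor.

```python
# Why an unloaded meter reading can mislead: model the cell as an ideal
# source in series with an internal resistance. Values are illustrative.

def terminal_voltage(v_oc, r_internal, r_load):
    """Voltage at the cell terminals with a resistive load attached."""
    return v_oc * r_load / (r_internal + r_load)

v_oc, r_int = 1.9, 30.0  # assumed: high open-circuit V, high internal R

print(f"10 MOhm meter: {terminal_voltage(v_oc, r_int, 10e6):.3f} V")
print(f"1 kOhm load  : {terminal_voltage(v_oc, r_int, 1e3):.3f} V")
print(f"100 Ohm load : {terminal_voltage(v_oc, r_int, 100.0):.3f} V")
```

The meter barely moves the reading, while a 100 Ohm test load exposes the weak cell immediately, which is why battery testers measure under load.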
Electrostatic and capacitors... more about the capacitor though Done some reading, got me thinking; done some researching, got me doubting... But my question is: is there a capacitor that doesn't degrade and doesn't lose capacitance over time? From all of my research it seems the conclusion is no, but I think most of those answers were coming from the mindset of modern capacitors being used for modern technology... I'm wondering about capacitors like a Leyden jar or some such: do they degrade and lose capacitance over time due to 'age'? Glass can hold stored energy as well, last I remember, and glass takes a very long time to degrade, with no corrosion/oxidization... I'm very fascinated by electrostatic properties. <Q> Everything degrades over time; the only question is how quickly. <S> One mechanism is evaporation of wet dielectrics, which causes loss of capacitance. <S> Parallel-plate air-dielectric caps, like those in old tuners, will last a long time. <S> Everything is dependent on environmental conditions and how the caps are used in the application. <S> High ripple current, high temperature and high voltage stress can lead to shorter life. <A> If you are looking for long-term stability, consider mica. <A> Properties of physics: <S> All insulators are dielectrics, and all dielectrics form capacitors when placed between conductors. <S> All batteries have much higher Farads than ultracaps, but that comes at the price of much faster aging. <S> Murphy's Law: <S> Any contamination or impurities from solids, liquids or gases that can interfere with these insulation properties will, no matter how few parts per billion there are. <S> If the contaminants can move under a high electric field, they can be accelerated to great speeds and collide with an electrode, which will detonate those particles and cause ionization. <S> This can lead to breakdown or corona. <S> A Leyden jar, named after the Dutch city, used water in glass with a rod and a foil electrode. 
<S> The water has a dielectric constant of ~80 compared to air. <S> Thanks to Wiki. <S> If you want to get up to speed, read the archived works by Coulomb and Faraday. <S> For interesting topics, see the 1980 Air Force Weapons Lab work on the effects of irradiation creating plasma in dielectrics. <S> Fast forward and discover how crosslink polymerization is used by many industries in dielectrics like PVC wire insulation and automotive tires to improve strength by detonating impurities using up to ~1 GV in SF6. <S> It also makes tires pretty good insulators. <S> To understand the limits of high voltage in insulators, one needs to understand the precursors, called partial discharge (PD), which occur before transformers blow up. <S> So there is lots of research, many standard tests, PhD thesis topics and instruments on how to detect and localize these events, from UHF antennae and optical to X-ray detectors, owing to the rise times of the negative-resistance ionization of dielectrics. <S> The repetitive nature of PD inside a dielectric is like corona in air, but is not visible since it is contained; it can lead to internal corona, which can be a severe voltage breakdown condition depending on the stored energy in the dielectric, or it can be benign slow degradation.
Electrolytic caps tend to degrade faster than ceramics and film caps.
What does the rectifier do in a crystal radio? I have been reading up on semiconductors and all of the references that I have found say that the first practical application of the semiconductor diode was in crystal radios, and that semiconductor-based rectifiers quickly gave way to tube-based amplifiers. So I am trying to understand why the rectifier is necessary at all. An excellent explanation of how a crystal radio works (and why it is now hard to get the components to build them) can be found here . For those who don't want to click, here is the circuit diagram: So the coil and capacitor form a resonating circuit. Frequencies below a threshold go through the coil to ground, and those above a threshold go through the capacitor to ground, but those at the resonating frequency are stuck and have to go through the diode to the headphones. Every description of this circuit I have read says that the diode somehow demodulates the signal, and I just don't understand how it can do that. There is, say, an 88 kHz carrier frequency which is AM modulated with a 300 Hz-3 kHz signal of the human voice. How does the diode, by chopping off the parts of the signal below zero, do that? <Q> It's called an envelope detector. <S> The original signal had an average value of 0. <S> If you fed this through a low-pass filter (aka a capacitor), the output signal would be 0. <S> With the diode in place, the signal can never go negative, and now if you average out your signal using a low-pass filter, you get a slowly varying signal (relative to the base frequency) that no longer has an average of 0. <S> This signal is now useful for the speaker. <S> https://en.wikipedia.org/wiki/Envelope_detector <A> The diode demodulates the AM radio signal. <S> To demodulate (recover the audio signal from) an AM radio signal, all that is needed is to retrieve the amplitude of the signal: <S> Source: <S> this article <S> That's what the diode does.
<S> It blocks the negative part of the wave but lets the positive part pass. <S> This, together with the capacitor, recovers the audio signal. <S> Your example does not show a separate resistor and capacitor, but they are effectively present. <S> The headphones can only respond to audio signals, so they perform the same function (a low-pass filter) without needing those components. <A> Here's a physical description that might help intuitively - Hum a 1kHz tone into a microphone, and broadcast it on a 100kHz AM carrier. <S> At your receiver, ideally you would like the earpiece diaphragm to alternately displace outwards and then displace inwards every millisecond, and for decent sound quality maybe you'll settle for having it alternately displace outwards and then rebound to equilibrium every millisecond. <S> Without the diode, your earpiece diaphragm will attempt to vibrate at 100kHz strongly for half a millisecond, and then more weakly or not at all for the next half millisecond. <S> Even if the earpiece responds slightly at that frequency, your ear will not and you will hear nothing. <S> With the diode, for half a millisecond your earpiece diaphragm will be nudged outwards every 10 microseconds (5 microseconds at a time). <S> Even without any extra filtering capacitors, and thus with all those 5 microsecond gaps in the current, 500 straight microseconds of having the diaphragm continually nudged in the same direction at such close intervals should accomplish some displacement. <S> That is, the mechanical characteristics of your earpiece will probably accomplish some of the actual demodulation when operating on a rectified signal. <S> When operating on an unrectified signal, however, those same mechanical characteristics will demodulate it to something close to silence. <A> Without the diode the average current in the headphones (1) would be 0, so there would be nothing to hear.
<S> The diode acts as a non-linear component (2) that creates a non-zero average current in the headphones. <S> It happens that this current is proportional to the amplitude of the wave received by the antenna. <S> This corresponds exactly (3) to the audio signal. <S> (1) average over, say, 0.1ms (what the ear can perceive) <S> (2) more precisely: non-linear and not odd (that is to say, even, or with a certain "even" component) <S> (3) in amplitude modulation (AM)
The diode prevents the signal from going negative, so its average follows the audio envelope.
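As an illustrative sketch of the envelope-detector idea above (not from the original answers; the sample rate and the 100 kHz carrier / 1 kHz tone match the intuitive example and are chosen purely for demonstration), a short NumPy simulation shows that averaging the raw AM signal yields nothing, while averaging the rectified signal recovers the envelope:

```python
import numpy as np

fs = 1_000_000           # sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)
fc, fm = 100_000, 1_000  # carrier and audio tone, as in the 100 kHz example

envelope = 1 + 0.5 * np.sin(2 * np.pi * fm * t)  # AM envelope (the audio)
am = envelope * np.sin(2 * np.pi * fc * t)       # modulated carrier

# Without the diode: the signal averages to ~0, so a low-pass filter outputs 0.
# With the diode: half-wave rectify, then low-pass via a moving average.
rectified = np.maximum(am, 0)
win = int(fs / fc) * 5                           # average over ~5 carrier cycles
recovered = np.convolve(rectified, np.ones(win) / win, mode="same")

print(abs(am.mean()))  # ~0: no audible signal without the diode
```

Half-wave rectification scales the recovered waveform by roughly 1/pi, but it remains proportional to the envelope, which is why the diode plus the headphones' low-pass behavior is enough to demodulate.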
Altium silk screen carried over into copper layer My PCB vendor informed me today that a silk screen feature that is carried over into the copper layers lands on traces and causes shorts (yellow box on the right). The yellow box is a connector and I got the Altium library from Samtec directly. I just wonder if it is a real problem, and if it is, how do I remove the silk screen? <Q> He is telling you that the yellow line ends up being a trace. <S> The connector model's silk-screen geometry must have been mistakenly placed on the copper layer instead of the silk-screen layer. <S> Edit the model geometry. <S> Note, whatever U5 is, it also has a problem, though you got lucky in this case in that it does not interfere with any of the surrounding traces. <A> The quickest (not necessarily the best) way to fix this problem is by modifying the actual layout. <S> To do this, double click on the component causing the problem, then uncheck the "Lock Primitives" box. <S> You can now 1) select the problem line or box, 2) <S> right click and select properties, <S> then 3) change the element to another layer. <S> Or you could just delete it. <A> If you drew the board outline on a particular mechanical layer and the Samtec library happened to use that same mechanical layer for the (non-overlay) outline of the connector you could have a problem <S> iff <S> you selected "Add to All Plots" for that layer in order to have the outline shown on each layer. <S> Just because it's shown on the overlay layer does not mean it is not duplicated on another layer (and it may show up in Altium as overlay color or mechanical layer color depending on which layer is on top at the moment). <S> In such a case you can move your outline to another layer and regenerate the Gerbers with the "Add to All Plots" tick removed from the previous layer and turned on for the new layer. <S> Always inspect your Gerber files in Camtastic or some other Gerber viewing program; you can save a lot of time and irritation.
It is a problem, since the process does not know when creating that mask that it is silkscreen and not copper. Your problem might be in your output job Gerber configuration and your (and Samtec's) selection of mechanical layer.
Adding a "power output button" to a cheap bench power supply I have an old and cheap Atten TRP3005T (take a look at the Atten site). I have seen more expensive devices that have a specific button that always disables the output every time you power off the device. This is a safety feature to avoid disastrous errors when you attach a device to the power supply while the power supply is off and has been left at the wrong settings (happened to me, but without consequences: 15 volts on an Arduino when I wanted to use only 9 volts to test it). Is there a way to add the same feature to my power supply? I think that I need to build a circuit with a dual MOSFET, but I'm not skilled enough to design it by myself... Also I'm not sure whether this option draws power from the power supply and could cause strange side effects on the output, like a wrong voltage value. <Q> (Or an illuminated momentary push button I guess.) <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Once it's on it would stay on till you turn off the supply, though, but you can add a second, normally closed button to disconnect. <A> If the lab supply has a large storage cap that floats when AC is switched off, you want an LED charge indicator which also slowly discharges the cap. <S> Choose 40mA for 30V with a couple of UB LEDs (white and red) using two current-limiting Rs = 1.4k. <S> Or better if you actually want an active low to ground to enable a high-side 30V smart switch. <S> (This one may not be reliable << 5V although it has a charge pump for Vgs.) <A> The simplest solution that I can think of, using switched AC inside the PSU, with a relay and lamp mounted inside. <S> simulate this circuit – <S> Schematic created using CircuitLab
If you can figure out the internals of your power supply, adding a relay and a momentary push-button is a fairly simple task, perhaps with an added LED for feedback.
Good uses for 1:1 probe We all know why using a properly compensated 10:1 probe is a must when viewing MHz-speed signals on a scope with a 1 MOhm input impedance. Now who can supply a good use for a 1:1 probe? These probes have not found much use in my lab. The only thing I can think of is that the 1:1 probes might be useful for making measurements of power supply ripple, switching artifacts, etc. I, however, question whether the 1:1 probe is readily capable of a connection with low-enough ground transfer impedance to really see what's going on in, for example, a switching power supply rail. Howard Johnson ( "Healthy Power" ) and Jim Williams ( "Minimizing Switching Regulator Residue in Linear Regulator Outputs" , page 11) both discuss a similar technique but use plain coax instead of a 1:1 probe. In Howard Johnson's example, the coax shield is then soldered to the board with bus wire to achieve the lowest possible ground transfer impedance. Eliminating inductance in the ground wire is key to probing the fast switching artifacts. I'm not sure how well a 1:1 probe would do in this case, but it can probably be made to work okay. Can anyone recommend any other uses for the 1:1 probe?? <Q> Noise in oscilloscope front ends is quite high, maybe 1mVp-p. <S> Using the 1:1 probe lowers the input-referred noise floor by an order of magnitude. <S> Still pretty crappy, but opens a few doors. <A> Convenience. <S> A 1:1 probe (or the x1 setting on a switchable x10 probe) will probably have slightly lower capacitance than a 50ohm coax of the same length, and also handy clips on signal and ground. <S> It's therefore a convenient tool for small signals where noise makes a 10:1 probe unusable, and for low frequencies where the relatively long ground lead doesn't cause a problem. <S> For more critical monitoring situations, you might use the scope's 50 ohm input directly, or an active probe, or a DIY probe, or a plain piece of coax. <S> I use fixed x10 probes. 
<S> No switch means one less thing to go wrong, and I find switchable probes' switches are often in the wrong position, and it's difficult to spot when they are. <S> When I need x1, I use a short bit of coax. <A> Coax vs 1:1 probe. <S> I've used both. <S> It depends on the source impedance to a large degree. <S> The probe does a better job matching the 'scope input impedance (R//C) over the entire frequency range, and this can matter with higher source impedances. <S> (Where the capacitive loading of a long piece of coax may degrade the HF response.) <A> It has very limited use: signals < 20 MHz into a 1M load with ~50pF or more, with signal levels from roughly 1 to 50mV. <S> If the signal is larger, a 10:1 probe is better; if smaller, then a FET-buffered diff probe is best, or 50 Ohm terminated if possible. <S> You can always get more bandwidth by removing the clips and ground leads and using twin prongs. <S> You can use them as EMI sniffer probes into a spectrum analyzer using a short open wire or, better, a ground loop for RF. <S> Many scopes have a 20MHz or similar BW filter. <S> This makes the 1:1 probe more useful, because it is incapable of accurately capturing risetimes extending past this band without ringing. <S> The probe is simply not balanced for impedance, due to the input RC impedance and probe inductance. <A> Can anyone recommend any other uses for the 1:1 probe?? <S> With a 5MHz analog scope you got for free out of a dumpster dive, probe frequency response becomes a wee bit less important ;) <S> For a beginner, it is a lot better than no scope! <A> Unlike a random piece of 50/75/93 Ohm coax cable - which at first sight seems to be a perfect replacement for a 1:1 probe - <S> a 1:1 or switchable probe still gets the benefit of using an intentionally lossy coax (which 1:10 and 1:100 probes use as well), so reflections are dampened more even if the system is sorely mismatched.
<S> Mind that not every scope (or scope plugin) goes down to 1mV/div - and that 1mV/div with a 1:10 probe already means you need 80mVpp to fill the screen, 400mVpp at 5mV/div (the minimum of e.g. the Tek 7A18/7A26), and 2-3Vpp (!!) at 50mV/div (the minimum of many really old scopes or their general-purpose plugins - think 545B/CA. <S> Not typically 4Vpp since that kind of scope is usually 4 or 6 div high, not 8). <S> Also, DC accuracy will likely be better (unless the lossy cable is really in the tens of kiloohms), which can matter if the readout function of the scope is pressed into service as a DVM. <A> A 1:1 probe minimizes oscilloscope noise, but comes at the cost of lower bandwidth. <S> 1:1 probes are very popular for ripple measurements and power measurements. <S> Basically, a 10:1 probe means you get less probe loading (capacitance) but get 10X the scope front-end noise. <S> I go into some more detail on that here: http://www.electronicdesign.com/test-measurement/how-pick-right-oscilloscope-probe
So in the end, the 1:1 probe serves well as a connection cable to any source that is relatively low impedance and low level - like audio signals, output from passive (eg inductive or strain gauge) sensors.
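The input-referred noise point above can be made concrete with a tiny sketch (the ~1 mVpp front-end noise figure is the rough value quoted in the answers; treat it as illustrative, not a specification):

```python
# A 10:1 probe divides the signal by 10 before the scope front end,
# so the scope's own noise is effectively 10x larger relative to the signal.
scope_noise_vpp = 1e-3  # ~1 mVpp front-end noise (illustrative figure)

def input_referred_noise(attenuation: float) -> float:
    """Scope front-end noise referred back to the probe tip, in Vpp."""
    return scope_noise_vpp * attenuation

print(input_referred_noise(1))   # 1:1 probe: ~1 mVpp referred to input
print(input_referred_noise(10))  # 10:1 probe: ~10 mVpp referred to input
```

This is why the 1:1 probe opens a few doors for small signals: the same front-end noise costs you ten times less signal-to-noise margin than with a 10:1 probe.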
Can adding a copper plane to the 2.4GHz antenna amplify signal strength? Hello, I bought a 2.4GHz controlled RC car from a Chinese vendor. Under the car chassis, there is a 30mm x 20mm copper plane attached. Surprisingly, the customer service team replied that "it's antenna. 2.4GHz antenna amplification device." I couldn't believe what I heard, so I'm posting this question. Can that copper plane act as a 2.4GHz antenna? Also, can this plane "really" amplify something? <Q> Can that copper plane act as a 2.4GHz antenna? <S> You bet. <S> It's called a patch antenna. <S> Also, can this plane "really" amplify something? <S> Technically, no, but in common usage yes. <S> If you start out with an antenna which is just a random length of wire, the odds are very good that this will make a crappy antenna. <S> Put a properly-designed patch on the end of the wire, and its output will increase. <S> Did the patch "really" amplify the signal? <S> If you don't understand how the antenna works, the answer will seem to be yes, but if you have more knowledge you'll say no, it's just a better antenna. <A> Yes, a 1/4 wave antenna needs a ground plane to simulate a 1/2 wave dipole. <S> A small one suffers some loss. <S> None is even worse. <A> The patch will affect the way the antenna behaves. <S> Whether the wire plus copper patch or wire without copper patch radiates more power depends on the length of the wire, so how each configuration is tuned. <S> The patch does allow the antenna to be brought into tune with a shorter physical length of wire, which may be important in a small model. <A> That big piece of foil acts as a top-hat capacitively loaded antenna. <A> There's two things that foil is likely doing. <S> 1) <S> Increasing the directionality of the signal so that you're not sending as much energy into the ground. <S> 2) <S> Increasing the efficiency of the antenna so that more of the energy gets converted to radio waves rather than heat.
<S> Patch Antenna Radiation Pattern <S> Dipole Antenna Radiation Pattern <S> If the dipole/wire isn't facing the correct direction, there could be very little signal between the controller and the car. <S> If the patch is designed to limit the amount of directionality introduced, it may provide a pretty ideal radiation pattern for the application described given that the controller will most likely be somewhere above the car's horizontal plane.
It's more accurate to say that a properly-designed antenna will be more efficient, and produce a higher output signal, than a poorly-designed one.
Can a 220V relay be used to switch 110V appliances? Can a 220V relay be used to switch 110V power? If yes, then additionally to this question, does the amperage (A) rating on the relay change when the voltage (V) is 110V instead of 220V? At the risk of answering my own question, I think any voltage lower than or equal to 220V could be switched with a 220V rated relay. I hope to be right on this, but I'm not sure about the amperage rating. My guess is that instead of 220V/5A that relay could also be suited for 110V/10A, but I'm not sure. <Q> The 220 volt relay could be used to switch 110 volts. <S> However, the contact current rating will not increase. <S> Contacts rated for 5 amperes will not be safe switching more than that, certainly not 10 amperes. <S> The contacts carry the current when closed, so the voltage is not the issue, but the size and material of the contacts is important. <S> Therefore do not exceed the contact current rating, no matter what the value of the voltage being switched. <A> A 220 V(AC), 5 A relay means you can switch up to 220 V safely across the relay, and 5A <S> is the maximum current allowed to flow through it. <A> Exceeding the voltage rating may cause the insulation to break down, while exceeding the current (for a long enough time) will cause the conductors to overheat. <S> If this is professional work, you will likely breach the code by exceeding the allowed amp value, and you will be held liable if something bad (like a fire) happens. <S> If this is a lab setting, test before using; it may work. <S> The relay rating is for max voltage and amps; using lower voltages and amps is OK. <A> The heating effect of an electrical current is \$I^2Rt\$. <S> No mention of V. <S> Amps are amps, regardless of what voltage is pushing them. <S> So no, it's the same 5A rating.
The contact current rating remains the same even when switching 110 V: contact heating depends on the current through the contacts, not on the switched voltage, so 5 A is still the limit.
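A minimal sketch of the \$I^2Rt\$ argument above (the contact resistance value is purely hypothetical, chosen only to illustrate the scaling):

```python
# Contact heating depends on current, not on the switched voltage: P = I^2 * R.
# The closed-contact resistance below is a made-up illustrative value.
r_contact = 0.01  # ohms, hypothetical

def contact_power(current_a: float) -> float:
    """Power dissipated in the closed contacts, in watts."""
    return current_a ** 2 * r_contact

print(contact_power(5))   # 5 A at 220 V or at 110 V: same 0.25 W in the contacts
print(contact_power(10))  # doubling the current quadruples the heating: 1.0 W
```

The switched voltage never appears in the dissipation formula, which is why halving the voltage does not buy you double the current rating.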
Electric field measured in the far field of an antenna I came across a question: the electric field measured in the far field of an antenna at a distance r of 50 m is 1 V/m. Find the electric field at a distance of 500 m from the antenna. The solution starts with: E is proportional to 1/r. I know E = V/r. Is it assumed that the potential is the same at 500 m, to say that E is proportional to 1/r!? I thought E is proportional to 1/r^2, as per Coulomb's formula, if the charges are assumed to be constant. Where did I go wrong? <Q> Since you are in far field conditions, the electric field generated by an antenna decreases with 1/r. <S> Do not confuse it with the electric field generated by a point electric charge. <A> Far field = 1/r. <S> Does it suggest that the potential is equal everywhere in the far field? <S> No it does not. <S> The presence of an electric field implies that there will be a potential difference; in other words, no potential difference or a constant potential would imply no electric field. <S> You are wrong in assuming \$E = \frac{V}{r}\$ for any r. <S> Instead, $$E = -\frac{dV}{dr},$$ <S> thus \$E \propto \frac{1}{r}\$ would imply a logarithmic potential field (i.e. logarithmic dependence on r) rather than a constant field as you say. <S> If you want to know why \$E \propto \frac{1}{r}\$, you can imagine a sphere of radius r centered at the source. <S> Then, power leaving the sphere is: $$P_{rad} = (4\pi r^2)P_d,$$ where \$P_d\$ is the power density of the field. <S> To have finite power radiated away from the source even for large r, \$P_d \propto \frac{1}{r^2}\$. <S> As power density is related to the square of the electric field magnitude, we thus get an inverse dependence on distance for the electric field.
<S> In other words, $$P_d \propto E^2 \propto \frac{1}{r^2} \implies E \propto \frac{1}{r}.$$ <S> In general, power density will be of the form: $$P_d = \frac{C_1}{r^2} + \frac{C_2}{r^3} + \frac{C_3}{r^4}+...$$ For large r we just get the radiated term (the first term), as the others would be negligible; but for small r the radiating term is small and the field is mostly due to the rest of the terms, which constitute the near field. <A> From basic geometry and conservation of energy, you should be able to see that the power has to go down with the square of the distance from the source. <S> Think of the same power being spread out over the larger surface area of a larger sphere. <S> That surface area goes with the square of the distance (sphere radius). <S> Everything else follows from there. <S> Since the E field is proportional to the square root of the power of a traveling EM wave, the E field (measured in volts/meter in your example) must go down with the distance, not the distance squared. <A> Another way to consider this: The power density existing in the far field of an e-m radiator (a transmit antenna) consists of an electric and a magnetic field, which are always related to each other by the ~ 377Ω impedance of free space. <S> Both of those fields decay at a 1/r rate, which results in a 1/r² decay rate in radiated power.
Your problem is due to confusing power and voltage (E field strength).
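Numerically, the 1/r scaling above gives the answer to the original question directly (a small sketch; the ~377 Ω free-space impedance is used only to show the accompanying 1/r² power relationship):

```python
# Far-field E scales as 1/r; power density scales as 1/r^2.
e_ref, r_ref = 1.0, 50.0  # 1 V/m measured at 50 m, from the question

def e_field(r_m: float) -> float:
    """Far-field E in V/m at distance r_m, scaled from the reference point."""
    return e_ref * r_ref / r_m

def power_density(r_m: float, z0: float = 377.0) -> float:
    """Power density in W/m^2, using the ~377-ohm impedance of free space."""
    return e_field(r_m) ** 2 / z0

print(e_field(500))                            # 0.1 V/m at 10x the distance
print(power_density(500) / power_density(50))  # power density down by 1/100
```

Ten times the distance gives one tenth of the field strength but one hundredth of the power density, which is exactly the distinction the question was missing.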
I'm having trouble understanding SPST terminology Before someone LMGTFYs me, let me get this straight: I HAVE DUCKDUCKGO'ed this already, but I still cannot understand what switch 'pole' and 'throw' mean. I really just need a clearer explanation of switch terminology please! <Q> Think of "Poles" as a piece of wire you want to connect somewhere. <S> And "Throws" as places where you can connect that wire to. <S> N Poles = N wires; M Throws = M places where you can connect each of the N wires to. <S> SPST <S> (Note that on a SPST there isn't necessarily a side which is the Pole or the Throw.) <S> The following might help you understand the terminology better: <S> To add a "Throw" you just add an extra connection on the right side (therefore at least one more pin, becoming SPDT). <S> To add a "Pole" you duplicate the entire thing, therefore at least 2 pins, becoming DPST. <S> SPDT <S> Following is a SPDT. <S> Which means you have a piece of wire you can connect to two places. <S> To add a "Throw", you just need to add a new place to connect it to (one more pin on the right side, giving a SP3T). <S> To add a "Pole", you would have to add a piece of wire and two places to connect it to, therefore at least 3 pins, duplicating the entire thing. <S> (Assuming you are maintaining it as Double Throw.) Images are from Sparkfun's explanation. <A> According to SPST, SPDT, DPST, and DPDT Explained by Littelfuse: <S> SP: <S> Single Pole, one circuit controlled by the switch. <S> DP: <S> Double Pole, two independent circuits controlled by the switch which are mechanically linked. <S> Note: "Pole" should not be confused with "Terminal". <S> The DPST switch, for example, has four terminals; however, it is a Double Pole (DP) and not a four pole (4P) switch. <S> ST: <S> Single Throw, closes a circuit at only one position. <S> The center position is off. <S> DT: <S> Double Throw, closes a circuit in the up or down position (On-On).
<S> A Double Throw switch can also have a center position, such as On-Off-On. <S> When comparing SPST vs SPDT - both have two states. <S> The same naming convention is used for relays, see the diagram below: NO means "normally open", NC means "normally closed". ( Original image source and example web page using that image - see the "i" icon near the "Contacts configuration" option list.) <A> The number of poles is the number of wires that can be switched simultaneously. <S> In other words, a double-pole switch is basically just two switches sitting next to each other, with a single button or lever (or whatever) to activate both simultaneously. <S> A single throw switch means the output is simply either connected or not connected to the input. <S> A double throw switch means there are two inputs and one output (or vice versa). <S> In one switch position, the output is connected to one input. <S> In the other switch position, the output is connected to the other input. <S> To give a simple example of how this is used, consider a typical "two way" switch for a light. <S> The schematic looks like this: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> So, if both switches are in the "up" position, we get power at the output. <S> Likewise, if both switches are in the "down" position, power is transmitted. <S> But, if one is down and the other is up, there's no connection through, so no power at the output. <S> If the circuit is on, switching either switch will turn it off. <S> If it's off, changing either switch will turn it on.
In the case of an SPST the positions are On/Off, but in the case of an SPDT they can be On1/On2, so you can use an SPDT as an SPST by leaving the third pin unconnected.
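The naming convention described above is mechanical enough to express as a tiny sketch (the helper below is purely illustrative, not part of any answer):

```python
# "Poles" = independent circuits switched together; "Throws" = positions
# each pole can connect to. A DPDT is therefore 2 poles x 2 throws.
def describe(poles: int, throws: int) -> str:
    """Build the conventional switch designation, e.g. SPST, DPDT, SP3T."""
    p = {1: "SP", 2: "DP"}.get(poles, f"{poles}P")
    t = {1: "ST", 2: "DT"}.get(throws, f"{throws}T")
    return p + t

print(describe(1, 1))  # SPST
print(describe(2, 2))  # DPDT
print(describe(1, 3))  # SP3T
```

Note this only covers the designation, not terminal counts: a DPST has four terminals but only two poles, as the Littelfuse explanation points out.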
Does low power in the mains cause damage to household supplies? I live in Cambodia and power outages here are not uncommon. One of the things I have noticed today is that devices like the kettle or induction heater switch off after working for 10-20 seconds. I am assuming this is due to low current (or maybe low voltage) in the power line. Does low current damage equipment? If yes, which household supplies should I switch off to avoid permanent damage (fridge, TV, PC, etc.), or should I just keep switching the kettle on and make some coffee? Does low power in the mains cause damage to household supplies? If it does, should I disconnect any of them? <Q> The low voltage may very likely cause the equipment's internal power supply to try to draw more current, which could result in overheating and possible equipment damage. <S> Anything that contains electronics should not be supplied with voltages lower than the rated input voltage stated on the nameplate. <S> Resistive equipment such as resistance heaters will not be an issue. <S> It is not nonsense. <A> Low voltage can cause a lot of different kinds of electrical equipment to fail. <S> We had problems with low voltage in a house I lived in long ago. <S> We had to replace the motor for the water pump several times, and the heating elements in the oven and stove burned out. <S> That last sounds stupid. <S> How does something that is supposed to get hot burn out because the voltage is too low? <S> It wasn't the elements themselves, but rather the connectors. <S> The heating elements draw less current when hot, but due to the low voltage didn't get hot fast enough. <S> The connectors were therefore carrying higher current for longer than intended, and they got hot and burned out. <A> Devices such as AC LEDs have a wide input voltage rating, but if they drop below this then more input current must be drawn to regulate the output. <S> This extra current can raise temperature and damage inexpensive designs.
<S> Any devices that need a certain voltage to function properly will cut out safely. <S> Damage to self-heating designs, like AC motors stalling at 60% line voltage, might be possible in a severe brownout when started in this condition, but you would likely know it. <S> The biggest risk, as I recall from my brother's experience in Uganda, is that the re-closure of poorly balanced 3-phase lines can cause some lightly loaded farms to get excessive voltage. <S> He said all his appliances got smoked. <S> Circa the '80s I had the same thing happen to my staff: inside the factory after a power failure, our loads were unbalanced, and the power supply on the lightest-loaded phase got a 30% surge voltage on startup and broke the CA Marathon ATE tester... <S> After repairs, we told maintenance to rebalance the drop cables off each phase. <A> It's possible to design pathological equipment that fails under low voltage conditions, as an exercise in 'what not to do', but generally any design that has started life like this will have been caught by testing, and modified before it gets to customers, so that it 'fails safe' under low voltage. <S> A good way would be to have a heat generating mechanism that works at all voltages, and a cooling mechanism that fails. <S> For instance, a motor cooled by a fan on its own shaft, that fails to turn at low voltage. <S> A kettle is such a simple device, that my first thought was 'yes, you can make coffee at any voltage'. <S> However, if you leave it unattended, the automatic switch-off might require a certain minimum rate of steam flow to send enough steam down to the thermal switch (mine does), so it could boil dry. <S> But then all kettles (should) have an overheat cutout as well, that will switch off if boiled dry. <S> There's an interesting class of motor that will overheat if run at too low a load! <S> Think about that one for a moment.
<S> It's a particular type of induction motor, and at low load the speed rises to near synchronous, so the slip frequency drops so low that the armature saturates and draws excessive current. <S> Not all induction motors behave like this.
Low voltage can definitely damage electronic equipment, such as induction heating equipment controls.
Does an ADC output a PAM signal? To clarify the question: As far as I understand, an ADC (Analogue to Digital Converter) will sample and therefore quantize an analogue signal. Does an ADC therefore output a Pulse Amplitude Modulation (PAM) signal? And if not, why not? If you look at Pulse Amplitude Modulation on Wikipedia, you see that PAM "..is a form of signal modulation where the message is encoded in the amplitude of a series of signal pulses". Therefore, the mere sampling of an analogue signal results in PAM. The article goes on to say "Demodulation is performed by detecting the amplitude level of the carrier at every single period." However, when an analogue signal is sampled, the information is encoded. No actual need for modulating a carrier, it seems. P.S. The question may appear rather odd, but I'm coming from the world of AM and SSB amateur radio, where modulation conjures a certain image of what modulation amounts to. <Q> Sampling and quantization are two entirely separate concepts that you seem to be somehow conflating. <S> If you sample a continuous-time signal without quantizing it, you do indeed end up with a PAM signal. <S> But it is not a digital signal. <S> An ADC both samples its input signal and then converts the amplitude of each analog pulse to a digital word. <S> The word is a quantized measurement of the pulse amplitude. <S> A DAC can convert that sequence of words back into a series of analog pulses (a PAM signal again, but still quantized), and this must be followed by a "reconstruction filter" (e.g., low-pass filter, sometimes included in the DAC itself) in order to get a continuous-time signal that closely approximates the original. <A> Does an ADC therefore output a Pulse Amplitude Modulation (PAM) signal? <S> No, not all ADCs do. <S> For example Sigma-Delta (same as Delta-Sigma, btw) ADCs don't; they output a bitstream, which is a clocked (at the sample rate) stream of ones and zeros (so binary data).
<S> For some SD ADCs this signal looks more or less like a PWM signal. <S> In the end, it depends on the architecture of the ADC what kind of signal comes out. <A> No, the output from most ADCs (SAR for example, but also Flash and some sigma-delta) is properly called PCM (Pulse Code Modulation). <S> Many DACs, on the other hand, do output PAM.
A few ADCs (notably some sigma-delta types) output PDM (Pulse Density Modulation), which shares some features with PWM, namely that averaging can approximately reconstruct the waveform.
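The distinction drawn above (sampling gives PAM-like amplitudes; quantizing those amplitudes gives PCM words) can be sketched numerically. This is an illustrative model, not any particular ADC architecture:

```python
import numpy as np

fs = 8000   # sample rate (Hz)
f = 440     # test tone (Hz)
t = np.arange(16) / fs

# Sampling alone: a PAM-like sequence of exact analog amplitudes.
pam = np.sin(2 * np.pi * f * t)

# Quantization: map each amplitude onto 2^n discrete levels -> PCM words.
n_bits = 8
levels = 2 ** n_bits
pcm = np.round((pam + 1) / 2 * (levels - 1)).astype(int)  # integers 0..255

# Scale back and check the textbook bound: error <= half a quantization step.
step = 2 / (levels - 1)
recon = pcm / (levels - 1) * 2 - 1
assert np.max(np.abs(recon - pam)) <= step / 2 + 1e-12
```

The assertion confirms the key difference: the PAM sequence carries exact amplitudes, while the PCM words only approximate them to within half a step.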
Function Generator, BNC T and Termination I have a question regarding best practices for function generator to circuit/scope connection. I recently had an issue chasing down excessive ringing on square wave signals coming from a function generator set to Hi-Impedance mode (output is 50 Ohms, setting only affects the reading). The connection at the generator was a BNC T-connector, with one side connected via 3ft of pomona coax to an oscilloscope with no termination (just the standard 1M input impedance), and the other end was connected to a 3ft pomona mini-grabber for circuit connection. According to "The Art of Electronics, 3rd. Ed." Appendix H.1.2 and H.2 on transmission lines, terminating the cable at the source (series/backtermination through the built-in 50 Ohm at generator output) should be enough to eliminate wave reflections, and terminating both ends is not required. After contacting the manufacturer about the excessive ringing, they initially diagnosed this as a faulty generator, but were able to reproduce the same results on other generators with the same setup, so recommended not to use a BNC-T if I want to use HiZ mode, but to just probe the circuit input with a scope probe. I find the more permanent connection via T more convenient. So is there some kind of best practices here that people follow to assure clean square waves from their generators to the DUT? Is it generally not a good idea to BNC-T the connection from generator to scope? And/or do both ends need to be terminated, contrary to the position of AoE (although it does mention it as a "just to be safe" measure)? UPDATE: coming back to this after some time and hardware changes (no T-connectors, just single 50 Ohm coax): Image 1) a 10MHz square wave using 50Ohm coax, terminated with a 50Ohm in-line BNC terminator and connected directly into the scope. The result is decent. Image 2) same as 1, except using a 10M probe to measure the BNC output of the feedthrough resistor. 
Image 3) same as 2, except with a short BNC-to-mini-grabber adaptor connected to the 50 Ohm feedthrough resistor. Scope probe is connected to the mini-grabber tips. Image 4) same as 3, except a 100kHz square wave instead of 10MHz. Anytime the mini-grabber is connected, I get massive overshoot on square waves at any frequency, even with proper termination. What is the cause of this, and how do I remedy it to get a wave similar to the one shown in image 1 to appear at the input of my test circuits (without having a board made with a BNC connection)? <Q> The generator has a source impedance of 50 ohms. <S> Without the 50 ohm load, the generator is essentially unterminated and the signal will reflect back since it sees an impedance mismatch. <S> The best termination is a 50 ohm feedthru, but a T connector will suffice if the feedthru is not available. <A> Put a BNC T-connector on the input of your oscilloscope. <S> Extend the cable to your circuit from it. <S> Have a matched 50 ohm load at the circuit. <S> If needed, add a parallel or series resistor for proper matching at the circuit end. <S> Without it the reflection stays, and your oscilloscope can show a different voltage than your circuit gets. <S> It gives some isolation between the instruments and your circuit. <S> The signal strength is uncertain if there's poor matching. <A> A 3-foot section, out and back, with an assumed Er of 4, has an electrical length of 3 feet * 2 (out and back) * 2 (sqrt(Er)) = 12 nanoseconds, taking free-space propagation as roughly 1 ns per foot. <S> If your generator rise time is any faster than 5 or 10 times that 12 nanoseconds, you will see degraded pulse flatness (aka overshoot).
For best results it should be connected to the device under test with a 50 ohm coax cable which is then terminated with a 50 ohm load. I personally try to connect the oscilloscope to the circuit with Hi-Z probes, if it's possible. You can also have an attenuator at the circuit end of the cable.
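The electrical-length rule of thumb from the last answer is easy to check numerically. The 3-foot length, Er = 4, and ~1 ns/ft free-space propagation are the values assumed in that answer:

```python
import math

# Round-trip electrical length of the coax stub from the answer above.
length_ft = 3
er = 4.0

round_trip_ft = 2 * length_ft                 # out and back
t_ns = round_trip_ft * math.sqrt(er) * 1.0    # velocity factor = 1/sqrt(Er)

# Rule of thumb: keep the generator rise time at least 5-10x longer
# than this round-trip time to avoid visible overshoot.
min_clean_rise_ns = 5 * t_ns

print(t_ns, min_clean_rise_ns)  # 12.0 60.0
```

So a generator with a rise time much under ~60 ns into this particular stub geometry is likely to show ringing, which matches the mini-grabber observations.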
Is Driving a Motor With PWM Inherently Less Efficient than Using a Lower Voltage? This is a hypothetical question but it's been bugging me for a while. Let's say I hook up a DC motor with a propeller on it to a 10V battery and let it run. Then let's say I hook up a second identical DC motor and prop to a 100V battery, but with a PWM controller. And let's say I tuned the PWM duty cycle so that the RPMs of the motors are identical. The power output of the props should be identical now as well, since it's related only to the RPMs. Let's also say that hypothetically the mosfets in the PWM controller have 0 on resistance, and switch instantly. Will both systems be identical in efficiency? Or are there some inherent losses due to the PWM throttling? <Q> Let's also say that hypothetically the mosfets in the PWM controller have 0 on resistance, and switch instantly. <S> Will both systems be identical in efficiency? <S> Or are there some inherent losses due to the PWM throttling? <S> You haven't stated the PWM frequency. <S> But assuming it's around 100 kHz, then it's very close to being the same as if you didn't use PWM. <S> If it's in the MHz - GHz region then you can expect some of the energy being radiated outwards because some of your wires will act like an antenna. <S> I'd call this a "loss". <S> If it's in the sub 20 kHz range then you can expect to hear the PWM sound, and this might drive you mad. <S> I'd call this a "loss". <S> If it's in the 30 kHz range, you won't go mad, but your dogs might. <S> (It's like giving them tinnitus.) <S> I'd call this a "loss". <S> When your transistors are off, the motor will act like a generator (if it is rotating), giving current back to your system. <S> This means you will need some clamping diodes to guarantee that the terminals don't reach unsafe voltages. <S> So that's some loss, depending on how you handle that excess current. <A> Yes, there are eddy current losses in the motor iron, if you consider just the motor.
<S> But it's not really a fair comparison to just draw the box around the motor. <S> Generating a smooth-ish DC voltage from the mains or a battery efficiently generally involves PWM and filtering, and the filter inductor will have losses and the switching losses will occur in either case. <A> Let's also say that hypothetically the mosfets in the PWM controller have 0 on resistance, and switch instantly. <S> What's the point of making that assumption? <S> That's EXACTLY what would be different: it's called switching losses.
You should consider the efficiency from mains or battery source input to shaft horsepower output, and consider power factor too in some cases.
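The thread above names switching losses as the key non-ideality once the zero-Rds(on), instant-switching assumption is dropped. A standard hard-switching estimate, with purely hypothetical numbers, looks like this:

```python
# Hard-switching loss estimate per transistor (all numbers hypothetical):
#   P_sw ~= 0.5 * V * I * (t_rise + t_fall) * f_switch
V = 100.0      # switched bus voltage (V)
I = 5.0        # motor current during the edge (A)
t_r = 50e-9    # voltage/current crossover time on turn-on (s)
t_f = 50e-9    # crossover time on turn-off (s)
f_sw = 100e3   # PWM frequency (Hz)

p_sw = 0.5 * V * I * (t_r + t_f) * f_sw
print(p_sw)  # 2.5 W lost purely to finite switching edges
```

Note the loss scales with both switched voltage and frequency, which is why the 100V PWM system is not automatically as efficient as the 10V direct drive.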
Why is there an ampere rating listed on a DC power adapter? 1. The current drawn from a battery is decided by the resistance of the electronics used (correct me if I am wrong), so there is no limit to the current that can be taken from a battery (except over time). 2. Is this also true for AC (no limit on the current that can be drawn)? If both statements are true, then why is there an ampere rating listed on a DC adapter (mobile adapter)? 3. When I use DC adapters for my project, what should I take into consideration (should I take the listed amperage into consideration)? <Q> The current drawn from a battery is decided by the resistance of the electronics used (correct me if I am wrong). <S> Correct, mostly. <S> So there is no limit to the current that can be taken from a battery (except over time). <S> No longer correct. <S> If you take too much current from a real battery, the output voltage will drop (due to internal resistance), and the battery will heat up. <S> In extreme cases, the battery could catch fire, or ignite flammable materials nearby. <S> Is this also true for AC (no limit on the current that can be drawn)? <S> For the AC power supplied in your walls, the current is limited by a circuit breaker. <S> If you attempt to draw too much power, the circuit breaker will trip and cut off the current. <S> When I use DC adapters for my project, what should I take into consideration? <S> You should use a DC adapter rated to provide more current than your project requires. <A> The maximum current that can be delivered by an AC power supply, DC-DC converter or other type of power supply is determined by the design of the power supply - how much current the components can safely handle, and how much power can be dissipated by the components (and heatsinks) in the supply. <S> When selecting a power supply (AC adaptor) for your project, you must select a supply that will deliver the correct voltage, and AT LEAST the current required by your project.
<S> A higher current rating is fine (even desirable), as the load should only draw the current it requires. <A> I think you can figure out from the other answer that supply always exceeds demand unless you are thinking about coin cells trying to drive a relay. <S> But there are some other facts you may want to consider and read more about. <S> Every power source has a safe output limit, and AC devices have breaker ratings for that device at max load. <S> A source is always capable of delivering more power for a short duration, as magnetic breakers have a time delay with an inverse trip time versus overcurrent. <S> Short circuits, on the other hand, can be one thousand times (1000x) the breaker current rating of 10A until tripped, and breakers are designed for this possibility. <S> Overcurrent protection (OCP) is a must for every application where safety issues can occur. <S> This is done with fuses, breakers and resettable PTC thermal types where applicable, or with relays, crowbar circuits, etc. <S> Maximum power transfer occurs (in practice and in theory) at half the no-load voltage of an unregulated source, when the load impedance matches the supply impedance. <S> But this can cause thermal failures. <S> Normally unregulated power sources are designed to stay within ~ +/-10% (more or less), such as car batteries when not being charged, the AC grid feeding North American homes, and LiPo cells from 3.7 to 3.3V/cell over most of their capacity. <S> So unless you want to cause massive thermal issues, with half the power lost in the power source or the wires, never draw even close to the unregulated output short-circuit or maximum-power limit of the source. <S> Closed-loop regulated power supplies are different, because the negative feedback in the regulator lowers the effective output impedance and tightens the load regulation error. <S> Although you can use 100% of the rated power output, it is prudent for reliability reasons to leave sufficient margin for heat rise.
<S> The short-circuit impedance of AC distribution transformers is about 8% of the rated load, meaning the short-circuit current is 1/(8%) of the rated output, i.e. 12.5x the rating of a typical 15kVA padmount. <S> So a typical modern North American residence with 200A 240Vac split-phase service could easily see 10kA of short-circuit current before the breakers trip. <S> It can turn a huge screwdriver tip into vaporized copper sputtered onto a plastic glasses lens. <S> (Personal experience.)
The reason the circuit breaker is there is because if the current is too high, the wires and the outlet itself will heat up and could start a fire.
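The 8% impedance figure quoted above implies the available fault current directly. A quick sketch with the 15 kVA padmount example (numbers from the answer, single-winding simplification assumed):

```python
# Fault current implied by transformer short-circuit impedance (%Z).
# Example numbers from the answer: 15 kVA padmount, 240 V, 8% impedance.
kva = 15.0
v_secondary = 240.0
z_pu = 0.08                          # 8% expressed per-unit

i_rated = kva * 1e3 / v_secondary    # 62.5 A full-load current
i_fault = i_rated / z_pu             # 781.25 A: the "12.5x rating" figure
print(i_rated, i_fault)
```

The 12.5x multiplier comes straight from 1/0.08; larger service transformers with the same %Z scale the bolted-fault current up proportionally, which is how multi-kA faults arise.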
Why does current not flow in an open-circuit wire connected to a closed circuit? I know that current flows from higher potential towards lower. I was wondering why it does not flow towards point "a" and why isn't it divided into "i_y" and "i_x"? And if that happened, it would get stuck inside the open wire at point "a", and current "i_y" would head back towards the source. Here, I was thinking, what is the potential of the open circuit in this picture, and is it low enough for the current i to flow into it? simulate this circuit – Schematic created using CircuitLab <Q> I have no idea whether this will help you or anybody else, but your question, "I was wondering why it does not flow towards point "a" and why isn't it divided into "i_y" and "i_x"?", makes me think that you imagine that the 'a' branch can accept current. <S> If I can use a road and traffic analogy it might look like this: Figure 1. <S> Top: open circuit. <S> Bottom: closed circuit. <S> As Figure 1 attempts to show, the whole road network is full of cars. <S> When the switch is closed and traffic is allowed to flow, all the vehicles in the closed loop align and begin to circulate. <S> None branch down the dead-ends as there is no room and no way out for the vehicles. <S> (The main circuit is already full and can't take vehicles from one of the dead-ends.) <S> You could make similar analogies with water pumps, etc., but the important thing to realise is that the system is already primed. <A> Under steady-state conditions there is no place for the current in that branch to flow, as others have said. <S> The potential at the end of that wire is equal to the potential at the start of the branch, <S> so there's no push. <S> At time zero, though, there is no potential anywhere except at the source. <S> When the supply is activated for the first time, or a switch is closed, charge carriers begin to move down the line (we'll use the positive convention for simplicity).
<S> At this time the supply has no idea the end of the line is open, so it will continue to pump. <S> As our friendly charge carriers slam into the open circuit they will begin to build up until they are strong enough to push back against the flow. <S> Then the system will fall back into equilibrium, at which point no more carriers will flow down the open end of the line. <S> The potential will be the same. <S> You could do an experiment with a pulse generator, an oscilloscope and some coax, and look at the voltage waveform that appears at the end of the line. <S> Now, depending on the physical structure of the lines and the speed of the input signal, more interesting things could happen. <S> For instance, a high-frequency signal might find a coupling path and radiate like an antenna. <A> There is no potential there! <S> It is like throwing a ball where there is no gravity: what would happen? <S> A loop must be closed in order for current to flow. <A> There is a small amount of biasing current at point a. <S> I have been point a. <S> I felt it. <S> It taught me a lot about electrical safety. <S> The effect is similar to a hydraulic line that is unpressurized but entirely full of oil. <S> When you open the valve to connect it to a 2000psi source, the oil compresses very, very slightly, and there is this momentary "blip" of flow at a. <S> It then balances out and flow stops. <S> Essentially it is placing an electrostatic charge on the wire past a. <S> That can be measured or calculated, but it is inconsequential unless the physical material past a is quite large, or the voltage is quite high. <S> I can tell you biasing a 20,000 square foot metal roof to 600V is noticeable. <S> You can think of it as a small capacitor. <S> AC is a different deal. <S> That tiny biasing current will flow every half cycle, or rather, almost all the time as it changes the electrostatic charge.
<A> In an open-circuit condition, a voltage is present across "a" from the source, but the current is zero, so there is zero power dissipation. <S> P = I*E; I is zero, and anything multiplied by zero is zero. <S> In an open circuit, the wire end between the two points presents infinite resistance: I = V/R = V/infinity, so I = zero.
There are no voids for "current to flow into".
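The "small capacitor" picture in the last answer can be put into numbers. The capacitance and resistance here are assumed ballpark values for a short hookup wire, purely for illustration:

```python
# Modelling the dead-end wire as a tiny capacitor (all values assumed,
# ballpark for a short hookup wire; this is only an illustration).
C = 20e-12   # stray capacitance of the open stub (F)
V = 12.0     # source voltage (V)
R = 50.0     # source plus wire resistance (ohms)

Q = C * V    # total charge in the momentary "blip": 240 pC
tau = R * C  # the blip settles within a few time constants: ~1 ns

print(Q, tau)
```

A 240 pC transfer settling in nanoseconds is the "momentary blip of flow" described above: real, measurable in principle, and completely inconsequential at this scale.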
Using D+ and D- from USB as power source? I'm working on a project in which I need to be able to toggle the power to a USB device connected to my laptop. My original intention was to reduce the 5V USB power to 3V, and then toggle the USB on and off with a program I would write. However, after some research it would appear that it is not possible to turn USB ports on and off with code. With that in mind, I'm wondering if it would be possible to use the D+/D- lines as a togglable "power source" for my device. I've read in several places that the data lines on USBs are 3.3V? Is that true for all data USBs, or am I misunderstanding? If it's true, then I could theoretically reduce those 3.3V to 3V and it would solve my issue (as I could send/not send data). I'd also like to importantly point out that I say "power source" because it doesn't have to be constant, just a short on/off pulse with a high enough voltage. Also, I realize that there are products on the market that I could buy to simplify my issue, but the idea of my project is to use things I already have available to me (just the USB device, resistors and cables). I also should have noted exactly what my project is - it's very much a patchwork project. I've hijacked the remotes for two strings of Christmas lights. I've hardwired one to be permanently on and the other permanently off. Each remote originally contained a 3V lithium coin battery (CR2025). Instead of hijacking the IR system, I decided it would be easier to hijack the power one. So I decided to replace the battery with USB power, which is where I ran into the problem I stated above. Pulsing USB power to the ON remote will simulate me pushing the ON button, and likewise for the OFF remote. <Q> Most likely not... USB output impedance is high. <S> It's designed to match a 90-Ohm differential impedance in the cable. <S> That means there is (the effect of) a resistor in series to match this cable impedance.
<S> Driving through that "source impedance" results in a voltage drop that increases with current draw. <S> You will only have the appearance of ~3V when your current draw is almost nothing. <S> After that it will fall quickly. <S> This is a microcontroller output curve. <S> This microcontroller is much stronger (voltage drops less for a given current) than most dedicated USB outputs, but it illustrates the problem you'll face. <S> You have limited control: you do not have complete control over the state of the data lines in a USB port. <S> The protocol requires periodic state changes to those lines, for things like checking if the device is connected, enumerating (listing) <S> all the devices on the bus, etc. <S> So if you have a (very) low current device attached and it otherwise works, it will cycle on and off unexpectedly from your perspective as your computer operates normally. <S> Pull-up resistors interfere with control. <S> Control is packet-based: devices on a USB bus must announce themselves when asked (part of bus enumeration), and if you do not respond correctly, data sent to the port will not actually change the pins (the driver will just send your program an error code instead). <S> USB pins sit behind a bus/port controller and follow a rather elaborate protocol. <S> It's not like a GPIO pin in a microcontroller. <S> Further, if you successfully communicate with the USB host controller, you will still not be able to hold the line in a dedicated state. <S> Data is transmitted in packets (defined sequences) and the protocol uses a non-return-to-zero encoding scheme, which is not inherently self-clocking. <S> To correct for this limitation, USB controllers enforce a run-length limiting (RLL) scheme that limits the number of 0s or 1s that can come in an uninterrupted chain. <S> That means even if you send all 1s to the USB controller, the line will not stay in that state uninterrupted.
<A> From your description, the hijacked "IR transmitter" doesn't qualify as "the USB device" by a long mile. <S> As I understand the problem, you want to turn power on and off on this IR transmitter using a PC with USB ports. <S> While there are indeed means to control port VBUS power on certain hosts and hubs, the control is deeply embedded in the host controller driver at the kernel level, and is not accessible from user space. <S> Using D+/D- will face the same difficulties (there are USB test modes that can drive the lines on and off permanently, but they need a special USB test stack, which doesn't have a programmable user interface). <S> Assuming that you have hijacked the transmitter correctly and it operates simply by power-on and power-off, you need a real USB device that has control over a GPIO signal, which can be used either to control a transistor, or even to drive your IR dongle directly if the GPIO has enough power (20-30 mA <S> I would guess will be enough to simulate the coin battery on-off, but it needs to be researched). <S> As an example, this demo board (MCP2221) does have four GPIOs available (in addition to I2C and/or UART). <S> (It can even be a small DAC and ADC.) <S> The board comes with configuration software and a USB host driver that allows toggling these GPIOs. <S> The GPIO can sink or source 25mA of current at 3.3V level. <S> It costs $20; difficult to beat this price. <S> I am sure there are other demo devices that can do the same function. <S> CORRECTION: <S> MCP2221A can't drive 25 mA at 3V, sorry. <S> You will likely need a transistor switch to drive your IR device. <A> I've read in several places that the data lines on USBs are 3.3V? <S> Yes and no. <S> Yes, they are supposed to have a signal between 0 and 3.3V on them. <S> No, they cannot be used as a power source. <S> The lines are both connected to ground on the host side via a 15kOhm resistor each.
<S> The host will detect a connected device once one data line is pulled up to 3.3V with a 1.5k resistor.
USB requires the data line to be pulled high in the nominal case, so it may not be possible to operate (depending on your load and configuration) in a manner where you can turn off the device as doing so would signal to the port controller that something is wrong, or that no device is connected. Unfortunately, your way of thinking is not implementable.
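The 15k host pull-down / 1.5k device pull-up arrangement described above sets the idle level the host sees; the arithmetic is just a voltage divider:

```python
# Idle level on D+ with the host's 15k pull-down and a full-speed
# device's 1.5k pull-up to 3.3V -- just a voltage divider.
v_pullup = 3.3
r_pullup = 1.5e3    # in the device
r_pulldown = 15e3   # in the host

v_dplus = v_pullup * r_pulldown / (r_pullup + r_pulldown)
print(round(v_dplus, 2))  # 3.0 V: host concludes a device is attached
```

This also shows why the line is useless as a supply: any meaningful load current through that 1.5k source resistance collapses the 3.0 V immediately.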
Explanation needed on using 2 transistors as a switch It was suggested that I use the following design to drive a load with a micro-controller. I would like to know why it is necessary to use 2 transistors (N-ch and P-ch) to act as a switch, and not only one. I searched over Google and YouTube, and most pages were using one transistor (mostly N-ch) to make a switch, like this page: http://www.electronics-tutorials.ws/transistor/tran_7.html Could you please explain the advantages or disadvantages of such a design (2 transistors) over one-transistor switches? simulate this circuit – Schematic created using CircuitLab <Q> If the digital signal swing is the full 5 V, then you can use just the final P-channel FET. <S> The circuit you show would work with the power voltage being up to the maximum G-S voltage the second FET can handle. <A> This is a top side switch. <S> Most of the circuits you have probably seen are bottom side switches. <S> Top side switching adds some interesting issues that are unique to that application. <S> As such, there are numerous reasons for the two stage switch you indicated. <S> The two main ones are: <S> Even when the switched voltage is the same as your logic power supply voltage, the high level logic output voltage can be significantly lower than the rail. <S> This can result in inconsistent switching of a single P-Channel MOSFET. <S> The gate of a MOSFET is basically a capacitor, and because the P-Channel MOSFET is relying on that pull-up resistor to turn it off, the size of that pull-up needs to be relatively small if you need to switch this power quickly. <S> As such, the current you need to be able to pull down through the pull-up when the N-Channel is on can be a lot higher than your GPIO can sink. <S> Additional Benefits <S> The two stage control also allows you to switch a much higher voltage to the load than the logic supply. <S> Theoretically you can switch up to the Vds maximum of the P-Channel device with a two stage driver.
<S> However, the circuit would need to be modified to limit the voltage on the gate of the P-Channel to under Vgs_max. <S> Further, top-side switching of very high voltages is, in general, problematic. <S> By using a small signal N-Channel for the first device you can significantly reduce the capacitive load on the GPIO pin. <S> This reduces the strain on the latter and keeps your logic supply less "noisy". <A> As an add-on to @OlinLathrop's answer, the other difference between the P-channel FET (with or without the additional N-channel FET) and the N-channel FET shown in your link is that the P-channel is a high-side switch (switches the Vcc to the load) while the N-channel is a low-side switch (switches the ground to the load). <S> For simple loads without additional I/Os, such as LEDs, motors, etc., the low-side switch is fine. <S> For loads with I/Os connected out to separately powered circuits, such as other microcontrollers or sensors, it is generally preferred to keep the ground connected and use a high-side switch.
The advantage of the two-transistor circuit is that the power voltage being switched and the digital signal power voltage don't need to be the same.
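The pull-up sizing trade-off from the answer above can be sketched with assumed component values: a large pull-up turns the P-FET off slowly, while a small one forces the N-FET to sink more current (which is exactly why the N-FET stage exists rather than the GPIO doing it directly):

```python
# Turn-off speed vs. pull-up size for the high-side P-FET (assumed values).
R_pullup = 10e3   # gate pull-up resistor (ohms)
C_gate = 2e-9     # effective P-FET gate capacitance (F)
V = 12.0          # switched supply voltage

tau = R_pullup * C_gate
t_off = 3 * tau            # gate ~95% recharged after 3 time constants
i_sink = V / R_pullup      # current the small N-FET sinks while on

print(t_off, i_sink)       # ~60 us turn-off, 1.2 mA sink current
```

A 60 us turn-off is far too slow for fast PWM; shrinking the pull-up to 1k cuts it tenfold but raises the sink current tenfold, a current the N-FET handles easily where a weak GPIO might not.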
Can an SMPS work as a cellphone charger? I tested charging a mobile using an SMPS; it charges the battery 10% in an hour. But when I connect the same mobile to its mobile charger, it charges the battery 40% in an hour. The SMPS input and output ratings are IN 100-240V AC 0.23A, 50-60Hz; OUT 5V DC / 2000mA. The charger input and output ratings are IN 100-240V AC 0.35A, 50-60Hz; OUT 5V DC / 2000mA. What is the reason for slow charging with the SMPS? And can I make it fast-charge using any additional circuits for the SMPS? <Q> 1) <S> The phone determines the charging rate, not the power adapter. <S> 2) <S> The original power adapter uses signals (on the USB data lines) <S> so that the phone knows <S> it is the original power adapter, which can deliver the required current. <S> 3) <S> The phone cannot detect that the SMPS can actually deliver 2A (since the SMPS does not provide the identification signals like the original power adapter does), <S> so the phone plays it safe and draws a smaller current. <S> Then charging takes longer. <S> It depends on the actual charger and phone model what will be needed to make the phone fast-charge with a non-original adapter. <S> For some (older) devices, shorting the DATA+ and DATA- pins is enough. <S> Others use more complex signals, like QuickCharge. <A> Mobile phone chargers use mains voltage which is converted to 5V DC, and then the USB socket is used to charge the phone. <S> Inside the charger is an SMPS which converts the voltage to 5V. <S> The reason your device will charge slower when using your home-made device is because the manufacturers include resistor dividers on the D+ and D- pins in the charger to give a set voltage. <S> These then tell the phone the type of charger being used and how much current it can draw from it. <S> A good article is HERE which explains quite nicely how different voltages on the D+ and D- pins can determine current draw. <S> It is all a safety thing to prevent an overcurrent situation.
<S> Basically, the phone won't recognise your charger as an authentic charging unit, so it will only draw a small amount of current from it as a safety precaution. <S> As for your question: Can I make fast charging using any additional circuits for the SMPS? <S> Even doing this will result in it only being able to 'fast charge' that particular model, as others will have different configurations (as I found when I designed a portable charger that charges my Samsung nicely but is slow with charging any iPhones after the iPhone 6s). <A> When connected to a USB port, the portable device is not allowed to consume more than 100 mA. <S> This is enough to conduct a digital negotiation, in which the portable device determines the port's power capabilities. <S> A typical SMPS power supply does not support a digital negotiation. <S> The Battery Charging v1.2 <S> Spec and Adopters Agreement describes the charger requirements. <S> There is also a brief overview. <S> You can use a USB charger adapter emulator. <S> For example, the MAX14630/MAX14632 are charger adapter emulators that can be configured to automatically detect a USB BC1.2-compliant device, Apple 1.0A device, Apple 2.1A device, or Samsung® Galaxy Tablet 2A device.
Yes, you can, but you will most likely have to open up a faulty (authentic) charger and measure the D+ and D- pin voltages so you can get your resistor dividers correct.
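A divider per data line is all the hardware involved. The resistor values below are widely reported for Apple-style signalling levels, but treat them as illustrative and verify against the device you actually need to satisfy (measuring a working charger, as suggested above, is the safest route):

```python
# D+/D- signalling voltages come from plain resistor dividers off 5V.
# Resistor values are illustrative, not authoritative -- verify for
# your target device before building anything.
def divider(v_in: float, r_top: float, r_bottom: float) -> float:
    return v_in * r_bottom / (r_top + r_bottom)

v_low = divider(5.0, 75e3, 49.9e3)    # ~2.0 V level
v_high = divider(5.0, 43e3, 49.9e3)   # ~2.7 V level
print(round(v_low, 2), round(v_high, 2))
```

Different combinations of the ~2.0 V and ~2.7 V levels on D+ and D- are what distinguish the 1A and 2.1A advertisement modes; BC1.2 dedicated chargers instead simply short D+ to D-.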
Why are phase-controlled rectifiers not used in place of normal rectifiers? I have planned to design an SMPS like the one in a PC. In most SMPS, AC is rectified using a bridge rectifier, converted to high-frequency AC, stepped down using a high-frequency transformer, and bucked/boosted based on the application. Instead of going through all these processes, why can't we use a phase-controlled rectifier to get the required DC voltage? <Q> Among other things, switching noise in a standard SMPS is somewhat isolated from the AC line by the rectifier. <A> That's how it was done for years, BEFORE the advent of smaller-cheaper-faster MOSFETs to do it with PWM. <S> An SCR front-end doing voltage control results in a much larger power supply per unit of output, plus more heat rejection that must be dealt with. <S> Yes, there are more parts and processes involved, but they are significantly smaller and more efficient. <A> I have an ancient Tek analog storage oscilloscope that uses a power supply like this. <S> It's not very common in the current century because it requires a lot of iron in the transformer. <S> Modern switching power supplies operate at high frequency so the circuit (particularly the transformer) can be made much lighter and cheaper.
If you use SCRs for voltage control, it's much harder to keep switching transients from propagating back out into the AC line and causing unacceptable RFI.
Wire gauge for low voltage, high current DC application For the past few weeks I've been shopping for a remote controlled car for my son, along with some batteries and a charger that will work with them and his other RC toys. To make a very long story short, it would be generous to say that the RC industry has many "standards" for its electrical connectors. Rather, there appears to be an almost limitless menagerie of arbitrarily shaped plastic and metal pieces to connect batteries to their respective toys and chargers. Consequently, the consensus advice I've received from friends, online research, and our local hobby shop is to cut off the heterogeneous connectors from our various components and solder on a standard type across our fleet. OK, I can do that. I'm having difficulty, however, in choosing appropriately rated connectors and wires, because I don't understand the safety implications in this application. I've done enough home wiring to choose the right wires/breakers/outlets/etc. for 120V AC. Unfortunately, I don't trust my layman's understanding of electrical/electronic principles to translate safe home wiring to safe RC car wiring. The crux of the issue is that I'm seeing staggering numbers for some of these RC components - like 100A maximum draw for the car's electronic speed control (ESC) unit, or 65A for a high-performance brushless motor. I say "staggering" because the main breaker in my house is 100A, and we're talking about a little toy car here. Of course, my house uses 120V AC, and this toy uses (nominally) 7.4V - 11.1V DC. So, my gut tells me that I don't need a big slab of metal like the main bus in my home's breaker panel to safely move electrons from the toy car's battery to its motor. I mean, 100A@120V has to be somehow "more" than 100A@7.4V, right? This is supported by the fact that the car comes with 12 AWG wires, not the hot-dog-sized cable that carries 50A to my oven, for example.
Still, in looking at spec sheets for my connector and wire options, most things seem to be rated based on how much current they can handle, without specifying a voltage. Similarly, I've read a number of posts on other forums that say essentially "amps are amps" for safety/heating purposes, regardless of whether they're AC or DC. Again, few mention voltage. So, with all of that as context, my question is: Does either the type of current (AC or DC) or the voltage matter when choosing a safe connector/wire to carry a given number of amps? To recap, I'm looking to standardize on connectors for, say, up to 15V DC over wires a few inches long. In practice, the current will probably be a few tens-of-amps. But, the battery is rated to discharge at 300A (no kidding!) and the thingy it powers (the ESC) is rated at 100A. So, I assume I should plan to use connectors that can handle 100A@15VDC. Apologies for my verbosity; I just want to make sure that I neither burn down my son's new toy, nor our house. Thanks in advance for your help. <Q> Forget such things as the motor start current - what matters here is the current required to charge a discharged battery. <S> Charging is low voltage, so insulation is not an issue. <S> In all probability, the existing charging cables are adequately sized (probably only marginally, to save cost) so use wiring which is a bit thicker. <S> For example, if the largest charger cable is 18AWG, use 16AWG for your connections. <S> Start with this in mind, and check the heat developed on charge - a minor rise in temperature is OK, but distinctly warm to the touch means that the cable is undersized. <S> Better to be safe than sorry. <A> Voltage doesn't burn connectors, the current does. <S> Regardless of the electrical potential difference. <S> Wires are heated (and sometimes smoked) by current as well, and connectors smoke by current. <S> Wires get burnt by the power dissipated in the wire resistance by the flowing current. 
<S> That's why the RC industry doesn't mention any voltage. <S> [the voltage is however important when you make the initial contact, or use an electromechanical switch]. <S> Why don't you let your son figure this out on his own; he probably will be better prepared for real life if he fries a toy or two (mine did!). <S> That's why the RC industry has a range of banana-type couplings for each case, just as the electronics industry has thousands of barrel connectors for proper use cases. <S> You can "standardize" across your fleet for the highest denominator, but it won't be optimal. <A> It's possible the "little toy car" is an entry level model and has more modest requirements than the extreme systems built by enthusiasts - but if your son wants to hot rod his Christmas presents, (that's a victory, right?) <S> you'll eventually move into that territory. <S> So depending on the size and style of motor, controller etc, 25A may be enough to begin with, 50-100A later. <S> But if you're starting with 100A rated components, rate connectors likewise. <S> And below 50V there's not much reason to worry about voltage compatibility or insulation safety for connectors or wires. <S> Anything like a motor controller, switch, or relay that's only rated for 12V or 24V will say so on datasheet and (usually) case. <S> Two things are different between your domestic stove and the car, even when both are rated at the same current. <S> The car is much smaller than your house: resistance over a few inches of cable creates some loss, but resistance over fifty feet causes a much more serious waste of power. <S> That said, every little loss eats performance. <S> If you have two evenly matched cars, upgrade the 12awg wires in one to 6awg and see if it goes faster. <S> Duty cycle is limited to a few minutes by the battery, and as you increase performance, battery life falls further. <S> You can tolerate short term overloads that would melt components in continuous operation. 
<S> (Visiting a friend in America for Thanksgiving, her house wiring failed just as the turkey was done!) <S> Check for overheated wires or connectors after a fast run and upgrade if necessary.
And yes, you need to size your connectors and wires for proper ampacity of the RC toy.
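The "amps are amps" point from the answers above can be sketched numerically: wire heating is I²R and the supply voltage never appears in it. A minimal sketch (the AWG resistance values are approximate copper figures, assumed for illustration):

```python
# Heating in a wire depends on current and resistance only; the supply
# voltage does not appear in the dissipation formula at all.
# Resistance values below are approximate (ohms per foot, copper, room temp).
AWG_OHMS_PER_FT = {12: 0.00159, 16: 0.00402, 18: 0.00639}

def wire_dissipation_w(awg, length_ft, current_a):
    """Power turned into heat along the wire: P = I^2 * R."""
    r = AWG_OHMS_PER_FT[awg] * length_ft
    return current_a ** 2 * r

# Same 100 A through 6 inches of 12 AWG, whether the system is 7.4 V or 120 V:
p = wire_dissipation_w(12, 0.5, 100)
print(round(p, 2))  # about 8 W of heat in half a foot, regardless of voltage
```

This is why connector and wire ratings are given in amps: the voltage only matters for insulation, which is not a concern at 15 V DC.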
Start and stop using one pushbutton I have to give a signal to an inverter. I am trying to do this using two push buttons. The first pushbutton can turn on and turn off the signal (a relay coil) like a toggle switch. The second push button can only turn off the signal (previously turned on by the first pushbutton). Is there any way to do this only using several relays (relay logic)? If not, suggest a simple method. <Q> Two methods: Using the internal logic of the inverter. <S> Some contain comprehensive programmable inputs and sometimes even PLC (ladder diagram) like functionality. <S> Relay latching circuit: <S> Pressing On engages the relay. <S> The relay then bypasses the On switch. <S> The Off switch disconnects the relay. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> See this app note; it does the toggle switch, you just need to add the second button to it. <S> https://www.maximintegrated.com/en/app-notes/index.mvp/id/4444 <A> Well, the simplest method is to use some type of latch. <S> That utilizes two buttons - one for latching the input state and a second for resetting it. <S> But of course there are two buttons... <S> It works really simply. <S> The Reset (4) pin is tied to Vcc, so the IC is not resetting. <S> Discharge (7) is not connected anywhere, so that means the capacitor (C1) will stay charged even if you turn the switch off. <S> As soon as you pull the switch, C1 will charge and stay charged. <S> That will also change the state of the NE555's internal comparator, because now you'll have more than 0V on the trigger (2) pin, so it latches the state of the capacitor (ON). <S> When you press the button again, the capacitor will discharge and now you have 0V on the threshold (6) pin, which is not higher than the 0V trigger value, so the latch will reset. <S> Of course, you have a transistor and resistor on the output so you can draw more than 20mA from the NE555. 
<S> Hope I helped :) <A> Two DPDT relay toggle circuit: <S> The second function can be done by a button shorting out the coil of D1, provided a resistor in series with PB1 limits the current if both buttons were pressed at the same time. <S> The resistor needs to be small enough to still effectively short out the relay coil in the transition state.
So one of the simplest methods is to use the NE555 timer, as shown on the schematic:
How to reduce noise? (Flip-flop circuit creates a lot of noise) I have this flip-flop circuit and a sound detector circuit (amplifier + microphone). When connected to individual batteries (NiMH) they work nicely. But when connected to the same battery, the sound detector goes crazy. It looks like the flip-flop is creating a lot of electric noise that reaches the amplifier. My questions are: 1. Why does an analog flip-flop LED circuit create so much noise? I expected this from digital circuitry, not from a simple flip-flop. 2. What kind of filter should I install to suppress the noise? <Q> What you have made is digital circuitry- <S> it switches. <S> Using the same unregulated supply is injecting noise into your amplifier through the battery power. <S> Possibly the problem is in the electret microphone front end receiving power from the battery without regulation or enough filtering, but we have no way of knowing what circuit the detector uses let alone redesigning it to work better. <S> If my guess is correct, a simple RC filter on the electret load may work: simulate this circuit – <S> Schematic created using CircuitLab <A> Follow Spehro for circuit improvements and read this for EMC practice. <S> EM interference is simply a crosstalk of R, L and C. <S> L <S> The current is not too large but the size of wire loop area creates inductance between pulsed voltage near 0V and the ground return wire. <S> So this loop must be eliminated with twisted pair throughout the entire path to the LEDs, Q's and battery. <S> The proximity of high impedance Mic input makes it also necessary to shield the mic with shielded twisted pairs and perhaps a Ferrite sleeve to reduce the inductive coupling from pulsed current induced into the high Z mic. <S> This is Inductive noise. <S> C Capacitive noise is the crosstalk of dV/dt between low and high impedance signals. <S> Shielding works best, or twisted ground around signals, with no shared ground current from any other source or load. 
<S> Coax with connection at one end only works better but that is for shielded twisted pair. <S> R Conductive noises are from shared ground currents or supply currents and although you have done a good job with decoupling caps, be sure your mic circuit does not share any ground current with the noise source. <S> Second is that the current drawn by the LEDs, Q's and battery may share wires used by the amplifier circuit. <S> Avoid this and use direct power and ground wires (or a radial star method from a cap-decoupled voltage source). <A> Suppose the FF pulls 10mA for 10 nanoseconds. <S> Suppose there is coupling into a "loop" of the analog circuit, that loop being 4cm by 1cm. <S> Some voltage will be induced, assuming 1cm separation from FF to analog circuit. <S> Vinduce = mu0 * mur * Area/(2 * pi * Distance) * dI/dT = 2e-7 * (Area/Distance) * dI/dT = 2e-7 * (4cm * 1cm / 1cm) * (1mA/ns) = 2e-7 * 0.04 m * 10^6 A/s = 8e-9 * 10^6 = 0.008 volts
A neat tight twisted wire layout works well with a twisted wire that carries no ground current to absorb stray C noise.
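The loop-coupling estimate in the answer above reduces to a one-line formula. A sketch reproducing that worked example (the 2e-7 factor is mu0/(2*pi) in SI units):

```python
# Induced voltage from a switching loop coupling into a nearby circuit loop:
# V = (mu0 / (2*pi)) * (area / distance) * dI/dt, with mu0/(2*pi) = 2e-7 H/m.
def induced_v(area_m2, distance_m, di_dt_a_per_s):
    return 2e-7 * (area_m2 / distance_m) * di_dt_a_per_s

# The answer's numbers: a 4 cm x 1 cm loop, 1 cm away,
# current slewing 10 mA in 10 ns (i.e. 1 mA/ns):
v = induced_v(0.04 * 0.01, 0.01, 10e-3 / 10e-9)
print(v)  # 0.008 V, i.e. 8 mV of noise coupled into the analog loop
```

8 mV is easily enough to upset a microphone front end, which is why the answers stress twisted pairs and small loop areas.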
Can I replace op amp INA111 with op amp LM324CD? I'm pretty much a beginner here, but I'm trying to implement this circuit. The issue is that I didn't find an LM224, so I replaced it with an LM324CD, and I think it doesn't have the same electrical properties. This part takes charge of transmitting electrical signals from my head to an Arduino. (Funny part) I can see the electrical signals from my head. However, I'm already done with this, but it seems that I've made some mistakes here, and I don't really know where. This is the full circuit (I'm afraid that it's not helpful, so feel free to ask any question). This is a zoom on the electrodes I put on my forehead. And this is for the signal electrodes I put on the front of my head. Edit: As a sensor, the solid gel electrodes have high input impedance and low output impedance: what this essentially means is that current can easily pass downstream to the rest of the circuit (low output impedance) but would have trouble passing upstream back to my temples (high input impedance). This prevents the user from being injured by any high currents or voltages in the rest of my circuit; in fact, many systems have something called a patient protection resistor for additional protection, just in case. Many different electrode types exist. Most people suggest Ag/AgCl solid gel electrodes for use in EKG/EOG/etc. applications. With this in mind, you need to look up the source resistance of these electrodes and match it to the noise resistance (noise voltage in V/sqrt(Hz) divided by noise current in A/sqrt(Hz)) of the op amps; that is how I chose the correct instrumentation amp for my device. This is called noise matching, and explanations of why matching source resistance Rs to noise resistance Rn works can be found online, like here. For the INA111 that I chose and replaced with the LM324CD, the Rn can be calculated using the noise voltage and noise current from the data sheet like this: A possible specific question: Can I replace the INA111 amp with any other amp? 
If yes, what should I change in my circuit? <Q> INA111 is an instrumentation amplifier. <S> LM324 is an operational amplifier. <S> Those devices are NOT interchangeable. <S> You need an instrumentation amplifier; it is clear from your schematic. <A> The issue of whether an LM324 can replace an LM224 is best answered by looking at the data sheet. <S> But let me save you the trouble - in this circuit it will work just as well. <S> The INA111, however, is a different thing entirely - <S> it's an FET input instrumentation amplifier (effectively three op amps wired together in a single package). <S> An LM324 will definitely NOT work in place of that. <A> Well, you can certainly replace an INA111 with an LM324. <S> It may or may not work as well as you need, but that's a different story. <S> What has not been mentioned is that instrumentation amps are made from op amps. <S> Find the data sheet for the INA111 on the web. <S> On the first page you will find the equivalent circuit, which employs 3 op amps and a bunch of resistors. <S> There is nothing preventing you from rolling your own version using 3 of the 4 op amps found in an LM324 or LM224. <S> Of course, the LM324 isn't a very good op amp, especially in terms of things like input bias currents and offset voltages, while the INA111 does use pretty good amplifiers. <S> Plus, building an instrumentation amp with good performance also requires things like well-matched and precise resistors, so your home-brew instrumentation amp will likely not work nearly as well as an INA111. <S> Whether that is good enough is an entirely different matter. <S> With that said, I suspect that you have really messed up your circuit. <S> Trust me, this won't work. <S> If you've done what I think you've done, you have no feedback, and the output of the "new" LM324 is stuck at about plus or minus 7 volts. <S> You need to learn far more about circuit operation before you can try making substitutions. 
<S> Among other things, you have not recognized that the schematic you have used is incomplete - the INA111 symbol does not show the gain connection which must be made, so there is no way of knowing what gain it is operating at. <S> So, first things first. <S> It has been mentioned that you have not provided a schematic of what you have built. <S> You responded by providing pictures of your breadboard. <S> Look - you are avoiding the issue. <S> Stop it. <S> A picture of your breadboard is not a schematic and never will be. <S> Use the schematic editor - edit your post and select the icon with the diode and resistor, or just hit ctrl-m. <S> Now make a schematic of exactly what you are doing. <S> Stop making life hard on folks who want to help.
Particularly, I think you have simply replaced the INA111 with a single section of an LM324. As has been mentioned, an INA111 is an instrumentation amplifier, not an operational amplifier.
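The noise-matching calculation the question describes (Rn = en/in) can be sketched in a few lines. The datasheet figures below are illustrative assumptions for a FET-input instrumentation amp like the INA111, not guaranteed specs; check the actual datasheet:

```python
# Noise matching sketch: the "noise resistance" of an amplifier is its
# voltage noise density divided by its current noise density.
en = 10e-9     # assumed voltage noise, V/sqrt(Hz)
i_n = 0.8e-15  # assumed current noise, A/sqrt(Hz) (FET input, so tiny)
rn = en / i_n
print(f"Rn = {rn:.2e} ohm")
```

The point of the exercise: a FET-input amp's tiny current noise puts Rn in the megohm range, which is why it suits high-impedance electrode sources far better than a bipolar-input LM324.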
Physical meaning of reflection coefficient being complex Below is a Wikipedia section about the reflection coefficient in electrical engineering: source: https://en.wikipedia.org/wiki/Reflection_coefficient It says that the incident to reflected wave ratio is complex. What does being complex indicate in practice here? <Q> It just means that the reflection coefficient can be represented as a complex number/quantity in the form a + jb, or in polar notation using magnitude and angle. <S> It doesn't have any "physical" significance or so. <S> It's just a mathematical tool to represent the nature of a quantity and simplify calculations. <S> For example, when we represent an impedance as a complex quantity, Z = R + jX, we can infer that there is a real part which doesn't vary with input frequency (resistance), and there is an imaginary part jX (reactance, due to inductors and capacitors), which varies with input frequency. <A> There are no such physical things as complex currents, voltages or electromagnetic fields. <S> The observation point can be the end of the line or any other point on the line. <S> Actually in every case, when the wave reflects due to the mismatch, the reflection factor phasor is generally complex. <S> It's real in some rare (see NOTE1) points where the phase difference of the incident and reflected wave is N*180 degrees, where N is a positive or negative integer or zero. <S> NOTE1: <S> those points where the reflection coefficient is real are placed at quarter-wavelength intervals. <S> One of them is at the end of the line if the load is resistive. <A> What does being complex indicate in practice here? <S> It indicates that either the characteristic impedance (Zo) is complex (a la low frequencies in telecom applications) or the load (Zterm) is complex or some combination that makes the answer complex. 
<A> Besides meaning complicated (which this is), complex refers to complex numbers and complex algebra, a number system based on the "complex operator", the square root of -1. <S> In the late 1800s, Oliver Heaviside invented both coaxial cable and the math to describe it - complex vector analysis. <S> This is the heavy lifting of electrical engineering math. <S> The math world uses a lower case i (i for imaginary) as the symbol for the operator. <S> In EE land, i is already used for current, so we use j for the operator. <S> https://en.wikipedia.org/wiki/Complex_number <S> https://en.wikipedia.org/wiki/Oliver_Heaviside
A complex reflection factor simply represents the existence of a phase shift between the incident and reflected sinusoidal waves when they are measured or calculated as complex phasors at the same point, and the reflection factor = phasor of the reflected wave divided by phasor of the incident wave.
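The magnitude/phase interpretation above can be made concrete with the standard formula Γ = (ZL − Z0)/(ZL + Z0). A short sketch with an assumed 50 Ω line and an inductive (complex) load:

```python
import cmath

# Gamma = (ZL - Z0) / (ZL + Z0): complex whenever the reflected wave is
# phase-shifted relative to the incident wave at the observation point.
def reflection_coefficient(z_load, z0=50):
    return (z_load - z0) / (z_load + z0)

g = reflection_coefficient(complex(100, 50))  # assumed inductive mismatch
mag = abs(g)                                  # fraction of wave reflected
phase_deg = cmath.phase(g) * 180 / cmath.pi   # phase shift of reflection
print(round(mag, 3), round(phase_deg, 1))     # 0.447 26.6

print(reflection_coefficient(50))  # matched load: Gamma = 0, nothing reflected
```

A purely resistive load on a real Z0 gives a real Γ (phase 0° or 180°), matching the NOTE1 point in the answer above.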
Connecting TVS before or behind Ethernet transformer? I looked at several designs and recomendations. Why do TVS diodes sometimes connect before and behind transformer? What are the pros and cons of connecting before or behind? <Q> Interesting question. <S> In reality it probably does not matter much which side you put them on, however there are pros and cons to both. <S> Connector Side <S> PRO: <S> As a general rule of thumb you want transient suppressors as close to the connector as you can so the voltage/current spike does not go very far on the PCB and has less chance to cross-couple into other, unprotected traces and devices via capacitive coupling. <S> Being on the connector side, obviously fulfills this requirement. <S> CON 1 <S> : The suppressors are really there to protect your electronics. <S> Beyond the transformer they are really not "directly" protecting much other than the transformer which probably does not care if there is a spike on the line. <S> CON 2: If a spike is sufficient to destroy the TVS diode to a short, you now have a shorted communication cable. <S> CON 3 <S> : There is some question about whether the TVS devices significantly changes the impedance of the line. <S> This could impact the bandwidth of your system. <S> Driver Side PRO <S> : Here the device IS actually protecting your sensitive devices. <S> CON 1: <S> The spike energy is dissipated in your side of the interface. <S> This can result in transients propagated into the ground system of your PCB. <S> CON 2 <S> CON 3 <S> : It is possible that a sufficiently nasty spike can short the insulation in the transformer resulting in a significant reduction in functionality. <S> Summary <S> From the above, it would appear connector side is the better protection method, but driver side is less intrusive on the functionality of the line. 
<A> Just a word of warning from experience: <S> If you have the protection diodes on the connector side (before the transformer) and your system has a floating ground, this can cause CRC errors and Ethernet dropout. <S> I believe the failure mechanism is due to the protection diodes momentarily becoming forward biased as the floating signal ground settles. <S> I would always fit the diodes on the PHY side. <A> Most of the examples I am seeing show it on the PHY side, which makes sense from the standpoint that it is close to the device you are trying to protect, and if you are using an RJ45 with integrated magnetics it is the only place you can put it. <S> I have never used a TVS for an Ethernet design and have never seen it used in any Ethernet reference design. <S> The Ethernet transformer provides some amount of protection against ESD. <S> Some Ethernet PHYs have some level of built-in ESD protection. <S> You could always design the TVS in and not populate if they are not needed.
: Careful routing has to be established to ensure the lines from the connector through the transformer are isolated from other signals up to the TVS point.
Measure current with a Branford Multimeter I have a cheap multimeter called "Branford" and I would like to measure the current used to charge a phone. As a power supply, I use a DC generator, which I rotate by hand. Then I have a DC/DC Step Down Converter to get constant 5.2V. Then I use the power to charge the phone. It works perfectly fine and the phone is charging as it should. However, I would like to see how much current goes through, but I get very small values. I think I'm doing something wrong, because I'm a noob in this field, therefore I would like to ask if what I am doing is correct. The multimeter front panel looks like this: and the connections are like in the picture below. I've been looking around the internet and it looks okay, but considering that the multimeter is not the same, I would expect that there might be some other set-ups. As you can see, I get that 0.7 (A?) which stays constant and that is where the suspicion starts. I also tried to turn the knob to the "20A" option, but there is no change. The generator has the following specs: Output voltage: 5V-24V Max load voltage: 40V Max output current: 1500mA Max load power: 20W So, can anyone help me to figure out if it is the measurements that I'm doing wrong or the generator can't deliver more? <Q> If you use the "20A" socket, the meter must be set to "20m/20A" for a correct reading. <S> Since you say you read 0.7 in both the 20m/20A and 200m positions, I expect the current is really 0.7 Amp. <S> That is a reasonable current for the phone to draw when charging. <S> The charging current will vary, depending on the state of charge of the battery, and can be limited by the charger circuit in the phone. <A> You're using the wrong jack on the meter. <S> The jack labelled "20A" is <S> ONLY used when the range selector is in the 20A position. <S> For all other current ranges, you should have the red plug in the "->|-ΩmAV" jack on the right. 
<A> Note the mA socket is on the right, and the dial has only one position for 20 A DC and another for 20 A AC (~) using the left jack. <S> You will want to test Vdc in shunt (parallel) when charging vs RPM, <S> then Idc in series vs RPM. <S> Count revs/minute and record results! <S> Then report here.
If the switch is set to any other position, the reading (if any) may or may not be meaningful.
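A quick sanity check on the reading is possible from the numbers already in the question (all values assumed from the post itself):

```python
# Is 0.7 A a plausible reading for this setup?
v_out = 5.2    # step-down converter output, V (from the question)
i_read = 0.7   # meter reading, A
i_max = 1.5    # generator max output current, A (1500 mA spec)
p_max = 20.0   # generator max load power, W (spec)

p_load = v_out * i_read
print(round(p_load, 2))  # 3.64 W drawn by the phone

# Both well inside the generator's ratings, so 0.7 A is believable:
print(i_read < i_max and p_load < p_max)  # True
```

So the meter is likely reading correctly: 0.7 A at 5.2 V is a normal phone charging current, and the phone's own charge controller, not the generator, is what limits it.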
Anything else required for safe low side MOSFET switch? I want to switch power on & off to a module using an MCU output pin. For this I am using a BS170 MOSFET as so: simulate this circuit – Schematic created using CircuitLab This seems to work fine, however I'm wondering if there is any reason to add additional parts for protection, stability or something else? Even the 10k resistor seems to me like something I might be able to drop. <Q> BS170 is not well specified for 5.0V drive. <S> You would be better off with a 2N7000. <S> There are much, much better parts if you don't mind SMT, which you should definitely explore if the MCU supply voltage is less than 5V. <S> If the load could be inductive you should have a catch diode across the load. <S> The 10K resistor will turn the MOSFET off if the MCU port pin goes high-Z. <S> This could be vital if the micro finds itself in a low voltage reset condition- <S> if the load is heavy the MOSFET could be damaged by being partially 'on'. <S> If your supply to the load is higher than the MCU supply, consider a gate resistor which could protect the MCU if a short occurs across the MOSFET drain to gate pins, say by an errant test probe or a failed MOSFET. <S> It can go between the 10K and the gate, something like a few K. <A> Depends on the load, its type, and the current it needs. <S> If the load is too big for the mosfet, then that's an issue too. <S> The 10kΩ resistor is not there for protection, per se. <S> The pull-down resistor is there to prevent the mosfet from turning on unexpectedly from a floating pin. <S> When the microcontroller is off, or the GPIO is in high impedance mode, the gate can fluctuate. <S> If the mosfet is controlling something dangerous, then sure, it's for protection. <S> If it's an LED, then it's more so it doesn't look like an amateur project. <A> You need to examine the GROUND currents and GROUND inductances. <S> You can easily create an oscillator if the MCU GND is wrongly wired to the FET GND. 
<S> Daisy chains are not a good approach to GROUNDING for high current systems.
If it's an inductive load, then a flyback diode is needed to protect the mosfet from the collapsing inductive field creating a voltage spike.
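The warning about the MOSFET being "partially on" comes down to dissipation. A sketch of the on-state case, P = I²·Rds(on); the Rds(on) value is an assumed ballpark for a BS170 at roughly 5 V gate drive, not a datasheet guarantee:

```python
# On-state dissipation of a low-side MOSFET switch: P = I^2 * Rds(on).
rds_on = 5.0   # ohms, assumption; check the datasheet curve at your Vgs
i_load = 0.1   # amps

p = i_load ** 2 * rds_on
print(round(p, 3))  # 0.05 W: fine fully on; a half-on FET dissipates far more
```

With the gate only partially enhanced, the effective resistance can be orders of magnitude higher, which is exactly the failure mode the 10k pull-down guards against during reset.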
How to use a TL431 for PMOS based reverse polarity protection? Consider the following illustration: I wonder if it is possible to replace the zener with a TL431. The usual application of the TL431 supplies a specific voltage to the next stage, but the zener here supplies voltage Vin - Vz to turn the PMOS on. <Q> 10V zeners are quite nice devices for this application - quite sharp knee and far more accurate than required, not to mention cheap (only one part vs 3), and easily available. <S> I can't imagine you would need more than a couple of types for any practical range of MOSFETs. <S> Usually Vgs(max) is 20V, 10V or 8V, and usually the 8V types cannot handle your upper range of voltage. <S> You can likely use a TL431 with a couple resistors, but I would suggest a Zener diode in most cases. <S> The TL431 is an active circuit- <S> it has this characteristic below the minimum current for regulation <S> (the below graph is typical; worst case Imin is 2.5x higher: 1 mA): <S> The low current behavior of a 10V zener is far less non-ideal. <S> For example, this nice SMT zener family (MM3ZxxxST1G): Even at a few uA they are behaving reasonably. <S> The other "interesting" characteristic is the "tunnel of death" stability range: <S> The MOSFET gate represents a capacitive load on the TL431 shunt regulator. <S> Especially if you choose to use it at lower voltages than 10V you could run into stability issues at the edges with some MOSFETs at some temperatures. <S> You cannot make a Zener diode oscillate, at least not so as you would notice it in this circuit. <A> Why would you want to replace a cheap fast robust zener with a slower, more expensive TL431? <S> You can. <S> But with more components, there's more to go wrong. <S> Here, you don't need that accuracy, and simplicity says use a zener. 
<A> The circuit is designed for reverse polarity protection. <S> The Zener is present to protect the gate of the PMOS FET. <S> As long as the zener starts working before the Vgs rating is exceeded then it is fine. <S> There is no point in using a more accurate and, in this case, more temperamental device. <S> If you must still do this you could use, say, a 10K and a 30K resistor to program the TL431 to 10 volts.
A 431 can be configured with two resistors to clamp voltage more accurately than a zener, especially at lower currents.
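The two-resistor programming mentioned above follows the standard TL431 relation Vka = Vref·(1 + R_top/R_bottom), with Vref about 2.495 V. Checking the suggested 30k/10k pair:

```python
# TL431 as a programmable "zener": Vka = Vref * (1 + R_top / R_bottom).
VREF = 2.495  # nominal TL431 reference voltage, volts

def tl431_vka(r_top, r_bottom):
    return VREF * (1 + r_top / r_bottom)

print(round(tl431_vka(30e3, 10e3), 2))  # 9.98 V, close enough to 10 V
```

With R_top = R_bottom = 0 omitted entirely (ref tied to cathode), the part clamps at Vref itself, about 2.5 V.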
Voltage divider/reduction with no current draw I am trying to measure the voltage on an LED in an open circuit (awfully shown below) using an arduino, however the voltages are too high for its inputs. I have tried: Using a voltage divider (~10K total resistance), however current flows and the LED lights up (which I do not want) Bumping up the total resistance of the divider to ~10M, the LED glows extremely dimly (which is fine), but now the arduino doesn't read the voltage correctly (due to noise or something?) So now, I have decided to use some Zener diodes to drop the voltage before connecting with the arduino, is this (theoretically) going to work? Or is there some simpler/smarter way around this? Also stumbled on this: Measure voltage with no current , but it seems a bit more complicated than using the diodes. Any input is greatly appreciated. Many thanks Update: Thanks everyone for all the help, ended up using the pull-up as it was the simplest <Q> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> Opto-isolator current monitor. <S> This will decrease the voltage available to your 12 V LED by about 1.4 V or so <S> but this is unlikely to be an issue. <S> Notes: <S> R2 may be omitted if an internal pull-up is used. <S> That makes it a single-component solution. <S> @Trevor's solution is simpler and cheaper. <S> Use this one if isolation between the circuits is an advantage. <A> Perhaps I am missing something here, but from your comments you seem to only need to know when the LED is on, or to put it another way, when the switch is closed. <S> You can do that with a simple diode and a pull-up. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> if you can replace the switch ... <S> simulate this circuit – <S> Schematic created using CircuitLab
The opto-isolator LED has to carry the full current for your existing LED. As jsotola suggests, you can use an opto-isolator but rather than connect in parallel you can connect in series to reduce power consumption and save another series resistor.
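The questioner's 10 M divider failure is worth quantifying: the ADC sees the divider's Thevenin resistance, and even a small input leakage current shifts the measured voltage. A sketch with an assumed leakage value for illustration:

```python
# Error at the ADC node from input leakage through a high-Z divider:
# the divider's Thevenin resistance times the leakage current.
def divider_error_v(r_top, r_bottom, i_leak):
    r_thev = (r_top * r_bottom) / (r_top + r_bottom)
    return i_leak * r_thev

# 5M/5M divider (10M total) with 100 nA of leakage (assumed):
print(round(divider_error_v(5e6, 5e6, 100e-9), 2))  # 0.25 V of error

# The same leakage through a 5k/5k divider is negligible:
print(divider_error_v(5e3, 5e3, 100e-9))  # a quarter of a millivolt
```

This is why ADC datasheets typically recommend source impedances in the low-kΩ range, and why the diode-plus-pull-up answer sidesteps the problem entirely.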
Base resistor in open collector I want Vout to be logic 0 when Vin is logic 1 (+5V), and Vout to be high impedance otherwise. I designed the circuit like an open collector switch. But how can I design the R1 value? Usually in transistor switches, I calculate the base resistance value by calculating Ic(sat), putting Vce = 0V in the collector circuit. Then I divide it by Beta to find the minimum base current needed to switch on the transistor. Here there is no collector resistance or collector current. So I can't figure out what should be the minimum value of base current to turn on the transistor. simulate this circuit – Schematic created using CircuitLab <Q> As stated, you can't do it. <S> You must specify both the maximum expected collector current and the minimum required output voltage. <S> Since open-collector outputs are ordinarily associated with saturated operation, it is usual to assume a beta in the range of 10 to 20, with the exact choice depending on how conservative you want your design to be. <A> Max it out. <S> Or decide what the max ICE current should be for your design, then calculate the resistor for saturation at that current. <S> With an open collector with an unknown ICE, you have to hazard a guess and make a determination of what your design should handle. <A> There is no Vout in your circuit as shown. <S> If you are simply taking the collector to an Arduino digital pin as an input, then you'd have to turn on the internal pullup resistor to provide current for the collector. <S> From the BC547 datasheet: For the Arduino input current on an ATmega328, the DIO pin schematic shows the pullup resistor on the I/O pin. <S> The value of the pullup is shown in Table 32-2: <S> So the minimum value is about 20 kOhm, through a maximum of 50 kOhm. <S> With a 5 V MCU you will draw about 250 uA maximum; if you are operating at 3.3 V then you'd expect a maximum of about 17 uA. 
<S> If you then look at your BC547 datasheet you'll notice that you need 50 uA or less to ensure you could pass the 250 uA collector current. <S> However there is a tendency to overdrive the Base <S> so I'd suggest using around 250 uA Base current. <S> That would give a base resistor of about 18 kOhm in your circuit. <S> This would be equivalent to an effective Hfe of 1. <S> If you use the minimum Hfe of 10 (for overdrive) as suggested by others then you end up with only 25 uA Base current and an R value of about 170 kOhm. <S> There is nothing wrong with this of course, so you decide what overdrive you want to use. <S> My preference is to not use high value resistors in digital circuits if I can avoid it. <A> Adding to the answers of others: Your micro-controller has some internal pull-up inside. <S> So yeah some small current will flow to the ground via the transistor, when Vin is pulled high. <S> Now you can easily design for R1 using that info of collector current.
You have to supply a pullup resistor to define a Vout and a collector current.
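The arithmetic in the answer above can be sketched directly. This is a minimal calculation, assuming the figures quoted in the discussion (5 V supply, 20 kOhm worst-case internal AVR pull-up, roughly 0.7 V base-emitter drop in saturation) rather than measured values:

```python
# Base-resistor arithmetic for the open-collector-into-Arduino case.
# All constants below are assumptions taken from the discussion above.
V_SUPPLY = 5.0
R_PULLUP_MIN = 20e3   # strongest (worst-case) internal pull-up, ohms
V_BE = 0.7            # typical base-emitter drop in saturation, volts

# Worst-case collector current when the transistor pulls the pin low
i_c = V_SUPPLY / R_PULLUP_MIN          # ~250 uA

def base_resistor(forced_beta):
    """Base resistor for a chosen forced beta (overdrive factor)."""
    i_b = i_c / forced_beta
    return (V_SUPPLY - V_BE) / i_b

print(f"Ic = {i_c * 1e6:.0f} uA")
print(f"R1 @ forced beta 1:  {base_resistor(1) / 1e3:.1f} kOhm")
print(f"R1 @ forced beta 10: {base_resistor(10) / 1e3:.0f} kOhm")
```

A forced beta of 1 gives about 17 kOhm (the "around 18 kOhm" figure above), and a forced beta of 10 gives about 172 kOhm, matching the answer's "about 170 kOhm".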
Creating positive and negative voltages from a battery I have a circuit that is usually powered by +9V and -9V (uses two 9 volt batteries), but I am trying to power this from a rechargeable 3.7 V battery. The circuit does not draw much power (can be powered down to about +2V and -2V), but it has op-amps that require both a negative and positive swing (LF412). I was wondering if it would be possible to create a dual supply from the battery, and if so, what the most efficient method would be to do this. I have a couple of ideas that I might try: 1.) Use a boost converter to step the voltage up, generate a pulse (555 timer), send it to a transformer, and use a center-tap to create the separate voltages, which can then be filtered. 2.) Follow the steps for idea 1, but instead of a 555, use a microcontroller to generate a sine wave (DAC lookup table method), which can be amplified, send this to the transformer, as this would be more efficient for transferring the power. I would really appreciate the help!! <Q> There are off the shelf chips that do what you want. <S> Also, you don't need any transformers since there is no need for isolation in this case. <S> For the positive supply, you need a boost converter. <S> This is assuming you connect the negative side of your 3.7 V battery to ground. <S> There are also switcher chips that are intended for making a negative supply from a positive one. <S> If your negative current demand is low enough, a charge pump might be all you need. <S> These things are available off the shelf. <S> Generally you supply the inductor, a few caps, and sometimes an external diode or two. <S> Look around at offerings from Microchip, TI, Linear Tech, and many others. <S> Or use a distributor web site to drill down parametrically across the vendors carried by that distributor. <A> For low currents and analog circuits, I prefer charge pumps to inductor-based boost and boost-buck circuits.
<S> If the current required is relatively low, then a charge pump inverter will turn some of the +9 V into -9 V. Maxim, Linear Tech, TI. <S> For the 3.7 V source, you can use two charge pump parts, one to double the 3.7 V to 7.4 V, and one to invert that to -7.4 V. IIRC, someone makes a single chip that does everything, double plus invert. <S> Sounds like something Maxim would do. <S> Found it: https://pdfserv.maximintegrated.com/en/ds/MAX680-MAX681.pdf <A> Why not just use two rechargeable 9V batteries? <S> That way you can still recharge it when it runs out of power, and you won't have to tinker with its electronics and risk breaking it.
For your low voltages and currents, you can find chips that contain the switching element.
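As a rough sanity check on the doubler-plus-inverter route suggested above (a MAX680-style part doubles the input and produces an inverted rail of similar magnitude), the rails can be estimated from the battery voltage. The 0.9 per-stage voltage efficiency here is an illustrative assumption, not a datasheet figure:

```python
# Rough rail estimate for a charge-pump doubler + inverter fed from
# a single-cell Li-ion battery. STAGE_EFF is an assumed loss factor.
V_BAT = 3.7
STAGE_EFF = 0.9          # assumed voltage efficiency per charge-pump stage

v_pos = 2 * V_BAT * STAGE_EFF    # doubled positive rail, ~6.7 V
v_neg = -2 * V_BAT * STAGE_EFF   # inverted doubled rail, ~-6.7 V

print(f"+rail ~ {v_pos:.2f} V, -rail ~ {v_neg:.2f} V")
```

Even with generous losses, both rails clear the roughly +/-2 V the question says the circuit needs, though you would still want to check the op-amp's own minimum supply spec.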
How would I go about reverse-engineering RF lights? I have some ceiling fans and lights throughout the house that we control with RF remotes. I'd like to be able to automate these lights with my Raspberry Pi, but in order to do that I'd need to figure out the frequency and how the remotes communicate. Advice? <Q> Start with a teardown: either do your own, or view one by an "expert" on YouTube. <S> You need to know three things: what the RF frequency is, what the modulation scheme is, and what the data protocol is. <S> It may be that the makers have used a common RF chip like NRF2201 in which case you could start with an NRF2201 module and need less detailed knowledge of the RF protocol... <A> Check the FCC ID for hints. <S> Check the remote for crystals with a frequency stamp on them. <S> Reverse engineering RF protocols involves a lot of testing. <S> If you don't know the frequency it works on then your job is much harder. <S> Then you have to contend with encryption and rolling codes depending on the protocol. <A> From the chipset, you will get to know i) the frequency it operates on and ii) possibly the physical-layer protocol/modulation scheme it uses. <S> A chipset can be capable of operating over multiple wireless protocols (e.g. <S> AT86RF215 can use any of FSK, PSK, QAM, etc.). <S> If this is the case then you will need to observe the transmitted signal on a spectrum analyzer or oscilloscope and identify the modulation protocol being used (this can be a really tough job). <S> But if the chipset is like the Si4421, which can use only one protocol, then life becomes much simpler. <S> You will also need to know its higher layer data protocol (e.g. frame format indicating how the information is encapsulated in the physical layer frame, addressing scheme to different devices, any encryption it uses etc). <S> See if any documentation about this is available online.
<S> All of the above (knowing the modulation scheme and higher layer protocol) can become simple if it's possible to acquire or reverse engineer the software stack on which the system is running.
Use a software defined radio module to scan radio waves for hints. You need to tear it down first to know what chipset they are using.
How would you search for this type of screen? I have this cooking thermometer with a very small screen. For my DIY projects I'd like to buy some of these screens in bulk (say 10 or 20) and use them with arduino but I can't find a proper name to search them with. It seems to be something between a 7-segments display and an LCD screen, probably designed specifically for this application. How would you go about searching something similar on the marketplace? EDIT: I'm adding that I'm specifically interested in these very small screens. The one in the picture is 10mm X 25mm <Q> That is an LCD display. <S> There are various types but the small ones (2-3 digits) are more difficult to control as they need an AC signal on the pins. <S> The bigger modules have a controller which takes care of that. <S> Digi-key, Mouser, Farnell, RS, Future electronics should have some. <A> Raw LCD glass is a thing <S> While the specific display you're seeing there is custom and therefore not available, you can get generic seven-segment, multi-digit LCD glass through the usual suspects. <S> You'll need a LCD driver chip to drive whatever glass you get though, as for 3-4 digits it'll likely be too multiplexed to "bit bang" <S> the AC drive waveforms <S> (Silicon Labs makes the CP240x family that can be talked to via I2C or SPI). <A> You can try "alphanumeric LCD Module". <S> Those are the easiest to control.
What you're looking at is a slightly customized seven-segment LCD "raw glass" display that requires the user to provide the appropriate AC drive waveforms.
Why two separate batteries in a circuit don't work? If electrons move from the negative side to the positive side in a battery (thus creating electricity when they flow), why does the same circuit with two separate batteries not create electricity? As you can see in the diagram, electrons should be moving from A to B because the wire is connected from A - side (high pressure) to B + side (low pressure). Why is there no electron flow? <Q> Actually some current occurs when you connect the wires. <S> It stops soon, maybe in a few pico- or nanoseconds, when the system has found a new balance. <S> So, you cannot see anything; only some sensitive instruments could notice the change in the electromagnetic field or see the short light pulse. <S> I do not even know if light sensors that sensitive exist. <S> What balance? <S> There really exists an electric field over the battery which pushes the electrons outwards from the minus pole and pulls them towards the plus pole. <S> Only a medium where electrons can easily move is needed. <S> If you connect the wire+led in your system, the electrons flow to the wire from battery A and some are taken off by battery B. <S> The basic fact: THE TOTAL AMOUNT OF CHARGE DOES NOT CHANGE. <S> Battery A is soon positively charged and starts to pull electrons. <S> Respectively, B gets negative and starts to push electrons. <S> The current weakens and stops when the generated opposing force compensates the pushing effect which is generated by the chemical reaction in the batteries. <S> In electric theory we have a concept named "capacitance". <S> It depends on dimensions and materials. <S> It tells how much charge a structure can take for a given voltage without having a fully closed conductive path for the current. <S> ADDED due to a comment: <S> Inserting some load to both batteries by adding some conducting part between the poles of A and another between the poles of B does not make the current continuous in the original led+wire.
<S> The inserted loads do not remove nor add electrons in A nor B; they only make a continuous chemical reaction possible inside the batteries. <S> Just for your information: chemical reactions are electron transfers from one material to another. <S> The new electron orbit configurations bind the atoms together in a new way <S> and we see it as the formation or disbanding of chemical compounds. <S> Some chemical reactions have so high a tendency to happen without external energy input, and change the electron orbit configuration so heavily, that we can see the tendency as electric voltage. <A> Current flows in loops. <S> A negative terminal is not a free-flowing fount of electrons. <S> A positive terminal is not an unlimited sink for any number of electrons. <S> (Air power does that, the "aether" is atmospheric air). <S> There is now a 2 x battery voltage potential between A+ and B-. <S> It is patiently waiting for those two points to be connected, which would complete the loop. <A> Current only flows when there is a potential difference. <S> The two batteries aren't connected so there is no potential difference. <S> You cannot separate voltage and current like that, because the voltage makes the electrons flow (current) when the circuit is closed. <S> And this circuit is not even closed.
The battery creates voltage, or potential between its terminals - not between any terminal and some "aether" which connects everything together.
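The "current stops almost immediately" point above can be made concrete with Q = C * V: with no closed loop, the only return path is the stray capacitance between the two batteries, and that can only absorb a minuscule charge. The 1 pF stray capacitance below is an illustrative assumption:

```python
# Charge that moves when the single wire is connected, limited by
# stray capacitance between the two batteries (1 pF is an assumption).
C_STRAY = 1e-12      # farads, assumed battery-to-battery stray capacitance
V_GAP = 9.0          # volts appearing across the open gap

q = C_STRAY * V_GAP                  # total charge transferred
n_electrons = q / 1.602e-19          # divide by the elementary charge

print(f"charge moved: {q:.1e} C (~{n_electrons:.1e} electrons)")
```

About 9 pC, or a few tens of millions of electrons, delivered in nanoseconds: far below anything an LED could visibly show, which is why the circuit appears completely dead.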
Is it bad to run traces directly over each other on separate layers? A bit new to PCB design, I have to run two traces between two pins, and the best way I can think of is to have one trace go to the bottom layer through a via and then run directly under the top layer trace. Are there any issues that can come about doing this? They're pretty low power signal traces, but can the traces affect each other through induced fields, or are the top and bottom layers generally isolated? edit: The traces are running over each other for about 700mils. They're SPI data lines. <Q> At high speeds "crosstalk" may become an issue. <S> "Crosstalk" happens when one signal's electric field couples the signal over to an adjacent trace that mimics the source signal. <S> This can interfere with the signal being passed along the second trace and create false crossings and other noise that cause the receiver to detect errant data. <S> The best way to eliminate this is to have traces running in opposite directions (perpendicularly) on adjacent layers, or have a ground plane between each layer. <S> These methods minimize the coupling between two signal traces. <S> At lower speeds this generally won't be a concern though. <A> The only answer to the actual question in the title is: Maybe Is it bad? <S> Not necessarily, but there will be both capacitive and inductive coupling between them. <S> How much depends entirely on the shared length, size, and the distance between the traces. <S> Assuming these are for example digital signals from a microcontroller at lowish speeds, it is unlikely to be a problem. <S> Fast signals and analog signals - then you need to tell us the specifics. 
<A> Source: <S> EDN <S> You can calculate the capacitance by finding the overlap area between the traces, the height between them, and the relative permittivity \$\epsilon_r\$, which is around 4.4 for FR-4 PCB material: <S> Source: Reference Designer <S> Usually this results in a capacitance of a few pF; if that is too much capacitance between nets then run traces on different layers OR use a different layer stackup to ensure there is a ground plane running between signal layers. <S> So decide if a few pF of capacitance would be detrimental to your design; this usually only applies to high speed designs. Another way to avoid this is to have a stackup like this (for a four layer design): <S> Signal GND POWER Signal <A> Here is a specific example, with a 4 millivolt peak-peak signal of 1 megohm source impedance, driving an Analog-to-Digital Converter with 10 pF input capacitance. <S> The interferer is the MCU clock, located 1 millimeter away from the signal trace. <S> With interference [the screenshot illustrates this case], the SNR is -22dB (the MCU trash is 12X stronger than the 4 millivolt signal). <S> To compute this, the "Gargoyles" button is checked, also the far-right "I/C" button is checked, and then the "Update" button is clicked. <S> Without interference ("Gargoyles" turned off) <S> the SNR is +39dB (signal <S> nearly 100X stronger than the random-thermal-noise measurement floor). <S> Thus the presence of the E-field interferer caused, in this case, a 60dB change, or 1,000:1 change, in the Signal-to-Noise Ratio. <S> And here are the (editable; you got here by clicking OFF the global-trace mode and then clicking on the "trace wizard") default dimensions of the trace used as the vulnerable signal trace, the victim of E-field trash injection, modeled in this version as parallel-plate capacitive coupling. <S> How does SignalChain Explorer work?
<S> By modeling the signal chain, the tool has access to the NODE IMPEDANCE; when a current (displacement current, arriving from capacitive interference) enters any node, the error voltage is simply Current * Node_Impedance. <S> In this example, the signal chain has only 1 node available to respond to interference: the point of connection between the sensor output and the ADC input. <S> The default E-field interferer is the MCU clock, defaulted to 1 mm distance from the signal trace, with a 100 MHz clock rate and 2.5 volt peak-peak voltage. <S> The sensor has a Zout of 1 megohm. <S> The ADC has 100 ohms Rin and 10 pF, a time constant of 1 nanosecond and <S> an F3dB of 160 MHz; the MCU clock energy blasts onto the ADC, attenuated only by the capacitive division of the two series capacitors: 1) the parallel-plate coupling model used between the two traces (MCU trace and signal-chain trace); 2) the node capacitance, dominated by the 10 pF of the ADC sampling capacitor.
Running traces on two separate layers can be bad because you are introducing parasitic capacitance between the layers.
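The parallel-plate formula quoted in the answers can be applied directly to the question's 700 mil overlap. The trace width and layer spacing below are illustrative assumptions (and fringing fields are ignored, so this is a slight underestimate):

```python
# Parallel-plate estimate of trace-over-trace capacitance:
# C = eps0 * eps_r * A / d. Width and board thickness are assumptions;
# the 700 mil overlap length comes from the question.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R = 4.4             # FR-4, as quoted in the answer

length = 700 * 25.4e-6  # 700 mil overlap, in metres (~17.8 mm)
width = 0.25e-3         # assumed ~10 mil trace width, metres
d = 1.5e-3              # assumed top-to-bottom board thickness, metres

c = EPS0 * EPS_R * (length * width) / d
print(f"C ~ {c * 1e12:.2f} pF")
```

For this geometry the result is only a fraction of a picofarad, which is consistent with the answers' conclusion that low-speed SPI over 700 mils is unlikely to be a problem; the "few pF" regime appears with wider traces, thinner boards, or longer overlaps.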
Why do we need external pulling resistors when microcontrollers have internal pulling resistors? Microcontrollers do have internal pull up-pull down resistors yet most of the circuits have external pulling resistors. I looked on Google for answers and a few sites said that those resistors are not that strong but I thought they were good enough to work. I had the thought that they might need external because the internal resistors need to be triggered by programming. So for some unplanned situation, they attach external resistors as well. But I'm not certain about it. What is the real reason behind using externals when we do have internals? <Q> Needing a more precise resistance than the internal resistor. <S> Internal pull-up/-down resistors have very wide tolerances. <S> Needing a resistance larger or smaller than that <S> provided internally. <S> For example, I2C typically uses stronger pullups, while you might want a very weak pullup for monitoring a switch, to save power. <S> Needing to pull to a voltage other than the microcontroller's supply voltage or ground. <S> Using a pull-up/-down resistor along with the ADC on the microcontroller. <S> Some microcontrollers disable their internal resistors on any pin the ADC is connected to. <S> Needing a pulldown resistor on a microcontroller that only has pullups. <A> Some (or perhaps many) microcontrollers do have internal pull-up resistors, but these are often quite high values. <S> Many applications would require lower value pull-ups. <S> Pull-up resistors may also be required at the inputs to normal logic circuits (gates, counters, etc.) <S> which do not have internal pull-ups <S> (and sometimes we want pull-down resistors, instead...) <A> In addition, you would use an external resistor every time you need an actual resistance value.
<S> MCUs usually don't have actual pull-up resistors but rather MOSFETs sourcing a small current, so their equivalent resistance value can vary wildly depending on the signal you apply to the pin.
There are a few possible reasons, such as Needing the resistor to be present during power-up, as the microcontroller will not yet have started executing.
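The "wide tolerance" point above is easy to quantify: the current through a pull-up is just V/R, so an internal pull-up with a 20-50 kOhm spread gives a 2.5:1 spread in drive current, while an external part is precise. The 4.7 kOhm external value is a typical I2C choice, used here as an assumption:

```python
# Pull-up drive current: wide-tolerance internal range vs. a precise
# external resistor. The 4.7k external value is an assumed typical
# I2C pull-up, not from the question.
V = 5.0

internal = (20e3, 50e3)           # AVR-style internal pull-up range, ohms
external = 4.7e3                  # assumed external I2C pull-up, ohms

i_int = [V / r for r in internal]
i_ext = V / external

print(f"internal pull-up current: {i_int[1] * 1e6:.0f}-{i_int[0] * 1e6:.0f} uA")
print(f"external 4.7k pull-up:    {i_ext * 1e3:.2f} mA")
```

The internal pull-up can only promise somewhere between 100 and 250 uA, while the external resistor delivers a known ~1 mA: enough to drive I2C at speed, and with a tolerance you actually chose.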
Moore vs Mealy, why is the output delayed in the former? I am not an engineer (software developer myself) but would like to understand why the Moore machine's output is delayed. I know that in a Moore machine, the output depends only on the state, while in a Mealy machine, it depends on both the current input and the state. But to me it does not explain why it is said to be delayed (or as one source puts it, "the change in the input will manifest on the output in the next status"). But why? If the Moore input is entered, the state changes and the output is generated. So what is this delay? <Q> Moore outputs are synchronous with the clock. <S> They change only with a state transition at a clock edge. <S> Mealy outputs are asynchronous. <S> They can change immediately with an input change, independent of the clock. <S> So we can say the Moore machine is not as "fast" as the Mealy machine. <A> If the Moore input is entered, the state changes and the output is generated. <S> So what is this delay? <S> The output corresponding to the current input will show up only after a (positive-edge) clock transition is made. <S> Therefore the output has a one-clock-cycle delay compared to the Mealy state machine. <S> To put it more formally, the output of a single-input Moore machine can be written as Z(n) = f(flip-flops' state(n)), where flip-flops' state(n) is a function of x(n-1) and other parameters. <S> Since the output Z(n) for x = n corresponds to flip-flops' state(n), and the current flip-flops' state is n-1, we have to wait for one clock cycle so that the output corresponding to time n appears, which is when the flip-flops' state has changed to n. <A> The additional delay in a Moore SM is simply the remaining fraction of a clock cycle after the input arrives and before the next active clock edge. <S> Whether or not this is significant depends on the context of the design.
<S> One feature of the Mealy machine passing inputs directly to the outputs is that it may produce output signals with durations much less than a clock cycle, called glitches or runt pulses. <S> This may cause mis-operation of logic depending on the state machine. <S> The Moore type (and also the preferred single-process style of state machine in VHDL) produces clean output pulses one or a multiple of a clock period in length. <S> Both styles of state machine are vulnerable to mis-operation given inputs arriving too close to a clock edge, so using either type, you should synchronise inputs to the clock before passing them to the state machine.
In the Moore machine the output is dependent only on current state, and the latter to the input prior to the next clock transition.
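The one-cycle delay can be demonstrated with two deliberately trivial machines that "follow" a single input bit: the Mealy output reads the current input directly, while the Moore output reads only the registered state. This is a sketch for intuition, not a production state-machine pattern:

```python
# Minimal Moore vs Mealy comparison. Both machines register the input
# as their next state; only the output function differs.

def mealy_step(state, x):
    out = 1 if x == 1 else 0        # output depends on the input *now*
    next_state = 1 if x == 1 else 0
    return next_state, out

def moore_step(state, x):
    out = 1 if state == 1 else 0    # output depends on the state only
    next_state = 1 if x == 1 else 0
    return next_state, out

inputs = [0, 1, 0, 0, 1, 1, 0]
sm = mm = 0                         # both machines start in state 0
mealy_out, moore_out = [], []
for x in inputs:
    sm, o = mealy_step(sm, x); mealy_out.append(o)
    mm, o = moore_step(mm, x); moore_out.append(o)

print("in:   ", inputs)
print("mealy:", mealy_out)   # tracks the input in the same cycle
print("moore:", moore_out)   # same pattern, shifted one clock later
```

The Moore output is exactly the input sequence shifted right by one clock: the input first has to be captured into the state register before it can influence the output.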
Why I2C designed to work with pull-up resistors and not pull-down ones? I understand that in I2C, SCL and SDA lines use pull-up resistors and the pin drivers are open collector NPN devices which can drive pins to ground. This gives I2C an advantage that the same bus now can be shared with multiple slaves, and even if two or more slaves accidentally try to drive the bus at the same time it won't cause any damage to the system. But this can also be done using PNP open drain drivers and pull-down resistors on SDA and SCL lines. Things like clock-stretching and multi-master arbitration can be achieved with this too. Does the current implementation of I2C protocol gives any benefits over the above suggested alternative implementation? <Q> That's a lot less of a restriction than forcing power to be the common connection to all IIC devices, as would be required if the lines were driven high and floated low via pulldowns. <S> Note that IIC devices don't all need to be powered from the same net or the same voltage. <S> This would not be true if both bus lines had to be driven to the single common power voltage. <A> In the good old days, TTL drivers were much better at pulling a signal down than pulling it up. <S> Therefore, protocols like I2C, but also interrupt lines, reset, and others, were all implemented using a pull-up with distributed pull-down. <A> It's easier to use ground as a common reference among subsystems that might have varying supply voltages. <S> If you use PNP transistors to pull up to a supply voltage, all subsystems would have to be connected to the same supply. <A> Good answers abound here, but there is also another reason. <S> If the quiescent state of the bus is at ground, there is no way to tell if the bus is connected or just hanging in space. <S> It is normal for the pull-up to be located at the master device. <S> Slaves usually do not have a pull-up. 
<S> This is because the pull-down current that would be required to assert a low level would increase with the number of devices connected to the bus. <S> A slave, when plugged into the bus, can then detect that the line is pulled high (assuming it is not being used) and know that the bus is actually there and quiet. <S> That would not be the case with a ground biased bus. <A> If I understand the question correctly one aspect is: <S> Why do you use pull-up resistors and NPN transistors instead of pull-down resistors and PNP transistors? <S> First of all you should note that you don't use bipolar transistors (NPN, PNP) but MOSFETs (which exist in four different variants). <S> Devices using the "pull-up and NPN" variant use an n-channel enhancement MOSFET. <S> Because the source of this MOSFET is connected to ground, the gate-source voltage (controlling the current flow) is equal to the voltage between gate and ground. <S> So the MOSFET can be controlled using a voltage between 0 and Vdd. <S> There would be three possibilities to implement the "pull-down and PNP" variant: Using a p-channel enhancement MOSFET <S> On an NMOS or CMOS IC, p-channel MOSFETs with comparable characteristics (resistance etc.) require more space than n-channel MOSFETs. <S> In microelectronics space is money, so p-channel MOSFETs are avoided if possible. <S> Using an n-channel enhancement MOSFET <S> This would require the output of the logic circuit driving the transistor to have a "LOW" voltage of the supply voltage (e.g. +5V) and a "HIGH" voltage above the supply voltage (e.g. +10V when the rest of the circuit is supplied with +5V). <S> The reason: the source-ground voltage will be Vdd when the MOSFET is conducting. <S> The gate-source voltage must be positive, so the voltage between gate and ground must be even higher. <S> You would need two voltage supplies - and a circuit shifting the output of the logic circuit from 0...+5V to +5V...+10V.
<S> Using an n-channel depletion MOSFET <S> Unfortunately I can't tell you much about this solution. <S> However I found some pages via Google saying that depletion MOSFETs are more difficult to produce than enhancement MOSFETs and they are avoided for this reason. <S> I know from power electronics (not microelectronics) that the "two power-supply" variant described above is even preferred over depletion MOSFETs. <S> (But I cannot tell you why.) <S> EDIT <S> Using n-channel depletion MOSFETs you would probably need a negative voltage (e.g. -5V) <S> so you would also need two supply voltages... <A> There is also one more added benefit of having common ground and pull-up data lines (over having common VCC and pull-down): <S> Even if the original intention was to connect devices on the same PCB over a span of a few inches only, it was successful enough <S> that it is now not uncommon to have the lines a couple of feet long, connecting "devices" which could be computers or something of equal complexity, with some devices having their own power sources (of different quality, say you connect something wall-plug powered with something battery powered). <S> It is better if the connection works at least reasonably well even in non-ideal and out-of-spec conditions. <S> And a lot of such connected devices may also be connected by other means than only the I2C communication. <S> Usually when connecting devices together you connect them with a common ground - sometimes as part of other functions, sometimes just because a device is mounted on a metal case and the devices are ground-connected through the case too (or through a common cooler or something like that), or there may be a shielded cable with a grounded shield inside - which also connects the grounds.
<S> If you also directly connect the power lines (VCC) of such devices, you will get problems when those lines naturally sit at different voltages (sure, it may say 5V here and there, but depending on the construction and part tolerances of the power sources it could also be 4.9V or 5.2V, or even changing, if it is battery powered and sometimes running some motors, making the power drop and rise over time). <S> In such a case there is effectively a short circuit of a fraction of a volt between those power sources, and depending on the sources (and the resistance of the wiring) <S> relatively high currents could flow, resulting not only in wasted energy and heating, but maybe even in damaging (or shortening the life of) some of those sources. <S> Which is not good. <S> Having a common ground and pull-ups avoids <S> such problems - ground is ground, and pull-up resistors allow only a really small cross current even if VCC differs a lot between the devices.
Electrically it makes sense because ground is the one common connection to all devices on an IIC bus.
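The open-drain arrangement the answers describe is a wired-AND: each device either pulls the line low or floats it, and the pull-up only wins when every device floats. This tiny model (an illustration, not anything I2C-specific) shows why simultaneous drivers cause no damage and how arbitration falls out naturally:

```python
# Wired-AND model of an open-drain bus with a pull-up resistor.

def bus_level(drivers):
    """drivers: list of bools, True = that device is pulling the line low.
    Returns the resulting logic level on the shared line."""
    return 0 if any(drivers) else 1   # pull-up wins only if all float

print(bus_level([False, False, False]))  # idle bus, pulled high
print(bus_level([True, False, False]))   # one device asserts low
print(bus_level([True, True, False]))    # contention: still just low
```

A device that drives low always "wins", so two devices asserting the bus at once merely produce a low level rather than a short circuit; and a master that releases the line but reads it back low knows it has lost arbitration.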
Can rotary encoders be multiplexed? I am currently working on a MIDI based project and would like to use rotary encoders. However, due to the nature of my project I need to multiplex a total of 64 encoders with 8:1 multiplexers. Is this possible or will an encoder just not work in this way due to pulses possibly being missed while the code is scanning elsewhere? <Q> If you're trying to use 64 encoders as "frob knobs", the more typical way of doing this is to use each encoder for multiple purposes, and have some way of controlling which purpose the knob is serving at any given moment. <S> Otherwise, I'd probably urge you to throw a microcontroller at each encoder, or at least have more than one microcontroller, each servicing as many individual encoders as it can without multiplexing -- you're already throwing more money at a problem than would typically be used, just keep going down the same path to make the device you want. <S> Alternatively, you might consider absolute encoders, so you don't need to worry about missed pulses. <S> That's the best I can offer without knowing more about what you're trying to accomplish (hence, "XY problem", as my comment says). <A> Yes, but... Yes, they can, it is simply a question about the sample rate. <S> You have to perform some calculations. <S> You have to determine the maximum speed for which you want your encoders to function. <S> Then you have to figure out how fast the signals will change. <S> You now need to scan all encoders fast enough not to miss a single of these state changes. <S> I suggest that you draw this out on a piece of paper to figure out if it makes sense for your particular choice of MCU. <S> ...it is unlikely to be the best option <S> Today you can buy a microcontroller for the same price as a multiplexer (well, close enough). <S> Just set a microcontroller for each group of 4 or 8 rotary encoders, and have the controllers report back over a shared I2C or SPI. 
<S> They will offload your main MCU and present a nice absolute or relative integer. <A> It is a typical 'Nyquist' problem: As long as your sample frequency is at least twice the signal rate you are safe. <S> So how fast can you rotate a knob, and how fast can you scan them? <S> Beware that some encoders can have 'noise' during operation just like normal switches can 'bounce' and need de-bouncing. <A> I prefer shift registers to multiplexers, but would think it would be practical to feed 64 encoders into sixteen 8-bit shift register chips (e.g. 74HC597) and have a small ARM poll those and repeatedly output the resulting counts (perhaps via MIDI system-exclusive events or other means). <S> If the shift rate is 2 Mbit/second, that would yield a polling cycle of about 15,000Hz which should be adequate. <S> At a 16 MHz CPU clock rate, that would allow about 64 cycles per byte, which would keep the CPU quite busy but should be workable with clever code. <S> A key trick to getting good performance would be to use "sideways computations". <S> If variable A0 holds the first input from each of 32 encoders and B0 holds the second input from each encoder (in the same sequence), and one performs the computations: delta = B0 xor A1 xor B1; A1 = A1 xor (delta and A0); B1 = B1 xor (delta and not A0) // the same sampled A0 value must be used in both places <S> then B1:A1:A0 will hold 32 three-bit Gray-code counts. <S> If one then performs the computations: delta = B1 xor A2 xor B2; A2 = A2 xor (delta and A1); B2 = B2 xor (delta and not A1) <S> The approach may be extended to arbitrary depth, but a key observation is that one need not process later stages as often as the earlier ones. <S> One could thus maintain arbitrarily long counters without having to increase the amount of computation work per scan cycle.
<A> You can use these boards: https://hackaday.io/project/27611-i2c-encoder <S> You can make 4 chains of 16 encoders and use an I2C multiplexer to read all the encoders with only one I2C peripheral. <A> It depends on how you plan on using the encoders. <S> If you need to keep track of the exact position of each encoder I would strongly advise against multiplexing them. <S> Using small cheap dedicated micros for each would be a far better solution. <S> However, you should understand that keeping track of a quadrature encoder is not a trivial task even with a dedicated micro watching it full time. <S> The tracking algorithm in fact needs to be quite complex. <S> The issue being that encoders have undefined states when the edge of the encoder disk is exactly over the sensor. <S> This can and will result in a meta-stability condition with multiple edges being generated which need to be handled appropriately. <S> Note this should not be confused with bouncing. <S> The signal edges you are receiving are telling you the encoder is hovering around the edge, and this is pertinent position information. <S> If, however, these are simply soft volume controls with no absolute position dependency <S> then there is another way to use them. <S> You can scan numerous encoders to read their "velocity" or "rate of change" rather than their absolute position. <S> These velocity numbers can then be used to increment or decrement appropriate stored values which can be applied for whatever purpose is intended. <S> The scanning then looks for cycles in the encoder rather than edges. <S> The scan rate needs to be fast enough that the cycle time is significantly less than the maximum cycle speed of the encoder and produces as small a latency in the velocity measurement as possible. <S> How many you can track at any given time is therefore limited.
<S> This should be much more reliable than software-based tracking over that many encoders. <A> In my case the issue is caused by the nature of the system, which I cannot divulge, unfortunately. <S> (I had no other choice.) It was found that multiplexing the encoders was nowhere near fast enough to pick up the transitions reliably, even on a simple test rig, before being integrated with the rest of the code. <S> AVOID DOING THIS IF YOU CAN!
This depends on the number of pulses per revolution that your encoder has.
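The table-driven tracking approach described above can be sketched on the host in a few lines. This is a minimal illustration (names and sample streams are invented for the example, not from any of the linked boards): valid single-step Gray-code transitions count as ±1, while repeated or illegal transitions count as 0, which is exactly how the edge jitter from a disk hovering over the sensor cancels out instead of accumulating.

```python
# Sketch: table-driven quadrature decoding, simulated on the host.
# The 4-bit index is (previous AB state << 2) | current AB state.
STEP = {
    0b0001: +1, 0b0111: +1, 0b1110: +1, 0b1000: +1,  # CW:  00->01->11->10->00
    0b0010: -1, 0b1011: -1, 0b1101: -1, 0b0100: -1,  # CCW: 00->10->11->01->00
}

def track(samples):
    """Accumulate position from a stream of 2-bit (A,B) samples."""
    pos = 0
    prev = samples[0]
    for curr in samples[1:]:
        # Unknown transitions (same state, or a double step) add 0.
        pos += STEP.get((prev << 2) | curr, 0)
        prev = curr
    return pos

# Edge jitter while hovering over a disk edge: net movement cancels.
print(track([0b00, 0b01, 0b00, 0b01, 0b00]))   # 0

# One full clockwise cycle gives 4 counts.
print(track([0b00, 0b01, 0b11, 0b10, 0b00]))   # 4
```

On a real MCU the same lookup would run in a pin-change interrupt or a fast polling loop; the point is that no edge is ever "debounced away", it is simply counted correctly.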
Wiring a momentary button to USB Please forgive my lack of knowledge as all of this is VERY new to me... old dogs and new tricks and all ;) I want to wire a USB cable to a simple momentary button to use as a push-to-talk button. I've done some flight sim stuff; built button boxes with zero-latency boards and such, but I'm not sure how to find this answer. Heck, I'm not really sure how to search the question. The button is 2 prong of course. I don't want to utilize a board (learning Arduino now, but there's no room for a board in this project) if it can be avoided, but would rather just have the button connected to the cable and plug it in to have Windows recognize it as a button. Can I simply wire this button to a USB cable? If so, what fills the remaining 2 positions on the USB connector (data pins I believe) if anything? I don't want to blow up my newly built PC ;) Thanks in advance for entertaining my ignorance. <Q> Cheapest and simplest solution I know of is to find an old USB PC keyboard. <S> Extract the small circuit board from it. <S> Run xev on your Linux machine to find which two contacts you have to short together on the board to get some character as an output. <S> You can also trace out the patterns on the keyboard membrane to do this. <S> Then buy any momentary button and connect it to the two contacts you found. <S> This way I have PageUp/PageDown footswitches for my notes, when I play piano. <A> Unfortunately, you cannot wire a button directly to a USB cable. <S> This will not work. <S> Using something like an ATmega32u4 that has a USB transceiver built-in would work. <S> Give this SparkFun tutorial a read. <S> Note that you aren't limited to using the SparkFun board. <S> I chose that tutorial because it is Arduino-compatible. <S> You could use a different MCU that has a USB transceiver.
<S> I would also research the difference between a virtual COM port over USB (which provides a serial link between your MCU and the PC), and USB configured as a HID (Human Interface Device). <S> For your application, I imagine that you would like to emulate a keyboard (HID). <A> No, USB does not support simply connecting a switch across pins. <S> USB uses a complex communication protocol that requires a chip. <S> That said, you could accomplish a simple contact detection using an FTDIchip.com FT232 chip in GPIO bitbang mode; these are widely available and have decent software support. <S> But there's no way to just detect a simple switch closure without any additional circuitry. <S> I've read the usb.org specs. <S> It's not at all like the old parallel printer ports. <A> I want to wire a USB cable to a simple momentary button to use as a push-to-talk button. <S> USB has a standard that includes PTT, it's called "HID Telephony". <S> For that you'll probably have to program a USB microcontroller, or pull apart an existing telephony device and use its circuit.
You'll need an MCU to monitor your button, and communicate over USB.
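To make the HID keyboard approach concrete, here is what the MCU firmware would ultimately send: an 8-byte boot-keyboard report (modifier byte, reserved byte, up to six keycodes). This is shown on the host purely to illustrate the framing; a real device sends these bytes over a USB interrupt endpoint, and the choice of F13 as a push-to-talk key is just a suggestion.

```python
# Build an 8-byte USB HID boot-keyboard report on the host (illustration only).
KEY_F13 = 0x68  # F13..F24 keys make good dedicated push-to-talk keys

def keyboard_report(*keycodes, modifiers=0):
    """Byte 0: modifier bits, byte 1: reserved, bytes 2-7: pressed keycodes."""
    assert len(keycodes) <= 6  # boot protocol reports at most 6 keys
    keys = list(keycodes) + [0] * (6 - len(keycodes))
    return bytes([modifiers, 0] + keys)

print(keyboard_report(KEY_F13).hex())  # report while the button is held
print(keyboard_report().hex())         # all-zero report on release
```

The firmware would send the first report when the button closes and the all-zero report when it opens; Windows then sees an ordinary keyboard.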
Is it true that when extending a high-watt appliance with an extension cord I should use a thicker (higher AMP) cable than the cable of the appliance? I am a European, so I am not sure if that matters. All I know is that I need a cable with the right amp or wattage throughput capability. Our cables are named as 3x1.5, 3x2.5, 3x3.5 etc, I guess that is 3 cords that are 1.5 cm thick, 2.5 cm thick, etc.. So I have heard the following and wanted to debunk it if it's a myth. If, let's say, an appliance of mine uses a 3x1.5 cable which has max 4 KW throughput, do I need a thicker extension cable than the original cable that this appliance uses? Or am I good with the same one as long as the capacity of the cable is higher than the appliance itself. It seems very pointless to me, but it makes me anxious when using random extension cords on higher wattage units like oil radiators and washing machines, as I never find them thicker than the appliance cable itself. <Q> Basically yes — as the extending cable should be the same or higher in capacity to limit the losses which get turned into heat. <S> It does depend on the distance — <S> so it may be safer, more sensible and convenient to fit a new supply point where the device is to be used. <S> As for the sizes, the 3 * 1.5 is three cores or wires of 1.5mm² in cross-sectional area — most in industry just tend to say 1.5 or 2.5 and don’t mention the units as everybody knows... <S> they should, of course, mention the units but ... <S> As for not finding extension cables that thick - they are obviously not common and are usually made specially as necessary - I have made a few in the past. <A> There are several aspects to consider: <S> The extension cable should not overheat from the current flowing through it <S> The voltage drop over the cable should be low <S> The short circuit current flowing through the cable should be so large that the circuit breaker will act fast.
<S> A very long and thin extension cable may require an extra small circuit breaker or fuse for full protection against short circuits at the far end of the cable. <A> First of all, a 3 x 1.5 cable should be made up of 3 cores, each having 1.5mm\$^2\$ cross-sectional area (and not 1.5 cm as you thought). <S> The extension cable should have equal or more current-carrying capacity with respect to the appliance's wire. <A> Assuming a home scenario, that is, extensions not longer than a few meters. <S> The rule is not "thicker" alone. <S> So you should not use <S> a 0.5mm² Christmas-lights extension for a 4kW heater that comes with a 2.5mm² cord. <S> But using exactly the same cable is OK. <S> The cable "thickness" refers to the thickness of the copper wires inside. <S> External thickness is irrelevant, it can be 0.5mm² of copper in 2cm of insulation or 2.5mm² of copper in 1cm of insulation. <S> The thickness of insulation determines how resistant the cable is to mechanical wear (that's why garden extensions appear to be ridiculously thick); to determine the current-carrying capacity you need to read the small print on the side of the cable. <S> The cable alone doesn't determine everything, you also need to read the ratings of the plug and socket too.
Yes, an extension cable with equal or more thickness than that of the appliance's cable should be good to use. It's "same or thicker".
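The reasoning about heat and voltage drop can be checked with a quick calculation. This sketch uses the resistivity of copper and illustrative cable lengths and cross-sections (not values from any wiring standard) to show how much power a thin cord would dissipate at a 4 kW load:

```python
# Rough check of why conductor cross-section matters: resistance, voltage
# drop and heat dissipated in an extension cord. Values are illustrative
# assumptions, not ratings from any standard.
RHO_CU = 0.0172  # ohm*mm^2/m, resistivity of copper at ~20 C

def cord_loss(length_m, area_mm2, current_a):
    # Current flows out and back, so the conductor length is doubled.
    r = RHO_CU * (2 * length_m) / area_mm2
    v_drop = r * current_a
    p_heat = r * current_a ** 2
    return r, v_drop, p_heat

i = 4000 / 230  # a 4 kW load at 230 V draws about 17.4 A

for area in (0.5, 1.5, 2.5):
    r, v, p = cord_loss(10, area, i)
    print(f"{area} mm^2: R={r:.3f} ohm, drop={v:.1f} V, heat={p:.0f} W")
```

For a 10 m cord the 0.5 mm² case dissipates on the order of 200 W along the cable, which is why the "Christmas-lights extension on a heater" combination is dangerous, while 1.5 mm² and 2.5 mm² stay far cooler.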
How could I amplify the output from this UV photodiode sensor I have been able to use the following photodiode sensor board to detect changes in the output of the sensor when a UV plasma is active. However, changes sensed when the UV plasma is active are very small (0.01 - 0.05 mV). This is when the following board is directly connected to the 5V supply of the Arduino. How would I go about amplifying this output signal? I've carried out some research on op-amps, and thought I might be able to use one in a transimpedance configuration. Would this work? Photodiode board: https://www.adafruit.com/product/1918 Datasheet: https://cdn-shop.adafruit.com/datasheets/1918guva.pdf Thank you for all your replies. This is the schematic of the photodiode break-out board. Other photodiodes I have tested prior to this operated in large wavelength ranges (350nm to 1100nm) and as expected the output contained a lot of noise, probably due to ambient lighting. However, with a wavelength range of 240nm to 380nm this break-out board does not respond to changes in ambient lighting but only to the presence of weak UV radiation. It is consistently at 0V till the plasma is active. The only problem is that this signal is very small (mV magnitude). <Q> Do you know how much UV light there is? <S> Maybe the photocurrent is just very small (nA instead of uA.) <S> You could use the photodiode spec sheet to estimate how much current you expect. <S> You could try making R1 bigger (say 100 Meg ohm) and make C2 smaller (say 1 nF). <A> That break-out board has an op-amp built in. <S> Resistor R1 sets the gain. <S> You should be able to get more gain by swapping it. <S> Doubling it to 2M would double it, 5M would quintuple it, for example. <S> A side effect of increasing R1 is that it will move the cutoff frequency of the low-pass filter formed between R1 and C2. <S> You may need to reduce C2 by the same factor if you find your circuit responds too slowly.
<S> In addition, the op-amp has a gain-bandwidth product which will have a similar effect when you increase the gain. <S> You may also find that you have a lot of noise and the problem may really be that your plasma is a very weak source of UV. <S> These may be greater contributors to your lack of a strong signal and the better solution for those may be finding a different detector. <S> That said, boosting the gain is a very easy and cheap thing to try <S> so it's probably worth a go. <A> What you are looking for is generally known as "signal conditioning". <S> This is a circuit that applies gain and/or offset to a signal in order to make it more suitable for subsequent processing. <A> Here are the instructions from the Adafruit link: <S> To use, power the sensor and op-amp by connecting V+ to 2.7-5.5VDC and GND to power ground. <S> Then read the analog signal from the OUT pin. <S> The output voltage is: Vo = 4.3 * Diode-Current-in-uA. <S> So if the photocurrent is 1uA (9 mW/cm^2), the output voltage is 4.3V. <S> You can also convert the voltage to UV Index by dividing the output voltage by 0.1V. <S> So if the output voltage is 0.5V, the UV Index is about 5. <S> When the device is connected properly and working correctly it does not require any additional amplification. <S> If you double-check all of your connections and they are right, you may have a defective board. <A> I would remove R3 as it is killing the negative feedback loop gain. <S> Beyond that, I would probably add a couple of 7400-series inverters after that to clean up the square wave if it needs it. <A> Alternatively you can make your own board with a better op-amp for this purpose, as you mention a transimpedance amplifier.
You can design your own (yes, using one or more op-amps), or you can purchase a commercial module designed specifically for this purpose. I would try to add a low-pass filter on the 5V supply - inductor, capacitor and low-ESR cap. All you can do about this is remove C2 entirely and get as much bandwidth out of the op-amp as you can (or replace the op-amp). Also consider that your plasma may be emitting UV at a wavelength that is off the peak of the responsivity curve for this detector.
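The gain/bandwidth trade-off from swapping the feedback resistor is easy to put numbers on. This sketch assumes an ideal transimpedance stage (Vout = I_photo x Rf) with a single-pole Rf/Cf low-pass; the resistor, capacitor and photocurrent values are illustrative assumptions, not the actual parts on the Adafruit breakout:

```python
import math

# Back-of-the-envelope transimpedance numbers for scaling the gain.
def tia_output(i_photo_a, rf_ohm):
    """Ideal transimpedance output: Vout = I_photo * Rf."""
    return i_photo_a * rf_ohm

def tia_cutoff(rf_ohm, cf_f):
    """-3 dB frequency of the Rf/Cf feedback low-pass."""
    return 1 / (2 * math.pi * rf_ohm * cf_f)

i_photo = 10e-9  # 10 nA: a weak UV source

print(tia_output(i_photo, 1e6))    # 0.01 V  with Rf = 1 M
print(tia_output(i_photo, 100e6))  # 1.0 V   with Rf = 100 M

# Raising Rf lowers the bandwidth if Cf stays the same:
print(tia_cutoff(1e6, 1e-9))    # ~159 Hz
print(tia_cutoff(100e6, 1e-9))  # ~1.6 Hz
```

This shows why the "100 Meg ohm R1, smaller C2" suggestion works: a 100x gain increase needs a 100x smaller feedback capacitor to keep the same response speed.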
Determining the correct polarity of a supposedly wrongly polarized capacitor I'm fixing an audio interface (Scarlett) that was behaving poorly and found a likely culprit - a bulging capacitor. However, it appears to be soldered on the board backwards (based on the silkscreen on the board!) Given this is a production device it's possible it was screened wrong and assembled right... this thing was working for many years. Given that it should have actually exploded, is there a way I can test the board with a multimeter for the actual intended polarity? <Q> That's funny, they actually added two plusses to the screen and they still put it in backwards. <S> You could power it up and see which pin is more positive, but powering it up without the cap may also affect the results. <S> You would be better to lightly solder in the cap the way it was, since it was working, leaving the legs long in the direction you think is right, then power up and check the voltage is positive on the positive. <S> Then power off and attach the cap properly, reversed if need be. <A> It can be assumed that the capacitor has a label that does not match its polarity. <S> This was known at the time of production. <S> This explains its working over the years. <S> Focusrite Scarlett 2i4 inside. <A> Check for continuity ('zero' resistance) between TP1 and each of the capacitor holes. <S> The hole which is connected to the test point is positive. <A> Given: it worked for many years, <S> it is the input storage capacitor for a 3.3V regulator, <S> it has a telltale QC ink mark to check polarity, <S> I conclude the silkscreen is wrong, twice (lol). <S> The replacement part must be the same or higher voltage. <S> Solid tantalum e-caps typically withstand up to 10% of rated voltage in reverse (and some up to 25%) <S> (based on aerospace experience since 1975). <S> But aluminum oxide caps start to break down at -1V and will fail with -1.5Vdc.
<S> Low ESR may be very desirable, as well as the same or higher voltage and the same value within the same tolerance, with the same lead pitch and same diameter. <A> I've seen tants mounted backwards that have worked for years and others that failed in seconds. <S> Aluminums should fail fairly quickly however. <S> Reverse voltage over about 1.5 volts will quickly strip the dielectric off the foil. <S> This kind of suggests the screen is wrong. <S> EDIT: Aluminums can withstand reverse voltage up to 1.5 volts per CD application notes. <A> Partial reverse engineering: Trace out the circuitry around that cap - find ICs connected to the nodes of that cap, look up the datasheet and tell from the connections <S> which node must be positive and which must be negative. <S> Probe for continuity / low resistance (< 1 Ohm) between the cap nodes and nearby IC nodes, device ground, the DC power connector or I/O ports, or test points clearly labeled as voltage rails. <S> (Wait a while and swap the leads to ensure you don't read false-low values due to other caps charging.) <S> In some cases it might not be 100% clear for (inexperienced) people, you may then ask for further tips providing the partial schematic. <S> (This might also give wrong results depending on the function of the cap and the measuring method used, and might need to be accompanied by reverse engineering.)
Measuring during operation: Put the cap back in and probe the nodes with a voltmeter or (better) a scope.
Can I put a resistor in parallel with an LED to limit current consumption? I want to power one 3V LED with a 3V source. Let's say I'm using a 10 Ohm limiting resistor, and the LED eats 20mA of current. Using a CR1225 with 50mAh capacity that will last for about 2.5h. What if I connected another 10 Ohm in parallel with the diode + limiting resistor? The voltage across would still be 3V - enough for the LED - but the current going through the LED would be halved. What about the total power consumed by the circuit? I don't mind if the LED gets dimmer, I'm ok running 10mA or less through it. All I want is to increase the amount of time it emits light. Would that work? <Q> If you add the resistor across the battery, so it's in parallel with your existing circuit, it will draw an additional 300 mA <S> (I = V / R), assuming the battery is able to supply it. <S> This is not what you want. <S> You could reduce the current draw by adding the additional resistor in series with your current circuit. <A> That MIGHT drop the battery voltage a little bit which would reduce the LED current, but that really is a secondary effect. <S> You effectively have two independent circuits attached to the battery. <S> As such, the current taken from the battery is increased by the 300mA that will go through the 10R shunt, for a total of 320mA. Not for long from a 50mAh battery though... less than 10 minutes. <S> If you want it to last longer, increase the 10R until you cannot accept the dimmer brightness any more. <A> No, that won't work. <S> The battery will be empty sooner. <S> Maybe you're (wrongly) taking the 50 mAh from the battery into account. <S> A battery does not provide current, it provides a voltage. <S> The current only flows when a load is connected. <S> The load (LED + series resistor) then determines the current. <S> The CR1225's 50mAh indicates how much current it can deliver for how much time; it is not the maximum current that you should be loading it with.
<S> A fresh high-quality CR1225 might deliver 100 mA if your load has a low enough resistance. <S> That will deplete that poor CR1225 within an hour though. <S> Placing a resistor in parallel is generally a bad idea. <S> It is like making your car go slower by braking while not releasing the gas/accelerator. <S> Use an LED with high efficiency. <S> Some LEDs are quite bright at a low current of only 1 mA. <S> That will make the battery last for a few days.
If you add another resistor in parallel with the LED and its limiting resistor it will not change the current going through the LED, just draw more current from the battery. To make the LED last long: Use a red LED, as these require the lowest voltage, meaning the LED will still light up as the battery depletes. Running a 3V LED from a 3V supply is generally a bad idea.
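The arithmetic behind these answers is worth writing out: a parallel resistor only adds load, while a larger series resistor is what actually reduces the LED current. This sketch assumes a 2.8 V LED forward drop (an illustrative figure; the question's "3V LED" leaves little headroom) and the 50 mAh CR1225 from the question:

```python
# Numbers behind the answers: parallel resistor vs. series resistor.
def series_current(v_bat, v_led, r_ohm):
    """Current through LED + series resistor (0 if Vbat <= Vf)."""
    return max(v_bat - v_led, 0) / r_ohm

def life_hours(capacity_mah, current_ma):
    return capacity_mah / current_ma

# 3 V battery, ~2.8 V LED forward drop (assumed), 10 ohm series resistor:
i_led = series_current(3.0, 2.8, 10)                 # 0.02 A = 20 mA
print(life_hours(50, i_led * 1000))                  # 2.5 h

# A 10 ohm resistor straight across the 3 V battery adds 300 mA:
i_parallel = 3.0 / 10
print(life_hours(50, (i_led + i_parallel) * 1000))   # ~0.16 h: minutes, not hours
```

So the parallel resistor cuts the runtime from hours to under ten minutes without dimming the LED at all; only increasing the series resistance (or a more efficient LED) extends the life.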
Extract sound file from electronic answering machine I have an AT&T Model 1738 digital answering machine and I want to record the sound from one of the messages. The speaker in the answering machine is low quality, so I do not want to just play the message and use a microphone because that would result in a low-quality recording. I would rather collect the audio information from some point before it gets amplified and sent to the speaker. What would be the necessary equipment and general procedure to do this? <Q> General procedure starts with a schematic. <S> If you can't find one (likely) then you will have to 'reverse-engineer' it by tracing the wiring inside the device. <S> Necessary equipment includes a screwdriver, continuity tester or multimeter, pencil and paper, and a good pair of eyes. <S> Some basic electronics knowledge will also help. <S> Or just take the signal directly from the speaker terminals - on the assumption that the electronic signal is relatively high quality and most of the degradation occurs inside the speaker itself. <S> Even in this case a schematic is still useful, as it may help to determine what kind of interface is required. <A> Before you start reverse-engineering your answering machine, you need to realize that the recorded sound is pretty poor quality to begin with. <S> The telephone network itself bandlimits signals sent over it to a band from 300 to 3400 Hz. <S> This allows the early phone network to provide uniform service quality over the analog loop from the central office to the customer using reasonable electronics for the early 20th century. <S> Today it limits the bandwidth needed to transmit millions of calls over the digitized phone network. <S> This band doesn't even capture the fundamental of a typical speaker's voice --- the system relies on human perception to reconstruct the voice sound from the strongly filtered version of it.
<S> Your answering machine will have been designed with this in mind, so it will use only a fairly crude ADC (maybe 8 bits, maybe less), and a low sampling rate (perhaps 8 kSa/s) to reduce costs. <S> It will also have recorded whatever noise was on the line when your caller left their message. <S> So even if you find the analog signal being sent to the speaker, and re-digitize it with an ADC, you might not get much better sound than you had before. <A> They sometimes use Winbond recorder chips, and quality is reduced if the signal is too low or too high. <S> If you have an aux input on a computer that accepts 1V (typ.) audio signals, you can record it and play it back by tapping into the speaker wire pair.
Once you have a schematic you should be able to determine where the audio signal is produced and amplified, then you can try probing different points to find the best signal.
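The band-limiting argument above can be quantified. This sketch uses the telephone band from the answer and commonly assumed telco digitization parameters (8 kSa/s, 8-bit samples, as in a standard 64 kbit/s voice channel) to show how little data a message contains, and therefore how little there is to recover:

```python
# Rough storage math for a telephone-band recorder, illustrating why the
# built-in ADC can be so crude. Parameter values are assumptions.
band_top_hz = 3400  # top of the 300-3400 Hz telephone band
fs = 8000           # sampling rate; must exceed 2 * 3400 (Nyquist)
bits = 8            # typical companded sample width

bitrate = fs * bits                    # 64000 bit/s, a standard voice channel
seconds = 30                           # one message
kib = bitrate * seconds / 8 / 1024
print(f"{bitrate} bit/s, ~{kib:.0f} KiB per 30 s message")
```

Whatever tap point you find, the underlying recording never contained more fidelity than this, so expectations for the extracted audio should be set accordingly.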
Mains voltage waveform on oscilloscope I connected the oscilloscope probe to the mains supply but I find its waveform is not exactly a pure sine wave, so I'm wondering if there is a problem with my scope or something else. <Q> The sinusoidal waveform of the mains supply is very rarely (if ever) perfect. <S> It is affected by transmission line impedance, as well as the parasitic inductance, capacitance, and resistance of the system. <S> The signal your scope shows appears fine to me. <A> To expand on Dan Mills' answer, devices with rectifiers - and this includes switched-mode power supplies used on many domestic devices from computers, TVs, oscilloscopes, etc. - draw a pulse of current as the mains voltage rises above the capacitor voltage. <S> If the power supply is not very "stiff" (low impedance) then the AC voltage will droop with the load. <S> Figure 1. <S> Notice that the rectifier current in this half-wave example peaks before the AC peaks and is nearly turned off at mains peak. <S> This would result in a mains voltage distortion similar to that seen in your scope. <A> Using a standard scope probe on the mains is a very bad idea from a safety perspective, but that said.... <S> What you are seeing there is a bit of flat-topping, this is usually caused by crude power supplies doing the bridge rectifier into a cap thing and only drawing current near the mains peaks, but it does not look too bad, I have seen far worse.
Additionally the machinery used to generate the mains power is often imperfect and will lead to imperfect waveforms.
On-- off-- on-- off timer circuit I want to make a circuit to do something that is relatively simple but I need to find the optimal way to do it in order to take as little current as possible. Here is what I want to do: When a trigger is activated, it launches a cycle. A cycle is: 1 second "on" (let's say it closes a relay) 1.5 seconds "off" (opens the relay) 1 second "on" (closes the relay) Turn "off" and then wait for the event that initiated the circuit to happen again (relay is open). The "on" times for steps 1 and 3 need to be the same, but they are not necessarily the same as in step 2. The trigger that activates the circuit will probably be a short pulse. I thought about using one or two 555 timers and counters but I am not sure exactly how, and not sure if that is the best way... What would be the best way to do something like that to use as little current as possible? The circuit will run with batteries (4×AA or one 9V)... <Q> Based on your need for arbitrary on and off times and a small fixed cycle, as well as low current, then as mentioned in the comments, a microcontroller will do what you need with minimal code. <S> Pseudo code: while(1) { sleep; } <S> Interrupt on input high-to-low { output on; wait 1000ms; output off; wait 1500ms; output on; wait 1000ms; output off; } <A> This is something you want to do with a microcontroller. <S> After initialization, the micro goes to sleep. <S> It is then woken by a particular edge on a pin. <S> Make sure no current is required to keep the pin in the unasserted state. <S> For example, if using a pullup, have the high-to-low transition wake the micro. <S> That means during the long off time when power consumption matters, no current goes through the pullup. <S> When the micro is woken, it produces the on-off-on-off sequence on one of its output pins, which is used to drive a relay. <S> During that time, power consumption in the micro doesn't matter much since it will be swamped by any relay you can find.
<S> Even so, running a XLP PIC from its internal oscillator will take very little current. <S> When the micro finishes the on-off-on-off sequence, it goes back to sleep looking for the next wakeup event. <S> If the sleep time is long and even tiny amounts of current matter, use a BJT to turn on the relay, rather than a logic level FET. <S> Both will work fine, but the BJT will likely have less leakage. <S> Some FETs, particularly those that turn on with low gate voltage, can have surprisingly high leakage currents. <S> This is just a general guideline, and you have to check the datasheets of any particular parts you consider using. <A> As pointed out, your best option is an MCU. <S> You didn't point out your level of experience in the area <S> so I don't know if you have access to an MCU programmer or not. <S> If not, I would suggest looking into commercially available MCUs with pre-configured/installed boot loaders/programmers (i.e. Arduinos) that could easily achieve the task you are looking to complete.
Get one that uses very little power when sleeping, like a 16F PIC with the XLP (extra low power) feature.
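The interrupt-driven pseudo code above can be sketched as a host-side simulation. Everything here is illustrative: on real hardware `set_relay` would toggle a GPIO pin and the waits would be low-power sleeps or timer interrupts, but the sequencing logic is the same:

```python
import time

# Host-side sketch of the on-off-on-off cycle run after a trigger.
SEQUENCE = [("on", 1.0), ("off", 1.5), ("on", 1.0)]  # then back to idle

def run_cycle(set_relay, sleep=time.sleep):
    """Play the sequence once, then leave the relay open (idle)."""
    for state, duration in SEQUENCE:
        set_relay(state == "on")
        sleep(duration)
    set_relay(False)  # idle: relay open until the next trigger

# Dry run with the delays stubbed out, recording what the relay would do:
log = []
run_cycle(log.append, sleep=lambda s: log.append(s))
print(log)  # [True, 1.0, False, 1.5, True, 1.0, False]
```

Keeping the step timings in one table makes the "on times for steps 1 and 3 must match" requirement a single constant, and changing the off time touches only one entry.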
Which color puts the least "strain" on an LCD pixel to avoid burn-in/ghosting? I am developing a user interface on a cheap Kinco color LCD HMI unit. After leaving a test UI on the screen for about 3 days, there was a very decided "ghosting" effect. I was able to eliminate the ghost outlines by alternating a solid black and white screen over the course of about 2 days. Now the screen is back to normal and the ghosted image does not appear. Since this HMI will be in service for several years with basically the same screens always in use, I want to build a screen-saver which will prevent ghosting. Note that users will view/interact with the HMI very little, as it's in a remote location and may be visited a couple times per week. In building a screen saver, should the display constantly alternate between solid black and white screens, or is there a specific color which places the least "strain" on an LCD pixel, which would avoid ghosting and ensure pixels remain as bright and color-accurate as possible? For example, is Black considered "full on" or "full off" by the LCD circuitry and pixels? Would a black or white pixel create more prominent ghost image, or is the issue going to appear for any pixel that does not change over a period of time? I assume either black or white would be the default/resting state of a pixel (i.e. no driver current applied to the RGB subpixel elements) but I don't know which. At this point I don't know whether alternating between colors is better or worse than filling the screen with a single, unchanging color. I'm looking to implement whichever scheme preserves the pixel brightness/dynamic response and eliminates ghosting. Note that I did enable the auto backlight-off feature, but even when the backlight is off, the LCD is still displaying the image. Your recommendations are appreciated. <Q> I don't think any one color will be more "easy" on your display. 
<S> The circuit and structure for a LCD monitor are the same for each color, they just have a color filter in front of them. <S> There are other ways to do it with color changing backlights or color wheels, but that doesn't change the advice that the color doesn't matter. <S> --Edit <S> @Trevor made me look up what the dominant behavior of LCD screens is <S> and I didn't find a definitive answer. <S> "It depends on the resting state of the LCD as to whether they require energy to stop light or to allow light to pass through," was one quote I read on Scientific American's website. <S> What I was trying to say before was it depends on the construction of your LCD as to whether the no-power state is black or white. <S> You'd want to consult your datasheet or the manufacturer to find that piece of information and then put it in the off state for your particular device. <A> It is usually not a big issue in LCDs, they don't really burn in the classic sense, unlike say AMOLED (Spit!). <S> However the LEDs in the backlight have a finite life, and these, far more than the LCD itself, tend to determine the life of the display. <S> For this reason my usual screen saver is to just dim the backlight way down. <S> You seem to have a screen which you say is ghosting? <S> It may be worth having a look at the control chip register map, as you may be getting some DC across the liquid (never a good thing), also is it very cold where this thing is being deployed? <S> LCDs can get very slow to respond in the cold. <S> One thing I found is that you often get reversed logic between off and on when switching from a TFT to an IPS panel even when using the same controller, make of that what you will. <A> Moreover, if you can modulate the pattern to be a more sinusoidal transition between the two you may find the net effect erases the residual faster.
Something more dynamic like an alternating pixel checkerboard pattern may actually be more effective than a simple fixed pattern.
Driving capacitance with a high voltage level shifter I am having issues with a 'high' voltage level shifter I am creating. I need to drive a 28Vpp waveform with specific timings ( <.1us rise time, 1us pulse width), converting from the 3.3V logic that comes out of my microprocessor. I currently have the circuit below: Which produces the following waveform: Great! However, I will need to drive a fairly long cable. When I add 100pF of capacitance to the output: I get the following waveform: This waveform doesn't meet my requirements of having a rise time of less than 0.1us. What can I do to overcome this? I have tried replacing the transistor-based circuit with an LM139 comparator, but that was still susceptible to capacitance. Are there ICs available that could fix my issue? I can handle about 4us of prop delay if needed. Thanks! <Q> Since you're getting the output from the collector, the output impedance of the whole circuit will be the R3, 1k. <S> It's quite high to drive a capacitive load, because if you connect one then it will form a low-pass filter with a cutoff frequency of \$f_c = 1/(2\pi \cdot 1000 \cdot C_L)\$. <S> For a 100pF load, the cutoff frequency will be around 1.6MHz. <S> That's why you cannot meet the rise-time requirement. <S> The output impedance should be low. <S> Extremely low. <S> You can try the following circuit: simulate this circuit – <S> Schematic created using CircuitLab <S> The first stage is a translator and the output is a push-pull output stage having extremely low output impedance. <A> The following is designed to achieve approximately rise and fall times near where you are discussing. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> It assumes that your drive impedance is about \$100\:\Omega\$, which is common enough these days in MCU output pins. <S> It also assumes that your load is resistive and at least \$10\:\textrm{k}\Omega\$.
<S> So this is for demonstration purposes only, to point out the kind of work you need to do. <S> It's not necessarily a solution for your case. <S> Especially since your title says you are driving a capacitance as a load. <S> So this is educational only and partly written up to make a point. <S> (It might drive a capacitive load of \$1\:\textrm{nF}\$ as fast as you are looking for, as shown. <S> But no more than that.) <S> Here's a simulation image using the standard small-signal, general-purpose BJTs. <S> Nothing special about them (they are definitely NOT RF transistors.) <S> I'll let you work out the edges and pulse width by inspection. <S> But they measure about \$\approx 30\:\textrm{ns}\$ rise and fall for a \$\approx 1\:\mu\textrm{s}\$ pulse width. <S> (I have no information on your MCU's rise and fall time specifications for its output pin driver. <S> So that will need to be added in.) <S> Please take note of the added details, which include speed-ups and some diodes, as well as a little bit of base impedance at \$Q_3\$ to stem oscillations. <S> Not shown, but perhaps useful, are the usual local sets of decoupling capacitors. <S> If your output load is significant or is otherwise not resistive, a lot more may be required. <S> Again, this is just pointing out the details needed for something where the load is light and resistive (in other words, "simple.") <S> As you can see, there is stuff to do to get rapid shut-off and turn-on even when the load is relatively easy to drive. <A> The problem is that your capacitance is charged through a resistor. <S> When Q2 turns off, C1 is charged up by R1 in a typical RC charge curve. <S> From \$\tau = RC\$ we can calculate the time constant \$\tau = RC = 1\:\textrm{k}\Omega \times 100\:\textrm{pF} = 100\:\mathrm{ns}\$. <S> The waveform should reach 95% of V2 after \$3\tau\$ and this seems to be the case. <S> To improve the waveform you have three options: <S> Decrease R. <S> Replace R with a current source.
<S> This will give a linear rise in voltage rather than exponential.
Use a totem-pole output stage with Q2 pulling low and another PNP pulling high.
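As a quick sanity check of the RC numbers quoted above (R = 1 kΩ, C = 100 pF, and the 95%-after-3τ rule), here is a small Python sketch; the 5 V supply level is an assumed value, not from the original post:

```python
import math

def rc_rise(t, v_supply, r, c):
    """Voltage across a capacitor charging through R toward v_supply."""
    tau = r * c
    return v_supply * (1.0 - math.exp(-t / tau))

tau = 1e3 * 100e-12                               # R = 1 kOhm, C = 100 pF
print(round(tau * 1e9))                           # 100 (ns) -- the time constant
ratio = rc_rise(3 * tau, 5.0, 1e3, 100e-12) / 5.0
print(round(ratio, 3))                            # 0.95 -- ~95% of V2 after 3*tau
```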
Stranded wire was cut... Connecting it back together, with solid wire... Is this safe and possible? I have this connector for a monitor and the wires were cut. So now I’m trying to put them back together with electrical tape. I stripped the stranded wires and connected it to a short solid wire, then connected the other end of the solid wire to the other stranded wire. Is this possible and SAFE? I’m afraid to test it out. I’m only using electrical tape; I have no soldering iron. Should I just connect the two stranded wires as is, without using a short solid wire as a middle man? Any advice would be very much appreciated. <Q> That's neither safe nor robust. <S> The solid wire could break after being flexed, and the tape won't reliably hold it together. <S> Perhaps more importantly, simply twisting the wires together without solder or the pressure of a crimp or screw terminal will not give a low-resistance, gas-tight joint. <S> The resistance will likely increase over time as the conductors oxidize. <S> \$P=I^2R\$, and that heat can either directly start a fire or melt the insulation, leading to a short circuit. <S> Buy a new cable, they're incredibly common and cheap. <A> Yes, it's safe. <S> If you're very paranoid, you could use heat shrink instead. <S> Some of these logic signals may not like being disturbed with mismatched lengths or resistance. <S> You can try it and if it works, great. <S> But you may try aftermarket sources like eBay for a replacement cable instead of soldering a random assortment of wires. <S> It's a power cable? <S> There is no issue, honestly. <S> Though to be fair, modular power cables are cheap. <S> You should replace it if possible. <A> It is safe but it is not a very robust solution. <S> They are easily separated using a Stanley knife or a strong cutter. <S> Use three of those to join the three stranded wires inside your cable. <S> If you are not in a critical environment, they may suffice.
<S> For extra insulation and safety, wrap the assembly in electrical tape and you'll have a fairly strong connection. <A> You could use this kind of connector. <S> Suitable for both stranded and solid. <S> It is safe and simple to apply
If, for some reason, you cannot buy a new cable (e.g. urgent emergency repair), it is far more robust to use some low cost screw connectors like these: But it may not work. There is no safety issue with doing what you want. The power supply will draw more and more current to compensate for the voltage drop over that resistance.
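To put a number on the \$P=I^2R\$ warning in the first answer: even a modest contact resistance in a taped twist joint dissipates real heat. The 2 A and 0.5 Ω figures below are illustrative assumptions, not measurements:

```python
def joule_heating_w(current_a, resistance_ohm):
    """Power dissipated in a resistive joint, P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# hypothetical: a 2 A monitor supply lead through a 0.5 ohm oxidized twist joint
print(joule_heating_w(2.0, 0.5))  # 2.0 (watts, concentrated in a tiny spot)
```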
How are high-speed circuits tested if test equipment doesn't exist? How was testing done for GHz to THz range circuits and devices before fast enough scopes and frequency counters existed? <Q> For some perspective, consider that optical signals are still too high frequency for the instantaneous electric field to be sampled and measured, but there are still lots of different kinds of measurements we can do on an optical signal. <S> With a power sensor (a photodiode or even an LDR) we can measure the power of the signal. <S> With a prism or diffraction grating we can build a spectrometer and get a rough idea of the signal's spectrum and/or pulsewidth. <S> With an interferometer we can mix the optical signal with a delayed version of itself and measure the coherence time (bandwidth) of the signal with perhaps gigahertz resolution. <S> With a tunable local oscillator (laser), we can even down-mix the signal and measure its spectrum with an RF spectrum analyzer, getting 100's of kHz resolution. <S> All of these measurements have analogs in the microwave regime and were or could be used by microwave engineers prior to the advent of multi-gigahertz oscilloscopes. <A> Long ago they relied on the speed of Gunn diodes for sampling the input signal waveform with a control pulse duration, so that the difference frequency could be displayed on a slow timebase oscilloscope. <S> If the sample duration was short enough to capture only a point on a recurring waveform, then the waveform was preserved. <S> Gunn diodes were useful since they had low negative resistance, so once triggered, that would accelerate and then hold the result once the bias charge was depleted. <S> The key to reception of a frequency higher than can be observed or detected is to use imaging down-conversion to a useful IF frequency or direct to baseband, depending on the conversion efficiency, power level and SNR.
<S> Methods such as interferometry, diode detectors, and pulsed samplers, where the harmonic of the sampling rate has sufficient harmonic energy in the band of interest. <S> Nonlinear mixers such as "high temp" step-edge Josephson junctions, varicaps, GaAs diodes and heterobarrier varactors (HBV), or optical pumps with extremely fast rise times from small inert gas arc gaps. <S> These aliasing down-conversion type scopes were called sampling oscilloscopes <S> (but only useful for repetitive waves) <S> further reading <A> 'Fast enough' oscilloscopes are a trick for displaying signals that vary in time, but they aren't the only trick. <S> A 1 GHz oscillator, for example, will heat a resistor. <S> It will also resonate with a cavity length of about 120 mm (which can be determined by sensing the heating of resistors). <S> The combination is called a 'wavemeter'. <S> A crude wavemeter is a length of wire placed on a paper plate, in a microwave oven. <S> The right (about two inch) length of wire gets much hotter, and scorches the plate to a darker color, than other wire lengths. <S> If you have a non-sine-wave, the various harmonics will ALL show up, and with a little care in measurement one can identify square and triangle waves. <S> Most people wouldn't call that CD blank a 'measuring instrument', but it does the job. <S> It just isn't convenient and precalibrated. <S> Neither is the paper plate in the microwave oven (and if you value the flavor of your food, you need to clean out smoky byproducts). <A> There are many ways to analyze terahertz devices, so long as one is not too interested in the precise time domain information. <S> You can always use a mixer/downconverter, perform digitization, and do the analysis in the frequency domain. <S> A company called Virginia Diodes produces such mixers.
You can tell the frequency, without a 'frequency counter', of light by using a diffraction grating (a blank CDROM has 1 hour playing time, at 1 revolution per second, so you can measure the band with a ruler and use it to diffract a laser beam...) and measure the wavelength, thus (knowing the speed of light) the frequency.
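The wavelength-to-frequency step in the CD-grating answer is just \$f = c/\lambda\$. A minimal sketch (the 650 nm red-laser wavelength is an assumed example, not from the original post):

```python
C = 299_792_458.0  # speed of light in m/s

def freq_from_wavelength(wavelength_m):
    """f = c / lambda."""
    return C / wavelength_m

# e.g. a red laser whose wavelength the grating puts at ~650 nm
print(round(freq_from_wavelength(650e-9) / 1e12, 1))  # 461.2 (THz)
```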
What is the correct term and schematic symbol for a bar "connecting" multiple toggles together? This has driven me mad, I've searched for this on Google and am now going to the world to find out what the part is called. I've tried things like "Gang bar", "shared cover", did a Google image search of the two images I have, and anything else I could think of, but without knowing what it is I cannot accurately search for it. It's a part that can connect multiple toggles together for an action, usually to "off", sometimes allowing for individual "on" activation. Attached are two pictures to assist. It's the black bar on the brass switches image and the two chrome bars on the other image. Any help is appreciated. <Q> Multiple switches that are physically ganged together could be represented with a dashed line between the individual throws, like this: simulate this circuit – <S> Schematic created using CircuitLab <S> Other physical connections between switches, like the ones in your pictures, are more of a mechanical component than an electrical one, and would typically be omitted from a schematic. <S> There is no standard symbology I'm aware of for such a thing. <A> Just join the arrow or dot ends of the switch toggles, if the switches use these, or the bars of the switches with dashed lines. <S> In either Eagle, Altium or PADS, place a "part" consisting of a polygon on the dashed line and give that a part number including a description. <S> What we do when we need these sorts of things is to link that part number to a mechanical CAD drawing of the linkage device. <S> Ages ago, for a gangable potentiometer used a lot in vacuum tube designs there was a separate rod with detents and we had to assemble the 3, 4 or 5 gang pots and insert the rods by hand. <S> It was just called a Multipot Linkage. <A> In the Dr. Strangelove movie there's a scene where they release the safeties. <S> They are pairs of toggle switches connected by a metal tab.
<S> I'm not sure if it's a real thing or made up for the movie, but if you could find out what those are called maybe it would give you a clue.
This is related to many older schematic symbols like multi section switches and ganged variable capacitors/potentiometers. This is a mechanical part just like a screw or standoff that would go into a standard Bill Of Materials.
Weird phenomenon with AGM batteries I will try to sum everything up and keep it as clear as possible. I have a solar system: inverter: 1500VA 24V system batteries: AGM 100Ah (20 months old, daily cycle) controller: semi-MPPT 80A 24V Setup: Suddenly my system started shutting down quickly because of low battery, after testing I noticed that batteries (2,3,6,7) were 10.5V while the rest were 12.5V. I took them all out, balanced them all and returned them in the exact same order and they were charged to 100% (27V, 13.5V/Bat.), within 48h the problem occurred again, so I decided to take the 4 bad batteries out and balanced the good batteries' voltage, this is how the system looks now: Within 24h and after being fully charged and used I tested it and found out that batteries (5,8) were 10.5V while the rest were 12.5V What could be the cause? This has confused me a lot! Has anyone seen or heard of this phenomenon? I don't really know how to approach or diagnose the problem Load: max estimate is 1.2 kW used from sundown until test time <Q> If the batteries are all the same model, and you are doing a complete charge/discharge each day, then you've done about 600 cycles. <S> This will mean the terminal voltage during discharge will vary and hence the re-charge will be uneven. <S> You could extend the battery life by putting active balancing in. <S> I'd suggest that you should connect 1,3,5,7 and 2,4,6,8 in parallel and then the two strings in series. <S> Then you only need one active balance circuit for the whole group of batteries. <S> Even if you put active balancing in it's only looking at the 12 V level and not at individual cells, so variations in capacity/age will still occur. <S> This means you will slightly overcharge some batteries (AGM is very tolerant of this) and undercharge others. <S> Overall it's still better to have the batteries paralleled at the 12 V level to reduce differences as the fully charged terminal voltage will be the same regardless of the battery capacity.
<A> Capacity loss is individual for each battery. <S> When two batteries are connected in series, one will lose capacity a bit faster than the other. <S> It then performs a deeper charge-discharge cycle, which leads to even more loss of capacity. <S> It is necessary to measure the capacities of the batteries; this will clarify the situation. <S> It is better to group the batteries by residual capacity. <S> It seems to me that the batteries with a lower voltage (2,3,6,7) have a larger capacity and are in better condition. <A> Capacitance reduces and ESR rises sharply below 10% SoC or ~11.5 V (ballpark). <S> Since there is no evidence of an active or passive balancer for charge or discharge usage, all batteries have reached end of life for this design. <S> Had you included an active balancer for each cell, it would have slightly extended your end of life charge cycle to beyond 20 mo x 30 d = 600 cycles (full?) <S> Battery imbalance accelerates as it ages and one of the two in series will ALWAYS age faster. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> How much beyond 600 cycles depends on many factors such as balancer capacity, charge rate and battery capacity and ESR balance when new. <S> Impressions: <S> It seems like a major expense and a high risk venture without a good BMS and active balancer. <S> Does it have a 3 yr warranty? <S> or only 1?
At this cycle life age I'd expect the batteries are starting to show significant capacity differences.
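Two of the arithmetic points above — the 20 mo x 30 d cycle-count estimate and why a series string is limited by its weakest battery — can be sketched like this (the 100 Ah / 72 Ah pair is a made-up example, not a measurement of the bank in question):

```python
def cycle_count(months, days_per_month=30, cycles_per_day=1):
    """Rough cycle count for a daily-cycled bank, as in '20 mo x 30 d'."""
    return months * days_per_month * cycles_per_day

def string_capacity_ah(capacities_ah):
    """A series string can only deliver the capacity of its weakest member."""
    return min(capacities_ah)

print(cycle_count(20))                # 600 cycles in 20 months
print(string_capacity_ah([100, 72]))  # 72 Ah usable from a 100 + 72 Ah pair
```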
How to connect a three-phase motor I am trying to fix an old three-phase drill. The motor has 3 wires coming out of it, which connect to the three phases. Connecting it to the power does nothing, and so I tested it with a multimeter, and saw that two of the wires coming from the motor are shorted together, but not with a good connection (~25ohm). I am not educated on electricity beyond your common layman knowledge, and definitely not on why three phases are needed or how they are used (or indeed, what this term even means, besides the obvious 3 wires) Therefore, my assumption at first was that this short is a fault somewhere inside the motor. Then upon reconsidering, I realized that if the three phases are completely separate, and there is no 0/ground going to the motor, then how can the circuit be closed? Is this short indeed a fault? How is there a closed circuit when the only lines going into the motor are power lines? Thanks :) /Edit Given the useful answers and comments, I can only assume something inside the motor is bad. This is because 1) Nothing at all happened when it was connected to electricity, not even anything bad. 2) The multimeter shows there is only a physical connection between one of the three pairs. I will hopefully be able to test this further and supply photos tomorrow. Thanks! /After further testing It seems I was misled, and the three phase socket in the wall didn't even have any power running to it. Whoops! With actual power feeding into the motor, it sort-of tries to spin, with A LOT of resistance, and eventually after a few seconds manages to spin very slowly. It gets very hot. Because there is only a physical connection between one of the three pairs, I am guessing this means that only one of the phases actually does any work. I'll try perhaps to fully open it, although I don't believe I have quite the right tools for the job.
Thank you a lot for the answers and explanations, at least I have some basic information about this subject that I knew completely nothing about two days ago :) /Conclusion The motor was sent to be fixed, and indeed the windings got ruined and had to be remade. A big thank you to you all on educating me :) <Q> If it is really a three-phase motor then the following applies. <S> Figure 1. <S> Top: three-phase motor windings connected in star (Europe) or wye (North America) configuration. <S> Bottom: delta (\$ \Delta \$) configuration. <S> Source: Electronic Project Focus . <S> You should get the same resistance reading between L1 - L2, L2 - L3 and L3 - L1. <S> Three wires does not mean three phases. <S> For example, it could be a single-phase motor with live (L), neutral (N) and earth (E). <S> Photos and a geographical location would help. <S> (That's why it's an option in your user profile.) <A> There is no neutral connection. <S> An earth/ground wire should be connected to the frame of the motor, but sometimes that is not done. <S> When a motor is part of a machine, the ground wire may be connected to the machine frame rather than the motor frame. <S> Each of the three wires serve as a "return path" for the other two. <S> The symmetric phase displacement among the phases makes the three wire connection a balanced symmetric system. <S> If you have connected the motor the same way it was originally connected to the same or an equivalent source and nothing happened, there are several possibilities. <S> There may have been a prior failure that completely burned open all of the internal motor connections. <S> The external wiring may not be making any connection. <S> Three-phase motors can be internally either wye (star) or delta with only three wires brought out for external connection. <S> It is probably more common for six or more wires to be available for connection options. 
<S> If you received the motor with a three-wire cable connected to it, that connection is appropriate for the original power source. <S> Don't change that without labeling everything and understanding what you are doing. <S> Any information about the motor rating and the power connection marked on the motor or the original machine might be very helpful. <S> The information that you have provided thus far strongly suggests a failed motor. <A> It appears as though the photos are a later addition to the above comments, but that is most definitely a 3 phase motor. <S> The nameplate indicates it is a single voltage design (400V), meaning whether it is connected in Delta or Star (Wye) is irrelevant. <S> You would only bring in the three wires to the three terminals they give you. <S> This motor is rated for 400V 50Hz, hopefully that's what you are attempting to give it (you didn't state that). <S> A chart I have on World Voltages indicates that the 3 phase standard in Israel is 400V 50Hz so it should be fine, but make sure. <S> If you only give it 230V 3 phase, it should still spin normally with no load on it, but it would be weak when under load. <S> The fact that you do not read continuity from any pair other than one would indicate to me that you have an open winding inside of the motor, hence its failing to start. <S> Take it off and send it to a motor shop to be sure, but expect that the cost to fix it will likely be more than the cost to replace it with a new one. <S> Before hooking up the new one (or the repaired one), make sure you have the correct voltage though.
If "nothing" really means nothing, no sound, no tripped circuit breaker or blown fuse, no motor rotation, there must be a complete lack of connection. Three phase motors have only three "hot" power wires connected to the windings.
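The diagnostic logic running through these answers — roughly equal L1-L2, L2-L3 and L3-L1 readings mean a healthy winding set, while a pair with no continuity means an open winding — can be sketched as a small helper. The 10% tolerance is an assumption for illustration, not a standard:

```python
def diagnose_windings(r12, r23, r31, tol=0.10):
    """Classify pairwise terminal resistance readings of a 3-phase motor."""
    readings = [r12, r23, r31]
    if any(r == float("inf") for r in readings):
        return "open winding"      # at least one pair shows no continuity
    avg = sum(readings) / 3.0
    if all(abs(r - avg) / avg <= tol for r in readings):
        return "balanced"          # what a healthy motor should show
    return "unbalanced"

print(diagnose_windings(25.0, float("inf"), float("inf")))  # open winding
print(diagnose_windings(25.0, 25.5, 24.8))                  # balanced
```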
How to interpret an ammeter? I would like to interpret my ammeter. I don't understand what the numbers 2, 20, 200 mean, or what the exact current I am measuring is. Thanks <Q> You've carefully trimmed the photos so we can't see where your probes are connected. <S> When using the 2m, 20m, and 200m ranges (switch positions) the red meter lead should be in the "Volts/Ohms/mA" socket, and the black lead in the "Com" socket. <S> For the 20A range, the red lead should be in the "20A" socket. <S> If so, the readings on the mA ranges will be meaningless. <S> If so, the correct current is read with the range switch in the "20 A" position, and the current is 0.05 Amp, or 50 mA. <A> They are 2 milliamps (.002 amps), 20 mA, 200 mA and 20 A. <S> The readings are 0.005 mA, 0.05 mA, 0.5 mA and 0.05 A (50 mA). <A> That is the display you would get with a constant voltage applied to the meter (probably 0.5mV). <S> The input current would vary with the range setting. <S> You may be getting some kind of false reading due to EMI since the current drawn from the power supply is switched at relatively high frequency to feed the stepper motor. <A> I have a feeling that meter is only accurate to three digits, the fourth being the 5 which is probably either 0 or 5. <S> It may have only 8 bit sampling resolution, or effectively 9, which would pretty much put you in that ballpark. <S> When you up the scale the rounding error then moves up from the mA range up to the tenths of an amp at the 200A scale. <S> As such, the least significant bit is pretty much useless on its own. <S> The least significant digit showing 5 indicates that digit is more than zero and possibly as much as half that digit point. <S> This is really only a guess though, but it sort of fits the symptoms. <S> It would be interesting if you can force it to show other than a 5 in the last digit.
I suspect you are using the "20A" socket for all readings. The markings are the full-scale values for the selected ranges.
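The range arithmetic from the answers — the display reads directly in the unit of the selected full-scale range — can be sketched like this:

```python
def to_amps(display_value, range_unit):
    """A DMM display reads directly in the unit of the selected range."""
    factors = {"mA": 1e-3, "A": 1.0}
    return display_value * factors[range_unit]

# 0.05 on the 20 A range and 50.0 on the 200 mA range are the same current
print(round(to_amps(0.05, "A") * 1000))   # 50 (mA)
print(round(to_amps(50.0, "mA") * 1000))  # 50 (mA)
```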
Why use a step-down transformer for soldering gun? As I understand it, some devices that heat up to a high temperature, like soldering gun, use a step-down transformer, in order to increase the current flowing through the heating element. An alternative option would be to use no transformer, and a heating element with a higher resistance. For the right value of resistor, this would result in the same power dissipation, and therefore the same heat, in the heating element. Why is this not done? What benefit does the step-down transformer have? Note: This question is not specifically about soldering guns, just any device where a step-down transformer is used with a heating element. <Q> A soldering iron has to meet several specifications, beyond getting hot enough to melt solder <S> it must not electrocute you, so must be isolated from the mains <S> it must be physically strong <S> it must get hot quickly enough to be useful <S> Obviously some of these requirements are more important than others. <S> You cannot tolerate an iron that kills people, but waiting for it to heat up is a compromise some people will accept if it's cheap enough. <S> There are two main ways to meet these requirements. <S> The first uses a heating element made of long thin floppy wire connected directly to the mains. <S> For isolation and strength, this is wound onto an insulating former, wrapped in an electrically insulating sleeve, and then further sleeved in a strong, earthed metal tube which heats the soldering tip. <S> This pattern takes a long time to bring the tip up to soldering temperature, as there are so many components to heat up through electrical insulation and additional interfaces. <S> The second uses a heating element made of a short thick wire, which is strong enough to be used as the soldering tip. <S> This cannot be connected directly to mains as it's far too low a resistance, and it needs to be isolated from mains. 
<S> Fortunately a transformer solves both problems with a single component. <S> The cost of the transformer can be traded off against the advantage of rapid tip heating. <A> Both of these are good things in the event of a fault. <S> However, I have several cheap irons I don't use much at all that would make you happy, as they are straight wall-plug at 120VAC with no transformer, so it's not like you can't get those. <S> I'd happily sell you mine, though I would not recommend them over a good iron - but it would keep me from dragging the (nearly) useless things around. <A> In soldering guns, the current is going directly through the thick wire you are soldering with. <S> It needs a certain cross section to be mechanically stable. <S> That cross section dictates the current you have to supply. <S> Making that wire long wouldn't add value but instead make it mechanically unstable again. <S> That's why soldering guns are using 4..12 V at 5..10 A.
One reason is fairly simple - using a transformer provides line isolation and limits the voltage on the iron.
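To see why a low-resistance heating element cannot simply be wired to the mains, compare the numbers. The 6 V secondary and 1 Ω heating loop below are illustrative assumptions roughly consistent with the "4..12 V at 5..10 A" figure quoted above, not datasheet values:

```python
def element_power_w(v_volts, r_ohm):
    """Heating power in a resistive element, P = V^2 / R."""
    return v_volts ** 2 / r_ohm

R_TIP = 1.0                                            # assumed ~1 ohm heating loop
print(element_power_w(6.0, R_TIP))                     # 36.0 (W) from a 6 V secondary
print(round(6.0 / R_TIP, 1))                           # 6.0 (A), inside the quoted 5..10 A
print(round(element_power_w(230.0, R_TIP) / 1e3, 1))   # 52.9 (kW!) if wired to 230 V mains
```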
When wiring a European (Schuko) plug, what thing should I focus on not to mess up? I know this might be a dumb question, as wiring a plug isn't some rocket science, but I am wondering if there is something not so obvious that can go wrong, or any tips at all I should keep in mind when doing a cable assembly for a custom power extender cable? <Q> I assume the plug uses screw terminals. <S> In this case, and if you use stranded wires, use wire ferrules to protect the wire strands from breaking. <S> The wire must be rated for 16A usually. <S> Ensure that the strain relief is tightened properly. <S> Make the earth/PE wire a little longer than N/L. <S> This is to ensure that the PE wire stays connected longer than N/L in case someone manages to pull the cable out. <A> As obvious as it gets, but better safe than sorry, and I've re-done enough such DIY plugs to think this is answer-worthy: make sure that the strain relief presses on the outer jacket <S> (the outer jacket should reach 2-3 mm beyond the strain relief "bridge") <S> cut the L and N wires ~1 cm shorter than the PE wire, then bend the excess PE wire so that it will detach last when the cable is forcibly removed from the plug <S> use suitable pointed pliers to arrange all three wires neatly around the threaded studs that will later take the screws from the top part of the plug assembly (often insulation is damaged by squeezing between the plastic parts, or even the screw, which can be really dangerous with cheap models where the case screws are accessible while the plug is in the socket - the plug casing is plastic, so no PE protection here). <S> test the cable after assembly, with a load close to its rating (for example a 3000 W radiator) <S> Tin-coating may seem a cheap alternative to ferrules but is difficult to get right and dangerous. <S> For the last 15 years I've seen no professional electrician in Central Europe practising it any more.
<S> A peculiarity of SCHUKO is that "polarity" does not matter - there is no marking for where the L and N should go because plugs are not directional like for example the British ones. <A> If you don’t have any ferrules handy and, from experience, most ordinary people don’t, then I tend to bare about 12mm of wire, twist it gently and then double it back on itself - then put it in the terminal and tighten the screw. <S> The only time one has come out on me was when somebody pulled so hard on the cable trying to pull the plug out... <A> I assume that you use flexible (multi core) cable. <S> Also, I shape the peeled tip of the cable spirally (as far as it goes without harming the cable) with the help of pliers. <S> This way, you raise the density and solidity of the material which will be squeezed by the screw, and so you get better contact conductance and better mechanical properties.
When I make an extender, after assembly I do a load test (running a high-power resistive load, about 2 kW, for a few minutes) and make sure the temperature of the contact points is not higher than usual. Note that neither SCHUKO nor EN 60204 (VDE 011) explicitly mandate solder or ferrules on stranded wire - but using ferrules is common practice and should be standard whenever a screw is used to clamp on a stranded wire.
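For the suggested load test, the current through every joint in the extender follows from \$I = P/V\$. Assuming 230 V mains (typical in Schuko countries; an assumption, not stated in the thread):

```python
def load_current_a(power_w, mains_v=230.0):
    """I = P / V for a resistive test load."""
    return power_w / mains_v

# a ~2 kW radiator draws this through each screw terminal in the extender
print(round(load_current_a(2000), 1))  # 8.7 (A) -- well under a 16 A rating
```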
Is 18650 battery capacity from 3V or 0V? To calculate the battery capacity in mAh of an 18650 battery I understand you can use something like an iMax B6 in discharge mode on a fully charged (4.2V) battery to get the mAh it provides. The iMax B6 stops at 3V. So my understanding of this is that the reading shown is the mAh that were provided by the battery going from 4.2V (or whatever it charged to) down to 3V. I understand that running an 18650 Lithium Ion battery below 3V causes chemical instability and shouldn't be done. So what I am unsure on is if the value shown on the iMax B6 is the "rated capacity" for this batteries (the ones shown on the specs and printed on some batteries) or if you need to do some other calculation to get the full capacity of the battery going down to 0V? <Q> The rated capacity of a battery is measured between its rated full charge and rated end of life voltages. <S> With lithium polymer, these are usually 4.2 V for full charge, but the end of life quoted varies between manufacturers. <S> I have seen 2.5, 2.7 and 3.0 V all used as the end of life voltage. <S> There is no 'right' voltage, as higher end point voltages extend the cycle life of the battery, at the expense of capacity, leaving the capacity/lifetime tradeoff as your choice. <S> If you are lucky, the manufacturer quotes the life of the battery, the number of expected charge/discharge cycles, between the same voltages. <S> If you are unlucky, the number of cycles will be quoted to a higher voltage than the Ah capacity. <A> I understand that running an 18650 Lithium Ion battery below 3V causes chemical instability and shouldn't be done. <S> Correct. <S> So what I am unsure on is if the value shown on the iMax B6 is the "rated capacity" for this batteries (the ones shown on the specs and printed on some batteries) <S> No, it's the actual capacity. <S> Rated capacity is whatever the manufacturer decides to put on it. <S> This figure may be wildly 'optimistic'. 
<S> Reputable manufacturers rate a bit lower than typical capacity, though some discharge below 3V (knowing that the cell may suffer damage) in order to get a higher rating. <S> Others just think of a number and double it or worse. <S> Here's a graph of actual capacity curves for many 18650 size Li-ion cells. <S> Most had very little capacity left below 3.3V, and all had virtually nothing left below 3.0V. <S> or if you need to do some other calculation to get the full capacity of the battery going down to 0V? <S> Going down to zero volts turns your rechargeable battery into a non-rechargeable battery, which makes the calculation of subsequent capacity very easy - 0mAh! <A> First, finding the capacity of a battery has nothing to do with its physical size and layout. <S> "18650" refers to the size and shape of the battery, in this case the shape is cylindrical, 65 mm long and 18 mm in diameter. <S> It says nothing about the battery chemistry, or any electrical parameter. <S> Second, the real answer to your question is: It doesn't matter. <S> There is so little energy left after you get the type of lithium cell you seem to be assuming down to 3 V that it won't change the total much. <S> In this case, the capacity is what you get by discharging to 3 V. <S> The correct answer is, of course, to read the datasheet and see exactly how the manufacturer defines their capacity spec.
Usually, the capacity spec is what you get under optimal conditions discharging down to a specific voltage listed in the manufacturer's datasheet.
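The mAh figure a charger like the iMax B6 reports is just the discharge current integrated over time until the cutoff voltage is reached. A minimal sketch of that tally (a constant-current discharge is assumed for simplicity):

```python
def capacity_mah(current_samples_a, dt_s):
    """Integrate discharge current (amps, sampled every dt_s seconds) into mAh."""
    amp_seconds = sum(current_samples_a) * dt_s
    return amp_seconds / 3.6  # 1 mAh = 3.6 coulombs

# constant 1 A discharge lasting 2 hours before hitting the 3 V cutoff
print(round(capacity_mah([1.0] * 7200, 1.0)))  # 2000 (mAh)
```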
Electromagnet is acting perpetual... why? Here's what I have - I made it to be as classical an electromagnet as possible: a) Soft iron "U" shaped round bar-stock, ends machined to be flat & parallel. Spool of copper wire slid over one leg of this "U" shape iron. b) Soft iron flat "keeper" bar to go across the ends of part "a" above Momentarily connect the wire across a battery: the flat bar jumps to, and becomes firmly attached to the electromagnet. Remove battery. Flat bar stays firmly attached... as long as I don't pull it off... ...for months without losing strength. Why? Yet, when I do remove it, there is no remaining "pull" that attracts the flat bar back to the U shape rod... so it's not becoming "permanently magnetized"... which is why soft iron was used. Also, if a flashlight bulb is connected across the ends of the wire before removing the "keeper" bar, the bulb flashes. (in this case, the "keeper" is quickly smacked with a screwdriver handle in order to get high detachment speed. This happens no matter how long it's been sitting on my shelf... or even hanging upside-down by the "keeper" bar... with the weight of the electromagnet hanging by the keeper bar. <Q> The soft iron indeed got magnetized. <S> Your closed soft iron loop has a flux running in it as soon as you magnetize it, and connected with that is a mechanical force to minimize the magnetic resistance in the loop. <S> That's why the bar is stuck to the U. Removing the bar from the U inserts an air gap, which has a large magnetic resistance and will dissolve all the flux previously running in the core. <S> That's why you cannot get the bar stuck again. <A> Figure 1. <S> Magnetic field (green) of a typical electromagnet, with the iron core C forming a closed loop with two air gaps G in it. <S> B – magnetic field in the core <S> BF – "fringing fields".
<S> In the gaps G the magnetic field lines "bulge" out, so the field strength is less than in the core: BF < B. <S> BL – leakage flux; magnetic field lines which don't follow the complete magnetic circuit. <S> L – average length of the magnetic circuit used in eq. 1 below. <S> It is the sum of the length Lcore in the iron core pieces and the length Lgap in the air gaps G. <S> Both the leakage flux and the fringing fields get larger as the gaps are increased, reducing the force exerted by the magnet. <S> Source: <S> Wikipedia, Electromagnet. <S> When you switch off a DC electromagnet the domains tend to remain aligned somewhat. <S> This is greatly aided by the presence of the flat-bar "keeper" - so called because it "keeps" the magnetic circuit closed. <S> Forcing the keeper bar to quickly break contact with the magnet causes a sudden change in the magnetic field strength in the core. <S> As Michael Faraday discovered, a changing magnetic field induces a current in a coil. <S> This is what lights your lamp. <S> Figure 2. <S> A simple generator schematic with automatic voltage regulator (AVR). <S> Source: Generator Guide. <S> Remnant magnetism is very useful. <S> Many generators rely on having some remnant magnetism left on the rotor to excite the stator on startup. <S> As the rotor spins up the resultant magnetism generates a very weak current in the excitation winding. <S> This feeds back through the rectifier, through the slip rings and onto the rotor, reinforcing the rotating magnetic field. <S> This in turn increases the excitation current and the generator quickly "boots" itself up. <S> Meanwhile Vout starts to rise and when it gets to the AVR setting the voltage detection circuit starts to turn off Q to limit the excitation current to the value that maintains the desired output voltage. <S> I mention this because generators can lose their remnant magnetism if left idle for a long period.
<S> The fix is to inject a little DC onto the slip-rings while the generator is spinning. <S> If you get the polarity right the output voltage will rise, regulation will be achieved and the rotor becomes magnetised again very quickly. <A> As I visualize the scheme, I assume that the total air gap is constant. <S> I think the culprit is the eddy current induced in the free bar by its motion. <S> This is what collapses the flux (which Janka described): as the flux collapses, the stored energy is transferred out. <S> Edit (expressing my thoughts a bit further): <S> If I understood correctly, and the total air gap is constant despite the moving parts, then something other than mechanical motion must be affecting the flux. <S> An objection to that would be the idea that the magnetic field is parallel to the movement of the bar. <S> But it is not really that parallel, because of the vast difference in µ between the two mediums. <S> My claim is that when the bar is moved, the induced currents act on the skewed flux pattern, opposing it and smoothing it out, and this is where the flux collapses.
It has to be induction of eddy currents on the keeper bar.
What is the meaning of "ground" and "-V" in a simple push-pull circuit? In this simple "push-pull" circuit, there are +V, -V and ground connections. Up until now, I always considered ground to be the minus side of the battery, but that is also the way I understand -V, so I don't see how current would flow through the PNP transistor from ground to -V. If I am using this circuit to drive a simple DC motor forwards (push) and backwards (pull) with a battery, what would the -V and ground connections mean? <Q> So if you had a +12 V and a -12 V supply, that would need two 12 V batteries, connected in series, with their mid point taken to be ground. <S> This split rail connection allows you to generate both positive and negative voltages into the load. <S> Your motor would connect between ground and \$V_E\$. <S> This would allow it to be driven in either direction. <A> If you wish, you needn't label "ground" or "V+" or "V-" at all. <S> Or put them where you wish. <S> Your schematic is re-drawn with none labeled: simulate this circuit – Schematic created using CircuitLab <S> The usual place to put "ground" is where Vin meets the junction of the two DC supplies, in this dual-supply circuit. <A> All ground means is: "This is my reference point". <S> Or, to put it another way: "When taking voltage measurements in this circuit, connect the voltmeter or scope black lead here." <S> You can make the "ground" anything you want... even the positive terminal of the battery if it helps make the function of the circuit more understandable.
When using a 'split rail' like this, ground will be the mid terminal of the battery or power supply.
Rotate stepper motor at low RPM with minimum vibration I want to control a stepper motor at about 3 RPM with very low vibration. I have a ROLLON SRL-M542H stepper controller. If I change it to a higher steps-per-revolution setting it works smoothly without much vibration. Is this a suitable method for reducing vibration? And how do I calculate the pulse time for a specific RPM? I'm trying to use an Arduino UNO for pulse generation. <Q> Stepper motors are naturally resonant around each holding position because the holding torque varies from zero at the holding position out to a step distance away where the torque is at its maximum. <S> As such, in a lightly loaded or damped system they can vibrate for quite some time upon stepping. <S> Use of micro-stepping reduces this effect by bringing the rotor towards the holding point with less velocity and overshoot. <S> Basically, you do not hit the bell as hard. <S> Micro-stepping does however reduce the effective torque in an open loop system, since the motor is really alternating between pulling from each direction to establish an average position. <S> Those opposing forces cancel out, leaving you with a lesser net force. <S> \$PulseTime = 60/(RPM \times Poles \times MicroSteps)\$ seconds, where Poles is the motor's full steps per revolution. <S> Acceleration and deceleration profiles should also be employed to successfully ramp the pulses up and down at a rate that the motor can keep up with, given your worst load and inertial conditions. <A> Whenever one needs to meet certain criteria, it must be measurable and then specified up front. <S> Here: low (?) vibration, low (3) RPM, unknown (?) torque and unknown inertial load (?). <S> Stepper torque in a 200 step/rev stepper motor is specified as a holding torque when rated electrical power is applied. <S> When microstep interpolation is enabled, the full torque is not available at the intermediate pole positions and is inversely related to the number of microsteps/rev. 
<S> Your controller permits 400 to 25k steps/rev, so the pulse rate must be increased to achieve the same RPM (at 3 RPM, roughly 20 Hz to 1.25 kHz). <S> If the load torque is insignificant compared to the mass of the rotor, I might start with a fine resolution such as 1600 steps/rev (80 Hz at 3 RPM). <S> With a decent CNC bridge and GRBL code software, one can control the acceleration and velocity (RPM) precisely, with built-in constants for controlling the servo behavior via an Arduino USB serial port. <S> Or use the stepper controller you have with suitable software. <S> (?) <S> -- <S> If your case has any significant friction or inertial mass with a need to change speed rapidly, I would opt to use toothed-belt gear reduction as a method to reduce vibration while also increasing torque, rather than accepting the torque loss that comes with microstepping. <S> The other way to reduce vibration is adding the inertial mass of a flywheel, such as in a turntable. <A> It looks like you have a very low step rate... ~10 steps/second on a 200 step/revolution motor. <S> A typical hybrid step motor is a fourth-order non-linear system. <S> With an operating point near zero angular velocity, the system is like a mass/spring/damper where the mass is the inertial load, the spring is the slope of the torque curve through the detent position and the damping is controlled by what is seen by the motor's back-EMF looking out of the motor. <S> So, the damping is dependent on the inductance, the motor resistance and the effective resistance looking back into the driver. <S> You can't do too much about the inductance. <S> Your choice of a driver that regulates the currents in the windings of the motor is probably the wrong thing. <S> The resistance seen looking out of the motor will be very large. <S> As such, the damping will be very low! <S> You should consider the use of a simple L/R drive... four transistors and perhaps some resistors in series with the motor winding. 
<S> Also, reducing the level of current in the motor can reduce the resonant frequency and improve the damping ratio. <S> In short, do some experimentation!
Pulse time for a specific RPM will be dependent on the number of steps per rev (poles) of the motor times whatever micro-stepping factor is employed by the driver.
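The pulse-rate arithmetic discussed above can be sketched as follows (a minimal illustration; the function name and the 3 RPM / microstep figures are just examples, and `steps_per_rev` corresponds to what the answer calls "poles"):

```python
def pulse_period_s(rpm, steps_per_rev, microsteps=1):
    """Seconds between step pulses for a target shaft speed.
    steps_per_rev is the motor's full steps per revolution (e.g. 200)."""
    steps_per_second = rpm / 60.0 * steps_per_rev * microsteps
    return 1.0 / steps_per_second

# 3 RPM, 200 full steps/rev, 8x microstepping -> 80 pulses/s, 12.5 ms apart
period_s = pulse_period_s(3, 200, 8)
```

At the controller's maximum of 25k steps/rev, 3 RPM works out to 60/(3 × 25000) = 0.8 ms per pulse, i.e. 1250 Hz.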
Why do devices stop operating properly in extreme cold? I have noticed that a lot of smartphones say that they won't operate under -4 degrees F (-20 degrees C). Can anyone explain to me what happens when the phones get cold that stops them from operating? <Q> -4 F is -20 C, which is a standard low limit for chips and electrical components. <S> Some of that is just because it is very hard to test chips at low temperature, but there are real issues you can run into, which include: <S> Batteries degrade at low temperatures, depending on their chemistry. <S> The battery output voltage is lower, meaning you need more current to get the same power. <S> The battery internal resistance can increase. <S> The added resistance can heat up the board, but it also wastes power and makes the battery output voltage less stable, as it will change with current draw. <S> Thermal cycling of parts can become worse. <S> Things break when you make them cold and heat them up because of thermal expansion. <S> I believe this issue is worse at lower temperatures, possibly related to metals becoming brittle when they are very cold. <S> Chips can draw more current at low temperatures. <S> This issue compounds the other two, since more current becomes more heat, which increases thermal cycling. <S> Chip timing changes. <S> Digital circuits have special timing rules to ensure that all signals are in the right place at the right time. <S> Lowering the temperature changes all that and can create a race condition. <A> For most of these devices it's the display... LCDs don't like the cold. <S> Typically, standard LCD character and graphics modules provide a temperature range of 0°C to +50°C. <S> However, several display manufacturers offer extreme temperature models with operating temperatures of -40°C to +80 or +85°C. 
<S> There is also a wide selection of standard versions that range from -20°C to +70°C. <S> Source <S> The newer OLED types do have a much better temperature tolerance though, -40°C to +80°C. <A> Batteries dislike cold. <S> Generally all batteries lose capacity and current in the very cold. <S> (However, using them often warms them up.) <S> Lithiums have a particular problem with being charged in the very cold. <S> Also, devices are concerned about condensation occurring inside the device from humid air entering the headphone jack etc. <A> Crystal oscillators may not start up; or the crystal resonant frequency, which has a temperature coefficient, may be outside the guaranteed Automatic Frequency Control (AFC) range needed to ensure the data packets start on expected time slots even after some hours of operation and phase slipping. <A> To add my 2 cents to all the great answers (which would apply not only to electronic devices but generally to all electric devices) - the temperature drop results in a change of material resistance (metals, in particular, become less resistive). <S> While this might seem a minor thing, in industrial equipment it is one of the items accounted for. <S> Electronic devices suffer most here because many microchips rely on resistors on some of their lines being of a specific value; if that value changes, the microchip may start misbehaving or shut down completely. <A> Analog circuits can also have problems at low temperatures. <S> Resistance changes across temperature, and so do transistor threshold voltages and transconductances. <S> If a reference voltage or current goes out of spec, it can affect other analog circuits that depend on it (like an ADC or a charge pump). <S> When you're simulating a design and (later) characterizing and testing hardware, you have to pick a lower bound for temperature. 
<S> There may not be any actual problems if you go a little below that temperature, but the manufacturer can only guarantee correct operation if you stay within the tested limits. <S> That being said, for smartphones the battery and display are probably the bigger concerns, as the other answers show. <A> When you get into minus figures things start to slow down; there is a limit called absolute zero, which is 0 kelvin, or about -273.15 °C. <S> Near 0 K thermal motion essentially stops (apart from quantum zero-point motion), and components stop behaving as designed. <S> Eventually chip speeds will slow down, but not all at the same rate, so the sync is lost.
The heat caused by the extra resistance can potentially damage the battery, since you are heating up the inside while the outside is cold, creating a thermal gradient which adds mechanical stress.
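The point about material resistance changing with temperature can be made concrete with the usual linear model for copper (a sketch; the 0.00393/°C coefficient is the standard approximate value near room temperature, and the 1 Ω figure is illustrative):

```python
ALPHA_CU = 0.00393  # approximate temperature coefficient of copper, per degC

def copper_resistance(r_at_20c, temp_c):
    """Linear estimate of a copper conductor's resistance at temp_c,
    given its resistance r_at_20c at 20 degC."""
    return r_at_20c * (1.0 + ALPHA_CU * (temp_c - 20.0))

# A 1.00 ohm conductor at 20 degC falls to about 0.84 ohm at -20 degC -
# roughly a 16% change, enough to shift a bias point that assumed
# room-temperature values.
cold_r = copper_resistance(1.00, -20.0)
```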
Installed 555 timer socket backwards - can I reverse the chip in the socket? I installed the socket for an NE555 timer IC backwards on a PCB. Can I just reverse the chip in the socket to cure the mistake? <Q> The real question is why you'd use a socket for a cheap IC like a 555 timer, or why you'd even use a thru-hole part in the first place, but I digress. <S> OK, closing my eyes and trying to imagine back to the Pleistocene. <S> It depends on what you mean by "backwards". <S> If you actually mirror-imaged the footprint, it can still be done, although it's a bit more tricky. <S> Basically, you bend all the legs 180°. <S> That's conceptually easy, but the trick is to not break them off in the process. <S> Do it right so you only need to bend them once. <S> Use a flat screwdriver or something to bend each leg the other way right where it comes out of the package. <S> It's probably a good idea to somehow mark the pin 1 corner on what used to be the bottom of the chip but is now the top. <S> That avoids the next thing you'll do wrong, which is to plug in the flipped chip rotated. <S> Another trick for a mirrored footprint is to mount the unmodified chip on the bottom of the board instead of the top. <S> Back to your regularly scheduled time and place. <A> So long as the IC pins connect to the right PCB tracks, the orientation of the IC socket is irrelevant. <S> Look at where pin 1 of your 555 IC is and make sure it's connected to the pin 1 solder pad. <A> You might, given a DIP 555, carefully bend the 8 leads the opposite way, and insert into the socket.
If you simply started counting pin 1 in the wrong corner, then all you have to do is rotate the chip.
Can I use my PC SMPS 12V 450 Watt for my embedded system work? I am a computer programmer working on an image processing project which runs on an ARM embedded system board. The specification says it needs a 12 V, 1 A supply. I have a computer SMPS which gives me constant 5 V and 12 V supplies and is rated at 450 watts. It is also written that up to 24 A of current can be drawn from it, so here I am confused. Can this heavy amount of current burn my board? I also recall from Ohm's law that if the voltage is constant then the current will vary according to the load. Please give me some direction on whether I can use a PC PSU for my embedded work. Thanks <Q> The PSU rating just means that up to 24 A is available to be drawn from the 12 V supply. <S> However, that means that the PSU will supply up to 24 A into an overload or short circuit from your board or the 12 V connections coming out of it. <S> The PSU protection will kick in shortly (something like tens of ms) after the overload is detected and either limit the output current to 24 A or, most likely, put the PSU into 'hiccup' mode, which you'll find details of on the interweb. <S> This burst/ongoing 300+ W of power is quite likely to burn out tracks on your PCB. <S> So, in short, if you use it, be very careful not to let your board short-circuit the 12 V supply, otherwise you are very likely to damage that board. <A> Yes. <S> But you shouldn't. <S> An ATX power supply is for finished and tested products. <S> This supply is 24 times more powerful than your requirements. <S> It also does not have any protections for prototype work. <S> It only has protections for itself. <S> If you accidentally make a short, you will see sparks, flames and/or smoke. <S> And you'll be replacing chips or getting new boards. <S> A much safer way to do this would be to use a lab power supply. <S> With configurable voltage, current limit and even over-current trip if you are feeling fancy. <S> Or just an appropriately sized wall wart if you're feeling cheap. 
<S> But not a 2000% overrated ATX power supply. <A> The current is completely determined by your load, so if you have a voltage source with a maximum current of 24 A, it can provide anywhere from milliamps up to a maximum of 24 amps, depending on the load you connect. <S> Therefore, if you need 24 V at 1 A there is no difference between using a 24V-1A power supply and a 24V-100A power supply. <S> It is highly recommended to use fuses when you are using a high-amperage power supply. <S> You can easily put a wire fuse in the input voltage line.
But if, for any reason such as a mechanical or electronic fault, you get a short circuit on your board, the higher available current means your board is definitely going to burn up: tracks on your PCB will be destroyed, and the board and wires might even catch fire. Your board will draw what it needs from the 12 V rail, up to 1 A from what you've said.
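A quick numeric check of the point made above, that the load, not the supply rating, sets the current (a sketch with illustrative numbers; the 12 W board figure is just an example):

```python
def load_current_a(power_w, volts):
    """Current a load draws at a given supply voltage (I = P / V)."""
    return power_w / volts

board_draw_a = load_current_a(12.0, 12.0)  # a 12 V, 12 W board draws 1 A
# The 24 A rating is only what the PSU *can* source.  Into a short it
# will, dumping roughly 12 V x 24 A = 288 W into the fault until the
# protection trips.
fault_power_w = 12.0 * 24.0
```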
Why do computer graphic cards use 8 pin (4 positive and 4 negative wires) connector instead of connector with only single positive and negative wire? I've always wondered why computer graphic cards use an 8 pin (4 positive and 4 negative wires) connector instead of a connector with only a single positive and negative wire. <Q> This allows multiple cheap connectors and wires to be used, instead of single thick wires and more expensive high current connectors. <S> Multiple thin wires are also more flexible than thick wires. <S> Multiple pins on the circuit board ease the problem of tracking high currents on a PCB. <S> It's not just about power handling, it's the voltage drop on the cable that needs to be well controlled. <A> To allow for more current than a single connection can reliably provide. <S> For lower overall resistance, and therefore lower voltage drop. <S> For redundancy as individual connections increase in resistance due to oxidation, dirt, etc, as they age. <A> According to the PCIe Electromechanical Specifications , power connectors are specified to deliver +12 V, and a 2x3 connector can deliver 75 W, or 150 W for a 2x4. <S> High power PCIe devices (GPUs generally) can pull up to 300 W through a combination of PCIe power connectors and 25 W from the edge connector. <S> In chapter 3 of the document, it mentions: <S> The +12V delivered from the standard x16 edge connector and the additional +12V(s) delivered via the dedicated 2 x 3 and/or 2 x 4 auxiliary power connector(s) <S> must be treated as coming from independent separate system power supply rails. <S> The different +12V input potentials from different connectors must not be electrically shorted at any point on a PCI Express 225 W/300 W add-in card. <S> The power pins of a single 2 x 3 or 2 x 4 auxiliary power connector can be shorted together. 
<S> So per-connector there may be no difference according to spec if all the wires were combined in the harness, though splitting it out to all the pins of the Molex connector would be costly. <S> I don't know how the wiring is actually done in the power supplies, but the supplies are generally split into a few rails. <S> Combining them closer to the point of load might help with some regulation. <S> On the GPU there are usually several (10-20) voltage phases that combined can deliver ~200 A @ 1 V into the GPU core and memory. <S> As an example, the Nvidia Titan V has a 2x3 and 2x4 connector and can produce up to 250 W of heat. <S> GamersNexus did a teardown of one and looked at the power delivery circuitry: <A> Several reasons, all stemming from the fact that GPUs draw a lot of current. <S> A GPU might draw 100 W through one of those connectors. <S> At 12 volts, that's 8 amps. <S> High current through a wire will produce a voltage drop from one end of the wire to the other, proportional to the resistance of the wire. <S> This results in a lower voltage at the GPU and lost power in the wire, resulting in less efficiency in the power transfer and more heat in the case. <S> Lowering the resistance means less voltage drop and more efficiency. <S> Another solution is to add an additional voltage sense wire that does not carry current so the power supply can sense and adjust the voltage at the load, compensating for voltage drop in the wires. <S> Another factor in play is mechanical concerns. <S> Thick wires are less flexible than thin wires, resulting in more difficult cable routing, bend radius concerns, and increased stress on connectors and circuit boards. <S> Using multiple pins also increases contact surface area, decreasing contact resistance. <S> Parallel current paths also provide some level of redundancy. 
<S> The 8 pin PCIe power connector uses both of these solutions: three parallel power pins, three parallel ground pins, and a pair of voltage sense pins. <S> Using three parallel power pins lowers the voltage drop through the cable while also providing good flexibility while the sense pins ensure that the load is actually receiving 12 volts. <A> The answer is more mundane, the 2x3 and 2x4 connectors are available in "all" systems as they are the default output on the power supply. <S> And as they are available they don't need special power supplies to have 75W or 100W outputs. <S> Also the lower individual power might simplify the transformers on the GPU card, using smaller components.
The voltage drop can be mitigated in two main ways: one is by lowering the resistance of the wiring, either by using larger wires or by placing multiple wires in parallel.
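The parallel-pin argument can be sketched numerically (the 20 mΩ per wire-plus-contact path is an assumed illustrative figure, not taken from the PCIe spec):

```python
def ir_drop_v(current_a, path_resistance_ohm, n_parallel_paths):
    """IR drop when the current is shared across identical parallel paths."""
    return current_a * path_resistance_ohm / n_parallel_paths

current = 150.0 / 12.0                  # 150 W at 12 V -> 12.5 A
single = ir_drop_v(current, 0.020, 1)   # one path: 0.25 V lost in the cable
triple = ir_drop_v(current, 0.020, 3)   # three pins share it: ~0.083 V
```

Tripling the paths cuts the drop (and the heat dissipated in the harness) to a third, which is exactly why the connector carries several parallel 12 V pins.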
Voltage Spikes in MOSFET H Bridge As a part of my effort to build a pure sine inverter I made the following MOSFET H bridge. This was connected to a load through an LC filter where PWM signals are given as in the diagram below. The PWM frequency is 16 kHz with Q1 & Q3 switching at 50 Hz. While testing, after working for a few minutes the fuse blew; Q1 and Q2 failed, short-circuiting Drain & Source. For debugging I removed the filter, connected a resistive load directly and reduced the supply voltage to 170 V. I checked the gate signals of the individual MOSFETs, which seem fine. However I got the following waveform for Vds of Q2. There are a few spikes reaching up to 380 V when the supply voltage is 170 V. I guess that at a supply voltage of 325 V these spikes will be much larger, damaging the MOSFETs. What is the reason for these spikes? Can they be minimized by an RC snubber? If so, how can I calculate values for R & C? Any help is much appreciated. Thanks in advance. Update Further zooming in I found that the spikes occur when the opposite MOSFET is turned on. Following is the Vds of Q2 (yellow) and Vds of Q4 (cyan). I further reduced the supply voltage to 90 V. <Q> There are three main causes for what you are seeing; which one, or which combination, depends on the specifics of your setup. Probe pickup <S> There is the possibility these spikes are not real & are artefacts of how you are probing. <S> If you are using a x10 or x100 probe then the clip is EARTH referenced. <S> If you are connecting this to the SOURCE of the lower FET this will not be the same EARTH as the scope & thus there will be some bounce. <S> Not the same EARTH? <S> But the circuit indicates the SOURCE of Q2, Q4 are EARTH. <S> In practice they are not, simply due to stray inductance - not all earths are equal. <S> It could be pickup due to a loop you created at the point of measurement. <S> Poor power circuit layout <S> Below is what you believe your layout is ( I have added the DC-link capacitor because I really hope you have one...) 
<S> In practice it is slightly different. In physically constructing your H-bridge you may have chosen convenience of placement over suitability for current flow. <S> The stray inductances in RED are some that will compound switching overshoots as you force-commutate the current. <S> Gate drive and device specifics <S> Depending on the specifics of your gate drive you may be driving the MOSFETs too hard (gate resistor too low), or the layout is poor such that the driver can't keep the device off. <S> Voltage overshoot is an expected byproduct of forced commutation <S> Finally... there will always be some voltage overshoot because of the existence of stray inductance, just as there will always be reverse recovery current. <S> Improvements in layout can improve this, slowing down switching times can equally improve it, or if it really cannot be reduced... snubber circuitry can be added to dissipate the additional energy. <S> As to why you lost control... <S> While testing after working for a few minutes the fuse blew, Q1 and Q2 failed, short-circuiting Drain & Source. <S> That may or may not be related to the observed overshoots BUT aspects of your control could also contribute: deadtime, minimum pulse width etc. <A> I would also place fast-acting Schottky diodes across the FETs to protect them. <A> The answer was simple, but it took 6 burnt-out MOSFETs to find out. <S> The problem was with the bulk capacitor. <S> It was initially placed a bit too far from the H bridge. <S> Moving it closer, right next to the bridge and the filter capacitor, seems to solve the issue. <S> Thanks everyone for your contribution.
I would place a decoupling capacitor after the fuse as some fuses can have a high inductance.
What gauge wire do I need to connect a 3.7V 750 mAh LiPo battery? I originally thought that I could use any type of thin wire, but after some research, it seems that different voltages and currents require different size wire. Am I correct with this? My question is, what type of wire is recommended for use with that type of battery? Thanks! <Q> The conductor thickness needs to be chosen relative to the current flow, i.e. a higher current needs thicker conductors, and as voltage increases the insulation tends to be thicker / better. <S> However, the voltage drop or losses also mean that a thicker conductor can be necessary. <S> You need to specify the maximum current to flow; then a suitable size can be selected. <A> Typically, you get wire size from an ampacity table. <S> (Google "ampacity table"). <S> There are two main things to consider. <S> First, what is the normal current? <S> Usually in short wires this is not a problem unless the discharge rate is very high. <S> But you can look up the resistance of the wire (lots of charts online), per length, and calculate the resistance of your wire harness. <S> Then multiply resistance by current to get voltage drop. <S> Often, I choose to allow 0.05 to 0.1 V of drop in the wire, but it depends on many details. <S> Second thing is fault current. <S> What is the highest fault current that could possibly flow for an extended period? <S> For example, if you have a 1 A fuse inside the battery pack, it can pass 1 A for an extended period. <S> It would be a good idea to choose a wire that will not overheat with, say, 1.5 A. <S> Then, no matter what fault occurs on your load, at least the wire insulation will not turn black or start a fire. <S> If the battery does not incorporate a fuse or other over-current protection, you should use a different battery that does. <S> Lithium ion battery packs should have a built-in protection circuit that limits charge and discharge current and voltage. 
<S> This is widely known in industry, so there is a good chance that the pack you are considering does have a protection circuit built into it. <A> The size of wire needed always varies with the rated current it must carry. <S> A larger flow of electric current needs a wire with a larger cross-sectional area. <S> You can compare this with the wires used to supply the sockets in a sitting room: their cross-sectional area is smaller than that of the wires feeding a kitchen socket. <S> So you are correct about that, because the cross-sectional area varies according to the rated current.
Wire must be sized adequately to carry the normal current without excessive voltage drop.
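The drop-budget check described in the ampacity answer can be sketched like this (the 33 mΩ/m figure is an approximate value for 20 AWG copper and the 15 cm length is illustrative; check a wire table for your actual gauge):

```python
def harness_drop_v(current_a, one_way_length_m, ohm_per_m):
    """Round-trip IR drop: current flows out and back, so length doubles."""
    return current_a * (2.0 * one_way_length_m) * ohm_per_m

drop = harness_drop_v(1.0, 0.15, 0.033)  # 1 A over a 15 cm harness
within_budget = drop <= 0.1              # the 0.05-0.1 V allowance mentioned
```

Here the drop is about 10 mV, comfortably inside the budget, which is why thin wire is usually fine for short, low-current battery leads.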
Do I need to learn AC circuits for working with micro-controller projects? I am a Computer engineering student. I am very interested in micro-controllers, FPGA, and digital design projects, and actually, I am good at these topics. In addition to that, I have a good knowledge of DC circuit analysis. My problem is in AC circuits. I am just not that good at it. My question is: would I really need AC circuit theory as a computer engineer interested in DC electronics applications only (digital circuits, micro-controllers, FPGA, etc.) I think once you feed a circuit with DC current through a power supply, from here on it is just DC stuff. If the answer is “yes I need AC”, then how much of it (the required topics), do I need? <Q> Yes, you need AC. <S> High-speed microprocessors like those used in boards such as RaspberryPi are running at high speeds — not DC at all — and you need to have a solid grounding in things like transmission line theory if you're going to connect one to an external memory chip successfully on a scratch-built board. <S> Not to mention high-speed <S> I/O interfaces such as Ethernet, WiFi, LVDS, etc. <S> Power conversion circuits that are typically used in such boards also require AC theory in order to calculate things like stability and transient response. <S> If you really don't want to get involved with that <S> and you prefer to focus on the digital side of things, I would recommend that you work with existing boards and evaluation modules that are available from various vendors. <S> There are hundreds of them out there, and they take care of all of that nasty "AC stuff" for you. <S> You mention "Arduino" in passing; most things that fall into that category use much slower microcontrollers (tens or hundreds of MHz, rather than GHz+). <S> Anyone and his brother can design one of those boards — and for the most part, they already have! <S> The only reason to do a custom board would be to make one for a complete specific application. 
<A> In my experience, there are two things people could mean when they talk about AC systems: power electronics or frequency-domain analysis. <S> You can live a pretty happy life with just a basic understanding of power electronics and not doing much with transformers or wall voltages. <S> There are a lot of opportunities for folks who excel at power system designs, but fortunately they've also made app notes and reference designs we can just copy. <S> Frequency-domain stuff, like understanding the functions of a filter or how a waveform will be changed by a capacitor, is a fundamental part of electrical/computer engineering, and it helps you understand why you don't want your 200 MHz DDR RAM to be too far from your chip, or why one FPGA layout works and another doesn't. <S> It really depends what kind of work you want to do. <S> In my experience, there are CS folks who just want to work inside the chip, and they tend to be stronger on the OS and OOP side. <S> The EE/CE folks can focus inside the chip, but are expected to do some schematic and board design work as well, and that tends to get into frequency-domain work. <S> I've worked with folks who shut down when I started talking about impedance and frequency response, and it meant I could not work with them on very technical stuff. <A> Do not think that you can always select what you are expected to do. <S> The narrower your comfort zone is, the more likely you will get pulled out of it by others who won't waste one second thinking of your comfort; they only want to get the urgent job done, done right now and without excuses. <S> Obviously some part of you has already realized this and made the rest of you suspect that you cannot really exclude an understanding of AC. <S> You don't have to be a brilliant designer of AC (including radio and pulse) circuits, but you should understand them, and understand them well. <S> Otherwise you cannot make any relevant decisions on the construction of the actual circuitry. 
<A> As you develop boards for sale, the EMC/EMI/susceptibility goals will require you to understand AC concepts.
If all of your work is purely logical, without any need to analyze or design actual electric circuitry, you can perform well without doing AC circuit calculations or understanding them more deeply than an ordinary user.
If I lower the volts on a computer fan will the needed amps increase? I'm new to electrical engineering. (updated) I have a computer fan that I am using for a school competition. The rules state that the voltage used can be no higher than 9 volts, but most fans are 12 volts. The fan that I have is rated for 12 volts at 4.5 amps. So if I used a 9 volt or 7.2 volt battery, how many amps should that battery be? How do volts affect amps in motors? I am new to all this and so a simple answer would be appreciated. Also, I need it only to run for about 10-15 minutes at most, so if you have any battery recommendations that would also be helpful. The rules for batteries are no lithium or lead allowed. The rules are very prohibitive. Newly Added Information I am building a small-scale hovercraft that has to be as light as possible. However, I am awarded points for the amount of weight it can carry. I am using this fan to fill the hovercraft's skirt. I have to use a computer fan because the rules state I can only use brushless motors if they are in a computer fan; otherwise motors have to be brushed, which I have had issues with. I found this fan in my house and found it very powerful. Here is a link https://www.digikey.com/product-detail/en/delta-electronics/PFR0912XHE-SP00/603-1707-ND/3078695 Here is a full copy of the rules, it's under hovercraft http://api-static.ctlglobalsolutions.com/science/SO_C_2018FINAL.pdf <Q> a simple answer would be appreciated <S> Okay. <S> if I used a 9 volt battery or 7.2 volt how many amps should that battery be? <S> You clearly mean ampere-hours. <S> If it uses 4.5 A when being fed 12 V, then you can assume that with a 9 V battery it will draw roughly \$4.5\text{ A}\times\frac{9\text{ V}}{12\text{ V}}=3.375\text{ A}=3375\text{ mA}\$. <S> Let's assume you will run it for 15 minutes; then you will need a 9 V battery with at least \$3375\text{ mA}\times\frac{15\text{ min}}{60\text{ min}}=843.75\text{ mAh}\$. 
<S> Considering that you will be discharging it at 4 times the capacity, I'd aim for a little bit higher capacity, say 2000 mAh or even higher, or put several 9 V batteries in parallel, because a high discharge rate (several amperes) will lower the effective capacity. <S> How do volts affect amps in motors? <A> Computer fans increase load current with voltage above the start threshold. <S> Most PC fans are <= 5 W, so I suspect you have an error in the decimal place. <S> From the red graph line I expect your case fan is identical to this, with 450 mA at 12 V and 300 mA at 9 V. <A> For an ordinary fan, the amps will be roughly proportional to the voltage, i.e. the current will decrease with decreasing voltage. <S> Not all fans can run at lower voltages than those specified, so you should consult the datasheet. <S> In any case you should expect the fan to run at a lower rpm.
If you increase the voltage, then the current will also increase.
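The capacity estimate above can be sketched as a quick calculation. This is a minimal sketch assuming, as the answer does, that fan current scales linearly with voltage; a real fan's current curve should be checked against its datasheet.

```python
# Estimate the battery capacity needed to run a fan below its rated voltage,
# assuming current scales linearly with voltage (a rough approximation).

def required_capacity_mah(rated_a, rated_v, supply_v, runtime_min):
    """Return the minimum battery capacity in mAh for the given runtime."""
    current_ma = rated_a * (supply_v / rated_v) * 1000  # scaled draw in mA
    return current_ma * runtime_min / 60                # mAh for the runtime

print(required_capacity_mah(4.5, 12, 9, 15))  # 843.75, matching the answer
```

As the answer notes, the real-world requirement is higher, since heavy discharge reduces a battery's effective capacity.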
How is the code written in ARM7 compatible with ARM9? I'm studying ARM families. In the above image, it is written that code supported by ARM7 can be migrated to ARM9 and others. ARM7 uses the Von Neumann architecture (a single bus for data and instructions) and a 3-stage pipeline (fetch, decode and execute). ARM9 and others use the Harvard architecture (separate buses for data and instructions) and a 5-stage pipeline (fetch, decode, execute, memory and write, for ARM9). Also, ARM7 doesn't support a memory management unit, but the others do. How can the code be compatible if the processors use different architectures and pipelines? Won't the architectures have any effect on the code? I'm assuming that, as ARM9, ARM10 and ARM11 have the same architecture, code can be compatible between them, but ARM7 is different from the other processors. Hence, one must make some changes to the code before migrating because of the different architectures. I'm wondering whether that is correct or not. <Q> A processor is said to be code-compatible with another if their instruction sets are compatible. <S> That is all there is to it. <S> Now, instruction sets can be made compatible whatever the pipeline architecture is. <S> It is true that pipelining can have a consequence on the instructions if you only target execution speed and core silicon area, but there are always workarounds if you need to ensure compatibility with some existing processor. <S> It may complicate the core, but there is always a way. <S> Look at how the architecture evolved from the 8086 to the newest Pentiums. <S> Yet old code can still be executed. <S> Regarding the Von Neumann / Harvard differences, they can also be made to have minimal impact if the code and data busses actually end up at the same physical memory blocks with the same addresses (which is the case on all ARM implementations I have seen, except maybe for peripheral memory zones).
<S> There may be an impact in corner cases, like the need to call specific instructions when the code is self-modifying, but in normal cases you won't notice. <S> Regarding memory management, that is another story. <S> This has an impact at the OS level. <S> The MMU is like an additional peripheral, whose configuration has an impact on memory layout, but it doesn't change the instruction set. <S> An algorithm is coded the same way whether there is an MMU or not. <A> The idea that "code can be migrated" means that the instructions will produce the same end result. <S> The architecture or number of pipeline stages does not affect that. <S> e.g. the code for the instruction: add r0,r1,r2 <S> will be the same on both machines and will produce the same result: r0 ends up being the sum of r1 and r2. <S> The latency may be longer, e.g. on an ARM7 it takes 3 cycles and on an ARM9 it takes 5 cycles. <S> But that will be the case for all instructions, so the net result will be the same. <S> The real time depends on the clock speed. <S> Thus the ARM9 may be faster despite taking 5 clocks because e.g. the ARM7 may be running at 100 MHz and the ARM9 at 3 GHz. <S> The MMU on the ARM9 will be 'transparent' after a reset, so you will not notice that it is present, at least as long as you don't program it, which ARM7 code will not do, as there should be no code touching the MMU. <S> A Harvard architecture does not mean code is executed differently. <S> In fact you still need to decode the instruction before you know which data to fetch/write. <S> It only allows the next instruction to arrive and be decoded at the same time as the data is read/written. <S> Having said all that, I remember there was an issue with branch instructions, but that may have been when I transferred assembler to an A53 core. <A> ARM9 and others use the Harvard architecture (separate bus for data and instructions) <S> This is inaccurate.
<S> A Harvard architecture CPU has separate memories for code and data; this is not the case in any implementation of the ARM architecture. <S> There are separate busses for instructions and data in some implementations, but they are always connected to the same memory. <S> Won't the architectures have any effect on the code? <S> The pipeline is an implementation detail. <S> (The exceptions are all unusual, like self-modifying code.) <A> All that "architecture" refers to here is how the CPU is wired to memory - most of the instructions and their encodings and the registers are the same between the two chips - the only way you might run into trouble with the difference between a Harvard and other architectures, or the number of pipe stages, is if you were writing code into memory and then executing it ... you just have to be careful about invalidating caches <S> Some things like exceptions may be different
It does not affect the programmer's model of the CPU -- in almost all circumstances, the same code will run the same way on both implementations.
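The latency-versus-clock point in the answer can be illustrated with a toy calculation. The 100 MHz and 3 GHz clocks below are the answer's own hypothetical figures, not real part specifications.

```python
# Per-instruction latency in nanoseconds: pipeline depth (cycles) over clock rate.
def latency_ns(cycles, clock_hz):
    return cycles / clock_hz * 1e9

arm7 = latency_ns(3, 100e6)  # 3-stage pipeline at 100 MHz -> 30 ns
arm9 = latency_ns(5, 3e9)    # 5-stage pipeline at 3 GHz   -> ~1.7 ns
print(arm7, arm9)
```

The deeper pipeline costs more cycles per instruction, but the faster clock more than compensates, which is why pipeline depth alone says nothing about real execution time.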
Why is "trimming" a carbon composition resistor not possible? I read that a carbon composition resistor can't be trimmed and that's a big reason why the tolerances for them are so high. What I don't understand is why I can't just carefully cut some of the resistive material out to increase the resistance value. <Q> You can. <S> In fact, long ago, when resistors were not nearly as readily available as they are today, it was not an uncommon technique. <S> Seal the cut with lacquer and mount the resistor such that there is no stress on its leads. <S> Of course, today we have access to precision resistors at very low prices, so there's no longer any reason to do this. <S> (I'm not sure why the other answers and comments are talking about film resistors, since you asked specifically about composition resistors, which are constructed differently.) <A> Figure 1. <S> Carbon film resistor construction. <S> Source: Resistor Guide . <S> Many years ago I was stuck for a shunt resistor for a Fluke clamp-on AC current transformer to convert the current to a voltage for connection to a Fluke Scopemeter. <S> The required resistor was in the order of 10 or 100 Ω (I don't remember). <S> I had a selection of carbon film resistors and a calibrated ammeter available for the exercise. <S> I found that my chosen resistor was slightly low in value, but I reckoned that the track width of the carbon film in the lower value resistors would be wide and with a larger pitch, as shown in the lower case in Figure 1. <S> I used a triangular file to cut a very small notch across the resistor somewhere around the middle while monitoring the effect. <S> Once I got the Scopemeter reading satisfactorily in agreement with the calibrated meter I stopped! <S> A little varnish was applied to keep the resistor sealed. <S> Figure 2. <S> Location of trimming notch.
<S> My equipment was going to be used in a factory with room temperature controlled to within ±1.5°C, so I wasn't concerned about temperature effects. <S> Why is “trimming” a carbon composition resistor not possible? <S> I demonstrated to myself that trimming is possible, but the downside is that I created a hot-spot on the resistor. <S> This didn't matter in my application, but I'm sure it would in others. <S> Carbon film and carbon composition are two completely different technologies. <S> Carbon composition resistors are made from a solid core of resistive material which is much more temperamental and varied in tolerance. <S> – Tom Carpenter Tom is correct, and if I was ever aware of the difference I had forgotten it. <S> Figure 3. <S> They are made out of fine carbon particles mixed with a binder (for example clay). <S> After baking it has a solid form. <S> Although carbon composition resistors are widely applied in circuits, the majority of resistors are nowadays made by deposition of a metal or carbon film over a ceramic carrier. <S> Source: Resistor Guide . <S> The notching technique should work with this also, but I suspect that sealing it when complete may be even more important. <A> It's not that "trimming" (in this sense) is not possible, so much as it's not economical . <S> It's cheaper to use a different technology if more precision is desired, plus the temperature stability of carbon comp resistors is sufficiently poor that higher precision doesn't make much sense. <S> A further detail: <S> I have no idea if this is still the case, but at one time at least some carbon comp resistors were manufactured with a "cut once, measure twice" technique. <S> (Yeah, the "cut" before the "measure".) <S> That is, the resistors were manufactured with the approximate resistance range desired, then measured and marked after manufacture. <S> 5% resistors used a slightly more temperature-stable composition and were measured more carefully.
<S> (Of course, some specific values are more commonly used than others, but the manufacturers could "tune" a batch to center values around some target, even if not all units hit that target.) <A> I expect that, unlike carbon film and metal film resistors, carbon composition resistors can't be trimmed during manufacture. <S> In the past, carbon composition resistors were trimmed by hobbyists by filing into the resistor element. <S> This would be done to make accurate voltage dividers and filters, as precision resistors were expensive (and even 5% resistors were considered rare and expensive at that time).
Carbon composition resistors (CCR) are fixed form resistors. Filing a notch into a composition resistor will increase its value, but it will also have detrimental effects on its mechanical strength and long-term reliability.
Can anybody explain the heat produced in the circuit during the operating and non-operating states? I have designed a motor monitoring circuit (agricultural purpose) with a step-down transformer, rectifier, relay module, and processor. I planned to run the module 24/7, but I'm afraid that heat will affect the circuit. The circuit works like this: when the relay module is triggered, the circuit gets closed and the processor handles the remaining work. My question is: if the relay is in the off state (the load is connected to NO, no load connected to NC), my transformer and rectifier circuit are still connected to the power supply; will they produce heat or not? <Q> It would be good if you added a schematic so we know what you are talking about. <S> Generally though, the transformer will create a little heat when under no load due to losses in the transformer, but not so much heat that you will be able to feel much. <S> Similarly, the rectifier has a little leakage current that makes some minimal heat. <S> If it is still getting hot, then either the transformer is not sized right or there is something else going on. <S> With a large load, this can produce significant heating. <S> UPDATE: <S> Thanks for the schematic. <S> Despite the fact the diode is in backwards, this answer remains basically unchanged. <S> The voltage regulator will, like the rectifier, consume minimal power ( ~24mW ) because of the eight or so milliamps it needs to function when the relay is open. <A> The transformer, diode bridge, and regulator all will produce heat even if everything else is off or disconnected. <S> Current flows through the transformer primary even if the secondary has no load at all, and the wire resistance converts some of that current into heat. <S> The regulator has about 5-10 mA of static current from the input pin to the ground pin even if there is nothing attached to the output, so this produces some heat.
<S> Also, that static current comes from the transformer secondary and through the diodes, so there is more heat produced in those elements. <S> All of this adds up to very little heat. <S> The surface temperatures of the components will vary depending on where the circuit is mounted and how much the ambient air is moving around it, but it should be way below anything uncomfortable to touch. <S> Depending on the relay coil and processor current requirements, the regulator probably will need a heatsink to prevent overheating when the system is on. <A> The regulator draws about 5mA with no load. <S> That's less than 100mW in the regulator even allowing for the transformer having bad regulation and supplying closer to 20V than the ideal 15VDC. <S> It will barely feel warm. <S> With the relay 'on' (assuming you fix that diode direction!) the regulator will see considerably more dissipation. <S> If the relay is a 9V 40mA type (360mW coil) the regulator could see an additional 0.04 * (15V-9V) = 240mW. <S> If the relay coil plus the processor draw is less than about 100mA you won't even need a heatsink, assuming a TO-220 package. <S> If it is more, you may also need to rethink the 1000uF filter cap. <S> The diodes will run cold. <S> The transistor will see some dissipation during the (slow) switch-off. <S> Worst case is Ix*2.25 where Ix is the nominal coil current. <S> So with a 100mA coil the peak dissipation could be 225mW, which is not worrisome even for a TO-92 transistor. <S> Obviously with the flyback diode incorrectly installed as shown, the transistor and regulator will tend to overheat. <S> I presume your "processor" contains an internal regulator so it runs from 9V.
When the thing is driving the load of course there will be more current passing through the transformer and regulator and heat will be produced due to the losses in the transformer and the voltage drops across the diodes in the rectifier.
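The dissipation figures quoted in the answers follow from the basic linear-regulator drop formula. A minimal sketch; the 15 V input, 20 V worst-case input, 5 mA static draw, and 9 V / 40 mA relay coil are the answers' example figures, not measurements.

```python
# Power dissipated in a linear regulator passing i_load from v_in to v_out.
def regulator_loss_w(v_in, v_out, i_load_a):
    return (v_in - v_out) * i_load_a

quiescent = 20 * 0.005                      # ~0.1 W worst case, 5 mA static draw
relay_on = regulator_loss_w(15, 9, 0.040)   # 0.24 W extra with the relay energized
print(quiescent, relay_on)
```

With the relay off, the loss is a fraction of a watt, which matches the observation that the parts will barely feel warm.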
Any possibility of a dead short in a trim pot? I want to add one 1K trimpot to my ADC board to test several analog channels. I thought of connecting the two ends to VCC and GND and the middle pin (wiper) to the analog input. But I am not sure if I need to add an additional resistor on one of the sides to protect against a possible short in case of a faulty trim pot. But in that case, I guess I will not get 5V (or 0V) at the extreme ends, and that will be an issue for me. So my question is: Is there any possibility of defective trimpots making a dead short between any two end points? Since pots are mechanical parts, does wear and tear cause any issues? Since the resistance between the ends is fixed, I am hoping that a dead short may not occur and no extra safety is needed. But I thought it better to ask this forum. Has anyone seen a dead short between the ends? <Q> It's highly unlikely that you will end up with a dead short - the typical failure mode for a trimmer or pot would be open circuit due to an oxidised wiper or broken track. <S> The reason you would typically see a series resistor in a potentiometer circuit is for circuits where having zero resistance between the wiper and one or other end of the track would cause issues. <S> In your scenario where each end is connected to a supply rail and the wiper to an ADC input, this will not be an issue. <A> Augmenting Tom's answer . <S> The assumption here is that the pot is not being over-driven. <S> If it is, then there is the possibility that it could burn out and molten parts could cause a short. <S> There is also always the potential for contamination from the introduction of foreign conductive objects or fluids. <S> (Note: Overuse can cause some wear and conductive dust accumulation.) <S> Bad manufacture can of course also cause a short. <A> A look at the older open-frame type trimmer pots may allay your fears. <S> Figure 1. <S> An open-frame trim-pot.
<S> Note the insulating base (white) and the carbon track (brown) with the wiper sprung against the track. <S> (The dimple at the far side of the pot is making contact with the track and in the photo is at about mid-position.) <S> It should be clear from the photo that end-to-end failure (between the two near pins) is most unlikely to be a reduction in resistance. <S> It is far more likely that the track will fail due to excessive wear or power dissipation and that the resistance end to end will increase. <S> If the adjuster is abused it could potentially fall in a position to short out the two end pins. <S> This seems unlikely.
However, if it works out of the box, and is properly rated and kept clean and dry, risk of a short is extremely unlikely.
Why are "ice cube" PCB mount power relays pinned out so that the COM pin is between the coil pins? If you have used "ice cube" type PCB mount power relays before, you're probably familiar with the de facto standard pinout for them: Why is this such a standard pinout though? It's clearly not an optimal pinout -- positioning the common terminal between the coil terminals forces you to put an isolation slot in the PCB in order to obtain reasonable creepage distances, and severely constrains the clearance distance available as well. It's also not something unique to cheap Cheese-shop-specials either: the Omron G5LE series uses this pinout, and so do equivalent relays from TE/P&B (ORWH) and Panasonic (JS1). You need to go to much costlier parts such as a Tyco PCH or RZ or Panasonic JW1 in order to get something that puts the common pin on the same side of the relay as the other contact pins. Is there some sort of internal construction detailing that makes this type of relay unsuitable for mains isolation to begin with? Or why can't the relay manufacturers bring the common terminal out on the "correct" side of the relay to allow an isolation barrier to be established? <Q> It's only to make the relay cheap. <S> It's sub-optimal internally (electrically) as well as externally <S> (pinout). <S> On the other hand it's cheap and the relay can be made reasonably sensitive (360mW typically for that construction). <S> One of the disadvantages of this construction is that the contact current always flows through the flexure, so a large surge (say to blow a fuse or circuit breaker) can anneal the spring and affect the operation permanently. <S> Below are a couple of photos of one <S> I did a tear-down analysis on a few years back. <S> The common pin is naturally located at the other end from the contacts. <S> They could have moved the coil pins to the front or back <S> but either way they are close to the contact potentials unless the relay is made wider. 
<A> Form C has a very good reason for this layout. <S> Spoiler alert. <S> Sorry, no answer here, but good history on other aspects: <S> http://www.esterline.com/powersystems/ProductSupport/DesignReference/RelayHandbook.aspx <S> The 1 Form C defines the SPDT contact arrangement, with “Form C” meaning Break Before Make in the transition. <S> e.g. Vbat and 0V. <S> Although line voltage surface creepage can occur on dusty moist surfaces, the breakdown threshold may be improved with a 1 ~ 2 mm slot air gap between the coil and grid-connected contacts. <A> I agree with you, that pinout is horrible. <S> It likely has to do with the way the moving arm is hinged inside the relay. <S> The hinge is on the left side of your image, directly above the common pin. <S> As such, wiring the pin on the other side would need some sort of additional internal connection. <S> The latter of course just moves the isolation issues inside the relay. <S> I can't seem to find a picture of one opened up though.
For this reason, the moving pole is wired as far away from the contact throws as far as possible to mitigate short circuit currents to avoid a Make before Break in an application where this could cause a fire from followon current if the Throws were bridged. The coil-to-contact breakdown voltage and capacitance is inferior to that of better relays.
Is the rise time of the output of a logic IC or optocoupler independent of the rise time of the input? Is the rise time of the output of a logic IC (e.g. flip-flop or inverter) or optocoupler independent of, or dependent on, the rise time of the input? If the answer is dependent, are there any other devices where the rise time of the output is independent of the rise time of the input? For example, I have a 555 timer running in astable mode with a 50% duty cycle producing a 1.5kHz square wave, where my scope says the rise time is approximately 400ns (and this changes if I change the frequency of the 555 output). I'd like to hook something up to the output of the 555, like a flip-flop or inverter or optocoupler (or anything that does the trick), to get a square wave with a rise time that's: 1) less than 400ns and 2) preferably, independent of the 555 frequency (i.e., the same whether I configure the 555 to output at 1.5kHz or 10kHz). <Q> For really slow input transitions there may be some jitter on the output. <S> Try looking at Schmitt triggers - they have built-in hysteresis that takes a slow input and creates a fast jitterless output. <S> A more technical discussion is here . <S> Adding one would also make the rise time frequency-independent. <S> An example part might be the CD40106B CMOS Hex Schmitt-Trigger Inverter . <S> It has a similar voltage range to the 555, but double check to make sure it works for your application. <A> It's usually pretty much independent of the input rise time. <S> With the old 4000-series CMOS they were originally unbuffered and didn't have much voltage gain even at the transition (on the plus side you could bias them into stable amplifiers, which you can't do with more modern chips). <S> Later parts have the 'B' suffix, meaning buffered (the linked TI application note SCHA004 goes into more detail).
<S> Here is an unbuffered inverter, the 74HCU04: <S> An input change from about 2V to 2.5V will change the output from about 4V to about 1V (a gain of 6), so a 400ns rise time could be reduced to about 70ns. <S> You would not generally use this part as an ordinary inverter - it's more aimed at crystal oscillators and that sort of linear application. <S> An ordinary 74HC04 has another two inverters in series inside, so the time (with a 400ns input transition) will not be determined by the input rise and fall time, but rather by the characteristics of the transistors and the loading of the output. <S> For very slow input rise and fall times, noise immunity becomes a concern and you might prefer to use a 74HC14, which has a Schmitt trigger action. <A> Think of most systems with an input that drives an output as having both a gain and a maximum output edge rate. <S> Both will be affected by the supply voltage and the output loading, as well as the circuit. <S> The actual output edge rate will be limited by whichever is slower of the maximum rate and the gain times the input rate. <S> Gain is usually very large, and if the system is being driven by a similar gate, all the edges will be at the maximum slew rate. <S> The exception to the above is something like a Schmitt trigger, where the output rate is independent of the input rate.
By adding some hysteresis the output rise and fall times are completely independent of even the slowest input rise and fall times (however the actual switching points are quite loosely specified).
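The gain-limited edge-rate estimate (a 400 ns input through a stage gain of about 6) can be sketched as below. The 10 ns floor is an illustrative stand-in for the gate's own maximum slew, not a datasheet figure.

```python
# Output rise time is limited by whichever is slower: the gate's own maximum
# edge rate, or the input rise time divided by the stage gain.
def output_rise_ns(input_rise_ns, gain, gate_min_rise_ns):
    return max(input_rise_ns / gain, gate_min_rise_ns)

print(output_rise_ns(400, 6, 10))  # ~66.7 ns, close to the "about 70 ns" estimate
```

Once the input edge is fast enough, the gate's own limit dominates, which is why a chain of similar gates settles at the maximum slew rate.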
Can I safely use rectified 120VAC to keep a 120V lead acid battery charged? I want to experiment with some 120V BLDC motors. I will need hundreds of amps at times. The controllers can typically take a bit more than 120V input, say 136V. I don't want a huge battery bank and a complicated (expensive) charger. I'm thinking of ways to do something cheap and easy. Let's say I use 10 "12V" lead-acid batteries in series. Normally they'd be charged to 135V or so. I'll switch a full-wave rectifier with an SCR and connect the output directly to the batteries. If the voltage gets over 130V(?), I'll simply cut the SCR for a while. This should keep it below the "float charge" range of the battery. (I'll also need to limit current in case I pull the battery way down.) I understand that this is not optimal and it wastes much of the potential of the batteries, but would it be safe? Is there a simple way to do better? <Q> TL;DR Any quick/cheap method will probably damage and shorten the life of the batteries, especially if connected in a big series string. <S> A battery charger is more than just shoving charge into a battery. <S> It also has to do no damage, to you, and to the batteries. <S> To you: it needs to be isolated from the mains. <S> This means a transformer. <S> To the batteries: that means you have to respect the maximum charge rate and voltage at all times, which means a peak measurement. <S> Unfortunately, when you measure raw rectified AC, most meters will make an average measurement, and so underestimate the effect of the current on the battery. <S> If you are measuring it with a meter, you should use smooth DC to get correct measurements. <S> You don't say whether you are using sealed or wet cells. <S> Wet cells have greater tolerance to overcharging, as you can replace the lost water by topping up, although gassing 10 batteries at the same time will generate a lot of explosive gas, so make sure you have adequate ventilation.
<S> Sealed cells must not be allowed excessive current during float, as this will result in a permanent loss of water from the battery. <S> Usually this is done by limiting the voltage to around 2.3V per cell (ideally with a temperature coefficient applied), but it can also be done by checking the long-term float current (expect a current around C/1000; C/100 is too high). <S> Normally, for battery balance, I'd say you don't have to worry about it for lead and nickel chemistries, as they balance themselves on over-charge. <S> However, this is only at float currents. <S> Unfortunately, with the target float current in the C/1000 region, that makes for a very long float charge and balance if done by float overcharge alone. <S> If you want to recharge your batteries between experiments at a reasonable rate, then you need at least some form of per-battery voltage monitoring, if not active balancing, if you do it with them connected in series. <A> Rectified 120V AC will be somewhere over 160V, which would be destructive to the batteries, so nothing about this charger will be inherently safe. <S> Safety then is something you must add to the system ... and in all its ramifications that is not as simple as "cut the SCR for a while". <S> Neil mentions some of the issues and his chainsaw analogy is about right. <S> Frankly your best bet will be to find a commercial charger that handles the issues for you and lets you focus on your motors and your application, rather than worrying whether that SCR can fail permanently short circuit (yes it can!) and how to reliably protect the batteries against that event and others. <A> If you're going this route, you want an inductor in series with the SCR, both to limit the current and to improve efficiency. <S> Your power company probably wants you to balance the draw so that current is drawn on both half cycles; this means you need a bridge rectifier, and that means both ends of the battery are live.
<S> (Schematic created using CircuitLab.)
With 10 batteries in series, if you're only monitoring the voltage from end to end, one cell could suffer severe water loss while the other cells are still accepting charge.
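The "somewhere over 160 V" figure in the answer is just the peak of the mains sine; a one-line check:

```python
import math

# Peak of rectified 120 V RMS mains -- well above a ~135 V lead-acid float target.
peak_v = 120 * math.sqrt(2)
print(round(peak_v, 1))  # 169.7 V
```

That 170 V peak is what the batteries see on every half cycle unless something actively limits it, which is why raw rectified mains is unsafe here.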
White Dust: Is it bad for electronic devices like smart TVs or phones? As a side effect of using an ultrasonic air humidifier, this white dust can be seen on house furniture during its operation. I googled it but only found a few articles saying that it is safe for the human body; I found nothing about household devices. Does it have any bad effect on my LED TV? I use a homemade cover for it, but I'm still kind of worried. <Q> White dust: <S> https://www.hvac.com/faq/white-dust-concerned/ White dust is usually caused by mineral content in the water that goes into a humidifier. <S> When the mist lands on furniture or other surfaces and dries, it can sometimes leave behind the dried mineral deposits, or “white dust”. <S> The dust is a salt (ionic compound). The dust should be safe when dry, but as soon as it absorbs moisture it will become electrically conductive. <S> I would be more concerned about the liquid water that appears before the dust. <S> You could use distilled water as suggested by @jsotola to avoid the issue altogether. <S> I was wondering for myself for a moment why "normal humidity" doesn't cause the white dust, but only the humidifiers do: <S> Ultrasonic humidifiers work by splashing water everywhere, the same thing you can do with your arms in water, the only difference being the size of the droplets. <S> So you should definitely use distilled water, as the humidity from an ultrasonic humidifier is basically like splashing water on electrical appliances. <S> Tiny droplets of distilled water shouldn't be much different from normal humidity, which generally isn't an issue. <A> Ultrasonic humidifiers have a way of vaporizing that causes the lime sediment in water to evaporate and condense on any nearby surface, unlike steamer and spin-drum (hot & cold) types. <S> A demineralization water filter can work. <S> Lime can be harmful, long term, to lead solder joints in copper plumbing, so water softening is recommended to remove the minerals.
<S> Boiling water will also leave a white sediment in kettles, coffee boilers and cooking pots, according to how “hard” the water is. <S> Lime is not conductive or very salty as far as ionic levels are concerned, so it will be harmless when dry. <A> The most practical and economical solution I have found to the problem of minerals depositing on my electronics while I run the ultrasonic humidifier was to buy a small, very affordable water distiller . <S> It can produce a gallon of distilled water every 4 hours (which in my case is enough to run the humidifier for a little over a full day), while using just under 3 kWh of energy (which costs around 20 cents where I live). <S> This avoids buying and tossing plastic jugs of distilled water, which cost about 2 bucks a gallon. <S> A small, positive side effect of running the distiller is that during operation it releases heat, so I run it in the morning after I leave for work, when the noise doesn't disturb anyone in the family, and it heats up the room a little in the meantime. <S> It shuts down automatically when it's done. <S> When I get home in the evening, I have a gallon of distilled water, which I mix ~50/50 with tap water.
With only half of the minerals in the humidifier as I would with 100% tap water, the built-in de-mineralizing cartridge/filter does a great job at completely eliminating the white dust.
How can microinverters be as efficient, or more, than power optimizers of solar arrays? According to Wikipedia : solar panels produce voltages around 30 V. This is too low to be effectively converted into AC to feed to the power grid. To address this, panels are strung together in series to increase the voltage to something more appropriate for the inverter being used, typically about 600 V. A power optimizer on each panel would then ensure the failure of one panel won't ruin the overall production of the serial circuit of panels, and roughly 600V would be delivered off of the roof to a single inverter. Microinverters, on the other hand, do not spit out 600V from a serially connected loop. They convert to 120VAC directly at each panel from the ~30VDC output at the panel. These microinverters are more expensive than power optimizers, for obvious reasons, but are touted as being more efficient. So: The Wikipedia article says it's more efficient to convert to residential AC from 600VDC than from 30VDC. Industry says the most efficient system is microinverters, which convert to 120VAC from 30VDC. How can microinverters be more efficient than power optimizers if the most efficient way to convert to residential AC is from 600VDC? <Q> With something like solar power generation, efficiency is a tricky thing to pin down. <S> The micro-inverter itself may indeed be more efficient than the power optimizer, but it does that at the cost of much higher currents in the system. <S> The latter translates into much more expensive wiring, connections, switching systems, and ultimately the micro-inverter itself. <S> All of those can also add significant resistive losses in the system if you do not spend enough money on them, which take away from the efficiency before you even reach the inverter. <S> Further, to complicate matters, when considering the efficiency of solar systems you also need to factor in costs, specifically costs per kW over the lifetime of the system.
<S> A system that produces power at a lower cost per kW can be considered a more efficient system even if it is extracting less raw power from the panels. <S> This is especially true if you are trying to recoup your investment costs by feeding back into the grid at a fixed price. <A> In a micro-inverter you can use MPPT (Maximum Power Point Tracking) on each panel to ensure you are extracting as much power as you can from each panel in the given sun/shade condition for each one. <S> Further, there is more cable loss in a string inverter (600 V) system. <S> So I think overall system efficiency is better with micro-inverters. <S> Here's an article that may help you. <A> Solar systems use an array of panels. <S> From an engineering standpoint the challenge is to sum all power from all panels/elements in the most efficient way. <S> There are three main architectures for solar-to-AC-grid conversion. <S> This short article from EnergySage highlights distinctions of the three: <S> String inverter. <S> It uses bare solar panels connected in series, to get about 600 VDC to convert into AC-grid level. <S> Used where uniform insolation across the array happens. <S> Micro-inverters: they are attached to each low-voltage panel, and convert each panel directly into AC-grid level. <S> So, even if the efficiency of a "string inverter" can be higher than each individual micro-inverter, the overall system output is better; String inverter with power "optimizers" on each panel. <S> Power optimizers also use DC-DC conversion technology with the MPPT method to alleviate a panel's uneven output, then the string inverter converts the "optimised" string of panels into AC. <S> Apparently the "power optimizers" either can't optimize uneven outputs from different solar panels to the same level as the micro-inverters can, or are less efficient than micro-inverters. <S> Obviously the primary currents are the same in both optimizers and micro-inverters, contrary to some other opinions.
<S> So the actual question should be: "Why can't power optimizers do panel equalization as efficiently as micro-inverters can?" <S> From a higher perspective, the answer is that "power optimizer" DC-DC switchers work with about the same levels (30 V) of conversion, while the micro-inverter upconverts 30 V into 120/220 V, or 4X. <S> The 4X conversion to 120 V has much better efficiency than 30-to-30 conversion, maybe not as good as 600 -> 120 V conversion, but still better. <S> That's why micro-inverters outperform "optimizers" with string inverters, and optimizers are considered a compromise between more expensive micro-inverters and the standard string inverter.
The micro-inverters use MPPT (Maximum power point tracking) technology, which sums up power from individual panels much more efficiently than a single string of panels can do, on a system level.
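The MPPT idea that the answers lean on can be illustrated with a toy perturb-and-observe sketch. Everything here is an assumption for illustration: the panel model is a hypothetical parabolic power curve with its maximum power point at 30 V, not a real PV characteristic; only the perturb-and-observe loop itself is the standard algorithm.

```python
# Toy perturb-and-observe MPPT sketch (hypothetical panel model).

def panel_power(v):
    """Toy power curve: peaks at v = 30 V with 900 W. Not a real panel."""
    return max(0.0, 900.0 - (v - 30.0) ** 2)

def perturb_and_observe(v_start=20.0, step=0.5, iterations=100):
    """Nudge the operating voltage; keep going if power rose, reverse if it fell."""
    v = v_start
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into oscillation around 30 V
```

In steady state the operating point hunts around the maximum within one step size, which is why real implementations shrink the step as they converge.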
FOSS-based AVR ISP programming with USB. I've been using a free software stack for AVR programming - AVR GCC + avrdude on Linux, with a parallel DIY programmer cable. Since the parallel bitbang programmer doesn't work with USB printer cables, I'm stuck with using my desktop PC. Is there a USB-based programmer that is supported by a free-software (command-line based) programming tool? <Q> avrdude supports a wide range of programming devices, including: Atmel's first-party programmers, including the STK500, STK600, JTAG ICE and AVR Dragon. <S> The USBasp and USBtinyISP, and clones thereof -- both of which are available inexpensively online. <S> Devices which include an FTDI FT232/FT2232/FT4232 (not FT232R) USB interface. <A> I use a USBasp 2.0 clone for programming the ATmega328 (Unos and the like); works like a charm. <S> And dirt cheap from the well-known Chinese sources. <A> I own a USBtinyISP I can recommend: https://learn.adafruit.com/usbtinyisp/overview
The Bus Pirate , which can be handy for other things too Arduino devices -- either to program the Arduino itself, or to program another device GPIOs on embedded devices (like the Raspberry Pi)
How can one achieve a sinusoidal back-EMF in a PMSM and a trapezoidal back-EMF in a BLDC? What are the winding differences? How can one achieve sinusoidal back-EMF and trapezoidal back-EMF in a motor, and what are the winding differences between those motors? I have taken an in-wheel BLDC exterior-rotor motor which is used in an e-bike. I get a sinusoidal shape instead of a "trapezoidal shape". In this case how can I calculate the back-EMF constant? <Q> The magnets and pole faces can be shaped and positioned to achieve a more 'trapezoid' back-emf. <S> Different winding patterns may also have an effect. <S> However I suspect that matching back-emf to the drive waveform is often not done, because real BLDC back-emf waveforms are all over the place. <S> Here are some scope traces showing the phase-to-phase waveforms of 3 motors that I tested (vertical pulses are PWM drive, back-emf is the middle waveform that occurs when the phases are not driven):- <S> These are all small 'in-runner' BLDC motors rated for 100-300 Watts, designed to power RC model aircraft. <S> The first two motors have slotted iron stators. <S> One produces close to trapezoid back-emf, the other nowhere near it. <S> The last trace is from a coreless ironless motor, which explains its almost perfect sine wave back-emf. <S> Despite having a 'suboptimal' back-emf waveform, this motor (which only weighs 28 grams) produces 90W at 60,000rpm with 83% efficiency. <A> The general design configuration of the motor must first be considered. <S> That includes whether the air gap is radial or axial, whether the motor has an interior or exterior rotor and whether the claw-pole or conventional structure is used.
<S> For a conventional motor with an interior rotor, the following design features would be considered: The influence of the stator winding on the shape of the back emf waveform is determined by the way that the stator windings are distributed among the stator slots, the number of slots per pole, the slot diameter and the skew angle of the slots. <S> The rotor design also influences the shape of the back emf waveform. <S> The relevant factors include the use of interior permanent magnets vs. surface permanent magnets, the skew angle of the magnets and the geometry of the magnets. <S> Re question revision <S> The design configuration in question seems to have an outer rotor that is a ring of homogeneous material that is magnetized in an alternating N-S pattern. <S> That would result in magnets that do not have distinct edges. <S> That would tend to soften the edge of the resulting bemf waveform, making it more sinusoidal. <S> The inner stator could have windings distributed to some extent, but there is not enough space for a lot of options in slot number or winding pattern. <A> Most motors are closer to sinusoids than trapezoidal waveforms. <S> (if you find one that is heavily trapezoidal, please let me know!) <S> The difference is the spatial distribution of windings. <S> The manufacturer can engineer the stator windings in a BLDC to be sinusoidal or nonsinusoidal. <S> In this case how can I calculate the back emf constant? <S> Very easy.
<S> With the motor disconnected from the drive electronics: <S> Spin the motor at a constant speed (typically 1000 RPM, but it can be anything as long as it's below the rated motor speed but not too far below it). <S> Measure the RMS voltage between any two terminals with a DMM. <S> Measure the speed with a tachometer, or by taking the electrical frequency of the back-emf on an oscilloscope and dividing by the number of pole pairs (see the motor datasheet, or put DC current into one pair of terminals and count the number of equilibrium points per mechanical rotation). <S> Calculate voltage divided by speed, e.g. 10 Vrms / 2000 RPM = 5 Vrms/kRPM line-line.
The flux distribution is probably far from sinusoidal, but there are enough factors in the design suppressing the higher order harmonics to make the current look somewhat more sinusoidal than trapezoidal.
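The measurement procedure above reduces to a simple division. The sketch below reproduces the Vrms/kRPM figure from the answer and adds an approximate SI-unit conversion; the SI conversion assumes a wye-connected motor with sinusoidal back-EMF, which is an assumption on my part (though the question's motor is reported as sinusoidal).

```python
import math

def ke_line_line(v_rms_ll, speed_rpm):
    """Back-EMF constant in Vrms (line-line) per kRPM."""
    return v_rms_ll / (speed_rpm / 1000.0)

def ke_si(v_rms_ll, speed_rpm):
    """Approximate Ke in V*s/rad (peak phase volts per mechanical rad/s),
    assuming a wye connection and sinusoidal back-EMF."""
    v_peak_phase = v_rms_ll * math.sqrt(2.0) / math.sqrt(3.0)
    omega = speed_rpm * 2.0 * math.pi / 60.0
    return v_peak_phase / omega

# The worked example from the answer: 10 Vrms line-line at 2000 RPM
ke = ke_line_line(10.0, 2000.0)   # 5.0 Vrms/kRPM
```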
When will electromagnetic waves reflect? What are the criteria for reflections? When an EM wave comes to a boundary between two different media (i.e. different electrical permittivity and magnetic permeability), the things below could be seen: Reflection, Refraction (transmitted wave), Absorption, Scattering (I am not sure). I want to learn the criteria for the above events. For example, when do reflections occur? When does transmittance occur? Do they depend on the characteristic impedance or electrical permittivity differences of the media? <Q> Yes, there is a dependence. <S> At the interface between two media, 1 and 2, the amount of reflection of EM waves for normal (perpendicular) incidence can be described by the reflection coefficient $$\rho = \frac{\eta_2 - \eta_1}{\eta_2 + \eta_1}$$ where $$\eta_1 = \sqrt{\mu_1/\epsilon_1}$$ $$\eta_2 = \sqrt{\mu_2/\epsilon_2}$$ Similarly, the amount of transmission can be described by the corresponding transmission coefficient $$\tau = 1+\rho$$ EDIT: <S> The coefficients are defined in terms of amplitudes of the incident, transmitted and reflected waves. <A> Reflection, Refraction (transmitted wave), Absorption, Scattering (I am not sure): <S> Generally, all four will happen at each interface between media. <S> Reflection occurs when the index of refraction is not perfectly matched between the two media. <S> (And when there is not 100% reflection) Scattering will occur when the interface between the media is not a perfectly flat plane. <S> Absorption mainly comes from propagation through any medium that is not perfectly lossless. <S> It can also happen at an interface between media if there is some loss mechanism localized at the interface, such as a surface charge that isn't perfectly conductive. <A> There is a set of equations exactly describing what you are asking for: they are called the Fresnel equations. <S> Reflection and transmission are covered by the Fresnel equations.
<S> Scattering is, however, not covered, as it depends on the roughness of the surface. <S> So there can't be a generic formula for the scattering without quantifying the roughness (it is also very dependent on the wavelength). <S> The Fresnel equations assume a smooth surface (i.e. roughness much smaller than the wavelength). <S> Absorption doesn't matter, as it doesn't happen at the surface but requires a non-zero length of medium to be passed. <S> The Fresnel equations give coefficients of reflectance \$R\$ (i.e. ratio of reflected power to incident power) for EM radiation that is either polarized in the plane of incidence (p-polarized) or polarized perpendicular to the plane of incidence (s-polarized): <S> \$R_s = \lvert\frac{Z_2\cos\theta_i - Z_1\cos\theta_t}{Z_2\cos\theta_i + Z_1\cos\theta_t}\rvert^2\$ <S> \$R_p = \lvert\frac{Z_2\cos\theta_t - Z_1\cos\theta_i}{Z_2\cos\theta_t + Z_1\cos\theta_i}\rvert^2\$ <S> where \$\theta_i =\$ angle of incidence, \$\theta_t =\$ angle of transmission, \$Z_k=\sqrt{\mu_k/\epsilon_k}\$ and \$k\$ is an index 1 or 2 for the medium. <S> There are other versions of the formulas. <S> E.g. under the assumption that \$\mu_1 = \mu_2 = \mu_0\$ (permeability of vacuum) they can be rewritten as expressions of the indices of refraction of both media. <S> (Note: since there are no non-linear effects involved you can decompose any polarization into a linear combination of p- and s-polarized components.) <A> From what I've recently read, in trying to understand E-field shielding myself from first principles, reflections occur when the material has polarizable atoms. <S> The incoming energy gets (partially) stored by warping the electron orbits; warped orbits cause exciting behaviors in the material.
<S> Look for the "wave coefficient" or "propagation coefficient" in the literature; this variable in its most general form will show various resonances (spectral lines) and show a general dependency with frequency, hence dispersion of pulses will occur.
Refraction will occur when the index of refraction is not perfectly matched and the angle of incidence is not exactly \$0^\circ\$ from normal.
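The Fresnel reflectances can be evaluated numerically. The sketch below assumes non-magnetic media (mu1 = mu2 = mu0), so everything is expressed via refractive indices n = sqrt(eps_r) (the wave impedance is then proportional to 1/n), with Snell's law supplying the transmission angle.

```python
import cmath
import math

def fresnel_reflectance(n1, n2, theta_i_deg):
    """Power reflectances (R_s, R_p) for a wave going from medium 1 into
    medium 2, both non-magnetic. cmath keeps the result valid past the
    critical angle (total internal reflection)."""
    theta_i = math.radians(theta_i_deg)
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2       # Snell's law
    cos_t = cmath.sqrt(1.0 - sin_t ** 2)      # imaginary beyond critical angle
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(r_s) ** 2, abs(r_p) ** 2

# Normal incidence, air to glass (n = 1.5): R = ((1.5-1)/(1.5+1))^2 = 4%
rs, rp = fresnel_reflectance(1.0, 1.5, 0.0)
```

At normal incidence R_s and R_p coincide, and for perfectly matched media both drop to zero, which is the "reflection occurs when the index is not matched" statement above in formula form.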
Floorplanning vs Placement in VLSI. The major steps of physical design that I learnt from a VLSI lecture are: 1) Partitioning 2) Floorplanning 3) Placement 4) Routing. My question is about steps 2 and 3. It seems like the floorplanning and placement steps somehow overlap. We decide the places of the sub-blocks in floorplanning. But in the placement step, we also decide the places of the sub-blocks, and this time we take the interconnections into account too. The placement step seems to be an expanded version of floorplanning. Then why do we have these two as separate steps to be done one after another? Or should we think of them as a single step that is done iteratively? <Q> Floorplanning can be considered your top-level design and it may, for example, be guided by pin placement or interference between different modules. <S> It is sensible to think about the overall design here; you may not want to place a sensitive analog component directly next to an RF oscillator. <S> Placement is then putting the gates within the overall plan. <S> This may be fine first time round for a simple design, but it could be that your original floor plan does not, for example, allow all timing constraints to be met. <S> All the steps are related, so think of planning and placement as different levels of abstraction. <S> It is usual to iterate through a number of designs to reach closure on all area and timing constraints. <A> Levels of abstraction. <S> Floorplanning is like designing the architecture of your house. <S> For an ASIC, floorplanning typically includes: Defining the width and height of the core and die. <S> Defining the location of macros/pre-placed cells and corresponding decoupling capacitors. <S> Power planning and pin placement. <S> Placement covers the majority of the placement process: Binding the netlist with physical cells and placing them on the die. <S> Optimisation of the placement of the cells based on estimated wire length, keeping signal integrity.
<S> Post-place timing analysis. <A> Floorplanning: <S> The stage where the real design work happens (port placement (signal/clock/PG), power planning, apt placement of fixed cells/blocks based on block shape and interacting ports). <S> From the step above we have already placed the macros/pre-placed cells; what is left over? <S> Multi-million cells! <S> To place the leftover multi-million gates we need a very efficient placement algorithm that must be congestion-aware, timing-aware and meet the power requirements.
Placement process is deciding what things have to be placed and where to place it inside your house.
What does shorted mean? In a single-phase motor, copper or aluminum bars are permanently shorted at both ends with the help of rings. So what does 'shorted' mean here? When I look on Google, all the results show the meaning of 'short circuit', and I think the two terms are different. So please help me to understand. <Q> This means that the aluminum or copper bars are placed in slots with the help of end rings, such that there is permanent contact at both ends. <S> This means they are electrically shorted, which leads to a very small resistance in the rotor. <S> In response to your comments below: Short circuit and shorted are the same thing. <S> In the example above shorted means the bars are in contact at both ends. <A> It is not two different terms; it is one and the same thing. <S> Shorting something means that you are attempting to make a 0 Ω wire between two points. <S> In simulations this can be done; in the real world you will always have some resistance. <S> A capacitor behaves like an open circuit if no AC is around. <S> If there is some AC then the capacitor gradually becomes more conductive; at infinite frequency, the capacitor becomes a short. <S> This is the basis for how an RC filter works. <S> So if I get a wire and connect it between the terminals across a battery and the wire has: 0 ohm, this is a short; 0.1 ohm, I'd still call it a short; 1 ohm, now it starts to behave like a heating element; 10 ohm, this will work well as a heating element. <S> How I connect the wire doesn't matter. <S> What kind of material it is doesn't matter. <S> The fact that it is a relatively low resistance means that it is a short. <S> The battery has some internal resistance, and the typical value that it has (50 mΩ) is why I reasoned as above. <S> A good way to think about it is that in schematics there are wires everywhere to make it easier to show your design.
<S> If you however remove all the wires and put each element right next to each other, then you have short ened the distances. <S> Your schematic still has the same functionality; every node is still there. <S> Every branch is still there. <S> Every wire has just been short ened. <S> Your body of several kiloΩ can short a lightning strike. <S> The resistance of the air (from the cloud down to you) will be in the teraΩ range. <S> It's all relative. <A> It is indeed referring to electrically shorted bars. <S> Re-read the operating principle of the squirrel-cage induction motor.
In circuit analysis, a short circuit is defined as a connection between two nodes that forces them to be at the same voltage.
Anyone recognise this capacitor and its symbol (circle with dot)? Trying to repair an old Hameg CRT oscilloscope and looking at the schematics for the HV Z board I see the following capacitor symbol (in red box): The capacitor itself looks (vaguely) like this: I'm not suggesting it is a vacuum dielectric cap, it just looks like this. It's about 2 or 3 cm long with a glass tube and two metal ends. The symbol is different from that used for electrolytics and other non-polarized capacitors (like polys and ceramics). <Q> It's the symbol of a gas-discharge neon lamp. <S> More info here and here. <A> Not a vacuum. <S> If it glows during operation it's being used as a voltage regulator. <S> If it doesn't, it's some kind of protection. <S> The "dot" in the schematic symbol indicates a gas (usually neon, argon or some mixture) fill. <S> E.g. this part. <S> Here is a typical characteristic (from here): <A> In TI's application note Guide to CRT Video Design: 2.2 Arc Protection <S> The CRT driver must be protected from arcing within the CRT. <S> To limit the arc-over voltage, a 200V spark gap should be used at each cathode. <S> Diodes D1 and D2 (see Figure 3) clamp the voltage at the output of the LM2419 to a safe level. <S> The clamp diodes used should have a high current rating, low series impedance and low shunt capacitance. <S> FDH400 or equivalent diodes are recommended. <S> Resistor R54 in Figure 3 limits the arc-over current while R33 limits the current into the CRT driver. <S> Limiting the current into the CRT driver limits the power dissipation of the output transistors when the output is stressed beyond the supply voltage. <S> The resistor values for R33 and R54 should be large enough to provide optimum arc protection but not so large that the amplifier's bandwidth is adversely affected. <S> Grids G1 and G2 should also have spark gaps. <S> A 300V and a 1 kV spark gap are recommended for G1 and G2 respectively.
<S> The PC board should have separate circuit ground and CRT ground. <S> The board's CRT ground is connected to the CRT's ground pin and also directly connected to the chassis ground. <S> The spark gap's ground return should be to the CRT ground so that high arc-over current does not directly flow through the circuit ground and damage sensitive circuitry. <S> At some point on the PC board, the circuit ground and the CRT ground should be connected. <S> Often a small resistor is connected between the two grounds to isolate them. <S> Your item looks like the spark-gap protector. <A> Your description sounds like it could be some sort of gas tube, maybe used as a voltage regulator. <S> A picture would be nice. <A> Tossing my 2 cents into the mix, yes... it is definitely a gas-discharge device (note that a neon glow lamp is a gas-discharge device). <S> Consider that the reference designator is not C601, but is _G_601. <S> G's are typically used as the refdes for gas-discharge type devices. <S> Based on the labeling on the other components, this is board 6 in the design and the last 2 digits are the component reference.
That's a gas-discharge tube. You see these in telecom protection circuits for lightning protection.
Trace width in a small PCB designed to combine battery cells in series. I am designing a PCB that will combine 6 cells in series. However, I am thinking this might not work because of the amount of current that is required from those cells. My battery has to be able to provide up to 100 A for a few seconds, and 50 A continuous. Here is a quick layout I did of what it will look like: The yellow plane (5 mm in width) is what will be connecting each cell in series; however, most calculators are asking for at least 55 mm with a 20 °C temperature rise. I can't go higher on the temperature rise because this will be a battery board and I don't want my battery to heat up. I also can't make the board any bigger because of my mechanical requirements. However, I can go thicker with more layers, but I have no idea how to calculate how many layers I will need to make this connection work. <Q> Why don't you just flip all the odd cells? <S> If you flip them, you can connect them with a much shorter and wider path. <S> You say the yellow traces are 5 mm wide; I estimate they are 15 mm long. <S> If you keep the exact same landing pattern, but flip every other cell, you can connect the cells with traces approximately 20 mm wide, 3 mm long. <S> That is like 20x better (lower) resistance. <A> I would add more layers and make them solid planes. <S> Use the heaviest copper available, at least 2 oz. <S> What are the dimensions of your board? <S> That will be a factor in how many layers you need. <S> Another option would be to add large holes to solder thick wires or copper strips to carry the current. <A> Using this calculator ( http://www.4pcb.com/trace-width-calculator.html ) it looks like if you use 10 amps and 2 oz copper, you need 3.6 mm for a 20 °C temperature rise in air. <S> Using the top layer and bottom layer should result in approx 10 A passing through each layer, so 5 mm should be adequate.
<S> It might be good to put some vias in, but not too many otherwise it will start decreasing the conductivity. <S> Another solution is using bus bars ( http://www.epectec.com/pcb/powerlink-technology.html or https://uk.rs-online.com/web/c/automation-control-gear/circuit-protection-circuit-breakers/busbars/ ), or solder coating. <A> There is little to be gained by putting the high current terminals on a board. <S> You will also have trouble soldering those large terminals, while not overheating the battery terminals.
At these currents, and such a simple circuit, use discrete copper wires, and a separate mechanical arrangement to hold the cells.
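For a rough cross-check of calculators like the one linked above, the underlying IPC-2221 current-carrying formula can be sketched. Note this is the bare formula only; commercial calculators often apply extra derating, so their answers (such as the 3.6 mm quoted above) can come out larger than this sketch.

```python
# IPC-2221 sketch: I = k * dT^0.44 * A^0.725, with A in mil^2,
# k = 0.048 for external layers and 0.024 for internal layers.

def trace_width_mm(current_a, temp_rise_c, copper_oz, external=True):
    """Minimum trace width for a given current and temperature rise."""
    k = 0.048 if external else 0.024
    area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1.0 / 0.725)
    thickness_mil = 1.37 * copper_oz      # 1 oz/ft^2 of copper ~= 1.37 mil
    return area_mil2 / thickness_mil * 0.0254   # mil -> mm

# 10 A per layer, 2 oz copper, 20 degC rise, external layer
w_ext = trace_width_mm(10.0, 20.0, 2.0)
# Internal layers need a substantially wider trace for the same rise
w_int = trace_width_mm(10.0, 20.0, 2.0, external=False)
```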
AC and DC devices. Why do home appliances like a light, TV, and fan use AC power directly from the supply, but a phone and laptop use an adapter to convert AC to DC? <Q> This is mostly due to the line voltage being "high" compared to what normal electronics need, and what is safe. <S> Devices that use significant power, like lights and fans, are built to run directly off the power line voltage. <S> The cost of protecting the user from the higher voltage is offset by the more efficient use of the power and not having to convert it. <S> Devices with electronics in them need low voltage to run the electronics. <S> This means an extra power supply, which converts the high line voltage to the low voltage needed by the electronics. <S> It also converts to DC, because the electronics need DC to operate. <S> Given that there is this converter between the line voltage and the actual internals of the device, manufacturers have a choice about where to put it. <S> In some cases, it makes sense to put this converter external to the device. <S> That alleviates the need to protect the user from high voltage at the device. <S> I go into more detail here. <S> Some electronic devices are large enough that it makes sense to put the power supply inside. <S> Your TV example is in this category. <S> It's still an electronic device that internally runs on low-voltage DC, but it's big enough that the power supply is internal. <A> You have to start from "why is the voltage of the power network 110 VAC / 220 VAC". <S> It's AC because this way it's easy to convert voltage with transformers or to drive (AC) motors. <S> Both reasons were there long before electronics came around. <S> For electronics, like a TV, laptop, whatever, you will always need DC, and significantly lower than 160 V / 315 V (rectified AC). <S> This is because of how semiconductors work, unlike vacuum tubes (which I have no idea about, I am not that old). <S> So why do we sometimes have the power supply inside a device and sometimes outside?
<S> It's because of application considerations. <S> Laptop: you don't want to have a power supply inside when you are on the move. <S> So you use something external that you can leave at home. <S> For a TV it doesn't matter, so normally it's better for the customer to have it inside. <S> Although sometimes for extremely slim devices the power supply will still be external. <A> I have a feeling that even though most of the explanations given here are mostly correct, they're not really answering what the OP asked. <S> Which is funny, because I think the explanation should only be a few sentences long. <S> (Besides that: the electricity is also AC when it is generated, for most sources at least.) When it comes into your home, it is pretty much always converted to DC before use, because almost all modern devices need DC. <S> The ones that need some kind of AC will generate their own, and not use the AC supplied to them from the outlet. <S> There are only a few exceptions. <S> (I can think of only 3 for the moment, but there are probably more): <S> motors: most (large) electric motors run on AC, so they can use it straight away from the outlet. <S> incandescent lamps: they require heat to make the lamp glow. <S> Whether that heat comes from AC or DC doesn't matter, so there's no reason to convert it. <S> heating elements: same as for incandescent lamps. <S> Your TV is not one of the exceptions, because it converts to DC internally, same as your laptop does with an external adapter. <A> Another reason for having a separate AC-to-DC converter is safety. <S> Devices like laptops and mobiles are typically meant to be handled by the user with close contact to the skin. <S> A small fault in an internal AC-to-DC converter can be very dangerous. <S> The easiest solution is to have the line-voltage handling outside and far away from the device, with proper isolation in the adapter. <S> The device only sees the low-voltage DC.
<A> The home is supplied with AC because regular pole transformers and their bigger cousins require AC, and most early home electrical devices didn't care whether they got AC or DC. <S> And for a number of other reasons it's easier to deal with AC in the distribution process. <S> Pretty much all devices that process audio or video (including old tube-type radios and TVs) use DC internally for most of their operations. <S> But converting from AC to DC on a small scale is not particularly difficult.
The main reason AC comes out of your outlet is that it is very inefficient to transport DC over long distances (and especially low-voltage DC).
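The "low voltage transport is inefficient" point is just Ohm's law: for a fixed delivered power, line current scales as 1/V, so resistive loss scales as 1/V². A minimal sketch (the 1 Ω line resistance and the two voltages are illustrative numbers, not real grid data):

```python
def line_loss_w(power_w, volts, r_line_ohm):
    """I^2*R loss in the line when delivering a given power at a given voltage."""
    i = power_w / volts                 # current needed for that power
    return i * i * r_line_ohm

# Same 10 kW delivered over the same hypothetical 1-ohm line:
loss_lv = line_loss_w(10_000, 120, 1.0)     # at outlet voltage
loss_hv = line_loss_w(10_000, 7_200, 1.0)   # at a distribution voltage
# Loss ratio = (7200/120)^2 = 3600: the high-voltage line wastes 3600x less
```

This is why transmission happens at high voltage, and historically only AC could be stepped up and down cheaply with transformers.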
Paralleling MOSFETs: Can I use a common gate resistor, or do I have to use a separate one for each MOSFET? When calculating the gate resistor for a single MOSFET, first I model the circuit as a series RLC circuit, where R is the gate resistor to be calculated, L is the trace inductance between the MOSFET gate and the output of the MOSFET driver, and C is the input capacitance seen from the MOSFET gate (given as \$C_{iss}\$ in the MOSFET data sheet). Then I calculate the value of R for the appropriate damping ratio, rise time and overshoot. Do these steps change when there is more than one MOSFET connected in parallel? Can I simplify the circuit by not using a separate gate resistor for each MOSFET, or is it recommended to use separate gate resistors for every MOSFET? If yes, can I take C as the sum of the gate capacitances of each MOSFET? simulate this circuit – Schematic created using CircuitLab In particular, I am aiming to drive an H-bridge made of TK39N60XS1F-ND. Each branch will have two paralleled MOSFETs (8 MOSFETs in total). The MOSFET driver section will consist of two UCC21225A. The working frequency will be between 50 kHz and 100 kHz. The load will be the primary of a transformer with an inductance of 31.83 mH or more. <Q> It depends, and that depends is based upon your REAL circuit, not your intended circuit. simulate this circuit – <S> Schematic created using CircuitLab <S> Your practical placement will create something like this (there will be a few other stray inductances but for now this will do). <S> If you think about the current flow when you charge/discharge the gates, it will be: MOSFET driver -> gate resistor -> split path to each MOSFET gate -> via each MOSFET source, recombine at the common reference -> back to the MOSFET driver. <S> This loop is one you need to keep BALANCED & ideally minimised.
<S> Imagine if, due to poor layout/tracking/wiring, the right FET had 10x the inductance on the gate and/or source: it will switch more slowly, which means the left FET will experience more of the transient stresses. <S> In large power devices they use a small individual gate resistor per die & then parallel all the devices up, but they keep the layout really tight & equally they are in control of the MOSFET/IGBT batch characteristics for very closely matched devices. <S> If you cannot do this then it is better to have a separate gate resistor. <S> Parallel IGBT die on a common substrate <A> Sharing a resistor is not recommended because of variations in VGS(TH). <S> With individual resistors, the FETs' switching will be more concurrent. <A> Resistors are cheap, so I would say it is not worth it, but the failures won't be immediate. <S> If both FETs have the same Vgs, then the peak current through Rg will double, and it is pulsed current, which resistors aren't great at. <S> The Vgs of the FETs can be pretty random. <S> If the FETs have different Vgs, then they turn on at slightly different voltages, so one FET is slowing the voltage rise while it draws enough current to fully turn on, then the voltage starts rising again and the other FET will turn on. <S> The device that turns on first will be conducting by itself before the other device turns on. <S> Remember to leave a lot of headroom in your circuit, since the current sharing between the FETs won't be perfect. <S> And don't depend on the FET body diodes either, since diodes share current horribly.
The benefit of a separate gate resistor is that, if you need to tune the response of one leg based upon other observations, you can.
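The series-RLC sizing described in the question can be sketched directly; when paralleled FETs share one resistor, their Ciss values simply add. The component values below are hypothetical placeholders for illustration, not figures from the TK39N60X datasheet.

```python
import math

def gate_resistor_ohm(l_loop_h, c_iss_total_f, zeta=1.0, r_driver_ohm=0.0):
    """External gate resistor for a target damping ratio zeta in the
    series R-L-C gate loop: zeta = R_total / (2 * sqrt(L/C)).
    The driver's own output resistance counts toward R_total."""
    r_total = 2.0 * zeta * math.sqrt(l_loop_h / c_iss_total_f)
    return max(0.0, r_total - r_driver_ohm)

# Two paralleled FETs behind one shared resistor: input capacitances add.
c_iss_one = 5e-9                  # hypothetical Ciss per device
r = gate_resistor_ohm(10e-9, 2 * c_iss_one, zeta=1.0)   # critical damping
```

Note how doubling C (by paralleling) lowers the resistor needed for the same damping, while any imbalance between the two gate loops is exactly what this lumped model cannot capture.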
Why use a "load switch" and not just one transistor as a switch? I am trying to understand the advantage of using a "load switch" for switching applications. The load switch (like the one below) has two transistors to do the job. Why can't I just use one transistor (BJT/FET) to do the same thing? <Q> You could use a single FET, but there are several advantages to using a load switch IC. <S> Voltages higher than the microcontroller's supply voltage can be switched. <S> (That can also be done by using 2 transistors.) <S> This can be done with discrete components as well, but requires more engineering. <S> More often than not, load switches have monitoring, such as power-good or overcurrent outputs, etc. <S> Tolerance analysis is easier when that entire circuit is on one die with guaranteed data on its performance. <S> As with all things engineering, trade-offs. <A> In addition to what other respondents have already written, a switch made with a single power MOSFET will have a body diode between source and drain. <S> As a result, the switch can block current only in one direction. <S> In the other direction, the body diode will conduct whether the switch is open or not. <S> An integrated load switch typically can block current in both directions. <S> This is done either by controlling the bias of the bulk in the MOSFET, or by using two MOSFETs back-to-back. <A> In this case, the second transistor is performing a level-shifting function. <S> The P-channel MOSFET requires an active-low control signal that is referenced to its source terminal (i.e., across the resistor). <S> The N-channel device allows you to control the switch using a ground-referenced active-high logic signal, which is much more convenient in most applications. <A> The purpose of this very common design, which has BJT versions as well, is to isolate the 'EN' signal, which can be from a low-voltage source.
<S> Also, the source may not tolerate voltages above its 3.3 VDC or 5 VDC logic levels at its output terminals. <S> The PMOS transistor could also be replaced by most any PNP transistor. <S> It can switch an extremely high voltage on or off, such as 300 VDC for a long string of LEDs. <S> It could be the main power switch for all sorts of gadgets while keeping 'EN' isolated. <S> The maximum voltage limit for MOSFETs right now is about 700 VDC. <S> I should note that the NMOS transistor will be exposed to the same Vin voltage through the bias resistor, which is used to make sure the PMOS is OFF if 'EN' is low or at its ground/source voltage (zero volts). <S> The NMOS can be the type that turns on fully at about 5 VDC or 10 VDC, depending on the logic driving it. <S> EDIT: <S> Because the PMOS gate is grounded when it is turned on, the gate-source voltage equals Vin, so the limit for Vin is 20 VDC or less. <S> Thanks to @BeBoo for pointing that out. <S> For higher voltages the gate-source voltage would have to be clamped with a zener diode.
The load switch has inrush current limiting built in.
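The level-shifting behavior described in the answers above can be captured in a minimal Python sketch. This is a hypothetical logic model, not a simulation: the -2 V PMOS threshold and 12 V supply are assumed values for illustration only.

```python
def load_switch_output(en_high: bool, vin: float) -> float:
    """Model the two-transistor high-side switch described above.

    When EN is high the NMOS conducts and pulls the PMOS gate to
    ground; the PMOS then sees Vgs = -Vin and turns on, passing Vin
    to the load. When EN is low the bias resistor holds the PMOS
    gate at Vin (Vgs = 0), so the switch is off.
    """
    nmos_on = en_high                     # ground-referenced logic drive
    pmos_gate = 0.0 if nmos_on else vin   # bias resistor pulls gate to Vin
    pmos_vgs = pmos_gate - vin            # PMOS source sits at Vin
    pmos_on = pmos_vgs < -2.0             # assumed threshold around -2 V
    return vin if pmos_on else 0.0

assert load_switch_output(True, 12.0) == 12.0   # EN high -> load powered
assert load_switch_output(False, 12.0) == 0.0   # EN low  -> load off
```

Note how the ground-referenced EN signal never sees Vin itself; only the transistors do, which is exactly the isolation the answers describe.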
Efficient 60v to 12v DC-DC I'm in the process of building a new version of my ebike electronic systems and I'm searching for a way to lower the voltage of the battery (13s = 54.6 V fully charged) to 12 V/5 V for the accessories and various electronics. I would like to keep my main microcontroller (which is an STM32-based board) powered at all times (with sleep modes) to be able to get GPS, GSM and battery monitoring running when necessary, and then switch (with MOSFETs/relays) the bigger loads if needed. Given that I would like to keep the thing connected all the time, because my battery box is screwed shut, I would like the quiescent current to be as minimal as possible, to avoid any unnecessary current consumption. So I started to dig through my boxes to excavate various DC-to-DC converters I have, and found 3 of them which I tested on my PSU at 54.6 V input: RCNUN E-Cart Fixed 12 V, 10 A: 15 mA quiescent current SUKUZU Fixed 12 V, 10 A: 8 mA quiescent current LM2596HV variable buck 3 V-48 V, 3 A: 5 mA quiescent current. Given that my battery bank is around 15 Ah, 5 mA would not be really dramatic; you could still run the thing for more than 4 months. But I'm trying to find a more efficient way to do it. So I looked at TI's website and found this little chip: TI LM46002 which provides an impressively low quiescent current: less than 30 µA. But I was wondering if it would be the correct way to go and if you have a simpler way to get 12 V or 5 V out of a 60 V max battery bank. I hope my question is clear and that it's not a repost; I searched the forums, schematics and stackexchange, but I might have missed the perfect answer. <Q> I think this chip is a very good option if you always want a supply from the 13s battery. <S> I'm seeing that the problem is to get a power supply with 60 V input (max) and low quiescent current. <S> A lower input voltage gives more options with low quiescent current.
<S> With this in mind, a suggestion would be to use a second low-power & low-voltage battery (1s or 2s) that gets charged from the 13s, supplies the low-voltage electronics, and totally switches on/off the higher-voltage power supply that charges the battery. <S> It's a bit trickier but could also work. <A> I can personally¹ suggest the <S> LTC3637 <S> if you want to build a circuit yourself. <S> The quiescent current is 12 µA (so pretty low), and it has some nice additional features like current limiting, if you happen to require it. <S> But keep in mind that both the TI and LT chips require that you design a PCB around them; you can't really solder them on a prototype board. <S> So it may be unwise in your case, especially for a one-off thing; the pre-made modules could be the better option for you. <S> ¹ <S> I've built several hundred chargers on this design for an application that is not very dissimilar to yours, and from what I hear they work well in practice. <A> But note that with a small load like in your case (28 mA), the efficiency will be lower.
A step-down converter like the LM46002 is a good and effective way to get 12 V or 5 V with >80% efficiency.
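The runtime claims in the question are easy to sanity-check. A back-of-the-envelope Python calculation, assuming the full 15 Ah pack capacity is usable (it isn't quite, but it bounds the answer) and ignoring converter load current:

```python
BATTERY_AH = 15.0  # pack capacity quoted in the question

def runtime_days(quiescent_a: float) -> float:
    """Days until the pack is drained by the quiescent current alone."""
    return BATTERY_AH / quiescent_a / 24.0

# Quiescent currents measured/quoted above
for name, i_q in [("RCNUN", 15e-3), ("SUKUZU", 8e-3),
                  ("LM2596HV", 5e-3), ("LM46002", 30e-6)]:
    print(f"{name:9s} {runtime_days(i_q):8.0f} days")
```

The 5 mA module gives 125 days, which matches the "more than 4 months" figure in the question; the 30 µA chip stretches that to decades, so converter self-consumption stops being the limiting factor.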
How to/should I bypass switches? I want to make a stage box (with a light in it) so when the guitarist or singer steps on it, it lights up. In the box I want to use a 12 W LED light (or maybe later 2), which has a normal EU power plug. I want to make initially two means of turning it on: By stepping on the box (on a plateau) By pressing a foot pedal The reason is that I don't know if I can make option 1 very reliable, so option 2 is a backup. Also since both options might fail, I want to use rocker toggle switches to bypass both options (meaning if both rocker switches are ON/bypass, the light should always light up). I changed the design to use a relay (instead of switching mains directly, as per the comment of MrGerber below). I had the following circuit in mind, but I wonder: Whether this is a good way to do the bypass Whether, for the foot bypass switch, I really need a DPDT switch (I couldn't find any other way). Whether I need another way than the relay. (Actually the 12 W LED lamp is plugged in with a normal EU 220 V power plug, but I couldn't find a symbol for this.) <Q> I would put the plateau on 4 springs. <S> One contact under the plateau, the other on the frame of the box. <S> 3 mm between the two contacts, aka "the switch". <S> When the guitarist steps on it, the springs get compressed and the contacts close. <S> If the springs can be compressed with 1/10 of the weight of a person, the switch will be reliable IMO. <S> It's a good idea to use a relay. <S> It allows you to connect whatever device you want. <S> I would use 9 or 12 V to power the coil of the relay. <S> 5 V is less reliable. <S> Make sure the connection is either 100% off or 100% on mechanically and that the contacts are never slightly touching each other. <A> If any one (or all) of the switches should turn on the lights, just connect all the switches in parallel.
<A> After DiBosco's comment to remove the bypass switches (and my idea to add an <S> Always On switch instead, and remove the floor switch): simulate this circuit – Schematic created using CircuitLab <S> This answers all my questions, although the functionality is different from what was requested, but probably better (if I can make the plateau switch very reliable).
No need to take any switch out-of-circuit, unless you expect it to fail "On".
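The "connect all the switches in parallel" advice is simply a logical OR across the contacts; a tiny Python sketch makes the point (the switch names follow the question; they are illustrative, not a circuit model):

```python
def lamp_is_on(plateau: bool, pedal: bool, always_on: bool) -> bool:
    """Switches wired in parallel: any single closed contact completes
    the relay-coil circuit, so the lamp lights if any switch is on."""
    return plateau or pedal or always_on

assert lamp_is_on(True, False, False)       # step on the plateau
assert lamp_is_on(False, False, True)       # flip the Always On switch
assert not lamp_is_on(False, False, False)  # all open -> lamp off
```

This also shows why no dedicated bypass wiring is needed: the Always On switch in parallel already overrides the others.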
Why is there a maximum time for the length of the write pulse to write to an EEPROM? I am still just learning about electronics on my own, so please bear with me. The EEPROMs that I have come across (for example this one, where the t_wp max is 1000 ns) all have a time limit for the write pulse (I think this is called the Write Pulse Width). I am just curious, but 1) What is the reason that EEPROMs have this upper limit? 2) Are there any parallel EEPROMs with no upper time limit? Please note that I am not asking about the limit on the number of times one can write to an EEPROM. <Q> I assume your question relates to parallel EEPROMs. <S> The write pulse (time) is a minimum specification and typically has no upper bound. <S> In other words, the time specified limits the speed of writing (bits/bytes/words per second), but the chips will operate at any lower write rate. <S> For example, here is the datasheet for the 26C64 write timing: <S> Notice <S> there is no upper bound for any of the chip select or write timing. <S> Addition: <S> The 28C16 you raised in the comments shows a limitation of the early EEPROMs ... <S> they needed a higher write voltage for the cell write/erase cycle. <S> This meant they could not work down to DC (the lowest possible write-cycle frequency). <A> There are two reasons I can think of for having a limit on the write pulse length: <S> If the part uses dynamic latches to hold the address, those latches may only be able to hold their value for a certain length of time. <S> Since the address is latched on the falling edge of /CE & /WE, but the write doesn't start until the rising edge, giving the chip a write command <S> that's long relative to the time required to complete a write cycle could result in the dynamic latches forgetting the address before the write cycle is complete.
<S> If that were the intended purpose, however, I would expect a specification that would indicate that write pulses within a certain range are guaranteed to be accepted, write pulses that are outside a larger range would be guaranteed to be ignored, and those between the two ranges might arbitrarily be accepted or ignored. <S> In either case, 1000 ns seems like a curiously short maximum. <S> The address needs to be held for an entire write cycle, so any dynamic latches would need to be able to deal with that. <S> If the cycle limit is intended to guard against stray write events, engineering it to be usable with systems that run at slow clock speeds should have been trivial and would have improved usability. <A> CHIP CLEAR <S> The contents of the entire memory of the AT28C16 may be set to the high state by the CHIP CLEAR operation. <S> By setting CE low and OE to 12 volts, the chip is cleared when a 10 ms low pulse is applied to WE. <S> So if the write pulse is too long, you clear the chip.
If the device rejects any write cycles that are excessively long, that may help guard against erroneous write operations in cases where a system operation gets disrupted (e.g. by loss of power).
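A small Python sketch can show why a 1000 ns maximum is awkward for slow systems, as the answer argues. It checks whether a /WE strobe of one clock period fits within both pulse-width limits; the 100 ns minimum here is an assumed illustrative value, only the 1000 ns maximum comes from the datasheet quoted in the question.

```python
T_WP_MIN_NS = 100.0    # assumed minimum write pulse width (illustrative)
T_WP_MAX_NS = 1000.0   # maximum from the datasheet quoted in the question

def we_strobe_ok(clock_hz: float, cycles: int = 1) -> bool:
    """Check whether a /WE strobe lasting `cycles` clock periods
    satisfies both the min and max write-pulse-width limits."""
    pulse_ns = cycles / clock_hz * 1e9
    return T_WP_MIN_NS <= pulse_ns <= T_WP_MAX_NS

assert we_strobe_ok(4e6)        # 250 ns strobe at 4 MHz: within limits
assert not we_strobe_ok(500e3)  # 2000 ns strobe at 500 kHz: too long
assert not we_strobe_ok(20e6)   # 50 ns strobe at 20 MHz: too short
```

With a 1000 ns ceiling, any system clocked below about 1 MHz would need extra logic to shorten its write strobe, which is why the answer calls the limit curiously short.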